Welcome to my quick walk-through of building a custom SAN for home or work. It ends up putting out quite a bit of I/O for the minimal amount of money involved. For this project I went with a Super Micro motherboard and a Super Micro disk array subsystem. I'll be listing the components in the description and in the video annotation so that you can collect that information. Everything turned out really nicely compatibility-wise, and I ended up using it for an iSCSI build-out for my VMware home server.
Here's the actual case, a Lian Li. I've got one of the disk enclosures in there; I ended up getting two of them for a total of 10 hot-swappable hard drive bays. I was really careful to keep the wires nice and tidy to maximize airflow; you can see one of the bundles there. The motherboard came with one network port and an on-board VGA video card, minimizing the amount of hardware you have to add and keeping the case as open as possible. I also cleaned up the backside and eliminated some of the wiring from the internal side of the case.
I ended up going with that Super Micro enclosure. It seemed really solid, I was very happy with the quality of the construction, and it's very affordable compared with the other SATA hot-swap enclosures. Here's the workhorse of the system: the RAID controller, a 16-port SATA II card capable of RAID 6 and JBOD, and a very fast performer. Of course it's a little on the pricey side with 16 ports, but I find it to be true that you typically get what you pay for. The CPU is a basic 2.2 GHz Intel, and I've got about 4 GB of RAM in this box; it probably won't hurt to add a little extra RAM for caching purposes to bump the performance up if you can afford it. I ended up labeling all of my cables for troubleshooting purposes; it just makes things a little easier when you've got everything bundled and tie-strapped like that.
Here's a close-up of the card; you can see the on-board cache and a fan for the on-board CPU. It actually has an HTTP web GUI for configuration, a kind of out-of-band management, and that's really slick; it works great. You can control all the miscellaneous features of the card there too, so it's fun to play around with if you don't want to go through the BIOS, and you don't need any additional software. It has worked out great in our situation. Here I have the second RAID enclosure installed.
I popped out the CD-ROM drive; the board actually has some on-board USB slots for booting a flash drive or running an install off it. I ended up installing Openfiler, a solid free SAN software implementation. And of course I got all the wires managed for improved airflow: there's a fan on the back of each enclosure assembly and a fan, which you can't really see here, in front of those two hard drives. I went with two 250 GB drives for the operating system, mirrored in RAID 1 off the motherboard RAID controller, because I didn't want to get into a situation where I'd put the OS in the actual RAID bay enclosures and end up pulling one of those drives by accident; that wouldn't go over too well.
Here I have the drives removed so you can see the slide rails, or bays. Each one has a 1 TB drive in it, and I have them configured as RAID 5. When I clip the drives back in, they seem nice and appropriately sized, and they click in clean; they aren't sloppy or flimsy plastic, they're nice and solid. Each bay has an LED that brings drive activity out to the front, so you get some indication of activity on the system, and there's also a warning light next to the activity indicator, which is nice; a lot of the enclosures didn't have that.
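To put rough numbers on those RAID choices, here is a minimal sketch of usable capacity at the common RAID levels; the per-enclosure drive count in the example is my assumption, and real arrays lose a bit more to formatting overhead:

```python
def usable_tb(level: int, num_drives: int, drive_tb: float) -> float:
    """Raw usable capacity for common RAID levels (ignores filesystem overhead)."""
    if level == 1:                        # mirror: half the drives hold copies
        return num_drives * drive_tb / 2
    if level == 5:                        # one drive's worth of parity
        return (num_drives - 1) * drive_tb
    if level == 6:                        # two drives' worth of parity
        return (num_drives - 2) * drive_tb
    raise ValueError("unsupported RAID level")

# OS mirror: two 250 GB drives in RAID 1 leave 250 GB usable.
print(usable_tb(1, 2, 0.25))  # 0.25
# Five 1 TB drives (one enclosure; drive count assumed) in RAID 5 leave 4 TB usable.
print(usable_tb(5, 5, 1.0))   # 4.0
```

RAID 6, which this controller also supports, would give up one more drive of capacity in exchange for surviving a second simultaneous drive failure.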
Okay, I've got Openfiler installed, and we're booting here: the BIOS. There's the RAID controller configuration; I'll be skipping that, since I already have it preconfigured. Here's the Openfiler boot screen; it's based on rPath Linux, and I'm running version 2.3 for the time being.
All right, now we've got it booted up, and you can see some activity here. With an iSCSI partition configured, I'm hitting it with Iometer: I'm getting about 16,000 IOPS with 4K blocks, and about 105 MB/s with a 32K block size at a queue depth of 16, and it's humming right along. That's very impressive for the larger green-type drives, which aren't really performance-oriented. All right, that concludes our video. I hope you enjoyed watching it; I had a ton of fun putting this system together, and it turned out to be much nicer than I ever thought it would be. So hopefully I've inspired you to come up with something similar of your own.
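As a sanity check on those Iometer figures, IOPS and throughput are tied together by the block size. A minimal sketch, assuming binary megabytes (1 MB = 1024 KB; Iometer's own reporting conventions may differ slightly):

```python
def throughput_mb_s(iops: float, block_kb: float) -> float:
    """Throughput implied by an IOPS figure at a given block size (binary MB)."""
    return iops * block_kb / 1024

# 16,000 IOPS at 4K blocks is about 62.5 MB/s of small-block traffic.
print(throughput_mb_s(16_000, 4))  # 62.5

# Conversely, 105 MB/s at 32K blocks corresponds to roughly 3,360 IOPS.
print(105 * 1024 / 32)  # 3360.0
```

The small 4K test stresses the controller and drive seek behavior, while the larger 32K transfers at queue depth 16 get closer to the array's streaming bandwidth, which is why the two runs tell complementary stories.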