
What Does Your Storage Environment/Media Server Look Like?

dharris (Active Member, Thread Starter)
All,

I'm posting this in this thread because it's really only become an issue now that we have started burning BRs and HDDVDs. Given the massive size of these files, storing more than 25 of them starts becoming a real investment (yes, I know you can shrink them, but for now let's assume you're not).

My set-up has been a separate media server (basic Intel board running a P4) with 7 x 500GB drives set up in a RAID 5 using an expensive Areca RAID card with 8 ports. This gives me 3TB of storage to date. I am now maxed out and am adding my last 500GB hard drive for this card...but it got me thinking...what are the rest of you doing about storing your BRs/HDDVDs? Is my RAID 5 overkill? Are you using software RAID 5? How big is your media server?

I am curious because I am facing having to buy another RAID card (either a 16-port or another 8-port), and I would like to know what the majority of you are doing before I make the investment...

PS If this should be moved to another thread, I understand....
 
Media Storage Server Issues

I'm familiar with the prices for those RAID cards and man, you are really spending some coin on that.

Anyways, my setup is similar to yours in that I have a separate storage computer. Right now it has 3 x 1TB hard drives in there, with space to add 3 more disks. They aren't in any RAID array, as I don't have the money to waste a 1TB hard drive, and you wouldn't want to use an array for this unless there was some redundancy built in. The drives are just single drives, and if one fails, hopefully I'll get some warning so I can copy the files off it. I haven't had a drive fail on me yet without some sort of warning (clicking, errors, etc.). When that space gets filled up, I'll probably get an add-in card for extra storage. My case can hold up to 16 drives, so I'm okay in that regard. And by that time, I'm sure someone will have created a CloneHD program for us to use to drastically shrink the size of the rips.

So my opinion is to get a separate box to hold your hard drives in a single-drive configuration (no RAID). Connect it to your router by way of the gigabit Ethernet port and leave it running 24/7 (basically creating a NAS). I would also recommend 1 or 2 hard drive enclosures that let you mount your hard drives in your 5.25" bays. They allow hot swapping for quick hard drive additions, so that is a big convenience for quickly adding another terabyte. I expect terabyte HDD prices to drop in the next 6 months, so unless you are ripping like a madman, by the time you need another terabyte drive you WILL be paying less. I've only quarter-filled my third drive, and it took me more than two years to fill up the other two, with the OS and a Ghost backup partition being on the first one.

I just wanted to add that if you are concerned about the performance of the single-drive setup, you really don't need to be worried. Terabyte drives have 32MB of cache, and their single-drive performance is on par with Raptors. I tested the streaming quality of the drives by having two wireless computers access the same file and different files on the drives (simple file sharing) and saw no performance decrease in the movie playing on both computers, so unless your movie server has a large number of clients, I doubt a RAID array's extra performance will justify the cost (right now at least). Just make sure you connect your server to your router using Ethernet instead of wireless to take advantage of gigabit LAN speeds, and make sure your terabyte drive has 32MB of cache, and you should be fine without a RAID setup. If you do decide to go RAID, RAID 5 is the best way to go. It gives the best read performance (for multiple clients) and has the best redundancy for the price, as you only give up one drive (a 4 x 1TB disk set will yield a 3TB RAID 5 array). However, not many motherboards support this, so you will most likely have to get an add-in card such as the Areca cards (best in my opinion).
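
A quick way to see the capacity trade-off just mentioned (purely illustrative arithmetic, nothing specific to any particular card):

```python
# RAID 5 usable capacity: one drive's worth of space goes to parity,
# so usable space is (number of drives - 1) x drive size.
def raid5_usable_tb(drives: int, size_tb: float) -> float:
    return (drives - 1) * size_tb

print(raid5_usable_tb(4, 1.0))  # 4 x 1TB  -> 3.0 TB usable
print(raid5_usable_tb(7, 0.5))  # 7 x 500GB -> 3.0 TB usable (the original poster's array)
```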
 
Thanks for the info. I should also mention that a key reason I was using the RAID was to have one "big" hard drive (one drive letter) to store the movies, as opposed to separate drives...
 

Hi,

I currently use three WD My Book Premium II units, each with 2TB capacity.
This gives me a total of 6.5TB of space together with one internal 500GB HD.
I have connected the WDs via FireWire 800, which is a lot faster than USB 2.0 (about 80 MB/s).
This is a very good solution for me, as the WDs have integrated power management, switching to standby when not in use.

Regards
 
Uber Storage Server

I've worked for a while now on a "low cost" server solution that can handle up to 40 SATA hard drives. I use this for my primary online data storage needs, and I also use a USB removable storage solution from TQ for things that don't necessarily need to be online all the time.

In the process of building the storage rig/server, I had to come to terms with and figure out a few things.

#1 - Keep It Simple! Keep It Simple! Keep It Simple! Keep It Simple!

#2 - RAID 5 or similar RAID configurations don't make sense in these types of applications (low usage, online/near-line type storage). The thing that took me forever to come to terms with is that if I lose a hard drive, I lose the contents and have to go back to the originals. For whatever reason, this was extremely tough for me with my server background and constant worrying about losing data at the server level, but in this case it just doesn't apply and the costs of keeping the safety net outweigh any benefits. Also, performance isn’t an issue as individual drives have more than enough performance for home-based media needs.

FWIW, I was tempted and for a while did use RAID 5 type solutions. The problem is that there is/was no RAID controller that I was aware of when building the server that will power down the drives when not in use. This may have changed recently with the constant push for server-side power efficiency, but I still don't want 5 to 15 drives powering up and generating unnecessary heat whenever one file in the RAID is accessed.

I'm currently using forty 1TB drives, and having them all up and running in various RAID 5 arrays would be a huge power waste and heat generator. RAID also locks the hard drives into the RAID controller's formatting, which means changing out RAID arrays or RAID controllers is a major pain. Having just recently upgraded from 500GB to 1TB drives, it was much easier just being able to swap the drives in/out one at a time as needed, versus having to deal with large RAID 5 setups and needing to set up an array using hard drive slots that don't exist yet, etc.

Lastly, rebuilding large RAID5 arrays using SATA drives takes forever.

#3 - Drive letters would be an issue, so I needed a workaround for that. In my case, I chose to use Windows Server and its ability to mount a drive to a folder and then share out the master folder. For instance, set up a share called HDBANK (for Hard Drive Bank) and then create folders for each hard drive (i.e. HD00, HD01, ...). You can then use Disk Management to link each hard drive to an HDxx folder. This way, you only need one share and/or one drive letter for all the drives on the storage server. I used Windows as it was available to me, but Linux should have a similar type of solution available and should work just as well (see the sketch after these points).

#4 - The drive setup/format had to use a standard format (FAT, NTFS, etc.). No special formatting, not part of any RAID, not tied to any RAID controller, etc. I needed the ability to pull a drive and read it on any system whenever I wanted. I ended up using NTFS as it was the path of least resistance for me.

#5 - Heat is the #1 enemy for large storage solutions. Not only does it affect the longevity of the hard drive(s), it also means more power for the hard drives, more power to cool the room, bigger power supplies, and so on, all of which ultimately leads to higher utility bills. As such, I needed to choose hard drives based on heat and power usage instead of performance - something I'm not particularly used to or good at :).

I ultimately ended up using Western Digital's 1TB drives. They run noticeably cooler (you can actually touch them after you power them off) and draw less power when operating compared to other 1TB drives, plus they include aggressive power management as part of the drive's firmware.
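
To make the one-share/many-drives idea in point #3 a bit more concrete, here is a minimal sketch of how that layout looks from a script: each HDxx folder behaves like its own volume, so you can walk them and check free space per drive. The HDBANKS path and folder names are just the examples used in this thread, not anything standard.

```python
import shutil
from pathlib import Path

# Assumed layout from the post above: one shared folder, one subfolder per mounted drive.
HDBANK = Path(r"C:\HDBANKS")

for mount in sorted(HDBANK.glob("HD*")):
    usage = shutil.disk_usage(mount)   # reports the mounted volume behind the folder
    free_gb = usage.free / 1024**3
    total_gb = usage.total / 1024**3
    print(f"{mount.name}: {free_gb:,.0f} GB free of {total_gb:,.0f} GB")
```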

With those basic ideas in mind, here's my setup:

1 - Windows Server 2008 (was 2003 as a few days ago)
1 - Silicon Image eSATA PCIe Controller w/ 4 External Ports
4 - USB to SATA Cables
8 - 5 Port Multipliers
8 - 5 Bay Storage Removable Racks from Super Micro
2 - External 750 Watt Supplies for the 40 External Hard Drives

The storage setup is like this:
SI eSATA Card
#1 -> 5 Port Multiplier -> 5 Bay Hard Drive Storage Rack from Super Micro
...
...
#4 -> 5 Port Multiplier -> 5 Bay Hard Drive Storage Rack from Super Micro

USB to SATA Cable
-> 5 Port Multiplier -> 5 Bay Hard Drive Storage Rack from Super Micro
...
...
-> 5 Port Multiplier -> 5 Bay Hard Drive Storage Rack from Super Micro

FYI - I've tried using multiple SI eSATA cards without luck (driver issues). As such, to get the extra storage I had to go with the slower USB-to-SATA-cable type solution for the second set of 20 hard drives.

Things I’m still considering or will experiment with…

To me, a perfect solution would be racks and racks of hard drives connected to an eSATA card and an uber relay card of some type. Essentially all the hard drives would be off, but from a program you could trip a relay and power up any hard drive. My concern is that you'd also need to switch the SATA connections as well, and I suspect the integrity of the high-speed serial links through a typical relay switch would be a problem, but I don't have the experience to create a custom FPGA type solution or ??? for it. You could use port multipliers like I have above, but since I ran into problems with multiple SI cards, and almost any card that supports PMs seems to use SI's chips, it would be hard to scale above 20 drives. I've also yet to find a 100% reliable eSATA hot-plug card/solution. Until eSATA hot plugging becomes like USB's, I always worry that I'm going to lose data when hot swapping a drive connected to an eSATA port. USB to SATA is fine, but the performance stinks in comparison to eSATA.

Now, with iSCSI coming down into the price range of mere mortals, it may make sense to have multiple low-cost Linux boxes as iSCSI hosts for 4 or more hard drives each, and either access them directly or combine the iSCSI targets/drives behind a server share. I'm currently looking into this as a solution. My biggest worry is again power management. Can you even power down an iSCSI target/drive when not in use? Will the client tolerate the 5 to 10 second delay of waiting for a drive to power up, or will it error out, etc.? If this would work, then I think this solution would be the ultimate in scalability, and if combined with Wake-on-LAN or line-voltage relay power boards, it could allow for tons of "online storage" with essentially little to no power or heat issues. FYI - Intel just released a new storage box that can run Linux or a host of other OSes. Looks great, and almost no noise if combined with low power/noise hard drives.

Hope this helps! For me, it's about the process. If you know anyone who builds kit cars, kit planes, etc., most of them enjoy the building, not the driving or flying. It's the same for me: I enjoy building the solutions. Storing lots of stuff is just an excuse to build it in the first place.
 
Currently my media setup is a couple of Xboxes running XBMC into 24"/32" CRTs and hi-fis. One has a wireless bridge, the other uses Ethernet.

I just got a Mac mini to run XBMC as well (for HD stuff that the Xbox 1 can't handle). It's currently on a 19" TFT but might go to a 22-24" widescreen soon, and it plugs into the same gigabit switch as my file server.

My Linux server streams to all three; internally it has just under 2TB, plus a few 250GB drives in eSATA enclosures for backup and portability. There's over 3TB of storage split between my machines, but I could easily fill that up if I ripped my DVD collection, which I've been dreading having to do (some old DVDs are starting to get bit rot).

I don't have a Windows machine, just a VMware instance of 2003 for using the SlySoft suite with - until they get off their behinds and port to Mac/Linux :rock:

Currently I've got no HD stuff - I was just about to buy a 360 HD DVD drive when the war was lost, and I don't really want to deal with Blu-ray.

As far as hardware RAID cards go, you're safer with software RAID (not BIOS RAID - that's useless), because if the card dies you are in trouble. You can make all the drives look like one big drive using LVM on Linux, or just expose a single Samba/NFS mount point - see the sketch below.
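
Here's a rough sketch of the LVM route, for anyone curious what that looks like in practice. The device names, volume group name, and mount point are placeholders, and note that a plain spanned volume like this has no redundancy: lose one disk and the whole thing is in trouble.

```python
import subprocess

# Placeholder disks -- adjust for your system; this wipes whatever is on them.
disks = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]

cmds = [
    ["pvcreate", *disks],                                      # mark disks as LVM physical volumes
    ["vgcreate", "media", *disks],                             # pool them into one volume group
    ["lvcreate", "-l", "100%FREE", "-n", "movies", "media"],   # one logical volume spanning the pool
    ["mkfs.ext3", "/dev/media/movies"],                        # format it (ext3 was typical back then)
    ["mount", "/dev/media/movies", "/srv/movies"],             # mount it, then share via Samba/NFS
]
for cmd in cmds:
    subprocess.run(cmd, check=True)
```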
 
Freedom,

That's great advice. It got me thinking, and you're right, I do not need the redundancy. But I still want the appearance of a single interface - through one drive letter...

I like your approach. My only question is: can I do the folder-combining method without Windows Server? Perhaps with a third-party program? I am not well versed in Linux, so I'd prefer not to go that route...
 
40 drives!!! Good bejeezus. Are you trying to compete with Google video;)

That sounds like some serious setup. I guess I'm limited to 16 drives, but hopefully by the time I reach that, writable Blu-ray blanks will be a few bucks :confused:

Anyway, I know I won't be able to store everything on HDD, but I'm mostly interested in the ones my kid watches and that we watch frequently. I figure once we stop watching something, it can be erased from the server and backed up to writable disc as server space gets low.

At 16TB, though, and only purchasing 2 to 3 Blu-rays a month, I don't plan on hitting that limit anytime soon :D
 

I'm pretty sure you can set up multiple drives to appear as one drive under Windows XP or Vista. If my memory serves me right, you have to do this when the drives are initialized and the partitions for the drives are first created.

You can delete the existing drive configurations and recreate them, however, all data currently on the drives will be lost so make sure you have everything backed up before you do so.

A possible disadvantage to doing this would be that, if one drive fails, you may lose access to the data on the other drives, but I believe a similar risk exists with striped RAID configurations as well.

To do this, go to Control Panel > Administrative Tools > Computer Management, then select Disk Management. From there you can delete the current drive configurations and then recreate them joined together as a single drive. It's easier to find these options if you switch to Classic View in Control Panel.
 
My setup works pretty well and I prefer it over standard "RAID" solutions. I use a system called "unRAID" from Lime Technology (down ATM for some reason). unRAID uses a single parity drive to cover all of the data; data is NOT striped across multiple drives. If I lose a drive, the parity allows me to recover the data. If I lose two drives, then I lose two drives' worth of data - NO more!
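
For anyone wondering how a single parity drive can cover every data drive without striping, here is a toy illustration of the idea (not unRAID's actual code, just the XOR arithmetic behind RAID 4-style parity):

```python
from functools import reduce

# Each entry stands for the same block on a different data drive (made-up contents).
data_drives = [b"\x10\x22\x35", b"\x0f\x01\xaa", b"\x77\x00\x03"]

def xor_blocks(blocks):
    # XOR the blocks together byte by byte.
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

parity = xor_blocks(data_drives)  # this is what lives on the dedicated parity drive

# Pretend drive 1 died: rebuild it from the surviving drives plus parity.
survivors = [d for i, d in enumerate(data_drives) if i != 1]
rebuilt = xor_blocks(survivors + [parity])
assert rebuilt == data_drives[1]
print("rebuilt drive 1:", rebuilt.hex())
```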

I have a serious issue with using "RAID" at home in that if you lose two drives you're toast. RAID is *not* a backup; data I cannot afford to lose and don't have media for is kept backed up (and offsite) even with my chosen system. Hot sparing is possible of course, but it wastes energy and costs money, and I like to be able to buy whatever drive is on sale when I need to expand, which standard RAID doesn't like. If I lose a motherboard, whatever is on sale with SATA ports works for me - the file system on my drives is ReiserFS, so data recovery can be done with standard Linux tools too. I can even pull a drive, use it elsewhere, and reinsert it into my array - if data was added I have to regenerate parity though.

Because the data isn't striped, my drives all spin down when not in use, and a read spins up only the one drive being accessed. If I write to one drive, the parity drive and that data drive spin up, no others. Power usage is LOW because I use a cheap Celeron, an 80+ rated P/S, and "green" drives when I can afford them. It's NOT the fastest system - you wouldn't run a database on it - but it's fast enough for HD streaming video, which is what it was designed specifically for. It boots from a cheap USB stick, only a gig of memory, no swap space needed, GigE network interface. 4-port Promise SATA cards expand beyond what the motherboard provides, but other cards supported by Linux work too, including some port multipliers, though I've not tried them.

My primary video system has a 1TB parity drive, a 1TB data drive, 4 x 750GB data drives, and two scratch drives that are IDE and about 500GB apiece. This system will max out at 16 drives when I need the space - I buy drives as needed, one at a time, and can upgrade any of them at will to larger drives, but the parity drive must be as big as or bigger than the others (it can be upgraded too). My drives are in SATA racks, 5 to a rack, on edge, with an 80mm fan cooling each rack; they are easy to swap. This system holds 600+ DVDs, 150+GB of MP3s, and about 20 HD DVD rips (and one BD) that have been compressed - IMO keeping them full size is nuts right now. My SD DVDs are all uncompressed.

I have a second system with 12 IDE drives, 400GB to 500GB each. I use this for backups of my workstations/HTPC, VM images, and downloaded content like Linux ISOs, Windows ISOs so I don't have to find discs, torrents, pictures, and TV shows I download.

I've not added up total storage, but it's easily over 7TB. :rock: The unRAID software allows me to access each drive as a share or have shares that span disks - I use both. The software continues to evolve and features are added often. Since it's Linux, it's possible to add additional functionality yourself or with the help of others - we may even begin using a drive for swap on big software loads - we'll see.

Anyway, that is what I chose and for the storage involved it was pretty cheap:clap:

P.S. XBMC on an XBOX or XBMC on my Linux HTPC is what this feeds for my media needs.
 
A highly pertinent question... I did a lot of research on storage and found that the lowest pence per gigabyte is to be found with the My Book World series from WD. There are some issues you have to overcome to get these things working properly, but it's not very difficult...

You should be able to get the 2TB version for about 13-15p per gigabyte.

I have a 1TB, 1.5TB and a 2TB connected to a netgear gigabit ethernet switch.

I configure all of these for RAID 1, as I keep all my other data on these things too... music, pictures, etc.... works a treat... :rock:

Just making the move to Blu-ray now, so I haven't tried HD yet, but it shouldn't be too much of a problem as network utilisation when streaming DVDs is less than 12%...

Cheers Rich
 
I have configured my 2TB WD MyBooks as RAID 0.
A friend of mine has the exact same hard disks and we mirror each other's drives regularly, so the mirroring needn't be done on the hard disks themselves.
This is a great thing because I won't waste any space on RAID 1.

In addition to that, we both have all the movies on the original discs, so there is no fear of losing data.
I just keep the encrypted ISOs on my hard disk for convenience.
 
I have a 3U server with space for 8 drives that I dug out of the trash at work. I put my motherboard and CPU in it and started adding 500GB drives to it in a software RAID 5. A few weeks ago I added my last one, so soon I'll be trying to figure out whether I need to change what I'm storing or buy an external enclosure to keep growing.

I have to disagree with the comments about power usage: while my RAID does spin up every time I access it (because of the striping) it stays down a large percentage of the time simply because it's night and we're asleep or it's day and we're at work/school. People on this thread are right in that technically that's unnecessary, but it's a really small period of time compared to the whole.

The parity drive with no striping is intriguing. My biggest concern right now is that mucking with the array (fsck or rebuilding if a disk bails) does take forever. When I have to expand next, I might buy much more space and look at splitting it into multiple software RAID 5s, or try this parity-without-striping approach, just for the sake of rebuild times. Most likely, though, instead of one big RAID 5 over 8 disks, I'll make several smaller RAID 5s over the 8 disks (each 1.2TB or so instead of one big 3.5TB).

Linux software RAID 5 is crazy fast, easy to set up, well documented, and growable. When I start to run low on space, I just add a drive. In several hours, I have a bigger system with no fuss.
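
For reference, the "just add a drive" growth path with Linux md looks roughly like this (a sketch only; the array and partition names are placeholders, and an ext filesystem is assumed):

```python
import subprocess

# Assumed names: /dev/md0 is the existing RAID 5 array, /dev/sdf1 the new partition.
new_disk = "/dev/sdf1"

subprocess.run(["mdadm", "--add", "/dev/md0", new_disk], check=True)              # add it as a spare
subprocess.run(["mdadm", "--grow", "/dev/md0", "--raid-devices=5"], check=True)   # reshape onto 5 members

# The reshape runs in the background for hours; watch /proc/mdstat.
# Once it finishes, grow the filesystem to use the new space, e.g. resize2fs /dev/md0 for ext2/ext3.
```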

To those who "can't give up a disk's worth" of storage to parity: it's painful at 3 drives (when I first put together my system), but at 8 it's negligible. Go ahead and pay the pain at 3 or 4, then gloat when a disk dies at 6 and you lose nothing.
 
Mount Points for Hard Drives...


You can do this in XP and Vista as well, not sure about Windows 2000 though.

The only catch is that the drive you are setting up the mount points on must be formatted as NTFS - for most systems this shouldn't be an issue, and your boot drive was probably set up as NTFS since it is the default for XP and Vista. You may also need the drives you are mounting to a folder to be formatted as NTFS as well - I just haven't tried, so I'm not 100% sure on that one, but if you have problems that would be the first place I'd look.

Step #1:
Make a folder off your boot OS drive (most likely your C-Drive). I call mine HDBANKS. Make a subfolder in HDBANKS for each hard drive that you want to access without a drive letter (i.e. HD00, HD01, HD02, etc.).

Step #2:
Make sure the hard drive you want to create a mount point for is available and currently accessible on this system.

Step #3
Go to the Control Panel, Administrative Tools, and run Computer Management. Once the program is run, select Disk Management on the left side tree menu. This will display all the drives in your system. Find the drive you want to setup the mount point for in the UPPER list of drives and right click on it. Select 'Change Drive Letters and Paths'.

A new dialog box will appear, press the Add Button and make sure 'Mount in the following EMPTY NTFS folder' is selected and click the Browse Button. Find the folder you created in Step #1 for this drive and OK on out.

Step #4
Assuming you don't want the drive letter any more for this drive, go back into the 'Change Drive Letters and Paths' option and highlight the drive letter and select remove.

Step #5
If you want network access to the files, go ahead and share out the HDBANKS folder you created in Step #1

You won't lose your data when doing this, so you don't have to worry about that (or more accurately, I've done this literally hundreds of times and have never lost any data :) - your results may vary!!!!). Seriously though, if your data was visible before you started the process it will be visible afterwards - it will just now be visible as a folder on your boot drive.

If you have virus protection and you don't store any files you are concerned about being infected on your media drives, I'd exclude this folder from your virus scanner. You don't necessarily want the virus engine scanning 20 to 50GB ISOs or large media folders/files anyway, and some virus protection may not know how to deal with mount points.
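
If you'd rather script Steps #3 and #4 than click through Disk Management, Windows' built-in mountvol tool can do the same thing. This is only a rough sketch: the drive letter and folder are examples, and the target folder must already exist and be empty (Step #1).

```python
import subprocess

drive = "D:\\"                     # example: the drive currently holding a letter
folder = "C:\\HDBANKS\\HD00\\"     # example: the empty NTFS folder to mount it into

# Ask mountvol which volume GUID path is behind the drive letter.
volume = subprocess.run(["mountvol", drive, "/L"],
                        capture_output=True, text=True, check=True).stdout.strip()

subprocess.run(["mountvol", folder, volume], check=True)  # Step #3: mount into the folder
subprocess.run(["mountvol", drive, "/D"], check=True)     # Step #4: remove the old drive letter
```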

Lastly, if you want a totally KIS solution and you are okay with keeping bare HDs on a shelf and just grabbing the one you want and dropping it into your media PC to play, without needing to reboot, etc., the Thermaltake BlacX dock is awesome. (http://thermaltakeusa.com/product/Storage/hdd_station/blacx/blacx.asp) They also make an SE version, which is nice because it includes a USB hub, but the cover mechanism gets in the way and you can't power off the drive without powering off the USB hub. It essentially makes bare hard drives as easy to use as a USB flash memory stick!
 
Hi, great to see you're using Windows Server 2008. Are you using it alongside Vista - is that even possible - or do you use it as your main OS? Thanks. :rock:
 
I might point out that given the cost of blank Blu-ray media, as well as the cost of HD storage, you're better off buying two copies of the DVD. :D

-W
 
My first post, yay...

I haven't built my media server yet, but I've done a little research. Maybe this will help some people.

Ideally, my server would be very close to what is currently offered by unRAID - a software RAID 4 solution (1 dedicated parity disk and multiple unstriped data disks). My only gripe with it is the 32-bit Slackware OS and less-than-desirable hardware support. Also, being the extremely poor student I am, the hundred-dollar price tag is less than appealing. And thus, I'm still daydreaming and collecting parts.

(1x) LIAN LI PC-A17A Silver Aluminum ATX Mid Tower computer case ~$170
(2x) 5 x 3.5" hot-swappable SATA backplane ~$100 each. Athena Power makes a nice one, I think. Should change the 80mm fan speed based on temperature.
(4x) 500GB SATA hard drives ~$100 each
(3x) 320GB SATA hard drives ~$70 each
(1x) 64-bit CPU, single core or dual, doesn't really matter (AMD 3500+ is what I currently have)
(1x) relatively cheap motherboard ~$50
(1x) power supply with a single 120mm fan ~$75
(2x) 512MB memory sticks for dual-channel 1GB ~$50
Aftermarket fanless heatsink for the CPU and, if needed, the chipset as well
Scythe S-FLEX 120mm fans
Noise-silencing material (didn't pick out a Lian Li case to keep it hidden in the closet)

I'd probably end up replacing the stock 80mm fans on the HDD racks with quieter ones. All in all, this should be an extremely quiet server when the hard drives are powered down. It should also be pretty cheap comparatively; you could make it cheaper by getting rid of the sound-oriented parts and eye-candy case.

OS: a Linux distro installed onto a 2GB USB stick, probably Gentoo.
Various software to get the desired server functionality (Samba, Electric sheep, etc...)

I decided RAID 4 (yes, 4, not 5) was ideal for a home media server. Gigabit LAN can only handle 125 MB/s theoretically (correct me if I'm wrong), and a single 500GB drive reads at over 100 MB/s, so striping the data across multiple disks wouldn't provide any benefit - the rough numbers are sketched below. In addition, keeping the data on separate disks allows the disks to be powered down when not in use, reducing noise and heat while (hopefully) increasing the longevity of the drives.
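
Just to put rough numbers on that reasoning (the drive and bitrate figures are ballpark assumptions, not measurements):

```python
# Gigabit Ethernet ceiling vs. a single drive vs. what one Blu-ray stream needs.
gigabit_lan_mb_s  = 1_000_000_000 / 8 / 1_000_000   # ~125 MB/s theoretical
single_drive_mb_s = 80                               # rough sustained read for a 2008-era SATA drive
bluray_peak_mb_s  = 54 / 8                           # BD-ROM max A/V transfer rate of 54 Mbit/s

print(f"LAN ceiling:        {gigabit_lan_mb_s:.0f} MB/s")
print(f"Single drive:       {single_drive_mb_s} MB/s")
print(f"One BD stream:      {bluray_peak_mb_s:.2f} MB/s")
print(f"Streams per drive:  {int(single_drive_mb_s // bluray_peak_mb_s)}")
```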

That's basically all I got... And again, this is all hypothetical.
 
I think you'll find the power usage is noticeable if you don't spin drives down. Not to mention that the biggest power draw is at startup - fire up all 12 drives at once a few times a day and see how your P/S likes it. ;) My system with two P/S makes my UPS squeal pretty good at cold startup with 12 IDE drives - I'll be beefing that up soon!

I agree that covering the data with parity of some sort is a must. I can rip a DVD in about 10 mins or so, but figure I have 600 of them - redoing that work would suck! You really need something to CYA. Losing just two drives on most RAID means you've got lots of work ahead of you, which is why I do not like the RAID levels that stripe. Speaking of work, adding a drive just means a format and some prep with the unRAID software, maybe an hour no matter how many drives are onboard. If I'm expanding a drive I have access to the data while it's going on, likewise while a rebuild or parity check is done. Pull a drive and the data still appears to be there. One thing to consider with hardware RAID is what to do when the hardware pukes; recovery companies advertise RAID recovery (at a premium) for a reason! If the hardware is no longer available, or what is out there has different firmware, you could be hosed - I like software better. Proprietary on-disk formats are scary IMO, so I prefer something standard given the choice.

unRAID has better hardware support than you realize; the web page doesn't list all of the supported hardware by far - the forums are more up to date than even the wiki. It uses Linux after all, and the source to add more drivers is there too if you want to get bold. <shrug> I just buy hardware I know will work: $100 motherboards, $60 CPUs, and cheap memory! eBay works great for Promise cards, for instance, and Newegg has them as OEM parts too.

Cases: I have the Lian Li aluminum mid-tower for a desktop - I would NOT use this for a NAS, too small. The case I have wouldn't expose many SATA racks. I'm using the Cooler Master Stackers and REALLY like them. One model allows for two P/S and one doesn't, so pay attention to that if it's important to you and IF you can find them for sale. <sigh> I have one of each, and honestly the SATA system with just one P/S is better IMO. The single P/S uses a bunch less power (duh) and the 80+ P/S is better quality overall. These are beasts, make no mistake, but they have nice casters and can be rolled easily even on carpet.

If you want to save a pile of cash, use the Stacker 4-in-3 adapters BTW; they are like $20 and have a 120mm fan per unit, but swapping drives is a PITA. The CM case will hold at least 3 before you have to pull out the USB panel up top. I cannot hear either of my systems, but I do have some other ambient noise in my office already. Oh, the 5-in-3 SATA cages only use 3 power taps, which is nice. I'm hoping to run 15 drives off the one P/S I've got - we'll see. I can easily fit 3 of these cages in my Stacker without pulling the USB ports up top and still have more drives internal. The SATA server AND my C2D desktop together pull 309 watts fully spun up right now - both use 80+ P/S though. Thermal fans in the P/S are nice too; they only spin as needed, which again is less power consumed...

P.S. Striped data WILL be faster over the network. A single drive cannot max out the port, but multiple can. A single drive's throughput will rise and fall; striped drives will be pretty steady. For media streaming it won't matter, but for something like a database, maybe Photoshop, and certainly DVD ripping, you could see a difference. I rip locally and then copy overnight when I have a few.
 
I forgot about the speed curve hard drives have, thanks for pointing that out. To revise my previous statement: 60-110 MB/s depending on where the data is stored on the disk.

I have a Lian Li desktop as well (PC-65) and am well aware of the size. But there is no reason a Lian Li PC-A17A isn't big enough (although one possible downside is that it is made out of aluminum, which doesn't help with sound dampening at all).

And yes, the unRAID forums are definitely the place for information pertaining to it; I've been a member for a few years now. Using recycled parts is where hardware problems arise. For a new build, however, that is practically nonexistent so long as you've checked the forums for compatibility issues beforehand. I like unRAID. :)
 
Now THAT case is more like it! I wouldn't sweat the aluminum too much; they don't tend to rattle in my experience - well made! Do look at the SATA and CM cages, two extremes of the price range.

Throughput on unRAID is okay but nothing super special; it will meet media streaming needs though, so that is well covered. Guys have run multiple 1080 streams with no problems in testing; mine never sees more than one though.
 