
Help with storing multiple SSDs

bloeff

New Member
Joined
Apr 27, 2014
Messages
6 (0.00/day)
Hi guys,

Call me crazy, but this is my plan. I don't want to spend thousands of dollars on storage racks. I don't even want to use old computer cases - too cramped internally, and they take up too much space. I'm thinking of buying a plastic storage rack from Lowe's for $14.98! It's got 4 shelves - room for 32 SSDs on each. Right now I have 28 SSDs.

My plan: somehow secure the 500 watt power supply to the bottom shelf - Gorilla Tape? Secure a SAS expander on each shelf (I have one now). Run a cable from the expander to the controller, and fanout cables from the expander to the drives. Put each drive in a plastic bag! Am I crazy? Will this work? From what I've read, the total power consumption of the expanders and 128 SSDs (the controller limit) will be under 500 watts. It appears I can get Molex cable extensions; not sure about SAS cables. Let me know what you think.

Thanks,
Bruce
 
Joined
Aug 10, 2007
Messages
4,267 (0.70/day)
Location
Sanford, FL, USA
Kinda get the gist of your plan and since SSDs don't give a hoot about orientation or mounting, you can't really go wrong there.

Just wondering, are they all large capacity (512GB/1TB) SSDs? If not, why not start replacing the oldest/smallest units with larger ones to keep things simple? You've got a nice LSI controller and now an expander. If you ran all 8 channels to the expander you could have 16 of your best drives connected, or alternatively 24 total drives if 4 were directly connected to the controller and 4 channels went to the expander. With either of these amounts it seems like you'd always have ample performance and capacity.
 
Joined
Apr 2, 2011
Messages
2,657 (0.56/day)
....The simple answer is likely no, but it can be done.

The power calculation you need to do is the operational wattage plus about 20%. I'm basing the 20% on two things: initial power draw will surge above operational wattage as the array comes up, and since the power supply itself is starting up under that load, you'll need substantial headroom to avoid frying either the drives or the PSU.

Now, the power. If you're using the PSU just to power the array, you'll need either a single +12 V and +5 V rail, or to make sure the specific rail you're connecting to actually has that much capacity. A 500 watt supply doesn't have the full 500 watts available on every rail. Check the label on the PSU and make sure you've got that operational-plus-20% wattage on the lines you're actually using to power the thing.
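A minimal sketch of that budget check. The per-device wattages below are assumptions for illustration - substitute the numbers from your own drives' and expander's data sheets:

```python
# Rough budget check for powering the shelf: operational wattage plus
# ~20% headroom for startup surge, as described above. The per-device
# wattages are assumptions -- swap in your hardware's data-sheet figures.

SSD_ACTIVE_WATTS = 3.5     # assumed draw of one 2.5" SATA SSD under load
EXPANDER_WATTS = 15.0      # assumed draw of one SAS expander card
HEADROOM = 0.20            # surge/startup margin

def required_watts(n_drives, n_expanders=1):
    operational = n_drives * SSD_ACTIVE_WATTS + n_expanders * EXPANDER_WATTS
    return operational * (1 + HEADROOM)

print(round(required_watts(28), 1))      # current 28-drive, 1-expander shelf
print(round(required_watts(128, 4), 1))  # the 128-drive, 4-expander build-out
```

With these assumed figures the 28-drive shelf needs only around 136 W, but the full 128-drive build lands near 610 W - over the 500 W supply - so the "under 500 watts" claim is worth re-checking against real data sheets.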


Once power is applied you'll need some cooling. A few fans per shelf should do you well, assuming they are configured to push and pull in each shelf. SSDs don't get hugely hot, but that much power dissipation in a sealed cabinet is asking for trouble.



My biggest problem here is the insanity this project entails. You'll spend more money than you would on a single new high-capacity SSD. If you've already sunk serious money and have big SSDs, then I can only assume this is a huge RAID array for retention and editing. Note that if you're using Windows, drive letters only run C-Z - far fewer letters than drives - so you'd be managing everything as a handful of giant volumes. I don't see how this project gets off the ground, given the demonstrated technical knowledge; on top of the hardware, you'd need the software side well figured out to take advantage of such a vast array. It would do me well if you proved me wrong, but please don't set your hopes high for this project. It's not something your skill set would likely allow, and it's beyond my software/hardware engineering skills to make functional.
 

bloeff

New Member
Joined
Apr 27, 2014
Messages
6 (0.00/day)
Thank you guys so much for taking time for an obvious novice! The motherboard is an ASUS P9X79 WS, which purports to be compatible with my RAID card, an LSI 9260-8i. I've created a bootable array with the controller and expander and installed Win8. Everything is working fine, so I don't see any problems there - even going up to the controller limit of 128 drives with the expanders daisy-chained together. I've decided to go with one power supply per expander/drive group; in the past, this has been plenty to power 28 SSDs. Instead of the 4-shelf storage rack, I'm going with some nice stackable plastic bins with holes on every side and an almost-open face for access. I'll add some fans as suggested. The 28 drives have been accumulated over the last 5 years or so. I'm using mainly the smaller drives to create a 2TB RAID 0 boot array. Is that still Windows' limit for a boot partition?

The reason I'm doing such a big build is mainly speed. I'm processing HUGE audio files and need blazing transfer. I'll probably use a Ramdisk drive for current working projects, but need the much larger array for other projects and apps.
 

Athlon2K15

HyperVtX™
Joined
Sep 27, 2006
Messages
7,909 (1.23/day)
Location
O-H-I-O
You're going to need fans on any SAS drives. I have two of the Optimus drives and they get hot as hell under load.
 
Joined
Apr 2, 2011
Messages
2,657 (0.56/day)
bloeff said: "Thank you guys so much for taking time for an obvious novice! ... I'm using mainly smaller drives to create a 2TB Raid 0 boot array. Is this still windows limit for a boot partition? ... The reason I'm doing such a big build is mainly speed."


Sweet crispy jebus.

Let's touch back on a few points, so you understand why I'm saying what I'm saying.

Let's imagine every single one of those drives running flat out, and keep the per-drive number conservative: call it about 100 MBps of sustained read/write each. That means your 28 drives would give you 2800 MBps of aggregate read/write. On paper, you could theoretically move data to and from that array faster than most workstations could even accept it.

That obviously isn't what happens, or people would buy a ton of cheap little drives and RAID them for basically instantaneous performance. The limiting factor becomes the bandwidth of the card and its interface. In this case, you've got a PCIe 2.0 x8 interface, which tops out around 4 GBps of theoretical throughput for the RAID card (500 MBps per lane after 8b/10b encoding). Subtract the protocol overhead, the overhead of managing all those SSD controllers, and the rat's nest of wiring, and you've got significantly less speed.

SATA III connections are rated at a maximum of 6 Gbps - note that's bits, not bytes, so roughly 600 MBps per port. Yes, the per-port interface is slower, but it's one drive talking to one controller. If you use a couple of SATA III connections from the board and two 256 GB SSDs, you'll come out miles ahead, without having to worry about any of this.
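The arithmetic above can be sketched in a few lines. The 100 MBps per-drive figure is the conservative number from the argument; the PCIe and SATA ceilings are the standard post-8b/10b-encoding rates:

```python
# Back-of-envelope throughput math, all in MB/s.

PER_DRIVE_MBPS = 100              # conservative sustained rate per SSD
PCIE2_LANE_MBPS = 500             # PCIe 2.0: 5 GT/s per lane, 8b/10b encoded
SATA3_MBPS = int(6000 * 0.8 / 8)  # 6 Gbps link -> ~600 MB/s of actual data

array_peak = 28 * PER_DRIVE_MBPS        # 2800 MB/s if every drive hit 100
card_ceiling = 8 * PCIE2_LANE_MBPS      # 4000 MB/s for an x8 slot
realistic_drives = 28 * 500             # a good 2014-era SATA SSD does ~500 MB/s

print(array_peak, card_ceiling, realistic_drives)
```

At realistic per-drive speeds the 28 drives could source around 14000 MBps - three and a half times what the x8 slot can carry - which is why the card interface, not the drive count, sets the ceiling.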

Intel baked RAID support into its PCH. Run a RAID 0 of SSDs on your SATA III ports, put all your permanent-storage HDDs on the RAID controller (set up with at least 3 drives in RAID 5 to survive one sudden drive failure), and keep the rest of the motherboard's SATA ports free for optical drives. You'll have insanely fast scratch space (the RAMDISK), fast storage for programs and a few pending projects (the RAID 0 SSDs), and permanent storage for completed projects.
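On the 2 TB boot-array question quoted above: that figure isn't a RAID limit, it's the MBR partition-table ceiling - MBR records partition sizes as 32-bit sector counts with 512-byte sectors - and booting from anything larger needs a GPT disk plus UEFI firmware. A quick check of where the number comes from:

```python
# MBR stores partition offsets/sizes as 32-bit sector counts; with the
# standard 512-byte sector, that caps a bootable MBR volume at 2 TiB.
max_sectors = 2 ** 32
mbr_limit_tib = max_sectors * 512 / 2 ** 40
print(mbr_limit_tib)  # 2.0
```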



In short, please don't make this a kludge. You'll spend hours working out the arcana, connecting drives, and setting up the array. Whenever the first drive fails or becomes corrupt (remember, any of 28 drives can fail, and these sound as though they're repurposed from other builds...) you'll swear. Finding which of the 28 died, rebuilding the array, then discovering the performance wasn't what you expected would be a colossal waste of time. If you want to continue with this idea, best of luck, but from where I sit it looks like a huge pain waiting to happen.
 

bloeff

New Member
Joined
Apr 27, 2014
Messages
6 (0.00/day)
You guys are awesome! Spending any time on an idiot like me! I appreciate all your suggestions. I wouldn't even dream of such a build if I hadn't already had a couple of years of solid performance from my old faithful RocketRaid 2782, fully populated with 32 SSDs in a RAID 0 array. My audio files were approaching 1 GB each and I was getting almost 3000 MB/sec reads and almost 2000 MB/sec writes! Obviously all data was auto-synced to multiple drives for backup. I even used the same array for my OS and all my apps. I worked the crap out of that configuration, and still can't believe that in all that time, I never once had to rely on backed-up data. When my processing needs outgrew the X58 chipset, I made the move to X79. Unfortunately, the 2782 didn't make the move with me; it only works on older motherboards. I certainly don't have any illusions of a 128-drive array never having problems, and I'm not sure yet what I'll do with all the extra space. In case you're interested, the audio is me singing all the parts to choral music and overlaying the tracks. I just kept playing around with it until the CEO of a leading classical record company actually wanted me on his label! Here's my latest recording:


Thanks again,
Bruce
 
Joined
Apr 2, 2011
Messages
2,657 (0.56/day)
...ok now, I'm immensely perplexed. Allow me to see if I understand where we actually stand.

1) You have experience with RAID arrays, and have already constructed a 32 disk array in the past (post #7).
2) You want to construct a new SSD array out of 28 drives (post #4).
3) You already have all the necessary hardware to do everything (various posts).


Based upon these facts, and piecing together your other statements, you already know how to build a massive RAID array. So what is this thread about? The only real difference between an HDD array and an SSD array is going to be TRIM - and TRIM isn't exactly a feature of the RAID card you've cited. Is it power? It can't be; anyone who's already built an HDD array should easily be able to put together a power setup for an SSD array (SSDs use considerably less power than HDDs). Is the question about just stuffing running SSDs in a drawer? Shouldn't be; anyone who owns more than a pair of SSDs has done plenty of research into them. A two-minute YouTube search yields videos of SSDs operating normally while being bounced on a trampoline, and a slightly longer Google search turns up SSDs running happily while crammed right next to a CPU putting off a sweltering 70 C. Is it a budget shortcut? Nope - X79 is crazy expensive, and if you've got that kind of money, a cheap 2.5" drive enclosure is peanuts compared to the SSDs inside it and the rest of the system.


You're going to have to help me here, because I'm lost. What is the question?

All I keep reading are statements that you've got everything in hand. I feel like you either want kudos for a cool idea, or you can't articulate the question you actually want to ask. Help me out here, because I'm running out of ways I can see you needing advice.
 

bloeff

New Member
Joined
Apr 27, 2014
Messages
6 (0.00/day)
My main question is in the thread title: I wanted to know if anyone has experience storing SSDs unmounted for an extended period. I also wanted ideas about things I've never heard of. In another forum, someone mentioned a RAM disk, which I didn't know anything about - that sounds like a great idea for working on smaller projects.

LSI's latest PCIe 3.0 RAID card was brought up, but it's not in the compatibility list for my mobo. A while back, I went back and forth with ASUS and HighPoint to find out why the 2782 RAID card didn't work on the ASUS X79 board. Believe it or not, in all that time neither company had the other's product to test on! I really don't want to go down that road again, but if anyone has tried this card, I'd like to know about it.

Also, a dedicated 5 volt power supply was recommended; I asked about it and got a reply I didn't understand. I'll be honest with you guys - more than half of what I read in these forums leaves me beyond clueless! I'm just too embarrassed to ask questions until I understand!

One more basic question: is there an alternative method that will give me at least 2TB of high-speed storage? My audio files are approaching 1TB, and I would really like them on a high-speed configuration. I don't want to load a 20GB project from an HDD every time I want to access or modify it. Thanks again for your patience!
 