Thank you guys so much for taking time for an obvious novice! The motherboard is the ASUS P9X79 WS, which purports to be compatible with my RAID card - an LSI 9260-8i. I've created a bootable array with the controller and expander and installed Win8. Everything is working fine - so I don't see any problems there - even going up to the controller limit of 128 drives with the expanders daisy-chained together. I've decided to go with one power supply per expander/drive configuration; in the past, this has been plenty to power 28 SSDs. Instead of the 4-shelf storage rack, I'm going with some nice stackable plastic bins with holes on every side and an almost-open face for access. I'll add some fans as suggested. The 28 drives have been accumulated over the last 5 years or so. I'm using mainly smaller drives to create a 2TB RAID 0 boot array. Is this still the Windows limit for a boot partition?
The reason I'm doing such a big build is mainly speed. I'm processing HUGE audio files and need blazing transfer speeds. I'll probably use a RAM disk for current working projects, but I need the much larger array for other projects and apps.
Sweet crispy jebus.
Let's touch back on a few points, so you understand why I'm saying what I'm saying.
Let's imagine every single one of those drives running at its theoretical maximum speed, and let's be generous: say each can read and write at about 100 MB/s. That means your 28 drives together have a theoretical combined read/write throughput of 2,800 MB/s. To put that into perspective, you could theoretically move more data to and from that array in any given moment than some supercomputers.
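If you want to sanity-check that math yourself, here's the back-of-the-envelope version (the 100 MB/s per-drive figure is an assumption, and a generous one for a mixed bag of older, smaller SSDs):

```python
# Back-of-the-envelope aggregate throughput for a 28-drive array.
DRIVES = 28
PER_DRIVE_MBPS = 100  # MB/s per drive: an assumed, generous average

aggregate = DRIVES * PER_DRIVE_MBPS
print(f"Theoretical aggregate: {aggregate} MB/s ({aggregate / 1000:.1f} GB/s)")
# -> Theoretical aggregate: 2800 MB/s (2.8 GB/s)
```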
That obviously isn't what happens in practice, or people would just buy a pile of cheap little drives and RAID them for basically instantaneous performance. The limiting factor becomes the bandwidth of the card and the interface. In this case, you've got a PCIe 2.0 x8 interface, which works out to a theoretical maximum of about 4 GB/s of usable bandwidth for the RAID card. Subtract the protocol overhead of that interface, the overhead of the controller managing all those SSDs, the 6 Gb/s SAS links feeding your daisy-chained expanders, and the rat's nest of wiring, and you've got significantly less speed.
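For reference, here's where that roughly 4 GB/s ceiling comes from. This is just the lane math, before any of the controller or expander overhead mentioned above:

```python
# PCIe 2.0 link budget for an x8 slot like the one the 9260-8i uses.
RAW_GT_PER_LANE = 5.0  # GT/s per PCIe 2.0 lane (raw signaling rate)
ENCODING = 8 / 10      # 8b/10b encoding: 2 of every 10 bits are overhead
LANES = 8

usable_gbps = RAW_GT_PER_LANE * ENCODING * LANES  # usable Gb/s on the link
print(f"Usable x8 bandwidth: {usable_gbps:.0f} Gb/s = {usable_gbps / 8:.0f} GB/s")
# -> Usable x8 bandwidth: 32 Gb/s = 4 GB/s
```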
SATA III connections are rated at a maximum of 6 Gb/s. Please note that that's bits, not bytes: after encoding overhead, you get roughly 600 MB/s of payload per port. Yes, each port is slower than the RAID card's slot, but it's one drive talking directly to one controller. If you use a couple of SATA III connections from the board and two 256 GB SSDs, you'll come out miles ahead, and without having to worry about anything.
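Same arithmetic for the SATA ports, to show why the bits-versus-bytes distinction matters (the ~500 MB/s per-SSD figure below is an assumption for a decent SATA SSD, not a spec):

```python
# SATA III: 6 Gb/s is a raw *bit* rate; 8b/10b encoding takes 20%
# of it before you see any payload bytes.
RAW_GBPS = 6.0
ENCODING = 8 / 10

usable_mbps = RAW_GBPS * ENCODING * 1000 / 8  # Gb/s -> MB/s
print(f"Usable per SATA III port: ~{usable_mbps:.0f} MB/s")
# -> Usable per SATA III port: ~600 MB/s

# Two decent SATA SSDs (~500 MB/s each, assumed) striped in RAID 0:
print(f"Two-SSD RAID 0 estimate: ~{2 * 500} MB/s")
```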
Intel baked RAID support into its PCH. Run a RAID 0 of SSDs on your SATA III ports, put all of your permanent-storage HDDs on the RAID controller (set up with at least 3 drives in a RAID 5 configuration so you can survive one sudden drive failure), and keep the rest of the SATA ports on the motherboard free for optical drives. You'll have insanely fast storage for processing (the RAM disk), fast storage for programs and a few pending projects (the RAID 0 SSDs), and permanent storage for the projects once completed.
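To make the trade-offs concrete, here's a rough capacity sketch of that layout. The drive counts and sizes are placeholders for illustration, not a prescription:

```python
# Rough capacity/redundancy math for the suggested tiers.

def raid0_capacity(drives_gb):
    """RAID 0 stripes across everything: capacity is the sum, zero redundancy."""
    return sum(drives_gb)

def raid5_capacity(drives_gb):
    """RAID 5 yields (n - 1) drives' worth of space and survives one failure."""
    n = len(drives_gb)
    assert n >= 3, "RAID 5 needs at least 3 drives"
    return (n - 1) * min(drives_gb)  # capacity is limited by the smallest drive

ssd_tier = [256, 256]          # two SSDs on the PCH's SATA III ports (assumed)
hdd_tier = [2000, 2000, 2000]  # three HDDs on the 9260-8i (assumed)

print(f"Fast tier (RAID 0): {raid0_capacity(ssd_tier)} GB usable, no failure tolerance")
print(f"Bulk tier (RAID 5): {raid5_capacity(hdd_tier)} GB usable, survives 1 failure")
```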
In short, please don't make this a kludge. You'll spend hours working out the arcana, connecting drives, and setting up an array. When the first drive fails or becomes corrupt (remember, that's 28 chances for a failure, and they sound as though they're repurposed from other builds...), you'll swear. Finding which of the 28 failed, rebuilding the array, and then discovering that the performance wasn't what you expected would be a colossal waste of time. If you want to continue with this idea, best of luck, but from where I sit it looks like a huge pain waiting to happen.