
ASRock Shows Off Z87-Extreme11/ac Build with 22 SSDs

btarunr

Editor & Senior Moderator
Staff member
The ASRock Z87-Extreme11/ac motherboard can connect no fewer than 22 drives over its six SATA 6 Gb/s and sixteen SAS3 ports, so why not show it off? It's just that ASRock chose 22 Plextor M5 Pro SSDs, which make for a creepy, cemetery-like sight. Sadly, they couldn't give us performance numbers because the 22 drives aren't striped in any RAID configuration, but at least you know you can take something like this to your next Left4Dead LAN and pull crowds. It's not just the drives: ASRock also fitted the board with four Radeon HD 7970 cards in CrossFireX, maxed out the memory, and wired the board to its Wi-SD box accessory, a 3.5-inch front panel that features the board's WiFi+Bluetooth antenna, a couple of USB 3.0 ports, and a multi-format card reader.



 
RAID 0 needed
 
What is the purpose of this? Everything in this system is so bandwidth constrained.
 
I spy a DMI bottleneck with all those SSDs.
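As a rough back-of-the-envelope sketch of that bottleneck (the numbers are assumptions: ~2 GB/s usable through DMI 2.0 and ~500 MB/s sequential per SATA SSD; it also simplifies by routing everything over the chipset, whereas the board's SAS ports hang off an add-in LSI controller):

```python
# Assumed, round numbers -- not measurements.
DMI_BANDWIDTH_MBPS = 2000   # ~2 GB/s usable over Z87's DMI 2.0 link
SSD_SEQ_READ_MBPS = 500     # per-drive sequential read for a SATA 6 Gb/s SSD
DRIVES = 22

aggregate = DRIVES * SSD_SEQ_READ_MBPS           # raw drive bandwidth
bottleneck = min(aggregate, DMI_BANDWIDTH_MBPS)  # what the shared link allows

print(f"Aggregate drive bandwidth: {aggregate} MB/s")   # 11000 MB/s
print(f"Ceiling through the link:  {bottleneck} MB/s")  # 2000 MB/s
```

Even with generous assumptions, the drives could push over five times what the shared link can carry.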
 
Sadly, they couldn't give us performance numbers because the 22 drives aren't striped in any RAID configuration.

:shadedshu

I was about to get excited and whip out Mr. Visa..........
 
12 TB of SSDs in RAID 0 would be pretty sweet.
 
Look at all them SATA ports! :eek:
 
Is this the same LSI controller as on the Z77 Extreme11? I heard that one doesn't work all that well.
 
12 TB of SSDs in RAID 0 would be pretty sweet.

There comes a point where more mass-storage bandwidth won't benefit you, because you'll be waiting on the CPU most of the time, like we already do now. The speed-up from each additional SSD is small. It starts high with the first one because regular hard drives are relatively slow.

Let's say you need to read a 400 MB file and process it. The CPU takes 3 seconds to process that data (which will remain constant in all cases). On a regular hard drive with, say, 100 MB/s of bandwidth (we're assuming sequential reads here), we'll spend 4 seconds reading the file and 3 seconds processing it, totaling 7 seconds. If we take a SATA3 SSD that can sustain 500 MB/s, we will read that file in 0.8 seconds, for a total of 3.8 seconds.

That's a nice speed-up; it now takes about half the time. But now we want to RAID it and double our bandwidth. So we have 1000 MB/s, which reads the file in 0.4 seconds, and it takes 3.4 seconds versus 3.8 seconds with just one SSD. In most cases I/O is kept to a minimum unless it's needed, and you very quickly reach a point of diminishing returns, but you see what I mean: there comes a point where it's just not worth the cost.
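The arithmetic above can be sketched in a few lines (same assumed numbers as the post; the fixed CPU time caps the benefit, Amdahl's-law style):

```python
# Time to read a 400 MB file plus a fixed 3 s of CPU processing,
# at various assumed sequential-read speeds.
FILE_MB = 400
CPU_SECONDS = 3.0

def total_time(read_mbps):
    """Seconds to read the file plus the fixed processing time."""
    return FILE_MB / read_mbps + CPU_SECONDS

print(total_time(100))   # HDD,   100 MB/s -> 7.0 s
print(total_time(500))   # SSD,   500 MB/s -> 3.8 s
print(total_time(1000))  # RAID, 1000 MB/s -> 3.4 s
```

Doubling the SSD's bandwidth shaves only 0.4 s off a 3.8 s job; the 3 s of CPU work never shrinks.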

Is this the same LSI controller as on the Z77 Extreme11? I heard that one doesn't work all that well.

It doesn't do RAID-5, which is what turns me off.
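For context, RAID-5's single-drive redundancy comes from XOR parity: any one lost block can be rebuilt from the survivors. A toy sketch (real controllers rotate parity across the drives; this just shows the math):

```python
from functools import reduce

def parity(blocks):
    """XOR the byte strings together, position by position."""
    return bytes(reduce(lambda a, b: a ^ b, group) for group in zip(*blocks))

data = [b"\x01\x02", b"\x10\x20", b"\xAA\xBB"]  # three "drives" worth of data
p = parity(data)

# Drive 1 dies; rebuild its block from the remaining data blocks plus parity.
rebuilt = parity([data[0], data[2], p])
assert rebuilt == data[1]
```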
 
You tell me there's no bottleneck with this.
 
12 TB of SSDs in RAID 0 would be pretty sweet.

You'd be better off with more than one array with fewer drives.

There comes a point where more mass-storage bandwidth won't benefit you, because you'll be waiting on the CPU most of the time, like we already do now. The speed-up from each additional SSD is small. It starts high with the first one because regular hard drives are relatively slow.

Let's say you need to read a 400 MB file and process it. The CPU takes 3 seconds to process that data (which will remain constant in all cases). On a regular hard drive with, say, 100 MB/s of bandwidth (we're assuming sequential reads here), we'll spend 4 seconds reading the file and 3 seconds processing it, totaling 7 seconds. If we take a SATA3 SSD that can sustain 500 MB/s, we will read that file in 0.8 seconds, for a total of 3.8 seconds.

That's a nice speed-up; it now takes about half the time. But now we want to RAID it and double our bandwidth. So we have 1000 MB/s, which reads the file in 0.4 seconds, and it takes 3.4 seconds versus 3.8 seconds with just one SSD. In most cases I/O is kept to a minimum unless it's needed, and you very quickly reach a point of diminishing returns, but you see what I mean: there comes a point where it's just not worth the cost.



It doesn't do RAID-5, which is what turns me off.

Shii, that kills part of the reason I'd get it. If I were going to run a bunch of SSDs and HDDs, at least one array would be RAID-5.
 
Fitting graveyard picture, wonder how long they will last.
 
You'd be better off with more than one array with fewer drives.



Shii, that kills part of the reason I'd get it. If I were going to run a bunch of SSDs and HDDs, at least one array would be RAID-5.

I've been pretty happy with my 2x120GB SSDs and 3x1TB in RAID-5. It's a pretty nice balance but sometimes the SSDs feel a bit small, even with 240GB.

It's a big reason why the Extreme11 turned me off.
 