
Apex Storage Add-In-Card Hosts 21 M.2 SSDs, up to 168 TB of Storage

AleksandarK

News Editor
Staff member
Apex Storage, a new company in the storage world, has announced that its X21 add-in-card (AIC) has room for 21 (you read that right) PCIe 4.0 M.2 NVMe SSDs. The card supports up to 168 TB of storage with 8 TB M.2 NVMe SSDs, 336 TB with future 16 TB M.2 drives, and speeds of up to 30.5 GB/s. Packed into a single-slot, full-length, full-height AIC, the X21 is built for a snug fit inside workstations and servers, targeting applications such as machine learning and hyper-converged infrastructure that enterprises need to deploy across their sites.

The X21 AIC has 100 PCIe lanes on the board, which indicates the presence of a PCIe switch, likely placed under the heatsink. The PCIe slot alone can't supply enough power for all that storage, so the card also carries two 6-pin PCIe power connectors, bringing the total power budget to 225 Watts. Interestingly, the heatsink is passively cooled, but Apex Storage recommends active airflow of at least 400 LFM to ensure regular operation of the card. In the example application, the company populated the X21 with Samsung 990 Pro SSDs; however, the card also supports Intel Optane drives. Read and write IOPS are higher than 10 million. Additionally, the average read and write access latencies are 79 ms and 52 ms. Apex Storage didn't reveal the pricing and availability of the card; however, expect it to come at a premium.
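For a quick sanity check of those headline figures, here's the back-of-the-envelope math; the per-lane bandwidth and connector wattages below are generic PCIe values, not figures confirmed by Apex Storage:

```python
# Rough numbers for a 21-slot PCIe 4.0 M.2 card like the X21.
DRIVES = 21

# Capacity with today's 8 TB M.2 SSDs vs. hypothetical future 16 TB drives.
print(DRIVES * 8, "TB")    # 168 TB
print(DRIVES * 16, "TB")   # 336 TB

# Power budget: x16 slot (75 W) plus two 6-pin connectors (75 W each).
print(75 + 2 * 75, "W")    # 225 W total

# Uplink ceiling: PCIe 4.0 is roughly 2 GB/s per lane, so an x16 uplink tops
# out near 32 GB/s - consistent with the quoted 30.5 GB/s after overhead.
print(16 * 2, "GB/s")
```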



View at TechPowerUp Main Site | Source
 
So this is for people who want huge capacities in SSD form, because in raw performance Intel Optane still smokes it.
 
IOPS are high, but latency is high too (higher than a single Optane unit), so if you build a RAID of Optanes, it will smoke this in performance.
Optanes are EOL and come in too-small capacities. Yes, they can still be bought, and they are still too expensive. But that's not really my point; in fact, the point of view of a home computer user, gamer, or poor semi-professional is completely irrelevant here.
 
This is kinda wild looking, reminds me of the GTX 295.
 
Optanes are EOL and come in too-small capacities. Yes, they can still be bought, and they are still too expensive. But that's not really my point; in fact, the point of view of a home computer user, gamer, or poor semi-professional is completely irrelevant here.
EOL but still the kings of solid state disks, that's a fact.
 
*The Power of 7 begins playing*

was... was this made for me?
Can I get a 'sample'? (I'll buy the 17 additional 118GB P1600Xs)



Seriously though, after catching so much 'shit' about enjoying PCIe switches and NVMe RAID, seeing products like this makes me feel a lot less mad.

I wonder what Switch it's using? PLX's offerings max out at 98 lanes/ports on Gen4.
PEX88096
98 lane, 98 port, PCI Express Gen 4.0 ExpressFabric Platform
Maybe it's Microchip / Microsemi (<- what an unfortunate name...)?

X21 AIC has 100 PCIe lanes on the board
Could be a 116-lane switch, termed that way. The 'Uplink' x16 might be subtracted.
 
Last edited:
I really want to see this tested with all Optane drives.
 
*The Power of 7 begins playing*

was... was this made for me?
Can I get a 'sample'? (I'll buy the 17 additional 118GB P1600Xs)



Seriously though, after catching so much 'shit' about enjoying PCIe switches and NVMe RAID, seeing products like this makes me feel a lot less mad.

I wonder what Switch it's using? PLX's offerings max out at 98 lanes/ports on Gen4.

Maybe it's Microchip / Microsemi (<- what an unfortunate name...)?


Could be a 116-lane switch, termed that way. The 'Uplink' x16 might be subtracted.
With the look of the back, where all the caps and stuff are, I'm thinking it's a pair of switches, in parallel or series maybe.
 
So this is for people that want huge capacities in SSD form, because in performance, still intel optane smokes it.
Makes utilitarian sense, if you're a studio-level UHD+ 'media pro'.
My madness has me imagining a hydra's nest of OCuLink-to-M.2 cards running to 21x U.2 P5810Xs. Just as unrealistic (for me), but I could probably add 17 more P1600Xs (which are at liquidation pricing) if I eventually find one of these (years down the road). Optane 'lasts'; I expect to have my Optane drives for decades to come (which was part of its 'problem' as a "consumer product").
With the look of the back, where all the caps and stuff are, I'm thinking it's a pair of switches, in parallel or series maybe.
Good eye.
I haven't taken the time to 'play with the concept', but I've recently been researching PCIe switches. Can confirm 'series' switches are a pretty common thing, even on finished products (mobos, HBAs, etc.). TBQH, I'd liken PCIe a lot to Ethernet, but the PCB is actually like routing WANs and LANs (inter-strewn across and alongside power and other comm. 'circuits') in a tiny cityscape.
 
Last edited:
Makes utilitarian sense, if you're a studio-level UHD+ 'media pro'.
My madness has me imagining a hydra's nest of OCuLink-to-M.2 cards running to 21x U.2 P5810Xs. Just as unrealistic (for me), but I could probably add 17 more P1600Xs (which are at liquidation pricing) if I eventually find one of these (years down the road). Optane 'lasts'; I expect to have my Optane drives for decades to come (which was part of its 'problem' as a "consumer product").
Well the P5810X are rare, I couldn't find one so I got a P5800.

And those P5800 are still at MSRP.
 
Last edited:
Additionally, the average read and write access latencies are 79 ms and 52 ms.
Is that really several times longer latencies than HDDs? Or should it be microseconds (us)?
 
Is that really several times longer latencies than HDDs? Or should it be microseconds (us)?
Yes, the fastest (lowest latency) I remember for hard drives was a VelociRaptor 10K RPM at about 1 ms. SSDs are measured in μs; the Intel Optane P5810X is supposed to average around 5 μs.
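Putting those units side by side makes the gap obvious; a tiny sketch using the rough figures quoted in this thread (not measurements):

```python
# Approximate access latencies, in microseconds, from the figures above.
latencies_us = {
    "10K RPM HDD (VelociRaptor)": 1000,  # ~1 ms
    "typical NVMe NAND SSD":        80,  # tens of microseconds
    "Intel Optane (P5800X class)":   5,  # ~5 us average
}
for device, lat_us in latencies_us.items():
    print(f"{device:30s} {lat_us:>5} us  ({lat_us / 1000:.3f} ms)")
```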
 
For what use case would this card + 21 hot M.2 drives be preferable to a smaller bunch of U.2 drives?

21 x Corsair or Sabrent M.2 (8 TB) = ~24,000 € for 168 TB - not including the card
11 x Micron 7450 Pro U.3 (15.36 TB) = 18,700 € for 169 TB
6 x Micron 9400 Pro U.3 (30.72 TB) = 24,600 € for 184 TB

This card is meant for workstation-class platforms anyway, where PCIe lane count and bifurcation support shouldn't be an issue, so a smaller set of U.2/U.3 drives could be attached directly without the need for a hot PCIe switch.
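Rough EUR-per-TB math for those three options (card and backplane costs excluded, prices as quoted above):

```python
# Price per TB for the three configurations listed above (approximate).
options = [
    ("21x 8 TB M.2 (Corsair/Sabrent)", 24000, 21 * 8),
    ("11x Micron 7450 Pro 15.36 TB",   18700, 11 * 15.36),
    ("6x Micron 9400 Pro 30.72 TB",    24600,  6 * 30.72),
]
for name, price_eur, capacity_tb in options:
    print(f"{name:32s} {capacity_tb:7.1f} TB  {price_eur / capacity_tb:6.1f} EUR/TB")
```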
 
EOL but still the kings of solid state disks, that's a fact.
This is for those who actually want the fastest sustained speeds possible. Optane does like 7 GB/s sustained. 21x 990 Pros that do 1.4 GB/s each is 29.4 GB/s, and PCIe 4.0 x16 bandwidth is 32 GB/s. Yes, the 990 Pro is absolute fucking trash for sustained writes (and doesn't even deserve the "Pro" nomenclature), but hey, if someone wants 42TB of storage, or maybe 20TB + 22TB, just get 21x 2TB 990 Pros, and boom. Though I am curious how this would even be powered, because 21 NVMe drives are capable of pulling over 75W under full load. I'd buy it if I absolutely needed super fast storage for my video editing, but it's not really worth the cost to me. Maybe for movie makers though. What would make more sense, though, is getting 16x 980 Pro 2TBs and putting those in RAID 0 - still 32TB of storage, but your sustained writes would be 39.9 GB/s raw (capped by the 32 GB/s limit of PCIe 4.0 x16), and it would probably be half the cost.
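Rough sketch of that aggregation math (the per-drive sustained-write numbers are just my estimates, and the x16 uplink is the hard cap):

```python
# Striped-array sustained writes, capped by the PCIe 4.0 x16 uplink (~32 GB/s).
UPLINK_GBPS = 32.0

def array_write_gbps(drive_count, per_drive_gbps):
    raw = drive_count * per_drive_gbps
    return raw, min(raw, UPLINK_GBPS)

for label, n, per_drive in [("21x 990 Pro", 21, 1.4), ("16x 980 Pro", 16, 2.5)]:
    raw, effective = array_write_gbps(n, per_drive)
    print(f"{label}: {raw:.1f} GB/s raw -> {effective:.1f} GB/s effective")
```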
 
This is for those who actually want the fastest sustained speeds possible. Optane does like 7 GB/s sustained. 21x 990 Pros that do 1.4 GB/s each is 29.4 GB/s, and PCIe 4.0 x16 bandwidth is 32 GB/s. Yes, the 990 Pro is absolute fucking trash for sustained writes (and doesn't even deserve the "Pro" nomenclature), but hey, if someone wants 42TB of storage, or maybe 20TB + 22TB, just get 21x 2TB 990 Pros, and boom. Though I am curious how this would even be powered, because 21 NVMe drives are capable of pulling over 75W under full load. I'd buy it if I absolutely needed super fast storage for my video editing, but it's not really worth the cost to me. Maybe for movie makers though. What would make more sense, though, is getting 16x 980 Pro 2TBs and putting those in RAID 0 - still 32TB of storage, but your sustained writes would be 39.9 GB/s raw (capped by the 32 GB/s limit of PCIe 4.0 x16), and it would probably be half the cost.
Still, if you want better specs, a RAID of Optanes will smoke this.
 
Still, if you want better specs, a RAID of Optanes will smoke this.
Kindly show me a faster Optane M.2 drive? Because AFAIK that doesn't exist. They do make fast NVMe Optane drives, but those aren't M.2; they are U.2, in a 2.5" form factor.
 
Kindly show me a faster Optane M.2 drive? Because AFAIK that doesn't exist. They do make fast NVMe Optane drives, but those aren't M.2; they are U.2, in a 2.5" form factor.

Yes, they are U.2, but in a server you can connect them to M.2 slots with adapters.
 
Yes, they are U.2, but in a server you can connect them to M.2 slots with adapters.
So... you didn't understand the entire point of this. This is a compact solution, versus having twenty-one 2.5" drives stacked side by side.
 
Good eye.
I haven't taken the time to 'play with the concept', but I've recently been researching PCIe switches. Can confirm 'series' switches are a pretty common thing, even on finished products (mobos, HBAs, etc.). TBQH, I'd liken PCIe a lot to Ethernet, but the PCB is actually like routing WANs and LANs (inter-strewn across and alongside power and other comm. 'circuits') in a tiny cityscape.
21 x4 slots equal 84 lanes. Add the x16 connection and you have 100. Perfect marketing logic at work...

Which is my bet for where the "100 PCIe lanes on the board" figure comes from.
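Spelling it out (and assuming the switch really is a PLX/Broadcom part, which isn't confirmed):

```python
# 21 downstream x4 M.2 slots plus the x16 host uplink = the marketed lane count.
downstream = 21 * 4            # 84 lanes to the M.2 slots
uplink = 16                    # x16 host connection
total = downstream + uplink
print(total)                   # 100 lanes "on the board"

PEX88096_LANES = 98            # largest single Gen4 ExpressFabric switch
print(total > PEX88096_LANES)  # True -> one PEX88096 alone isn't enough
```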
 
So... you didn't understand the entire point of this. This is a compact solution, versus having twenty-one 2.5" drives stacked side by side.

I doubt people are looking for a "compact" solution at this price point.
 
I doubt people are looking for a "compact" solution at this price point.
Do you really not see the difference between having one expansion card you can fit in a regular-sized workstation, versus having to find a damn chassis with room for 21x 2.5" drives (and cables)? That would probably end up being a couple of rack-mounted chassis for "convenience".

Same thing goes for server usage. Any space you don't need for drives can be used for other purposes.
 
Do you really not see the difference between having one expansion card you can fit in a regular-sized workstation, versus having to find a damn chassis with room for 21x 2.5" drives (and cables)? That would probably end up being a couple of rack-mounted chassis for "convenience".

Same thing goes for server usage. Any space you don't need for drives can be used for other purposes.

If we're talking about performance, there's no other way. If not, look at Apple: even they had no choice but to use a Mac Pro, which still takes up a huge amount of space.

So if performance is what you're looking for, this drive is not for you.
 