
Intel SSD DC P4500 Series: "ruler" form factor? Huh?

I realize this is a data center product (called Cliffdale), but its 'interface' is 'PCIe 3.1 x4, NVMe'?

I was looking up SSDs with 4 TiB or more of storage and came across this bizarre, discontinued data center product from Intel.

https://ark.intel.com/content/www/u...0-series-4-0tb-ruler-pcie-3-1-x4-3d1-tlc.html

Here's a picture of the bizarre ruler format:

[image: the ruler-format drive]


Even more bizarre is the door on the end, which makes it look like a really long lighter:

[image: the door on the end of the drive]
 
If it's discontinued, that's because current products are 15 TB and 30 TB. Intel makes the DC P4510 and D5-P5316, and Samsung has a shorter 'lighter' they call the PM9A3 that even runs PCIe 4.0 x4. The "ruler" form factor is officially called E1.L (long) and E1.S (short), and it doesn't always come enclosed in such a nice box.
 
Here's a picture of the bizarre ruler format ... Even more bizarre is the door on the end, which makes it look like a really long lighter
Nothing new or bizarre; it was introduced in 2017.
 
Interesting write-up on the E1.S form factor that predicts it will completely replace M.2:
https://www.storagereview.com/news/ruler-is-finally-going-mainstream-why-e1-s-ssds-are-taking-over
Not very persuasive. The thermal-protection advantage only exists because M.2 drives aren't standardized to include heatsinks, and the ease-of-use point is just a slot design difference that could easily be overcome.
People could design new vertical slots just like E1.S, plus standardized heatsinks, and at that point all the listed differences are gone.
 
Not very persuasive. The thermal-protection advantage only exists because M.2 drives aren't standardized to include heatsinks, and the ease-of-use point is just a slot design difference that could easily be overcome.
People could design new vertical slots just like E1.S, plus standardized heatsinks, and at that point all the listed differences are gone.
Good points but does the M.2 standard include the ability to hot swap?
 
Since these are designed for servers, I'm wondering how many servers actually use SSDs. It seems like an expensive, low-durability option compared to HDD RAID.
 
Since these are designed for servers, I'm wondering how many servers actually use SSDs. It seems like an expensive, low-durability option compared to HDD RAID.
Well, it's not true that all servers just write data like crazy, all the time, at full speed. You have transactional databases, the ones that collect all the detailed customer data for example, which never or very rarely delete data; they are close to write-once. Then there are analytical databases/data warehouses, where data is written and rewritten more often (daily, say) but in much smaller amounts compared to transactional. I picked these two examples because I have some experience with SQL databases, but servers of course do other things too, and even databases need various log files that do see a huge amount of writes.
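To make that write-pattern difference concrete, here's a minimal sketch in Python (built-in sqlite3; the schema and workload are made up): the transactional table only ever gets appended to, while the analytical summary is dropped and rebuilt.

```python
# Minimal sketch (hypothetical schema and workload) of the two write patterns
# described above, using Python's built-in sqlite3. The transactional table is
# append-only; the analytical summary is dropped and rebuilt on a schedule.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Transactional side: rows are inserted once and essentially never rewritten,
# so storage sees mostly one-time writes (plus log traffic).
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL, day TEXT)")
cur.executemany(
    "INSERT INTO orders (customer, amount, day) VALUES (?, ?, ?)",
    [("alice", 19.99, "2022-01-01"), ("bob", 5.00, "2022-01-01")],
)

# Analytical side: the same logical data gets rewritten far more often,
# but the rewritten set is usually much smaller than the raw data.
cur.execute("DROP TABLE IF EXISTS daily_revenue")
cur.execute("CREATE TABLE daily_revenue AS "
            "SELECT day, SUM(amount) AS revenue FROM orders GROUP BY day")
con.commit()
print(cur.execute("SELECT * FROM daily_revenue").fetchall())
```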

And if we are to believe Intel, the 30 TB drive has an endurance of 23 PB for random writes and 105 PB for sequential writes. And that's QLC (!!!). StorageReview did a review.
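Some rough back-of-the-envelope math on what those figures work out to in drive-writes-per-day terms; the 30.72 TB usable capacity and the five-year window are my assumptions, not from Intel:

```python
# Rough DWPD (drive writes per day) math for the endurance figures quoted above.
# Assumptions, not from the spec sheet: 30.72 TB usable capacity, 5-year life.
capacity_tb = 30.72
days = 5 * 365

random_endurance_tb = 23_000       # 23 PB quoted for random writes
sequential_endurance_tb = 105_000  # 105 PB quoted for sequential writes

dwpd_random = random_endurance_tb / (capacity_tb * days)
dwpd_sequential = sequential_endurance_tb / (capacity_tb * days)

print(f"Random-write DWPD:     {dwpd_random:.2f}")      # ~0.41
print(f"Sequential-write DWPD: {dwpd_sequential:.2f}")  # ~1.87
# ~0.4 DWPD sounds modest, but on a 30 TB drive it is still over 12 TB of
# random writes per day, every day, for five years.
```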
 
Interesting write-up on the E1.S form factor that predicts it will completely replace M.2:
https://www.storagereview.com/news/ruler-is-finally-going-mainstream-why-e1-s-ssds-are-taking-over
Good points but does the M.2 standard include the ability to hot swap?
Given that these ruler SSDs necessitate a standardization of both cases and motherboards (or backplanes with expensive riser cables), they are never going to gain even the smallest foothold in the consumer space. Nor are they meant to - they're much larger, much more expensive, and designed for higher capacities than consumers could hope to afford. These form factors are explicitly designed for enterprise use, and while some enthusiasts would no doubt like to adopt the standard, it is fundamentally unsuited for consumer use. It's designed around the wrong parameters, quite simply.

In enterprise, on the other hand, M.2 has never been anything but a band-aid fix on top of a broken femur - using what you have until you find something actually suited to the task.
Since these are designed for servers, I'm wondering how many servers actually use SSDs. It seems like an expensive, low-durability option compared to HDD RAID.
Many, many servers these days do. NAND durability is really not that low, even with TLC, and the engineers and technicians that spec, build and service these servers are well versed in system monitoring, preventative maintenance, and redundancy. And the performance benefits, especially for anything reliant on random performance, are on such a scale that staying with HDDs would be business suicide. Even the craziest RAID setup won't come within 10% of the random performance of a single SATA SSD, let alone NVMe RAID. And expensive? Compared to an HDD, sure. But compared to the total price of a rack of servers, and the electricity needed to run and cool them for a few years? Negligible.
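To put some rough numbers on that (per-device figures are ballpark assumptions, not measurements):

```python
# Back-of-the-envelope random 4K IOPS: striped HDD arrays vs. a single SSD.
# Per-device figures are ballpark assumptions for illustration only.
hdd_iops_each = 200        # assumed: typical 7200 rpm enterprise HDD
sata_ssd_iops = 90_000     # assumed: single mainstream SATA SSD
nvme_ssd_iops = 600_000    # assumed: single enterprise NVMe SSD

for drives in (12, 24, 48):
    array_iops = drives * hdd_iops_each  # ideal striping, no controller overhead
    print(f"{drives:2d}-drive HDD stripe: ~{array_iops:,} IOPS "
          f"({array_iops / sata_ssd_iops:.1%} of one SATA SSD, "
          f"{array_iops / nvme_ssd_iops:.1%} of one NVMe SSD)")
```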
 
And the performance benefits, especially for anything reliant on random performance, are on such a scale that staying with HDDs would be business suicide. Even the craziest RAID setup won't come within 10% of the random performance of a single SATA SSD, let alone NVMe RAID.
With Violin Memory products you could already achieve one million IOPS in 2012. That was a much bigger difference at the time than it is now, because HDDs have become faster. Why didn't all companies jump on Violin Memory then? They thought the gains in productivity would not outweigh the additional costs. Violin Memory always claimed the opposite, and they were probably right. But the point I'm making is that a Seagate MACH.2 is going to have enough throughput and IOPS for many business situations, and is going to offer much more affordable storage than any SSD. In the situations where a Seagate MACH.2 would not give enough IOPS on Linux and Windows servers, they can still switch to FreeBSD to get additional performance, redundancy and better network latency:
https://openbenchmarking.org/embed.php?i=1901268-SP-ZFSBSDLIN95&sha=49228e7&p=2
https://openbenchmarking.org/embed.php?i=1901268-SP-ZFSBSDLIN95&sha=225b6b2&p=2
https://openbenchmarking.org/embed.php?i=1901268-SP-ZFSBSDLIN95&sha=12872ac&p=2
https://openbenchmarking.org/embed.php?i=1901268-SP-ZFSBSDLIN95&sha=5ca0c1f&p=2
https://openbenchmarking.org/embed.php?i=1812249-SP-WINSERVER76&sha=0ac3ab0&p=2
https://openbenchmarking.org/embed.php?i=1812249-SP-WINSERVER76&sha=4347141&p=2
https://openbenchmarking.org/embed.php?i=1812090-SK-ZFSBTRFS470&sha=c253c2f&p=2
https://openbenchmarking.org/embed.php?i=1812090-SK-ZFSBTRFS470&sha=49228e7&p=2
https://openbenchmarking.org/embed.php?i=1812090-SK-ZFSBTRFS470&sha=6e71607&p=2

In many situations, companies don't necessarily need an SSD, and it will be much more expensive for them.
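For what it's worth, here is that trade-off in sketch form, with placeholder prices and performance figures (purely illustrative assumptions, not vendor data): HDDs win on dollars per TB, flash wins on dollars per IOPS, and which column matters depends on the workload.

```python
# Sketch of the $/TB vs. $/IOPS trade-off. Every price and performance figure
# below is a placeholder assumption for illustration, not vendor data.
options = {
    "dual-actuator HDD (MACH.2 class)": {"tb": 14.0,  "usd": 400,  "rand_iops": 300},
    "enterprise NVMe SSD":              {"tb": 15.36, "usd": 2200, "rand_iops": 600_000},
}

for name, o in options.items():
    per_tb = o["usd"] / o["tb"]
    per_kiops = o["usd"] / (o["rand_iops"] / 1000)
    print(f"{name}: ~${per_tb:,.0f}/TB, ~${per_kiops:,.0f} per 1,000 random IOPS")
# Capacity-bound workloads care about the first column, IOPS-bound workloads
# about the second, which is exactly the disagreement in this thread.
```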
 
But the point I'm making is that a Seagate MACH.2 is going to have enough throughput and IOPS for many business situations, and is going to offer much more affordable storage than any SSD. ... In many situations, companies don't necessarily need an SSD, and it will be much more expensive for them.
I frankly don't understand who you're arguing against here: nobody here has said that every single company should replace every single HDD-based server with flash storage. That's just a straw man. Literally nobody has said that, or anything close to that. What was said, and what you quoted, was that in any workload reliant on random performance, an SSD-based array will be orders of magnitude faster than an HDD-based one. If you earn money based on completing work, and flash lets you complete parts of that work 10x faster, that's a major possible cost saving/revenue increase, which will easily offset the cost of a flash array vs. HDDs in that case. Heck, your own benchmarks - which didn't come with a source link or any info about the configuration, so they're rather meaningless as an example - confirm this difference. 60,000 IOPS? A single low-end SATA SSD does better than that. Those sequential numbers speak to that being a pretty large array, though - you don't get 6 GB/s out of HDDs without a significant array of them. With SSDs, you can cut the number of drives significantly. That will of course cost a ton for the same capacity, but again, if the flash lets you work 10x faster, that's a small price to pay.
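Rough numbers behind those two points (per-drive figures are assumptions for illustration, not taken from the linked benchmarks):

```python
# Quick arithmetic behind the two points above. Per-device figures are
# assumptions for illustration, not taken from the linked benchmarks.
import math

hdd_seq_mbps = 250           # assumed sequential MB/s per enterprise HDD
sata_ssd_rand_iops = 90_000  # assumed random 4K IOPS of one mainstream SATA SSD

# 1) ~6 GB/s of sequential throughput already implies a sizeable HDD array:
drives_for_6gbps = math.ceil(6000 / hdd_seq_mbps)
print(f"HDDs needed for ~6 GB/s sequential (ideal striping): {drives_for_6gbps}")

# 2) 60,000 random IOPS is less than what a single SATA SSD can sustain:
print(f"60k IOPS is ~{60_000 / sata_ssd_rand_iops:.0%} of one SATA SSD")
```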

Also, I don't really see how your arguments apply. Like, "In the cases where a Mach.2 wouldn't give enough IOPS, you could switch your OS and improve other performance metrics"? So what? You're still not coming close to the IOPS of an SSD array. Also, presenting "just change your OS" for a business setting as something trivial or even moderately easy is just nonsensical. Sure, let's just rewrite our entire software stack and spend a couple of years ironing out all the bugs. That sounds like a good business strategy.

As for that company you're mentioning: I have no idea, but if they failed, maybe their tech wasn't very good, maybe they weren't good at marketing themselves, maybe they just launched at the wrong time? Flash was expensive in 2012. It isn't really today. And there are tons of companies providing all-flash storage solutions for servers and enterprise if that's what you're getting at. From the looks of it, that company might have been early, but they're by no means unique in 2022.
 
Since these are designed for servers, I'm wondering how many servers actually use SSDs. It seems like an expensive, low-durability option compared to HDD RAID.
WTAF.
 
Dang, someone outbid me just after I posted. I bet someone saw my post lol

That's a shame. I wanted to see how it would measure up to other consumer SSDs.
 
That's a shame. I wanted to see how it would measure up to other consumer SSDs.
I don't think there would be much difference vs. an Intel DC U.2 drive, besides it being Gen4 instead. You still get PBs of write endurance and probably the same good Intel queue-depth behavior.
 
I don't think there would be much difference vs. an Intel DC U.2 drive, besides it being Gen4 instead. You still get PBs of write endurance and probably the same good Intel queue-depth behavior.

I was dropping a pun.
 