
Solidigm Launches the D7-P5810 Ultra-Fast SLC SSD for Write-Intensive Workloads

Still no one offering a PCIe 5.0 x16 expansion card stacked with 500GB of DDR5 DRAM that serves as a drive to load virtual machines on?
 
In theory, maybe... but in reality it's just the QLC N38A 1 Tb 144-layer die running in pSLC mode.
Exactly. But nowhere in the press release is that mentioned. The P5810 is constantly referred to as SLC, which is not true.
It's not SLC;
it's QLC running in pSLC mode. The drive should be 4TB, but since pSLC gives 1/4 of the capacity, it's 1TB with 800GB available, so that's 200GB of over-provisioning.
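In numbers, a minimal sketch of that math (the 4TB raw figure is inferred from the 1/4 ratio, not an official Solidigm spec):

```python
# pSLC capacity math for the D7-P5810 (sketch; 4 TB raw QLC is assumed)
raw_qlc_gb = 4000            # raw capacity if every cell held 4 bits
pslc_gb = raw_qlc_gb / 4     # 1 bit per cell -> 1/4 of the QLC capacity
usable_gb = 800              # advertised user capacity
op_gb = pslc_gb - usable_gb  # what's left over for over-provisioning
print(pslc_gb, op_gb)        # 1000.0 GB pSLC, 200.0 GB of OP
```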
Yes, I figured that out myself.
It's advertised as a cache drive, QLC is no issue here. You're not bothered by the slower writes, because cache is read way more often than it is written. And if it goes belly-up, it's just one drive among a hundred others, you just replace it (but that applies to any enterprise storage setup, it's not particular to cache drives).
It is advertised as SLC. It is not mentioned in the press release or in the slides that it's QLC. It's only compared to QLC.
So you think that when you write data to this SSD, it's first stored in the SLC cache, then eventually moved to permanent storage with 4 bits per cell, but Intel, erm, Solidigm won't tell that to us? That would be extremely deceptive advertising even for a consumer SSD, let alone enterprise.
Well they're certainly not telling us that in the press release. I would hope for their sake that their specs page correctly lists this as QLC.
Also, it's deceptive that they decided to name it P5810. The P5800X was the Optane series. They are trying to convey that this is somehow an upgrade over the P5800X, although it is worse in nearly all metrics... except maybe the price (and availability), which I would hope is lower considering it's QLC.
 
Solidigm D7-P5810 uses SK hynix 144-layer 3D NAND flash
in reality it's just the QLC N38A 1 Tb 144-layer die
Which of those two is true? Are both true? I'm bothered because SK hynix does not use die designations such as N38A.

Well they're certainly not telling us that in the press release. I would hope for their sake that their specs page correctly lists this as QLC.
You're saying this as if it were something bad. No, this SSD does not store four bits in each cell. It stores one bit. It doesn't matter that they are reusing flash chips designed for many bits per cell.
Also, it's deceptive that they decided to name it P5810. The P5800X was the Optane series. They are trying to convey that this is somehow an upgrade over the P5800X, although it is worse in nearly all metrics... except maybe the price (and availability), which I would hope is lower considering it's QLC.
Regular prices of Optane drives - and "regular" should be in quotes - are somewhere between 2000 and 5000 EUR per terabyte. Enterprise NAND-based SSDs, mostly TLC, are roughly 20x cheaper. Your hope is very much justified.
Of the metrics, which are the most important? 4K QD1 random reads? Yes, it's bad (about 75 MB/s, calculated from latency), but QD1 doesn't matter in servers. Write endurance? You'll get 90,000 write cycles for the money. Real Optane has twice the DWPD rating and the same warranty period.

Conclusion: Real Optane can't keep up, and could never keep up, given that it stayed planar (two-layer if I understand correctly) from the first day to the last day of its life.
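For reference, the 90,000-cycle figure is just the DWPD rating unrolled over the warranty period. A minimal sketch, assuming the commonly quoted 50 DWPD and 5-year warranty for the P5810, and a write amplification of 1:

```python
# Rough conversion from a DWPD rating to lifetime P/E cycles.
# Assumes perfect wear leveling and WAF = 1, so each full-drive write
# costs about one program/erase cycle per cell.
def lifetime_pe_cycles(dwpd: float, warranty_years: float) -> float:
    return dwpd * 365 * warranty_years

print(lifetime_pe_cycles(50, 5))   # ~91,250 -> the "90,000 write cycles"
print(lifetime_pe_cycles(100, 5))  # ~182,500 for Optane at twice the DWPD
```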
 
this SSD does not store four bits in each cell. It stores one bit.
That remains to be seen.
Of the metrics, which are the most important? 4K QD1 random reads? Yes, it's bad (about 75 MB/s, calculated from latency), but QD1 doesn't matter in servers.
On a write-focused cache drive it does matter.
Write endurance? You'll get 90,000 write cycles for the money. Real Optane has twice the DWPD rating and the same warranty period.
Optane's warranty was always short considering the endurance and price.
Conclusion: Real Optane can't keep up, and could never keep up, given that it stayed planar (two-layer if I understand correctly) from the first day to the last day of its life.
Can't keep up in terms of performance? Because that's outright false. The P5800X is Gen4, just like this P5810.
 
Not much point with these speeds. It has average, or even first-gen, PCIe 4.0 speeds at 6400/4000 MB/s read/write.
Its main benefit is endurance, but for an OS drive that's largely wasted unless you do full drive writes daily for some reason.
No. The point is the 10 µs read latency at QD1. This makes it a pretty good OS drive.
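That follows directly from the latency: at QD1, throughput is one block per access. A quick sketch (the 4 KiB block size and the ~55 µs NAND latency are illustrative assumptions):

```python
# QD1 throughput = one block per access latency (ignores protocol overhead).
def qd1_mbs(latency_us: float, block_bytes: int = 4096) -> float:
    return block_bytes / latency_us  # bytes per microsecond == MB/s

print(qd1_mbs(10))  # ~410 MB/s at Optane-class 10 us reads
print(qd1_mbs(55))  # ~75 MB/s at a typical ~55 us NAND read latency
```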
 

The 860 Pro, if I'm not mistaken, was the last SSD to use MLC. There is still some stock on the market for those interested.
I have three of those: two 512GB and a 4TB. I bought them a year or two ago off the local Craigslist for $200. All were NIB; the seller was going to build a PC for his son and never got around to it. I have them in one of my Z690 12600K rigs, with a Solidigm P44 Pro 1TB as the OS drive.
 
Which of those two is true? Are both true? I'm bothered because SK hynix does not use die designations such as N38A.
SK hynix doesn't have 144-layer NAND flash, only 128-layer and 176-layer; 144-layer is an Intel design.

I think it would be best to mark it just like this in your database, so "QLC/SLC" or "QLC in SLC mode".
It's already listed as "pSLC".
 
That remains to be seen.
I don't have proof; I just think that you can't market a QLC drive as "pure SLC" to datacenter people. That would be worse than the bait-and-switch we consumers get to see; they'd never buy anything Solidigm again. Besides, a pSLC cache would be of no use in an SSD designed for constant writing. Too often it wouldn't be able to move cached data to permanent (QLC) locations, so it would have to operate at roughly 1/4 the capacity or 1/4 the speed.
On a write-focused cache drive it does matter.
As enterprise drives, they are not tuned for low QD. They show their strength when they serve many users and processes simultaneously while still maintaining a low average latency and a defined maximum latency.
Optane's warranty was always short considering the endurance and price.

Can't keep up in terms of performance? Because that's outright false. The P5800X is Gen4, just like this P5810.
Considering the price too, that's what I meant. NAND advanced a lot: planar became 32-layer, then 232-layer. "3D XPoint" remained 2D all along, so manufacturing costs and sale prices per gigabyte remained at DRAM levels.
 
I don't have proof; I just think that you can't market a QLC drive as "pure SLC" to datacenter people. That would be worse than the bait-and-switch we consumers get to see; they'd never buy anything Solidigm again. Besides, a pSLC cache would be of no use in an SSD designed for constant writing. Too often it wouldn't be able to move cached data to permanent (QLC) locations, so it would have to operate at roughly 1/4 the capacity or 1/4 the speed.
You assume "datacenter people" buy solely based on what's written on the label. In fact, they're the most anal kind of customers you can have. And again, it's not marketed as "pure SLC" (in part precisely because "datacenter people" don't fall for that).
 
You assume "datacenter people" buy solely based on what's written on the label. In fact, they're the most anal kind of customers you can have. And again, it's not marketed as "pure SLC" (in part precisely because "datacenter people" don't fall for that).
No, I certainly didn't assume that.

But you're right in that it's not marketed as pure SLC - that was just the wording in the TPU news post, and I overlooked that.
 
I think the 800 gigs is already the 1/4 capacity? So it's actually 3.2TB of QLC physically.

Also, pSLC is considerably faster than normal QLC, so it's not at QLC speeds. But how fast it is compared to native SLC, I don't know, because there haven't really been like-for-like comparisons made; all the SLC drives I'm aware of are really old by now.

As an example, stacked TLC now outperforms planar MLC, and I think it has at the very least matched its erase-cycle endurance.

I think the best way to market it would be "4-bit pSLC".
 
I don't have proof; I just think that you can't market a QLC drive as "pure SLC" to datacenter people. That would be worse than the bait-and-switch we consumers get to see; they'd never buy anything Solidigm again. Besides, a pSLC cache would be of no use in an SSD designed for constant writing. Too often it wouldn't be able to move cached data to permanent (QLC) locations, so it would have to operate at roughly 1/4 the capacity or 1/4 the speed.
It doesn't have an "SLC cache"; the whole drive runs in SLC mode all the time, so the speed doesn't fall.
It's extremely rare to find NAND dies that are natively SLC, but we do have QLC NAND that in SLC mode can reach 100,000 PEC, for example the N48R, Micron's 176-layer 1 Tb QLC die.

I think the 800 gigs is already the 1/4 capacity? So its actually 3.2TB QLC physically.
It's 4TB raw. The drive would indeed be 3.2TB if it were sold as QLC; at 3.2TB it would have close to 37.4% over-provisioning.
And since in pSLC mode it's 800GB with the same 37.4% OP, the real capacity is 1TB.
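The 37.4% checks out if you count the raw flash in binary units against the decimal 800GB user capacity. A quick sketch:

```python
# Reproducing the 37.4% OP figure (sketch; assumes 4096 GiB raw QLC)
raw_qlc_gib = 4096                       # 4 TiB of raw QLC
pslc_gib = raw_qlc_gib / 4               # 1024 GiB in SLC mode
pslc_gb = pslc_gib * 2**30 / 10**9       # ~1099.5 GB in decimal units
usable_gb = 800                          # advertised user capacity
print(f"{pslc_gb / usable_gb - 1:.1%}")  # 37.4%
```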
 
It doesn't have an "SLC cache"; the whole drive runs in SLC mode all the time, so the speed doesn't fall.
It's extremely rare to find NAND dies that are natively SLC, but we do have QLC NAND that in SLC mode can reach 100,000 PEC, for example the N48R, Micron's 176-layer 1 Tb QLC die.


It's 4TB raw. The drive would indeed be 3.2TB if it were sold as QLC; at 3.2TB it would have close to 37.4% over-provisioning.
And since in pSLC mode it's 800GB with the same 37.4% OP, the real capacity is 1TB.

Yes, I meant usable capacity; sorry if that wasn't clear.

On my DC P4600 I actually have 2TB usable; they didn't take 2TB and remove 30%, they added 30% on top.

What they did on this drive seems overly harsh. With 20% OP it should probably be a 4.8TB (raw) drive with 1TB usable.
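That hypothetical in numbers (a sketch, decimal units throughout):

```python
# A 20% OP version of the same drive (sketch, decimal units)
raw_qlc_gb = 4800                        # hypothetical 4.8 TB of raw QLC
pslc_gb = raw_qlc_gb / 4                 # 1200 GB in SLC mode
usable_gb = 1000                         # 1 TB usable
print(f"{pslc_gb / usable_gb - 1:.0%}")  # 20% over-provisioning
```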
 
Yes, I meant usable capacity; sorry if that wasn't clear.

On my DC P4600 I actually have 2TB usable; they didn't take 2TB and remove 30%, they added 30% on top.

What they did on this drive seems overly harsh. With 20% OP it should probably be a 4.8TB (raw) drive with 1TB usable.
Oh, I see, interesting. By the way, I'll try adding that SSD to my database, but it's hard AF to get proper information for those SSDs, especially their controllers.
Intel, Solidigm and SK hynix are a nightmare when it comes to controller information; for real, they don't tell us anything.
 
On my DC P4600 I actually have 2TB usable; they didn't take 2TB and remove 30%, they added 30% on top.
But do you know of any SSD that is sold as X GB where the usable capacity is less?

Of course you need to take into account that tera equals 1000^4, not 1024^4, and it's been this way since spinning rust.

It's 4TB raw. The drive would indeed be 3.2TB if it were sold as QLC; at 3.2TB it would have close to 37.4% over-provisioning.
And since in pSLC mode it's 800GB with the same 37.4% OP, the real capacity is 1TB.
The calculation of total capacity is only possible when you have all the details about the die (bytes/page, pages/block, blocks/plane, planes/die), and even that can only be an approximation. There's also firmware, the FTL, and other metadata that take up space; then there's the allowance for bad blocks (bad when factory-tested), and maybe more.
Here's an example for which your database has full data: the Micron B47R die, nominally 512 Gbit, is 609 Gbit raw in decimal units, or 567 Gibit raw in binary units. There would be more if you counted page-level metadata (ECC etc.), but metadata can never be user data. Is that 19% OP (= 609/512 - 1)? Or less?
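For reference, that arithmetic as a quick sketch (raw figures as given above):

```python
# Die-level "OP" for Micron B47R: nominal 512 Gbit vs. raw ~609 Gbit
nominal_gbit = 512
raw_gibit = 567                              # raw size in binary units
raw_gbit = raw_gibit * 2**30 / 10**9         # ~608.8 Gbit in decimal units
print(f"{raw_gbit / nominal_gbit - 1:.0%}")  # ~19%
```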
 
But do you know of any SSD that is sold as X GB where the usable capacity is less?

Of course you need to take into account that tera equals 1000^4, not 1024^4, and it's been this way since spinning rust.


The calculation of total capacity is only possible when you have all the details about the die (bytes/page, pages/block, blocks/plane, planes/die), and even that can only be an approximation. There's also firmware, the FTL, and other metadata that take up space; then there's the allowance for bad blocks (bad when factory-tested), and maybe more.
Here's an example for which your database has full data: the Micron B47R die, nominally 512 Gbit, is 609 Gbit raw in decimal units, or 567 Gibit raw in binary units. There would be more if you counted page-level metadata (ECC etc.), but metadata can never be user data. Is that 19% OP (= 609/512 - 1)? Or less?
I'm aware; I have many datasheets that already list it.
But I don't count the ECC + bad-block spares on each die in our database.
 
But do you know of any SSD that is sold as X GB where the usable capacity is less?

Of course you need to take into account that tera equals 1000^4, not 1024^4, and it's been this way since spinning rust.


The calculation of total capacity is only possible when you have all the details about the die (bytes/page, pages/block, blocks/plane, planes/die), and even that can only be an approximation. There's also firmware, the FTL, and other metadata that take up space; then there's the allowance for bad blocks (bad when factory-tested), and maybe more.
Here's an example for which your database has full data: the Micron B47R die, nominally 512 Gbit, is 609 Gbit raw in decimal units, or 567 Gibit raw in binary units. There would be more if you counted page-level metadata (ECC etc.), but metadata can never be user data. Is that 19% OP (= 609/512 - 1)? Or less?

When I said usable, I meant in the same way as it's done on consumer drives. If it were done like this new Solidigm drive, I would have a 1.4TB drive on the 1000^4 basis, but instead it's 2TB on the 1000^4 basis. I hope that's super clear now, as I thought it was clear what I meant.

Also, the extra spare on enterprise drives is mostly there to improve wear leveling and trim performance rather than for bad-block replacement; the more spare area an SSD has, the more efficiently it works.
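The two OP conventions side by side, as a sketch (the P4600 figures follow the description above, not a datasheet):

```python
# Enterprise style: spare is added on top of the nominal usable capacity.
def raw_with_spare_on_top(usable_gb: float, op: float) -> float:
    return usable_gb * (1 + op)

# P5810 style: the usable figure is what's left after carving OP out.
def usable_after_op(nominal_gb: float, op: float) -> float:
    return nominal_gb / (1 + op)

print(raw_with_spare_on_top(2000, 0.30))  # ~2600 GB raw for 2 TB usable
print(usable_after_op(2000, 0.374))       # ~1456 GB -> the "1.4TB" above
```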
 