Thursday, March 28th 2024

YMTC Claims its 3D QLC NAND Offers Endurance Comparable to 3D TLC NAND

YMTC claims its X3-6070 3D QLC NAND flash chips offer endurance comparable to competitors' 3D TLC NAND flash chips, ITHome reports. The company launched the new NAND flash chip at an event earlier this week. The X3-6070 is based on YMTC's 3rd Generation Xtacking architecture and is 128-layer, which may not sound very competitive given that other brands have moved up to 176 or 232 layers; but YMTC says the 128-layer design choice is one of the four key ingredients in achieving this chip's high endurance.

The other three ingredients are innovations in the materials making up the NAND flash physical layer, new error-correction algorithms, and optimizations at the level of the SSD controller. The X3-6070 can sustain 4,000 P/E cycles per cell, YMTC claims, which puts it in the league of contemporary 3D TLC NAND flash chips, and it does so at the lower cost the QLC architecture affords. Besides the X3-6070, YMTC also launched a few first-party reference-design SSDs that fully implement the controller-level optimizations needed for the chip to perform and endure as advertised.
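As a rough illustration of what a P/E-cycle rating means for drive lifetime, here is a back-of-the-envelope sketch. The drive capacity and write-amplification values are assumptions for illustration; only the 4,000 P/E figure comes from YMTC's claim.

```python
def rated_tbw(capacity_tb: float, pe_cycles: int, write_amplification: float = 2.0) -> float:
    """Back-of-the-envelope terabytes-written rating: each cell can be
    programmed pe_cycles times, and host writes are inflated by the
    controller's write amplification factor."""
    return capacity_tb * pe_cycles / write_amplification

# Hypothetical 1 TB drive built on cells rated for 4,000 P/E cycles:
print(rated_tbw(1.0, 4000))  # 2000.0 TB written before the rating is exhausted
```

With a more pessimistic write amplification of 4, the same drive would rate at 1,000 TBW, which is why the controller-level optimizations (the fourth ingredient above) matter to the endurance claim.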
Sources: ITHome, TPHuang (Twitter), Tom's Hardware

23 Comments on YMTC Claims its 3D QLC NAND Offers Endurance Comparable to 3D TLC NAND

#1
sam_86314
Endurance is great and all, but what about write performance?

If it's still slower than a hard drive, what's the point?
Posted on Reply
#2
trsttte
sam_86314Endurance is great and all, but what about write performance?

If it's still slower than a hard drive, what's the point?
QLC is not slower than a hard drive: the worst-case scenario (no more SLC cache) is still faster than an HDD at its best case (sequential writes and reads). The difference is slim, but it still puts the SSD in front. Not competitive against TLC SSDs, but great for larger storage.



The real problem is endurance, which has thus far sucked and probably will continue to suck. I'm not putting much faith in a random Chinese company, caught lying in the past, suddenly solving that hurdle.
#3
evernessince
trsttteQLC is not slower than a hard drive: the worst-case scenario (no more SLC cache) is still faster than an HDD at its best case (sequential writes and reads). The difference is slim, but it still puts the SSD in front. Not competitive against TLC SSDs, but great for larger storage.



The real problem is endurance, which has thus far sucked and probably will continue to suck. I'm not putting much faith in a random Chinese company, caught lying in the past, suddenly solving that hurdle.
Modern HDDs have sequential writes at 260 MB/s+ for the higher-density drives (16TB+ CMR).

Mind you, an average whole-drive sequential write isn't giving you the speed the NAND writes at; it's a figure that includes the cache speed until the cache fills up, recovers, and repeats. Only a portion of those writes will be at the actual NAND write speed. When I had a Samsung 8TB QVO, the actual NAND write speed once the cache was filled was around 55 MB/s. The same goes for a last-16GB average: the cache is always recovering, so it's impossible to tell how much is direct NAND write performance and how much is just the cache.

In any case, writing directly to QLC NAND is absolutely slower than an HDD when it comes to sequential writes.
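The point about whole-drive averages hiding the native NAND speed can be made concrete with a small time-weighted model. The 55 MB/s post-cache figure is from the comment above; the ~80 GB cache size and ~500 MB/s cached (SATA-limited) speed are assumptions for illustration.

```python
def whole_drive_avg_mbps(capacity_gb: float, cache_gb: float,
                         cache_mbps: float, native_mbps: float) -> float:
    """Time-weighted average speed of one full-drive sequential fill:
    the first cache_gb land at SLC-cache speed, the rest at native
    (direct-to-QLC) speed. Ignores cache recovery during the fill."""
    t_cache = cache_gb * 1000 / cache_mbps               # seconds spent in cache
    t_native = (capacity_gb - cache_gb) * 1000 / native_mbps
    return capacity_gb * 1000 / (t_cache + t_native)

# 8 TB drive, ~80 GB cache at ~500 MB/s, ~55 MB/s direct-to-QLC:
print(round(whole_drive_avg_mbps(8000, 80, 500, 55), 1))  # ~55.5 MB/s
```

On a large drive the cache phase is a rounding error, so the whole-drive average lands almost exactly on the native speed.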
#4
AsRock
TPU addict
trsttteQLC is not slower than a hard drive: the worst-case scenario (no more SLC cache) is still faster than an HDD at its best case (sequential writes and reads). The difference is slim, but it still puts the SSD in front. Not competitive against TLC SSDs, but great for larger storage.



The real problem is endurance, which has thus far sucked and probably will continue to suck. I'm not putting much faith in a random Chinese company, caught lying in the past, suddenly solving that hurdle.
In that case they should have said something about the speed, not just the endurance.
#5
ExcuseMeWtf
I'll believe it when someone tests it independently.
#6
trsttte
evernessinceModern HDDs have sequential writes at 260 MB/s+ for the higher-density drives (16TB+ CMR).

Mind you, an average whole-drive sequential write isn't giving you the speed the NAND writes at; it's a figure that includes the cache speed until the cache fills up, recovers, and repeats. Only a portion of those writes will be at the actual NAND write speed. When I had a Samsung 8TB QVO, the actual NAND write speed once the cache was filled was around 55 MB/s. The same goes for a last-16GB average: the cache is always recovering, so it's impossible to tell how much is direct NAND write performance and how much is just the cache.

In any case, writing directly to QLC NAND is absolutely slower than an HDD when it comes to sequential writes.
The Samsung QVO is not the fastest QLC drive either, quite the opposite, and just as QLC will eventually run out of cache, an HDD is also rarely able to write sequentially and will only be that fast while empty.

And the last 16GB has no cache to use; the cache only works while there's free space on the drive, and once you've filled 99% there's no free space left and the drive slows down. The last 16GB is so close to the average because the drive exhausted the cache right at the beginning (it's only about 100GB even on the 4TB model), so the majority of the fill process ran at a slower fixed speed. On the other hand, an HDD can't even keep a constant sequential speed because of simple physics: the inner part of the disc passes fewer bits under the head per revolution.

This idea that QLC doesn't outperform HDDs is nonsense. The difference is not as big as one would like in some scenarios, but it wins every time.
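The inner-vs-outer-track point can be quantified: at constant RPM, sequential throughput scales roughly with track circumference and therefore with radius. The platter radii below are assumed, typical values for a 3.5-inch drive.

```python
def zone_speed_ratio(r_outer_mm: float, r_inner_mm: float) -> float:
    """At constant rotational speed, bits pass under the head in
    proportion to track circumference (2*pi*r), so sequential
    throughput scales roughly with radius."""
    return r_outer_mm / r_inner_mm

# Assumed data zone of a 3.5" platter: ~20 mm inner to ~46 mm outer radius:
print(round(zone_speed_ratio(46, 20), 2))  # ~2.3x faster on the outer tracks
```

This matches the common observation that an HDD's sequential speed roughly halves by the end of a full-drive write.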
#7
GerKNG
I had HDDs writing a 1TB file faster than a 4TB 870 QVO.
I don't even see the reason why QLC exists, except maybe for slow archival storage at 10TB+, and for that we have HDDs...
#8
Denver
GerKNGI had HDDs writing a 1TB file faster than a 4TB 870 QVO.
I don't even see the reason why QLC exists, except maybe for slow archival storage at 10TB+, and for that we have HDDs...
It exists to increase manufacturers' profit margins.
#9
Wirko
GerKNGI had HDDs writing a 1TB file faster than a 4TB 870 QVO.
I don't even see the reason why QLC exists, except maybe for slow archival storage at 10TB+, and for that we have HDDs...
Apps don't write to disks all the time - sometimes they read some data, too.
#10
mechtech

ver 1.4 is 5 years old

For new drives, I would expect at least 2.0a
#11
evernessince
trsttteThe Samsung QVO is not the fastest QLC drive either, quite the opposite, and just as QLC will eventually run out of cache, an HDD is also rarely able to write sequentially and will only be that fast while empty.
The profile of your writes depends entirely on what you are doing. If we are talking about a game or video storage drive, most writes are indeed sequential. If we are talking about an OS drive, then there is indeed a healthy mix.
trsttteAnd the last 16GB has no cache to use; the cache only works while there's free space on the drive, and once you've filled 99% there's no free space left and the drive slows down.
That's not how any decent modern SSD works.

Cache space comes in the form of DRAM built into the drive, SLC, or an HMB cache. In any of those scenarios the cache space is reserved and inaccessible for general storage purposes. It is used solely for buffering writes to the NAND and for SSD maintenance operations (like wear leveling, for example). Early SSDs slowed down precisely because they didn't set up this reserve space, and a cheap or extremely bad SSD nowadays might still have this issue, but by and large it is a solved issue.
trsttteThe last 16GB is so close to the average because the drive exhausted the cache right at the beginning (it's only about 100GB even on the 4TB model), so the majority of the fill process ran at a slower fixed speed.
You are only accounting for the cache exhausting once here. It's entirely possible that the cache exhausts, recovers, and then exhausts again, repeating during a full-drive write, depending on the size of the drive and the cache.
trsttteOn the other hand, an HDD can't even keep a constant sequential speed because of simple physics: the inner part of the disc passes fewer bits under the head per revolution.
And yet the sequential writes on the outside of the disk are still faster than the 870 QVO. Mind you, I haven't used any of the higher-density HDDs or the upcoming dual-actuator HDDs, given that I went all-SSD a few years back, but I assume they continue to improve on speed.
trsttteThis idea that QLC doesn't outperform HDDs is nonsense. The difference is not as big as one would like in some scenarios, but it wins every time.
I'd like to point out that I specifically said sequential write performance was better, not that QLC doesn't outperform HDDs. QLC does outperform HDDs. The only thing HDDs do well is sequentials, which just so happens to be good for many consumer data-storage scenarios.
WirkoApps don't write to disks all the time - sometimes they read some data, too.
Correct, although in the use case you responded to, archival, 99% of the time it's going to be writes.
#12
trsttte
evernessinceCache space comes in the form of DRAM built into the drive, SLC, or an HMB cache. In any of those scenarios the cache space is reserved and inaccessible for general storage purposes. It is used solely for buffering writes to the NAND and for SSD maintenance operations (like wear leveling, for example). Early SSDs slowed down precisely because they didn't set up this reserve space, and a cheap or extremely bad SSD nowadays might still have this issue, but by and large it is a solved issue.
Nope. HMB stores the allocation table only; it's just a couple hundred megabytes most of the time, and DRAM, though it caches some reads/writes, is usually only 1GB per TB of storage (not enough to cache anything meaningful on a full-disk write). On the QVO specifically, SLC is allocated by using a portion of the QLC as SLC, so again it won't meaningfully help during an entire drive write; simply put, the drive can't clear it fast enough to use it again. Other drives might do things slightly differently, but the majority probably work like this; it doesn't make sense to include an extra SLC package in the BOM, after all.
evernessinceYou are only accounting for the cache exhausting once here. It's entirely possible that the cache exhausts, recovers, and then exhausts again, repeating during a full-drive write, depending on the size of the drive and the cache.
That doesn't happen fast enough; if the drive did that, it would become slower, not faster. It does indeed have to do that towards the end, because to complete a full write, what was treated as SLC needs to be used as QLC again.
evernessinceAnd yet the sequential writes on the outside of the disk are still faster than the 870 QVO. Mind you, I haven't used any of the higher-density HDDs or the upcoming dual-actuator HDDs, given that I went all-SSD a few years back, but I assume they continue to improve on speed.
My bad for bringing up the benchmarks of such a bad and old drive; it's just the easiest to find. Here's a Sabrent Rocket Q doing 300+ in the last 16GB, which turns into a 400+ average for the full drive. And again, not the fastest or newest QLC by any measure; this is just random consumer stuff from three years ago, because I can't find whole-drive write benchmarks for enterprise drives like the more recent Micron stuff.
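One way to see why a dynamic SLC cache is so small relative to the drive: every cell run in 1-bit SLC mode gives up the other three bits it could store as QLC, so a GB of cache consumes four GB of native capacity. The 400 GB figure below is an assumed caching budget, chosen only to match the ~100 GB cache mentioned above.

```python
def dynamic_slc_cache_gb(qlc_capacity_spent_gb: float, bits_per_cell: int = 4) -> float:
    """Dynamic SLC cache borrows native blocks in 1-bit mode, so each
    GB of cache ties up bits_per_cell GB of QLC capacity."""
    return qlc_capacity_spent_gb / bits_per_cell

# Spending ~400 GB of free QLC space on caching (assumed policy, 4 TB drive):
print(dynamic_slc_cache_gb(400))  # 100.0 GB of usable SLC cache
```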

#13
evernessince
trsttteNope. HMB stores the allocation table only; it's just a couple hundred megabytes most of the time, and DRAM, though it caches some reads/writes, is usually only 1GB per TB of storage (not enough to cache anything meaningful on a full-disk write).
First, you mean mapping table, not allocation table.

Second:

"One feature of the HMB is that drives can include a Fast Write Buffer (FWB) as part of the HMB structure. The basic idea is that SSDs manufacturers can take advantage of the main memory’s speed and use the main memory as a write buffer for the NAND device. This allows for features like data being written to NAND more efficiently aligned to the NAND’s cells as it is flushed from the FWB to the NAND SSD."

www.servethehome.com/what-are-host-memory-buffer-or-hmb-nvme-ssds/

The HMB absolutely can be and is used for more than just the mapping table. The official NVMe standard simply states that HMB may be used by the controller:

nvmexpress.org/wp-content/uploads/NVM-Express-1_3c-2018.05.24-Ratified.pdf

HMB is introduced on page 162.

It does not specify that HMB must be used only for the mapping table.
trsttteOn the QVO specifically, SLC is allocated by using a portion of the QLC as SLC, so again it won't meaningfully help during an entire drive write,
SLC which is reserved and separate from the storage accessible to the user.
trstttesimply put, the drive can't clear it fast enough to use it again. Other drives might do things slightly differently, but the majority probably work like this; it doesn't make sense to include an extra SLC package in the BOM, after all.
No one said there was a separate chip for the SLC, just that it is reserved. You are again making a wrong assumption here.
trsttteThat doesn't happen fast enough; if the drive did that, it would become slower, not faster. It does indeed have to do that towards the end, because to complete a full write, what was treated as SLC needs to be used as QLC again.
Just stop with this false theory of yours that the cache and storage compete for the same space; it's wrong. Again, the cache space is reserved and separate. You can see this in the chart below:



The drive's speed drops once the cache fills up and then remains steady after that. If your theory were correct, you would see a drop-off in performance as the drive approaches full, but we don't. Again, most modern SSDs reserve space specifically for the cache, separate from the storage usable by the end user.
trsttteMy bad for bringing up the benchmarks of such a bad and old drive; it's just the easiest to find. Here's a Sabrent Rocket Q doing 300+ in the last 16GB, which turns into a 400+ average for the full drive. And again, not the fastest or newest QLC by any measure; this is just random consumer stuff from three years ago, because I can't find whole-drive write benchmarks for enterprise drives like the more recent Micron stuff.

The Sabrent Rocket Q has an absolutely massive cache, the largest of any QLC drive, and uses the Phison E12S controller.

"Sabrent’s Rocket Q features a massive dynamic SLC write cache that spans a quarter of the SSD’s available capacity. The 8TB Rocket Q wrote a little over 2.1TB of data at 2.9 GBps before degrading to an average speed of 276 MBps after the write cache filled.

The write cache recovers fairly quickly, too. Give the SSD a few minutes of idle time, usually 2-5 minutes, and a lot of the write cache will recover.

QLC flash does have its downfalls, like lower endurance and slower write performance after the SLC write cache gets filled up during large file transfers, but the Phison E12S controller helps push the Rocket Q to the fastest performance we've seen from a QLC drive. The large dynamic write cache, a benefit of the massive capacity, also helps reduce the inherent performance issues that typically plague QLC SSDs."

www.tomshardware.com/reviews/sabrent-rocket-q-nvme-ssd/2

You literally cherry-picked the drive whose performance numbers least represent QLC direct write performance. It was vastly more expensive than even competing TLC drives. Heck, the newer Q4 version of that drive costs $174 for 1TB when you can buy a 2TB WD SN850X TLC for $150.

Compare that to a more normal, newer QLC drive like the Crucial P3 Plus. That drive nets speeds of 100 MB/s after the cache runs out.

More or less, Tom's conclusion says it all: the cache and controller help cover up QLC's downsides, but it's very expensive, and we can see from the performance of other QLC drives what actual QLC performance is like.
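The Tom's Hardware figures quoted above are enough to reconstruct the Rocket Q's whole-drive average: 2.1 TB at ~2.9 GB/s in cache, then 276 MB/s direct to QLC.

```python
def blended_fill_mbps(total_gb: float, fast_gb: float,
                      fast_mbps: float, slow_mbps: float) -> float:
    """Average speed of a full-drive fill split into a fast (cached)
    phase and a slow (direct-to-QLC) phase."""
    t = fast_gb * 1000 / fast_mbps + (total_gb - fast_gb) * 1000 / slow_mbps
    return total_gb * 1000 / t

# 8 TB Rocket Q: 2.1 TB at ~2900 MB/s, the remaining 5.9 TB at 276 MB/s:
print(round(blended_fill_mbps(8000, 2100, 2900, 276)))  # ~362 MB/s
```

That lands in the ballpark of the 400+ MB/s average cited earlier once cache recovery during the fill is added back in; the huge cache, not the QLC itself, is doing most of the work.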
#14
chrcoluk
trsttteQLC is not slower than a hard drive: the worst-case scenario (no more SLC cache) is still faster than an HDD at its best case (sequential writes and reads). The difference is slim, but it still puts the SSD in front. Not competitive against TLC SSDs, but great for larger storage.



The real problem is endurance, which has thus far sucked and probably will continue to suck. I'm not putting much faith in a random Chinese company, caught lying in the past, suddenly solving that hurdle.
Umm, that graph shows they can be slower than an HDD's best case; less than 50% of the HDD's best case, actually.

But I do agree, we are talking about uncommon situations that consumers may come across (SLC cache exhaustion).

Nowadays 83 MB/s would be a quite slow HDD.

From the spec sheets here (conveniently to hand, given I am investing in some new drives soon; sadly I just missed a 25% off deal from WD, but they are now offering me 20% goodwill, so good customer service):

documents.westerndigital.com/content/dam/doc-library/en_us/assets/public/western-digital/product/internal-drives/wd-red-plus-hdd/product-brief-western-digital-wd-red-plus-hdd.pdf
documents.westerndigital.com/content/dam/doc-library/en_us/assets/public/western-digital/product/internal-drives/wd-red-pro-hdd/product-brief-western-digital-wd-red-pro-hdd.pdf
documents.westerndigital.com/content/dam/doc-library/en_us/assets/public/western-digital/product/internal-drives/wd-gold/product-brief-wd-gold-hdd.pdf

A modern 7200 RPM drive can come close to 300 MB/s, and a modern 5400 RPM drive will still do almost 200 MB/s. Of course, this is non-fragmented sequential at the start of the disk.

My 5400 RPM CMR WD Red, doing backups two days ago at 80% full, was writing at 130 MB/s (cache exhausted, so actual speed) and can read them back at the same speed. So the TPU data above might be from a full or almost-full drive.
#15
Maxx
SSD Guru
Just to clear up some things:

Current QLC is in the 30-40 MB/s range for maximum sequential write performance per die. This puts a cap around 640 MB/s on a budget drive with direct-to-QLC, but often these drives are given large SLC caches which impacts throughput. The fastest would be the P41 Plus or 670p at 400 MB/s.

HMB can be used for write caching, yes; it can also be artificially extended and used for different types of mapping, like reverse mapping. I have articles and patents for this on my site. Typically HMB reserves 30-40MB, and it's not used for high-priority data. Usually you will have embedded volatile memory to handle a superpage or more (e.g. 1MiB) and some amount of mapping for hot data that benefits, like random writes, on the order of 0.5 to 4MiB. With optimized mapping this could cover up to 4GB or so of random 4KB writes. HMB is still volatile and adds latency, so it would not be used for the highest-priority data, but data-at-rest protection and controller algorithms can rebuild the table on the next startup anyway.

As for SLC: TurboWrite (for the QVO in question) is static + dynamic SLC. The static portion can be reserved per-die as there are times you want specific word lines for static SLC. The reason for this is that one end of the stack has better retention but worse programming performance, best for static SLC, and the other the opposite, which sometimes is used for static SLC to improve yields on poor-grade flash. You see the former on Intel's 192L flash where they have 250K SLC to compensate for using essentially a QLC design in 5-bit mode. On the other hand, the Samsung 64L TLC you saw in many budget drives a while back, an example being the SX8200 Pro variant (see TH's article), used static SLC to compensate for poor endurance. This is because the wear zone for static SLC is separate from native flash. The native flash shares a rotating zone based on wear with the dynamic portion of the SLC. When determining endurance, static SLC is weighted against the native + dynamic zone, which is why using high-retention WLs is a typical strategy. Likewise, OP space is carved out of each die, and often this system area is in SLC mode for performance as that's where the non-volatile mapping and metadata is stored.

Coming back to the P3/P3 Plus, I have Micron's datasheet for this flash. It's officially rated with an effective program time of 2339 µs, which translates to about 27.4 MB/s per die. Typical interleaving on a budget drive is 4 channels with 4 dies per channel, giving you an idea of maximum performance if the drive were entirely QLC. Crucial went with full-drive SLC caching, so the post-cache speeds are going to be terrible.
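Maxx's per-die arithmetic checks out if one assumes the 2339 µs effective program time covers a ~64 KB multi-plane program (the payload size is my assumption; the program time and the 4-channel x 4-die layout are from the post):

```python
def die_write_mbps(program_bytes: int, t_prog_us: float) -> float:
    """Sequential write throughput of one NAND die: bytes programmed
    per effective program time. bytes/us is numerically MB/s."""
    return program_bytes / t_prog_us

per_die = die_write_mbps(64_000, 2339)   # assumed ~64 KB multi-plane program
print(round(per_die, 1))                 # ~27.4 MB/s per die
print(round(per_die * 4 * 4))            # ~438 MB/s for 4 channels x 4 dies
```

This is consistent with the 30-40 MB/s-per-die range and the ~640 MB/s budget-drive ceiling given earlier for faster QLC flash.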
#16
efikkan
The endurance figures for TLC drives are already greatly exaggerated, not to mention the nonsensical figures for QLC. They arrive at these figures by optimistically estimating the usage, where the majority of writes hit the RAM cache etc. In reality, these drives will wear out much faster.

Then there is performance, which is terrible enough on (consumer-grade) TLC drives already, both as the drives fill up and as they wear out. Don't waste your hard-earned money on QLC drives, or any large (consumer-grade) SSD, for long-term storage. I've heard some people even buy a large SSD and only use a tiny portion of it to maintain higher performance and reliability (facepalm); at that point you might as well buy a slightly more expensive enterprise SSD and get sustained performance and durability.
#17
MacZ
I had QLC drives and replaced them with better technology when SSD prices crashed some time ago.

I don't care what anyone says about QLC drives.

QLC: never again.
#18
Minus Infinity
Why are we wasting so much time trying to defend QLC? The performance numbers show it's utter trash, and the price is also stupidly high. I would only buy an 870 QVO 4TB if it were literally 1/8 the price of the equivalent TLC drive. Hopeless endurance, abysmal performance, high price per TB. I'm still using HDDs for my photo collection as I need 8TB+. QLC offers barely improved performance in most cases (well, the QVO at least). I'm going to split my photo collection over two PCIe 4.0 4TB SSDs for my next PC build later this year; I've given up on affordable 8TB TLC drives. QLC would need an order-of-magnitude improvement in all areas to tempt me.
#19
Pepamami
MacZI had QLC drives and replaced them with better technology when SSD prices crashed some time ago.

I don't care what anyone says about QLC drives.

QLC: never again.
In 5 years, every new SSD is gonna be QLC.
QLC means that one cell holds 16 states (TLC holds 8 states, MLC holds 4 states, SLC 2 states).
QLC does not tell you how you achieve those 16 states.
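The state counts follow directly from the bit depth, since n bits require 2^n distinguishable voltage levels per cell. A minimal sketch:

```python
def cell_states(bits_per_cell: int) -> int:
    """Each additional bit per cell doubles the number of voltage
    levels the cell must reliably distinguish."""
    return 2 ** bits_per_cell

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    print(name, cell_states(bits))  # SLC 2, MLC 4, TLC 8, QLC 16
```

The tighter voltage margins between 16 states, rather than the count itself, are what make QLC's endurance and write speed hard, and that is exactly the part the acronym doesn't tell you.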
#20
MacZ
PepamamiIn 5 years, every new SSD is gonna be QLC.
QLC means that one cell holds 16 states (TLC holds 8 states, MLC holds 4 states, SLC 2 states).
QLC does not tell you how you achieve those 16 states.
I seriously doubt it.

QLC is great for WORM or WORM-adjacent usage scenarios (like loading a game level). But if you overwhelm the SLC cache by writing a lot, you enter a world of pain.

QLC SSDs are like SMR hard drives in the sense that they are not all-purpose drives. Also, the cost differential between QLC and TLC is not big enough to justify the disappearance of TLC.

I think that if QLC were poised to replace TLC completely, it would have happened already.

And for businesses, there are lots of use cases for which QLC is unacceptable.

This is why I don't think that TLC will disappear.
#21
A&P211
evernessinceModern HDDs have sequential writes at 260 MB/s+ for the higher-density drives (16TB+ CMR).

Mind you, an average whole-drive sequential write isn't giving you the speed the NAND writes at; it's a figure that includes the cache speed until the cache fills up, recovers, and repeats. Only a portion of those writes will be at the actual NAND write speed. When I had a Samsung 8TB QVO, the actual NAND write speed once the cache was filled was around 55 MB/s. The same goes for a last-16GB average: the cache is always recovering, so it's impossible to tell how much is direct NAND write performance and how much is just the cache.

In any case, writing directly to QLC NAND is absolutely slower than an HDD when it comes to sequential writes.
I don't know about you, but my 8TB 870 QVO does around 155 MB/s when the cache is filled up. I still use it. I think the cache is only 86GB, so it's very small for an 8TB SSD. It's only used for mass storage.
#22
Pepamami
MacZI seriously doubt it.

QLC is great for WORM or WORM-adjacent usage scenarios (like loading a game level). But if you overwhelm the SLC cache by writing a lot, you enter a world of pain.

QLC SSDs are like SMR hard drives in the sense that they are not all-purpose drives. Also, the cost differential between QLC and TLC is not big enough to justify the disappearance of TLC.

I think that if QLC were poised to replace TLC completely, it would have happened already.

And for businesses, there are lots of use cases for which QLC is unacceptable.

This is why I don't think that TLC will disappear.
Yeah, on the other hand that's kinda true: when you move from MLC to TLC you gain 50% more capacity, but when you move from TLC to QLC you gain only 33% (3 bits vs 4), with way more headache.
But on the other hand, someone like Samsung may bring some flash technology that can easily bring QLC performance to acceptable levels.
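The diminishing-returns point can be shown with the capacity gain of each extra bit per cell:

```python
def density_gain_pct(bits_from: int, bits_to: int) -> float:
    """Extra capacity from storing more bits in the same cells."""
    return (bits_to / bits_from - 1) * 100

print(round(density_gain_pct(2, 3)))  # MLC -> TLC: 50% more capacity
print(round(density_gain_pct(3, 4)))  # TLC -> QLC: only 33% more
print(round(density_gain_pct(4, 5)))  # QLC -> PLC: just 25%
```

Each step buys less capacity while the voltage-margin problem gets strictly harder.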
#23
efikkan
MacZQLC is great for WORM or WORM-adjacent usage scenarios (like loading a game level).
Don't forget that the SSD has to rewrite the data to prevent data rot and to do wear leveling, so QLC will be terrible for long-term storage.