
Disabled SLC Cache Tested on M.2 SSD, Helps Performance in Some Cases

Would be nice if the drive allowed you to disable SLC cache at a certain % full threshold.
 
Would be nice if the drive allowed you to disable SLC cache at a certain % full threshold.
Why would that be useful?

Anyway, I've seen occasional reports that SLC cache becomes ineffective when the SSD is close to full. I tend to blame internal fragmentation for that, as SLC caching probably needs large extents of contiguous free space, so it can write large chunks of data sequentially (random/fragmented would be slow).

First of all, thanks for all the comments, and I hope you guys liked the content. My next one will be disabling the DRAM cache on an NVMe SSD to see real-world case scenarios, and we hope to see the "REAL" difference
If I'm allowed to make a suggestion, here it is: please record the SMART data and report how much data each of your benchmarks writes to the SSD. As far as I'm aware, no SSD reviewer does that. OS booting and game loading probably write little data, but it would be nice to have proof. That would also explain why disabling the SLC cache has little effect.
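Something like this would be enough to capture it (a rough sketch using smartmontools; /dev/nvme0 is only a placeholder device path, smartctl needs sufficient privileges, and the 512,000-byte data-unit size comes from the NVMe spec):

```python
# Sketch: measure how much data a benchmark writes to an NVMe SSD by sampling
# the "Data Units Written" field of the SMART/health log before and after.
# Assumes smartmontools is installed; adjust the device path for your system.
import re
import subprocess

DEVICE = "/dev/nvme0"  # placeholder, e.g. /dev/nvme1 for a secondary drive

def data_units_written(device: str) -> int:
    """Parse 'Data Units Written' from `smartctl -A` output."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=True).stdout
    m = re.search(r"Data Units Written:\s*([\d,.]+)", out)
    if not m:
        raise RuntimeError("Data Units Written not found (not an NVMe device?)")
    return int(re.sub(r"[,.]", "", m.group(1)))

before = data_units_written(DEVICE)
input("Run the benchmark (game load, file copy, etc.), then press Enter...")
after = data_units_written(DEVICE)

# Per the NVMe spec, one data unit is 1000 * 512 bytes = 512,000 bytes.
print(f"Benchmark wrote roughly {(after - before) * 512_000 / 1e9:.2f} GB")
```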
 
It looked like beyond a certain % threshold it performed worse. So maybe with the right threshold point you could get a bit better balance between cache and no cache.
 
Why would that be useful?

Anyway, I've seen occasional reports that SLC cache becomes ineffective when the SSD is close to full. I tend to blame internal fragmentation for that, as SLC caching probably needs large extents of contiguous free space, so it can write large chunks of data sequentially (random/fragmented would be slow).


If I'm allowed to make a suggestion, here it is: please record the SMART data and report how much data each of your benchmarks writes to the SSD. As far as I'm aware, no SSD reviewer does that. OS booting and game loading probably write little data, but it would be nice to have proof. That would also explain why disabling the SLC cache has little effect.
I don't do that since the SSDs are secondary disks.

A DRAM cache vs HMB would also be nice, but I guess it's hard to pick two drives that are similar enough for it to be a somewhat apples-to-apples comparison.
It's hard to do that since the controllers support either HMB or DRAM. Only a handful support both.
 
"I dont always write 700 GB, but if i do, i prefer slc..."
Please transfer data responsibly. :D

(for those outside the Americas, check the Dos Equis commercials on YouTube)
 
That's why I don't like it when TechPowerUp rates a large SLC cache as something positive. 1000-2000 MB/s write speed is still plenty for most applications. But when it drops to 600 MB/s, or worse, 100 MB/s for QLC drives, then it's just awful. Even your Internet speed can be faster than that.
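To put rough numbers on that (illustrative speeds only, not measurements of any particular drive; ~112 MB/s is roughly what a 1 Gbit/s connection delivers):

```python
# Back-of-the-envelope: how long a 100 GB write takes at various sustained speeds.
# All figures are illustrative, not measurements of a specific drive.
transfer_gb = 100
for label, mb_per_s in [("pSLC burst", 5000),
                        ("native TLC", 1500),
                        ("QLC, cache exhausted", 100),
                        ("1 Gbit/s internet", 112)]:
    minutes = transfer_gb * 1000 / mb_per_s / 60
    print(f"{label:>22}: {minutes:5.1f} min")
```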
I would think the reason is obvious: essentially, the longer the transfer goes on, the less likely a real use case will encounter it. The drives with the smallest SLC cache can be exhausted in some real-world cases, but drives with the largest SLC cache will probably never hit the scenario where the pSLC is exhausted with a huge backlog of data having to be moved out of it.

For all my drives, the likely biggest sustained write is when/if I am migrating data from the drive it is replacing, a one-off event.

After I got my SN850X I did move a couple of hundred gigs' worth of games off my 980 Pro though. But I won't be doing this sort of thing often. Plus it wasn't all in one go: one game at a time, with gaps in between.

Would be nice if the drive allowed you to disable SLC cache at a certain % full threshold.
Probably the worst time to get rid of it; pSLC also increases endurance, and you want that if the drive is nearly full.
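For a sense of scale, a dynamic pSLC cache borrows free TLC blocks and writes them at 1 bit per cell instead of 3, so the available cache is roughly the free space divided by three (a rule-of-thumb sketch with made-up capacities; real firmware caps vary):

```python
# Sketch: rough upper bound on dynamic pSLC cache size for a 2 TB TLC drive.
# Free TLC blocks used as SLC hold 1 bit per cell instead of 3.
drive_capacity_gb = 2000  # example drive size
for used_gb in (200, 1000, 1800):
    free_gb = drive_capacity_gb - used_gb
    pslc_gb = free_gb / 3  # rule-of-thumb upper bound; firmware limits vary
    print(f"{used_gb:>5} GB used -> up to ~{pslc_gb:.0f} GB pSLC cache")
```

So a one-off couple-hundred-gig migration onto a mostly empty drive never gets near the cache limit; it only becomes a realistic concern when the drive is close to full.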
 
I would think the reason is obvious: essentially, the longer the transfer goes on, the less likely a real use case will encounter it. The drives with the smallest SLC cache can be exhausted in some real-world cases, but drives with the largest SLC cache will probably never hit the scenario where the pSLC is exhausted with a huge backlog of data having to be moved out of it.

For all my drives, the likely biggest sustained write is when/if I am migrating data from the drive it is replacing, a one-off event.

After I got my SN850X I did move a couple of hundred gigs' worth of games off my 980 Pro though. But I won't be doing this sort of thing often. Plus it wasn't all in one go: one game at a time, with gaps in between.


Probably the worst time to get rid of it; pSLC also increases endurance, and you want that if the drive is nearly full.

I didn't really take the endurance angle into account, but it's a fair enough consideration. I was simply looking at it from a performance angle, where it could make sense, and maybe it's generally fine to have that option for a typical consumer as well, not sure. I don't think most consumers write to disk too heavily, to be honest, so a lot of endurance concerns are probably a bit overstated. Basically you could perhaps look at it a bit like short-stroking an HDD, but kind of in reverse with the cache. Not a perfect analogy perhaps, but from a performance angle it's a bit of an inverse scenario. The whole purpose of short-stroking was also to minimize seek-access performance cratering.
 
Probably the worst time to get rid of it; pSLC also increases endurance, and you want that if the drive is nearly full.
How can pSLC increase endurance?

a lot of endurance concerns are probably a bit overstated
Yes, agreed. Those who are overly worried about endurance AND actually do demanding stuff with their SSDs, such as lots of small file writing/updating, AND are too cheap to buy a higher tier or enterprise SSD, should simply leave a couple hundred gigabytes free.
 
I would think the reason is obvious: essentially, the longer the transfer goes on, the less likely a real use case will encounter it. The drives with the smallest SLC cache can be exhausted in some real-world cases, but drives with the largest SLC cache will probably never hit the scenario where the pSLC is exhausted with a huge backlog of data having to be moved out of it.
Precisely right.
 
Yeah, comparing enterprise QLC to consumer TLC. So very relevant. Care to compare prices as well?
You actually think the price difference comes from the NAND? :laugh:
The NAND is the same. There might be some binning, but it's the same NAND.
The price difference mainly comes from the controller/firmware/support; you're paying for the R&D.
It would actually make more sense from a supply chain/cost perspective to have just 1 "type" of NAND.
 
You actually think the price difference comes from the NAND? :laugh:
The NAND is the same. There might be some binning, but it's the same NAND.
The price difference mainly comes from the controller/firmware/support; you're paying for the R&D.
It would actually make more sense from a supply chain/cost perspective to have just 1 "type" of NAND.
Where did I say the price comes from the NAND? I just said it's an apples-to-oranges comparison, not least because enterprise drives are engineered for endurance.
 
You actually think the price difference comes from the NAND? :laugh:
The NAND is the same. There might be some binning, but it's the same NAND.
The price difference mainly comes from the controller/firmware/support; you're paying for the R&D.
It would actually make more sense from a supply chain/cost perspective to have just 1 "type" of NAND.
A 30 TB enterprise SSD costs twice as much as the 15 TB version of the same model. 60 TB is twice as much again. Same controller, firmware, support, R&D, probably same PCB.

I'm aware I'm making an enterprise-to-enterprise comparison instead of enterprise-to-consumer but still. There must be a significant price difference due to the NAND.
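A quick sanity check of that argument, treating the price as a fixed non-NAND part (controller, firmware, support, R&D) plus a per-TB NAND part; the dollar figures below are placeholders chosen only to mirror the "double the capacity, double the price" observation:

```python
# Toy cost model: price = fixed (controller/firmware/R&D/support) + per_tb * capacity.
# Placeholder prices that follow the observed "2x capacity = 2x price" scaling.
price_15tb, price_30tb = 1500.0, 3000.0  # hypothetical USD figures

# Solve the two equations:
#   fixed + 15 * per_tb = price_15tb
#   fixed + 30 * per_tb = price_30tb
per_tb = (price_30tb - price_15tb) / (30 - 15)
fixed = price_15tb - 15 * per_tb
print(f"NAND per TB: ${per_tb:.0f}, fixed non-NAND share: ${fixed:.0f}")
# -> per_tb = 100, fixed = 0: if the scaling really is linear, the NAND dominates.
```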
 
Um, no, it's not. TLC and QLC are NOT the same. If you really think that, you need to go do some reading...
I think he meant the QLC NAND that goes into enterprise drives is the same as the one that goes into consumer drives, therefore it is ok to compare enterprise and consumer drives. We know it isn't, but I believe that's what he meant.
 
I think he meant the QLC NAND that goes into enterprise drives is the same as the one that goes into consumer drives, therefore it is ok to compare enterprise and consumer drives. We know it isn't, but I believe that's what he meant.
Oh, I think I missed that context. However, THAT is also very incorrect.
 
To prove my point I'll use TechPowerUp's own database. ;)

shame there's no info here for the P5316
One is 4 chips / 1Tbit, the other is 6 chips / 1Tbit. That's the most common trick of the enterprise drives. Endurance is just as crappy as consumer drives, but there's 50% more chips to spread the wear.
 
One is 4 chips / 1Tbit, the other is 6 chips / 1Tbit. That's the most common trick of the enterprise drives. Endurance is just as crappy as consumer drives, but there's 50% more chips to spread the wear.
No, there's something else @Scrizz is pointing the finger at: the N38A die can hold 1 Tb in QLC mode (consumer SSD) or 3/4 Tb in TLC mode (enterprise SSD). This dual use is a rare exception. Making a QLC die work with fewer bits per cell is certainly possible but not trivial (the usual 16 KiB page size becomes ... what? 12 KiB?). Maybe the N38A was optimised for both QLC and TLC.

Enterprise drives also employ eMLC, eTLC, eQLC. This may mean different things to different manufacturers but, as Intel explained in the MLC era, it's made up of three components: binned NAND, more overprovisioned space, and slower writing. Slower writing is more accurate and can use lower voltages on the storage cells when writing and erasing. The voltages for erasing are higher than those for writing (that's probably how it has to be), so I assume that erasing contributes most to NAND wear.

So that "some" binning is actually not something to overlook, and may increase the (market) value of a NAND die considerably.
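For clarity, the capacity arithmetic behind that dual-mode point (the die name and sizes are taken from the post above; the rest is just the standard bits-per-cell relationship):

```python
# Same physical cells, different bits per cell: a 1 Tbit QLC die run in TLC mode.
qlc_capacity_tbit = 1.0
cells_t = qlc_capacity_tbit / 4     # QLC stores 4 bits/cell -> 0.25 Tcells
tlc_capacity_tbit = cells_t * 3     # same cells at 3 bits/cell
print(f"{tlc_capacity_tbit} Tbit in TLC mode")  # -> 0.75 Tbit, i.e. 3/4 Tb
```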
 