Tuesday, April 30th 2024

Enthusiast Transforms QLC SSD Into SLC With Drastic Endurance and Performance Increase

A few months ago, we covered Gabriel Ferraz, Computer Engineer and TechPowerUp's SSD database maintainer, overclocking an off-the-shelf 2.5-inch SATA III NAND Flash SSD. Now he is back with another equally interesting project: modifying a Quad-Level Cell (QLC) SATA III SSD into a Single-Level Cell (SLC) SATA III SSD. Using the Crucial BX500 512 GB SSD, he aimed to transform the QLC drive into a higher-endurance, higher-performance SLC one. The drive of choice is powered by the Silicon Motion SM2259XT2, with a single-core ARC 32-bit CPU clocked at 550 MHz and two channels running at 800 MT/s (400 MHz), without a DRAM cache. This particular SSD uses four Micron NAND Flash dies with NY240 part numbers, two dies per channel. These NAND Flash dies were designed to operate at 1,600 MT/s (800 MHz) but are limited to only 525 MT/s in this drive in the real world.

The average endurance of these dies is 1,500 P/E cycles for FortisFlash NAND and about 900 P/E cycles for Mediagrade. Converting the same drive to pSLC bumps those numbers to 100,000 and 60,000, respectively. However, getting that to work is the tricky part. To achieve this, you have to download MPtools for the Silicon Motion SM2259XT2 controller from the USBdev.ru website and identify the exact die used in the SSD. Then, the software is carefully modified, and a case-sensitive configuration file is edited to enable SLC mode, which forces the dies to run as SLC NAND Flash. Finally, the firmware folder must be opened and files moved around as shown in the video.
Once the drive powers back on, its capacity decreases from 512 GB to 114-120 GB. However, its endurance jumps to 4,000 TBW, an increase of roughly 3,000%. Performance increases as well, which you can check out below and in the original video.
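As a rough, back-of-the-envelope sketch (our own arithmetic, not from the video; real usable capacity and rated TBW depend on over-provisioning and vendor derating), the reported figures line up roughly as follows:

```python
# Back-of-the-envelope check of the reported figures (approximate).
QLC_BITS_PER_CELL = 4
SLC_BITS_PER_CELL = 1

qlc_capacity_gb = 512
# Storing 1 bit instead of 4 in the same cells divides raw capacity by 4.
slc_capacity_gb = qlc_capacity_gb * SLC_BITS_PER_CELL / QLC_BITS_PER_CELL
print(slc_capacity_gb)  # 128.0 -> the drive reports 114-120 GB after overhead

# P/E cycles implied by the 4,000 TBW rating at ~120 GB usable capacity:
implied_pe_cycles = 4_000e12 / 120e9
print(round(implied_pe_cycles))  # 33333
```

The implied ~33,000 full write cycles sits between the 60,000 and 100,000 pSLC figures quoted above, which is consistent with a conservative vendor-style rating.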

76 Comments on Enthusiast Transforms QLC SSD Into SLC With Drastic Endurance and Performance Increase

#26
Wirko
TomorrowAs for page file - well unfortunately that's a necessary evil even today. There's no real reason for it to exist but unfortunately some software still expects it and either crashes or behaves weirdly if it's disabled.
A big page file doesn't mean there's a lot of writing to it. It's a reserved chunk of virtual memory. A poorly optimised app may require 50 GB just in case. The OS will say yes (unless the disk is full, or some setting limits the file size), then allocate that amount of virtual memory (RAM plus page file) for the app, so it can use it if it needs to.
Posted on Reply
#27
n-ster
I dislike QLC as much as the next guy, and I have a weird obsession with Optane even though I have no need for its performance/endurance, but... QLC isn't the devil, and not all QLC is made equal. Most people do not need to worry about hibernation, the pagefile, or even write endurance being issues; you'll run into the limitations of the cheap SSD controllers that tend to be paired with cheap QLC drives first.

By all means, go ahead and buy this 1TB Optane drive: www.newegg.com/p/N82E16820167463 or this 1.5TB one: www.newegg.com/p/N82E16820167505
Posted on Reply
#28
pk67
londisteHow exactly does 512GB QLC drive end up being 128GB SLC drive?
QLC holds 4 bits per cell, SLC holds 1 bit per cell. 2^4/2^1=8.
Cos ln(2^4)/ln(2^1) = 4

The same formula as above is valid for different modulation schemes too. 16PAM has twice the bitrate of 4PAM or QPSK at the same symbol rate.
256PAM has twice the bitrate of 16PAM at the same symbol rate. No magic is involved.
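The states-versus-bits relationship here boils down to a logarithm; a minimal sketch:

```python
from math import log2

# Bits per cell is log2 of the number of distinguishable charge levels.
def bits_per_cell(levels: int) -> float:
    return log2(levels)

print(bits_per_cell(2))   # SLC:  2 levels -> 1.0 bit
print(bits_per_cell(16))  # QLC: 16 levels -> 4.0 bits
print(16 / 2)             # 8.0: QLC needs 8x the states for 4x the bits
```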
Posted on Reply
#29
Veseleil
Count von SchwalbeHiberfil.sys and Pagefile.sys say hi.
Only after a fresh OS install. And then I say bye. :)
Posted on Reply
#30
ExcuseMeWtf
Count von SchwalbeHiberfil.sys and Pagefile.sys say hi. If I hibernate my computer every night (like a lot of people do) and don't have a lot of RAM this will get eaten up very quickly.
You actually don't write much to pagefile.sys if you have enough RAM. You can trim it, let it expand on demand, and see how long it remains shrunk.
Posted on Reply
#31
Count von Schwalbe
ExcuseMeWtfif
Being the key word.

I would buy TLC if I had the money to buy scads of RAM.

Unless I was going for big bulk storage, in which case it wouldn't be the system drive.
Posted on Reply
#32
Denver
pk67Cos ln(2^4)/ln(2^1) = 4

The same formula as above is valid for different modulation schemes too. 16PAM has twice the bitrate of 4PAM or QPSK at the same symbol rate.
256PAM has twice the bitrate of 16PAM at the same symbol rate. No magic is involved.
Simply because 512/4 = 128gb. Meh. Stop doing unnecessary juggling, guys. :p
Posted on Reply
#33
mechtech
Back in my day ddr-400 was king

ddr-500 crazy!!

Posted on Reply
#34
LabRat 891
mechtechBack in my day ddr-400 was king

ddr-500 crazy!!

ONFI NV-DDR. Bit different than DDR-SDRAM :p

The nomenclature caught me off guard the first time I saw it too.


Any tool like this for Realtek NAND controllers? Would be handy for reconfiguring those scam NVMEs
Posted on Reply
#35
Wirko
pk67Cos ln(2^4)/ln(2^1) = 4
The result of "Cos" is never 4. :p
Posted on Reply
#36
pk67
DenverSimply because 512/4 = 128gb. Meh. Stop doing unnecessary juggling, guys. :p
It is not unnecessary juggling; it is a simple, short explanation for someone who confuses the space of states with the space of bits.
For the space of states the factor really is 2^4/2^1 = 2^(4-1) = 8: a QLC cell has 3 more bits per cell than SLC, not 8 times more bits, but 8 times more possible states, which means the state space of a QLC cell is 8 times denser than that of an SLC cell. Three more bits on top of one gives (1+3)/1 = 4 times more bits per cell.
Posted on Reply
#37
GabrielLP14
SSD DB Maintainer
LabRat 891ONFI NV-DDR. Bit different than DDR-SDRAM :p

The nomenclature caught me off guard the first time I saw it too.


Any tool like this for Realtek NAND controllers? Would be handy for reconfiguring those scam NVMEs
Depending on the combination of NAND die and controller, you can actually find one quite easily.
NightOfChristAs a Japanese, I am used to the flag of England as a sign of English language (audio, text) so when I saw that flag, I thought it was something that can only be legally performed in America or maybe something that is only available in America.

Regardless, thanks for the time and effort put into the research, engineering and recording. It's really interesting.
Thank you. And good to know about that hahah
Posted on Reply
#38
Scrizz
john_If I am understanding these numbers correctly, one more reason I am not touching a QLC drive.
It is worth noting that not all QLC has the same endurance; some QLC is rated for far more than that.
Posted on Reply
#39
londiste
DenverSimply because 512/4 = 128gb. Meh. Stop doing unnecessary juggling, guys. :p
Sorry about starting with that. Had a proper brainfart when i wrote that initial thing :)
Posted on Reply
#40
maxfly
Thanks for your hard work Gabriel, much appreciated! I look forward to your next endeavor ;)
Posted on Reply
#41
Wirko
pk67It is not unnecessary juggling; it is a simple, short explanation for someone who confuses the space of states with the space of bits.
For the space of states the factor really is 2^4/2^1 = 2^(4-1) = 8: a QLC cell has 3 more bits per cell than SLC, not 8 times more bits, but 8 times more possible states, which means the state space of a QLC cell is 8 times denser than that of an SLC cell. Three more bits on top of one gives (1+3)/1 = 4 times more bits per cell.
This could become interesting if manufacturers start implementing WLC (weird-level-cell) principles, such as 11, 12 or 13 levels (not bits!) per cell. That calculates to 10 b / 3 cells, or 7 b / 2 cells, or 11 b / 3 cells, respectively.
Things could also get worse of course, with 24 levels per cell for example.
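A quick sketch of that calculation: grouping n cells of L levels gives L^n states, which can encode floor(n·log2(L)) bits.

```python
from math import floor, log2

# n cells with L levels each give L**n states,
# which encode floor(n * log2(L)) whole bits.
def bits_per_group(levels: int, cells: int) -> int:
    return floor(cells * log2(levels))

print(bits_per_group(11, 3))  # 10 bits per 3 cells
print(bits_per_group(12, 2))  # 7 bits per 2 cells
print(bits_per_group(13, 3))  # 11 bits per 3 cells
```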
Posted on Reply
#42
Wirko
@GabrielLP14

Is the hacked SSD now in regular use? If it is, it would be nice if you can provide an update after some time and tell us how it runs.

I'm thinking ... You have converted the SSD to run permanently in pseudo-SLC cache mode. Well, probably. Do you have any means to check if this is true?

And here's why this could be a problem. In the SSD's cache area, the housekeeping algorithms may not be optimised or may not even work. I'm talking about internal defragmentation and wear leveling at least. The cache is designed to be temporary, created and destroyed often, so it doesn't need those. If you're doing random writes, the data goes to the pSLC cache in one big sequential write operation because that's much faster, but the data remains fragmented. This doesn't hurt if the data is later slowly moved to its permanent locations in the QLC area.

If there's any merit to my theory then performance issues would become obvious after a couple TB have been written in some real-world use.

There could also be minor issues such as SSD not reporting its capacity to the OS, or SMART TBW attribute showing incorrect data. At least that's easy to check.

@Shrek I'm summoning you here because I know you're interested in these matters too.
Posted on Reply
#43
Shrek
Wirko@Shrek I'm summoning you here because I know you're interested in these matters too.
Much appreciated.

@GabrielLP14
I wish TPU would sell these; would make for a great boot drive that will have no problem dealing with paging.

For me this is much more interesting than over-clocking.
Posted on Reply
#44
GabrielLP14
SSD DB Maintainer
Wirko@GabrielLP14

Is the hacked SSD now in regular use? If it is, it would be nice if you can provide an update after some time and tell us how it runs.

I'm thinking ... You have converted the SSD to run permanently in pseudo-SLC cache mode. Well, probably. Do you have any means to check if this is true?

And here's why this could be a problem. In the SSD's cache area, the housekeeping algorithms may not be optimised or may not even work. I'm talking about internal defragmentation and wear leveling at least. The cache is designed to be temporary, created and destroyed often, so it doesn't need those. If you're doing random writes, the data goes to the pSLC cache in one big sequential write operation because that's much faster, but the data remains fragmented. This doesn't hurt if the data is later slowly moved to its permanent locations in the QLC area.

If there's any merit to my theory then performance issues would become obvious after a couple TB have been written in some real-world use.

There could also be minor issues such as SSD not reporting its capacity to the OS, or SMART TBW attribute showing incorrect data. At least that's easy to check.

@Shrek I'm summoning you here because I know you're interested in these matters too.
Yes, I'm using that 120GB pSLC drive in an older laptop right now hehe.

Yes, literally by testing: I've reached steady state writing over 500 TB to the drive continuously, and it never dropped below 480 MB/s.
ShrekMuch appreciated.

@GabrielLP14
I wish TPU would sell these; would make for a great boot drive that will have no problem dealing with paging.

For me this is much more interesting than over-clocking.
Yeah, I was actually thinking of making one like this but with a bigger capacity and sending it to Wizzard so he can test it, make a custom package, and ship it with TPU's own logo and name, like "SSD TechPowerUp 512GB pSLC". That would be cool, right?
Posted on Reply
#45
Count von Schwalbe
Curious; is this only for SATA or would it work on an NVMe drive as well?

That could have some serious performance benefits on some of the PCIe 4.0 drives that aren't really strong on IOPS.
Posted on Reply
#46
chrcoluk
GabrielLP14The software, as far as i know, doesn't allow that. I've only managed to make it work in pSLC Mode.
Shame, as MLC or TLC mode would probably be the optimal outcome.

The main issue with the video's findings is that the drive's native mode already had an SLC cache of almost half the new capacity. If the default cache were something like 5 gigs, it would be worth more consideration.

However, if I remember right, this firmware tool can change the pSLC cache size? So it could e.g. be boosted to 120 gigs.
WirkoThis could become interesting if manufacturers start implementing WLC (weird-level-cell) principles, such as 11, 12 or 13 levels (not bits!) per cell. That calculates to 10 b / 3 cells, or 7 b / 2 cells, or 11 b / 3 cells, respectively.
Things could also get worse of course, with 24 levels per cell for example.
They won't stop at QLC, or at least they won't stop trying.

We have an idea of how low they will let things go to get that profitable density: when Samsung released their planar TLC drives, for example, they ended up needing emergency firmware fixes to keep them in an adequate operational state.
Posted on Reply
#47
Shrek
Great work, but isn't it enough to make one partition a quarter the size and leave the rest unpartitioned? Then the drive will never be more than a quarter filled and will run in SLC mode.
Posted on Reply
#48
chrcoluk
ShrekGreat work, but isn't it enough to make one partition a quarter the size and leave the rest unpartitioned? Then the drive will never be more than a quarter filled and will run in SLC mode.
No, SSDs don't work like HDDs; the sectors are not hard mapped. Data will get moved for wear levelling purposes, and the pSLC cache is also usually dynamically sized, so it shrinks as the drive fills up.

I would expect read-centric data to end up on QLC, and only data that's written frequently to stay on pSLC.
Posted on Reply
#49
Wirko
GabrielLP14Yes, literally by testing, i've reach steady state writting over 500TB in the drive, continuously, never dropped below 480 MB/s.
So that's about 4000 complete write/erase cycles. This is a very good sign that WL works. If it didn't, most writes (on a system drive at least) would concentrate on a small fraction of cells, which would have had a destructive effect by now.
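The arithmetic, for anyone following along (assumes ~120 GB usable capacity and perfectly even wear leveling):

```python
# Total bytes written divided by usable capacity gives the number of
# full write/erase passes, assuming wear leveling spreads writes evenly.
written_tb = 500
usable_capacity_tb = 0.120  # ~120 GB in pSLC mode
full_cycles = written_tb / usable_capacity_tb
print(round(full_cycles))  # 4167 -> "about 4000" cycles
```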
Posted on Reply
#50
Shrek
chrcolukNo, SSDs dont work like HDDs, the sectors are not hard mapped. Data will get moved for wear levelling purposes, and pSLC is also usually dynamically sized so shrinks as the drive fills up.
By only partitioning one quarter it never fills up beyond a quarter.
Posted on Reply