
Enthusiast Transforms QLC SSD Into SLC With Drastic Endurance and Performance Increase

I'm from Brazil haha, it was meant to be a message saying the video was in English, since most of my videos are in Portuguese haha

As a Japanese person, I am used to the flag of England as a sign of the English language (audio, text), so when I saw that flag, I thought it was something that could only be legally performed in America, or maybe something that is only available in America.

Regardless, thanks for the time and effort put into the research, engineering and recording. It's really interesting.
 
As for the page file - well, unfortunately that's a necessary evil even today. There's no real reason for it to exist, but some software still expects it and either crashes or behaves weirdly if it's disabled.
A big page file doesn't mean there's a lot of writing to it. It's a reserved pool of virtual memory. A poorly optimised app may request 50 GB just in case. The OS will say yes (unless the disk is full, or some setting limits the file size), then allocate that amount of virtual memory (RAM plus PF) for it, so the app can use it if it needs to.
 
I dislike QLC as much as the next guy, and I have a weird obsession with Optane even though I have no need for its performance/endurance, but... QLC isn't the devil, and not all QLC is made equal. Most people do not need to worry about hibernation, the pagefile, or even write endurance as issues; you'll run into the limitations of the cheap SSD controllers that tend to be paired with cheap QLC drives first.

By all means, go ahead and buy this 1TB Optane drive: https://www.newegg.com/p/N82E16820167463 or this 1.5TB one: https://www.newegg.com/p/N82E16820167505
 
How exactly does 512GB QLC drive end up being 128GB SLC drive?
QLC holds 4 bits per cell, SLC holds 1 bit per cell. 2^4/2^1=8.
Cos ln(2^4)/ln(2^1) = 4

The same formula as above is valid for different modulation schemes too. 16PAM has twice the bitrate of 4PAM or QPSK at the same symbol rate.
256PAM has twice the bitrate of 16PAM at the same symbol rate. No magic is involved.
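The scaling described above follows from bits per symbol = log2(levels). A quick sanity check in Python (just the level counts from the post, nothing else assumed):

```python
import math

def bits_per_symbol(levels: int) -> float:
    # Each symbol of an M-level scheme carries log2(M) bits.
    return math.log2(levels)

# 16PAM carries twice as many bits per symbol as 4PAM (or QPSK),
# so at the same symbol rate the bitrate doubles.
print(bits_per_symbol(16) / bits_per_symbol(4))    # 2.0
print(bits_per_symbol(256) / bits_per_symbol(16))  # 2.0
```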
 
Hiberfil.sys and Pagefile.sys say hi. If I hibernate my computer every night (like a lot of people do) and don't have a lot of RAM, this will get eaten up very quickly.
You actually don't write much to pagefile.sys if you have enough RAM. You can trim it, let it expand on demand, and see how long it remains shrunk.
 
Cos ln(2^4)/ln(2^1) = 4

The same formula as above is valid for different modulation schemes too. 16PAM has twice the bitrate of 4PAM or QPSK at the same symbol rate.
256PAM has twice the bitrate of 16PAM at the same symbol rate. No magic is involved.
Simply because 512 / 4 = 128 GB. Meh. Stop doing unnecessary juggling, guys. :p
 
Back in my day ddr-400 was king

ddr-500 crazy!!

 
Back in my day ddr-400 was king

ddr-500 crazy!!

ONFI NV-DDR. Bit different than DDR-SDRAM :p

The nomenclature caught me off guard the first time I saw it too.


Any tool like this for Realtek NAND controllers? Would be handy for reconfiguring those scam NVMEs
 
Simply because 512 / 4 = 128 GB. Meh. Stop doing unnecessary juggling, guys. :p
It is not unnecessary juggling; it is a simple, short explanation for someone who confuses the space of states with the space of bits.
For the space of states, the factor really is 2^4 / 2^1 = 8, but in terms of bits that same factor is 8 = 2^(4-1): a QLC cell has 3 more bits per cell than SLC. That is not 8 times more bits per cell, but 8 times more possible states, which means the state space of a QLC cell is 8 times denser than that of an SLC cell. Three more bits on top of one gives (1+3)/1 = 4, i.e. four times more bits per cell.
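The states-versus-bits distinction above can be made concrete in a few lines of Python (a sketch using only the numbers already in the thread):

```python
def states(bits_per_cell: int) -> int:
    # An n-bit cell must distinguish 2**n voltage states.
    return 2 ** bits_per_cell

QLC, SLC = 4, 1
print(states(QLC) // states(SLC))  # 8 -> ratio of distinguishable states
print(QLC // SLC)                  # 4 -> ratio of stored bits (and of capacity)
# Hence a 512 GB QLC drive reconfigured to SLC stores 512 / 4 = 128 GB.
```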
 
ONFI NV-DDR. Bit different than DDR-SDRAM :p

The nomenclature caught me off guard the first time I saw it too.


Any tool like this for Realtek NAND controllers? Would be handy for reconfiguring those scam NVMEs
Depending on the combination of NAND die and controller, you can actually find one easily.

As a Japanese person, I am used to the flag of England as a sign of the English language (audio, text), so when I saw that flag, I thought it was something that could only be legally performed in America, or maybe something that is only available in America.

Regardless, thanks for the time and effort put into the research, engineering and recording. It's really interesting.
Thank you. And good to know about that hahah
 
If I am understanding these numbers correctly, one more reason I am not touching a QLC drive.
It is worth noting not all QLC has the same endurance. There is QLC with much more endurance than that.
 
Simply because 512 / 4 = 128 GB. Meh. Stop doing unnecessary juggling, guys. :p
Sorry about starting with that. Had a proper brainfart when I wrote that initial thing :)
 
Thanks for your hard work Gabriel, much appreciated! I look forward to your next endeavor ;)
 
It is not unnecessary juggling; it is a simple, short explanation for someone who confuses the space of states with the space of bits.
For the space of states, the factor really is 2^4 / 2^1 = 8, but in terms of bits that same factor is 8 = 2^(4-1): a QLC cell has 3 more bits per cell than SLC. That is not 8 times more bits per cell, but 8 times more possible states, which means the state space of a QLC cell is 8 times denser than that of an SLC cell. Three more bits on top of one gives (1+3)/1 = 4, i.e. four times more bits per cell.
This could become interesting if manufacturers start implementing WLC (weird-level-cell) principles, such as 11, 12 or 13 levels (not bits!) per cell. That calculates to 10 b / 3 cells, or 7 b / 2 cells, or 11 b / 3 cells, respectively.
Things could also get worse of course, with 24 levels per cell for example.
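The fractional-bit figures above come from packing k cells of L levels each: L^k distinct states can hold floor(k * log2(L)) bits. A quick check of the post's numbers (nothing beyond them assumed):

```python
import math

def bits_for(levels: int, cells: int) -> int:
    # k cells with L levels each give L**k states -> floor(k * log2(L)) bits.
    return math.floor(cells * math.log2(levels))

print(bits_for(11, 3))  # 10 -> 10 b / 3 cells
print(bits_for(12, 2))  # 7  -> 7 b / 2 cells
print(bits_for(13, 3))  # 11 -> 11 b / 3 cells
```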
 
@GabrielLP14

Is the hacked SSD now in regular use? If it is, it would be nice if you can provide an update after some time and tell us how it runs.

I'm thinking ... You have converted the SSD to run permanently in pseudo-SLC cache mode. Well, probably. Do you have any means to check if this is true?

And here's why this could be a problem. In the SSD's cache area, the housekeeping algorithms may not be optimised or may not even work. I'm talking about internal defragmentation and wear leveling at least. The cache is designed to be temporary, created and destroyed often, so it doesn't need those. If you're doing random writes, the data goes to the pSLC cache in one big sequential write operation because that's much faster, but the data remains fragmented. This doesn't hurt if the data is later slowly moved to its permanent locations in the QLC area.

If there's any merit to my theory then performance issues would become obvious after a couple TB have been written in some real-world use.

There could also be minor issues such as SSD not reporting its capacity to the OS, or SMART TBW attribute showing incorrect data. At least that's easy to check.

@Shrek I'm summoning you here because I know you're interested in these matters too.
 
@Shrek I'm summoning you here because I know you're interested in these matters too.

Much appreciated.

@GabrielLP14
I wish TPU would sell these; would make for a great boot drive that will have no problem dealing with paging.

For me this is much more interesting than overclocking.
 
@GabrielLP14

Is the hacked SSD now in regular use? If it is, it would be nice if you can provide an update after some time and tell us how it runs.

I'm thinking ... You have converted the SSD to run permanently in pseudo-SLC cache mode. Well, probably. Do you have any means to check if this is true?

And here's why this could be a problem. In the SSD's cache area, the housekeeping algorithms may not be optimised or may not even work. I'm talking about internal defragmentation and wear leveling at least. The cache is designed to be temporary, created and destroyed often, so it doesn't need those. If you're doing random writes, the data goes to the pSLC cache in one big sequential write operation because that's much faster, but the data remains fragmented. This doesn't hurt if the data is later slowly moved to its permanent locations in the QLC area.

If there's any merit to my theory then performance issues would become obvious after a couple TB have been written in some real-world use.

There could also be minor issues such as SSD not reporting its capacity to the OS, or SMART TBW attribute showing incorrect data. At least that's easy to check.

@Shrek I'm summoning you here because I know you're interested in these matters too.
Yes, I'm using that 120GB pSLC drive in an older laptop right now hehe.

Yes, literally by testing: I've reached steady state after writing over 500 TB to the drive continuously, and it never dropped below 480 MB/s.

Much appreciated.

@GabrielLP14
I wish TPU would sell these; would make for a great boot drive that will have no problem dealing with paging.

For me this is much more interesting than overclocking.
Yeah, I was actually thinking of making one like this but with bigger capacity and sending it to Wizzard so he can test it, make a custom package, and ship it with TPU's own logo and name, like "SSD TechPowerUp 512GB pSLC". That would be cool, right?
 
Curious; is this only for SATA or would it work on an NVMe drive as well?

That could have some serious performance benefits on some of the PCIe 4.0 drives that aren't really strong on IOPS.
 
The software, as far as I know, doesn't allow that. I've only managed to make it work in pSLC mode.
Shame, as MLC or TLC mode would probably be optimal outcome.

The main issue with the video's findings is that in its native mode the drive already had an SLC cache of almost half the new capacity. If the default cache were something like 5 gigs, then it would be worth more consideration.

However, if I remember right, this firmware tool can change the pSLC cache size? So e.g. it could be boosted to 120 gigs.

This could become interesting if manufacturers start implementing WLC (weird-level-cell) principles, such as 11, 12 or 13 levels (not bits!) per cell. That calculates to 10 b / 3 cells, or 7 b / 2 cells, or 11 b / 3 cells, respectively.
Things could also get worse of course, with 24 levels per cell for example.
They won't stop at QLC, or at least they won't stop trying.

We have an idea of how low they will let things go to get that profitable density; as an example, when Samsung released their planar TLC drives, they ended up needing emergency firmware fixes to keep them in an adequate operational state.
 
Great work, but isn't it enough to make one partition a quarter of the size and leave the rest unpartitioned? Then the drive will never be more than a quarter full and will run in SLC mode.
 
Great work, but isn't it enough to make one partition a quarter of the size and leave the rest unpartitioned? Then the drive will never be more than a quarter full and will run in SLC mode.
No, SSDs don't work like HDDs; the sectors are not hard-mapped. Data will get moved for wear-levelling purposes, and pSLC is also usually dynamically sized, so it shrinks as the drive fills up.

I would expect read-centric data to end up on QLC, and only data that's written frequently to stay on pSLC.
 
Yes, literally by testing: I've reached steady state after writing over 500 TB to the drive continuously, and it never dropped below 480 MB/s.
So that's about 4000 complete write/erase cycles. This is a very good sign that WL works. If it didn't, most writes (on a system drive at least) would concentrate on a small fraction of cells, which would have had a destructive effect by now.
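The ~4000-cycle figure follows directly from the numbers in the thread: 500 TB written to a 120 GB drive. A back-of-the-envelope check (decimal units and a write amplification of 1 are assumed):

```python
written_tb = 500   # total data written, from the thread
capacity_gb = 120  # pSLC drive capacity, from the thread

# Full-drive writes approximate program/erase cycles if wear levelling
# spreads writes evenly and write amplification is ~1.
cycles = written_tb * 1000 / capacity_gb
print(round(cycles))  # 4167, i.e. "about 4000" complete write/erase cycles
```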
 