
Are there wear problems from partitioning an SSD?

I could be mistaken in my assumption, but I don't think partitions matter to the wear leveling algorithms. At the level wear leveling works at, it's all just sectors and bytes.

You might well be right, but I'll not be risking it; for me the whole 256GB SSD is for the OS and the rest resides on a hard drive.
 
You might well be right, but I'll not be risking it; for me the whole 256GB SSD is for the OS and the rest resides on a hard drive.
That's what I was doing before, but I couldn't get a smaller drive... at least not at a physical store nearby. So I partitioned off the excess capacity to use for something else, without risking whatever I store there being lost if I have to reinstall Windows.
 
What if I make a small partition on a solid-state drive and then write to it a lot? Will I cause unbalanced wear, or can the drive compensate?
That's a good question. I don't know the specifics of any particular model, but I'd hazard that logically it should make no difference.

SSDs use all of the free space to ensure that data gets written to different blocks as much as possible. There's a mapping layer between the logical structure you see in Windows and the physical NAND, so it shouldn't matter how you've partitioned it. The one time it really will make a difference is when the SSD starts to become full, e.g. 90% full: only relatively few free blocks are left, those end up getting hammered, and then the drive will wear out faster, or at least that part will.
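
To picture what that mapping layer does, here's a toy sketch (hypothetical structures, not any real controller's firmware) of how a flash translation layer can keep wear spread out even when the host keeps rewriting what it thinks is the same spot:

```python
# Toy flash translation layer (FTL) with naive wear leveling -- illustration only.
class ToyFTL:
    def __init__(self, physical_blocks):
        self.mapping = {}                        # logical block -> physical block
        self.erase_counts = [0] * physical_blocks
        self.free = set(range(physical_blocks))  # pre-erased blocks ready to program

    def write(self, logical_block, data):
        # Pick the least-worn free physical block, regardless of which
        # partition the logical block belongs to.
        target = min(self.free, key=lambda b: self.erase_counts[b])
        self.free.remove(target)
        old = self.mapping.get(logical_block)
        if old is not None:
            # The old copy is now stale: erase it and return it to the pool.
            self.erase_counts[old] += 1
            self.free.add(old)
        self.mapping[logical_block] = target
        # (actually programming 'data' into the NAND is omitted)

ftl = ToyFTL(physical_blocks=1024)
for _ in range(10_000):
    ftl.write(logical_block=0, data=b"x")   # hammer one logical block
print(max(ftl.erase_counts) - min(ftl.erase_counts))   # wear spread stays tiny
```

Even though the host rewrites logical block 0 ten thousand times, the erase counts end up nearly identical across all physical blocks.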
 
You might well be right, but I'll not be risking it; for me the whole 256GB SSD is for the OS and the rest resides on a hard drive.

It does not matter; wear leveling is done at the controller level.

That's a good question. I don't know the specifics of any particular model, but I'd hazard that logically it should make no difference.

SSDs use all of the free space to ensure that data gets written to different blocks as much as possible. There's a mapping layer between the logical structure you see in Windows and the physical NAND, so it shouldn't matter how you've partitioned it. The one time it really will make a difference is when the SSD starts to become full, e.g. 90% full: only relatively few free blocks are left, those end up getting hammered, and then the drive will wear out faster, or at least that part will.

Indeed, it makes no difference, and because it works the way you described, SSDs include spare area that is not exposed to the host to serve as overprovisioning. On older SSD models, up to an entire NAND chip was assigned for this specific purpose; nowadays, with the advent of 3D layered NAND, it can be a portion of the die that is marked off as spare area and never written to unless the controller detects that a given cell within the structure has croaked - it then reads the data contained within, moves it to a block in the spare area and remaps it, marking that sector as unusable :)
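
A rough sketch of that retirement process, with made-up block numbers just to show the bookkeeping (not any particular controller's firmware):

```python
# Hypothetical bad-block retirement into the hidden spare area.
SPARE_AREA = set(range(1000, 1040))   # blocks the host never sees (overprovisioning)
bad_blocks = set()
mapping = {42: 7}                     # logical block 42 currently lives in physical block 7
nand = {7: b"user data", **{b: b"" for b in SPARE_AREA}}

def retire_block(physical_block):
    """Move data out of a failing block into the spare pool and remap it."""
    replacement = SPARE_AREA.pop()              # take a reserved, never-used block
    nand[replacement] = nand[physical_block]    # read the old contents, program them to the spare block
    for logical, phys in mapping.items():
        if phys == physical_block:
            mapping[logical] = replacement      # point the logical address at the new home
    bad_blocks.add(physical_block)              # never touch the worn-out block again

retire_block(7)
print(mapping)   # logical block 42 now maps into the former spare area
```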

See the old Intel drive I have; Anandtech has a review of the 300 GB model:


This review shows that the device marketed as 300 GB has 320 GB worth of NAND actually installed, for example. It's also good to keep in mind that Windows uses the wrong nomenclature, GB (power-of-10), to describe what are actually GiB (power-of-2), adding even more confusion to the mix. A "1 TB" partition (with "1,000,000 MB") actually holds ~932 GiB, and Windows labels it GB anyway.
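
The decimal-vs-binary mix-up is just arithmetic:

```python
tb_in_bytes = 1_000_000_000_000      # what "1 TB" on the box means (10^12 bytes)
gib = tb_in_bytes / 2**30            # what Windows actually reports, in GiB
print(f"{gib:.2f} GiB")              # ~931.32, which Windows then labels "GB"
```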
 
@Dr. Dro Yes, that overprovisioning helps a lot. I believe some of the cheapest SSDs from no-name manufacturers don't even have it.
 
@Dr. Dro Yes, that overprovisioning helps a lot. I believe some of the cheapest SSDs from no-name manufacturers don't even have it.

It isn't as necessary with modern layered NAND; even a single-die QLC drive can have spare area nowadays, like I mentioned earlier. The SN350 does ;)

Sure, these are far less reliable than the scheme Intel used with the 320 series back then (backup capacitors for power-failure protection, generous overprovisioning with an additional MLC die, and all of the bells and whistles you'd expect), but they still exceed the needs of a basic or intermediate PC user. I'm just not entirely comfortable with the concept of PLC (5 bits per cell), though.
 
I could be mistaken in my assumption, but I don't think partitions matter to the wear leveling algorithms. At the level wear leveling works at, it's all just sectors and bytes.
That was my thinking as well - don't wear levelling algorithms make block and sector assignments on NAND essentially arbitrary, meaning that any piece of data can be located physically anywhere on the die regardless of partitioning? If so, partitioning wouldn't matter except on a file management level and the minuscule write amplification caused by writes to different partitions not being combined into the same write.
 
The controller does all the wear leveling magic. In short, what you think is always the same sector often isn't.
Exactly, and that's one of the most basic things to understand about SSDs. I'd even say that it almost never is the same sector.

You can't just rewrite data in flash memory - well, you can, but the block needs to be erased first, and erasing takes about a millisecond, so you can't get any kind of performance that way. Instead, the controller copies the old contents plus the newly written contents to a new block. That new block was erased at an earlier time, when the drive was idle and had enough time for its housekeeping jobs, and then put in the spare area.
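
A sketch of that read-modify-write, with hypothetical page/block sizes (real geometry varies by drive):

```python
PAGES_PER_BLOCK = 64   # hypothetical; real blocks hold hundreds of pages

def overwrite_page(nand, erased_pool, mapping, logical_block, page_index, new_page):
    """'Overwrite' one page by copying the whole block into a pre-erased block."""
    old_block = mapping[logical_block]
    new_block = erased_pool.pop()            # erased earlier, during idle housekeeping
    pages = list(nand[old_block])            # read the old contents
    pages[page_index] = new_page             # merge in the freshly written page
    nand[new_block] = pages                  # program the new block
    mapping[logical_block] = new_block       # redirect the logical address
    return old_block                         # stale block: erase it later, off the hot path

# Minimal usage:
nand = {0: [b"old"] * PAGES_PER_BLOCK, 1: [b""] * PAGES_PER_BLOCK}
stale = overwrite_page(nand, erased_pool=[1], mapping={5: 0},
                       logical_block=5, page_index=3, new_page=b"new")
print(stale)   # block 0 will be erased in the background
```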

Even SD cards use wear leveling, although cheaper ones apparently don't - people who use them as Raspberry Pi system drives care more than others about that:
 
Windows 7 was aware of SSDs, but at the time it was developed and released, SATA SSDs on AHCI controllers were all that existed, so it was basically TRIM-capable, aware that a drive was of a solid-state type, and that was about it. It was not optimized for, nor natively compatible with, modern PCI Express/NVMe SSDs: getting it to work with an NVMe drive requires custom boot drivers, and there's no logic to optimize access for the higher queue depths NVMe affords over AHCI. Windows 8, on the other hand, was already designed with this specification in mind, and 8.1 will boot vanilla on one.
TRIM needs to be done more than once to be really effective.
The first time is when a file is deleted from the file system and the OS tells the SSD that those blocks are now free. Windows 7 does that, but the SSD controller sometimes ignores TRIM commands because it has more important work to do.
So Windows occasionally sends TRIM commands for all free blocks in the file system - that's called a "retrim", probably done alongside other disk maintenance jobs like defrag. Windows 7 doesn't do that, but 8 does. However, it can be done manually even on XP, using tools like Intel's Toolbox.
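
A toy model of the difference between TRIM-on-delete and a scheduled retrim (bookkeeping only, not an actual driver):

```python
fs_free_lbas = set()    # LBAs the file system considers free
ssd_trimmed = set()     # LBAs the SSD knows it may garbage-collect

def delete_file(file_lbas, controller_busy=False):
    fs_free_lbas.update(file_lbas)
    if not controller_busy:
        ssd_trimmed.update(file_lbas)   # normal TRIM on delete
    # if the controller was busy, the hint is simply dropped

def retrim():
    # Scheduled maintenance: re-send TRIM for everything the file system
    # considers free, catching any hints that were dropped earlier.
    ssd_trimmed.update(fs_free_lbas)

delete_file({100, 101, 102}, controller_busy=True)   # TRIM hint lost
retrim()                                             # picked up later anyway
print(ssd_trimmed)                                   # {100, 101, 102}
```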

About the queue depth - can the OS affect it at all, or does it depend entirely on applications? I can't seem to find any info that newer Windows somehow handle these queues better. I'd just expect Windows utilities like Explorer to improve over time in this regard.
 
You might well be right, but I'll not be risking it; for me the whole 256GB SSD is for the OS and the rest resides on a hard drive.
Hi,
All my os ssd's are this size
I do have some personal files like music/ images/ programs on them to so not all personal files are on different storage drives but back ups are.
None have out and out died except one linux mint killed long ago by never running trim on it seemed to be a crucial firmware bug clash with mint 17
Replacement mx100 256gb still works to this day.

I don't partition ssd's though beside a single system reserved in the front otherwise the rest spans except a little unallocated space at the end for the firmware to use for over provisioning
 
TRIM needs to be done more than once to be really effective.
The first time is when a file is deleted from the file system and the OS tells the SSD that those blocks are now free. Windows 7 does that, but the SSD controller sometimes ignores TRIM commands because it has more important work to do.
So Windows occasionally sends TRIM commands for all free blocks in the file system - that's called a "retrim", probably done alongside other disk maintenance jobs like defrag. Windows 7 doesn't do that, but 8 does. However, it can be done manually even on XP, using tools like Intel's Toolbox.

About the queue depth - can the OS affect it at all, or does it depend entirely on applications? I can't seem to find any info that newer Windows somehow handle these queues better. I'd just expect Windows utilities like Explorer to improve over time in this regard.

Disk I/O is handled by kernel-mode code, so I would very much consider it something the OS can influence. And yes, manual TRIM was possible on XP and Vista; they just weren't designed for automatic SSD maintenance at all, and those OSes aren't aware of the difference between a mechanical and a solid-state drive either. That awareness was first implemented in Windows 7.

That was my thinking as well - don't wear levelling algorithms make block and sector assignments on NAND essentially arbitrary, meaning that any piece of data can be located physically anywhere on the die regardless of partitioning? If so, partitioning wouldn't matter except on a file management level and the minuscule write amplification caused by writes to different partitions not being combined into the same write.

Not only that, but due to the real-time encryption used by most SSD controllers, the data actually programmed to the NAND bears no resemblance to the original data; it is typically AES-encrypted to prevent unauthorized physical data retrieval attacks.
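
A quick illustration of plaintext vs. what ends up on the NAND. This uses AES-CTR via the third-party 'cryptography' package purely for demonstration; real drives typically do AES-XTS in controller hardware, so treat this as a stand-in, not what any firmware actually runs:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, nonce = os.urandom(32), os.urandom(16)
plaintext = b"contents of my_tax_return.pdf ..."

encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
stored_on_nand = encryptor.update(plaintext) + encryptor.finalize()

print(plaintext[:16])        # readable
print(stored_on_nand[:16])   # indistinguishable from random bytes without the key
```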
 
I am still trying to understand why NAND wears out; even the good stuff seems only good for maybe 600 writes.
 
I am still trying to understand why NAND wears out; even the good stuff is only good for maybe 600 writes.

Physics. The cells are programmed by forcing a change in their physical state with an electrical charge, and the constant changes in that state cause the material to lose its properties over time until it stops functioning correctly - it's similar in concept to electromigration in CPUs. The more states a given cell has to accommodate, the more sensitive it becomes, and this is why newer lithography nodes with ever smaller cells and multi-bit NAND all come at a cost in write endurance. Single-bit-per-cell NAND (SLC) only has to distinguish two voltage levels (erased and programmed), 2-bit MLC has to distinguish four, and TLC and QLC have to distinguish eight and sixteen respectively. This becomes more evident with newer drives, and has been fought off with smarter controllers and advanced wear leveling algorithms to maximize the useful life of the hardware. Reading the current state of the NAND does not cause wear.

The reason we don't all use 50 nm SLC like the X25-E is cost. To reach the same capacity, many more chips are required, and the older the lithography node, the less space-efficient it is, which means larger cells and less data density per device. Even today, with all of the advances in solid-state flash memory, you'll find that MLC designs such as the Samsung Pro series are still significantly more expensive, and this isn't because they're Samsung drives.
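
Just counting the levels shows why each extra bit hurts; nothing here is drive-specific:

```python
# Each extra bit per cell doubles the number of voltage levels the cell must
# hold apart, so the margin between adjacent levels keeps shrinking.
for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4), ("PLC", 5)]:
    levels = 2 ** bits
    margin = 100 / (levels - 1)   # share of the voltage window between adjacent levels
    print(f"{name}: {levels:2d} levels, ~{margin:.0f}% of the window per step")
```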
 
Changes physical state? I thought it was quantum tunneling.

Electromigration makes sense, but a CPU can sustain billions of transitions a second and endure for decades.
 
Changes physical state? I thought it was quantum tunneling.

Yes, that is what quantum tunneling is used for. The Wikipedia article on this is actually quite concise; if you read it, you will understand the underlying mechanics rather well, IMHO.


Oh, just saw your edit. To address the electromigration comparison I made: sure, but it's about volatility. When powered off, CPUs and DRAM (transistorized logic) completely lose their current state and must re-initialize from zero. The ability to retain data without being powered (i.e. being non-volatile) is what requires these physical changes to the device's state.

Think about a Game Boy cartridge: they use battery-backed memory. If the juice runs dry, being a RAM device, it erases itself and the save data is lost. This approach was probably chosen because programmable flash chips suitable for save data were far too expensive at the time these cartridges were manufactured.
 
Not sure I believe that data retention is just 1 year
The Truth About SSD Data Retention (anandtech.com)

That article covers a drive whose NAND has already been spent. In real-world conditions, with a drive kept at ambient temperatures that hasn't been written past its usefulness, you can probably expect data retention in the range of decades without any corruption. In the words of the article itself:

As always, there is a technical explanation to the data retention scaling. The conductivity of a semiconductor scales with temperature, which is bad news for NAND because when it's unpowered the electrons are not supposed to move as that would change the charge of the cell. In other words, as the temperature increases, the electrons escape the floating gate faster, which ultimately changes the voltage state of the cell and renders the data unreadable (i.e. the drive no longer retains data).

For active use the temperature has the opposite effect. Because higher temperature makes the silicon more conductive, the flow of current is higher during program/erase operation and causes less stress on the tunnel oxide, improving the endurance of the cell because endurance is practically limited by tunnel oxide's ability to hold the electrons inside the floating gate.

All in all, there is absolutely zero reason to worry about SSD data retention in typical client environment. Remember that the figures presented here are for a drive that has already passed its endurance rating, so for new drives the data retention is considerably higher, typically over ten years for MLC NAND based SSDs. If you buy a drive today and stash it away, the drive itself will become totally obsolete quicker than it will lose its data. Besides, given the cost of SSDs, it's not cost efficient to use them for cold storage anyway, so if you're looking to archive data I would recommend going with hard drives for cost reasons alone.

Honestly, if anyone who's still an SSD skeptic after all of this read up on the drawbacks of spinning magnetic storage... they'd never touch an HDD again! :D :kookoo:
 
The way I understand it, an SSD works on a different scale compared to a regular hard drive. The only true test would be to find a cheap SSD, partition it to a certain size, abuse the smallest partition while the rest sits empty, and then test the drive to see what effect it has. That's all I could think of.
 
I could be mistaken in my assumption, but I don't think partitions matter to the wear leveling algorithms. At the level wear leveling works at, it's all just sectors and bytes.
Correct. Partitions just act as headers, and are a range of LBA addresses.
That is what I don't understand: if one limits paging to a small partition, how can the wear be even across the drive?
Here is a good short description of how it works at the controller level, which is what handles the LBA addressing of the data.
This happens below the partition level, so partitioning a drive will not cause extra wear unless you request a write operation that "runs over" the existing data in those LBA addresses.
If you want a more detailed example I could create a flowchart :clap:

https://www.elinfor.com/knowledge/overview-of-ssd-structure-and-basic-working-principle2-p-11204
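
To back up the "range of LBA addresses" point: a partition table entry is literally just a start LBA and a length. Here's a small sketch decoding one standard 16-byte MBR partition entry; the byte values are invented for illustration:

```python
import struct

# status, CHS start (3 bytes), type, CHS end (3 bytes), start LBA, sector count
entry = bytes([0x80, 0xFE, 0xFF, 0xFF, 0x07, 0xFE, 0xFF, 0xFF]) + struct.pack("<II", 2048, 204800)

status, p_type, start_lba, num_sectors = struct.unpack("<B3xB3xII", entry)
print(f"bootable={status == 0x80}, type=0x{p_type:02X}, "
      f"LBAs {start_lba}..{start_lba + num_sectors - 1}")
# The partition is nothing more than that LBA range; where those LBAs physically
# end up on the NAND is the controller's decision, not the partition table's.
```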
 
The way I understand it, an SSD works on a different scale compared to a regular hard drive. The only true test would be to find a cheap SSD, partition it to a certain size, abuse the smallest partition while the rest sits empty, and then test the drive to see what effect it has. That's all I could think of.

This is pointless, as I and numerous other people in the thread have mentioned: wear leveling algorithms exist and are independent of the partition table and the file system itself.
 
It isn't pointless, as it needs to be done; otherwise we're never going to get results showing what would actually happen.
 