
SSD Defragging: The safe way

@Mussels
Perfectly fine with learning, but that will also be the case with some QLC users running (the wrong) defrag and wearing their drives down.

I'm all for speed/perf, but the specifics of where it starts to make a difference more or less show it's not going to be for the masses. And until we can test this at large scale, with 2,500+ identical drives holding identical data, it's hard to say we should do it (versus all manufacturers saying don't), as I doubt any defrag tool will ever have full insight into every single drive controller (and how it works underneath).
 
I’d be curious to see your results with NVMe. For science!
I can't run the program; it's a Linux thing I have no idea how to compile.

@Mussels
Perfectly fine with learning, but that will also be the case with some QLC users running (the wrong) defrag and wearing their drives down.

I'm all for speed/perf, but the specifics of where it starts to make a difference more or less show it's not going to be for the masses. And until we can test this at large scale, with 2,500+ identical drives holding identical data, it's hard to say we should do it (versus all manufacturers saying don't), as I doubt any defrag tool will ever have full insight into every single drive controller (and how it works underneath).
Those specifics are happily listed above.
There is a threshold where performance halves; why not use that as your metric?
As for the rest, isn't that covered by the explanation of 'Scan the drive, defrag anything with lots of fragments if those files' performance matters to you'?

QLC has bad write speeds, but the lifespans aren't a problem.
[screenshot: QVO endurance rating]


While the 990 PRO...
[screenshot: Samsung 990 PRO endurance rating]


Yeah, even last-gen QVOs have more life than the high-end drives.
 
Isn't that the SATA QVO 8TB? If so, scaled to 2TB like that 990 Pro, its endurance would be only 720 TBW, vs the 1,200 TBW of the 990 Pro.
 
Isn't that the SATA QVO 8TB? If so, scaled to 2TB like that 990 Pro, its endurance would be only 720 TBW, vs the 1,200 TBW of the 990 Pro.
"Only". That is still a lot of writes.
 
90 writes does seem like a lot to me.

Full writes. Meaning you fill it to, say, 90%, wipe everything, and write it all again, which is not average use.
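(For reference, that figure is just rated endurance divided by capacity:)

Code:
# Full-drive writes = rated TBW / capacity (TB)
720 / 8    # = 90 full writes of the 8 TB drive at the scaled 720 TBW figure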
 
Fair enough, but I seem to recall a scam some time back where DRAM was overvolted and passed off as fake flash, with something like 10 writes possible before destruction; so 90 writes doesn't seem so big to me. But I get your point.
 
Especially given the write amplification factor, it will probably be less, more like 80-85. And the fuller the drive is, the more likely the same cells take the brunt of repeated writes, so realistically the drive won't wear out all at once, just cell by cell.

So yeah, a full wipe-and-write isn't the average case; in a way it's the best case.
 
Isn't that the SATA QVO 8TB? If so, scaled to 2TB like that 990 Pro, its endurance would be only 720 TBW, vs the 1,200 TBW of the 990 Pro.
Largest in its series; otherwise you'd end up playing games with fudged numbers. You can't fairly compare by equal capacity if that's not how they're made.
Even if you look at it your way, 720 TBW is still large enough not to care at all about defragging a few files.



I'm finding it hard to translate the technical side of this next part into something easier to understand, and explaining it poorly won't help.
The simplest way I can explain it is: the benchmarks proved writes/reads are a ton slower on fragmented files. That's the disk working harder, for longer.

Regardless of any maths used or any technical differences between drives and their designs, the harder they work, the worse for the drive in the long run.
That doesn't change my original statement: don't defrag the entire drive, but do check to see if there are individual files worth defragging.
 
Mussels, can you explain how your own testing validates your post? Right now I think this is either flawed testing or false results. I haven't done any tests myself, but what I see is that you've defragmented a drive and eliminated 77 fragmented files, reduced the fragmented file size by about 1.5 GB, and reduced its random read speed by 6.5 MB/s.
"Cleaning" the drive reduced the fragmentation by 34%, yet fragmented 4 more files and added 23 more fragments than before the cleaning. Can you explain how these numbers were generated and why they are so wacky?
 
Mussels, can you explain how your own testing validates your post? Right now I think this is either flawed testing or false results. I haven't done any tests myself, but what I see is that you've defragmented a drive and eliminated 77 fragmented files, reduced the fragmented file size by about 1.5 GB, and reduced its random read speed by 6.5 MB/s.
"Cleaning" the drive reduced the fragmentation by 34%, yet fragmented 4 more files and added 23 more fragments than before the cleaning. Can you explain how these numbers were generated and why they are so wacky?
I haven't provided any benchmarks of my own. The values in the screenshots from Defraggler are not something useful - they don't automatically update, and I can't exactly re-fragment a file for repeat testing, can I? More fragments can appear simply because I'm using that drive; things like opening a browser or taking screenshots write to the drive.
They were only used to show that SSD files DO get fragmented, as people have taken one fact and used their imaginations to translate it into something else.
This was never about performance for me, but rather about pointing out the issues stemming from a common misconception. I only added benchmarks from other sources when people challenged what I was saying.
  • Defragmenting SSDs is bad for their lifespan (a warped version of 'SSDs have limited writes, and old defragmenting tools cause excessive writes')
    got translated into
  • SSDs do not get fragmented
I've stated a few times I can't use those same benchmarks with artificially fragmented files, as they're Linux-only. What I saw was game load times shrinking, and people would still find ways to argue about however I chose to measure that.
What I've tried to help make more common knowledge is three things:

  1. SSD files do get fragmented
  2. That fragmentation comes with a significant performance loss
  3. It's possible to defragment single files (see the Contig sketch below)

I'll try and find a Windows variant to test that out for you. It's frustrating: I'm trying to cover those points above to let people make their own decisions about which files are worth defragging, and I'm getting back something along the lines of "But I want ONE answer, a yes/no!"
My response there is: "Yes, it's a good idea to make your own decision. Don't use over-simplified summaries from the Windows 7 era, written about what were basically memory cards from digital cameras."
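For point 3, Sysinternals Contig is one tool that does exactly this from the command line. A minimal sketch (the path is just an example):

Code:
# Analyze how many fragments a single file is in; run from an elevated prompt
contig.exe -a "C:\Games\bigfile.pak"
# Defragment just that one file, leaving the rest of the drive untouched
contig.exe "C:\Games\bigfile.pak"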

Using this program:
PassMark Fragger - File Fragmentation Utility
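(If you can't run Fragger, a crude DIY equivalent is to pepper the free space with small spacer files, delete every other one, then write the target file into the holes. A sketch, assuming a scratch drive D:; it only fragments reliably when the volume has little contiguous free space left:)

Code:
# Fill part of the volume with 1 MB spacer files
$dir = 'D:\fragtest'
New-Item -ItemType Directory -Path $dir -Force | Out-Null
$buf = New-Object byte[] (1MB)
1..2000 | ForEach-Object { [IO.File]::WriteAllBytes("$dir\spacer_$_.bin", $buf) }
# Free every other 1 MB extent, leaving only scattered gaps
1..2000 | Where-Object { $_ % 2 } | ForEach-Object { Remove-Item "$dir\spacer_$_.bin" }
# A large file written now tends to land in those gaps, fragmented
[IO.File]::WriteAllBytes("$dir\victim.bin", (New-Object byte[] (512MB)))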


I've done the opposite of what I'm trying to do here: something that could harm my drives.
Intel 6000P NVMe; it's on USB 3.1, which limits the maximum speeds. Deal with it.

Yes, it's running at 10 Gb/s and can normally achieve 1 GB/s reads. I shouldn't have to prove these things, but this thread overall has been very upset by learning new things.
[screenshot: drive benchmark over USB]

No, I'm not running extended benchmarks on the drive. 550 MB/s writes, 1 GB/s reads.
We get just under 1 GB/s below; everyone has their performance answers.
[screenshot: benchmark results]



19% fragmented file. Ignore the blue; that's from the first attempt, before I changed to 'scattered'.
The best I can likely do is HWiNFO with reset stats, to show the maximum read from the source drive and write to the destination.
[screenshot: Fragger drive map, 19% fragmented file]


Unfragmented file copy - HWiNFO both during and after the test.
[screenshot: HWiNFO results for the unfragmented copy]



[screenshot: fragmented file copy results]

Yes, it's slower. Is it massively, infinitely slower? ... Yeah, that's pretty bad.
I should have run a timed test instead, but again, my point has never been about exact percentages of things being faster or slower - just the information that it can be done.

The transfer graph in Windows plummeted and wasn't a straight line, but the screenshot got interrupted by a private chat message and I CBF redoing this all over again (I have to rename files, TRIM drives and all sorts of shit to prevent the files being cached and moving from RAM at 3.5 GB/s), and if this level of detail isn't enough for anyone - go do your own testing.
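(For anyone repeating this: a timed run is one line of PowerShell and sidesteps arguing over transfer graphs. The paths are examples only:)

Code:
# Wall-clock time to copy the fragmented file; repeat after defragging it
Measure-Command { Copy-Item 'E:\victim.bin' 'C:\temp\out.bin' }
# Between runs, delete the copy and clear the standby cache (e.g. RAMMap's
# "Empty Standby List", or a reboot) so the second pass isn't served from RAM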
 
@Mussels
You're experimenting with artificial file fragmentation, and at this point I'd make a suggestion: leave that fragmented file alone for a few days, just run the Windows disk maintenance utility every day. Microsoft really wants us to not know what it's doing but Windows is mostly able to take care of itself. Maybe, just maybe, it can detect a really bad case of fragmentation and do some defragmentation, similar to what you're doing manually (and what I was doing before I met my first SSD).
However, I'm aware (and sure you're aware too) that my suggestion can have an undesired effect too, as other file writes will be more fragmented than they would otherwise be.
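(If you'd rather trigger and watch that maintenance than wait for the schedule, the built-in Optimize-Volume cmdlet covers all of it; a quick sketch:)

Code:
# Report fragmentation only, without writing anything
Optimize-Volume -DriveLetter C -Analyze -Verbose
# Send a manual retrim, the normal monthly SSD maintenance
Optimize-Volume -DriveLetter C -ReTrim -Verbose
# Force a traditional defrag pass, even on an SSD
Optimize-Volume -DriveLetter C -Defrag -Verbose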

Intel 6000P NVMe; it's on USB 3.1, which limits the maximum speeds.
Does your M.2 enclosure support TRIM?
 
@Mussels
You're experimenting with artificial file fragmentation, and at this point I'd make a suggestion: leave that fragmented file alone for a few days, just run the Windows disk maintenance utility every day. Microsoft really wants us to not know what it's doing but Windows is mostly able to take care of itself. Maybe, just maybe, it can detect a really bad case of fragmentation and do some defragmentation, similar to what you're doing manually (and what I was doing before I met my first SSD).
However, I'm aware (and sure you're aware too) that my suggestion can have an undesired effect too, as other file writes will be more fragmented than they would otherwise be.


Does your M.2 enclosure support TRIM?
Windows already does defragment SSDs once they pass a certain threshold, but that information is now buried away after MS reorganised their blog site.
It was on one of the MS dev blogs for Windows Server, about how many fragments files can have before they'll be defragmented - this came up in a thread some time ago, roughly along the lines of "help, Windows is defragging my SSD".

Yes, it supports TRIM.
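(A quick way to sanity-check TRIM through a USB bridge, since not every enclosure passes it through; the drive letter is an example:)

Code:
# 0 = Windows is issuing TRIM (delete notifications) for NTFS volumes
fsutil behavior query DisableDeleteNotify
# Manually retrim the enclosure's volume; this should fail visibly
# if the bridge chip doesn't pass TRIM through
Optimize-Volume -DriveLetter X -ReTrim -Verbose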


I found some old articles with more specific Google searches on this:
The real and complete story - Does Windows defragment your SSD? - Scott Hanselman's Blog

He mentions one of the blogs, but the link goes to a Twitter account with nothing on the issue. Not so helpful.

He quotes the original answer I found on the blog some time ago:
Storage Optimizer will defrag an SSD once a month if volume snapshots are enabled. This is by design and necessary due to slow volsnap copy on write performance on fragmented SSD volumes. It’s also somewhat of a misconception that fragmentation is not a problem on SSDs. If an SSD gets too fragmented you can hit maximum file fragmentation (when the metadata can’t represent any more file fragments) which will result in errors when you try to write/extend a file. Furthermore, more file fragments means more metadata to process while reading/writing a file, which can lead to slower performance.

As far as Retrim is concerned, this command should run on the schedule specified in the dfrgui UI. Retrim is necessary because of the way TRIM is processed in the file systems. Due to the varying performance of hardware responding to TRIM, TRIM is processed asynchronously by the file system. When a file is deleted or space is otherwise freed, the file system queues the trim request to be processed. To limit the peak resource usage this queue may only grow to a maximum number of trim requests. If the queue is of max size, incoming TRIM requests may be dropped. This is okay because we will periodically come through and do a Retrim with Storage Optimizer. The Retrim is done at a granularity that should avoid hitting the maximum TRIM request queue size where TRIMs are dropped.
Windows already defrags SSDs once a month, if needed. All we're doing here is manually choosing files that matter to us, at a time of our choosing.
 
Spending 1 GB of the limited writes on the drive (my 970 Pro 2TB has 1,200 terabytes writable as its lifespan, known as TBW) to remove 7,000 fragments is worthwhile, since that's barely a drop of water in the ocean - especially if it's in programs or games you run regularly.

TBW ratings do NOT represent the lifespan of an SSD. They're just an additional warranty condition: your drive has a warranty of 5 years or 1,200 TB written, whichever comes first. TBW is pretty meaningless for consumer drives. Drives will often go through many revisions with different controllers, flash generations and architectures, while keeping exactly the same specs.

NAND flash is actually very durable and most drives will outlast their TBW rating. TLC is typically rated for 3,000 PEC (program/erase cycles), but can do as many as 5,000 PEC or more. A 2TB drive with 3,000 PEC flash is expected to survive at least 6,000 TB of NAND writes, but realistically could do 10,000 TB; host writes will be lower, but not by much under normal consumer workloads. In fact, drives continue to work even after reaching 0% "health". The Chia community sometimes sees drives lasting 10 times their specced TBW.
Even QLC is durable these days. Intel/Solidigm already have QLC rated for 1,500 PEC.

TBW is just an arbitrary number, set high enough that you can't reach it under normal consumer usage. SSDs die for many reasons (e.g. a bricked controller/firmware after power loss, faulty components like capacitors and resistors, etc.) and TBW ratings allow brands to deny warranty replacements for drives that went through a lot of "abuse".
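(The endurance estimate above is simple arithmetic. A sketch, with the write amplification factor as an assumption since it varies by workload:)

Code:
# NAND endurance ~= capacity x rated P/E cycles; host writes divide by WAF
$capacityTB = 2; $pec = 3000; $waf = 1.5   # WAF ~1.2-2 is typical for consumer use
$nandTB = $capacityTB * $pec               # 6,000 TB of NAND writes
$hostTB = [math]::Round($nandTB / $waf)    # ~4,000 TB of host writes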

I've had to console a lot of people who've killed their shitty budget SSDs, like WD Green SATAs, in far less than 6 months from purchase because they'd torrent anime until the drives filled up completely.

You're meant to take the TBW (which they often hide, on the worst drives) and average it over the warranty period.
[screenshot: WD Green spec sheet with TBW rating]

With these (far from the worst) you get 40 TB over 3 years, so about 13 TB a year. If you write more heavily than that, or keep the drives full most of the time, they'll die long before the warranty is up.
There have been far worse drives in previous years; I'll see if I can find an example.

(Keep in mind Samsung's best drives, like the 970 series, had 1,200 TBW on 1TB drives - 1,200 fills of the drive. That WD Green maths out to 333 fills, so it's roughly a quarter of the lifespan.)
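(Spelling that comparison out, assuming the screenshotted WD Green is the 120 GB model, since 40 TBW only maths out to 333 fills at that capacity:)

Code:
# Drive fills = rated TBW / capacity (TB)
1200 / 1     # 970-series 1 TB: 1,200 fills
40 / 0.12    # WD Green 120 GB (assumed): ~333 fills
40 / 3       # and the per-year budget inside the 3-year warranty: ~13 TB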

WD Green has low TBW ratings because it's a bottom-of-the-barrel product and WD needs a reason for you to pay more for better specs. Also, it's a model first announced in 2016, and these TBW ratings didn't look so outrageous at the time.
The product has seen multiple hardware revisions, but they didn't bother to update the spec sheet.
The WD Green is not a good product, but the most striking issue is the slow USB controller that comes with these drives, not their TBW rating.
 
I had an SSD just cease to function (not merely go into read-only mode); is this a thing of the past?
 
I had an SSD just cease to function (not merely go into read-only mode); is this a thing of the past?
No, it could happen to any drive, even today. You just had bad luck.
 
It was a small drive in a laptop with little RAM, so it was probably thrashed with paging.

My present SSD is backed up to a hard drive.
 
I found some old articles with more specific Google searches on this:
The real and complete story - Does Windows defragment your SSD? - Scott Hanselman's Blog

He mentions one of the blogs, but the link goes to a Twitter account with nothing on the issue. Not so helpful.
I once found this same article; unfortunately it's too old to still be relevant.

Yes, we suspect that Windows does some defrag. But we don't know. We don't have a clear and recent statement or explanation from MS. And we know less than nothing about details - what kind of defrag that is and what it's doing.

The days of Norton Utilities and its defragmenter, Speed Disk, are long gone. Remember, it would painstakingly stack all dark blue blocks in the map before all medium blue blocks, then came the light blue blocks, etc., with colours indicating the frequency of access of files. (Or was that the Windows 95 built-in defragmenter?) A couple days later, this precise order would of course collapse, so a new defrag was in order. Marginally useful but who cared, it was a marvel to watch!

I'm very sure defragmenters have advanced since the dark era of non-black PCs. What's really needed is to periodically put some order into the most heavily fragmented files, and into heavily fragmented free space too. Yes, you're doing exactly that. But how do you know Windows isn't doing something that's effectively the same, and doing it often enough?

I have a new PC with (much-hated) Windows 11 at work; the system drive is still mostly empty, so I may decide to run a fragmented-file experiment on it over the weekend. But running disk maintenance manually once a day may or may not trigger defragmentation, who knows.
 
After some research I believe I know the reason for the Windows defrag.

It seems to be a limitation of NTFS, where the filesystem itself has problems if a file has too many fragments to deal with. Volume shadow copies can significantly increase the number of fragments, so the defrag is done to preserve the integrity of the filesystem. If volume shadow copies are disabled (the default), SSDs won't be auto-defragged in Windows. I also believe ReFS doesn't have the issue.
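(You can gauge how fragmented a given file is without third-party tools; on recent Windows builds fsutil can dump a file's extent list. The path is an example:)

Code:
# List the on-disk extents (fragments) of a file; run from an elevated prompt
fsutil file queryExtents "C:\Games\bigfile.pak"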
 
@Mussels
@Wirko
@chrcoluk

Windows runs a scheduled task that defrags SSDs every month. It's a traditional defragmentation, just like on HDDs. As documented on Microsoft Docs:

[screenshot: Microsoft Docs excerpt]


You won't find much information on why Windows is doing that. Apparently one reason is to prevent the system from reaching the NTFS fragmentation limit, although the risk of that happening is low for the average user. I've also read somewhere about issues with volume snapshots and that I/Os to highly fragmented files might incur a performance loss.

You can check the defragmentation history of your drives with the following PowerShell command:

Code:
Get-EventLog -LogName Application -Source "microsoft-windows-defrag" | Sort-Object TimeGenerated | Format-List TimeGenerated, Message

Here are the results when I run on my PC:

[screenshot: defrag history event log output]


Windows defrags any SSD. That includes my main drive (partitions C and D), a drive in a secondary slot (G), and also a drive that was plugged in via USB (N). And the scheduled task runs even when system restore/volume snapshots are disabled (I only have it enabled for drive C).
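(Side note: Get-EventLog only exists in Windows PowerShell 5.1. On PowerShell 7, where it was removed, the equivalent query, assuming the same provider name, would be:)

Code:
Get-WinEvent -FilterHashtable @{ LogName = 'Application'; ProviderName = 'Microsoft-Windows-Defrag' } |
    Sort-Object TimeCreated | Format-List TimeCreated, Message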
 
Nice command to quickly check log contents.

Yes, you seem to have found the same information as me regarding the fragmentation limit in NTFS. I believe I found that info in the comments section of that article: the guy was pressured to check with his Microsoft contacts for the reason, and that's what he came back with. Sadly, Microsoft haven't officially gone into detail about it.

However, if yours is running on drives with no shadow copies enabled, that's something I have never observed before, and it's at odds with the publicly available information. I ran that command on my VM as well, which has System Restore disabled, and it has no defrags, just retrims. It seems there is more to it; perhaps the limit is still a concern even without volume shadow copies.

Do you use Macrium or any other backup software at all? These tools can create shadow copies without System Restore being enabled.

I am going to add a second SSD to my VM and see what happens when I enable it just for C:, to check whether having System Restore enabled on "any" drive triggers the defrag for "all" SSDs.
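(For anyone replicating this, it's easy to confirm whether a volume actually has shadow copies at the time the task runs:)

Code:
# List existing shadow copies per volume; run from an elevated prompt
vssadmin list shadows
# The same information via CIM
Get-CimInstance Win32_ShadowCopy | Select-Object VolumeName, InstallDate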
 
@Mussels
@Wirko
@chrcoluk

Windows runs a scheduled task that defrags SSDs every month. It's a traditional defragmentation, just like on HDDs. As documented on Microsoft Docs:

[screenshot: Microsoft Docs excerpt]

You won't find much information on why Windows is doing that. Apparently one reason is to prevent the system from reaching the NTFS fragmentation limit, although the risk of that happening is low for the average user. I've also read somewhere about issues with volume snapshots and that I/Os to highly fragmented files might incur a performance loss.

You can check the defragmentation history of your drives with the following PowerShell command:

Code:
Get-EventLog -LogName Application -Source "microsoft-windows-defrag" | Sort-Object TimeGenerated | Format-List TimeGenerated, Message

Here are the results when I run on my PC:

[screenshot: defrag history event log output]

Windows defrags any SSD. That includes my main drive (partitions C and D), a drive in a secondary slot (G), and also a drive that was plugged in via USB (N). And the scheduled task runs even when system restore/volume snapshots are disabled (I only have it enabled for drive C).
I feel kinda glad for disabling that sucker (the Scheduled Tasks service) on the majority of my OS installations. I never trusted MS doing stuff in the background.
 
If you manually run optimise, it will only show a retrim in the log. But the log also shows that a defrag completed in the same second as the retrim. My conclusion is that the log just shows the start and instant end of a scheduled defrag run, since it was not necessary because the drive in question was an SSD. This is just another round of confirmation bias.
 
If you manually run optimise, it will only show a retrim in the log. But the log also shows that a defrag completed in the same second as the retrim. My conclusion is that the log just shows the start and instant end of a scheduled defrag run, since it was not necessary because the drive in question was an SSD. This is just another round of confirmation bias.
I'd also check total bytes written before and after optimisation.
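(No built-in cmdlet exposes total writes, but smartmontools, a third-party package, reads them from SMART:)

Code:
# NVMe drives report "Data Units Written" (units of 512,000 bytes)
smartctl -a /dev/nvme0
# SATA drives typically expose Total_LBAs_Written in the attribute table
smartctl -A /dev/sda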
 
If you manually run optimise, it will only show a retrim in the log. But the log also shows that a defrag completed in the same second as the retrim. My conclusion is that the log just shows the start and instant end of a scheduled defrag run, since it was not necessary because the drive in question was an SSD. This is just another round of confirmation bias.

Interesting observation. Windows will defrag SSDs, but what you have observed from the log timestamps makes sense, as it should only be defragging heavily fragmented drives that utilise volume shadow copies, not every single SSD.

If you have a case with an I/O LED, or have something like Task Manager open after booting, the obvious sign a defrag is happening is constant activity on the SSD after logging into the desktop following a reboot; I have only ever observed it happening right after boot. You can then open the optimiser app and that will confirm whether an actual defrag is in progress (the live status of it); I posted a screenshot of it in a thread I made on here some months ago. If the defrag is interrupted manually in the optimiser app, clicking optimise will restart the defrag process.

In addition, I think it isn't an unconditional once-a-month thing; it actually checks the level of fragmentation.


 