
SSD Defragging: The safe way

@Wirko
Free-space performance is not related to overprovisioning; I can have free space without any OP.
The related drop in speed is usually from less NAND being available for pSLC.

Ignoring for a moment that you're assuming which drive Daytrader is using, this could just as well be a 770 or a 750...
Besides, it took me about 5 min to see that WD commonly uses about 10% OP.

@Daytrader
Unless you purchased the retail version, OEM drives don't work with the WD software, so that would explain why you can't change anything.

Personally, I never fill drives anywhere close to full, no matter what kind or what for. Having a lot of drives, I usually drop down to the next "even" number (400 GB on a 450 GB drive, etc.), just so I don't have to worry about it.
Sometimes games get large updates much later (a ~120 GB game needed an additional 80 GB for new maps before deleting the old files); if that happens, I can just "add" that unpartitioned space and have more time to get a new (as in bigger) drive.
 
So you have an OP partition then, instead of just not filling up your drives and leaving free space as your OP?
 
@Daytrader
It's just easier to do it by formatting/partitioning less space vs. just leaving it free.

This way you don't have to worry about forgetting (not to fill it), especially if you still have HDDs/multiple drives, and it also gives you more options if there is an issue later.
E.g., a Windows (update) needed to increase the size of a (hidden) partition; having unformatted space made it easier to "fix" than having to shrink an existing partition to get it done.
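
If anyone wants the quick way to set that up, here's a rough PowerShell sketch (disk number, size, drive letter, and label are placeholders; check Get-Disk before running anything):

# find the right disk number first
Get-Disk
# create a 400 GB partition and leave the rest of the drive unallocated as manual OP
New-Partition -DiskNumber 1 -Size 400GB -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Games"
# later, if you really need the space, grow the partition into the unallocated area
# Resize-Partition -DriveLetter E -Size (Get-PartitionSupportedSize -DriveLetter E).SizeMax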
 
Ok thx for reply.
 
The other day I was testing a custom Solidigm NVMe driver linked in the https://www.techpowerup.com/forums/...or-all-nvme-brands-ssds-any-nvme-ssds.327143/ thread. Read speeds were slightly higher than with the generic NVMe driver, but in one of the AS SSD tests write speed took a hit, so I decided to test it by timing the copy of a 15 GB file from the secondary PCIe 3.0 Samsung 960 Pro I use as a download drive to the PCIe 4.0 Kingston Fury Renegade which is my main drive.

To my surprise, copying averaged 1.6 GB/s with the generic Windows driver and 1.4 GB/s with the Solidigm driver. Difference in speed aside, this was too low, because I remembered being able to copy files to the Kingston at up to 2.6 GB/s. After some trial and error, which included duplicating the mentioned 15 GB file and copying it to the Kingston at 2.6 GB/s, I came to the conclusion the drive might be fragmented, which was confirmed from an elevated command prompt by typing

defrag.exe d: -a (analyze the disk; the drive turned out to be heavily fragmented, close to 50%)

Then I defragged the drive with

defrag.exe d: -f

The process took around 7-8 minutes for ~150 GB of data, though most of it was large files. After that I copied the original 15 GB file again, and now the speed was 2.3 GB/s on average. Looks like defragging a heavily fragmented SSD can make a big difference.
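
For reference, the same analyze/defrag passes can also be run from PowerShell on Windows 8 and later; this is just a sketch using the same d: drive as above, and the -ReTrim line is the normal SSD optimization rather than a full defrag:

Optimize-Volume -DriveLetter D -Analyze -Verbose    # report fragmentation only
Optimize-Volume -DriveLetter D -Defrag -Verbose     # traditional file defragmentation
Optimize-Volume -DriveLetter D -ReTrim -Verbose     # just send TRIM for all free space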
 
There will come a time when, chasing maximum read and write speeds, you'll kill the SSD quicker by defragmenting it on a regular basis. You may have misinterpreted how the data transfer works; it's totally different.

I'll give you a prime example: I am using a regular 7,200 rpm 1 TB Seagate alongside a Crucial MX500 1 TB, each on a different chipset, one Intel and the other Marvell, all on the latest official drivers from the manufacturers. Now, you'd think the 7,200 rpm 1 TB drive is going to bottleneck the read and write performance, but the computer's memory buffers the data going from the MX500 to the 7,200 rpm drive through the memory controller, giving the regular hard drive a boost in transfer speed for a certain amount of time before it drops to the slower sustained rate. This is the expected behavior.

In your case, you are using two different drives on two different PCIe links; they may be wired differently, e.g. one connected to the CPU and the other to the southbridge/chipset, giving two different kinds of read and write performance.
 
How does one know which data is in the SLC cache and which is in TLC/QLC? So it may be more than fragmentation.
 
AFAIK the drive tries to clear the SLC cache as quickly as possible, otherwise it wouldn't serve much purpose.
 
No one here is suggesting that an SSD should be defragmented on a regular basis. Once a year may be enough to keep the drive in good shape.
 
I hardly ever defrag an SSD, only a regular hard drive. I wouldn't worry too much about the average transfer rate; there is no need to be fixated on read or write performance before, during, and after.
 
Is UltimateDefrag a good one?

 
If you have really large files, making a separate partition with a larger cluster size for storing them should also help reduce the number of fragments the OS needs to manage. On an SSD, defragging can reduce the OS overhead of highly fragmented files, but of course at the cost of wear and tear on the SSD. Simply copying a file off and back onto the partition will defragment it; this is probably better and faster in many cases than a defragmenter, which usually also thrashes a lot of other files around or uselessly moves things in small chunks, both of which incur more writes and more time. Contiguous free space does not exist on an SSD the way it does on an HDD.
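
A minimal sketch of the larger-cluster idea in PowerShell, assuming a dedicated E: volume for the big files (drive letter and label are placeholders, and formatting erases whatever is on the volume):

# 64 KB clusters instead of the default 4 KB for a volume that only holds large files
Format-Volume -DriveLetter E -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "BigFiles"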
 
I agree with you; if the fragmented SSD is not a system drive, copying the files to another drive, formatting the SSD, and copying the files back will cause less wear and tear on the drive than an old-fashioned defragmentation.

Of course, you can't do that with the system drive, which is why some kind of defragmenting has to be done to keep file operations as fast as possible.

Does defragmenting a fragmented SSD make a difference? It sure does. Is it worth sacrificing some of the drive's write endurance to get optimal performance out of it? That is for each user to decide for themselves.
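
For a data-only drive, that route could look roughly like this from an elevated PowerShell prompt (D: as the fragmented SSD and E:\ssd_backup as the temporary target are placeholders; double-check paths before the format step):

robocopy D:\ E:\ssd_backup /MIR /R:1 /W:1      # mirror everything off the SSD
Format-Volume -DriveLetter D -FileSystem NTFS  # quick format, wipes D:
robocopy E:\ssd_backup D:\ /MIR /R:1 /W:1      # copy it all back, now unfragmented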
 
SyncBack copies files without fragmenting them, if that's ever useful info; I just copied 8 TB of movies from 2x 4 TB drives to 1x 8 TB!
 
You can take some guesswork out of the equation by recording the total bytes written from SMART attributes. If and when you decide to manually defrag again, write down the TBW before and after, and then again a few hours later (in the unlikely case attributes don't update in real time).

Here's the data for my (lightly used) PC, with a 500 GB SSD and a 100 GB Windows 7 system partition:
first defrag: 115 GB written
next 4 days of regular use: 17 GB written
repeated defrag after 4 days: 3 GB written
8 more days of use: 31 GB written
repeated defrag after 8 days: 4 GB written.
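
If anyone wants to script the before/after readings, one option is smartmontools (assuming it is installed; the drive letter is a placeholder and the exact attribute name depends on the drive):

# NVMe drives report "Data Units Written" (1 unit = 512,000 bytes);
# many SATA SSDs report attribute 241 (Total_LBAs_Written) instead
smartctl -A C: | Select-String "Data Units Written|Total_LBAs_Written"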
 
That is very good advice, thank you! I will remember that for next time. Won't be for at least a year though :)

Interesting, your numbers say it didn't do much writing, but I guess it mostly depends on the size of the files with 2+ fragments.
 
The other thing you can do to reduce writes and extend the endurance is to just defrag the files with 20+ fragments.

If you plotted the files by the number of fragments they have, most would have very few fragments (say 50,000 files with 2-10 fragments each), and only a handful would have a very large number (say 20 files with 100+ fragments). It would look like a Pareto distribution.

If you just defrag the files with >X number of fragments, you will get most of the gain with less of the downside.
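
One way to do that selectively is Sysinternals Contig (a separate download; the folder and file names below are only examples):

contig.exe -a -s D:\Games\*.*              # report fragment counts per file
contig.exe "D:\Games\some_huge_file.pak"   # rewrite only that file contiguously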
 
That's basically the same as the first post in this thread suggested.

Also, it seems that most people consider the Windows defragmenter to be as dumb as the Norton Utilities defragmenter 30+ years ago. I don't know how dumb it is; I just assume MS knows a thing or two about SSDs (Windows 7 already had TRIM support, for example), so they have the knowledge to make a defragmenter with SSD-specific algorithms. Total defragmentation of files and free space makes no sense at all, but putting only the most fragmented files in order would make sense. Giving priority to the most frequently used files would be even smarter.

How smart is the Windows 11 defragmenter really? We can't say until someone does a thorough analysis of its operation.

I'll just add that total defragmentation makes no sense on HDDs, either. You get fragments after half an hour of using Windows, so why even try to keep a perfect order on your disks?
 
Speed. You gain a lot of sequential speed (or rather, you cut your sequential speed losses) by doing a total defragmentation on a hard drive. If the drive head can read your entire file without having to change position on the platter, because the file is not fragmented, you reduce access times quite a bit.

Also, defragmentation software usually places the most accessed files near the outer parts of the platter. Since the edge is moving faster than the inner parts of the platter, access times drop further still.

You want the free space to be on the slowest part of the platter if possible, so that the maximum number of files can be accessed at the highest possible speed, with your most accessed files at the very edge of the platter, near the rim, for the reasons listed above.

Worst case is that you have a mix of free space and fragmented files all scattered randomly throughout the platter.

When I had hard drives, I didn't aim for perfect order but did run a defragmentation run every 2 months or so, depending on how static my data was. It had a noticeable effect on access time for certain games.

Concerning hard drives, the Windows defragmenter is perfectly fine if a bit slow. I prefer Defraggler for both hard drive total defragmentation and SSD file defragmentation because it gives an option, regardless of drive type, to only process certain files and it gives a really detailed block visualization that you can click on to see what files are contained where.
 
I'd argue that it's the transfer speed that is highest on the outside, not the access time; 180° is 180°.
 
On the HDD I run the Windows defragmenter from a command prompt with parameters.
 
I run the Windows 11 defrag every week on my 990 Pros. It doesn't defrag, though; it just runs the TRIM command for the drive(s). I have overprovisioning on both my 1 TB and 4 TB 990 Pros. Other than that, game on!
 
I run the Windows 11 defrag every week on my 990 Pros
Manually, or with Windows' own automatic schedule set to "weekly"? Either way, Win11 does a "light" defrag on your (and my) 990 Pro at most once a month, if the overall fragmentation exceeds 10%. So it is probably only once every 2-3 months in your case, but it certainly happens.
It doesn't defrag, though; it just runs the TRIM command for the drive(s).
Yes, "Re-trim" once a week, but sometimes a "defrag" still happens. Just start PowerShell as admin and type in:
Get-EventLog -LogName Application -Source "microsoft-windows-defrag" | sort timegenerated | fl timegenerated, message
If you only see optimize/retrim reports, it just means your SSDs never reached >10% fragmentation.
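
Side note: Get-EventLog is gone in PowerShell 7, so if the command above isn't available, a roughly equivalent query (a sketch, not tested on every setup) is:

Get-WinEvent -FilterHashtable @{ LogName = 'Application'; ProviderName = 'Microsoft-Windows-Defrag' } |
    Sort-Object TimeCreated | Format-List TimeCreated, Message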
 