
Are there wear problems from partitioning a SSD?

Joined
Jan 18, 2020
Messages
31 (0.02/day)
System Name Roku
Processor Ryzen 3600
Motherboard MSI VHD PRO MAX
Cooling Cryorig H7
Memory G.Skill Sniper X 16 GB
Video Card(s) Galax 2060 Super 1-Click OC
Storage ADATA XPG SX8200 Pro
Display(s) Acer VG252Q
Audio Device(s) Realtek
Power Supply Seasonic Focus GX 650W
Mouse Logitech G102
Keyboard Phantom RGB
Software Windows 10 Pro
Joined
Aug 14, 2013
Messages
2,373 (0.61/day)
System Name boomer--->zoomer not your typical millenial build
Processor i5-760 @ 3.8ghz + turbo ~goes wayyyyyyyyy fast cuz turboooooz~
Motherboard P55-GD80 ~best motherboard ever designed~
Cooling NH-D15 ~double stack thot twerk all day~
Memory 16GB Crucial Ballistix LP ~memory gone AWOL~
Video Card(s) MSI GTX 970 ~*~GOLDEN EDITION~*~ RAWRRRRRR
Storage 500GB Samsung 850 Evo (OS X, *nix), 128GB Samsung 840 Pro (W10 Pro), 1TB SpinPoint F3 ~best in class
Display(s) ASUS VW246H ~best 24" you've seen *FULL HD* *1O80PP* *SLAPS*~
Case FT02-W ~the W stands for white but it's brushed aluminum except for the disgusting ODD bays; *cries*
Audio Device(s) A LOT
Power Supply 850W EVGA SuperNova G2 ~hot fire like champagne~
Mouse CM Spawn ~cmcz R c00l seth mcfarlane darawss~
Keyboard CM QF Rapid - Browns ~fastrrr kees for fstr teens~
Software integrated into the chassis
Benchmark Scores 9999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999
As the old saying goes, when in doubt, don't.

What more does one really need?
Sure, but Windows devs who built a custom NTFS with extended block size limitations and scheduled drive optimization tailored to SSDs think it’s sometimes necessary after analysis due to block size limitations. Why?

Obvs wouldn’t encourage anyone to do it manually but I’m not understanding something about the limits of block sizes (I think?) and am curious.
 
Joined
Jul 5, 2013
Messages
25,559 (6.47/day)
Sure, but Windows devs who built a custom NTFS with extended block size limitations and scheduled drive optimization tailored to SSDs think it’s sometimes necessary after analysis due to block size limitations. Why?
Honestly, no idea. The rationale doesn't make sense.
Obvs wouldn’t encourage anyone to do it manually but I’m not understanding something about the limits of block sizes (I think?) and am curious.
I think you're referring to a problem that existed years ago but has since been solved.
 
Joined
Aug 14, 2013
Messages
2,373 (0.61/day)
System Name boomer--->zoomer not your typical millenial build
Processor i5-760 @ 3.8ghz + turbo ~goes wayyyyyyyyy fast cuz turboooooz~
Motherboard P55-GD80 ~best motherboard ever designed~
Cooling NH-D15 ~double stack thot twerk all day~
Memory 16GB Crucial Ballistix LP ~memory gone AWOL~
Video Card(s) MSI GTX 970 ~*~GOLDEN EDITION~*~ RAWRRRRRR
Storage 500GB Samsung 850 Evo (OS X, *nix), 128GB Samsung 840 Pro (W10 Pro), 1TB SpinPoint F3 ~best in class
Display(s) ASUS VW246H ~best 24" you've seen *FULL HD* *1O80PP* *SLAPS*~
Case FT02-W ~the W stands for white but it's brushed aluminum except for the disgusting ODD bays; *cries*
Audio Device(s) A LOT
Power Supply 850W EVGA SuperNova G2 ~hot fire like champagne~
Mouse CM Spawn ~cmcz R c00l seth mcfarlane darawss~
Keyboard CM QF Rapid - Browns ~fastrrr kees for fstr teens~
Software integrated into the chassis
Benchmark Scores 9999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999
With respect, did you follow the links in my post? I’m not sure it’s been resolved, nor do I have any reason to believe so, given that the defrag docs from last year explicitly say Windows defragments hard drives. Idk what this means — do you?

Because I don’t actually know, and really appreciate and am heartened that two frustrated forum users put their feelings aside over a pointless debate (that I’m invested in) to come to an agreement while demonstrating humility (:love:), I’m not going to elaborate. I know we have beef, and we both get a certain joy out of it, but I’d hope you’d do the same (unless of course you have a change log or something to point to!).

:toast:
 
Joined
Jul 5, 2013
Messages
25,559 (6.47/day)
With respect, did you follow the links in my post? I’m not sure it’s been resolved, nor do I have any reason to believe so, given that the defrag docs from last year explicitly say Windows defragments hard drives. Idk what this means — do you?
I did. What was stated in those documents is in direct conflict with known facts. Everyone who understands SSD NAND on even a basic level knows that NAND has a limited number of Program/Erase cycles. As Mussels stated correctly above, SSD controllers do not handle NAND as continuous memory; it is instead broken up into striped sectors, like a RAID array. It's actually more complicated than that, but you get the idea. As Windows itself warns against defragging, I'm inclined to lean toward the idea that Windows does not actually do so, by default. Even if it does, the extent to which data is reordered is likely minimal.
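To picture the striping and wear-leveling, here's a toy Python sketch (hypothetical numbers, nothing like real controller firmware): the controller tracks erase counts per NAND block and steers each write to the least-worn block, so even hammering a single logical address spreads wear across the whole drive.

# Toy flash-translation-layer: wear-leveled block allocation.
# Purely illustrative; real controllers also stripe across dies/channels,
# run garbage collection, and keep over-provisioned spare blocks.

NUM_BLOCKS = 8
erase_counts = [0] * NUM_BLOCKS   # P/E cycles consumed per NAND block
mapping = {}                      # logical block -> current physical block

def write(logical_block):
    # Send the write to the least-worn physical block, not a fixed "place".
    physical = min(range(NUM_BLOCKS), key=lambda b: erase_counts[b])
    erase_counts[physical] += 1   # each program/erase uses up lifespan
    mapping[logical_block] = physical

for _ in range(80):               # rewrite the same logical address 80 times
    write(0)
print(erase_counts)               # wear is even: [10, 10, 10, 10, 10, 10, 10, 10]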

Due to the automatic sector allocation and wear-leveling all drives now do, if the Windows defrag service is actually running, it shouldn't be, because of the extra wear it would impose on a drive. This conflicting and confusing information is exactly why I do not let Windows manage itself.

So where does that leave us all? We know NAND cells have a limited lifespan. We know that drive makers advise not allowing defrag operations and we know that Windows itself warns about it.
 
Joined
Dec 26, 2019
Messages
33 (0.02/day)
In direct answer to the question:
NO.
There are ZERO wear problems due to partitioning an SSD.

An SSD does NOT use the file/partition system in order to write to any specific PLACE on the SSD.
Your data can be spread out all over the SSD.

There might be OTHER issues associated with a LOT of partitions, but wear is NOT one of them.
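A rough Python illustration of that indirection (made-up numbers; real flash translation layers are far more complex): the OS sees partitions as LBA ranges, but the controller's mapping table can put any LBA on any physical page.

# Rough illustration: partitions are just LBA ranges the OS invents.
# The controller remaps LBAs to physical pages however it likes, so a
# partition boundary does not pin data to one region of the flash.

partitions = {"C:": range(0, 500_000), "D:": range(500_000, 1_000_000)}

flash_map = {}   # LBA -> physical page, maintained inside the drive
next_page = 0

def controller_write(lba):
    global next_page
    flash_map[lba] = next_page   # physical location is unrelated to the LBA
    next_page += 1

for lba in (0, 500_000, 1, 500_001):   # alternating "C:" and "D:" writes
    controller_write(lba)
print(flash_map)   # {0: 0, 500000: 1, 1: 2, 500001: 3} - physically interleaved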

You can read the first answer in this link.
And if you aren't sure, then go to his references.
Disadvantages of partitioning an SSD?
 

Mussels

Freshwater Moderator
Staff member
Joined
Oct 6, 2004
Messages
58,413 (8.18/day)
Location
Oystralia
System Name Rainbow Sparkles (Power efficient, <350W gaming load)
Processor Ryzen R7 5800x3D (Undervolted, 4.45GHz all core)
Motherboard Asus x570-F (BIOS Modded)
Cooling Alphacool Apex UV - Alphacool Eisblock XPX Aurora + EK Quantum ARGB 3090 w/ active backplate
Memory 2x32GB DDR4 3600 Corsair Vengeance RGB @3866 C18-22-22-22-42 TRFC704 (1.4V Hynix MJR - SoC 1.15V)
Video Card(s) Galax RTX 3090 SG 24GB: Underclocked to 1700Mhz 0.750v (375W down to 250W))
Storage 2TB WD SN850 NVME + 1TB Samsung 970 Pro NVME + 1TB Intel 6000P NVME USB 3.2
Display(s) Phillips 32 32M1N5800A (4k144), LG 32" (4K60) | Gigabyte G32QC (2k165) | Phillips 328m6fjrmb (2K144)
Case Fractal Design R6
Audio Device(s) Logitech G560 | Corsair Void pro RGB |Blue Yeti mic
Power Supply Fractal Ion+ 2 860W (Platinum) (This thing is God-tier. Silent and TINY)
Mouse Logitech G Pro wireless + Steelseries Prisma XL
Keyboard Razer Huntsman TE ( Sexy white keycaps)
VR HMD Oculus Rift S + Quest 2
Software Windows 11 pro x64 (Yes, it's genuinely a good OS) OpenRGB - ditch the branded bloatware!
Benchmark Scores Nyooom.
Hi,
Only mainstream OS I see here is Win-10

Most popular and mainstream are not the same words, nor do they have the same meaning

Going to see how primocache works, thanks! I'm currently using IMDisk to create a RAM disk as temp drive, dynamically allocating up to 4 GB for this purpose.
My advice: Set a 2GB delayed write cache, don't bother with a read cache.

If and when you write to the drives, it's got enough buffer for a mech drive to catch up without any issues, and on SSDs the delay reduces writes.
Primocache can also re-use deleted files (recurring files like a browser cache from the same website), so it can reduce the amount of writes done to an SSD over time. It's not large in the MB sense, but I've seen greater than 50% write reductions on my C: partition, just by delaying small singular writes while browsing the web.


I'll try and get a screenshot; the stats reset when I reboot the PC, so it's currently zeroed

D: drive (Users folders)
Almost nothing has been written, so nothing was cached
That 1.6% missing here is either still in the cache, or never needed to be written - if deferred blocks is at 0, then it's all been written, and that's the amount of writes you've saved.



C: drive, after 30 minutes: Totally different story


29.1% reduction
10.2% still in RAM
Trimmed Blocks - 2141 - This is data that avoided getting written, because it was either on the disk already and 'undeleted', or was no longer needed by the time the 60 second write delay had ended.


You could think of this like a Log file being written to every second - by delaying the writes to every 60 seconds, you've saved 59 writes every minute it's running
(Reality isn't quite like that, as Windows has its own limited caching system, but it's a clear example)
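To put toy numbers on that log-file example (assuming one small append per second and a 60-second flush window; this is the idea only, not PrimoCache's actual internals):

# Write-through vs. a 60-second deferred write cache, over one hour
# of a log file being appended once per second. Illustrative only.

SECONDS = 3600
FLUSH_INTERVAL = 60

writes_without_cache = SECONDS                  # every append hits the SSD
writes_with_cache = SECONDS // FLUSH_INTERVAL   # one coalesced flush per window

print(writes_without_cache, writes_with_cache)  # 3600 vs 60
print(f"saved: {1 - writes_with_cache / writes_without_cache:.0%}")   # saved: 98%

The flip side, per the warning further down: whatever is still sitting in that deferred window when power dies never reaches the drive.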


So depending how you view this: Wowee, I've saved 64MB of writes.
Or: Wowee, my OS drive will live 30% longer




As they say on their website: You risk data loss or corruption if you have an unstable system, or suffer power outages. Use a smaller delay if you're worried, and a UPS if you can.

 

Joined
Mar 21, 2021
Messages
4,403 (3.89/day)
Location
Colorado, U.S.A.
System Name HP Compaq 8000 Elite CMT
Processor Intel Core 2 Quad Q9550
Motherboard Hewlett-Packard 3647h
Memory 16GB DDR3
Video Card(s) Asus NVIDIA GeForce GT 1030 2GB GDDR5 (fan-less)
Storage 2TB Micron SATA SSD; 2TB Seagate Firecuda 3.5" HDD
Display(s) Dell P2416D (2560 x 1440)
Power Supply 12V HP proprietary
Software Windows 10 Pro 64-bit
My advice: Set a 2GB delayed write cache, don't bother with a read cache.

Would this cause issues during a power cut or crash? (I have an SSD in mind as cache as opposed to RAM)

How does the Mac fusion drive technology work?
 
Joined
Jan 18, 2020
Messages
31 (0.02/day)
System Name Roku
Processor Ryzen 3600
Motherboard MSI VHD PRO MAX
Cooling Cryorig H7
Memory G.Skill Sniper X 16 GB
Video Card(s) Galax 2060 Super 1-Click OC
Storage ADATA XPG SX8200 Pro
Display(s) Acer VG252Q
Audio Device(s) Realtek
Power Supply Seasonic Focus GX 650W
Mouse Logitech G102
Keyboard Phantom RGB
Software Windows 10 Pro
What's the difference between deferred-writes and Windows' own "turn off write-cache buffer flushing"?
 
Joined
Feb 1, 2019
Messages
2,582 (1.35/day)
Location
UK, Leicester
System Name Main PC
Processor 13700k
Motherboard Asrock Z690 Steel Legend D4 - Bios 13.02
Cooling Noctua NH-D15S
Memory 32 Gig 3200CL14
Video Card(s) 3080 RTX FE 10G
Storage 1TB 980 PRO (OS, games), 2TB SN850X (games), 2TB DC P4600 (work), 2x 3TB WD Red, 2x 4TB WD Red
Display(s) LG 27GL850
Case Fractal Define R4
Audio Device(s) Asus Xonar D2X
Power Supply Antec HCG 750 Gold
Software Windows 10 21H2 LTSC
This thread might have needed a cleanup, but any perceived grumpiness got sorted out via PMs.
Let's keep things nice; we all seem to have different views on SSDs, but seriously, there are garbage drives out there, and a lot of garbage information too. Things change in the SSD world: common knowledge from the SLC days means nothing on a QLC drive.

This thread has changed topic because we got the answer to the OP's question (no) - but we're still discussing the actual concern of SSD wear.


I'll summarise the entire mess below into this: you can buy 1TB drives right now that range from 1,200TBW to 80TBW.
I've only spent 15 minutes looking into this; I'm sure worse drives exist out there.




Smaller SSDs are at the highest risk of piling up TBW, because the simple fact of running out of room means you need to delete things and likely re-create them later. Even automated Windows tasks like the page file behave this way, with greater storage space helping alleviate re-writes.


I wrote some big annoyed rant a while back about Samsung's naming scheme and how a 980 Pro has half the TBW of a 970 Pro - and every new release (Evo, Evo Plus, Evo Plus Plus, whatever - was it the SD cards that did that?) was fairly consistent, until now, when it went sharply backwards.



Modern SSDs went backwards in TBW, fast. There are a lot of 250GB-and-under drives with low TBWs, and some brands refuse to even advertise them, giving you "hours" instead.

Lower capacity drives often see more writes, not fewer - a PC gamer is going to delete games to install new ones far more often than a user who installs something and leaves it there.
Deleting to free up space, only to re-create the data later, is the worst-case scenario here.
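Back-of-the-envelope math makes the point; the churn figures in this sketch are hypothetical, just to show how capacity and habits interact:

# Back-of-the-envelope endurance estimate. The churn figures are
# hypothetical; the point is how capacity and habits interact.

def years_of_life(tbw, gb_written_per_day):
    return tbw * 1000 / gb_written_per_day / 365

# Small, low-TBW drive with constant delete/reinstall churn:
print(round(years_of_life(tbw=80, gb_written_per_day=100), 1))   # 2.2 years
# Bigger, higher-TBW drive where games just stay installed:
print(round(years_of_life(tbw=600, gb_written_per_day=20), 1))   # 82.2 years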




From here on I'm only comparing NVMe drives that are for sale today.
Ranked with Samsung as the reference, then best to worst.


Keep in mind, these are considered the top-tier premium drives by their manufacturers


Sticking to just the 1TB models, since every series has them:
980 Pro: 600
980: 600
970 Pro: 1200
970 Evo Plus: 600
970 Evo: 600

980 (regular)

Evo plus range:

970 evo range

What about their QLC range, well known for being cheap at the cost of lifespan?

360TBW. Honestly, it's low but it's not terrible - and they get much more reasonable on the bigger models.

So if Samsung, the king of consumer gaming SSDs, is going backwards (the 980 series), what about other brands?


Team MP34:
Huh. That's actually impressive.

XPG's SX8200 Pro?
Not so bad; on par with Samsung.

Crucial P2 series:
Basically, halve Samsung's numbers. Against the 970 Pro, quarter them.

Kingston's NV1 range:
Oh, making Crucial look good here.

WD Green? Oh no. Oh fuck no. 80TBW for the TLC 960GB, and 100TBW for the 1TB and 2TB QLC.

From 1200TBW to 80TBW.


In SATA SSDs, things are just depressing.
These are generally on par with small NVMe drives, but you can imagine these drives end up with data deleted and re-created far more often than bigger drives that can retain data more easily.
These 40TBW drives wouldn't last me a year as an OS drive.

Kingston A400:

Crucial BX500:

WD don't even list the TBW for their WD Green SATA drives; they know it's that bad. They state "up to 1 million hours" instead, for all drives.
Yep, Kingston have been going backwards. When I last bought some, I deliberately got old ones from eBay, as the newer models even back then were a clear downgrade.

But even the older ones I have now have issues. I own 4 Kingstons; 2 are dead, no detection or anything. These drives barely got used; I tried to use them after they had been powered down for ages, and they were just dead. One has a weird issue I have never heard of before, where it reports the space as full (when it isn't) in an Xbox, and it still does it after a fresh format of the filesystem. It was used for a while to record game clips, so heavy write use. One still works normally.

Currently all my Samsungs are OK. I have 2 really old 830s; they run way slower than new, but with no errors in active use and no stalling issues on reads, suggesting they're still functionally fine, just with lower peak speeds. My 850 Pro, which has had a ton of use including time in my PS4 Pro as the main drive (so again, clip recordings), runs like new: full performance, no active errors. The drive feels like it will last forever. I also have 3 860 Evos, 1 870 Evo, 1 970 Evo and 1 980 Pro.

Finally, I own 2 MX500s, early-revision drives with the wear-cycling issues; both developed signs of failure after a while, one with only light use, the other with moderate use in a laptop.

Also, in the mSATA interface, I've got an 860 Evo and a Kingston SSD; I replaced the Kingston in my pfSense firewall a few months back after the boot sectors got corrupted for unknown reasons.

On DRAMless SSDs: in a Tom's Hardware article before they first came to market, an industry insider warned that TBW would nosedive, due to the mapping tables needing to be written to flash all the time instead of being stored in an onboard RAM cache, and the cells holding them cannot be remapped with wear levelling. This may have changed since, as it was an old article, but because of that article I will avoid DRAMless SSDs.

Of the drives I own that did develop issues, none were anywhere near their rated endurance. Only my two 830s have significant usage of their rated erase cycles.
 
Joined
Mar 21, 2021
Messages
4,403 (3.89/day)
Location
Colorado, U.S.A.
System Name HP Compaq 8000 Elite CMT
Processor Intel Core 2 Quad Q9550
Motherboard Hewlett-Packard 3647h
Memory 16GB DDR3
Video Card(s) Asus NVIDIA GeForce GT 1030 2GB GDDR5 (fan-less)
Storage 2TB Micron SATA SSD; 2TB Seagate Firecuda 3.5" HDD
Display(s) Dell P2416D (2560 x 1440)
Power Supply 12V HP proprietary
Software Windows 10 Pro 64-bit
Joined
May 2, 2017
Messages
7,762 (3.04/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Would it be simpler to also specify how many times a drive can be completely re-written?

That way the
  • Samsung 970 Pro becomes: 1200
  • Samsung 980 Pro becomes: 600
just a thought
In a way, but TBW is more legible: it states an amount of data, which lends itself to simple comparisons with other amounts of data (something most computer users are at least somewhat familiar with), instead of requiring a conversion to get there. So TBW is more useful for reading against actual human use of the drive, while full drive writes is more legible if what you're after is a nominally like-for-like comparison of drive endurance on a more abstract, less everyday level. Which means that TBW is the far more useful metric overall, but not the be-all, end-all of describing drive endurance.
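The conversion between the two is a one-liner either way, which is part of the point; a Python sketch using the Samsung numbers quoted above:

# TBW <-> full drive writes; same information, different legibility.

def full_drive_writes(tbw, capacity_tb):
    return tbw / capacity_tb

print(full_drive_writes(tbw=1200, capacity_tb=1))   # 1200.0 rewrites (970 Pro 1TB)
print(full_drive_writes(tbw=600, capacity_tb=1))    # 600.0 rewrites (980 Pro 1TB)
print(full_drive_writes(tbw=600, capacity_tb=2))    # 300.0 - same TBW on a 2TB drive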
 
Joined
Nov 15, 2021
Messages
2,709 (3.03/day)
Location
Knoxville, TN, USA
System Name Work Computer | Unfinished Computer
Processor Core i7-6700 | Ryzen 5 5600X
Motherboard Dell Q170 | Gigabyte Aorus Elite Wi-Fi
Cooling A fan? | Truly Custom Loop
Memory 4x4GB Crucial 2133 C17 | 4x8GB Corsair Vengeance RGB 3600 C26
Video Card(s) Dell Radeon R7 450 | RTX 2080 Ti FE
Storage Crucial BX500 2TB | TBD
Display(s) 3x LG QHD 32" GSM5B96 | TBD
Case Dell | Heavily Modified Phanteks P400
Power Supply Dell TFX Non-standard | EVGA BQ 650W
Mouse Monster No-Name $7 Gaming Mouse| TBD
In a way, but TBW is more legible: it states an amount of data, which lends itself to simple comparisons with other amounts of data (something most computer users are at least somewhat familiar with), instead of requiring a conversion to get there. So TBW is more useful for reading against actual human use of the drive, while full drive writes is more legible if what you're after is a nominally like-for-like comparison of drive endurance on a more abstract, less everyday level. Which means that TBW is the far more useful metric overall, but not the be-all, end-all of describing drive endurance.
Yes and no - I see both sides. TBW is great for comparing drives of the same size, but if you are comparing drives of different sizes, it can provide a different perspective.
 
Joined
May 2, 2017
Messages
7,762 (3.04/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Yes and no - I see both sides. TBW is great for comparing drives of the same size, but if you are comparing drives of different sizes, it can provide a different perspective.
To some degree, yes, and as we've discussed above, the amount of writes you're likely to perform varies with drive size. Still, if what you're after is a simple, legible, relatable measure of what a drive can handle, TBW is much more that than full drive write cycles, regardless of size.
 
Joined
Aug 14, 2013
Messages
2,373 (0.61/day)
System Name boomer--->zoomer not your typical millenial build
Processor i5-760 @ 3.8ghz + turbo ~goes wayyyyyyyyy fast cuz turboooooz~
Motherboard P55-GD80 ~best motherboard ever designed~
Cooling NH-D15 ~double stack thot twerk all day~
Memory 16GB Crucial Ballistix LP ~memory gone AWOL~
Video Card(s) MSI GTX 970 ~*~GOLDEN EDITION~*~ RAWRRRRRR
Storage 500GB Samsung 850 Evo (OS X, *nix), 128GB Samsung 840 Pro (W10 Pro), 1TB SpinPoint F3 ~best in class
Display(s) ASUS VW246H ~best 24" you've seen *FULL HD* *1O80PP* *SLAPS*~
Case FT02-W ~the W stands for white but it's brushed aluminum except for the disgusting ODD bays; *cries*
Audio Device(s) A LOT
Power Supply 850W EVGA SuperNova G2 ~hot fire like champagne~
Mouse CM Spawn ~cmcz R c00l seth mcfarlane darawss~
Keyboard CM QF Rapid - Browns ~fastrrr kees for fstr teens~
Software integrated into the chassis
Benchmark Scores 9999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999
I did. What was stated in those documents is in direct conflict with known facts.
Sure, but “facts” that Microsoft themselves dispute and provide current documentation and bug fixes for.
As Windows itself warns against defragging, I'm inclined to lean toward the idea that Windows does not actually do so, by default. Even if it does, the extent to which data is reordered is likely minimal.
With respect, your inclination doesn’t clarify the mixed messaging that MS presents. Why would they put out a fix for their SSD defragmentation feature if they didn’t have one that needed to be fixed?
So where does that leave us all? We know NAND cells have a limited lifespan. We know that drive makers advise not allowing defrag operations and we know that Windows itself warns about it.
And yet MS thinks it’s sometimes a good idea to defragment metadata after analysis for some reason relating to block size limitations. Why? We can assume this is different from a HDD defrag — how so?

It’s a boring question, and probably doesn’t interest or effect the vast majority of MS users or forum goers. Maybe curiosity killed the cat, but I’m just out here trying to live dangerously.

Honestly I’m over here talking about format and volume limitations and acknowledging that the controller is ambivalent to these things, all the while pointing out MS’s documentation on the question — a conversation requires meeting a person where they’re at, not hand waving because you don’t have the answers, which I certainly don’t expect you to (it’s not like MS is providing them, at least as far as my research went)… it’s okay not to know and to make your own decisions based on your own knowledge, but I am curious and would like to know more :)
 
Joined
May 2, 2017
Messages
7,762 (3.04/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
And yet MS thinks it’s sometimes a good idea to defragment metadata after analysis for some reason relating to block size limitations. Why? We can assume this is different from a HDD defrag — how so?
If it is only defragmenting metadata, then that's quite different from a regular HDD defrag - essentially not touching normal files at all, just various file system data and possibly even lower-level metadata. It should have a vastly smaller impact on drive endurance, simply due to the much, much smaller amount of data being worked on.
 
Joined
Jul 5, 2013
Messages
25,559 (6.47/day)
Sure, but “facts” that Microsoft themselves dispute and provide current documentation and bug fixes for.
They contradict themselves frequently. I really don't take Microsoft documentation seriously.
With respect, your inclination doesn’t clarify the mixed messaging that MS presents.
Wasn't intending to. Don't care what Microsoft presents in their documentation.
Why would they put out a fix for their SSD defragmentation feature if they didn’t have one that needed to be fixed?
Because it's Microsoft. In that company, the right hand not knowing what the left hand is doing happens frequently. This has the effect of frequent misinformation being documented, making Microsoft look like monkeys diddling a football. So "why" is nearly impossible to answer, and just as irrelevant.
And yet MS thinks it’s sometimes a good idea to defragment metadata after analysis for some reason relating to block size limitations. Why?
Ok, that one has an answer. Fragmenting drive metadata can (but does not always) inhibit performance. However, that problem was solved many years ago when SSD makers moved all drive-internal functions to the drive NAND controller and baked it all into the drive firmware package. Operating systems no longer have any effect on such functionality.
We can assume this is different from a HDD defrag — how so?
That assumption would be correct. HDD defragmenting is a completely different task.
not hand-waving because you don't have the answers
It's not about not having the answers; it's about not wanting to spend the huge amount of time to hand-hold people to flesh those answers out. If people are not happy/satisfied/understanding concerning an answer provided, they need to do their own research and dig deeper for themselves.

If it is only defragmenting metadata, then that's quite different from a regular HDD defrag
Very much so, and correct. In the case of older SSDs, that function was needed from time to time, but only rarely.
 
Joined
Aug 14, 2013
Messages
2,373 (0.61/day)
System Name boomer--->zoomer not your typical millenial build
Processor i5-760 @ 3.8ghz + turbo ~goes wayyyyyyyyy fast cuz turboooooz~
Motherboard P55-GD80 ~best motherboard ever designed~
Cooling NH-D15 ~double stack thot twerk all day~
Memory 16GB Crucial Ballistix LP ~memory gone AWOL~
Video Card(s) MSI GTX 970 ~*~GOLDEN EDITION~*~ RAWRRRRRR
Storage 500GB Samsung 850 Evo (OS X, *nix), 128GB Samsung 840 Pro (W10 Pro), 1TB SpinPoint F3 ~best in class
Display(s) ASUS VW246H ~best 24" you've seen *FULL HD* *1O80PP* *SLAPS*~
Case FT02-W ~the W stands for white but it's brushed aluminum except for the disgusting ODD bays; *cries*
Audio Device(s) A LOT
Power Supply 850W EVGA SuperNova G2 ~hot fire like champagne~
Mouse CM Spawn ~cmcz R c00l seth mcfarlane darawss~
Keyboard CM QF Rapid - Browns ~fastrrr kees for fstr teens~
Software integrated into the chassis
Benchmark Scores 9999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999
Ok, that one has an answer. Fragmenting drive metadata can (but does not always) inhibit performance. However, that problem was solved many years ago when SSD makers moved all drive-internal functions to the drive NAND controller and baked it all into the drive firmware package. Operating systems no longer have any effect on such functionality.
The drive controller is unaware of the filesystem format and volume attributes, which include not only block size but also the metadata written by the file system. These are all determined by the OS and, to some extent (like block size and filesystem used), by the user — much like the previous discussion on partitions. I think you are misunderstanding the question.
That assumption would be incorrect. HDD defragmenting is a completely different task.
You are agreeing with my assumption, not showing that it’s incorrect. We are both assuming they are different tasks — I even provided data to attempt to demonstrate as much, which also shows that Windows can and does defragment SSDs.
It's not about not having the answers; it's about not wanting to spend the huge amount of time to hand-hold people to flesh those answers out. If people are not happy/satisfied/understanding concerning an answer provided, they need to do their own research and dig deeper for themselves.
I have, and am attempting to. You're uninterested, and that's okay, but it's not hand-holding, it's hand-waving. Please consider your own advice.

Very much so, and correct. In the case of older SSDs, that function was needed from time to time, but only rarely.
I don’t think you can demonstrate that either of these claims are true — Windows does defrag modern SSDs, and we don’t actually know why or how often, at least in terms of the body of knowledge this thread has provided.
 
Joined
Jul 5, 2013
Messages
25,559 (6.47/day)
You are agreeing with my assumption, not showing that it’s incorrect.
Sorry, see edit. I meant correct. HDDs do work in a very different way.
I have, and am attempting to. You're uninterested, and that's okay, but it's not hand-holding, it's hand-waving. Please consider your own advice.
I'm not talking about you in particular, just people in general. However, the answers to the question of defragging an SSD are very simple: don't do it. Why is easy to understand. Explaining Microsoft's nonsensical documentation is not easy, and I choose not to bother. This is not because I don't think you personally are worth it, only that the effort itself isn't worth it. Does that make sense?
and we don’t actually know why or how often
But we don't need to know why. It is a choice made on top of the mountain of poor choices Microsoft has made concerning Windows configurations. Explaining them is pointless and not worth the effort and time. Helping people understand a better choice is simple and productive.
 
Joined
Aug 14, 2013
Messages
2,373 (0.61/day)
System Name boomer--->zoomer not your typical millenial build
Processor i5-760 @ 3.8ghz + turbo ~goes wayyyyyyyyy fast cuz turboooooz~
Motherboard P55-GD80 ~best motherboard ever designed~
Cooling NH-D15 ~double stack thot twerk all day~
Memory 16GB Crucial Ballistix LP ~memory gone AWOL~
Video Card(s) MSI GTX 970 ~*~GOLDEN EDITION~*~ RAWRRRRRR
Storage 500GB Samsung 850 Evo (OS X, *nix), 128GB Samsung 840 Pro (W10 Pro), 1TB SpinPoint F3 ~best in class
Display(s) ASUS VW246H ~best 24" you've seen *FULL HD* *1O80PP* *SLAPS*~
Case FT02-W ~the W stands for white but it's brushed aluminum except for the disgusting ODD bays; *cries*
Audio Device(s) A LOT
Power Supply 850W EVGA SuperNova G2 ~hot fire like champagne~
Mouse CM Spawn ~cmcz R c00l seth mcfarlane darawss~
Keyboard CM QF Rapid - Browns ~fastrrr kees for fstr teens~
Software integrated into the chassis
Benchmark Scores 9999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999
The same could be said of @Shrek ’s original question — no one needs to know why, but there’s an answer — why not pursue it? I’ve conceded as much, and we’ve both identified that it doesn’t matter to you, so I’m not sure what all the words are about.

To be sure, there’s an explicit reason why metadata fragmentation on an SSD matters to Microsoft — snapshot integrity. Windows defrags metadata to prevent excessive block sizes as a normal part of it’s operation. It’s actually incorrect to simply say “don’t do it,” even if for many users it might not matter, or the chance that it does is one in a million.

Idk I like the enlightenment and science, so technical questions are interesting to me. Feel free to generalize, but I’d like a technical answer, and am hoping fellow forum members can guide me to one. It’s clear you’re uninterested, and that’s okay, but please don’t be dismissive — if you don’t care, why bother?
 
Joined
Dec 25, 2020
Messages
4,637 (3.80/day)
Location
São Paulo, Brazil
System Name Project Kairi Mk. IV "Eternal Thunder"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard MSI MEG Z690 ACE (MS-7D27) BIOS 1G
Cooling Noctua NH-D15S + NF-F12 industrialPPC-3000 w/ Thermalright BCF and NT-H1
Memory G.SKILL Trident Z5 RGB 32GB DDR5-6800 F5-6800J3445G16GX2-TZ5RK @ 6400 MT/s 30-38-38-38-70-2
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 1x WD Black SN750 500 GB NVMe + 4x WD VelociRaptor HLFS 300 GB HDDs
Display(s) 55-inch LG G3 OLED
Case Cooler Master MasterFrame 700
Audio Device(s) EVGA Nu Audio (classic) + Sony MDR-V7 cans
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Razer DeathAdder Essential Mercury White
Keyboard Redragon Shiva Lunar White
Software Windows 10 Enterprise 22H2
Benchmark Scores "Speed isn't life, it just makes it go faster."
The same could be said of @Shrek ’s original question — no one needs to know why, but there’s an answer — why not pursue it? I’ve conceded as much, and we’ve both identified that it doesn’t matter to you, so I’m not sure what all the words are about.

To be sure, there’s an explicit reason why metadata fragmentation on an SSD matters to Microsoft — snapshot integrity. Windows defrags metadata to prevent excessive block sizes as a normal part of it’s operation. It’s actually incorrect to simply say “don’t do it,” even if for many users it might not matter, or the chance that it does is one in a million.

Idk I like the enlightenment and science, so technical questions are interesting to me. Feel free to generalize, but I’d like a technical answer, and am hoping fellow forum members can guide me to one. It’s clear you’re uninterested, and that’s okay, but please don’t be dismissive — if you don’t care, why bother?

I wonder if the metadata issue would be solved with a modern, self-healing file system. Windows is still incapable of booting from a ReFS volume and it seems that Microsoft has decided to position it as a premium feature, too, since instead of extending ReFS support to basic editions of Windows such as Home, they actually removed it from Pro and created a new SKU, Pro for Workstations, that adds back support for this file system.
 

Mussels

Freshwater Moderator
Staff member
Joined
Oct 6, 2004
Messages
58,413 (8.18/day)
Location
Oystralia
System Name Rainbow Sparkles (Power efficient, <350W gaming load)
Processor Ryzen R7 5800x3D (Undervolted, 4.45GHz all core)
Motherboard Asus x570-F (BIOS Modded)
Cooling Alphacool Apex UV - Alphacool Eisblock XPX Aurora + EK Quantum ARGB 3090 w/ active backplate
Memory 2x32GB DDR4 3600 Corsair Vengeance RGB @3866 C18-22-22-22-42 TRFC704 (1.4V Hynix MJR - SoC 1.15V)
Video Card(s) Galax RTX 3090 SG 24GB: Underclocked to 1700Mhz 0.750v (375W down to 250W))
Storage 2TB WD SN850 NVME + 1TB Samsung 970 Pro NVME + 1TB Intel 6000P NVME USB 3.2
Display(s) Phillips 32 32M1N5800A (4k144), LG 32" (4K60) | Gigabyte G32QC (2k165) | Phillips 328m6fjrmb (2K144)
Case Fractal Design R6
Audio Device(s) Logitech G560 | Corsair Void pro RGB |Blue Yeti mic
Power Supply Fractal Ion+ 2 860W (Platinum) (This thing is God-tier. Silent and TINY)
Mouse Logitech G Pro wireless + Steelseries Prisma XL
Keyboard Razer Huntsman TE ( Sexy white keycaps)
VR HMD Oculus Rift S + Quest 2
Software Windows 11 pro x64 (Yes, it's genuinely a good OS) OpenRGB - ditch the branded bloatware!
Benchmark Scores Nyooom.
What's the difference between deferred-writes and Windows' own "turn off write-cache buffer flushing"?
They're exact opposites.
Windows' write cache buffer IS a cache, but it's a lot smaller, with no delay.


Would this cause issues during a power cut or crash? (I have an SSD in mind as cache as opposed to RAM)

How does the Mac fusion drive technology work?
Yes. A 60-second buffer would mean up to 60 seconds of data loss.
As for corruption, the risk is exactly the same as normal. NTFS and GPT, for example, have redundancy features to reduce corruption (shadow copies to 'undelete', not being as easy to corrupt as MBR's MFT, etc.)

No idea. I don't use a Mac.
The RAM cache is great for writing, to remove issues from writing more than a drive can handle.
Using an SSD is for a READ cache, to buffer frequently read files from multiple mechanical drives. Like a 500GB SSD caching the most common small files off your 40TB of Word documents or whatever.
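As a Python sketch of that read-cache idea (a file-level LRU with made-up sizes; real tiering like PrimoCache or Fusion Drive works at the block level, so treat this as the concept only):

from collections import OrderedDict

class ReadCache:
    # Keep the most recently read files on the fast SSD; evict the
    # least recently used when the cache fills. Toy model only.
    def __init__(self, capacity_mb):
        self.capacity = capacity_mb
        self.used = 0
        self.files = OrderedDict()              # path -> size_mb, in LRU order

    def read(self, path, size_mb):
        if path in self.files:
            self.files.move_to_end(path)        # hit: served from the SSD
            return "ssd"
        while self.used + size_mb > self.capacity and self.files:
            _, evicted_size = self.files.popitem(last=False)   # evict oldest
            self.used -= evicted_size
        self.files[path] = size_mb              # miss: read from HDD, then cache
        self.used += size_mb
        return "hdd"

cache = ReadCache(capacity_mb=500)
print(cache.read("report.docx", 1))   # hdd - first read comes off the mech drive
print(cache.read("report.docx", 1))   # ssd - now it's cached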



Yep, Kingston have been going backwards. When I last bought some, I deliberately got old ones from eBay, as the newer models even back then were a clear downgrade.

But even the older ones I have now have issues. I own 4 Kingstons; 2 are dead, no detection or anything. These drives barely got used; I tried to use them after they had been powered down for ages, and they were just dead. One has a weird issue I have never heard of before, where it reports the space as full (when it isn't) in an Xbox, and it still does it after a fresh format of the filesystem. It was used for a while to record game clips, so heavy write use. One still works normally.

Currently all my Samsungs are OK. I have 2 really old 830s; they run way slower than new, but with no errors in active use and no stalling issues on reads, suggesting they're still functionally fine, just with lower peak speeds. My 850 Pro, which has had a ton of use including time in my PS4 Pro as the main drive (so again, clip recordings), runs like new: full performance, no active errors. The drive feels like it will last forever. I also have 3 860 Evos, 1 870 Evo, 1 970 Evo and 1 980 Pro.

Finally, I own 2 MX500s, early-revision drives with the wear-cycling issues; both developed signs of failure after a while, one with only light use, the other with moderate use in a laptop.

Also, in the mSATA interface, I've got an 860 Evo and a Kingston SSD; I replaced the Kingston in my pfSense firewall a few months back after the boot sectors got corrupted for unknown reasons.

On DRAMless SSDs: in a Tom's Hardware article before they first came to market, an industry insider warned that TBW would nosedive, due to the mapping tables needing to be written to flash all the time instead of being stored in an onboard RAM cache, and the cells holding them cannot be remapped with wear levelling. This may have changed since, as it was an old article, but because of that article I will avoid DRAMless SSDs.

Of the drives I own that did develop issues, none were anywhere near their rated endurance. Only my two 830s have significant usage of their rated erase cycles.
DRAMless drives use system RAM, via HMB.
Overall, it's a pretty decent alternative.
Some drives using HMB without DRAM actually have extremely good performance: "Second-fastest PCIe 3.0 SSD we ever tested"



Would it be simpler to also specify how many times a drive can be completely re-written?

That way the
  • Samsung 970 Pro becomes: 1200
  • Samsung 980 Pro becomes: 600
just a thought

In a way, but TBW is more legible: it states an amount of data, which lends itself to simple comparisons with other amounts of data (something most computer users are at least somewhat familiar with), instead of requiring a conversion to get there. So TBW is more useful for reading against actual human use of the drive, while full drive writes is more legible if what you're after is a nominally like-for-like comparison of drive endurance on a more abstract, less everyday level. Which means that TBW is the far more useful metric overall, but not the be-all, end-all of describing drive endurance.

Yes and no - I see both sides. TBW is great for comparing drives of the same size, but if you are comparing drives of different sizes, it can provide a different perspective.

Eeeeeeehhh
TBW makes more sense. Otherwise users have to go do math.

You get a hardware reading for your drive in software; okay, cool, it's got 17.8 rewrites left: and that figure is useless.
At least with TBW, you know literally how many terabytes of data you have left to go. You don't sit there and think "ah yes, let's copy 0.67 SSDs' worth of data to this drive.... oh wait, that's the wrong capacity, let me math it out again"


Sure, but “facts” that Microsoft themselves dispute and provide current documentation and bug fixes for.

With respect, your inclination doesn’t clarify the mixed messaging that MS presents. Why would they put out a fix for their SSD defragmentation feature if they didn’t have one that needed to be fixed?

And yet MS thinks it’s sometimes a good idea to defragment metadata after analysis for some reason relating to block size limitations. Why? We can assume this is different from a HDD defrag — how so?

It’s a boring question, and probably doesn’t interest or effect the vast majority of MS users or forum goers. Maybe curiosity killed the cat, but I’m just out here trying to live dangerously.

Honestly I’m over here talking about format and volume limitations and acknowledging that the controller is ambivalent to these things, all the while pointing out MS’s documentation on the question — a conversation requires meeting a person where they’re at, not hand waving because you don’t have the answers, which I certainly don’t expect you to (it’s not like MS is providing them, at least as far as my research went)… it’s okay not to know and to make your own decisions based on your own knowledge, but I am curious and would like to know more :)

You're pretty much correct on this.

MS had some VERY specific situations where super-fragmented files were an issue on an SSD, in enterprise or server-style setups.
They have a method to defrag them, optimised to do as little writing as possible.

That's it. It's not the same as defragging an entire disk, or treating them like mech drives at all. It's defragging a single file in problematic situations only, and it's probably done in the background for us already but never gets triggered: "Does the file have 500+ fragments? No? Leave it TF alone."
If yes, make a copy into contiguous free space as reported by the drive, and mark the old locations for TRIM to clean up
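As a toy Python sketch, that policy would look something like this; the 500-fragment trigger and both helper functions are illustrative, not documented Windows behaviour:

FRAGMENT_THRESHOLD = 500   # hypothetical trigger, per the example above

def rewrite_contiguously(extents):
    # One sequential copy into contiguous free space (stubbed out).
    total = sum(len(e) for e in extents)
    return [range(0, total)]

def trim(extents):
    # Tell the drive the old locations are free to reclaim (stubbed out).
    pass

def maybe_defrag(extents):
    if len(extents) < FRAGMENT_THRESHOLD:
        return extents                    # leave it alone: no rewrite, no wear
    new_extents = rewrite_contiguously(extents)
    trim(extents)                         # old fragments get TRIMmed away
    return new_extents

print(maybe_defrag([range(i, i + 1) for i in range(600)]))   # [range(0, 600)]
print(len(maybe_defrag([range(0, 100)])))                    # 1 - left untouched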
 
Joined
May 4, 2020
Messages
11 (0.01/day)
That is my point... if I just work on one end of my desk, I'll wear out that section long before the desk would wear out if I were to use the whole area.

Your analogy is wrong. You may see 2 or more partitions on your SSD, but that is just what the SSD's controller shows you. In reality, the SSD's onboard controller doesn't care and will use all of the SSD's cells evenly.
 
Joined
Jul 5, 2013
Messages
25,559 (6.47/day)
Your analogy is wrong. You may see 2 or more partitions on your SSD, but that is just what the SSD's controller shows you. In reality, the SSD's onboard controller doesn't care and will use all of the SSD's cells evenly.
We've been over that. You've missed the conversation a bit.
 
Joined
May 2, 2017
Messages
7,762 (3.04/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Eeeeeeehhh
TBW makes more sense. Otherwise users have to go do math.

You get a hardware reading for your drive in software; okay, cool, it's got 17.8 rewrites left: and that figure is useless.
At least with TBW, you know literally how many terabytes of data you have left to go. You don't sit there and think "ah yes, let's copy 0.67 SSDs' worth of data to this drive.... oh wait, that's the wrong capacity, let me math it out again"
Yep, exactly this. If your unit of measure requires math to become understandable in a real-world usage situation, it is not a generally legible unit. And that might be fine for a bunch of use cases, but not this one. It would be similar to a car's gas tank not measuring from full to empty, but instead zeroing out every time it was filled and then counting the consumed liters/gallons since filling. Like... sure, that's useful in a way, but now I have to know how large my tank is and then calculate how much is actually left, just to get the most immediately useful information from what I'm being shown.
 