
Are there any long-term consequences to not trimming SATA SSDs in RAID 0? (upgrading storage capacity however I can)

So I'm looking to expand my storage.....

I have two SATA drives in RAID 0 (for a total of 2TB)... and those two drives can't be trimmed.

And I'm out of M.2 slots (I have 3, all occupied, for a total of 6TB), and I only have two SATA ports left, so when I need to expand storage, I think it's going to have to be SATA... and the only way I can keep the speed up is another RAID 0 setup (don't worry, it's not going to hold critical data).

So what happens if you don't run TRIM... does the drive just get filled up unnecessarily with bits and pieces, and sooner or later have to be reformatted? Is that the downside? Or is there something else? Maybe I'm missing the point entirely.


Also, I'm thinking RAID 0 might do something to cover up less-than-stellar SSD quality (since prices are quite high at the moment). I was originally going to do MX500s again, but prices have doubled since the last time I did this.....

I'm thinking 2TB + 2TB in RAID 0, OR possibly just one high-quality 4TB SATA drive, leaving room for one more at a later date.

Thoughts? Product recommendations?

 
Without TRIM you could lose (write) performance, and the SSDs write more data than "needed", which reduces the lifetime of the SSD.
Not sure how big a difference it is, though.
Edit: But it seems some SSDs have good enough garbage collection that they work fine without TRIM:
"As of 2024, many SSDs had internal garbage collection mechanisms for certain filesystem(s) (such as FAT32, NTFS, APFS) that worked independently of trimming. Although this successfully maintained their lifetime and performance even under operating systems that did not support trim, it had the associated drawbacks of increased write amplification and wear of the flash cells."
 
I see. So... what do you suggest then? Do a reformat, and then TRIM the drives while booted from a different drive, like every 6 months or so, something like that?

Maybe just forget about RAID. Get one 4TB SATA SSD for now, and get another one when it's needed. Maybe that's the way to go.
 
Are you sure that TRIM doesn't work in RAID 0? Even this post from ancient history, back when Win 7 was almost new, explains that it does if certain conditions are met. The state of affairs may be better now, or may be worse.
 
I think it does on some chipsets. I'm not sure if AMD's soft RAID is one of them.
 
More towards your needs, and less towards the title question:

Are these slots on your board open and usable?
[board photos highlighting the two PCIe x1 slots]


While they're only PCIe 3.0 x1 slots, you can still slot NVMe drives in there with very affordable passive adapters.



NVMe performance is not merely about raw bandwidth. Speaking from personal experience: even on ancient PCIe 1.0/1.1 x1 boards, the quick access times, etc. are well worthwhile
(boot drive or programs/storage).
Not to mention, Gen2 x1/x2/x4/x8 PCIe-switch-equipped dual- and quad-NVMe expander cards can be found for $50-100 (new and used).
Older QNAP switched NVMe expander cards are reported to work fine in normal PCs. They may be a more reliable 'affordable' option (vs. 'generic import' expanders, and a more affordable option than Sabrent's new and Amfeltec's used cards).
 
Yes.. I have two of them, but they are both electrically x1, though they appear full-size (that tripped me up before when I bought adapters).

All my M.2s did slow down significantly, buuuut not as badly as SATA, mind you. I'll have to sit down and do the math to see if I have any spare lanes left.

I have a B760 chipset with a 14700K, an x16 GPU and 3 * x4 M.2 drives. If I do have any lanes left, it can't be many.
 
Both work great. I saw no 'issue' with putting them in an x1 slot, as I intended to (just about) fill them anyway.
When I mostly filled the one I had for boot, it slowed down considerably. Reformatted and used for game installs, it (more or less) maxes out the X570's 4.0 x1 bandwidth.

Putting a 4-lane NVMe into a single-lane slot, you do lose overall throughput, yes.
However, you still get the benefits of ultra-low access times and low-latency small transactions.
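(For rough numbers on that tradeoff -- a minimal sketch using the commonly quoted per-lane figures, nothing measured on this particular board:)

```python
# Approximate usable per-lane PCIe bandwidth (GB/s) after encoding overhead.
per_lane_gbps = {"1.x": 0.25, "2.0": 0.50, "3.0": 0.985, "4.0": 1.969}

for gen, bw in per_lane_gbps.items():
    print(f"PCIe {gen}: x1 ~{bw:.2f} GB/s, x4 ~{bw * 4:.2f} GB/s")
# A Gen3/Gen4 x4 NVMe drive in a 3.0 x1 slot is capped near ~1 GB/s sequential --
# far below its x4 rating, still ~2x SATA, and access latency is unaffected.
```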
 
Are you sure that TRIM doesn't work in RAID 0? Even this post from ancient history, back when Win 7 was almost new, explains that it does if certain conditions are met. The state of affairs may be better now, or may be worse.
I don't know. I just do RAID through Windows, and in the places where I would normally execute a TRIM command, it's just greyed out. If there were a workaround, that would be very helpful.
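(For reference, the retrim behind that greyed-out Optimize button can also be attempted from PowerShell; a minimal sketch, assuming Windows with Python 3, run from an elevated prompt. The drive letter is a hypothetical placeholder, and on a striped volume whose driver doesn't pass TRIM through, expect an error or a no-op -- the same limitation as the greyed-out button:)

```python
import subprocess

drive_letter = "D"  # hypothetical letter for the array -- adjust to yours
completed = subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     f"Optimize-Volume -DriveLetter {drive_letter} -ReTrim -Verbose"],
    capture_output=True, text=True,
)
print(completed.stdout or completed.stderr)
```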
 
Well, there's your problem.

Since the data on the drives is considered 'replaceable' already, I'd highly recommend switching to your motherboard's RAID functions.
Admittedly, a Windows RAID will 'migrate' between Windows PCs better, but you've already noticed the feature loss.
(Not to mention, on a dynamic-use personal computer, I'm always afraid of Windows breaking it all on its own.)

edit:
GB-supplied guide.

 
I don't know much about RAID, and somebody suggested this way to me... and it has worked fine except for the aforementioned problem. It was nice when my last PC bit the dust: I was shocked that when I took the drives and put them into a new PC, all the data was still there! That was cool, and not expected at all. Would configuring through the mobo break under similar conditions, or?

Also, wondering what you think of these SSDs?

[store listing screenshot: 2x 2TB Teamgroup CX2 SATA SSDs]


Pickings are pretty slim right now. I just can't believe how much prices have gone up in the last 6 months to a year, maybe? I remember picking up a 2TB SN850X for like 100 USD. Now I can't even get bargain-bin SATAs for that price. This was the cheapest I was able to come up with... 2 x 2TB for $280 CAD with free shipping (about $210 USD). I know it's an entry-level drive, but I was thinking that since it's in RAID 0, it'd only have to do half the work, potentially hiding some of the shortcomings? Maybe just wishful thinking.

The other option I was thinking of is the SATA Netac drives, since I've been very pleased with my M.2 Netac. However... I realize there's probably some difference between their halo product and their... 'Alibaba' product lol. Also, the price still comes to $320 CAD after shipping. For no-name-brand drives!! Madness...

Anyway... thanks for the help.





-----------------------------------------------------------

Putting a 4-lane NVMe into a single-lane slot, you do lose overall throughput, yes.
However, you still get the benefits of ultra-low access times and low-latency small transactions.
Okay, so this is what I have:

CPU lanes: 20
Chipset lanes: 10 (PCIe 4.0) + 4 (PCIe 3.0)
Total: 30 (PCIe 4.0) + 4 (PCIe 3.0)

Components:
GPU: 16
M.2: 4
M.2: 4
M.2: 4
Total: 28 (PCIe 4.0)

Remaining: 2 (PCIe 4.0), 4 (PCIe 3.0). Is that right?

An adapter wouldn't take away from my GPU, right? I wish there was an app where you could assign PCIe lanes wherever you wanted them, rather than the board just re-assigning them wherever the hell it feels like :cry:
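(A quick sketch of the same arithmetic, using the lane counts as stated in the post above -- note it only counts lanes, not how the chipset groups them into ports:)

```python
cpu_pcie4 = 20        # CPU lanes: x16 for the GPU slot + x4 for one M.2
chipset_pcie4 = 10    # B760 chipset, PCIe 4.0
chipset_pcie3 = 4     # B760 chipset, PCIe 3.0

used_pcie4 = 16 + 3 * 4   # x16 GPU + three x4 M.2 drives
remaining_pcie4 = cpu_pcie4 + chipset_pcie4 - used_pcie4
print(f"Remaining: {remaining_pcie4} (PCIe 4.0), {chipset_pcie3} (PCIe 3.0)")
# -> Remaining: 2 (PCIe 4.0), 4 (PCIe 3.0)
```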
 
Your calculation is correct; a B760 board could support all that. But, just as a general remark, it's not enough to count lanes, because chipsets (and Core and Ryzen CPUs) have very limited bifurcation abilities. Lanes are grouped into ports, as Tek-Check's diagram nicely shows, and it's not possible to connect more than one PCIe device to a single port without the use of external PCIe switches.

it'd only have to do half the work
With half the chips! Haven't you thought of that?

You're looking at the CX2. What about the Lexar NS100? I see on Amazon Germany that it's ~10% cheaper. Same 3-year warranty, same amount of DRAM (zero), but lower TBW.
 
Hmm, for some reason Newegg Canada only has low-capacity versions of it, like <1TB only....? (Our selection is really bad.) Amazon, on the other hand, doesn't look bad. At 2TB it's still $7 more than the CX2, and don't forget I'll need two, but I mean, if it's a better product, that's certainly doable. Do you use this product? Would you vouch for it?
 
No, I don't have any of those, and I would choose Teamgroup at the same or a lower price, as it has a higher (or should I say more optimistic) TBW rating: 1.6 PB vs. 1 PB.

There's another Teamgroup model, the T-Force Vulcan Z: basically the same characteristics as the CX2, just newer. You can find it in the TPU database, so at least you can be relatively sure what components it's made with.
 
Oh no... the CX2 is like the Vulcan? Shit... I had that drive... it was... not a pleasant experience; I ended up returning it. Maybe I need to rethink this. Well, anyway, thanks for the help :peace:
 
Oh really? I've seen that drive at good prices, but I noticed there were some bad reviews, so I steered clear. Is the sustained write any good?
So far, in ~5000 operating hours, there has been no problem. At present I am using the 1TB SATA models in some old laptops.
AFAIK the affordable SSD is fine, as it shows 100% health every time I check the operating hours.

I might try some M.2 NVMe models and see how well they work, as some machines need more storage.
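(Checking health and operating hours like this can be scripted; a minimal sketch assuming smartmontools' smartctl is installed and on PATH -- the device path is an example, and the attribute names vary by vendor:)

```python
import subprocess

# Read the drive's SMART attribute table. Power_On_Hours is near-universal
# on SATA drives; Percent_Lifetime_Remain is Crucial's wear attribute.
out = subprocess.run(["smartctl", "-A", "/dev/sda"],
                     capture_output=True, text=True).stdout
for line in out.splitlines():
    if "Power_On_Hours" in line or "Percent_Lifetime_Remain" in line:
        print(line)
```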
 
Ah, I see, so it's just for light use then? I have some bottom-barrel SATAs in a couple of old laptops that do great at booting fast and watching YouTube. On my main PC... I'm a little more picky about performance... kind of why I'm trying to use RAID 0 as a crutch to overcome my lack of M.2 slots :laugh:
 
Long-term consequences of not using TRIM at all depend on what you use the drives for. Usually the consequences are slim to none. The only really bad scenario is drives used for constant writes of temporary files.

Without TRIM, modern SSDs still have firmware-level garbage collection, which does more or less the same job as TRIM, but done by the drive controller directly. So the drives will not fill up unnecessarily over time. TRIM is more accurate, but GC will still keep your drives in good health.

Except for some very specific scenarios, there is honestly no reason to ever manually TRIM your drives or reformat them semi-annually. Unless you issue a secure-erase command to the SSD, a format won't even do what you think it does.

Also, I'm thinking RAID 0 might do something to cover up less-than-stellar SSD quality (since prices are quite high at the moment). I was originally going to do MX500s again, but prices have doubled since the last time I did this.....
If you are concerned about the quality of the drives, then you absolutely should not put RAID 0 on top.

Why would you need RAID 0 to improve on the ~500 MB/s speed of a SATA SSD for non-critical data anyway? What is your use case? I get that high burst speed is fun and all, but assessing the actual need would be wise.
 
I just move files around a lot... Yeah, that's pretty much the reason. I don't think I'm gonna go with super low-end anyway. I ordered a 4TB MX500 for... way too much money... honestly, I don't know what I was thinking at the time. Can't RAID 0 it right away, but eventually I'll get a second one. I mean, those are the drives that worked so well this way before.... The MX500 may not have that fast burst speed, but they are very consistent in my experience.
 
Ended up getting 2 x 4TB MX500s, since the price went down a bit, and they certainly are slower than my 1TB models. I hear the DRAM was reduced, and possibly they even use QLC? Even between the two, one has significantly higher random write, like 40% higher. I wonder if one has undergone an additional downgrade, on top of everything else I've mentioned.... idk.

Anyway, w/e. It works in RAID 0. Read speeds are good, write speeds aren't that bad, and anything that needs better will go on a faster drive, so really, I got what I wanted... an 8TB downloads folder lol. :)
 
Hi,
I used Linux Mint 17, I believe, for about a year, and it killed a Crucial MX100 256GB because the firmware was incompatible with Linux's TRIM.
So it never trimmed properly, and boom, RMA hehe.
A 2TB drive might take longer to die.
 