
Samsung 870 EVO - Beware, certain batches prone to failure!

Yes, to make it easier for people... :confused: I do remember that from a long time ago. And yes, now it's NOT RAID...

I also have an old 256GB 860 EVO here that still works okay, zero CRC errors, though it's well used already.
In the BIOS I was talking about, what's actually AHCI is listed as "RAID"!

I'm not as suspicious about the 860 Evos. But I have two of them, and the label of one shows a different origin, and also uses a different font!

20240730_205832.jpg
20240730_205929.jpg
 
I also have an older 500GB 860 EVO; as you can see, ZERO CRC errors. Under normal circumstances these shouldn't appear.
The 860 EVO is really ROCK SOLID, with very good, strong NAND; they don't make them like this anymore... :(

This one is in the M.2 format but is still SATA-connected.

Screenshot 2024-07-31 031708.png


And this is my older 860 EVO, same as yours. Mine is only a few months younger... :)

The 860 EVO was better than the 870 EVO; I have never seen one that was broken. They don't make NAND like this anymore; at the time it was the best quality you could get.
The best ones were always Made in Korea. I had the most problems with the Made in China ones.

IMG_20240731_033108.jpg


The 860 EVO in M.2 format looks like this, but it's still SATA, not NVMe/PCIe.

Screenshot 2024-07-31 051020.png
 
I also had my 2TB 870 Evo fail on me a while ago with an LBA access error. It had only 4.1 TBW. All extended SMART tests would fail.

Drive info: Samsung SSD 870 EVO 2TB
Serial: S621NF0R****
FW: SVT01B6Q
TBW: 4.1

I upgraded the firmware to the fixed SVT02B6Q version. However, this did not immediately fix the failing SMART tests. I then did a secure erase so the controller could start mapping blocks from scratch. After that I ran a full f3write/f3read cycle to make sure all blocks could be written and read properly. This was successful!
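
For anyone who wants to reproduce that verification pass, it was roughly this sequence (a Python sketch wrapping the same Linux tools; the device node and mount point are placeholders for your own setup, and remember the secure erase beforehand wipes the entire drive):

[CODE]
# Fill-and-verify plus an extended SMART self-test, as described above.
import subprocess

DEVICE = "/dev/sdX"        # placeholder -- the 870 Evo's device node
MOUNTPOINT = "/mnt/test"   # placeholder -- a fresh filesystem on the drive

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1) Fill all free space with test data, then read everything back (f3).
run("f3write", MOUNTPOINT)
run("f3read", MOUNTPOINT)

# 2) Start an extended (long) SMART self-test; it runs in the background
#    and can take hours on a 2TB drive.
run("smartctl", "-t", "long", DEVICE)

# 3) Once it has finished, the self-test log shows PASSED or the failing LBA.
run("smartctl", "-l", "selftest", DEVICE)
[/CODE]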

Since then, all extended SMART tests have been passing again.

So I am pretty sure the firmware update actually fixes the issue. Reading a flash cell's state and/or determining its health might not have been implemented ideally in the initial firmware, maybe due to the new flash technology. It could just have been a threshold (maybe something like cell charge?) that was not implemented correctly in the old firmware. Who knows...

If you are affected, I'd definitely suggest issuing a secure erase command after upgrading the firmware.
 
I also had my 2TB 870 Evo fail on me a while ago with an LBA access error. [...] If you are affected, I'd definitely suggest issuing a secure erase command after upgrading the firmware.

Well, yes and no... Every SSD has spare blocks on board for this: the data in bad blocks is copied to spare blocks, and the bad ones are mapped out as defective. So the bad blocks are still there; you can't magically repair them.
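
In toy form, the remapping looks something like this (a deliberately simplified Python model; the block numbers and the spare pool size are invented, and a real FTL is far more involved):

[CODE]
# Toy model of bad-block remapping: a failing block is retired and its
# logical address is pointed at a block from the finite spare pool.
# The bad block physically stays on the die; it is just never used again.
class RemapTable:
    def __init__(self, spare_blocks):
        self.spares = list(spare_blocks)  # finite spare pool
        self.remap = {}                   # logical block -> replacement
        self.retired = set()              # blocks marked defective

    def retire(self, block):
        if not self.spares:
            raise RuntimeError("spare pool exhausted -- drive is dead")
        self.retired.add(block)
        self.remap[block] = self.spares.pop()

    def physical(self, block):
        return self.remap.get(block, block)

ftl = RemapTable(spare_blocks=range(10000, 10100))  # invented pool size
ftl.retire(42)           # block 42 went bad; its data was copied out first
print(ftl.physical(42))  # -> a spare block; the bad one is out of sight
[/CODE]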

After a secure erase they are out of sight, but internally they are still there, marked as defective. A good SSD should never develop bad blocks, except after years or very intensive use. If you have an SSD with bad blocks early on, send it back and get a new one; use the 5-year warranty! I would never accept an SSD that develops bad blocks after a short time. That's why you have a warranty in the first place!!

Anyway, bad blocks on any SSD are a bad sign. New firmware does not repair them; they are still there. It's the same story as what happened earlier with the 980 and 990 PRO SSDs.
New firmware just stops the SSD from degrading further, but the damage already done stays. And of course spare blocks are limited: if you use them all up, you end up with a dead SSD.

As with old HDDs: don't trust an SSD with bad blocks, certainly don't put important data on it, and make sure you have a backup of it!
 
After a secure erase they are out of sight, but internally they are still there, marked as defective. A good SSD should never develop bad blocks, except after years or very intensive use.

Yes, I know that the bad blocks are still there. My SSD has a bad block count of 6 (if that SMART value is a counter). That is perfectly normal. Even new SSDs may have bad blocks, and that's also normal. Crucial, for example, has a "factory bad block count" attribute in their SMART readings.

That's what wear levelling is for. And that's basically an algorithm, which can be changed through a firmware update. It's possible that the old firmware's wear levelling did not distribute writes evenly (which is its job), causing some cells to die earlier because they were hit by too many writes. It's also possible that the wear levelling did not properly mark broken cells as bad. Either way, none of these options is a big concern to me.
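
For illustration, the core idea of wear levelling fits in a few lines (a deliberately simplified Python sketch; real controllers also juggle static data, TRIM and over-provisioning):

[CODE]
# Simplified wear levelling: always write to the free block with the
# fewest erase cycles, so no single block wears out ahead of the rest.
erase_count = {block: 0 for block in range(8)}  # tiny invented flash

def pick_block():
    return min(erase_count, key=erase_count.get)

for _ in range(100):
    erase_count[pick_block()] += 1  # one program/erase cycle consumed

print(erase_count)  # counts end up nearly equal (12 or 13 everywhere)
[/CODE]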
 
No one would accept an HDD with bad blocks to begin with; the same goes for an SSD. I have many SSDs here, and none of them has bad blocks. Not even after years of service and terabytes written.

It's not normal behavior as you call it, but if you're fine with it, I rest my case. I hope you have luck with it; Samsung gives a warranty for this, and it explicitly says without ANY defects...
Samsung is happy with customers who don't care about this. As I said, that's why you have a warranty: for a full 5 years they say NO defects. And you're fine with this... o_O

I lost count of how many I sent back during the pandemic and later on; I always got a new one WITHOUT bad blocks!

If it's normal behavior like you say, why does Samsung send me a new one with a note saying "We feel sorry"...

This is what WE here in my shop call a healthy SSD.

Screenshot 2024-09-12 110733.jpg


I just took a NEW Crucial SSD out of the box; can you tell me where I can see the factory bad block count?

Screenshot 2024-09-12 113352.jpg
 
No one would accept an HDD with bad blocks to begin with; the same goes for an SSD. [...] I just took a NEW Crucial SSD out of the box; can you tell me where I can see the factory bad block count?

Your "healthy" SSD has a wear levelling count of 6. Same as mine. This is normal wear levelling and refers to dead flash cells. This does not entitle you for an RMA.

BTW: I also would not accept an HDD with bad sectors. That's a totally different story. An HDD does not have spare sectors, but an SSD does (in fact, a 2TB SSD must have thousands of spare cells). Wear levelling usually works such that whenever a cell is read and its charge falls below a certain threshold, the cell is marked as bad and the read value is stored in a different cell. And this could be exactly the part that was wrong in the original firmware.
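
That read path, as a sketch (the threshold value and the cell model are invented for illustration; real NAND works with per-page voltage windows and ECC):

[CODE]
# Sketch of the read-path retirement described above: if a cell's charge
# has drifted below a threshold when read, the value is rewritten to a
# healthy cell and the weak one is marked bad. The threshold itself is
# exactly the kind of constant a firmware update could correct.
READ_THRESHOLD = 0.30  # invented value

def read_cell(cells, bad, addr, spare_addrs):
    charge, value = cells[addr]
    if charge < READ_THRESHOLD and addr not in bad:
        bad.add(addr)                   # retire the weak cell
        addr = spare_addrs.pop()
        cells[addr] = (1.0, value)      # rewrite the data at full charge
    return addr, value

cells = {0: (0.25, b"\x42")}            # a cell whose charge drifted low
addr, value = read_cell(cells, set(), 0, [100])
print(addr, value)                      # -> 100 b'B' (moved; cell 0 retired)
[/CODE]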
 
I just took a NEW Crucial SSD out of the box; can you tell me where I can see the factory bad block count?
Apparently some Micron and Crucial SSDs have that; here is an example. It's the Micron-specific SMART attribute BD. It's possible that older drives expose more data than newer ones.

But I wouldn't worry about that part. I have a new Kingston KC3000 2TB with Micron flash, and the Flash ID utility lists ~400 sections factory-marked as bad. Those are either blocks or larger units; unfortunately, I can't find out exactly which. That's all fine: flash chips (or HDD platters, for that matter) don't leave the factory without defects, but they shouldn't develop more defects later.
 
Your "healthy" SSD has a wear levelling count of 6. Same as mine. This is normal wear levelling and refers to dead flash cells. This does not entitle you for an RMA.

BTW: I also would not accept a HDD with bad sectors. That's a totally different story. A HDD does not have spare sectors, but a SSD has (in fact a 2TB SSD must have thousands of spare cells). Wear levelling usually works the way that whenever a cell is read and it's charge falls below a certain threshold, this cell is marked as bad and the read value will be stored in a different cell. And this could be the part which was just wrong on the original firmware.

Where are you getting that?
Wear leveling count means exactly this: the attribute represents the number of times a block has been erased. This value is directly related to the lifetime of the SSD. The raw value of this attribute shows the average erase cycles across all blocks.

That has absolutely nothing to do with bad blocks... First you said you had bad blocks; now you say it's the wear leveling count???? As if I don't know that a wear leveling count is normal for an SSD. Bad blocks show up in the Reallocated Sector Count; do you see any in mine? No. When an SSD starts developing a Reallocated Sector Count, that SSD is trash. Then you are entitled to a new one.
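
The difference is easy to check straight from the SMART table (a small Python sketch around smartctl; the device path is a placeholder):

[CODE]
# Print the two attributes this argument hinges on: Wear_Leveling_Count
# (attribute 177 on Samsung drives, normal erase-cycle bookkeeping) versus
# Reallocated_Sector_Ct (attribute 5, actual retired blocks).
import subprocess

out = subprocess.run(["smartctl", "-A", "/dev/sdX"],  # placeholder device
                     capture_output=True, text=True, check=True).stdout

for line in out.splitlines():
    if "Wear_Leveling_Count" in line or "Reallocated_Sector_Ct" in line:
        # A non-zero RAW_VALUE on Reallocated_Sector_Ct is the RMA signal;
        # a small Wear_Leveling_Count raw value is just normal wear.
        print(line)
[/CODE]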
 
PNY CS900 = another one to be on the lookout for. Another person here reported theirs failing, while I have one that looks like it has bad blocks! It corrupted Windows 10 files and randomly became extremely slow!

The CS900 is a monster letdown! It sure doesn't look solid like their CS1111! (IIRC)
 
No one would accept an HDD with bad blocks to begin with; the same goes for an SSD.

Any initial production bad blocks would be hidden from view, with a user-facing count of 0.
 
Any initial production bad blocks would be hidden from view, with a user-facing count of 0.

It has always been like that; nothing is perfect. But when new, you should not get any reallocated sectors... especially on an SSD these days. If you do anyway, you have a bad product in your hands. Send it back and use the warranty.
 
Yeah, it caused Windows 10 and/or the bootloader to hang for many seconds before I eventually got to the login prompt. It seemed slower than a fragmented WD Blue HDD!

Luckily, it was in the second build I got during the very early pandemic ('20). The SSD could be trusted even less than the already yucky-by-today's-standards PSU that powered it!

The PSU would have seemed alright if it were 2008!
 
It feels so bad, especially if you bought an expensive new one and it begins to degrade fast.
I know very well that wear leveling is normal behavior for any SSD on the market now, but bad blocks and reallocated sector counts? Then you may have a product that was made Monday morning or Friday evening... :( And the whole RMA process begins... :mad:
 
It feels so bad, especially if you bought an expensive new one and it begins to degrade fast.
It was a prebuilt I got from the same place I returned a laptop to, because of the severe CSME flaw that only affects Coffee Lake and earlier, with no BIOS update to be found! That was a laptop with a Core i7-9750H, IIRC. I got a desktop instead, but it came with a second-gen Ryzen (Ryzen 5 2600). That A320 motherboard now has a Ryzen 7 3700X. Even that so-so-looking motherboard has an NVMe slot, so that SSD got taken out of there quickly! That desktop build rescued me, because it came with a GeForce GTX 1660 Super, which I had to use when I couldn't even use my Radeon RX 5600 XT!

I wish they had put an NVMe SSD in there to start with. But that was the least of my worries. The biggest issue was the SATA SSD that was installed. Obviously faulty!
 
SSDs are fast, but nowadays they can also give you quite a headache if problems arise!

Oh man, don't remind me of the pandemic period; back then aspirin was within reach of anyone working with SSDs.
 
SSDs are fast, but nowadays they can also give you quite a headache if problems arise!
The SSD was by far the worst! It didn't even last as long as a PSU from the worst of the "stereotypical bad caps period": a PSU I got in 2005 had at least one bad cap by 2011!

I think it was the +5V standby cap! It's usually the secondary caps that go bad!
 
I even had a bad ATX power supply that was used only two or three times; after being forgotten for some years, it just blew up in our hands, with exploded caps inside.
It was from the brand SuperFlower, not a cheap one. And yes, you got it: they were on the secondary side.

Now I mostly use Corsair power supplies; I've never had a bad one so far, though they are a bit more costly than others.
 
I even had a bad ATX power supply that was used only two or three times; after being forgotten for some years, it just blew up in our hands, with exploded caps inside.
It was from the brand SuperFlower, not a cheap one.
Usually I see them in other brands. But 2004 and 2005 seemed to be by far the worst years for caps in PSUs! I've seen caps in a 2003 PSU outlast them, at least before bulging!

And that was during the period when Topcat saw a load of bad caps on motherboards! So I think it actually got worse in 2004 and 2005!
 
Even now, there are still millions of those bad caps around, waiting to explode or leak out and destroy the printed circuit board.
 
Even now, there are still millions of those bad caps around, waiting to explode or leak out and destroy the printed circuit board.
For 2004 and 2005, bad secondaries were common in PSUs and on motherboards. It seemed to be mostly just motherboards before that, unless it was a really cheap PSU!
 
How many bad 870 EVOs made in 2021-2022 are still sitting unused out in the world? Luckily you can see the production date on the side: if you have an older one, just send it back or don't accept it, because it's rotten inside. Don't even bother doing a firmware update on them; new out of the box, you get reallocated sector counts from the moment you use them.

Nowadays you can trust them again; I haven't seen any bad ones come through anymore. Just make sure it has a production date from 2024 and you are good to go.
 
How many bad 870 EVOs made in 2021-2022 are still sitting unused out in the world?
No wonder I still haven't bought any 870 Evo, despite the specs looking good for SATA. Even the ones that look genuine are still failing!
 
No wonder I still haven't bought any 870 Evo, despite the specs looking good for SATA. Even the ones that look genuine are still failing!

As I see it, SATA is losing its place now in favor of NVMe SSDs. I don't sell them as much anymore, and customers now mostly ask for NVMe SSDs.
They are still good for breathing new life into old laptops equipped with an HDD. Newer laptops already come with NVMe as the standard storage medium.
 