Monday, March 1st 2021
Intel Rolls Out SSD 670p Mainstream NVMe SSD Series
Intel today rolled out the SSD 670p series, a new line of M.2 NVMe SSDs targeted at the mainstream segment. Built in the M.2-2280 form-factor with a PCI-Express 3.0 x4 host interface, the drive implements Intel's latest 144-layer 3D QLC NAND flash memory, mated with a re-badged Silicon Motion SM2265G 8-channel controller that uses a fixed 256 MB DDR3L DRAM cache across all capacity variants. It comes in capacities of 512 GB, 1 TB, and 2 TB.
The 1 TB and 2 TB variants offer sequential read speeds of up to 3500 MB/s, while the 512 GB variant reads at up to 3000 MB/s. Sequential write speeds vary, with the 512 GB variant writing at up to 1600 MB/s, the 1 TB variant at up to 2500 MB/s, and the 2 TB variant at up to 2700 MB/s. The drives offer significantly higher endurance than past generations of QLC-based drives, with the 512 GB variant capable of up to 185 TBW, the 1 TB variant up to 370 TBW, and the 2 TB variant up to 740 TBW. Intel is backing the drives with 5-year warranties. The 512 GB variant is priced at $89, the 1 TB variant at $154, and the 2 TB variant at $329.
92 Comments on Intel Rolls Out SSD 670p Mainstream NVMe SSD Series
And then you have cells which are "worn out," which retain data for a much shorter time, or not at all. The problems I've described fall into this second category: I'm talking about getting write errors, or whole parts or sectors of the SSD being detected as bad.
To be clear, I don't think anyone should use any NAND-flash-based SSD for "long term" storage, even SLC. As for advanced file systems like ZFS and BTRFS: most or all of them, I believe, bring a lot of maintenance, configuration challenges/pitfalls, and potential risk of corruption, because they are massively over-engineered, at least in ZFS's case (last time I checked, it wasn't completely stable on Linux either). In terms of reliability, I don't see what value they add over a simple, low-maintenance software RAID1 with scrubbing (see the sketch at the end of this post). Sure, these file systems have advanced features like snapshots, which are a pain to use, at least on ZFS and BTRFS, plus optional features like compression, deduplication, etc., which add unnecessary complexity to a file system.
Some of these might be valid choices for a 20 TB storage volume spanning many drives, but for a work drive of 0.5-1 TB for coding, and some 3D-modelling and photo editing, what alternatives are reliable and performant?
The best setup for development workstations I've found so far is:
1 SSD for the OS (potentially another one for VMs)
1 SSD or 2 SSDs in RAID1 for a workspace
1 HDD for daily "snapshots" (perhaps some incremental rsync, with checksums of course)
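For the snapshot HDD, here is a minimal sketch of the incremental-rsync idea in Python; the paths are placeholders, and each day becomes a browsable directory with unchanged files hard-linked against the previous day:

import datetime
import pathlib
import subprocess

SRC = "/work/"                     # placeholder: workspace to back up
DST = pathlib.Path("/mnt/backup")  # placeholder: snapshot HDD mount point

today = DST / datetime.date.today().isoformat()
previous = sorted(p for p in DST.iterdir() if p.is_dir())
link = ["--link-dest", str(previous[-1])] if previous else []

# -a preserves metadata, --checksum compares file contents rather than
# timestamps, and --link-dest hard-links files unchanged since the last
# snapshot, so each daily "snapshot" only costs the delta on disk.
subprocess.run(["rsync", "-a", "--checksum", *link, SRC, str(today)], check=True)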
Any ideas are welcome.
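And the RAID1 scrubbing mentioned above doesn't even need a dedicated tool on Linux; a minimal sketch, assuming the array is /dev/md0 and root privileges:

# Trigger a scrub ("check") on a Linux md software RAID array.
SYNC_ACTION = "/sys/block/md0/md/sync_action"  # assumes the array is md0

with open(SYNC_ACTION, "w") as f:
    f.write("check")  # md re-reads all members and verifies the mirrored data

# Progress shows up in /proc/mdstat; inconsistencies are counted in
# /sys/block/md0/md/mismatch_cnt once the check completes.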
-----
Then there is also the question of how long you should expect an SSD (TLC or QLC) to remain productive.
Personally I start planning to replace mine when they get about 2 years old, I've been burned too many times already.
The SLC cache size depends on the drive size.
Edit: The same applies to the 670p.
Or do you just like arguing about things that have nothing to do with the point of the discussion? Because so far you've chimed in about copying things over from a NAS when we were talking about installing games, and now you're arguing that I'm wrong because the amount of fixed space on the drive varies by the size of the drive, which has nothing to do with the point we were discussing: that QLC drives always have some SLC cache even when full (which your information proves I'm correct about, thanks).
I do not go through the file system to inspect the blocks on the SSD, so the measurements are not skewed by small-file accesses. This allows for the highest-speed inspection.
Even TLC-type NAND reads slowly from slightly "tired" blocks. With QLC, the results can be even worse.
I ran the same test on a TLC-type SSD that had sat unused for 3 years and 7 months. The total amount ever written to this SSD was only 1.67TB. 1.67TB!
Even though we were reading the blocks directly, the minimum read speed was 2MB/s and the average was about 50MB/s. Even the reserved, over-provisioned area only managed about 160MB/s.
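For reference, a rough sketch of this kind of raw-device read sweep; the device path is a placeholder, it requires root, and for rigorous numbers you would bypass the OS page cache (O_DIRECT, or a dedicated benchmark tool):

import time

DEVICE = "/dev/nvme0n1"    # placeholder device node; requires root
CHUNK = 4 * 1024 * 1024    # 4 MiB per read, large enough to hide syscall cost
SAMPLE = 8 * 1024 ** 3     # sweep the first 8 GiB

with open(DEVICE, "rb", buffering=0) as dev:
    done = 0
    while done < SAMPLE:
        t0 = time.monotonic()
        buf = dev.read(CHUNK)
        if not buf:
            break
        mbps = len(buf) / (time.monotonic() - t0) / 1e6
        if mbps < 100:  # flag "tired" regions reading far below normal speed
            print(f"slow region at byte offset {done}: {mbps:.1f} MB/s")
        done += len(buf)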
We found some small corruption in the JPEG files stored on the SSD. This is because error correction is not perfect, and lost charge cannot be recovered.
Short of DRAM-style self-refresh, the more powerful the error correction that has to be implemented, the less reliable the NAND cells themselves evidently are.
The Genesis Mini, a reissue of classic gaming hardware, uses SLC flash inside despite its high cost. The reason is that flash left unpowered for a few years can fail, and SLC avoids a situation where the device is already broken before it is even taken out of the box.
It is a good idea to understand the characteristics of QLC before using it: treat it as a temp folder, a temporary place to store games.
It is not recommended to use it as a boot drive.
Personally, I'd like to see all makers of SSDs provide a utility that allows the end user to manually lock the drive into SLC, MLC, or TLC mode, at the cost of storage space.
But does the drive size change the point I was making? Like I said, do you just like arguing about things that have nothing to do with the point of the conversation, or is there some other reason you constantly go off point? Yeah, but that's literally just the TBW rating of the drive divided by the capacity of the drive. The reality is that drives are always underrated; a manufacturer would be stupid to set the warranty right on the edge of the drive's actual endurance.
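(For context, that division on the 670p's own numbers from the article above gives the implied full-drive write cycles:)

# TBW divided by capacity = implied full-drive writes (670p figures from the article).
for capacity_tb, tbw in ((0.512, 185), (1.0, 370), (2.0, 740)):
    print(f"{capacity_tb} TB drive: {tbw} TBW -> ~{tbw / capacity_tb:.0f} full-drive writes")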
Clearly the smaller drive capacity doesn't have 12GB of SLC cache even when full.
Also, only Intel drives do this; other drives actually run out of SLC cache, such as this lovely thing.
www.techpowerup.com/review/samsung-870-qvo-1-tb/6.html
Or this, even if it's not nearly as extreme.
www.techpowerup.com/review/sabrent-rocket-q-1-tb-m-2-nvme-ssd/6.html
Please, show me exactly where in either of those two reviews it says the SLC Cache is entirely gone when the drive is 80% full.
It makes me wonder if you understood any of what you wrote yourself above about Intel's SLC cache having a fixed minimum size.
Who mentioned anything about 80% full? The Samsung drive runs out of SLC cache after you write 42GB to it, as that's how large the SLC cache is. That's why the write speeds drop to ~100MB/s.
The Sabrent drive has a much larger SLC cache at 240GB, but once you run out, you're down to ~150MB/s.
Intel's 670p never drops that low; its small, fixed SLC cache prevents that. As you can see below, the 670p never really drops below 400MB/s.
www.tomshardware.com/reviews/intel-ssd-670p-m-2-nvme-ssd-review/2
I think you need to take a refresher course on how SSDs and SLC caching work.
The 6GB minimum is the smallest size the cache will be on the 512GB 660p as space on the drive is used up. It does not guarantee that the 6GB will never be filled or will always be available. If the drive is 80% full, and you write 15GB to it, 6GB will be written at the fast SLC speed, the other 9GB will write directly to QLC at the much slower QLC speed. This is fundamental caching stuff here. The informational pictures you posted just a few posts up explain exactly this. Did you not understand what you were posting?
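To put rough numbers on that 15GB example (the speeds here are illustrative assumptions, not measured figures):

# Toy model of a partially available SLC cache: 6 GB absorbs the burst,
# the remaining 9 GB goes straight to QLC.
SLC_SPEED = 2500   # MB/s, assumed SLC-cache write speed
QLC_SPEED = 400    # MB/s, assumed direct-to-QLC write speed
CACHE_MB = 6 * 1024
WRITE_MB = 15 * 1024

slc_part = min(WRITE_MB, CACHE_MB)
qlc_part = WRITE_MB - slc_part
seconds = slc_part / SLC_SPEED + qlc_part / QLC_SPEED
print(f"effective write speed: {WRITE_MB / seconds:.0f} MB/s")  # ~600 MB/s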
And if you actually read the Tom's Hardware article you posted, they say that the SLC cache does get filled on the 670p; the only reason it still writes at 400MBps after the cache is full is that the drive can actually write directly to QLC at 400MBps. There isn't always 6GB of SLC cache available when you are writing large amounts of data to the drive.
And as for the 80% full, that was the original statement about SLC cache that started this discussion. The original statement was that once a drive is 80% full, there is no SLC cache anymore. Come on, you gotta keep up with the conversation if you're going to participate. That is the statement you are defending and the statement I disagreed with.
At this point it is obvious that your complete lack of understanding on how the technology works and your complete inability to stay on point in the discussion means it is pointless to continue this discussion with you.
From Tom's Hardware. There's clearly no point discussing this with you, as you've made up your mind about how things are without understanding the basics.
But let's do this. Explain this to me: if your idea of how the SLC cache works on Intel drives is true, why does the 2TB 660p drop to 100MBps write speeds when the SLC cache is full, even though it supposedly has the same 24GB minimum SLC cache size (or "Static SLC Span," if you want to call it that) as the 670p? Answer me that one question.
So that is straight from Intel: both the 660p and 670p have the same size static cache.
Both have the same 6GB-per-512GB static SLC cache size.
And the first Tom's Hardware article you posted, along with the previous page of that review, confirms the 660p has the same static 6GB SLC minimum per 512GB.
So, now that we have confirmed that the 660p has the same static SLC cache size as the 670p, I'll ask you again: if your idea of how the SLC cache works on Intel drives is true, why does the 2TB 660p drop to 100MBps write speeds when the SLC cache is full, even though it definitely has the same 24GB minimum SLC cache size (or "Static SLC Span") as the 670p? Answer me that one question.
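For reference, the static-cache figure both sides keep citing is just linear scaling of the 6GB-per-512GB from the Intel material quoted above:

# Static SLC floor scales linearly with capacity: 6 GB per 512 GB.
STATIC_GB_PER_512GB = 6

for capacity_gb in (512, 1024, 2048):
    static_gb = capacity_gb // 512 * STATIC_GB_PER_512GB
    print(f"{capacity_gb} GB drive -> {static_gb} GB static SLC")
# 512 -> 6, 1024 -> 12, 2048 -> 24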
In pretty much every area, manufacturers estimate how heavily their average user will actually use the product.
It's no accident that enterprise SSDs cost up to several times what their consumer counterparts do. Some analogues:
Graphics cards - put one under sustained load (e.g. mining) and it may burn out after 3-6 months, while a Tesla card will not.
CPUs - Xeons rated for 24/7 operation tend to cost 20-30% extra or more for "the same specs".
Or something very different - Internet connections: if everyone used the bandwidth they paid for, ISPs would collapse.
Many products and services are based on the assumption of people not using what they pay for.
Companies try to estimate how much, and how hard, their user base will use a product, and price that risk into warranty terms, RMAs, etc. If a company can lower its quality, and the increased profits outweigh the RMAs, and the reputation isn't too damaged, it may do it.
Vendors would no longer be able to command the huge premium on SLC SSDs, but those who reject the low reliability of QLC would buy more SSDs.
If I could use a 2TB QLC SSD as a 256GB SLC SSD, I would happily use it as my boot drive without worrying about the degradation of the NAND cells and the resulting deterioration of data-read performance.
You could also expect a dramatically longer life for video-editing tasks, and you wouldn't have to worry about the large amounts of data written while browsing video sites.
If it could be applied to smartphones as well, they would stay comfortable to use even after more abuse than before.
The 1Gbps network connection is the bottleneck: ~90MB/s in practice. I've been looking for cheap 10Gbps so that I can actually reach the speed of modern hard drives over the network (~200MBps x2 == 400MBps, i.e. roughly 4Gbps once you add protocol overhead). 2.5Gbit Ethernet is beginning to get popular these days, but it's hard to find 5Gbit or 10Gbit Ethernet at reasonable prices. SFP+ / fiber optics might be the better option.
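A quick unit-conversion sanity check on those link speeds (the ~90% efficiency factor for protocol overhead is an assumption):

# MB/s of payload -> raw link rate in Gb/s, assuming ~90% protocol efficiency.
def link_gbps(mb_per_s, efficiency=0.9):
    return mb_per_s * 8 / 1000 / efficiency

print(f"1 Gb/s link -> ~{1000 / 8 * 0.9:.0f} MB/s usable")          # ~112 MB/s
print(f"2 mirrored HDDs (400 MB/s) -> ~{link_gbps(400):.1f} Gb/s")  # ~3.6 Gb/s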
All in all, I probably should just experiment with 1Gbps first, with maybe just 2x mirrored hard drives (the simplest redundancy setup, with no effort spent optimizing speeds because of the 1Gbps bottleneck).
--------
All slower than a typical SSD, of course, but a dedicated NAS has many "reliability" benefits. SSD-only for the local computer, with iSCSI to virtually map partitions on the NAS into the workstation (and if my workstation dies due to SSD issues, I can theoretically just reformat everything and transfer the iSCSI target over to the next build).