
Samsung Introduces the 990 EVO SSD with PCIe 5.0 x2 Interface

The usability of a PCIe 5.0 x2 link is questionable at the moment - the SSD can only work in this mode when connected to a Threadripper 7000 CPU, which can split each of its PCIe 5.0 x16 buses into eight 5.0 x2 links. Not even Xeons can do that (they can only split 5.0 x16 into eight 4.0 x2 links).
 
I'll join the chorus of people curious about that PCIe 5.0 x2 / 4.0 x4 thing they did - like, what the heck? That's not how lanes usually work :wtf:

It's a DRAM-less mid-range drive, so yeah...

That's a pretty big and sad downgrade for the EVO series. The EVO was usually a very sensible choice over the Pro: slightly cheaper, slightly slower, but overall very good. Not anymore, I guess.

The usability of a PCIe 5.0 x2 link is questionable at the moment - the SSD can only work in this mode when connected to a Threadripper 7000 CPU, which can split each of its PCIe 5.0 x16 buses into eight 5.0 x2 links. Not even Xeons can do that (they can only split 5.0 x16 into eight 4.0 x2 links).

The PCIe distribution on most current-generation boards sucks in my opinion - not that it was great in the past either. With PCIe 4.0 it was already perfectly sensible to have SSDs running at x2 if you're going to put four M.2 slots on a single board; with PCIe 5.0 it becomes even more ridiculous and wasteful.
 
PCIe 5.0 x2 Interface

Wait, is that not the same bandwidth as PCIe 4.0 at x4 lanes?

It's the same bandwidth, but you can't just cut or double lanes like that. My guess is the controller has two lanes at 5.0 and another two at 4.0; when negotiating the connection, if it encounters a 5.0 host it presents the two 5.0 lanes, and if it's a 4.0 host it downgrades the two 5.0 lanes to 4.0 to get the full x4 at 4.0.

Not sure how much that goes against the standard or not; it might work well enough, but it doesn't sound like something that should be allowed.
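
For anyone wanting to sanity-check the bandwidth question quoted above, here's the back-of-the-envelope maths as quick Python (assuming the standard 16 GT/s and 32 GT/s per-lane rates and 128b/130b encoding for 4.0 and 5.0; the function name is just mine):

Code:
# Per-lane throughput in GB/s: transfer rate (GT/s) x 128/130 encoding / 8 bits per byte
def lane_gbps(gt_per_s):
    return gt_per_s * (128 / 130) / 8

pcie4_x4 = 4 * lane_gbps(16)  # four PCIe 4.0 lanes
pcie5_x2 = 2 * lane_gbps(32)  # two PCIe 5.0 lanes

print(f"PCIe 4.0 x4: {pcie4_x4:.2f} GB/s")  # ~7.88 GB/s
print(f"PCIe 5.0 x2: {pcie5_x2:.2f} GB/s")  # ~7.88 GB/s - same raw bandwidth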
 
The usability of a PCIe 5.0 x2 link is questionable at the moment - the SSD can only work in this mode when connected to a Threadripper 7000 CPU, which can split each of its PCIe 5.0 x16 buses into eight 5.0 x2 links. Not even Xeons can do that (they can only split 5.0 x16 into eight 4.0 x2 links).
Why is it so hard to understand how PCIe works? You don't need to use all the lanes in a physical interface. Of course you won't gain "extra" lanes to use for something else either if you put a PCIe 5.0 x2 drive in a PCIe 5.0 x4 M.2 slot, but it will work just fine.
The only potential issue here is if the drive doesn't detect that it's in a PCIe 5.0 interface properly, but even so, it shouldn't be a huge issue based on the early benchmarks shared earlier in the comments here.
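
To put that point in code form - this is just my own simplification of link training, not the actual PCIe procedure, and the function name is made up:

Code:
# Simplified view of PCIe link negotiation: the link ends up at the
# highest generation and widest width that both ends support.
def negotiate(host_gen, host_lanes, dev_gen, dev_lanes):
    return min(host_gen, dev_gen), min(host_lanes, dev_lanes)

# A PCIe 5.0 x2 drive in a PCIe 5.0 x4 M.2 slot simply links at 5.0 x2;
# the two unused slot lanes sit idle, but everything works.
print(negotiate(host_gen=5, host_lanes=4, dev_gen=5, dev_lanes=2))  # (5, 2)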

PCIe 5.0 x2 Interface

Wait, is that not the same bandwidth as PCIe 4.0 at x4 lanes?
It should be, but it doesn't appear to be the same for all benchmarks if you look at the early benchmarks linked to earlier in this thread, at least not for this drive.

It's the same bandwidth, but you can't just cut or double lanes like that. My guess is the controller has two lanes at 5.0 and another two at 4.0; when negotiating the connection, if it encounters a 5.0 host it presents the two 5.0 lanes, and if it's a 4.0 host it downgrades the two 5.0 lanes to 4.0 to get the full x4 at 4.0.

Not sure how much that goes against the standard or not; it might work well enough, but it doesn't sound like something that should be allowed.
Did you check out the link to the review earlier in this thread? It really does seem to switch between the two, so Samsung has figured out how to do something no one has done before.
 
Why is it so hard to understand how PCIe works? You don't need to use all the lanes in a physical interface.
So many of the people commenting here seem incapable of understanding this.

It should be, but it doesn't appear to be the same for all benchmarks if you look at the early benchmarks linked to earlier in this thread, at least not for this drive.
Fits my guess of how Samsung has made this work: they're reusing the EVO's PCIe 4.0 controller, just with some extra filter algorithms (and maybe some basic hardware) in front of it. When the controller is running in PCIe 5.0 x2 mode, these filters are active to de-aggregate the data stream from two 5.0 lanes into four 4.0 ones for data coming into the drive, and to do the opposite - aggregate 4.0 x4 into 5.0 x2 - when data is flowing out, thus adding some overhead.

In 4.0 x4 mode the filters are unnecessary and thus disabled, so the data passes through to the controller as-is, there's zero overhead and the drive performs almost exactly as an EVO would.
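
Purely to illustrate that speculation (this is not how Samsung says it works - nobody outside Samsung knows - just a toy sketch of the de-aggregate/aggregate idea in Python):

Code:
# Toy sketch: data arriving on two fast (5.0-rate) lanes is fanned out
# round-robin across four slower (4.0-rate) internal lanes, and merged
# back in order for the return direction.
def deaggregate(stream, internal_lanes=4):
    lanes = [[] for _ in range(internal_lanes)]
    for i, chunk in enumerate(stream):
        lanes[i % internal_lanes].append(chunk)
    return lanes

def aggregate(lanes):
    out = []
    for i in range(max(len(lane) for lane in lanes)):
        for lane in lanes:
            if i < len(lane):
                out.append(lane[i])
    return out

stream = list(range(8))
assert aggregate(deaggregate(stream)) == stream  # round-trips losslessly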
 
Fits my guess of how Samsung has made this work: they're reusing the EVO's PCIe 4.0 controller, just with some extra filter algorithms (and maybe some basic hardware) in front of it. When the controller is running in PCIe 5.0 x2 mode, these filters are active to de-aggregate the data stream from two 5.0 lanes into four 4.0 ones for data coming into the drive, and to do the opposite - aggregate 4.0 x4 into 5.0 x2 - when data is flowing out, thus adding some overhead.

In 4.0 x4 mode the filters are unnecessary and thus disabled, so the data passes through to the controller as-is, there's zero overhead and the drive performs almost exactly as an EVO would.
Yeah, that seems like the logical way of doing it, but until we have some actual proof, we can only speculate.
Found another review, in English this time, but it seems like Samsung hasn't told reviewers exactly how it works.
 
Huh, interesting. Can't say I've seen this weird PCIe lane count trick before, but it's cool - hopefully Samsung will tell us how it works in some level of detail.

Curious what they'll call their next generation of SSDs. 991 EVO/Pro? 1000 EVO/Pro? 1K EVO/Pro?
 
Fits my guess of how Samsung has made this work: they're reusing the EVO's PCIe 4.0 controller, just with some extra filter algorithms (and maybe some basic hardware) in front of it. When the controller is running in PCIe 5.0 x2 mode, these filters are active to de-aggregate the data stream from two 5.0 lanes into four 4.0 ones for data coming into the drive, and to do the opposite - aggregate 4.0 x4 into 5.0 x2 - when data is flowing out, thus adding some overhead.

In 4.0 x4 mode the filters are unnecessary and thus disabled, so the data passes through to the controller as-is, there's zero overhead and the drive performs almost exactly as an EVO would.

Isn't that what chipsets and PLX switches do? How did they put something like that in the SSD controller while staying performance- and cost-competitive?
 
Would be nice if this allows you to install four of them on an adapter card in an x8/x8 bifurcation setup too, and use the other x8 lanes for the GPU.
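
Quick lane-budget maths for that idea (hypothetical setup, and note it would still need the platform to bifurcate that x8 down to four x2 links, which per the first post is rare):

Code:
# Hypothetical: one x16 CPU slot bifurcated x8/x8, with four 5.0 x2 drives
# on a quad-M.2 riser in one half and the GPU on the other.
total_lanes = 16
gpu_lanes = 8           # one x8 half for the GPU
drive_lanes = 4 * 2     # four PCIe 5.0 x2 drives
assert gpu_lanes + drive_lanes <= total_lanes  # 8 + 8 = 16, it fits on paper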
 
like what??
Hi,
Released with deadly suicidal firmware, now fixed.
Write speeds drop a lot after a short time, according to a couple of reports on TPU; one was on a PCIe 3.0 system, yet writes dropped from ~3200 MB/s to ~2500 MB/s, which is odd to say the least.

 
Isn't that what chipsets and PLX switches do? How did they put something like that in the SSD controller while staying performance- and cost-competitive?
Disclaimer: I'm a software, not a hardware, developer, but there are some things (algorithmic and/or time complexity) that both domains deal with, so here's my best guess.

It almost certainly boils down to the number of PCIe lanes involved. Aggregating and de-aggregating the data stream(s) to/from a set of lanes has processing overhead; the higher that overhead, the more PCIe bandwidth you effectively lose (because the data transfer stalls while the agg/deagg processing is ongoing), and the more lanes involved, the more processing is required - thus the higher the total overhead. That's why Samsung can get away with a relatively simple 4 => 2 or 2 => 4 mapping, whereas enterprise hardware deals with far higher counts like 32 => 16 lanes and vice versa. The latter is the level at which you need to build dedicated hardware for this processing, so that the overhead, and therefore the bandwidth loss, is as small as possible.

It's also why PCIe lane switches became too expensive to use in desktop applications - as per-lane bandwidth doubles with each new PCIe version, the hardware required to switch those lanes with acceptably low bandwidth losses has necessarily become more and more complex. On top of that you have ever more stringent electrical requirements to support that higher bandwidth, which means more components, which also increases cost.

Ever-faster data transfer links are a double-edged sword - they're great for consumers, but horrible for engineers. This is also why new versions of PCIe have appeared less frequently than earlier ones: it's taking longer and longer for the engineers building PCIe devices to (re-)design their components to adequately handle the ever-increasing bandwidth and lane counts.
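
A toy model of that intuition - every number here is invented purely for illustration, this is not measured data:

Code:
# Toy model: assume the agg/deagg step stalls the link for some fixed
# fraction of time per lane being re-mapped (the 0.5% figure is made up).
def effective_bandwidth(raw_gbps, lanes_remapped, overhead_per_lane=0.005):
    stall_fraction = min(overhead_per_lane * lanes_remapped, 1.0)
    return raw_gbps * (1 - stall_fraction)

print(effective_bandwidth(7.88, lanes_remapped=4))   # ~7.72 GB/s - small hit for a 2<->4 mapping
print(effective_bandwidth(63.0, lanes_remapped=32))  # ~52.9 GB/s - much bigger hit for 16<->32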
 
TL;DR: Not worth buying because of the low speed, but it's an interesting idea and Samsung is onto something.
 
:roll: haha, all drives are hybrid as PCIe is backwards compatible.

Not this type of hybrid. PCIe is backwards compatible, but you lose half the bandwidth because the lanes are slower. What Samsung did here is use a different number of lanes depending on whether you're running PCIe 5.0 or 4.0 - x2 or x4 respectively - which means you have roughly the same bandwidth available on either spec.

Haven't looked at reviews yet, so I can't say whether it ends up being interesting in practice, but in theory at least it's pretty interesting.
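
In plain numbers (same per-lane maths as earlier in the thread), the difference between ordinary backwards compatibility and what this drive supposedly does:

Code:
def lane_gbps(gt_per_s):
    return gt_per_s * (128 / 130) / 8

conventional_5x2_on_4_0 = 2 * lane_gbps(16)  # normal 5.0 x2 drive stuck at 4.0 x2 -> ~3.94 GB/s
hybrid_on_4_0 = 4 * lane_gbps(16)            # this drive widens to 4.0 x4        -> ~7.88 GB/s
hybrid_on_5_0 = 2 * lane_gbps(32)            # native 5.0 x2 mode                 -> ~7.88 GB/s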
 
Haven't looked at reviews yet, so I can't say whether it ends up being interesting in practice, but in theory at least it's pretty interesting.
The early reviewers at tweakers.net obviously had issues with either their test system or immature firmware - the SSD had far worse performance in 5.0 mode in many tests, and the 1TB variant in 5.0 mode was worse still. Tom's Hardware Guide's review of a 2TB sample at least shows consistency and only small differences between 4.0 and 5.0, in favour of 5.0 mode. The 990 EVO is just another SSD, not particularly good or bad, and the price is now 146€/2TB at amazon.de, just where it should be relative to other SSDs.

So many of the people commenting here seem incapable of understanding this.
The usability of a PCIe 5.0 x2 link remains questionable a few months later. Of course it works, but where's the advantage? I was lucky to ask the right people in the wrong thread here at TPU about that, and they gave me great and insightful answers.

Even more restricted use cases are possible - these SSDs could be put to work with only a single-lane PCIe 5.0 interface, with USB/TB adapters for example. Someone should test that too, but the only method currently possible is probably to physically disable lanes on the M.2 connector.
 