Thursday, October 26th 2017

PCI SIG Releases PCI-Express Gen 4.0 Specifications

The Peripheral Component Interconnect Special Interest Group (PCI SIG) has published the first official specification (version 1.0) of the PCI-Express gen 4.0 bus. The specification's previous draft, 0.9, had been under technical review by members of the SIG. The new generation of PCIe offers double the bandwidth of PCI-Express gen 3.0, reduced latency, lane margining, and new I/O virtualization capabilities. With the specification published, end-user products implementing it can be expected to follow. PCI SIG has now turned its attention to the even newer PCI-Express gen 5.0 specification, which it expects to be close to ready by mid-2019.

PCI-Express gen 4.0 offers a 16 GT/s per-lane, per-direction transfer rate, double that of gen 3.0, which works out to roughly 2 GB/s of usable bandwidth per lane in each direction. An M.2 NVMe drive implementing it over four lanes, for example, will have roughly 64 Gbps of interface bandwidth at its disposal. The SIG has also steered the specification toward lower latency, as HPC hardware designers are turning to alternatives such as NVLink and Infinity Fabric not primarily for the bandwidth, but for the lower latency. Lane margining is a new feature that allows hardware to maintain uniform physical-layer signal integrity across multiple PCIe devices connected to a common root complex. This is particularly important when you have multiple pieces of mission-critical hardware (such as RAID HBAs or HPC accelerators) and require uniform performance across them. The new specification also adds I/O virtualization features that should prove useful in HPC and cloud computing.
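The arithmetic behind those figures is simple; here is a minimal sketch, assuming the commonly cited 8 GT/s and 16 GT/s line rates and 128b/130b encoding (the function name and figures are illustrative, not taken from the specification text):

def usable_gbps(line_rate_gt: float, lanes: int, encoding: float = 128 / 130) -> float:
    # Approximate usable bandwidth per direction, in gigabits per second.
    return line_rate_gt * lanes * encoding

for gen, rate in (("3.0", 8.0), ("4.0", 16.0)):
    x1 = usable_gbps(rate, 1)
    x4 = usable_gbps(rate, 4)
    print(f"PCIe {gen}: x1 ~ {x1:.1f} Gb/s ({x1 / 8:.2f} GB/s), "
          f"x4 ~ {x4:.1f} Gb/s ({x4 / 8:.2f} GB/s)")

For a 16 GT/s x4 link this prints roughly 63 Gb/s, or just under 8 GB/s per direction, which is where the ~64 Gbps figure for an M.2 drive comes from.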

32 Comments on PCI SIG Releases PCI-Express Gen 4.0 Specifications

#26
nemesis.ie
EarthDog: And only if that PCIe 4 SSD is able to saturate PCIe 3.0 bandwidth in the first place would it be worth it. ;)
Not just in that case: if (on a non-HEDT board) an NVMe drive at current speeds (e.g. ~3200 MB/s) only needs 2 lanes instead of 4, you can have two of them at full speed versus one (think AM4).

Likewise, you could have 4 graphics cards each with the same bandwidth you currently get with 2. SGTM!

On the "other" topic: I'd be happy to proof maybe 3 or 4 articles a day if folks want to send them to me in advance of publishing, although I do have eyesight issues at the moment (surgery soon). ;)

Oh and shouldn't that be "relatively fluently"? :laugh:
#27
Rehmanpa
I want to see Threadripper booting off of a PCIe 4.0 x16 NVMe SSD in 2-way RAID 0. Talk about faassssstttt
#28
Th3pwn3r
Rehmanpa: I want to see Threadripper booting off of a PCIe 4.0 x16 NVMe SSD in 2-way RAID 0. Talk about faassssstttt
How fast? My current NVMe with an i7-7700K boots in the 15-17 second range, but I usually just put it to sleep.
#29
EarthDog
Rehmanpa: I want to see Threadripper booting off of a PCIe 4.0 x16 NVMe SSD in 2-way RAID 0. Talk about faassssstttt
You'd need faster drives to see a difference. Most don't currently saturate PCIe 3.0. The reason you don't see scaling in most RAID 0 NVMe setups is that they go through the DMI 3.0 pipe (limited to PCIe 3.0 x4). ;)
#30
Rehmanpa
EarthDog: You'd need faster drives to see a difference. Most don't currently saturate PCIe 3.0. The reason you don't see scaling in most RAID 0 NVMe setups is that they go through the DMI 3.0 pipe (limited to PCIe 3.0 x4). ;)
That's why I said a PCIe 4.0 x16 SSD, so that it could maximize its speed :P
#31
Prima.Vera
The 960 Pro already saturates the PCIe 3.0 x4 lanes in the M.2 format... I'm curious to see 2 of those drives in a RAID 0 setup on PCIe 4.0. Right now it's capping at 3.5 GB/s...
#32
EarthDog
Prima.Vera: Right now it's capping at 3.5 GB/s...
Because of the DMI 3.0 link it has to go through, I would imagine. It would need an x8 AIC or two CPU-connected M.2 slots to bypass the DMI.
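A rough sketch of that bottleneck, assuming DMI 3.0 behaves like a PCIe 3.0 x4 link with 128b/130b encoding (illustrative figures only, not measurements):

# Assumed: DMI 3.0 ~ PCIe 3.0 x4 with 128b/130b encoding.
DMI3_CEILING_GB_S = 8.0 * 4 * (128 / 130) / 8   # ~3.94 GB/s shared by all chipset-attached devices
SINGLE_DRIVE_GB_S = 3.5                         # the ~3.5 GB/s cap mentioned above

ideal_raid0 = 2 * SINGLE_DRIVE_GB_S             # what two drives could deliver in RAID 0, ideally
print(f"DMI 3.0 ceiling: ~{DMI3_CEILING_GB_S:.2f} GB/s")
print(f"Two drives in RAID 0 (ideal): ~{ideal_raid0:.1f} GB/s, capped at the DMI ceiling")

So even two drives that can each hit ~3.5 GB/s won't scale through the chipset; they need CPU-attached lanes (or a wider link) to get past roughly 4 GB/s combined.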