Tuesday, August 29th 2017

PCI-SIG: PCIe 4.0 in 2017, PCIe 5.0 in 2019

After years of continued innovation in PCIe bandwidth, we've hit something of a snag in recent times; after all, the PCIe 3.0 specification has been doing the rounds on our motherboards ever since 2010. PCI-SIG, the 750-member-strong organization in charge of designing the specifications for the PCIe bus, attributes part of this delay to industry stagnation: PCIe 3.0 has simply been more than enough, bandwidth-wise, for many generations of hardware. Only recently, with innovations in storage media and memory solutions such as NVMe SSDs and Intel's Optane, are we starting to hit the ceiling of what PCIe 3.0 offers. Add to that the increased workload and bandwidth requirements of the AI field, and the industry now seems eager for an upgrade, with some IP vendors having already put PCIe 4.0-supporting controllers and PHYs into their next-generation products - albeit at the incomplete 0.9 revision.
However, PCIe 4.0, with its doubled 64 GB/s of bandwidth against PCIe 3.0's comparably paltry (yet more than sufficient for the average consumer) 32 GB/s, might be short-lived in our markets. PCI-SIG is setting its sights on 2019 as the year for finalizing the PCIe 5.0 specification; the consortium has accelerated its efforts on the 128 GB/s specification, which has already reached revision 0.3, with revision 0.5 expected by the end of 2017.

Remember that a finalized specification doesn't naturally and immediately manifest in products; AMD is only pegging PCIe 4.0 support for 2020, which makes sense, considering the company has declared the AM4 platform as being supported until that point in time. AMD is trading the latest and greatest for platform longevity - though should PCIe 5.0 indeed be finalized by 2019, it's possible the company could include it in its next-generation platform. Intel, on the other hand, has a much faster track record of adopting new technologies on its platforms; whether Intel's yearly chipset releases and motherboard/processor incompatibilities stem from a desire to support the latest and greatest or from a way to sell more motherboards with each CPU generation is a matter open for debate. Either way, considering Intel's advances with more exotic memory subsystems such as Optane, a quicker adoption of new PCIe specifications is to be expected from the company.
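As a back-of-the-envelope check of the figures above, the headline numbers follow from the per-lane signaling rates. A minimal Python sketch, assuming the usual per-lane transfer rates (8/16/32 GT/s) with 128b/130b encoding and counting both directions of an x16 link:

```python
# Rough PCIe x16 bandwidth per generation (editor's sketch, not from the article).
# Per-lane transfer rate in GT/s and the 128b/130b line-code efficiency.
GENS = {
    "3.0": (8.0, 128 / 130),
    "4.0": (16.0, 128 / 130),
    "5.0": (32.0, 128 / 130),
}

def x16_bandwidth_gbps(gen: str, bidirectional: bool = True) -> float:
    """Usable GB/s for a 16-lane link (1 GT/s * efficiency ~ 1 Gbit/s of payload)."""
    rate, eff = GENS[gen]
    gbytes = rate * eff * 16 / 8          # one direction, GB/s
    return gbytes * (2 if bidirectional else 1)

for gen in GENS:
    print(f"PCIe {gen} x16: ~{x16_bandwidth_gbps(gen):.0f} GB/s bidirectional")
```

The ~32/64/128 GB/s figures quoted in the article are these bidirectional x16 totals, rounded.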
Source: Tom's Hardware

31 Comments on PCI-SIG: PCIe 4.0 in 2017, PCIe 5.0 in 2019

#1
Chaitanya
Don't see the industry adopting this new standard until well into late next year. Also, aside from storage devices and other enterprise hardware, the PCI-E 4.0 spec will be useless for the gaming industry.
Posted on Reply
#2
dj-electric
2 years for one PCIE spec? that's ridiculous and stupid
Posted on Reply
#3
R-T-B
Dj-ElectriC said:
2 years for one PCIE spec? that's ridiculous and stupid
That's actually quite fast for a fresh standard. Fastest we've seen historically has been around that from drawing board to implementation. You don't notice because you're usually toying with whatever else is fresh and new for those two years.

Unless you mean that's TOO fast. I'm not sure we can ever have too fast of advancement.
Posted on Reply
#4
eidairaman1
The Exiled Airman
Just Skip 64, go straight to 128, i figured gpus cant utilize all the bandwidth, even 2.0 still is fine.
Posted on Reply
#5
dj-electric
R-T-B said:
That's actually quite fast for a fresh standard. Fastest we've seen historically has been around that from drawing board to implementation. You don't notice because you're usually toying with whatever else is fresh and new for those two years.

Unless you mean that's TOO fast. I'm not sure we can ever have too fast of advancement.
Thing is, if you know that your product, or spec for that matter will be much better only in two years, in that area, just go with it instead and delay progress for those two years.

PCIE spec market is not in a huge rush. staying with 3.0 until 2020 will be just fine. This way there will be more time for chip makers to aim for a finalized 5.0 spec much ahead
Posted on Reply
#6
RejZoR
Why not just skip PCIe 4.0 ? No real point in it if we'll be at PCIe 5.0 by the year after...
Posted on Reply
#7
zo0lykas
why just not skip your post?

RejZoR said:
Why not just skip PCIe 4.0 ? No real point in it if we'll be at PCIe 5.0 by the year after...
Posted on Reply
#8
Raevenlord
News Editor
RejZoR said:
Why not just skip PCIe 4.0 ? No real point in it if we'll be at PCIe 5.0 by the year after...
Not for consumer workloads, but for professional workloads, it will ease current and future bottlenecks.
Posted on Reply
#9
EarthDog
eidairaman1 said:
Just Skip 64, go straight to 128, i figured gpus cant utilize all the bandwidth, even 2.0 still is fine.
2.0 8x takes a hit of 4%, 2.0 4x takes a hit of 16%... etc...I'd clarify and say 2.0 16x is still ok. :)
Posted on Reply
#10
R-T-B
Dj-ElectriC said:
Thing is, if you know that your product, or spec for that matter will be much better only in two years, in that area, just go with it instead and delay progress for those two years.

PCIE spec market is not in a huge rush. staying with 3.0 until 2020 will be just fine. This way there will be more time for chip makers to aim for a finalized 5.0 spec much ahead
Actually, you've swayed me with that argument, at least for consumerland. Have a thanks.
Posted on Reply
#11
RejZoR
zo0lykas said:
why just not skip your post?
Then why did you quote it? :rolleyes:
Posted on Reply
#12
eidairaman1
The Exiled Airman
EarthDog said:
2.0 8x takes a hit of 4%, 2.0 4x takes a hit of 16%... etc...I'd clarify and say 2.0 16x is still ok. :)
Higher end
Gpus were only ever designed for the 8/16x slot anyway...
Posted on Reply
#13
Yukikaze
Dj-ElectriC said:
PCIE spec market is not in a huge rush
This is incorrect. There is a massive push for increased bandwidth brought forth by 100/200GbE. Faster and faster multi-port NICs are the main consumers of PCIe bandwidth and GPUs aren't even close. Server NICs, by and large, are x8/x4 beasts, and x8/x4 slots are by far the most common in the server world. In order to reduce system cost and/or size (and thus cram more things into a single rack), OEMs wish to pack the most traffic in the smallest amount of real-estate. That is where PCIe bandwidth matters, and this is where the push comes from.

GPUs do not move the world. NICs do.
Posted on Reply
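The bandwidth math behind the post above is easy to check: a single 100GbE port moves 12.5 GB/s of payload, which already exceeds what a PCIe 3.0 x8 slot can deliver. A hedged sketch (the per-lane rates and encoding figures are assumptions, not taken from the post):

```python
# Editor's sketch: can an x8 slot keep up with one 100GbE port?
# Usable Gbit/s per lane, one direction, assuming 128b/130b encoding.
PCIE_LANE_GBPS = {"3.0": 8 * 128 / 130, "4.0": 16 * 128 / 130}

def slot_gbytes(gen: str, lanes: int = 8) -> float:
    """Usable GB/s in one direction for a slot of the given generation/width."""
    return PCIE_LANE_GBPS[gen] * lanes / 8

eth_100g = 100 / 8   # 12.5 GB/s per port, ignoring Ethernet framing overhead

for gen, _ in PCIE_LANE_GBPS.items():
    bw = slot_gbytes(gen)
    verdict = "keeps up with" if bw >= eth_100g else "chokes on"
    print(f"PCIe {gen} x8: ~{bw:.1f} GB/s -> {verdict} one 100GbE port")
```

On these numbers a 3.0 x8 slot falls short of even one 100GbE port, while 4.0 x8 clears it - which is exactly the server-side pressure the post describes.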
#14
Durvelle27
Yukikaze said:
This is incorrect. There is a massive push for increased bandwidth brought forth by 100/200GbE. Faster and faster multi-port NICs are the main consumers of PCIe bandwidth and GPUs aren't even close. Server NICs, by and large, are x8/x4 beasts, and x8/x4 slots are by far the most common in the server world. In order to reduce system cost and/or size (and thus cram more things into a single rack), OEMs wish to pack the most traffic in the smallest amount of real-estate. That is where PCIe bandwidth matters, and this is where the push comes from.

GPUs do not move the world. NICs do.
Didn't the NICs lose
Posted on Reply
#15
Steevo
Yukikaze said:
This is incorrect. There is a massive push for increased bandwidth brought forth by 100/200GbE. Faster and faster multi-port NICs are the main consumers of PCIe bandwidth and GPUs aren't even close. Server NICs, by and large, are x8/x4 beasts, and x8/x4 slots are by far the most common in the server world. In order to reduce system cost and/or size (and thus cram more things into a single rack), OEMs wish to pack the most traffic in the smallest amount of real-estate. That is where PCIe bandwidth matters, and this is where the push comes from.

GPUs do not move the world. NICs do.
This is how we ended up with PCIx and a few other less well known ports, we needed faster server access, and the communication lines have to be wide and fast, and each extra step adds latency when routed. Servers need fast data access between racks of HDD's.

Its also why AMD came up with Infinity Fabric and have plans on using it in server space, more faster.
Posted on Reply
#16
EarthDog
eidairaman1 said:
Higher end
Gpus were only ever designed for the 8/16x slot anyway...
There are 16x physical slots which are wired x4. ;)
Posted on Reply
#17
eidairaman1
The Exiled Airman
EarthDog said:
There are 16x physical slots which are wired x4. ;)
I know that, it was to support any plug in board but example a gf510 doesnt even need the bandwidth to run
Posted on Reply
#18
Blueberries
What's interesting is this will most likely affect NVMe storage more than anything else, GPU acceleration and other uses of the PCIe protocol are far from being bottlenecked by the bandwidth of an x16 slot.
Posted on Reply
#19
Rockarola
Dj-ElectriC said:
2 years for one PCIE spec? that's ridiculous and stupid
In any other industry two years is ridiculous...fast.
Look at MIDI, AC measurement (weighted average vs. peak average) and any standard used in the automotive industry...adopting a new standard usually takes decades, not years, but in computer technology it sometimes only takes months.
Posted on Reply
#20
Prima.Vera
100/200Gbps Ethernet, multiple USB 3.1 and Thunderbolt 4.0 ports over a PCI-E card, and so on....
Posted on Reply
#21
cdawall
where the hell are my stars
I would be curious the bandwidth the new amd cards with dual ssds in the back required. Wonder if those would even use the 4.0
Posted on Reply
#22
Blueberries
cdawall said:
I would be curious the bandwidth the new amd cards with dual ssds in the back required. Wonder if those would even use the 4.0
The SSDs on those cards act as VRAM, they interface directly with the GPU, not the PCI bus.
Posted on Reply
#23
bug
eidairaman1 said:
Just Skip 64, go straight to 128, i figured gpus cant utilize all the bandwidth, even 2.0 still is fine.
As far as connectivity goes, it's nice to always have untapped bandwidth available. It gives us 100% peace of mind that the interface is not a bottleneck. The only downside would be if this extra bandwidth came at significant additional cost or power draw.
Posted on Reply
#24
EarthDog
eidairaman1 said:
I know that, it was to support any plug in board but example a gf510 doesnt even need the bandwidth to run
I mean....................................................................................................

I was just clarifying your point. We didn't need to go down a rabbit hole. Anyone with half their grey matter can figure out this is with high-end cards...
Posted on Reply
#25
cdawall
where the hell are my stars
Blueberries said:
The SSDs on those cards act as VRAM, they interface directly with the GPU, not the PCI bus.
They act as a buffer between vram and system memory, but also transfer back to during rendering?
Posted on Reply