Thursday, December 22nd 2011

PCI Express 3.0 Has Zero Performance Incentive for Radeon HD 7970: Tests

Over the last few months, motherboard manufacturers have been making a big hoopla over how important it is to pick their products that feature PCI Express 3.0 (Gen 3.0) slots. There was even some drama between competing motherboard manufacturers over who was first to market with the technology, at a time when consumers couldn't really make use of it: you needed a next-generation Ivy Bridge CPU, and then you needed a compliant graphics card. Sandy Bridge-E, fortunately, formally introduced the technology, complete with motherboards and processors that support it.

GPU maker AMD wanted to be first out there with a GPU compliant with this interface, so one thing led to another, and VR-Zone set up a test-bed using a Core i7 "Sandy Bridge-E" processor, an ASUS Rampage IV Extreme (which lets users change the PCI-Express standard mode in the BIOS setup program by forcing Gen 2 or Gen 1 mode), and an HD 7970, to see if running the GPU in PCIe 2.0 versus PCIe 3.0 mode made any worthwhile difference. The results are in: zero, nada, zilch, sunna (zero in my language).
In its comparison, VR-Zone put the GPU through 3DMark 11 (a DirectX 11 graphics benchmark) and ComputeMark (a GPU compute shader benchmark that heavily loads the system bus). The performance differences between the two modes were vanishingly small. 3DMark 11 and ComputeMark are tell-tale tests of whether the GPU (and with it, its system interface) is at least being loaded enough. You would be better off spending the money saved by not upgrading your current, perfectly functional LGA1155 motherboard to an "ooh-Gen3" one on a memory upgrade, before DRAM prices rebound.

One area where Gen 3.0 could have a performance incentive, however, is future Ivy Bridge LGA1155 platforms, where running 2-way CrossFire splits the single x16 link from the CPU into two PCI Express 3.0 x8 links. Those numbers could be interesting.

Source: VR-Zone

44 Comments on PCI Express 3.0 Has Zero Performance Incentive for Radeon HD 7970: Tests

#1
sneekypeet
Unpaid Babysitter
cadaveca said:
Higher bandwidth for inter-GPU communication is needed, period.
So when I heard this was implemented for PCIE raid cards for SSDs that were already thrashing the bandwidth of 2.0, that was all made up?
Posted on Reply
#2
Yellow&Nerdy?
For single card configs, PCI-E 3.0 is not required. But P67 and Z68 only have 16x lanes, so it might be useful for running two cards in Crossfire.
Posted on Reply
#3
cadaveca
My name is Dave
sneekypeet said:
So when I heard this was implemented for PCIE raid cards for SSDs that were already thrashing the bandwidth of 2.0, that was all made up?
Can't tell ya; I was never sent the PCIe SSD to confirm, as I was promised by a certain OEM. In fact, that particular OEM stopped sending me stuff, period, so screw 'em. Yeah, faked, 100%, and all the reviews done with a PCIe SSD on the board in question were faked, too. :rolleyes:

Besides, your memory fails you... PCIe SSDs, not RAID cards (i.e. RevoDrives). It could have simply been BIOS-level tweaks, even.
Posted on Reply
#4
sneekypeet
Unpaid Babysitter
I was thinking more of RAID cards (SAS maybe), not the Revos. Your "period" at the end made me question what I heard.
Posted on Reply
#5
cadaveca
My name is Dave
I don't blame you. I question it myself.


Particularly because we know Ivy Bridge samples are floating around, so real PCIe 3.0 tests could have been done.

No need for SAS cards when X79 has SAS built in. The X79 enthusiast SKU has these ports disabled, but that's not the only SKU. OEMs can use other "Patsburg" chipsets and enable the functionality if so desired, but it's going to be with a more expensive chipset. PCIe 3.0 isn't used to connect Intel's PCHs to the CPU, so I see that particular combination of components as sub-optimal anyway, since most PCIe ports that would take a RAID card will be running off the PCH.

Intel designates the PCIe 3.0 lanes from the CPU as being for graphics (check the official chipset diagrams). I can't currently even check whether putting RAID cards or a PCIe SSD into one of those ports works. I had legitimate reasons for wanting a PCIe SSD, but I'm wasting my time trying to get one for board-testing purposes.

So this question of whether PCIe 3.0 increases performance, for anything, cannot be answered at this time, but that's nothing new. It doesn't matter if it's VGAs or SSDs.
Posted on Reply
#6
ensabrenoir
Tests done when Ivy comes out are the only ones I want to see
Posted on Reply
#7
TheOnlyHero
I think it's too early for 3.0 Express. The cards are not using the full potential of PCI Express 2.0, and they want to introduce the new Express 3.0?! :eek:
Posted on Reply
#8
dr emulator (madmax)
I'd love to see someone create a PS2/latest-console emulator that could use this new tech to its full potential (yes, I own 2 PS2s plus numerous other consoles; I just love being able to play my games on one system)

I'm also curious as to how emulators work on these new 2011 motherboards with Win7

Yeah, yeah, wrong place to talk about it, but usually not many emulator fans have the latest tech :(

My i7 920 can play a lot of the PS2/GameCube titles I have, but some still suffer from bad frame rates
Posted on Reply
#10
twicksisted
btarunr said:
The results are in: zero, nada, zilch, sunna (zero in my language).
So what's new? Same story when PCI-E 2.0 came out, but rest assured components will saturate it soon ;)
Posted on Reply
#11
Nyte
Hi,

I'm the lead PCI-Express systems engineer over @ AMD Markham who brought up PCI-E 3.0 for this generation of GPUs. I would like to contact the author of that article but I can't find his email address. If someone would be so kind as to put me in contact with him, that would be great.

I have an inkling that he isn't even running at Gen3 speeds, so I'd like to confirm his configuration and setup (setting a CMOS option does not guarantee anything, contrary to popular belief).


Stephen
Posted on Reply
#12
josejones
My understanding is that in order to get true PCIe 3.0 support, every link in the chain must support PCIe 3.0. Otherwise, it can only be as good as the weakest link in that chain, i.e. mobo, chipsets, CPU, GPU, etc.

Nyte "I'm the lead PCI-Express systems engineer over @ AMD Markham that brought up PCI-E 3.0 for this generation of GPUs."

Nyte (Stephen with AMD), when will AMD be coming out with motherboards that have full PCIe 3.0 support? Intel's Gen 3 mobos have been out for months now. I'm not aware of AMD even mentioning them.

Where are the AMD mobos with PCIe 3.0 that were promised to be here by now?

I ask because I'm ready for my next build, but with the work that I do I need full PCIe 3.0 support, so that's what I'm waiting for.

Please ask AMD to make a video on their YouTube channel explaining how PCIe 3.0 works and what all it is used for. I see too many gamers not understanding that it's not just for games. Some of us actually have WORK to do.
Posted on Reply
#13
Nyte
josejones said:
My understanding is that in order to get true PCIe 3.0 support, every link in the chain must support PCIe 3.0. Otherwise, it can only be as good as the weakest link in that chain, i.e. mobo, chipsets, CPU, GPU, etc.

Nyte "I'm the lead PCI-Express systems engineer over @ AMD Markham that brought up PCI-E 3.0 for this generation of GPUs."

Nyte (Stephen with AMD), when will AMD be coming out with motherboards that have full PCIe 3.0 support? Intel's Gen 3 mobos have been out for months now. I'm not aware of AMD even mentioning them.

Where are the AMD mobos with PCIe 3.0 that were promised to be here by now?

I ask because I'm ready for my next build, but with the work that I do I need full PCIe 3.0 support, so that's what I'm waiting for.

Please ask AMD to make a video on their YouTube channel explaining how PCIe 3.0 works and what all it is used for. I see too many gamers not understanding that it's not just for games. Some of us actually have WORK to do.
Unfortunately I can't comment on our 3.0 chipset availability due to conflict-of-interest-related stuff. What I can say instead is this: would it make sense for AMD to NEVER release a 3.0 chipset and let Intel have free rein over that market segment? I'm sure you know the answer to that.

You are also correct that the ultimate throughput of the end-to-end link is constrained by the lowest-performing link in the hierarchy. That is not all, though: the more "links" you add to this topology, the greater the delay you induce. A longer delay will ultimately starve the PCIe credit/debit-based transaction system, causing stalls (or rather, idle "bubbles" to appear over the link). This leads to inefficiency. Unfortunately, though, it's very hard to even saturate a PCIe 3.0 x16 link in the first place due to other system-related bottlenecks (notably the CPU).

A YouTube video is a great idea; I'll have to ping the marketing department for that, though (I'm in the engineering department). I'll gladly answer any technical questions you have regarding PCIe 3.0, because it isn't a company trade secret.

You are correct in that PCIe 3.0 is not geared purely towards games... there are many market segments where it is relevant that are not apparent to gamers:
1. Netbook/notebook PCB layouts that use fewer than x16 lanes (for routing reasons or to lower material costs) can run at Gen3 to offset the reduced width and equalize performance.
2. Compute-heavy (GPGPU) applications, geared towards server markets.
3. The same as point 1, but applied to desktop motherboards.
4. Various internal reasons which unfortunately I can't mention, but I can say something like "transportability of the PCIe IP to other subsystems".
5. And the final reason: future-proofing, and it gives engineers a job :).
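The credit/RTT starvation effect described above can be sketched with a toy model: the sender may only keep a bounded amount of unacknowledged data in flight, so once the round-trip delay grows, throughput is capped by credits/RTT rather than by the wire speed. The credit and latency numbers below are made-up illustrations, not real device values:

```python
# Toy model of credit-based flow control: a sender with `credit_bytes` of
# flow-control credit can have at most that much data outstanding per
# round trip, so a long round-trip time (rtt_ns) leaves idle "bubbles"
# on the link and caps throughput below the raw link rate.

def achievable_gbytes_per_s(link_rate: float, credit_bytes: int, rtt_ns: float) -> float:
    """Throughput is the lesser of the link rate and the credit/RTT limit."""
    credit_limited = credit_bytes / rtt_ns  # bytes per ns == GB/s
    return min(link_rate, credit_limited)

link = 15.75  # GB/s, theoretical Gen3 x16 payload ceiling

# Short round trip: the wire itself is the bottleneck.
print(achievable_gbytes_per_s(link, credit_bytes=4096, rtt_ns=200))   # 15.75

# Add more hops (longer RTT) with the same credits: throughput collapses.
print(achievable_gbytes_per_s(link, credit_bytes=4096, rtt_ns=1000))  # 4.096
```

The same wire speed delivers a quarter of the throughput once the delay quintuples, which is why adding links to the topology hurts even when every segment is "fast".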
Posted on Reply
#14
josejones
Thanks for your response, Nyte. I'm not an A+-certified or IT PC guy, so I don't know enough about it to ask legit questions beyond the basics. I'd just like to be made aware of exactly what I can expect from the new PCIe 3.0 standard, as well as what I shouldn't expect, i.e. its limitations. I'd like to know specifically which software/applications will benefit most from this doubling of bandwidth. And maybe even a sneak peek at PCIe 4.0, which will double the bandwidth again, since they're already talking about it.

We have a small business where we have to do everything ourselves now to cut overhead - we build our own websites, make our own product-description videos, create our own books, covers, images, etc. We work with the Adobe Creative Suite, Photoshop, Premiere, Word and more almost every day.

And don't tell anyone, but I sneak in a little online gaming now and again... ssshhhh. I wonder how much that new Gigabit Ethernet LAN really helps, but that's a different topic.
Posted on Reply
#15
Revisionfx
Slow transfer from X79 mobo to 7970?

Nyte said:
Hi,

I'm the lead PCI-Express systems engineer over @ AMD Markham that brought up PCI-E 3.0 for this generation of GPUs. I would like to contact the author of that article but I can't find this email address.
Stephen
Hi Stephen,

We have a 7970 on both:
Asus P8Z68-V PRO/GEN3 i7 i5 i3 LGA1155 Z68 DDR3
and
Asus RAMPAGE IV Extreme i7 X79 LGA 2011 DDR3 PCIE
(yes, we tried their latest BIOS upgrades)

Transferring data via OpenCL from RAM to GPU is at least twice as slow (if not more) on the second motherboard. Any clue why?

Pierre (I've never posted here, so I'm not sure I'll see a response in the thread), so:
jasmin at revisionfx dot com
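One quick sanity check for a halved transfer rate like this: a slot can silently train at a lower generation or width than the BIOS setting suggests (as Nyte notes above, a CMOS option guarantees nothing), and tools such as GPU-Z or `lspci -vv` report the actually negotiated link. As a purely illustrative back-of-the-envelope sketch, one can also compare a measured copy rate against the theoretical ceiling of each possible link state; real measured rates sit somewhat below each ceiling because of packet-header overhead, which this ignores:

```python
# List which negotiated link states (generation x width) could plausibly
# explain a measured host->GPU copy rate. Back-of-the-envelope diagnostic
# only: ceilings are raw payload rates after line encoding, ignoring
# packet-header overhead.

ENCODING = {1: 8 / 10, 2: 8 / 10, 3: 128 / 130}
RAW_GT_S = {1: 2.5, 2: 5.0, 3: 8.0}  # per lane

def ceiling(gen: int, lanes: int) -> float:
    """Theoretical payload ceiling in GB/s for a given link state."""
    return RAW_GT_S[gen] * ENCODING[gen] * lanes / 8  # bits -> bytes

def plausible_links(measured_gb_s: float):
    """Link states whose ceiling is at or above the measured rate,
    sorted so the tightest fit comes first."""
    states = [(g, w) for g in (1, 2, 3) for w in (1, 2, 4, 8, 16)]
    fits = [(ceiling(g, w), g, w) for g, w in states if ceiling(g, w) >= measured_gb_s]
    return sorted(fits)

# A copy measured at ~3 GB/s could mean the link trained at Gen3 x4,
# Gen1 x16, Gen2 x8, etc. -- anything but a healthy Gen3 x16.
for cap, gen, width in plausible_links(3.0)[:3]:
    print(f"Gen{gen} x{width}: up to {cap:.2f} GB/s")
```

If the X79 board's measured rate fits a narrower or slower state than the Z68 board's, that points at link training (slot wiring, BIOS, or signal quality) rather than OpenCL itself.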
Posted on Reply
#16
theoneandonlymrk
Nyte said:
Unfortunately I can't comment on our 3.0 chipset availability due to conflict-of-interest-related stuff. What I can say instead is this: would it make sense for AMD to NEVER release a 3.0 chipset and let Intel have free rein over that market segment? I'm sure you know the answer to that.

You are also correct that the ultimate throughput of the end-to-end link is constrained by the lowest-performing link in the hierarchy. That is not all, though: the more "links" you add to this topology, the greater the delay you induce. A longer delay will ultimately starve the PCIe credit/debit-based transaction system, causing stalls (or rather, idle "bubbles" to appear over the link). This leads to inefficiency. Unfortunately, though, it's very hard to even saturate a PCIe 3.0 x16 link in the first place due to other system-related bottlenecks (notably the CPU).

A YouTube video is a great idea; I'll have to ping the marketing department for that, though (I'm in the engineering department). I'll gladly answer any technical questions you have regarding PCIe 3.0, because it isn't a company trade secret.

You are correct in that PCIe 3.0 is not geared purely towards games... there are many market segments where it is relevant that are not apparent to gamers:
1. Netbook/notebook PCB layouts that use fewer than x16 lanes (for routing reasons or to lower material costs) can run at Gen3 to offset the reduced width and equalize performance.
2. Compute-heavy (GPGPU) applications, geared towards server markets.
3. The same as point 1, but applied to desktop motherboards.
4. Various internal reasons which unfortunately I can't mention, but I can say something like "transportability of the PCIe IP to other subsystems".
5. And the final reason: future-proofing, and it gives engineers a job :).
Is there any chance of some limited compatibility or a BIOS upgrade to full PCIe 3.0 for 990FX boards? Obviously thinking of my own.
Posted on Reply
#17
RejZoR
They tested at 1024x600!? That's netbook territory; how the hell is it supposed to saturate the PCIe bus at all!? You need to test at very high resolution, very high detail and very high levels of FSAA to show some differences. Preferably you need a CrossFireX or SLI configuration as well.
Besides, I think history is repeating with the PCIe bus like it did with AGP. All just a theoretical difference, but in reality all the bus versions perform pretty much the same.
Posted on Reply
#18
Prima.Vera
PCI-EX 3.0 means CrossFire/SLI and dual-GPU cards. Also test at HD resolutions and beyond. This is how you need to run these tests, not on a single GPU card... :shadedshu
Posted on Reply
#19
Aquinus
Resident Wat-man
Prima.Vera said:
PCI-EX 3.0 means CrossFire/SLI and dual-GPU cards. Also test at HD resolutions and beyond. This is how you need to run these tests, not on a single GPU card... :shadedshu
Or the benchmark could just be done right?


Source
Posted on Reply