
Ivy Bridge PCI-Express Scaling with HD 7970 and GTX 680

Thank you W1z for being so dedicated. 2k individual tests :twitch::wtf: I think I'd die if I had to do that :laugh:
I wonder why you guys think it's a pure marketing gimmick; it delivered on its promise to double bandwidth. Personally I would rather they release 3.0 before we hit a bottleneck on 2.0 (and thus see no performance advantage right now between the two) than wait until 2.0 becomes a bottleneck before making 3.0 a standard (where they would actually show a difference between the two on launch day). They could release it slightly later so people would not feel "cheated", but why release something tomorrow when you can do it today?
Exactly, it's about future-proofing.

That said, I'm surprised to see that PCI-E 1.1 x16 still holds up with high-end GPUs these days.
 
Thank you W1z for being so dedicated. 2k individual tests :twitch::wtf: I think I'd die if I had to do that :laugh:

He wrote a script for that; all he needs to do is double-click the run button and go back to his stuff, periodically swapping monitors and graphics cards around.
 
He wrote a script for that; all he needs to do is double-click the run button and go back to his stuff, periodically swapping monitors and graphics cards around.
Don't rain on my parade!

It's still dedication!
 
He wrote a script for that; all he needs to do is double-click the run button and go back to his stuff, periodically swapping monitors and graphics cards around.

yup, otherwise it's impossible to do anything.

2000 x 10 sec to type in a result number = 5.5 hours

oh and getting all those benchmarks automated = hard, lots of hours .. only the gpu manufacturers and tpu do that, no other sites i know of
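
For the curious, an automation harness like the one described might look roughly like this. This is purely a sketch; the `bench` command, the game list, and the PCIe configs are all hypothetical placeholders, not W1zzard's actual setup:

```python
# Minimal sketch of an automated benchmark loop (all names hypothetical).
# Iterates over PCIe configs and games, launches each benchmark, and logs
# the reported FPS so nobody has to type in 2000 results by hand.
import csv
import subprocess

PCIE_CONFIGS = ["3.0 x16", "2.0 x16", "2.0 x8", "1.1 x16", "1.1 x8"]
GAMES = ["crysis2", "bf3", "skyrim"]  # placeholder benchmark names

def run_benchmark(game: str) -> float:
    """Launch a (hypothetical) benchmark CLI and parse the average FPS it prints."""
    out = subprocess.run(["bench", game], capture_output=True, text=True)
    return float(out.stdout.strip())

def main() -> None:
    with open("results.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["pcie", "game", "fps"])
        for cfg in PCIE_CONFIGS:
            # Hardware swaps still need a human; the rest runs unattended.
            input(f"Set the slot to PCIe {cfg} and press Enter...")
            for game in GAMES:
                writer.writerow([cfg, game, run_benchmark(game)])
```

The loop structure is the point: the manual step per configuration is one prompt, and everything inside it runs and records itself.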
 
Kickass review W1zzard! :toast:
 
no, sorry



gpu compute seems to be a waste of time. video encoders have laughable quality. are there any other applications that anyone uses?

F@H is about all we really use.
 
Now that was a cool review.... well done Wizz :toast:
 
Thanks for the review. I wish a x16 + x4 test had been run as well, in light of the abundant boards out there supporting CrossFire that way.
 
we'll see about that. given the data in this article i seriously doubt higher resolution in multi gpu will need more bandwidth.

i'm planning to do multi gpu testing once hd 7990 is out (using 680 sli, 690, 7970 cf, 7990)

multi-gpu is only a minority of users so i thought it would be more useful to test single card first

Thanks for doing this - but it's multi-GPU setups that need the bandwidth. The results, while reassuring to see, are nothing unexpected bearing in mind what we've seen in the past. Multi-GPU would have been far more useful. Check out vega's data - it shows that PCI-E bandwidth is a huge limitation for multi-GPU. Multi-GPU users are probably as much of a minority as 680 users.

What I'd like to see is a comparison of 2/3/4-way on X79 (modded drivers for PCI-E 3) vs 2/3/4-way on Z77 with the PLX chip, and without the PLX chip for 2-way. Of course you'll need suitably overclocked CPUs too.
 
Excellent review.
But I would love to know if x8/x8/x4 PCI-E 2.0 would be OK for multi-GPU. Does anyone know?

Cheers
 
w1zz will you be testing 2 690s/7990s?

they are the only cards that will need pcie3, if any do :)
 
w1zz will you be testing 2 690s/7990s?

no plans for that. right now the plan is 2x 680, 2x 7970, 1x 690, 1x 7990.

send me another 690 and 7990
 
Awesome review with great info! Glad to know this before planning my next upgrade path.

Small itty-bitty typo I believe: in conclusion, 6th bullet: "expect" should be "except" I think

Otherwise, just pure goodness
 
It will take many years before gpus will need the kind of bandwidth pcie 3.0 @ x16 has to offer.
 
It will take many years before gpus will need the kind of bandwidth pcie 3.0 has to offer.
Disagree. With PCIe 3.0, desktops and laptops can be designed smaller, cheaper, and more efficient by using fewer lanes. With 3.0 there really is no need for x16 anymore. We can use x8 slots or even x4. Save space. Save costs. And every slot can be an x8, meaning you can stick your GPUs or SLI setups wherever you want. With x4, though, we will be getting close to saturating bandwidth.
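
The rough per-lane numbers behind that argument can be sketched from the standard spec figures (2.5/5/8 GT/s, with 8b/10b encoding on Gen 1/2 and 128b/130b on Gen 3):

```python
# Per-lane, per-direction PCIe throughput by generation (standard spec figures).
# Gen 1/2 use 8b/10b encoding (80% efficiency); Gen 3 uses 128b/130b (~98.5%).
GENS = {
    # generation: (transfer rate in GT/s, encoding efficiency)
    "1.x": (2.5, 8 / 10),
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),
}

def lane_mb_s(gen: str) -> float:
    """Usable bandwidth of one lane in MB/s (1 GT/s carries 1 bit per transfer)."""
    rate_gt, eff = GENS[gen]
    return rate_gt * eff * 1000 / 8  # GT/s -> payload Gb/s -> MB/s

for gen in GENS:
    for lanes in (4, 8, 16):
        print(f"PCIe {gen} x{lanes}: {lane_mb_s(gen) * lanes:,.0f} MB/s")
```

This is why the slot-count argument works: 3.0 x8 (~7,900 MB/s) already matches 2.0 x16 (8,000 MB/s), and 3.0 x4 (~3,900 MB/s) sits at the point where, per this review's data, current single cards only just start to lose performance.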
 
no plans for that. right now the plan is 2x 680, 2x 7970, 1x 690, 1x 7990.

send me another 690 and 7990

if i had them dude i would be happy to so we could all see the results :D

as it is i'm broke as a joke and can only dream :)

i look forward to it nonetheless as it will be good to see your findings with those cards :rockout:
 
Nice to see up to date figures on this.

I always laughed when people argued it's a poor setup to run multi-GPU on a mainstream platform thanks to the "crippling" effect of "only" having x8 lanes. Previous reviews on this topic showed the difference to be negligible with such a setup.

Pleased with my new rig and happy in the knowledge I can throw another gpu in at a later time with no fuss or penalty.
 
I remember back in about 2010 I bought an HD 5770 from HIS and used my MSI K9A-Platinum. I plugged in the card and pushed the power button; nothing happened. Changed to the GT 7100 I had lying around and it worked. Bought a new PSU (from a CX 500 to a TX 650), still no joy; banged my head real hard, still no joy.

Wrote an email to MSI and wouldn't you know it, they gave me a new BIOS about 3 hours later, and everything started to work. The question is, I've got a 6950 now; would it only require a BIOS update to run on that old rig of mine, since it was already PCIe 1.1 with x16 on two slots?

My new rig does maximize all components, but it kept me wondering about the tech race I am in. Any thoughts guys?
 
Thanks, now I can say it right to the faces of those who upgrade for only this reason: 2.0 x16 has no substantial difference vs 3.0 x16.
 
Thanks, now I can say it right to the faces of those who upgrade for only this reason: 2.0 x16 has no substantial difference vs 3.0 x16.

For a single card, yes, 100% correct. There IS a very small difference, but not one that is really noticeable. Very similar to the performance boosts offered by some pre-overclocked VGAs.:laugh:


I noticed some larger differences between x16/x8/x8 vs. x16/x8/x16 in TriFire with 6950's, to the tune of 1000 points in 3DMark Vantage, just in PCIe 2.0 (clearly, with 6950's). I am not sure exactly why there is a noticeable difference; it could be the extra lanes allowing the PCIe controller a bit more wiggle room for assigning data to each PCIe link, or it could be the lower overhead of PCIe 3.0 encoding even on PCIe 2.0 cards... I dunno.


I also am unsure if that difference in Vantage is seen elsewhere, as I ended up RMA'ing the third card before I did more testing. Frankly, because I RMA'd the card, it could have just been the card acting funny.


I've always held the opinion that PCIe bandwidth only matters when the bus has been saturated, and it seems quite obvious that a single card barely makes use of the added 8 lanes from a x16 link vs. a x8 link. It will be very interesting to see W1zz's testing of the multi-GPU cards, and whether that will have a larger impact.

We could also surmise that the driver itself might not take full advantage of the PCIe bandwidth offered... or in other words, the driver may not be optimized to notice the difference in PCIe link, and could be optimized for an x8 link, or something. There is no difference, for sure, but there's not a lot of quantitative info that declares WHY it doesn't matter.
 
Thank you for this testing. What would be of particular interest to a lot of people, IMO, would be to test the 690.
 
Once again THANK YOU WIZ! Your hard work and dedication to this forum is TRULY appreciated!

I see now that going from 2.0 to 3.0 at x16 is kinda worthless, given a net gain of only 1 FPS or ~1%.
 
I think this was a great review. Thanks a lot for the hard work.

For those asking for more, or trying to prove a point about why IB is not worth upgrading to: most people have realized that by now.

And for those saying PCI-E 3.0 is just a gimmick: well, would you like it to be like SATA, where SATA 6 Gb/s is about to become (or already is) a bottleneck, with 550/525 MB/s SSDs rapidly hitting the market? I don't think so. So please let us have bandwidth available before we saturate it to the max.
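
As a quick sanity check on that SATA comparison, using the standard figures (a 6 Gb/s link with 8b/10b encoding):

```python
# SATA "6 Gb/s" uses 8b/10b encoding, so only 80% of the line rate is payload.
link_gbit = 6.0
usable_mb_s = link_gbit * (8 / 10) * 1000 / 8  # ~600 MB/s usable

ssd_mb_s = 550  # sequential read of the fast SSDs mentioned above

headroom = usable_mb_s - ssd_mb_s
print(f"Usable SATA bandwidth: {usable_mb_s:.0f} MB/s, headroom: {headroom:.0f} MB/s")
```

With only ~50 MB/s of headroom left, the poster's point stands: the interface was allowed to get saturated before its successor arrived, which is exactly what PCIe 3.0 avoids.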
 