
AMD Radeon Fury X PCI-Express Scaling

W1zzard

In this article, we investigate how the performance of AMD's Radeon Fury X is affected when running on constrained PCI-Express bus widths such as x8 or x4. We also test all PCIe speed settings: 1.1, 2.0, and 3.0. One additional test checks how much performance is lost when using the chipset's PCIe x4 slot.
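As a rough point of reference for the tested configurations, the theoretical per-direction bandwidth of each slot can be worked out from the per-lane signalling rate minus encoding overhead (8b/10b for PCIe 1.1/2.0, 128b/130b for 3.0). The small C++ sketch below is only that back-of-the-envelope arithmetic; the rounded per-lane figures are common rule-of-thumb values, not numbers taken from the review.

```cpp
#include <cstdio>

int main() {
    // Approximate usable per-lane throughput in MB/s, after encoding overhead:
    // PCIe 1.1: 2.5 GT/s, 8b/10b encoding    -> ~250 MB/s per lane
    // PCIe 2.0: 5.0 GT/s, 8b/10b encoding    -> ~500 MB/s per lane
    // PCIe 3.0: 8.0 GT/s, 128b/130b encoding -> ~985 MB/s per lane
    const double per_lane_mb_s[] = {250.0, 500.0, 985.0};
    const char*  gen[]           = {"1.1", "2.0", "3.0"};
    const int    widths[]        = {4, 8, 16};

    for (int g = 0; g < 3; ++g)
        for (int w = 0; w < 3; ++w)
            printf("PCIe %s x%-2d : ~%5.1f GB/s per direction\n",
                   gen[g], widths[w], per_lane_mb_s[g] * widths[w] / 1000.0);
    return 0;
}
```

This puts PCIe 1.1 x4 at roughly 1 GB/s per direction versus close to 16 GB/s for PCIe 3.0 x16, which is why the narrower and slower settings are the interesting ones to test.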

 
For fun: does the overall system power consumption change when using PCIe 2.0 vs. 3.0?
 
I was hoping to see this test run with CrossFire Fury X, to see how much bandwidth XDMA consumes and how badly CF scaling is affected by lower speeds.
 
I was hoping to see this test run with CrossFire Fury X, to see how much bandwidth XDMA consumes and how badly CF scaling is affected by lower speeds.
I'll do 970 SLI next; after that maybe Fury X CF, if I can find a second card...
 
IIRC Wolfenstein had interesting results last time. Maybe add that game to the 970 test?
 
This is a great read, as it's good to get answers to the questions many people ask when it comes to PCIe scaling.

This also shows that as long as you have Sandy Bridge or newer (heck, even chips like the i7 920), you can still max out games without worrying about PCIe performance scaling.
 
Cool... So I'm all good to go for another few years with my 770 chipset then. :laugh: (edit) Seeing GhostRyder's post makes me cry now.

I like the new look of the charts by the way. :lovetpu:
 
Thank you.
This kind of test defines what a tech site is, and this is a tech site.
 
Thanks for this great article!
 
Great review, W1zzard. Any tests planned for the GTX 980 Ti? Or maybe you can tell me what you'd guess? I think it would scale the same as this one.
 
@W1zzard can you do PCIe scaling on a board that has PLX chips? I wonder about the impact of PLX latencies on CrossFire/SLI, non-PLX vs. PLX board.

Anyway, thanks for the test and great review.
 
I'll do 970 SLI next; after that maybe Fury X CF, if I can find a second card...

Please please please... I'm very much interested to see what happens now that the CrossFire bridge is gone. There are many users out there who have x16+x4 slots (both on 2.0 and on 3.0).
 
In the coming weeks, we will test GTX 970 SLI using various PCIe settings to investigate what happens in a multi-GPU setup.

Doesn't Nvidia restrict SLI to x8 and x16 slots only?
 
Doesn't Nvidia restrict SLI to x8 and x16 slots only?
I think so, but with the PCIe 1.1, 2.0, and 3.0 speed settings there should still be enough ways to test it.
 
I think in reality, PCI-E scaling is more of a thing in CFX/SLI setups. 95% of the time, if someone has a single graphics card, it's going to be sitting in a 16-lane slot, with the exception of some very low-end boards that only have 4-lane slots and would already be gimped by the platform and CPU.

What I want to see is the difference for dual GPU at 16/16, 8/8, and 8/4, which seem to be the most common configurations for two GPUs.
 
One thing that stands out: there's a large amount of detail on Intel platforms supporting PCI-E 3.0 on the first page, but nothing about AMD supporting it. It seems odd to have such detail on Intel, yet forget the competition.

edit: poor wording. I'm aware that AMD has piss-all support for it on the chipset side, but a comment on that seems like it belongs in this type of article.
 
One thing that stands out: there's a large amount of detail on Intel platforms supporting PCI-E 3.0 on the first page, but nothing about AMD supporting it. It seems odd to have such detail on Intel, yet forget the competition.
AM3/+ doesn't really support PCI-E 3.0 in any meaningful way, which only leaves the APUs. On the other hand, my 3820 is currently running my 390 over PCI-E 3.0, which gives you an idea of how long Intel has had it available.
 
AM3/+ doesn't really support PCI-E 3.0 in any meaningful way, which only leaves the APUs. On the other hand, my 3820 is currently running my 390 over PCI-E 3.0, which gives you an idea of how long Intel has had it available.

Yeah, but the funny thing is that AMD was right to hold that off, as PCIe 3.0 really isn't needed... especially for their budget-to-mid platform offerings. And these tests prove it.
 
Yeah, but the funny thing is that AMD was right to hold that off, as PCIe 3.0 really isn't needed... especially for their budget-to-mid platform offerings. And these tests prove it.
AMD didn't hold off. AM3/+ is just a dead platform. A lot of newer APUs have PCI-E 3.0, in particular the 7xxx series of APUs.
 
AMD didn't hold off. AM3/+ is just a dead platform. A lot of newer APUs have PCI-E 3.0, in particular the 7xxx series of APUs.

Yeah, but it took them a few years to get it at all.
 
This was a very interesting article, thank you for making it. I have to admit, I was expecting much worse performance at the older PCI-E speeds, but to be honest, it really isn't that bad at all.
 
Still no worries between x8 and x16 3.0, it seems.

In my current setup I'm limited to x8 3.0 because of my dedicated sound card. I could put it in the x1 slot, but then it's crammed right on top of my video card, and I'm not really fond of that.
 
Most PCIe traffic in games happens during the loading screen, and of course there's no point measuring the performance of a progress bar.
So, during actual gameplay, PCIe traffic spikes when streaming high-res texture mip levels and/or geometry LODs.
Unreal Engine uses texture streaming extensively and streaming is multi-threaded... one would think it shouldn't be a problem to saturate the PCIe bus this way on an old PCIe 1.1 dual-CPU Xeon board.
The thing is, Unreal Engine (and probably others too) has a system in place to monitor the bandwidth used for texture streaming, and a mechanism to make optimal use of the available PCIe bandwidth in order to avoid stuttering.
In Unreal Engine games you can type "stat StreamingDetails" into the console and see the bandwidth that streaming uses.
It would be interesting to see whether those values exceed any of the PCIe modes, and whether streaming adjusts itself to run stutter-free in all of them.
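To make that idea a bit more concrete, here is a minimal C++ sketch of a bandwidth-budgeted streaming loop. It is purely illustrative and not Unreal Engine's actual implementation: the struct, function, texture names, and throughput figures are all invented assumptions; the real engine reports its streaming bandwidth through stat commands such as the "stat StreamingDetails" mentioned above.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

// Toy illustration only -- NOT Unreal Engine code. It shows the general idea
// described above: derive a per-frame upload budget from the bandwidth the bus
// can sustain, then defer any mip uploads that would blow that budget.

struct MipRequest {
    const char* texture;   // hypothetical texture name
    uint64_t    bytes;     // size of the mip level to upload
    int         priority;  // higher = more visually important
};

// Per-frame upload budget derived from an (assumed) measured bus throughput.
uint64_t FrameBudgetBytes(double bus_bytes_per_sec, double frame_time_sec) {
    // Keep a healthy margin so streaming never saturates the bus on its own.
    return static_cast<uint64_t>(bus_bytes_per_sec * frame_time_sec * 0.5);
}

int main() {
    // Assume ~1 GB/s usable (roughly PCIe 1.1 x4 territory) at 60 fps.
    const uint64_t budget = FrameBudgetBytes(1.0e9, 1.0 / 60.0);

    std::vector<MipRequest> queue = {
        {"T_Rock_D", 6ull << 20, 10},
        {"T_Sky_D",  4ull << 20,  3},
        {"T_Gun_N",  2ull << 20,  9},
    };

    // Stream the most important textures first.
    std::sort(queue.begin(), queue.end(),
              [](const MipRequest& a, const MipRequest& b) { return a.priority > b.priority; });

    uint64_t used = 0;
    for (const MipRequest& r : queue) {
        if (used + r.bytes > budget) {
            printf("deferring %s to a later frame\n", r.texture);
            continue;
        }
        used += r.bytes;
        printf("uploading %s (%llu MB)\n", r.texture,
               static_cast<unsigned long long>(r.bytes >> 20));
    }
    printf("used %llu of %llu budget bytes this frame\n",
           static_cast<unsigned long long>(used),
           static_cast<unsigned long long>(budget));
    return 0;
}
```

The point is just that the streamer throttles itself to whatever the bus delivers, which, if that mechanism works as described, would make a slower slot show up as textures streaming in later rather than as stutter or a lower average frame rate.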
 
How does the bandwidth affect minimum FPS? I remember that in the good old days, at least, there was a more significant change in minimum FPS, not so much in the average: more dips in FPS right when you definitely didn't want them.
 