
AMD Radeon RX 6600 XT PCI-Express Scaling

W1zzard

When the Radeon RX 6600 XT launched with an interface limited to PCI-Express 4.0 x8, lots of discussion emerged about how AMD crippled the bandwidth, and how much it affects the gaming experience. In this article, we're taking a close look at exactly that, comparing 22 titles running at PCIe 4.0, 3.0, 2.0, and even 1.1. Frametimes are included, too.

 
Thank you very much for your article. Well done.
It doesn't struggle as much as the 4 GB versions of the 5500 (XT), right?
Infinity Cache is helping a lot, but leaving those exotic corner cases you mentioned?
 
Infinity Cache doesn't have anything to do with PCIe bandwidth bottlenecks; it's just helping alleviate memory bandwidth pressure due to the smaller memory bus. The 6600 XT doesn't run into the same performance issues as the 5500 XT because it's not running out of VRAM and trying to shuttle data back and forth from main memory.
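To put rough, illustrative numbers on that (my own round figures, not measurements from the review): once a card starts spilling assets to system RAM, every access has to cross the PCIe link, which is an order of magnitude slower than local VRAM.

```python
# Rough illustration (assumed round numbers, not measurements from the review):
# local VRAM bandwidth vs. the PCIe link used to reach system RAM once VRAM overflows.
VRAM_GBPS = 256.0        # 6600 XT: 128-bit GDDR6 at 16 Gbps
PCIE_4_X8_GBPS = 15.8    # theoretical one-direction PCIe 4.0 x8 bandwidth

ratio = VRAM_GBPS / PCIE_4_X8_GBPS
print(f"Local VRAM is roughly {ratio:.0f}x faster than fetching over PCIe 4.0 x8")
```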
 
Probably the most important question for many of you is "How much performance is lost due to AMD cutting the PCIe interface in half, down to x8 from x16?"
Given this previous data:
[attached: relative performance summary chart, 1920x1080]

I am pretty confident it is safe to assume the delta would be nil (since PCIe 4.0 x8 offers the same bandwidth as PCIe 3.0 x16).
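For anyone who wants to sanity-check that equivalence, here's a quick back-of-the-envelope sketch (my own approximate per-lane figures, not data from the review):

```python
# Approximate one-direction PCIe bandwidth per lane in GB/s, after 8b/10b (1.1/2.0)
# and 128b/130b (3.0/4.0) encoding overhead.
PER_LANE_GBPS = {"1.1": 0.25, "2.0": 0.5, "3.0": 0.985, "4.0": 1.969}

def link_bandwidth(gen: str, lanes: int) -> float:
    """Theoretical one-direction bandwidth of a PCIe link in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

for gen, lanes in [("4.0", 8), ("3.0", 16), ("4.0", 4), ("3.0", 8), ("2.0", 8), ("1.1", 8)]:
    print(f"PCIe {gen} x{lanes}: ~{link_bandwidth(gen, lanes):.1f} GB/s")
```

PCIe 4.0 x8 and 3.0 x16 land at the same ~15.8 GB/s, and 4.0 x4 matches 3.0 x8 at ~7.9 GB/s, which is why I'd expect the delta to be nil.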

Also, a small other thing (unrelated) that's been annoying me for a while now: currently, GPU reviews are structured like this:
Introduction > Individual Gamespam > Performance Summary > Conclusion.
Given how many games are tested, I'd like to suggest this structure instead:
Introduction > Performance Summary > Conclusion > Appendices for the Individual Gamespam
The individual games are of course relevant and should be published, but it's really annoying to click through them just to reach the Performance Summary and Conclusion pages, since that's what the vast majority of users are after; the grand total is reasonably representative (plus perhaps the few individual titles they themselves play). The individual games should instead be relegated to the appendices, with each linked individually in a list there, like this:
* Game A
* Game B
* Game C
et cetera
 
Hey W1z great review as usual!

Any chance of a 6800/6900 XT scaling review?
I see the latest AMD card covered was the 5700 XT, and from that review there didn't seem to be much variance in PCIe scaling compared to, say, the RTX 2080 Ti & 3080. It would be interesting to see what the latest high-end AMD cards deliver. I fully understand if this can't be achieved.
Also, on the opening page you have RTX 980, which I'm pretty sure should be GTX 980 : )

Thanks.
 
Cool, so for this class of card more bandwidth is always better, but we're getting real close to diminishing returns at ~8 GB/s (i.e. PCIe 3.0 x8 and PCIe 4.0 x4), and outside of a few games even PCIe 3.0 x4 isn't leaving too much on the table.

another SAD attempt to push this utter crap GPU
Seems like a pretty decent GPU to be honest - cheap to make, power-efficient, and capable well into 1440p for AAA gaming.
If you're moaning about the price, join the queue; in 2019 this class of card would have been a $299 product, sure, but this isn't 2019.
 
Price aside, I think the card is decent in performance. However, what I don't like is that for a card that is meant for "budget" gamers, who are mostly on PCIe 3.0, the fact that you may not be able to get the most out of the GPU is quite annoying, even if it's not a common issue. I wonder if the main driver for AMD cutting down the number of PCIe lanes is cost and power savings.
 
First thought before reading anything:

STICK IT IN A 1x SLOT

Edit: wow, the loss is actually quite small. <20% even on the really outdated 1.1 x8 is impressive, and the losses at 2.0 are almost unnoticeable in general use.

another SAD attempt to push this utter crap GPU
How is it crap? It's a great 1080p/1440p budget card, and the prices slaughter Nvidia in many regions.

 
I had to go look at Hitman's results to see the worst-case scenario. That is pretty severe in my opinion. @W1zzard, how much more FPS would you estimate Hitman 3 would have if it was PCIe 4.0 x16? I am wondering whether the worst-case scenario would still be bottlenecked even at full PCIe 4.0.
 
Last edited by a moderator:
no test done with RT enabled? I guess 6600XT is a "1080p non-RT GPU" then :roll:
 
How is it crap? its a great 1080p/1440p budget card, and the prices slaughter nvidia in many regions
The prices slaughter Nvidia probably because demand is relatively weak; ergo, the market considers it relatively crappier? Performance is solid at 1080p, but 1440p is a stretch, and even AMD knows it and appropriately markets it as a 1080p card (Cyberpunk particularly stands out at less than 40 fps, but even another relatively recent title like Valhalla can't hit 60 fps). In addition, a relatively worse video encoder (I understand that streaming is very popular among 1080p esports players) and purely theoretical ray tracing capabilities make for an overall worse card compared to a 3060 Ti, and even the lower-performing 3060 can be a more attractive option overall for some folks. Of course, it's a decent option if one is desperate for a replacement, for instance because the old GPU has bought the farm, and can't afford to spend more, but I wouldn't really consider it otherwise.
 
no test done with RT enabled? I guess 6600XT is a "1080p non-RT GPU" then :roll:
Well considering the RT performance of the 3060 I think it’s pretty pointless at this level…
 
Interesting. Some game engines are definitely streaming in assets on the fly (from CPU/RAM) via PCIe for the GPU to render or display.

PCIe bandwidth has never really been a huge issue. You can get around it by copying data to GPU VRAM during level load. Seamless open-world games don't really have this opportunity though, especially as you get farther away from the preloaded/cached area; it has to be streamed in. Optimizing texture quad sizes for stream-in can help maximize available PCIe bandwidth without bogging down or hitching. Most engine designers probably expect a minimum of PCIe 3.0 these days. That shows in the small difference between 3.0/4.0, but much larger discrepancies for 2.0/1.1.
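To make that concrete, here's a rough per-frame streaming budget sketch at 60 fps (the 25% share and the link figures are my own illustrative assumptions, not numbers from the review or any particular engine):

```python
# Hypothetical per-frame asset-streaming budget at 60 fps, assuming the engine is
# willing to spend about a quarter of each frame's PCIe bandwidth on stream-in
# (the 25% share and the link bandwidth figures are illustrative assumptions only).
LINK_GBPS = {"PCIe 4.0 x8": 15.8, "PCIe 3.0 x8": 7.9, "PCIe 2.0 x8": 4.0, "PCIe 1.1 x8": 2.0}
FPS = 60
STREAM_SHARE = 0.25

for link, gbps in LINK_GBPS.items():
    budget_mb = gbps * 1000 / FPS * STREAM_SHARE  # MB available for streaming per frame
    print(f"{link}: ~{budget_mb:.0f} MB of assets per frame")
```

The budget roughly halves with each older generation, which lines up with the bigger hitches people see on 2.0 and 1.1 links in streaming-heavy titles.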
 
how much more FPS would you estimate Hitman 3 would have if it was PCIe 4.0 x16?
Not a lot. If you look at any of our RX 6600 XT reviews and check Hitman 3 at 1080p, look at where the 6600 XT sits in relation to other cards; it's not way down. Maybe a few percent, if even that much.

no test done with RT enabled? I guess 6600XT is a "1080p non-RT GPU" then :roll:
6600 XT is not usable with RT at 1080p imo and it's not worth sacrificing other settings just for RT

Most engine designers probably expect a minimum of PCIe 3.0 these days.
I'd say that most engine designers expect a console ;) Some add a bit of slack and are done at this point; others do test on PC.

It would be interesting to understand why the TechSpot/Hardware Unboxed Doom testing had such a big FPS gap between 4.0 and 3.0. Early drivers, maybe?
Could be the test scene. Did they specifically search for a worst case, or is that their typical standard test scene?
 
Great testing. Would be interested to see if PCIe 4.0 x4 gets the same results as PCIe 3.0 x8. I expect it would, but you never know.
 
It would be interesting to understand why the TechSpot/Hardware Unboxed Doom testing had such a big FPS gap between 4.0 and 3.0. Early drivers, maybe?

It depends on where you test. For example, I still do not understand how W1zzard doesn't see regressions in DOOM Eternal with 8 GB VRAM cards. My RTX 2080 and RX 5700 XT both had issues at 4K and even some (though much fewer) at 1440p due to VRAM amounts.

So apparently, W1zzard is testing in an area that is just much, MUCH lighter on memory, HU tests in an area that is much harder on memory, and I test the DLC areas, which are huge and even more demanding on VRAM (also actually fun to play, unlike the easy initial 3 levels). Also, I play for at least 5 minutes, while a short benchmark run may not be nearly as harsh on VRAM by design.

So that really is it. IMHO the biggest mistake ever was doing Witcher 3 CPU tests in White Orchard's forest area... that is like doing a CPU test while staring at the sky in GMOD.
 
6600 XT is not usable with RT at 1080p imo and it's not worth sacrificing other settings just for RT

Could be the test scene. Did they specifically search for a worst case, or is that their typical standard test scene?

Ultra vs High settings is just as arbitrary as RT ON/OFF, though I bet more people could notice RT (Reflections) than High vs Ultra settings during gameplay.

Could you include Doom Eternal in the RT benchmark suite? The game gets insanely high FPS and is already in the benchmark suite anyway. BTW, it's not only HUB seeing a big performance drop-off in Doom Eternal with PCIe Gen 3; every other 6600 XT review I have seen says the same thing. Did you disable the Resolution Scaling option in Doom Eternal?

 
disable? you mean enable?

I just checked the in-game settings, and the option is called Resolution Scaling Mode; it should be set to OFF instead of Dynamic to have a fair comparison between GPUs.
 
Yeah, but we can't ban people just for having bad opinions; at least anyone seeing his post has a good rebuttal and knows he's incorrect.

Bans are not the answer. What the new cultures on the internet need is a straight-up reminder that their opinion is irrelevant and facts - only facts - are relevant.

Every time, in their face, they need to be told they're wrong and idiots for sticking with nonsense (perhaps in a nicer way). Keep bashing it in until there is no escape. That goes for all those new alternative facts we see. And if the facts are wrong, present a fact that proves it. That's how you get proper discussions.
 
This is a great eGPU card! Those setups are usually constrained to low PCIe specs because of the adapter.
 