
AMD Radeon RX 6600 XT PCI-Express Scaling

Disable? You mean enable?
This is a pretty interesting thing to know. Resolution scaling can easily mitigate the bandwidth issue in that game... And if it does, the conclusion of the article might not be as accurate as it looks.
 
This is a pretty interesting thing to know. Resolution scaling can easily mitigate the bandwidth issue in that game... And if it does, the conclusion of the article might not be as accurate as it looks.
I just checked the in-game settings, and the option is called Resolution Scaling Mode, which should be set to OFF instead of Dynamic to have a fair comparison between GPUs.
I used "Off", of course; otherwise the results are useless.
 
Can you test Horizon Zero Dawn as well? That game seems to like bandwidth.
 
I had to go look at Hitman's results to see the worst-case scenario. That is pretty severe in my opinion. @W1zzard, how much more FPS would you estimate Hitman 3 would have if it were PCIe 4.0 x16? I am wondering if the worst-case scenario would still be bottlenecked by full PCIe 4.0.
The rebooted Hitman franchise has been a stuttery, poorly optimised anomaly for lots of reviewers over the years, both in terms of messy frametime plots (useless 99th-percentile scoring) and odd engine limits that get in the way of both CPU and GPU scaling.

Whilst the PCIe scaling does clearly show that it needs a lot of bandwidth, I wouldn't treat this as representative of other games on the market. It's just an edge-case curiosity that shows there are more than zero situations where running at PCIe 3.0 x8 might be sub-optimal.
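For context on the raw numbers behind that, here's a minimal back-of-the-envelope sketch (my own illustration, not from the review) of theoretical one-way PCIe link bandwidth per generation and lane count, using the commonly cited per-lane rates after encoding overhead:

Code:
# Theoretical one-way PCIe bandwidth per link configuration (approximate,
# commonly cited figures after encoding overhead -- not measurements).
PER_LANE_GBPS = {
    "1.1": 2.5 * 8 / 10 / 8,     # 2.5 GT/s, 8b/10b encoding    -> 0.25 GB/s per lane
    "2.0": 5.0 * 8 / 10 / 8,     # 5.0 GT/s, 8b/10b encoding    -> 0.50 GB/s per lane
    "3.0": 8.0 * 128 / 130 / 8,  # 8.0 GT/s, 128b/130b encoding -> ~0.985 GB/s per lane
    "4.0": 16.0 * 128 / 130 / 8, # 16 GT/s, 128b/130b encoding  -> ~1.969 GB/s per lane
}

def link_bandwidth_gbps(gen: str, lanes: int) -> float:
    """Theoretical one-way bandwidth in GB/s for a given PCIe generation and lane count."""
    return PER_LANE_GBPS[gen] * lanes

for gen, lanes in [("1.1", 8), ("2.0", 8), ("3.0", 8), ("4.0", 8), ("4.0", 16)]:
    print(f"PCIe {gen} x{lanes}: {link_bandwidth_gbps(gen, lanes):6.2f} GB/s")

That works out to roughly 2, 4, 7.9, 15.8 and 31.5 GB/s respectively, so 3.0 x8 has about half the bandwidth of 4.0 x8; the review's averages suggest only titles that stream a lot of data over the bus, with Hitman 3 as the worst case here, come close to being hurt by even that.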
 
IF you like your GPUs with SERIOUS native bottlenecks and LESS of everything (LESS lanes, memory, bandwidth, ZERO RT performance, etc.), by all means BUY IT!! Me? HARD PASS, AMD! WAY TOO MANY CUTS! This thing has short legs! (Wait for the real next-gen Hitman... this will DIE!) ZERO VALUE AND NO REAL MARKET.

I think the RADEON division is FLOPPING HARD; THEY ARE 100% CLUELESS. AMD took a nice little GPU perfect for HTPC and casual gaming at 75 W and priced it to hell, turning it INTO THIS MEGA JOKE.

WHY??? Simple: at this level of price and power consumption, there are MUCH BETTER OPTIONS, like a 3060 Ti that CRUSHES THIS. (BTW, for all the complainers: SORRY, this is not my native language, so don't expect fluent or persuasive writing from me.)
 
IF you like your GPUs with SERIOUS native bottlenecks and LESS of everything (LESS lanes, memory, bandwidth, ZERO RT performance, etc.), by all means BUY IT!! Me? HARD PASS, AMD! WAY TOO MANY CUTS! This thing has short legs! (Wait for the real next-gen Hitman... this will DIE!) ZERO VALUE AND NO REAL MARKET.

I think the RADEON division is FLOPPING HARD; THEY ARE 100% CLUELESS. AMD took a nice little GPU perfect for HTPC and casual gaming at 75 W and priced it to hell, turning it INTO THIS MEGA JOKE.

WHY??? Simple: at this level of price and power consumption, there are MUCH BETTER OPTIONS, like a 3060 Ti that CRUSHES THIS. (BTW, for all the complainers: SORRY, this is not my native language, so don't expect fluent or persuasive writing from me.)
Not TYPING in random caps LOCK might make you come across as slightly less DeRaNgEd no matter what YOUR NAtive language is.

Although I'm not sure about that when you're suggesting it's possible to buy a 3060 Ti for anything like the same price as one of these, unless you happen to be A) in a country that has FE drops and B) online in the 30 seconds per month that they're available.
 
Low quality post by london
BTW... why bother testing in a PERFECT VACUUM??? The Ryzen 7 5800X @ 4.8 GHz IS DOING ALL THE HEAVY LIFTING HERE. THAT'S CHEATING... Try using a Ryzen 5 1600 and see how that goes... these are the CPUs most folks who game at 1080p use IN THE REAL WORLD. WHAT MARKET IS THIS AIMED AT, AMD? IT DOES NOT EXIST. (BTW, I have a 5600X and a 3600; STILL, I will not touch this. Don't expect years of badass PERFORMANCE from this crap.)
 
Another SAD attempt to push this utter crap GPU.
You know, this GPU with a lower spec is faster than the 5700 XT and equal to an RTX 2080! That's crazy.
 
Stay on topic.
Read the guidelines/rules before posting.

Here is a sampling:
All posts and private messages have a "report post" button on the bottom of the post, click it when you feel something is inappropriate. Do not use your report as a "wild card invitation" to go back and add to the drama and therefore become part of the problem.
If you disagree with moderator actions, contact them via PM; if you can't solve the issue with the moderator in question, contact a super moderator.
Under no circumstances should you start public drama.

Thank You and Have a Good (On-Topic) Discussion
 
Can you test Horizon Zero Dawn as well? That game seems to like bandwidth.
As mentioned in the conclusion, Death Stranding uses the same engine as HZD and is affected by PCIe bandwidth limitations, too. Given the limited popularity of those two games, I have no plans to bench two games using the Decima Engine, which would be almost 10% of the games test group.
 
As mentioned in the conclusion, Death Stranding uses the same engine as HZD and is affected by PCIe bandwidth limitations, too. Given the limited popularity of those two games, I have no plans to bench two games using the Decima Engine, which would be almost 10% of the games test group.
And Death Stranding is a WAY better implementation of the same engine. HZD still has overall performance issues.
 
And Death Stranding is a WAY better implementation of the same engine. HZD still has overall performance issues.
This. Death Stranding was designed by Kojima Studios from the ground up for an eventual cross-platform release and Kojima Studios also handled the PC version.

HZD was designed as a PS4 exclusive, with zero consideration given to PC compatibility, and the PC port was outsourced to a third party (Virtuous Studios) who had no affiliation with the original developer. Even now they are still patching bugs in the PC port that have nothing to do with the original PS4 version and exist solely as a result of the third party learning from their mistakes as they go along. Rather than thinking of HZD PC as a PC version of a cross-platform game, imagine that a newbie developer was given the HZD PS4 assets and told to create a new game from scratch that looks like a copy of the PS4 version.
 
This. Death Stranding was designed by Kojima Studios from the ground up for an eventual cross-platform release and Kojima Studios also handled the PC version.

HZD was designed as a PS4 exclusive, with zero consideration given to PC compatibility, and the PC port was outsourced to a third party (Virtuous Studios) who had no affiliation with the original developer. Even now they are still patching bugs in the PC port that have nothing to do with the original PS4 version and exist solely as a result of the third party learning from their mistakes as they go along.
Yeah, it's really a night-and-day use of the same engine, though DS is using a later version, as I understand it. Both games are pretty equal as far as visuals, open world, etc. go, but you would never think they were built on the same engine.
 
Price aside, I think the card is decent in performance. However, what I don't like is that for a card that is meant for "budget" gamers, who are mostly on PCIe 3.0, the fact that you may not be able to get the most out of the GPU is quite annoying, even if it's not a common issue. I wonder if the main driver for AMD to cut down the number of PCIe lanes is cost and power savings.
This is such a weird take, and it makes me wonder if you read the article at all. "The fact that you may not be able to get the most out of the GPU" - how does that align with a 1-2% average performance drop on PCIe 3.0? Yes, there are outliers that are worse than that, as there always will be. But they are highly specific outliers. The overall result from this testing is that you will get a level of performance not perceptibly different from the full 4.0 speed. That is what the conclusion says. Besides, if you're on a PCIe 3.0 platform, chances are you'll be more held back by whatever CPU you are using on that platform than by the PCIe bandwidth. (Unless, that is, you're using a 9900K, 10700K or similar with a new midrange GPU for some reason.)
 
Tbf, I feel like AMD cheaping out on the lanes (while understandable) is, like, really cheap for a card of this class (midrange / entry-level midrange). Now, if this were something around a 1650 (i.e., a 6500 or something), or even more budget, I could totally understand that. But given how NVidia quite consistently gives all their cards down to the x50 series an x16 link (not that the bottommost would benefit, but that's quite beside the point here), I cannot completely shake off the feeling that AMD is cheaping out on us here. Given their track record of being the budget vendor, that's not the smartest move they could've pulled, imho.
 
This is such a weird take, and it makes me wonder if you read the article at all. "The fact that you may not be able to get the most out of the GPU" - how does that align with a 1-2% average performance drop on PCIe 3.0? Yes, there are outliers that are worse than that, as there always will be. But they are highly specific outliers. The overall result from this testing is that you will get a level of performance not perceptibly different from the full 4.0 speed. That is what the conclusion says. Besides, if you're on a PCIe 3.0 platform, chances are you'll be more held back by whatever CPU you are using on that platform than by the PCIe bandwidth. (Unless, that is, you're using a 9900K, 10700K or similar with a new midrange GPU for some reason.)
The thing is, most people dropping $600+ on a scalped/marked-up GPU will not be using an ancient motherboard. B550/X570/Z490/Z590 all have PCIe 4.0 anyway.

The 1-2% performance loss (for the most part) on PCIe 3.0 x8 is negligible if it's going to be held back even more than that by an old AMD 2600X or Skylake quad-core, for example. Like you say, who would have spent big bucks on a 9900K only to then pair it up with a crap GPU that's already in need of an upgrade?
 
Tbf, I feel like AMD cheaping out on the lanes (while understandable) is, like, really cheap for a card of this class (midrange / entry-level midrange). Now, if this were something around a 1650 (i.e., a 6500 or something), or even more budget, I could totally understand that. But given how NVidia quite consistently gives all their cards down to the x50 series an x16 link (not that the bottommost would benefit, but that's quite beside the point here), I cannot completely shake off the feeling that AMD is cheaping out on us here. Given their track record of being the budget vendor, that's not the smartest move they could've pulled, imho.
Frankly, this is what AMD does when they catch up: they immediately kneecap themselves. It's not the first (or third) time in recent history they've done this.
 
Frankly, this is what AMD does when they catch up: they immediately kneecap themselves. It's not the first (or third) time in recent history they've done this.
Where in the review, outside of the obvious outliers, was it kneecapped while still beating the 3060 and its x16 "advantage"?
 
Where in the review, outside of the obvious outliers, was it kneecapped while still beating the 3060 and its x16 "advantage"?
"where in this review outside of cases where it matters can you find examples of it mattering"

Well, if you're going to immediately throw out evidence you don't like, this conversation will go nowhere.
 
Frankly, this is what AMD does when they catch up: they immediately kneecap themselves. It's not the first (or third) time in recent history they've done this.
... again, how are they kneecapping themselves? There is no notable performance limitation here. There is a spec deficit with no real-world consequences worthy of note. If that amounts to "kneecapping themselves", then you have some rather absurd standards. Or are all GPUs without HBM or a 512-bit memory bus also kneecapped? Yes, there are a couple of outliers. One has a ~7% deficit, the other has a ~15% one. The former is a console port running in an engine primarily developed for consoles and well known for porting issues. The other is a game notorious for buggy performance. If your favourite game genre is "buggy ports", then yes, these are highly relevant. If not, then no, they aren't. They are outliers, and while absolutely true and likely representative of their respective games, they aren't representative of modern games overall - the rest of the tested field demonstrates that. Remember, that 1-2% overall deficit includes those outliers.
The thing is, most people dropping $600+ on a scalped/marked-up GPU will not be using an ancient motherboard. B550/X570/Z490/Z590 all have PCIe 4.0 anyway.

The 1-2% performance loss (for the most part) on PCIe 3.0 x8 is negligible if it's going to be held back even more than that by an old AMD 2600X or Skylake quad-core, for example. Like you say, who would have spent big bucks on a 9900K only to then pair it up with a crap GPU that's already in need of an upgrade?
Exactly. If I were to buy one of these and stick it into my travel PC (an old and heavily modified Optiplex 990 SFF) with its i5-2400 and PCIe 2.0, the PCIe 2.0 really isn't what would be holding me back. That would be the CPU.
 
First thought before reading anything:

STICK IT IN AN x1 SLOT

Edit: wow, the loss is actually quite small. Under 20% on the really outdated PCIe 1.1 x8 is impressive, and the 2.0 results are almost not noticeable in general use.


How is it crap? It's a great 1080p/1440p budget card, and the prices slaughter nvidia in many regions.

[attached screenshot: capture070.jpg]

And a $700 "budget" card slaughters customers.


In that store, the 3060 and 3060 Ti have the same prices.
 
... again, how are they kneecapping themselves? There is no notable performance limitation here.
If that amounts to "kneecapping themselves", then you have some rather absurd standards. Or are all GPUs without HBM or a 512-bit memory bus also kneecapped?
Now there's a strawman argument. Where did I say any of that? I didn't. The only thing I said was that AMD has a habit of kneecapping themselves when they start catching nvidia: rebranding cards, too little memory (4 GB 580), or an x8 bus that impacts performance in some games (6600 XT; the 5500 XT was hit by BOTH of these issues).

There is a spec deficit with no real-world consequences worthy of note.
You know, outside of software that did show a performance difference. Of course:

Yes, there are a couple of outliers. One has a ~7% deficit, the other has a ~15% one.
So, no real-world consequences. Outside of the real-world consequences, but who counts those?
The former is a console port running in an engine primarily developed for consoles and well known for porting issues. The other is a game notorious for buggy performance. If your favourite game genre is "buggy ports", then yes, these are highly relevant. If not, then no, they aren't. They are outliers, and while absolutely true and likely representative of their respective games, they aren't representative of modern games overall - the rest of the tested field demonstrates that. Remember, that 1-2% overall deficit includes those outliers.
Right, so any time performance doesn't line up with expectations, there are excuses. Using an x16 bus like nvidia does would fix that problem, but the GPU isn't gimped. Everyone knows that buggy console-port games NEVER sell well or are popular, ever. Right?

If you have to come up with excuses for why examples of an x8 bus hurting performance don't actually matter, you've answered your own question. You've constructed your own argument here that you can never lose, because you immediately discredit anything that goes against your narrative. I don't know what it is about the modern internet where any criticism of a product has to be handwaved away. The 6600 XT is already a gargantuan waste of money; why defend AMD further screwing with it by doing this x8 bus thing that nvidia would get raked over the coals for doing?
 
Lol, all these threads on the net about how AMD bottlenecked its users by providing a PCIe x8 card.

It just depends on the use case, but overall it still holds up, and it's twice as fast as a Polaris card. @W1zzard, how does PCIe bus overclocking affect performance with cards like this? You could use an older board without an NVMe setup and push the PCIe bus to 112 MHz or so. Should be perfectly possible.
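As a rough illustration of what that bus overclock could buy (my own assumption, not something tested in the review): if link bandwidth scales linearly with the PCIe reference clock, a 112 MHz bus gives about 12% more bandwidth than the stock 100 MHz.

Code:
# Rough estimate of how raising the PCIe reference clock scales link bandwidth.
# Assumes bandwidth scales linearly with the clock (100 MHz nominal) and ignores
# signal-integrity/stability limits -- purely illustrative, not a tested result.
BASE_CLOCK_MHZ = 100.0
GEN3_X8_GBPS = 7.88  # approx. theoretical PCIe 3.0 x8 one-way bandwidth

def overclocked_bandwidth_gbps(base_gbps: float, clock_mhz: float) -> float:
    """Scale the baseline link bandwidth by the reference-clock ratio."""
    return base_gbps * (clock_mhz / BASE_CLOCK_MHZ)

print(f"{overclocked_bandwidth_gbps(GEN3_X8_GBPS, 112.0):.2f} GB/s")  # ~8.83 GB/s

That would still be far short of 4.0 x8 (~15.8 GB/s), so it might narrow the gap in the bandwidth-sensitive outliers but wouldn't eliminate it.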
 
Now there's a strawman argument. Where did I say any of that? I didn't. The only thing I said was that AMD has a habit of kneecapping themselves when they start catching nvidia: rebranding cards, too little memory (4 GB 580), or an x8 bus that impacts performance in some games (6600 XT; the 5500 XT was hit by BOTH of these issues).


You know, outside of software that did show a performance difference. Of course:


So, no real-world consequences. Outside of the real-world consequences, but who counts those?

Right, so any time performance doesn't line up with expectations, there are excuses. Using an x16 bus like nvidia does would fix that problem, but the GPU isn't gimped. Everyone knows that buggy console-port games NEVER sell well or are popular, ever. Right?

If you have to come up with excuses for why examples of an x8 bus hurting performance don't actually matter, you've answered your own question. You've constructed your own argument here that you can never lose, because you immediately discredit anything that goes against your narrative. I don't know what it is about the modern internet where any criticism of a product has to be handwaved away. The 6600 XT is already a gargantuan waste of money; why defend AMD further screwing with it by doing this x8 bus thing that nvidia would get raked over the coals for doing?

Since we don't have 4.0 x16 numbers, you can't say that AMD is "kneecapping" themselves with this choice. There is a grand total of ONE performance scenario in this review where the difference between 4.0 and 3.0 x8 matters (9 FPS vs. 7 is irrelevant). As for previous generations, both the 5500 XT and the 480/580 had 8 GB versions available to those with a tiny bit more money. There's just really no basis for your argument.
 