
AMD Radeon RX 6500 XT PCI-Express Scaling

W1zzard

Administrator
Staff member
The AMD Radeon RX 6500 XT comes with only a narrow PCI-Express x4 interface. In this article, we took a closer look at how performance is affected when running at PCI-Express 3.0; also included is a full set of data for the academically interesting setting of PCI-Express 2.0.
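As a quick reference for the numbers discussed below: theoretical link bandwidth at x4 roughly halves with each older PCIe generation. A minimal back-of-the-envelope sketch (the per-lane figures are the commonly cited rates after encoding overhead; the function name is just for illustration):

```
# Approximate per-lane throughput in GB/s after encoding overhead
# (8b/10b for PCIe 1.0/2.0, 128b/130b for PCIe 3.0/4.0).
PER_LANE_GBPS = {1: 0.25, 2: 0.5, 3: 0.985, 4: 1.969}

def link_bandwidth(gen: int, lanes: int) -> float:
    """Rough one-directional PCIe bandwidth in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

for gen in (4, 3, 2):
    print(f"PCIe {gen}.0 x4: ~{link_bandwidth(gen, 4):.1f} GB/s")
# PCIe 4.0 x4: ~7.9 GB/s
# PCIe 3.0 x4: ~3.9 GB/s
# PCIe 2.0 x4: ~2.0 GB/s
```

So an RX 6500 XT dropped into a PCIe 3.0 slot has roughly the raw link bandwidth of a 4.0 x2 connection, which is why the host platform's PCIe generation matters more here than it does for x16 cards.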

 
If this card cannot even get 100 FPS in Doom Eternal at 1080p, it is a waste. This reminds me of the R7 250.
 
This design is a bit baffling to me. There's no way AMD isn't aware of the factors you mentioned regarding the hardware that a good chunk of potential customers will be pairing a card like this with. Beyond that, the fact that the relative performance loss from 4.0 to 3.0 is fairly consistent across tests (outside of RT) suggests, as you mentioned, that performance may be bus-limited even at 4.0x4. Why would AMD intentionally limit performance like that? That is not a rhetorical question.
 
As if it could use the bandwidth even if they wired it up to x16. People just need another thing to complain about.
I don't think they purposely gimp it unless it doesn't matter anyway.
The 6500 XT also has a larger L3 cache buffer like all other desktop RDNA 2 cards, so it isn't as susceptible to bus bandwidth limitations.
It's probably enough bandwidth for that performance anyways, so it would be much ado about nothing.
I just trust AMD engineers know what they are doing there
I doubt it will saturate the bus even with that downgrade
Lots of engineers on these forums these days, whose skills obviously exceed those of most of AMD's professionals out there.
 
The only issue is it's slower than the RX 570 in most games.
 
The only issue is it's slower than the RX 570 in most games.
That's not the only issue by a long shot. How about "it's not only slower than the 570, but it's also $200 AND is gimped on non-AMD platforms"?

Because even if it WAS faster than a 570, that still means it offers worse price/performance than a $200 RX 480 from 6 YEARS AGO. And it would still scale very poorly on anything that isn't Rocket Lake, Alder Lake, or an AMD 500-series chipset, AKA the majority of buyers looking for a low-end card like this. Few are going to be buying a $200 GPU to go with a new $500 CPU, after all, and anyone with a PCIe 3.0 platform (AMD from K10 to Zen 2, Intel from Ivy Bridge to Comet Lake) is going to lose even more performance.

At $200 this thing would need to be consistently outperforming the 1660 Super, and even then it wouldn't be a very good value. Add on that pathetic 4 GB VRAM buffer and this thing should be a sub-$100 GT 1030 competitor. And don't forget, this thing draws 100 watts of power when gaming, compared to the ~140 watts pulled by the 6-year-old 14nm RX 480, the ~70 watts pulled by the 12nm 1650, or the ~95-100 watts pulled by the 1650 Super, which occasionally outperforms this 6nm RDNA2 card.

This thing is an atrocious GPU.
 
Does anyone have an educated guess as to how much is being saved with this x4 bus compared to, let's say, an x8 bus? I'm specifically interested in raw materials (PCB), SMDs on the PCB, and perhaps die space.

I wonder whether the widespread shortage of devices and materials also had something to do with this decision.
 
Does anyone have an educated guess as to how much is being saved with this x4 bus compared to, let's say, an x8 bus? I'm specifically interested in raw materials (PCB), SMDs on the PCB, and perhaps die space.

I wonder whether the widespread shortage of devices and materials also had something to do with this decision.
PCB/SMD only, 20-30 cents? Maybe $1 with the current component shortage.
No idea about die space.
 
Gamers should not buy that GPU for even $200 with hw encoding absent and 4GB of VRAM, even if they own a PCIE 4.0 board. AMD should price it closer to $150. But we see people buying $2000 GPUs so...
 
Gamers should not buy that GPU for even $200 with hw encoding absent and 4GB of VRAM, even if they own a PCIE 4.0 board. AMD should price it closer to $150. But we see people buying $2000 GPUs so...
Desktop PCs will usually have enough oomph/power to just do encoding and decoding in software, and laptops will all have the dedicated hardware integrated into the iGPU. It's not such a big deal imho.
 
Gamers should not buy that GPU for even $200 with hw encoding absent and 4GB of VRAM, even if they own a PCIE 4.0 board. AMD should price it closer to $50. But we see people buying $2000 GPUs so...
FTFY. The GT 1030 was a $79 MSRP product with GDDR5 memory when it came out, and that had media decoders on board. This... doesn't.

Desktop PCs will usually have enough oomph/power to just do encoding and decoding in software, and laptops will all have the dedicated hardware integrated into the iGPU. It's not such a big deal imho.
That doesn't justify wasting power on it; decoding hardware especially has been standard in GPUs for over a decade at this point. The last AMD GPU lineup that didn't have a dedicated decoder was Evergreen. From 2009.
 
FTFY. The GT 1030 was a $79 MSRP product with GDDR5 memory when it came out, and that had media decoders on board. This... doesn't.


That doesn't justify wasting power on it; decoding hardware especially has been standard in GPUs for over a decade at this point. The last AMD GPU lineup that didn't have a dedicated decoder was Evergreen. From 2009.
but it does cost additional die space; something there is an undeniable shortage of.
If the absence of encoding hardware means more people will be able to get a card at all instead of no card, then there is something to be said for that imho.
 
What the hell is happening in The Witcher 3 and Control's RT test?
 
Well, if it was only crippled at 1440p or above it wouldn't be a big deal, but seeing a significant penalty at 1080p for a card designed for that segment is alarming. I see that @W1zzard's test rig used 4000 MT/s RAM. I am willing to bet that performance tanks drastically with more mainstream system memory speeds.
 
but it does cost additional die space; something there is an undeniable shortage of.
If the absence of encoding hardware means more people will be able to get a card at all instead of no card, then there is something to be said for that imho.
The media decoder is TINY. You have a point on the encoder, but a decoder? Really? That's not going to make a substantial enough difference in die space to affect availability.

There's also the issue of the RX 560, built on the 14nm node, which has a media decoder, a media ENCODER (4K H.264), the same number of cores (1024), and is only 137 mm² in size. The 6500 XT is 107 mm². If they wanted to save "die space", as you say, they could have simply dropped Infinity Cache altogether, given it a 128-bit memory bus plus a media encoder and decoder, and had a smaller, better GPU.

There really is zero defending AMD on this one. A 560 on 6nm would likely not only have performed just as well, given the atrocious performance of the 6500 XT, but would have been nearly the same size die-wise.

EDIT: there's also the "it gets more cards to people" argument, which is blatantly wrong. The 6500 XT at $200 was already sold out a minute after the review dropped, and the ones in stock are now going for $350. Bet you good money they'll be gone within an hour, and then it will be unobtainium like the rest of the 6000 line.
 
The media decoder is TINY. You have a point on the encoder, but a decoder? Really? That's not going to make a substantial enough difference in die space to affect availability.

There's also the issue of the RX 560, built on the 14nm node, which has a media decoder, a media ENCODER (4K H.264), the same number of cores (1024), and is only 137 mm² in size. The 6500 XT is 107 mm². If they wanted to save "die space", as you say, they could have simply dropped Infinity Cache altogether, given it a 128-bit memory bus plus a media encoder and decoder, and had a smaller, better GPU.

There really is zero defending AMD on this one. A 560 on 6nm would likely not only have performed just as well, given the atrocious performance of the 6500 XT, but would have been nearly the same size die-wise.

EDIT: there's also the "it gets more cards to people" argument, which is blatantly wrong. The 6500 XT at $200 was already sold out a minute after the review dropped, and the ones in stock are now going for $350. Bet you good money they'll be gone within an hour, and then it will be unobtainium like the rest of the 6000 line.
Well, okay. I'm just trying to figure out why certain choices were made in the context of a company which is full of smart people and which is trying to sell as many GPUs as possible.
I don't believe this GPU is what it is because "people are dumb", or that they're intentionally building something worse than what they're capable of given the confines of their design goals.
 
You've got to love e-waste! And, livin' in the twilight zone that is 2020 thru ???? (not ending soon enough!)
 
Well, okay. I'm just trying to figure out why certain choices were made in the context of a company which is full of smart people and which is trying to sell as many GPUs as possible.
I don't believe this GPU is what it is because "people are dumb", or that they're intentionally building something worse than what they're capable of given the confines of their design goals.

There are a couple of explanations that spring to mind. The first is that project management set the design constraints to intentionally limit the card to a specific performance envelope, though why that limit would be so far behind the 6600 XT is anyone's guess. The other is that, as others have hypothesized, this chip was primarily designed as a mobile component and is being repurposed for desktop parts. That may not be true, but it could help explain some of the weird design decisions, like the 64-bit memory bus and x4 lane count.

As an aside, it's easy to blame the engineers and designers, but they're not always the reason a product is bad.
 
Well, okay. I'm just trying to figure out why certain choices were made in the context of a company which is full of smart people and which is trying to sell as many GPUs as possible.
I don't believe this GPU is what it is because "people are dumb", or that they're intentionally building something worse than what they're capable of given the confines of their design goals.
I feel like this is very optimistic thinking. Over the last few years we have seen the proverbial mask slip numerous times as companies treat their customers like cash bags on two legs, insulting them and treating them like garbage. There is not much reason to believe that AMD is excluded from this mindset: the company that jacked up the price of Vega after launch and said "oh well, it was an introductory price, didn't you know?", sold the AM4 platform on "support till 2020" then changed its tune to "when we said support, we meant we supported you buying a new motherboard before 2020 if you want our new CPU", raised the price of its 6-core offering from $179 to $300, and left the entire budget market to rot for over a year.

Then again, this is AMD, a company that has never failed to mismanage itself out of success. The examples are numerous over the years; to keep it blunt: Polaris was a fluke, and so was the 5700 XT. The nonsense over the 5500 XT/5600 XT/Vega launches is normal for them. The 6500 XT is likely engineered to be as cheap as possible, since everything sells right now regardless of how good it is.

There are a couple of explanations that spring to mind. The first is that project management set the design constraints to intentionally limit the card to a specific performance envelope, though why that limit would be so far behind the 6600 XT is anyone's guess. The other is that, as others have hypothesized, this chip was primarily designed as a mobile component and is being repurposed for desktop parts. That may not be true, but it could help explain some of the weird design decisions, like the 64-bit memory bus and x4 lane count.

As an aside, it's easy to blame the engineers and designers, but they're not always the reason a product is bad.
Agreed, 64-bit may have been a command from upper management that MUST be adhered to in order to maintain product segmentation, regardless of whether a 6 GB 96-bit bus would have worked better.
 
I feel like this is very optimistic thinking. Over the last few years we have seen the proverbial mask slip numerous times as companies treat their customers like cash bags on two legs, insulting them and treating them like garbage. There is not much reason to believe that AMD is excluded from this mindset: the company that jacked up the price of Vega after launch and said "oh well, it was an introductory price, didn't you know?", sold the AM4 platform on "support till 2020" then changed its tune to "when we said support, we meant we supported you buying a new motherboard before 2020 if you want our new CPU", raised the price of its 6-core offering from $179 to $300, and left the entire budget market to rot for over a year.

Then again, this is AMD, a company that has never failed to mismanage itself out of success. The examples are numerous over the years; to keep it blunt: Polaris was a fluke, and so was the 5700 XT. The nonsense over the 5500 XT/5600 XT/Vega launches is normal for them. The 6500 XT is likely engineered to be as cheap as possible, since everything sells right now regardless of how good it is.


Agreed, 64-bit may have been a command from upper management that MUST be adhered to in order to maintain product segmentation, regardless of whether a 6 GB 96-bit bus would have worked better.
It's certainly not the first time I've been called an optimist :p Thanks all for the answers regardless.

Regarding the 64-bit vs 96-bit bus: I wonder whether they even have a GDDR "design module" (or whatever you call the things they build their chip designs out of) in a size smaller than 64-bit.
 
It's certainly not the first time I've been called an optimist :p Thanks all for the answers regardless.

Regarding the 64-bit vs 96-bit bus: I wonder whether they even have a GDDR "design module" (or whatever you call the things they build their chip designs out of) in a size smaller than 64-bit.
There's nothing technically stopping you: their memory chips use 32-bit channels each, so you could put a single memory chip on a 32-bit bus and it would technically work. I doubt the standards body actually wrote one out, though, given 32-bit cards... wait, did those ever exist in the first place?
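For what it's worth, the raw math behind the bus-width question is simple: peak GDDR6 bandwidth is just the per-pin data rate times the bus width. A rough sketch, assuming the 18 Gbps GDDR6 the 6500 XT ships with (the function name is just for illustration):

```
def gddr6_bandwidth(bus_width_bits: int, data_rate_gbps: float = 18.0) -> float:
    """Peak memory bandwidth in GB/s for a given bus width and per-pin data rate."""
    return bus_width_bits * data_rate_gbps / 8

for width in (32, 64, 96, 128):
    print(f"{width}-bit @ 18 Gbps: {gddr6_bandwidth(width):.0f} GB/s")
# 32-bit @ 18 Gbps: 72 GB/s
# 64-bit @ 18 Gbps: 144 GB/s  (what the 6500 XT actually has)
# 96-bit @ 18 Gbps: 216 GB/s
# 128-bit @ 18 Gbps: 288 GB/s
```

A hypothetical 96-bit card would also naturally land on 6 GB with three 16 Gb (2 GB) chips, which is where the 6 GB suggestion above comes from.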
 