
AMD Radeon RX 6500 XT Limited To PCIe 4.0 x4 Interface

The recently announced AMD Radeon RX 6500 XT only features a PCIe 4.0 x4 interface, according to specifications and images of the card published on the ASRock site. This is equivalent to a PCIe 3.0 x8 link or a PCIe 2.0 x16 connection, and is a step down from the Radeon RX 6600 XT, which features a PCIe 4.0 x8 interface, and the Radeon RX 6700 XT with its PCIe 4.0 x16 interface. This detail is only specified by ASRock, with AMD, Gigabyte, ASUS, and MSI not mentioning the PCIe interface on their respective pages. The RX 6500 XT also lacks some of the video processing capabilities of other RX 6000 series cards, including H.264/HEVC encoding and AV1 decoding.



 
I get that they're trying to cheap out here and there, but God damn AMD, how low are you gonna go.
RX 6300: OEM only, PCIe 4.0 x1
 
6500XT is so bad that it's good, for people desperate enough :D
 
As if it could use the bandwidth even if they wired it up to x16. People just need another thing to complain about.
x16 is definitely not needed, but x4 is just straight up insulting. Put it in 3.0 system and you got yourself a quarter of the PCI bandwidth of an RX470. A card launched 4 and a half years ago.
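Rough math on that, just as a sketch (using the usual approximations of ~0.5 / ~1 / ~2 GB/s of usable bandwidth per lane for PCIe 2.0 / 3.0 / 4.0 after encoding overhead; the labels are only there for comparison):

```python
# Approximate usable per-direction bandwidth of a PCIe link.
GBPS_PER_LANE = {"2.0": 0.500, "3.0": 0.985, "4.0": 1.969}

def link_bw(gen: str, lanes: int) -> float:
    return GBPS_PER_LANE[gen] * lanes

for gen, lanes, label in [
    ("4.0", 4,  "6500 XT in a 4.0 slot"),
    ("3.0", 4,  "6500 XT in a 3.0 slot"),
    ("2.0", 4,  "6500 XT in a 2.0 slot"),
    ("3.0", 16, "RX 470 (3.0 x16)"),
]:
    print(f"{label:24s} ~{link_bw(gen, lanes):5.1f} GB/s")

# 3.0 x4 (~3.9 GB/s) really is about a quarter of the RX 470's 3.0 x16 (~15.8 GB/s),
# while 4.0 x4 (~7.9 GB/s) lands right on 3.0 x8.
```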
 
As if it could use the bandwidth even if they wired it up to x16. People just need another thing to complain about.
The point is that people who use it on a PCIe 3.0 or perhaps even 2.0 board will also be limited to an x4 link, but with much less bandwidth than 4.0 x4 would provide. Obviously 4.0 x4 is just fine for this card, but it may not be for 3.0 or 2.0 users.

Based on TPU's GPU database and assuming 6500XT has roughly the performance of GTX 980 it could lose up to 14% with 2.0 and up to 6% with 3.0: https://www.techpowerup.com/review/nvidia-gtx-980-pci-express-scaling/21.html
 
Going to wait for W1zzard's numbers before passing judgement. I don't think they'd purposely gimp it unless it doesn't matter anyway.
 
x16 is definitely not needed, but x4 is just straight up insulting. Put it in 3.0 system and you got yourself a quarter of the PCI bandwidth of an RX470. A card launched 4 and a half years ago.

The bandwidth of the RX 470 isn't relevant. A card of that caliber never saturates x16, and was wired for x16 for the current draw. The 6500XT doesn't need the extra slot current nor bandwidth, and thus isn't wired for it.

The point is that people who use it on a PCIe 3.0 or perhaps even 2.0 board will also be limited to an x4 link, but with much less bandwidth than 4.0 x4 would provide. Obviously 4.0 x4 is just fine for this card, but it may not be for 3.0 or 2.0 users.

It won't be a problem on 3.0, and god help anyone still on 2.0. You're going to face UEFI issues on most 2.0 platforms before you ever have the opportunity to face bandwidth problems.

Based on TPU's GPU database and assuming 6500XT has roughly the performance of GTX 980 it could lose up to 14% with 2.0 and up to 6% with 3.0

The 6500 XT also has a large L3 cache buffer (Infinity Cache) like all other desktop RDNA 2 cards, so it isn't as susceptible to bus bandwidth. AMD's and nVidia's cards handle bandwidth differently as well, and are thus not directly comparable.
 
I'd be more worried about the cut video processing capabilities than the PCIe lane count. That makes a huge difference for anyone planning to use this in an HTPC environment.
 
While there might be no discernible difference, I won't say there's no difference between x4, x8, or x16, even if the GPU itself can't sustain the full rate. There has to be some difference from, say, the buffer being loaded in half the time over a link twice as wide, however minuscule.

It would be similar to 1Rx16 vs 2Rx8 SDRAM. Most of the time you won't notice the difference, but it's there if you look hard enough, and some architectures are more sensitive to it than others.
 
The horror... The horror...

Seriously, let's wait until actual benchmarking... Bandwidth equivalent to 3.0 x8 should be plenty for a 1080p card... What would be preferable is if at least one reviewer throws it into an X370/X470 system with PCIe 3.0 and sees if it has any effect... If in the end it keeps cost down and noticeably expands supply, giving up 5% of the performance is a worthwhile trade-off in my opinion.
 
People in the AMD subreddit are complaining because the card also has a small amount of memory (4GB) meaning it will have to make more calls to system memory than a card with more ram; therefore, the limit in pci express bandwidth will be felt more. They did provide an example of another AMD card (can't remember which) that suffers badly when the pci express bandwidth is limited
 
People in the AMD subreddit are complaining because the card also has a small amount of memory (4GB) meaning it will have to make more calls to system memory than a card with more ram; therefore, the limit in pci express bandwidth will be felt more. They did provide an example of another AMD card (can't remember which) that suffers badly when the pci express bandwidth is limited

That would be the 5500XT 4GB with 8x PCIe bandwidth
 
This could push partners to make graphics cards with a physical PCI-E x4 or x8 connector. I remember some partners made a lower-end Nvidia GeForce GPU with a physical PCI-E x1 connector on the card.

I think one was Zotac back in the day for their GT 520 or 710.
 
The horror... The horror...

Seriously, let's wait until actual benchmarking... Bandwidth equivalent to 3.0 x8 should be plenty for a 1080p card... What would be preferable is if at least one reviewer throws it into an X370/X470 system with PCIe 3.0 and sees if it has any effect... If in the end it keeps cost down and noticeably expands supply, giving up 5% of the performance is a worthwhile trade-off in my opinion.
Just wait for @W1zzard or Hardware Unboxed to run the usual PCI-e scaling test when the card is released. Also, if the PCI-e is really limited to an x4 link, then I hope some AIB will make a card with a physical x4 connector for SFF PCs.
 
That would be the 5500XT 4GB with 8x PCIe bandwidth

Correct, thanks! So I guess PCI-E bandwidth may matter for this card more than others. Any way you look at it, it's not worth the asking price. It also has the same maximum theoretical TFLOPS (5.8) as the RX 480, which was also a $200 card on release about 5 years ago, so things have come nowhere in 5 years? I'm an AMD fan (mainly CPU) but it's hard to see how AMD isn't ripping people off here. All I can think of is that the margins on previous GPUs were so low that it was barely worth it for them.
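For what it's worth, the 5.8 TFLOPS figure checks out with the usual 2 FLOPs per shader per clock formula; the shader counts and boost clocks below are the commonly listed specs, so treat the result as approximate:

```python
# Theoretical FP32 throughput: 2 FLOPs per shader per clock.
def tflops(shaders: int, boost_ghz: float) -> float:
    return 2 * shaders * boost_ghz / 1000

print(round(tflops(2304, 1.266), 2))   # ~5.83 -> RX 480 (2304 SPs @ ~1266 MHz boost)
print(round(tflops(1024, 2.815), 2))   # ~5.77 -> RX 6500 XT (1024 SPs @ ~2815 MHz boost)
```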
 
Correct, thanks! So I guess PCI-E bandwidth may matter for this card more than others. Any way you look at it, it's not worth the asking price. It also has the same maximum theoretical TFLOPS (5.8) as the RX 480, which was also a $200 card on release about 5 years ago, so things have come nowhere in 5 years? I'm an AMD fan (mainly CPU) but it's hard to see how AMD isn't ripping people off here. All I can think of is that the margins on previous GPUs were so low that it was barely worth it for them.

It's bad but could be a lot worse is my take. Prices on memory and even raw materials are currently high, but they are also clearly taking advantage of the situation, like they did with the 5700G/5600G/5300G, where prices are still around the same (the 5300G is even an OEM exclusive).

Hey at least it's not a 3090 tie :D
 
It's bad but could be a lot worse is my take. Prices on memory and even raw materials are currently high, but they are also clearly taking advantage of the situation, like they did with the 5700G/5600G/5300G, where prices are still around the same (the 5300G is even an OEM exclusive).

Hey at least it's not a 3090 tie :D

It will be the most expensive "tie" you've probably ever purchased that you cannot wear and needs other parts to function :roll:
 
x16 is definitely not needed, but x4 is just straight up insulting. Put it in 3.0 system and you got yourself a quarter of the PCI bandwidth of an RX470. A card launched 4 and a half years ago.

Sure, but this is a 64-bit bus GPU that even with the fastest memory around will barely crack the 128 GB/s mark. That's about as much bandwidth as the HD 5870 had 13 years ago. Assuming 18 Gbps memory that would amount to 144 GB/s, still slower in raw bandwidth compared to what the GTX 480 had 12 years ago (~177 GB/s).

It most likely could do okay enough on a 4.0 x1 link.
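The VRAM numbers above come straight from the standard formula, bandwidth (GB/s) = per-pin data rate (Gbps) × bus width (bits) / 8; the 18 Gbps case and the GTX 480 figure are just the ones mentioned in the post:

```python
# VRAM bandwidth: per-pin data rate (Gbps) times bus width (bits), divided by 8.
def vram_bw(gbps_per_pin: float, bus_bits: int) -> float:
    return gbps_per_pin * bus_bits / 8

print(vram_bw(16.0, 64))   # 128.0 GB/s -> 16 Gbps GDDR6 on the 6500 XT's 64-bit bus
print(vram_bw(18.0, 64))   # 144.0 GB/s -> the hypothetical 18 Gbps case above
print(vram_bw(3.7, 384))   # ~177.6 GB/s -> roughly the GTX 480 figure quoted above
```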
 
All connections look the same in the picture, how does one even see what is 4x, 8x and 16x?
 
All connections look the same in the picture, how does one even see what is 4x, 8x and 16x?

Look closer and notice the lack of PCB traces along the entire connector on the 6500 XT, versus their presence up to the middle of the connector on the 6600 XT and along the whole thing on the 6700 XT.

(ASRock also confirms this on the spec sheet; it could be just an ASRock thing, but that's unlikely.)
 
People in the AMD subreddit are complaining because the card also has a small amount of memory (4GB) meaning it will have to make more calls to system memory than a card with more ram; therefore, the limit in pci express bandwidth will be felt more. They did provide an example of another AMD card (can't remember which) that suffers badly when the pci express bandwidth is limited

PCI-E traffic is almost only GPU commands from the processor, asset exchange with the processor, and asset loading/swapping.

For any card, PCI-E is just too slow on the latency side to be useful for rendering assets when local memory is overwhelmed. You first have to get past the latency of the bus, then the latency of the destination (memory latency if you access main memory, or SSD latency if you go through storage). That might not be a big issue at very low FPS, but by its nature it prevents average to high FPS. We have been hearing since AGP 1x that you can use the bus to access main memory to expand the amount of RAM the GPU has access to, but it has never really been used that way. Well, not for live rendering at least.

But it is being used to load or swap assets more easily.

Infinity Cache will not cache anything that isn't in GPU memory since it's an L3 cache, so it will not be any help for the traffic that goes over the PCI-E bus.

In reality, this card will run games at lower resolutions (a smaller frame buffer means less memory needed), will probably not run games at max details unless they're very old, and the frame rate will be average.

I think 4 GB for that kind of card is enough, and since it will be lower FPS with lower details, the bus will probably get a lower number of GPU commands. Previous bus speed tests seem to demonstrate that bus speed only matters at really high FPS. And we all know that if you go beyond the local GPU memory, you will stutter due to the bus latency.

This chip was designed to be a laptop chip running at low power in low to mid-range gaming laptops, paired with an IGP. The IGP would have all the video decoding functions. In those situations, while even more limited, the smaller bus would also mean less power usage.

But since every GPU produced is sold and the GeForce GT 710 is still sold at 100+ USD, AMD saw an opportunity to sell more GPUs. The 4 GB buffer will keep it away from the miners as it's no longer enough to mine with. So for low-end gamers, it might not be such a bad deal in the end. But avoid it for a media box if possible; better to just get a good IGP instead for that purpose.

Will wait for benchmarks and the real price to see if it's a success or not. But I suspect it's going to be a great success... for AMD's finances.
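To put a rough number on the stutter point above (my own back-of-the-envelope figures, not from any review: ~3.9 GB/s usable on a 3.0 x4 link and a 60 FPS frame budget):

```python
# How long spilling assets over the bus takes once the 4 GB VRAM overflows.
def transfer_ms(megabytes: float, link_gb_s: float) -> float:
    return megabytes / 1024 / link_gb_s * 1000

FRAME_MS = 1000 / 60                 # ~16.7 ms budget per frame at 60 FPS
for mb in (64, 256, 1024):
    t = transfer_ms(mb, 3.9)         # PCIe 3.0 x4, ~3.9 GB/s usable
    print(f"{mb:4d} MB over 3.0 x4: {t:6.1f} ms (~{t / FRAME_MS:.1f} frame budgets)")
```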
 
If it's OEM only, and only for systems with PCI-E 4.0 out of the box, that's more than enough bandwidth.
 