
PCI-e 3.0 x8 may not have enough bandwidth for RX 5500 XT

If these games consume more VRAM than there is available on the card, then this isn't actually about the PCI-e bandwidth, is it? This was spun off in the most stupid way that I can think of.

You can clearly see in those graphs that there is an obvious separation between the 8GB and the 4GB results, and that it has almost nothing to do with the PCI-e connection speed. When you need to swap memory contents, obviously a higher transfer rate between the host and the card is going to be beneficial.

This card has enough PCI-e bandwidth; what it doesn't have is enough memory.
 
I'm not buying that. Even with an RTX 2080 Ti there is little to no penalty from 3.0 x8.

 
If these games consume more VRAM than there is available on the card then this isn't actually about the PCI-e bandwidth, is it ? This was spun off in the most stupid way that I can think of.

You can clearly see in those graphs that there is an obvious separation from the 8GB and the 4GB results and that this has almost nothing to do with the PCI-e connection speed. When you need to swap memory contents obviously a higher transfer rate between the host and card is going to be beneficial.

This card has enough PCI-e bandwidth, what it doesn't have is enough memory.
I kinda agree. Bandwidth is a different thing from VRAM capacity. Although you can see that the 4GB version performs better on PCIe 4.0 than on 3.0; maybe the greater bandwidth can compensate for the lower VRAM capacity. Also, there is not much difference with 8GB on either PCIe version. I think it is actually good that it works better on 4.0. This means there is some sort of improvement, and maybe in time switching to PCIe 4.0 will be justified.
 
I'm not buying that. Even with an RTX 2080 Ti there is little to no penalty from 3.0 x8.

A 2080Ti will also never run out of VRAM. A 4GB card might.

This card has enough PCI-e bandwidth; what it doesn't have is enough memory.
The card doesn't have enough VRAM, and therefore needs additional PCIe bandwidth to compensate.
I mean, how do you explain the differences in the 4GB results, some of which are insanely large? This one is 20-ish%. That's in a different product category.
[attachment: 1.png]
 
A 2080Ti will also never run out of VRAM. A 4GB card might.
GTX 980 has 4GB as well.

 
GTX 980 has 4GB as well.

That testing was done exactly 5 years ago. Games have become more demanding.
 
That testing was done exactly 5 years ago. Games have become more demanding.
I knew that the reply would be something like that. :)


With the 1080, is it the same "8GB is enough"?
 
The card doesn't have enough VRAM, and therefore needs additional PCIe bandwidth to compensate.

Nothing will ever compensate for the lack of VRAM; the 12 GB/s PCI-e transfer rate will never make up for the 200+ GB/s the VRAM is capable of. For all intents and purposes, the performance will still degrade in a noticeable manner irrespective of the PCI-e bandwidth.

I find it really strange that instead of saying PCI-e 4.0 brings some slight advantage to cards that are VRAM limited, which is the reality here, this was twisted into "AMD made a PCI-e limited card".
 
I knew that the reply would be something like that. :)


With the 1080, is it the same "8GB is enough"?
I'm not sure you're getting the point. All that testing was done on the flagship card of its time, which had the highest amount of VRAM available then, an amount that none of the games could max out.

I find it really strange that instead of saying PCI-e 4.0 brings some slight advantage to cards that are VRAM limited, which is the reality here, this was twisted into "AMD made a PCI-e limited card".
In the 4GB case, the advantage was not slight. This would most likely not happen if the cards were x16.
 
This would most likely not happen if the cards were x16

This is the wrong conclusion; this would not have happened if there was more VRAM. You can see that even under PCI-e 4.0 (the equivalent of x16 under PCI-e 3.0) there's still a gap between it and the 8GB version. The root of the problem is the amount of VRAM; there is no going around it.
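That gap can be sketched with a toy back-of-the-envelope model. All figures here are round assumptions, not measurements: ~224 GB/s for the card's GDDR6 and ~8 / ~16 GB/s for a PCIe 3.0 / 4.0 x8 link.

```python
# Toy model (assumed round numbers, not measurements): once the working set
# exceeds VRAM, the overflow is streamed over PCIe, so the effective rate
# is dragged down by the slow link.
def effective_bandwidth_gbs(working_set_gb, vram_gb, vram_bw_gbs, pcie_bw_gbs):
    overflow = max(0.0, working_set_gb - vram_gb)  # portion spilled to system RAM
    in_vram = working_set_gb - overflow
    # time for one full pass over the working set
    time_s = in_vram / vram_bw_gbs + overflow / pcie_bw_gbs
    return working_set_gb / time_s

# Hypothetical 5 GB working set on a 4 GB card with 224 GB/s memory:
print(effective_bandwidth_gbs(5, 4, 224, 8))   # PCIe 3.0 x8 (~8 GB/s): ~35 GB/s
print(effective_bandwidth_gbs(5, 4, 224, 16))  # PCIe 4.0 x8 (~16 GB/s): ~62 GB/s
print(effective_bandwidth_gbs(4, 4, 224, 8))   # fits in VRAM: full 224 GB/s
```

In this sketch, doubling the link rate nearly doubles effective throughput for the spilled working set, yet both cases stay far below the 224 GB/s you get when everything fits in VRAM, which is exactly the point: PCIe 4.0 softens the blow, more VRAM removes it.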
 
I'm not sure you're getting the point. All that testing was done on the flagship card of its time, which had the highest amount of VRAM available then, an amount that none of the games could max out.
I do get exactly the point. And they are still fully comparable to today's cards, just like my 980 Ti can be compared to the 1660 Super/Ti: both have similar performance, and even the memory amount is the same.

The 1080 was released in 2016; Mirror's Edge Catalyst from that year, for example, can utilize more than 6GB.
 
I do get exactly the point. And they are still fully comparable to today's cards, just like my 980 Ti can be compared to the 1660 Super/Ti: both have similar performance, and even the memory amount is the same.

The 1080 was released in 2016; Mirror's Edge Catalyst from that year, for example, can utilize more than 6GB.
Everything you said would make sense if all the testing was done on the same set of games. That 980 with 4GB was tested with games that couldn't max out its VRAM at the time; the 5500 XT was tested with games that can. Your 980 Ti CAN be compared to the 1660 because at 1080p you are never going to run out of VRAM. And just because a game utilizes 6GB does not mean it needs 6GB. In the case of 4GB cards, more often than not, you need more than 4GB.
 
I give up, this is like playing chess with a pigeon.
 
TL;DR - In my testing of 4GB and 8GB cards, we saw significant differences in results in Forza, BF V, Far Cry 5, and SOTR (using default ultra settings, 1080p).

EDIT: It is interesting that AMD set up the card this way... essentially shooting themselves in the foot, with many users running PCIe 3.0 systems and wanting the budget card (how many people are rocking a brand-spanking-new X570 motherboard with PCIe 4.0 and buying an entry-level GPU?)

This seems to be caused by the swapping of data when the VRAM is full... yikes. Why did they do this to a 4GB card? It makes more sense to do it to the 8GB card, where the data transactions are fewer... yikes.
 
When VRAM runs out, the data will get put into system RAM and accessed over PCIe; doubling that bandwidth is expected to make a huge difference, and it does seem to do exactly that.
That
 
I give up, this is like playing chess with a pigeon.

This whole thread is an insult to all common sense. The 4GB card runs out of VRAM, PCI-e 4.0 helps with this somewhat, and that's all there is to it; there is nothing here that we haven't seen before.

Everyone is just out of their minds, suggesting that AMD made all of this on purpose for their X570 motherboards and whatnot. It numbs my mind reading these conspiracy theories.

4GB is starting to be a significant constraint. No wonder; something like the R9 290 shipped with 4GB as standard what, six years ago? It's time to move on.
 
The 5700 XT has twice the PCIe lanes and twice the memory. At the same time, the article addresses the memory part, where the PCIe 3.0 vs 4.0 results are pretty close with 8GB of memory. PCIe x8 is often enough for low-end cards, but the 5500 is the first one with 4.0 capability, so there is nothing to compare this to.

When VRAM runs out, the data will get put into system RAM and accessed over PCIe; doubling that bandwidth is expected to make a huge difference, and it does seem to do exactly that.
Frametimes are in milliseconds; less is better, and FPS is 1000 / frametime in ms. For example, at a 25 ms frametime, 1000/25 = 40 FPS.
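The conversion above as a one-liner, in case anyone wants to redo the math on the charts:

```python
# FPS from a frametime in milliseconds: a frametime is ms per frame,
# so frames per second is simply 1000 divided by it.
def fps_from_frametime_ms(frametime_ms):
    return 1000.0 / frametime_ms

print(fps_from_frametime_ms(25.0))  # 25 ms per frame -> 40.0 FPS
```

e.g. a 16.7 ms frametime works out to roughly 60 FPS.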

Oh, ok. This makes sense based on what (little) knowledge I have in CA.
 
This is the wrong conclusion, this would have not happen if there was more VRAM. You can see that even under PCI-e 4.0 (the equivalent of x16 under PCI-e 3.0) there's still a gap between it and the 8GB version, the root of the problem is the amount of VRAM there is no going around it.
I think that many modern games use system RAM to compensate for any lack of VRAM. So when the 5500 XT is connected via PCIe 3.0 x8 and tries to reach system RAM, it slows down much more than when it reaches RAM via PCIe 4.0. My 5c.

UPDATE: Some seem to have posted the same reasoning already, although somewhat in reverse. Hadn't read that, but good for the forum to solve these tech mysteries soon enough.
 
Everyone is just out of their minds, suggesting that AMD made all of this on purpose for their X570 motherboards and whatnot. It numbs my mind reading these conspiracy theories.
Literally only one post in this entire thread suggested that, and more as a joke than anything else.
The 4GB card runs out of VRAM, PCI-e 4.0 helps with this somewhat, that's all there is to it, there is nothing here that we haven't seen before.
Which is exactly what everyone was saying. And that it's married to an x8 bus doesn't help either.
I give up, this is like playing chess with a pigeon.
Hey you can identify as a pigeon, owl or an AH64 Apache, who am I to judge...
 
If these games consume more VRAM than there is available on the card, then this isn't actually about the PCI-e bandwidth, is it? This was spun off in the most stupid way that I can think of.

You can clearly see in those graphs that there is an obvious separation between the 8GB and the 4GB results, and that it has almost nothing to do with the PCI-e connection speed. When you need to swap memory contents, obviously a higher transfer rate between the host and the card is going to be beneficial.

This card has enough PCI-e bandwidth; what it doesn't have is enough memory.
+1x10
 
It's something we see on yuzu: PCIe bandwidth can be saturated (even 3.0 x16) when you do many RAM<->VRAM copies, like when you run out of VRAM.
This is normal behaviour, and a very stupid way to make people look at PCIe 4.0.

What's with this tendency to give cheaper cards fewer PCIe lanes? We've had x16 since the first days of the GeForce 6; now we get nerfed with x4 and x8 cards.
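A rough sketch of why those copies hurt. The link rates here are idealized assumptions (real-world usable bandwidth is lower), and the 256 MB per frame is a made-up illustrative number:

```python
# Assumed, idealized link rates for an x8 slot (GB/s); real usable
# bandwidth is lower due to protocol overhead.
PCIE3_X8_GBS = 8.0
PCIE4_X8_GBS = 16.0

def copy_time_ms(megabytes, link_gbs):
    # MB -> GB, divide by link rate (GB/s), convert seconds -> ms
    return megabytes / 1024.0 / link_gbs * 1000.0

# Hypothetical: swapping 256 MB of assets every frame.
print(copy_time_ms(256, PCIE3_X8_GBS))  # ~31 ms per frame on 3.0 x8
print(copy_time_ms(256, PCIE4_X8_GBS))  # ~16 ms per frame on 4.0 x8
```

At 60 FPS the whole frame budget is only ~16.7 ms, so ~31 ms of transfers alone over 3.0 x8 would tank the framerate; 4.0 x8 halves that cost, which fits the review's 4GB results.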
 
PCIe 4.0 can help: roughly 4,352 MB of total usable memory (4,096 MB of VRAM plus ~256 MB over the bus, at 224 + 14 GB/s),
while PCIe 3.0 can be saturated with only 128 MB less, about 4,224 MB (224 + 7 GB/s).

So they found the perfect example that used 128-256 MB more than those 4,352 MB, and the PCIe 3.0 latency manifested noticeably.
 
It's something we see on yuzu: PCIe bandwidth can be saturated (even 3.0 x16) when you do many RAM<->VRAM copies, like when you run out of VRAM.
This is normal behaviour, and a very stupid way to make people look at PCIe 4.0.

What's with this tendency to give cheaper cards fewer PCIe lanes? We've had x16 since the first days of the GeForce 6; now we get nerfed with x4 and x8 cards.
Then RX 5500 XT would have been even more expensive.
 
Then RX 5500 XT would have been even more expensive.
Would it though? If I were AMD, the pittance it takes to run x16 traces would be worth it for the performance improvements on 4GB cards that run out of VRAM. For the 8GB card it still helps...

Win-win... this move is really questionable considering the results of the 4GB card in VRAM-hungry titles.
 
Inb4 people forget that the GT 1030 suffers from the same problem.
 