
PCI-e 3.0 x8 may not have enough bandwidth for RX 5500 XT

So AMD's lack of good delta compression finally bites them.

Inb4 people forget the GT 1030 suffers from the same problem.

People aren't expecting playable framerates on a 1030. Take your whataboutism elsewhere.
 
People aren't expecting playable framerates on a 1030. Take your whataboutism elsewhere.
That's not an excuse. It's a Pascal like the rest.
 
My point is, no one is excused for this crap (well, maybe Intel, for now). Those GPUs are already dirt cheap to produce, and I bet the savings from fewer PCIe lanes are marginal at best. It sounds more like AMD and Nvidia found a new way to make people look at low/mid-range GPUs with more VRAM, by artificially limiting their bandwidth.
Otherwise, who would buy the 8 GB version of the 5500?
 
Could it be a bug in the card's BIOS that affects its performance on certain systems/chipsets?
 
So AMD's lack of good delta compression finally bites them.
There are also driver optimizations; things like object culling matter a lot when you are VRAM-limited.
It isn't as clear-cut as Nvidia simply having better compression. Here are cases where the 1650 Super actually fares worse at 1440p even though it fared better than the 5500 XT 4GB at 1080p.
[Charts: Borderlands 3 at 1920x1080 and 2560x1440; Anno 1800 at 1920x1080 and 2560x1440]

People aren't expecting playable framerates on a 1030. Take your whataboutism elsewhere.
People were definitely expecting playable fps from the 1050 Ti, and many of those were x8 cards.
This is also true for the 1650.
So all of a sudden these lowest-end GPUs get "bottlenecked" by the PCIe bus while everything else doesn't?
[Photos: EVGA GTX 1050 Ti (04G-P4-6253-KR) and a dual-fan GTX 1650]
 
Not sure what the problem is with this discussion. Get an 8 GB card and the problem is solved. Besides, some argue that it's unforgivable to limit cards to PCIe x8? If PCIe 3.0 x8 is the maximum the card can utilize, that would be weird; maybe it's a BIOS problem. And do you really need more than 3.0 x8? I've looked over benchmarks measuring how PCIe bandwidth affects gameplay, and the difference between x8 and x16 wasn't there. Maybe that's changing now? I think the lack of VRAM is the issue, not bandwidth, because with an 8 GB card the problem is gone. Using system memory instead of VRAM is always slower no matter how many PCIe lanes you have.
 
Well, PCIe 4.0 x8 has the same bandwidth as PCIe 3.0 x16, so there are no bandwidth constraints there.
Who is going to pair a midrange RX 5500 4GB card with an X570 mainboard just for PCIe 4.0?
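The equal-bandwidth claim is easy to sanity-check from the published per-lane transfer rates and line encodings (8b/10b for gens 1-2, 128b/130b from gen 3 on). A rough sketch, with a made-up helper name and ignoring protocol overhead beyond line encoding:

```python
# Per-direction PCIe bandwidth = per-lane rate x encoding efficiency x lanes.
# GT/s figures and encodings are the published per-generation values.
GT_PER_S = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0}
ENCODING = {1: 8 / 10, 2: 8 / 10, 3: 128 / 130, 4: 128 / 130, 5: 128 / 130}

def bandwidth_gb_s(gen: int, lanes: int) -> float:
    """Approximate usable bandwidth in GB/s (hypothetical helper name)."""
    return GT_PER_S[gen] * ENCODING[gen] * lanes / 8  # bits -> bytes

print(f"3.0 x16: {bandwidth_gb_s(3, 16):.2f} GB/s")  # ~15.75
print(f"4.0 x8 : {bandwidth_gb_s(4, 8):.2f} GB/s")   # ~15.75, same as 3.0 x16
print(f"3.0 x8 : {bandwidth_gb_s(3, 8):.2f} GB/s")   # ~7.88
```

So on a PCIe 3.0 board the x8-only card really does get half the bandwidth a full x16 link would provide, while on a 4.0 board its x8 link matches 3.0 x16.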
 
Aren't they testing the PCIe 3.0 runs on a 9900K and the 4.0 runs on Ryzen 3xxx? If so, how is this apples to apples?
Who is going to pair a midrange RX 5500 4GB card with an X570 mainboard just for PCIe 4.0?
Depends on what you're doing with it, are you telling me people don't pair a TR (or Intel HEDT) with Nvidia's mid range xx50 cards? This card may simply be used for driving the display & occasional gaming for users sporting 3900x or 3950x.
 
Because then you won't run out of VRAM, nor do you need more than a low-end card.
 
Who is going to pair a midrange RX 5500 4GB card with an X570 mainboard just for PCIe 4.0?
From a market standpoint, somebody will. Who? I don't know, because I didn't plan or market these products.
Although, if you consider a 3950X or 3900X as a small server or a workstation, the 5500 non-XT and XT might be of value to you in the content-creation ballpark.

Aren't they testing the PCIe 3.0 runs on a 9900K and the 4.0 runs on Ryzen 3xxx? If so, how is this apples to apples?
Depends on what you're doing with it, are you telling me people don't pair a TR (or Intel HEDT) with Nvidia's mid range xx50 cards? This card may simply be used for driving the display & occasional gaming for users sporting 3900x or 3950x.
BTW, PCIe 4.0 x8 has exactly the same bandwidth as PCIe 3.0 x16. The graph doesn't say how many lanes they are actually using. I think this problem is blown out of proportion anyway. It's like creating a problem for a reason rather than actually checking whether there is one.
 
BTW, PCIe 4.0 x8 has exactly the same bandwidth as PCIe 3.0 x16. The graph doesn't say how many lanes they are actually using. I think this problem is blown out of proportion anyway. It's like creating a problem for a reason rather than actually checking whether there is one.
But Navi 14 has 8 PCIe lanes, not 16. When you put a 5500 XT in a PCIe x16 slot, it will only initialize 8 lanes because the card only has 8 physical lanes.
 
From a market standpoint, somebody will. Who? I don't know, because I didn't plan or market these products.
Although, if you consider a 3950X or 3900X as a small server or a workstation, the 5500 non-XT and XT might be of value to you in the content-creation ballpark.
My point may have been missed. What I was trying to get at was: given the performance hit from AMD's 4 GB RX 5500 sharing system RAM via the x8 PCIe 4.0 interface, who out there is going to bother with PCIe 4.0 when it is likely to be replaced by PCIe 5.0? The combination of the two didn't make any sense to me just for the specification bump.
 
But Navi 14 has 8 PCIe lanes, not 16. When you put a 5500 XT in a PCIe x16 slot, it will only initialize 8 lanes because the card only has 8 physical lanes.
Well, that is an issue, but AMD claims it is enough. I think it is enough, since the issue here is still VRAM capacity. That is the main problem, not the bandwidth. These 4 GB cards are basically not for heavy gaming anyway, and for what they are for, 4 GB is more than enough.
My point may have been missed. What I was trying to get at was: given the performance hit from AMD's 4 GB RX 5500 sharing system RAM via the x8 PCIe 4.0 interface, who out there is going to bother with PCIe 4.0 when it is likely to be replaced by PCIe 5.0? The combination of the two didn't make any sense to me just for the specification bump.
I wouldn't be so sure about who would bother just because PCIe 5.0 is coming out. When AMD announced they would support PCIe 4.0 with the upcoming X570 chipset, people started grumbling: why PCIe 4.0? PCIe 3.0 is enough, and besides, 4.0 costs so much more. Now you say, why bother with 4.0 when 5.0 is coming out?
There's already 6.0 announced, so why bother with 5.0 anyway?
People will stick with 3.0 and 4.0 because certain graphics cards may not need that much bandwidth, or won't be able to utilize it anyway. Different segments, different needs. So why go with the more expensive 5.0 when 4.0 or even 3.0 will suffice? The 5500 is a good example, isn't it? It is limited to 8 lanes regardless of PCIe version, and that is enough as long as you keep the VRAM capacity in check for the tasks you want the card for.
In conclusion, I disagree with your statement, "who will bother with PCIe 4.0 when 5.0 is approaching".
 
Aren't they testing the PCIe 3.0 runs on a 9900K and the 4.0 runs on Ryzen 3xxx? If so, how is this apples to apples?
No. Same system, just flipped from 3 to 4 in the BIOS.

But Navi 14 has 8 PCIe lanes, not 16. When you put a 5500 XT in a PCIe x16 slot, it will only initialize 8 lanes because the card only has 8 physical lanes.
It's x8 electrical. The slot is x16 physically, but only 8 lanes are wired to the card.
 
People aren't expecting playable framerates on a 1030. Take your whataboutism elsewhere.
People don't always play on high settings. The GT 1030 runs BF1 at 1080p low very well, for example.

Haven't actually tested how much that PCIe 2.0 x4 link bottlenecks a GT 1030, though.
 
So AMD's lack of good delta compression finally bites them.

They've had DCC since Polaris, if I remember correctly; it wasn't as efficient as Nvidia's, though. RDNA has probably improved upon that, so I don't know if it's a problem, to be honest.

Wolfenstein II: The New Colossus on Mein Leben settings saturates the 6 GB of VRAM on my GTX 1060, effectively making it an unplayable stutterfest on certain maps and in certain zones. 4 GB is really just not enough anymore to drive that game, and many newer ones, at the highest settings (especially textures), regardless of actual framerates. Heck, some games won't even stream in the highest-quality textures, whatever the graphics settings, if they go way beyond the actually available video memory.

I'm sorry to break this to some, but in late 2019 / early 2020, new dGPUs with 4 GB of VRAM are the new 2 GB cards. I'm even contemplating not buying any new card with less than 8 GB for the next couple of years to come.

On the matter of being bandwidth-starved on PCIe 3.0 x8: well, yes and no. It's more like PCIe 4.0 x8 easing the problem somewhat, not really solving the matter of running out of VRAM. You probably wouldn't want to be playing any sort of shooter at 30-40 fps anyway. At least I wouldn't.

AMD should've wired up the other 8 PCIe lanes regardless; I highly doubt it would've eaten that much die space or PCB.
 
OK, the article claiming x8 3.0 isn't enough for the 5500 XT is, from what I can tell, a joke. The problem with all of this is that running these games at ultra settings will eat 4 GB. TechPowerUp even tested a 2080 Ti, which the 5500 XT can't hold a candle to on average (73.3 fps vs 174.4 fps, https://www.techpowerup.com/review/gigabyte-radeon-rx-5500-xt-gaming-oc-8-gb/28.html). It seems whoever wrote it picked a clickbait title to get people to click, and I can't read the language it's written in to check whether he points out at the end that it's likely a VRAM-limiting issue. For those reporting that the 5500 XT is limited to 8 lanes: that means it's x8 on both PCIe 3.0 and 4.0 no matter what it's plugged into. The 3.0 x16 used by the 2080 Ti has the same bandwidth as 4.0 x8, which shows that even a card 2.5x faster in average fps isn't being crippled by that much bandwidth. On top of that, the graph in the other article shows the 8 GB cards performing nearly the same either way, though 4.0 is indeed faster, which is normal since it takes less time to move data to the card.

End summary: if you plan to game at ultra settings, DON'T BUY A CARD WITH 4 GB OF MEMORY unless the games are older ones that will never use 4 GB. For any new game you want 8 GB minimum, or be ready to turn off some settings, starting with AA, and work from there.
 
My point is, no one is excused for this crap (well, maybe Intel, for now.). Those GPUs are already dirt cheap to produce, and I bet the savings in PCIe lanes are marginal at best. Sounds more like AMD and Nvidia found a new way to make people look at low/mid end GPUs with more VRAM, by artificially limiting their bandwidth.
Else, who would get the 8GB version of the 5500?
Lower-end SKUs have nearly always had constrained bandwidth.
This is pure bad-smell BS; I could have shown the exact same issues between the 3 GB and 6 GB 1060s, or the 4 GB and 8 GB 470/580.

Pick the right load and, boom, issues.

Our own W1zzard here has tested PCIe scaling down to minimal link widths; anyone else who's tried will find even PCIe 3.0 x4 quite capable of running a GPU.

I see point slips if I drop PCIe speeds, but damn, the hackish nonsense in this.
 
I've known PCGH since they started; they're not a mag/site that does that kind of crap.
They wanted to show that even with the card limited to x8, running it on PCIe 4 (vs 3) does help with performance.

That said, having more VRAM isn't going to help much on most smaller cards.
Just giving a card 8 GB doesn't mean it actually has the horsepower to get the fps needed to run the game.

Though usually, running PCIe 3 cards at x8 gets you slightly better performance (less overhead).
I don't have a Ti myself, but all the xx70/xx80 cards I tested ran better.
Guess I'll do some benching this weekend to confirm with PCIe 4 (X570 and a 2080).
 
Well, that is an issue, but AMD claims it is enough. I think it is enough, since the issue here is still VRAM capacity. That is the main problem, not the bandwidth. These 4 GB cards are basically not for heavy gaming anyway, and for what they are for, 4 GB is more than enough.
If you read the story, it's easy to see it's a VRAM issue. If 3.0 x8 were really a problem, then the 8 GB 3.0 x8 card would show the same kind of loss as well, but it doesn't.
 
PCIe bandwidth IS important in VRAM-limited scenarios.
What do you use once VRAM is full? RAM.
Where does the data between RAM and VRAM travel? Over PCIe.
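To put rough numbers on that, here is a back-of-the-envelope sketch of what overflow traffic costs per frame. The 100 MB per-frame spill figure is purely hypothetical, the function name is made up, and the transfer is assumed to fully serialize with rendering (a worst-case simplification):

```python
def spill_penalty_ms(spill_mb_per_frame: float, pcie_gb_per_s: float) -> float:
    """Extra milliseconds per frame spent shuttling overflowed assets
    between system RAM and VRAM, assuming the transfer fully serializes
    with rendering (worst case). Hypothetical helper name."""
    return spill_mb_per_frame / 1024.0 / pcie_gb_per_s * 1000.0

# Hypothetical 100 MB of overflow traffic per frame:
print(f"3.0 x8: +{spill_penalty_ms(100, 7.88):.1f} ms/frame")   # ~12.4 ms
print(f"4.0 x8: +{spill_penalty_ms(100, 15.75):.1f} ms/frame")  # ~6.2 ms
```

Doubling the link speed halves the worst-case stall, which is consistent with PCIe 4.0 x8 easing (but not eliminating) the 4 GB card's problem.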
 