
PowerColor Radeon RX 9060 XT Reaper 8 GB

This just fits my argument that VRAM contents are being swapped over the PCIe bus too often. The bus is most likely too slow for that.
Moose muffins. None of the other 8GB cards are doing this. And W1zzard's testing would have shown it.
These two are 4GB cards and there are no scaling issues with them.

And before anyone says raytracing is different, no it isn't.

Simply put, ComputerBase.de is doing something wrong.
Anyone who claims this nonsense is real, needs to use their mush for something other than a seat cushion.
 
Before Blackwell's and RDNA4's 60-class GPUs officially launched or had confirmed pricing, my repeated take on here for this tier was:

- $250(£250) for 60-class standard model (ideally, 12GB)

- $300(£300) for 16GB variants

That was just a rough estimate based on the current pricing mess, not because I believe this heavily limited performance tier actually deserves those price points.

Here in the UK, these RDNA4 GPUs closely reflect those earlier ramblings, with the 9060 XT 8GB going for £270 and £314 for the 16GB model. Ideally this should feel encouraging enough to give these cards a thumbs up. It's good to see cheaper options with current features for the mainstream crowd, especially for DIY buyers on a tight budget (the types I come across all the time). But I'm still on the fence about the overall performance fragmentation - you know, the performance delta that grows wider with each generation, from top to bottom. It's like getting a nice new bike for cheap, but the cheeky buggers keep making the hill you're riding up even steeper.

Personally, I wouldn't recommend 8GB cards unless the games you play are BY DESIGN perfectly fine running within that limited memory budget. For heavy-hitting newer games, future releases, and the desire for fewer visual quality compromises - 8GB is just a headless goose chase.

Basically, I'm split 50/50. It's good to see 'some' passably acceptable performance at a somewhat reasonable price by today's BS standards, but overall the way performance gets nerfed these days is honestly outrageous. Take Nvidia, for example - aside from the 5090, most of their cards feel like they're dressed up to pass as a tier above. That also implies AMD's 9060 XT 8GB only barely beats the RTX 5050 (aka 5060). It does make you wonder: is AMD truly catching up in this space, or is Nvidia's wider-than-ever performance gap just creating the illusion that they are? Yep, it's all a mess - I'm just gonna climb back onto that barbed-wire 50/50 fence and rip another hole in my trousers.
 
Take Nvidia, for example - aside from the 5090, most of their cards feel like they're dressed up to pass as a tier above.

The xx60s? Sure, maybe, although as I've said before, I think Nvidia is intentionally prioritizing cost control over performance gains in this segment. But I don't see how this can be said for the xx70s and up. The 5070 targets 1440p, the 5070-Ti targets ultimate 1440p or good 4k, and the 5080 is great 4k. That's a very good division. It just doesn't make sense to say the 5070 is trying to pass as an xx80 card.
 
I think I'd hit the used market and get a 3070 or 3080 before dropping $350 on this new.
 
So you agree that $300 is too much?

I wasn't arguing against you, just agreeing that more than about $250 is too much for a new GPU with only 8GB. Same as I said for the 5060 and 5060Ti 8GB reviews. I'm not picking sides or displaying any brand loyalty here - the statements I make are valid for all three GPUs and this is the third GPU review I've mentioned it in.

It sure would be nice if there was a new GPU on sale at that sort of price point but I picked used simply because nobody is making a brand new card at that price point right now. AMD would rather sell you an APU like the 8700G instead, and Nvidia's 5050 is probably 6 weeks out.

Given that you feel $240-270 is a reasonable price for this, and the (still overpriced) 5060 is $300, AMD's strategy of "Nvidia -20%" would put the sensible price of a 9060 8GB at $240. Especially since FSR4 lacks the developer support that DLSS4 enjoys, and buying AMD also locks you out of the vast library of CUDA-specific applications, should you wish to do more than just gaming on your GPU.

Man, I already said my piece. Just so we're clear:
"....
It has been a while since we've been able to bring something comparable to a competent console from the gaming side...and in my book that's some sort of a win.

Unfortunately, what we actually need is the 300 to be closer to 240-270 dollars..."

- you should read the first bit in a tone of severe disappointment -


I think $300 is a fine number on paper, but what it buys here is just too little. You can't build a 1080p console-beater around it...and that's the problem. I see this performance class as the bare minimum before you get into video and transcode cards designed for low power, high efficiency, and network video (think Jellyfin or Plex). The 16 GB versions (5060 Ti and 9060 XT) are the first real gaming cards...and that's just way too much to pay to support 1080p gaming when consoles exist. So...I think we agree: good product, terrible price for what's on offer.
 
Yikes. Looks like some games might be unplayable on a PCIe 4.0 system.
The most shocking part is that the TPU review doesn’t show these huge 1% low differences between GPUs with 16GB vs. 8GB.
Presenting the 1% lows in a separate chart and basically hiding them in a spoiler is also a poor choice. It’s perfectly possible to include both sets of data in the same charts. :toast:
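For what it's worth, combining them isn't hard. A minimal sketch (with made-up numbers, not data from any review) of putting average FPS and 1% lows for the same cards into one grouped-bar chart:

```python
# Minimal grouped-bar sketch: average FPS and 1% lows in the same chart.
# All values below are placeholders, not figures from any review.
import matplotlib.pyplot as plt
import numpy as np

cards = ["9060 XT 16 GB", "9060 XT 8 GB", "5060 Ti 16 GB", "5060 Ti 8 GB"]
avg_fps = [72, 65, 75, 60]    # placeholder averages
low_1pct = [58, 31, 61, 28]   # placeholder 1% lows

x = np.arange(len(cards))
plt.bar(x - 0.2, avg_fps, width=0.4, label="Average FPS")
plt.bar(x + 0.2, low_1pct, width=0.4, label="1% lows")
plt.xticks(x, cards, rotation=20)
plt.ylabel("FPS")
plt.legend()
plt.tight_layout()
plt.show()
```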


 
Moose muffins. None of the other 8GB cards are doing this. And W1zzard's testing would have shown it.
These two are 4GB cards and there are no scaling issues with them.
[...]
Simply put, ComputerBase.de is doing something wrong.
It's absolutely baffling to me that you still fail to grasp this issue despite it being explained to you multiple times. The card and game tests from 9-11 years ago have zero relevance, so I have no idea why you'd even post them. The other modern examples you provide also have zero relevance. The reasons for this should be obvious if you actually grasped the issue at hand. The fact that you picked these cards as examples makes it clear you either don't grasp the issue, or you are being purposely obtuse.

This is a significant problem when a graphics card runs with settings that exhaust its onboard VRAM, forcing it to fall back on main system memory over the PCIe bus. Yes, enabling ray tracing on an 8 GB graphics card can easily push many modern games past the VRAM buffer. As multiple review outlets have demonstrated, there's a consistent pattern when VRAM is exceeded and the game handles it by spilling into main system memory. In other cases, exceeding VRAM instead causes texture loading issues or reduced texture quality, so you lose image quality rather than frame rate. Assuming the former scenario, restricting PCIe bandwidth on an 8 GB graphics card (compared to its 16 GB counterpart with the same GPU) will reduce performance far more severely.
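To put some rough numbers on why that spill-over hurts (my own back-of-the-envelope sketch with approximate theoretical bandwidths, not anything measured in the review):

```python
# Rough comparison: local VRAM bandwidth vs. what the PCIe link can move
# once data spills into system memory. All figures are approximate theory.

def pcie_bw_gbs(gen: int, lanes: int) -> float:
    """Approximate usable PCIe bandwidth per direction, in GB/s."""
    per_lane = {3: 0.985, 4: 1.969, 5: 3.938}  # after 128b/130b encoding
    return per_lane[gen] * lanes

vram_bw = 320.0  # GB/s, roughly what a 128-bit 20 Gbps GDDR6 card offers

for gen, lanes in [(5, 16), (5, 8), (4, 16), (4, 8), (3, 8)]:
    link = pcie_bw_gbs(gen, lanes)
    print(f"PCIe {gen}.0 x{lanes}: ~{link:5.1f} GB/s "
          f"(~{vram_bw / link:.0f}x slower than local VRAM)")
```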

If W1z decides to test the 5060 Ti 8 GB's PCIe scaling, it's likely that any tests from the previous 5060 Ti 16 GB PCIe scaling article that require more than 8 GB of VRAM, and where the game handles the shortfall by spilling into main system memory, will exhibit significant performance degradation as PCIe bandwidth is reduced. The specific combination of 8 GB of VRAM and an x8 PCIe interface is inherently problematic in many modern games at certain settings, and the issue is compounded on motherboards with older PCIe standards.

The 6500 XT was shown to have the same issue with its 4 GB of VRAM and PCIe 4.0 x4 interface. It's the same problem rearing its ugly head with more powerful GPUs and newer, more demanding games.

Here is DF showing the problem on the 5060:


Even with its full x16 interface, look how badly the 9060 XT 8 GB suffers compared to the 16 GB card when PCIe bandwidth is restricted:

[attached charts: 9060 XT 8 GB vs. 16 GB at reduced PCIe bandwidth]
 
The xx60s? Sure, maybe, although as I've said before, I think Nvidia is intentionally prioritizing cost control over performance gains in this segment. But I don't see how this can be said for the xx70s and up. The 5070 targets 1440p, the 5070-Ti targets ultimate 1440p or good 4k, and the 5080 is great 4k. That's a very good division. It just doesn't make sense to say the 5070 is trying to pass as an xx80 card.

Let's be honest: Nvidia could easily crush AMD in the mid-range (60/70 tiers) by leveraging its engineering lead and pushing price-to-performance much further. But they don't, because keeping AMD alive helps justify higher margins and "competitive" pricing. More generous specs at the same price would easily shake up the market, but at the cost of profit, and let's face it, the king of the hill (or the duopoly) doesn't like such compromises - not with the gaming market being completely dwarfed by the profit-heavy enterprise and AI bubble inflation.

Nvidia deliberately underdelivers to nudge buyers towards higher-end cards. That's part of why the performance gap between tiers keeps growing gen-to-gen, and why mid-range "standard" 60- or 70-class GPUs feel increasingly gimped. In fact, the last three GPU generations have all jumped on the same bandwagon: "pay more, get less" and hope no one notices.

As for the RTX 5070: with its limited bandwidth and 12GB of VRAM, it feels more like a 60.5-class GPU than a true 70-tier card. In contrast, the 5070 Ti feels like the real gold standard for what a 70-class baseline should be, though it came across as a late fix for an underwhelming 5070 launch. For a long while the Ti variant was hovering around $900, or if you got lucky with availability you could have picked one up for $850 (ish). At that price it didn't exactly scream 70-class. Even if we pull back to MSRP, that's a pretty hefty price tag for 70-class admission. The 80-class currently available for $1300+? No comment! Thank the seas for keeping us apart - in the UK some less interesting models can be had for around £1K. Great gaming GPU, but when you start breaking down the performance drop from the 5090, especially at higher resolutions, it's hardly a cause for celebration.

In my view, the performance tiers aren't just underdelivering (special emphasis on the base models in their respective tiers), but once you throw pricing into the mix, the lines are so blurred it's almost a joke trying to tell them apart. Everyone's got their own take on what feels right, wrong, or how things should've gone. There's no winning argument here, just different ways we're all trying to make sense of the mess the GPU market has become.
 
Not all PCIe lanes are used?
Probably slow on PCIe 2.0
 
The card has no issue, more choice is always better, and for some 8GB may be a good trade off for paying less.
The problem is the price, it's beyond stupid.
 
I'm not buying this at all. Calling BS! There has got to be a glitch somewhere, because other, older 8GB cards are not showing this behaviour. The 2060S/2070/2080/3060/3070/etc. are just not exhibiting the performance drop shown in that graph. Whether it's a game engine issue, a driver problem, or that website flat-out falsifying numbers, there is a problem in the testing, not with the card - or else ALL 8GB cards would show the same kind of results and people would have thrown a tantrum about it much sooner. Complete moose muffins!
Looks like a problem with that particular game. But other games seem to be affected as well, some more, some less.

A few examples from the article linked above, when switching from PCIe 5.0 x8 to 4.0 x8 at 1440p:
Final Fantasy XVI ... 22% loss
F1 24 ... 9% loss
Indiana Jones and the ... 18% loss
Horizon Forbidden West ... 11% loss
Monster Hunter Wilds ... 15% loss
Spider-Man 2 ... 38% loss
The Last Of Us Part II ... 16% loss

Anyone willing to buy an RTX 5060 8 GB or RTX 5060 Ti 8 GB to pair with an older board (PCIe 3.0 or 4.0) should (re)consider the RX 9060 XT 8 GB as an option. It has a full x16 PCIe 5.0 interface, so it should not suffer those performance losses (as mentioned above) even on older boards. It doesn't matter that the RTX 5060 Ti 8 GB is in general 11% faster when, in a game that depends heavily on VRAM spill-over, it gets crippled so badly that the whole +11% perf. difference is negated, or it even ends up slower.
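Just to spell out that arithmetic with the loss figures quoted above (illustrative only; the percentages are from the list, not new testing):

```python
# Illustration: how an ~11% average lead evaporates once a game takes one of
# the PCIe-related hits listed above. Percentages are the ones quoted above.

baseline_lead = 1.11            # 5060 Ti 8 GB roughly 11% faster on average
losses = {
    "Final Fantasy XVI": 0.22,
    "Spider-Man 2": 0.38,
    "The Last Of Us Part II": 0.16,
}

for game, loss in losses.items():
    relative = baseline_lead * (1 - loss)
    verdict = "still ahead" if relative > 1 else "now slower"
    print(f"{game}: 1.11 x (1 - {loss:.2f}) = {relative:.2f} -> {verdict}")
```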


Also, check out this article:
The results published by the author seem somewhat strange to me; I'd expect a much bigger performance impact. It's x1 5.0 vs. x8 5.0, after all.
There's no information on how the card was forced to run in x1 mode. For instance, on my current motherboard I can only choose between PCIe generation speeds, not the number of lanes.
He might have used some kind of riser cable.

Anyway, the article mentions an important thing - there is most probably a glitch in the PCIe 5.0 implementation on the RTX 5060 (Ti).
Maybe the card keeps switching between link modes quickly and unpredictably, which causes problems even for GPU-Z in determining the link speed properly.
IMHO, this all ties into the bugs Blackwell is currently facing that Nvidia is struggling to solve.
For Blackwell, mitigating this (hardware) issue can only be done to a certain degree, I'm afraid.
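If anyone on Linux wants to check whether their card really is renegotiating the link on the fly, a quick sketch like this, reading the standard sysfs attributes, would log every change. The PCI address is a placeholder - substitute your GPU's address from lspci.

```python
# Watch for PCIe link renegotiation on Linux by polling sysfs.
# The device address below is a placeholder, not a real system's value.
import time
from pathlib import Path

DEV = Path("/sys/bus/pci/devices/0000:01:00.0")  # replace with your GPU's address

def link_state() -> tuple[str, str]:
    speed = (DEV / "current_link_speed").read_text().strip()
    width = (DEV / "current_link_width").read_text().strip()
    return speed, f"x{width}"

last = None
while True:
    state = link_state()
    if state != last:                      # only print when the link changes
        print(time.strftime("%H:%M:%S"), *state)
        last = state
    time.sleep(0.5)
```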
 
Looks like a problem with that particular game. But other games seem to be affected as well, some more, some less. [...] Anyone willing to buy an RTX 5060 8 GB or RTX 5060 Ti 8 GB to pair with an older board (PCIe 3.0 or 4.0) should (re)consider the RX 9060 XT 8 GB as an option. [...] IMHO, this all ties into the bugs Blackwell is currently facing that Nvidia is struggling to solve.
I might finally switch to AMD :pimp:

This is fixed in DOOM The Dark Ages btw
Could you investigate the RT performance difference between the 5060/Ti and the RX 9060 XT 8 GB/16 GB? :p
 
...NVIDIA does the same with their L2 on some SKUs but it's a bit different. If it works it works.

The other aspect would be RDNA 4's Memory Management (out of order).

But all I'm trying to say is that too many people forget about the Infinity Cache :p (and however small the market share of CDNA/Instinct cards might be, it carries them too).

https://www.techpowerup.com/review/amd-radeon-rx-6900-xt/2.html has a decent overview if anyone wants to dig a bit more as it's been a few years.
Could be cache config on AMD side.
RTX 5060 has 32 MB of L2 cache.

This ought to be competitive with the 32 MB of Infinity (L3) cache the RX 9060 XT has, which makes the 9060 XT 8 GB's lead in 4K games inexplicable to me, since it has 126 GB/s less memory bandwidth. Although the 9060 also has another 4 MB of L2 cache. (And traditionally a small cache plus a big cache gives lower latency than one big cache.)
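One crude way to think about how a 32 MB last-level cache can offset a raw bandwidth deficit (a toy model with made-up hit rates, not measured data for either card): if a fraction of requests hit in cache, DRAM only has to serve the misses.

```python
# Toy model: if `hit` of memory requests land in the on-die cache, DRAM only
# serves the misses, so deliverable bandwidth scales roughly as bw / (1 - hit).
# Hit rates here are invented purely for illustration.

def effective_bw(dram_bw_gbs: float, hit_rate: float) -> float:
    return dram_bw_gbs / (1.0 - hit_rate)

dram_bw = 320.0  # GB/s, approximate for a 128-bit 20 Gbps GDDR6 card

for hit in (0.0, 0.3, 0.5, 0.6):
    print(f"cache hit rate {hit:.0%}: effective ~{effective_bw(dram_bw, hit):.0f} GB/s")
```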
 
Looks like a problem with that particular game. But other games seem to be affected as well, some more, some less. [...] Anyway, the article mentions an important thing - there is most probably a glitch in the PCIe 5.0 implementation on the RTX 5060 (Ti). Maybe the card keeps switching between link modes quickly and unpredictably, which causes problems even for GPU-Z in determining the link speed properly.
That info only suggests there is a glitch of some sort going on. I tested my 2080 and 3080 in a PCIe 2.0-based system and the performance differences versus PCIe 3.0 were minimal. I also tested them in an x4 slot in that system; marginal differences.

That is not a PCIe x8 vs. x16 problem. It's something else. Maybe a driver glitch?
 
The beginning of this Daniel Owen video shows the issue in Monster Hunter Wilds:
Daniel Owen used the "Highest Res Textures", which the developers say require at least a 16 GB card. I ran the benchmark on my 8 GB 4060 with High settings and RT, and in the same spot I couldn't find a single bad-looking texture besides the original ones (which will look the same no matter how much VRAM you have, and even the "highest res" textures don't look any better from what I've seen).

There's no performance degradation at all, and the frame time is kinda OK (by MH Wilds standards).

 
RTX 5060 has 32 MB of L2 cache.

This ought to be competitive with the 32 MB of Infinity (L3) cache the RX 9060 XT has, which makes the 9060 XT 8 GB's lead in 4K games inexplicable to me, since it has 126 GB/s less memory bandwidth. Although the 9060 also has another 4 MB of L2 cache. (And traditionally a small cache plus a big cache gives lower latency than one big cache.)

AMD and NVIDIA are generally a bit different with memory, even across higher end models. Some games are using around 2GB more on red team. Prob just optimization though.
 
AMD and NVIDIA are generally a bit different with memory, even across higher end models. Some games are using around 2GB more on red team. Prob just optimization though.
AMD and Nvidia both compress a lot of what's in memory. It could be that team green's approach to compression works better on the data some games keep in memory. That could also be the basis of an optimization difference: if the game developer has to ensure the data in memory compresses well and only tests this on Nvidia, then the end result will probably compress to the smallest size on Nvidia cards.
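A quick CPU-side illustration of that principle (this is plain zlib, not the proprietary framebuffer compression either vendor actually uses): the same compressor does great on regular data and nothing on noise, so what a game keeps in memory matters as much as the compressor itself.

```python
# Lossless compression ratios depend entirely on the data being compressed.
# zlib here is only a stand-in for the idea, not GPU hardware compression.
import os
import zlib

structured = bytes(range(256)) * 4096   # 1 MiB of highly regular data
noise = os.urandom(len(structured))     # 1 MiB of incompressible random data

for name, blob in [("structured", structured), ("random", noise)]:
    ratio = len(blob) / len(zlib.compress(blob))
    print(f"{name}: {ratio:.1f}x compression")
```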
 
Daniel Owen used the "Highest Res Textures", which the developers say require at least a 16 GB card. I ran the benchmark on my 8 GB 4060 with High settings and RT, and in the same spot I couldn't find a single bad-looking texture besides the original ones (which will look the same no matter how much VRAM you have, and even the "highest res" textures don't look any better from what I've seen).

There's no performance degradation at all, and the frame time is kinda OK (by MH Wilds standards).

I'm not really concerned with the settings Daniel Owen is using, other than whether or not they are the same settings W1z is using. The only reason I used it as an example is that it shows the game may not load textures properly when it runs out of VRAM. The recent TPU reviews have results for this game in the RT section. All show a significant performance fall off for 8 GB vs 16 GB Nvidia cards of the same model at all 3 resolution settings. This strongly suggests the cards are running out of VRAM with the settings being used in these tests. I'm concerned that there could be results included in the reviews that have nerfed textures. I would consider those invalid results, and I think most reasonable people would agree. I just wanted to make sure @W1zzard was aware of this. It might be something he needs to look at to ensure all of the results in the charts for this game are apples to apples.

Either way it's good to know you can turn down textures and get a pretty good result without this issue. Thanks for posting the video.
 
I have a lot of respect for how TPU presented the 8 GB versus 16 GB debate after seeing this article. LTT suggested that AMD didn't sample the 8 GB card to reviewers because of how terribly it would perform. I tested my own RX 9060 XT 16 GB in nearly every way I could think of, and in only one test did Task Manager show over 8 GB used, and only barely. This review shows similar results. Seems like LTT got it backwards: AMD didn't send out review samples of the 8 GB model because it's cheaper and people would've seen that it's the better value.

(I would still have bought the 16 GB model. Once Ollama adds support for it on Windows, I'll want the extra memory. I also can't explain why the 8 GB 5060 Ti falls behind the 8 GB 9060 XT at 4K, which leads me to suspect PCIe bandwidth plays a role when out of memory - and my PCIe 3.0 computer has very little PCIe bandwidth. And lastly, I do occasionally want this to work as well as possible on my 4K TV.)
 
The recent TPU reviews have results for this game in the RT section. All show a significant performance fall off for 8 GB vs 16 GB Nvidia cards of the same model at all 3 resolution settings.
Honestly, I don't understand exactly how VRAM works in MH Wilds. At native 1080p High settings with RT, the game tells you there shouldn't be any problem with VRAM because it's only going to use about 5 GB at most, leaving room for things like frame gen.
 

Honestly, I don't understand exactly how VRAM works in MH Wilds. At native 1080p High settings with RT, the game tells you there shouldn't be any problem with VRAM because it's only going to use about 5 GB at most, leaving room for things like frame gen.
TPU uses the highest quality settings (unless indicated otherwise) at native resolution without upscaling. All of this is noted in the test setup section on page 6 of this review. I downloaded and installed the benchmark. On my system the VRAM estimator says it will use 9.5 GB on the 1080p Ultra preset without RT, and that's even with FSR enabled by default. Do the textures get borked on your 4060 on ultra like the Daniel Owen video shows?

 
Do the textures get borked on your 4060 on ultra like the Daniel Owen video shows?
Indeed.

However, even with the blurry textures, the average performance remains exactly the same, just with a lot more stutters (so I guess my shitty 4060 outperforms the 8 GB 5060 Ti, lmao). Like I said, it's kinda obvious that an 8 GB GPU won't perform well with a texture pack that requires at least a 16 GB GPU.
 

Indeed.

However, even with the blurry textures, the average performance remains exactly the same, just with a lot more stutters (so I guess my shitty 4060 outperforms the 8 GB 5060 Ti, lmao). Like I said, it's kinda obvious that an 8 GB GPU won't perform well with a texture pack that requires at least a 16 GB GPU.
Thanks for pointing this out. I didn't realize the hi-res textures were a separate download in the full game. I'm guessing Daniel Owen doesn't either; I believe I heard him mention he doesn't own the game and is using the free benchmark for his testing, with the cut-scenes removed via a mod. The standalone benchmark installation includes the hi-res textures. I wonder if the Ultra preset without the hi-res textures installed, with RT enabled and upscaling disabled, still gets borked textures. Is that essentially what your video showed? Were all the other settings on pages 2 and 3 the same as the Ultra preset?
 
My video showed the High preset at native 1080p with High RT. The main differences between that and the Ultra preset (besides the high-res textures, of course) are things like Mesh Quality, Distant Shadows, Ambient Occlusion and Sky/Clouds. I ran the benchmark again with those settings maxed out and couldn't spot any "borked textures"; however, there were a few instances where performance was just awful.

But I think this is more down to the horsepower of the GPU itself, not the VRAM, because, as I said, the textures retain their original quality (and visually, I seriously didn't notice any significant difference between the High and Ultra presets).
Yeah, it looks like the texture issues are specific to using the hi-res texture pack on 8 GB cards. Since you have to download it separately in the actual game (as opposed to it being included in the free benchmark by default), this shouldn't be an issue for people with 8 GB cards. The texture pack description spells it out pretty clearly that you need 16 GB of VRAM.
 