
PowerColor Radeon RX 9060 XT Reaper 8 GB

It looks like a great showing against the 5060, less great against the 5060 Ti 8 GB, and a bad showing against the 16 GB cards.

IMHO, this, the 5060 8 GB, and the B580 are pretty much aimed at the same target audience, price-to-performance-wise.
 
The most interesting thing I found was:

"Every single board partner that I talked to—who was willing to discuss pricing—said that the MSRP of $350 (for the 16 GB model) is a fantasy, and it will be impossible to reach without kickbacks from AMD. Usually such campaigns are limited to a certain number of GPUs sold, or a certain percentage of the overall volume, so prices won't last. Let's hope that things are better for the 8 GB model and that it can stay at $300 consistently—it has to."

Do Nvidia board partners have the same thoughts about the 5060, which should be more expensive due to the GDDR7?
 
This card is actually good, with better RT than similarly priced cards. The biggest problem with FSR4 is that it isn't in all new games, and AMD hasn't figured out how to backport or patch FSR4 into older games. I am guessing AMD is trying to figure this out; if I were them, I would be. I think it may be 25 to 50 dollars too expensive due to the FSR4 situation. If FSR4 were as widespread as DLSS, this would be a huge win for AMD.
 
It's the game that limits the texture resolution, not Nvidia.

Correct. It's the games not properly loading textures when VRAM runs out. Both Hogwarts and Monster Hunter have been shown to exhibit this behavior with ray tracing enabled on 8 GB cards. It's entirely possible that the ray tracing data for these 2 games could include invalid results with nerfed textures on 8 GB cards.

You are looking at average FPS, which is affected by the 0 score for TLOU due to the card crashing. Relative performance excludes it, which I think is a better indicator of what to expect.

It would be helpful if this was mentioned in the reviews somewhere. I understand that a zero result can't be factored into a geometric mean and needs to be excluded, but perhaps you could make note of this to avoid confusion when the average and relative performance charts don't correlate. The description in the relative performance section could also be clearer: it isn't entirely obvious from the description that the relative performance chart is based on a geometric mean, and it certainly isn't obvious that the average and relative performance charts are based on different data sets in some circumstances. It can also be confusing when the average in the chart with the individual games doesn't match the overall average chart right below it without an explanation.
 
It would be helpful if this was mentioned in the reviews somewhere
good idea, added. does it help?

from the description that the relative performance chart is based on geometric mean
It is actually a slightly different method. For relative performance, each test's FPS is scaled to "100" for the tested card, and then everything is averaged.
This ensures equal weight for each test, so high-FPS and low-FPS titles don't get different weights. It also avoids using geomean, which in many cases doesn't scale in the intuitive way you'd expect.
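A minimal sketch of the idea in Python, with made-up card names and FPS numbers (illustrative only, not our actual benchmark code):

```python
# Toy sketch of the relative-performance method described above.
# Card names and FPS values are made up for illustration.
fps = {
    "tested card": [60.0, 144.0, 0.0],   # 0 = crash (e.g. TLOU), excluded below
    "other card":  [55.0, 150.0, 40.0],
}

def relative_perf(card: str, tested: str = "tested card") -> float:
    """Scale each test to 100 for the tested card, then take a plain average.
    Tests where the tested card scored 0 can't be factored in, so they're dropped."""
    ratios = [
        100.0 * fps[card][i] / ref
        for i, ref in enumerate(fps[tested])
        if ref > 0
    ]
    return sum(ratios) / len(ratios)

print(relative_perf("tested card"))  # 100.0 by construction
print(relative_perf("other card"))   # ~97.9; every test carries equal weight
```

The average-FPS chart, by contrast, averages raw FPS, which is why a single zero (or one very high-FPS title) can pull it away from the relative performance chart.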
 
Aim all the flame my way, but... I play at widescreen 3440x1440, well, when and if I get time to play. And I've been using a laptop with a 3060 6GB Mobile card. That'd be at about 65% of performance on the relative perf graph. So if I'm good with casual gaming at 65% at 3440x1440, why would I mind 100% at 1440p? While a few newer games would need tweaks to be enjoyable at 1440p, there are several THOUSAND games that are just ripe for 1440p and 3440x1440 on that card. I too was surprised by almost no difference between the 8 GB and 16 GB cards; I mean, 5% is hardly noticeable. So while usually I'd be all for the higher-spec card... *Looks at my Steam backlog... Looks at current card vs 9060 XT 8GB* NAH... Don't tell people this is a bad card. And it's finally at a price I could call almost good after 5-6 years of disaster after disaster. Hopefully prices stick (or drop) even after the initial batch of "kickbacks" is over.
 
Would have been nice to have the RX 6700 XT in the graph to compare.
 
good idea, added. does it help?
Yes. That should definitely make things less confusing. Thank you for adding this.
It is actually a slightly different method. For relative performance, each test's FPS is scaled to "100" for the tested card, and then everything is averaged.
This ensures equal weight for each test, so high-FPS and low-FPS titles don't get different weights. It also avoids using geomean, which in many cases doesn't scale in the intuitive way you'd expect.
A more detailed explanation of this in the description on the relative performance page would be an excellent addition IMO.

This page shows a combined performance summary of all the game tests (i.e. non-RT) on previous pages, broken down by resolution. Each chart shows the tested card as 100% and every other card's average FPS performance as relative to it.

This is an adequate description of how the chart should be interpreted but leaves a bit to be desired in terms of explaining how the performance summary for each card in the chart was calculated.

Regarding the texture issue I brought up in my previous post, you may want to take a closer look at how Monster Hunter Wilds and Hogwarts may affect the data in the RT results of your review. @ARF provided a screenshot of Hogwarts. The beginning of this Daniel Owen video shows the issue in Monster Hunter Wilds:
 
Even that won't convince many to buy a Radeon. Even if the price difference is 100%, the choice is clear - Nvidia.

Personally, I refuse to buy any Nvidia graphics cards. I know too well how old the Nvidia 3070 is. I bought a Radeon 6600 XT instead when it came fresh on the market. 2.5 years later, near its end of life, I bought a Radeon 6800 non-XT. That card I had to ditch after 3 months because MSI cannot make a proper AMD graphics card. Half a year later I bought my current 7800 XT.
I would not show those outdated charts anymore. The 3070 was more on the high-end side; the 6800 non-XT was a mid-range card if you want to be nice, and there were 4 other 6000-series cards above it. You need to compare the Nvidia 3070 with the 6800 XT or better from that time frame.
That would be like comparing my Radeon 7800 XT with the Nvidia 710.


Times have changed. Feature set and "basic raytracing support".

Do Nvidia board partners have the same thoughts about the 5060, which should be more expensive due to the GDDR7?

I suspect 8 GiB of GDDR7 is cheaper than 16 GiB of GDDR6.
 
I was not expecting the 9060 XT 8GB to (basically) tie the 5060 Ti 8GB at 4K.
The 9060 XT has less memory bandwidth (322.3 GB/s vs 448.0 GB/s).

If the VRAM is saturated at 4K, I figured that would give the 5060 Ti 8GB an advantage.
Instead, 4K is (relatively) the best look for the 9060 XT 8GB.
There are two factors at play here. The first would be the Infinity Cache (RDNA2+), which I never see mentioned when people look at, or rather compare, AMD's VRAM bandwidth. AMD has some wild claims of much higher (effective) bandwidth with it, alongside major latency reductions.
Here are some slides
[AMD slides: Infinity Cache effective-bandwidth and latency claims]

The impact is major regardless of how valid each claim is; I only posted them for illustrative purposes. I think it's obvious that the effective bandwidth on RDNA2+ cards is genuinely far higher, otherwise quite a few SKUs couldn't even touch higher resolutions. If I had to guess, the 9060 XT should have the higher "true" bandwidth, as the 5060 Ti's ~34% raw-bandwidth advantage should be outscaled by the IC, even at just 32 MB of it.
NVIDIA does the same with their L2 on some SKUs, but it's a bit different. If it works, it works.
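As a toy illustration of why raw bandwidth numbers can mislead here: assuming a simple weighted-average model (the hit rates and cache bandwidth below are my assumptions, not AMD's measured figures), even a modest Infinity Cache hit rate pushes effective bandwidth well past the 448 GB/s raw figure:

```python
# Toy model: requests that hit the on-die cache are served at cache speed,
# misses fall through to VRAM. All figures are illustrative assumptions.
def effective_bw(vram_bw: float, cache_bw: float, hit_rate: float) -> float:
    return hit_rate * cache_bw + (1.0 - hit_rate) * vram_bw

VRAM_BW = 322.3    # GB/s, the 9060 XT raw bandwidth mentioned earlier
CACHE_BW = 1500.0  # GB/s, assumed Infinity Cache bandwidth

for hit in (0.30, 0.45, 0.60):  # hit rate typically falls as resolution rises
    print(f"hit rate {hit:.0%}: ~{effective_bw(VRAM_BW, CACHE_BW, hit):.0f} GB/s")
# 30% -> ~676 GB/s, 45% -> ~852 GB/s, 60% -> ~1029 GB/s
```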

The other aspect would be RDNA 4's Memory Management (out of order).

But all I am really trying to say is that too many people forget about the Infinity Cache :p (and however small CDNA/Instinct cards' market share might be, it carries them too).

https://www.techpowerup.com/review/amd-radeon-rx-6900-xt/2.html has a decent overview if anyone wants to dig a bit more as it's been a few years.
 
In any remotely realistic test, among the "lousy 8GB stack", the 9060 XT will be moderately better because it's PCIe x16. That’s one of the reasons it outperforms its counterparts in RT.
Ray tracing puts a heavy load on VRAM, forcing GPUs to rely on system RAM. In this scenario, AMD’s GPUs benefit from significantly higher bandwidth, whereas the 8GB 5060/5060 Ti are bottlenecked by their x8 interface.
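For a rough sense of the numbers, here's a back-of-the-envelope sketch (theoretical per-direction peaks with 128b/130b encoding, ignoring protocol overhead; the gen/lane pairings reflect the cards' spec sheets):

```python
# Theoretical PCIe link bandwidth per direction (PCIe 3.0+ uses 128b/130b).
GT_PER_LANE = {3: 8.0, 4: 16.0, 5: 32.0}  # giga-transfers per second, per lane

def link_bw_gbs(gen: int, lanes: int) -> float:
    return GT_PER_LANE[gen] * (128 / 130) / 8 * lanes  # GB/s

print(link_bw_gbs(5, 16))  # ~63.0 GB/s: 9060 XT (Gen5 x16)
print(link_bw_gbs(5, 8))   # ~31.5 GB/s: 5060 / 5060 Ti 8GB (Gen5 x8)
print(link_bw_gbs(4, 16))  # ~31.5 GB/s: the x16 card in an older Gen4 slot
print(link_bw_gbs(4, 8))   # ~15.8 GB/s: the x8 cards in a Gen4 slot
```

Once textures spill to system RAM, that link becomes the ceiling, so the x16 card has double the spill bandwidth at any given PCIe generation.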
 
In any remotely realistic test, among the "lousy 8GB stack", the 9060 XT will be moderately better because it's PCIe x16. That’s one of the reasons it outperforms its counterparts in RT.
Ray tracing puts a heavy load on VRAM, forcing GPUs to rely on system RAM. In this scenario, AMD’s GPUs benefit from significantly higher bandwidth, whereas the 8GB 5060/5060 Ti are bottlenecked by their x8 interface.
I thought the only difference was the VRAM modules being twice as big?
Damn, I thought they'd learned from cutting lanes on the 6500 XT.
 
The whole state of the industry/market is just so bleh. Trying to remember the last time we had a proper banger, no-brainer entry-mid range card, and you have to go back a decade.
 
Oh well, for people who can only fork out $300 instead of $400 for the 16 GB models, it looks like the 5060 is the much better option.

How AMD wants to gain market share with this kind of budget GPU is beyond me.
 
Would like to see some deviation from the standard format in terms of cooling solution for these 8GB models. Their appeal is mostly lost in the face of an extant 16GB model in the same ballpark (depending on stock/retailer markup), but they're in a decent position to fill a niche as oddball cards like single-slot blowers or cute little 2 slot half-height cards to fit into SFF office PCs or specialized SFF cases. Those are obviously crackpot ideas, but given I wouldn't expect much traffic from these cards apart from SIs, it might be a good opportunity to appeal to niche markets while most of these N44 dies get turned into 9060 XT 16GB cards.
 
So a combination of that + poor cooler drops it below the 5060 Non-Ti on average @ 1080p :| Bit odd from AMD and poor showing from PowerColor

Obviously, 8 GB is a no-go on either side, Nvidia or AMD, but regarding the PowerColor implementation, I think they actually did a great job considering this is the smallest 9060 XT card on the market, as was even mentioned in the conclusion:
What I like is that it's a highly compact card—dual slot, just 20 cm long, which means it should fit into virtually every computer case on the planet as a quick upgrade...

Still, that doesn't mean it uses a bad cooler. While it is weak it is definitely sufficient for the GPU's heat output. By allowing higher temperatures, PowerColor found excellent fan settings that still result in an awesome low-noise experience. With just 27.5 dBA at full load, the card is whisper quiet and temperatures are still good enough. Great job, PowerColor! We've seen very high fan speeds from other vendors on their entry-level cards with similar hardware configs.
 
The 9060 line is not gimped on the PCIe bus like the 5060 line.
 
Nvidia really needs to fix their memory management on the 8GB cards; it absolutely kills the benchmarks in certain games like Stalker 2 or Indiana Jones.
 
In any remotely realistic test, among the "lousy 8GB stack", the 9060 XT will be moderately better because it's PCIe x16. That’s one of the reasons it outperforms its counterparts in RT.
Ray tracing puts a heavy load on VRAM, forcing GPUs to rely on system RAM. In this scenario, AMD’s GPUs benefit from significantly higher bandwidth, whereas the 8GB 5060/5060 Ti are bottlenecked by their x8 interface.
Yep, it seems NV lives on its own planet, crippling its GPU where it's supposed to shine the most (RT) and knowingly not caring about it.
Going cheap with PCIe x8 on an 8 GB card is the worst combo, except for PCIe x4...

They rely so heavily on the DLSS 4 software mambo and sacrifice the rest for a little coin.
 
Nvidia really needs to fix their memory management on the 8GB cards; it absolutely kills the benchmarks in certain games like Stalker 2 or Indiana Jones.
To my knowledge, GDDR6 is less efficient than GDDR7. I wonder what's happening here where the 5060 8GB / Ti 8GB are impacted before the 9060 XT 8GB.

RT results are very interesting...
 
Nvidia really needs to fix their memory management on the 8GB cards; it absolutely kills the benchmarks in certain games like Stalker 2 or Indiana Jones.

It doesn't need fixing in that way; it needs fixing as in more VRAM.
 