
AMD Radeon RX 9060 XT Leads TechPowerUp Frontpage Poll for its Price-Performance

UE4 RT games having 10-second stutters isn't the fault of devs, since it's specifically an RDNA4 problem. Let's not start making up excuses.

Senua's Sacrifice, Returnal, The Ascent, Ghostwire: Tokyo, and Hogwarts Legacy are some UE4 games with RT.
What excuses? Everyone is universally trashing all devs using UE for how incompetent they are; that's entirely independent of which graphics camp you worship. Of all those games I only own Hogwarts Legacy, and I haven't played it on a Radeon yet, so I can't speak for that one specifically.
 
The latest versions of Unreal demand a lot from the hardware because so many advanced rendering techniques are used. It's really good at exposing hardware deficiencies, so unless you have each and every component balanced out, it's going to perform poorly. You need a very strong CPU all around, equipped with reasonably fast memory, and it will also put GPUs and their graphics drivers to the test; deficient architectures and/or graphics drivers will be felt with this engine, especially if it's a 5.x build.

For example, AMD's Windows driver still needs tons of polish. They've been working on its stability and that has paid off, but the next fight, making sure that all API extensions are fully implemented, functional and performant, has only just started for them. Their decision to use old, slow G6 memory may also show through under these conditions. Ultimately, these aren't the customer's concerns; gamers just want to play their games and play them well. And that is what AMD will have to deliver.
 
As already known, the 9060 XT isn't powerful enough (<10% over the 5060) to take advantage of 16 GB of VRAM. After a couple (literally) of years, more and more games will need the "right settings", exactly as mentioned in the 5060 review... Future-proofing on GPUs is unfortunately short, usually 3-5 years, depending mostly on raw horsepower.
A 12 GB version, like the 6700 XT had, could have been a wiser choice, but it would have meant no 8 GB version.
Yes it is. Check the 9060 XT 8 GB vs. 16 GB benchmarks. 8 GB holds it back so much that 16 GB can sometimes double the FPS.
The 5060 Ti is also powerful enough to benefit from 16 GB.

9060 XT is faster than these cards:

RX 6800 16GB
RTX 2080 Ti 11GB
RTX 4060 Ti 16GB
GTX 1080 Ti 11GB
Arc A770 16GB
Radeon VII 16GB

I don't remember anyone saying back then that the 2080 Ti etc. would run out of steam before it could utilize its VRAM properly.

As for the 5060 I'm unsure, but considering it's "only" 10% slower than the 9060 XT, I'm pretty sure it would be able to flex itself too, going by the examples of old cards I posted above.

Unfortunately, the only way to know is if someone mods it to 16 GB somehow. Right now we can only speculate on how much 8 GB is really holding it back.
3-5 years is exactly what most people want. I doubt many are buying higher-VRAM models in the hopes of using them for 10 years until driver support ends.

Higher VRAM can mean the difference between needing to upgrade in two years when the next series comes out (and suffering through those two years) and skipping the next series entirely, upgrading in four years with smooth sailing all the way.
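To put some rough numbers on the VRAM argument, here's a back-of-the-envelope Python sketch. Everything in it (the target count, the bytes per pixel) is an assumption for illustration, not a measurement from any real game:

```python
# Back-of-the-envelope VRAM math, illustrating why a fixed 8 GB buffer
# ages faster than the GPU core itself. All numbers here are rough
# assumptions for illustration, not measurements from any real game.

def render_target_mb(width: int, height: int, targets: int = 6,
                     bytes_per_pixel: int = 8) -> float:
    """Approximate G-buffer / render-target footprint in MB, assuming
    ~6 full-resolution targets at ~8 bytes per pixel on average."""
    return width * height * targets * bytes_per_pixel / 1024**2

for label, (w, h) in {"1080p": (1920, 1080),
                      "1440p": (2560, 1440),
                      "4K": (3840, 2160)}.items():
    print(f"{label}: ~{render_target_mb(w, h):.0f} MB of render targets")

# Render targets scale with resolution, but textures, geometry and RT
# acceleration structures scale with asset quality, and those keep
# growing year over year while an 8 GB card's capacity stays fixed.
```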

Their decision to use old, slow G6 memory may also show through under these conditions.
That's a pretty meaningless difference. Yes, on paper the 5070 Ti has 39% higher bandwidth (896 GB/s vs. the XT's 644 GB/s), but it's only 4% faster. And this was at launch; now (with the exception of path tracing) performance is pretty much equalized between the two.

So AMD is not losing much by using G6. Sure, they would be faster with G7, but not to any meaningful degree that would sway potential buyers, and G6 is not degrading the experience for existing owners.
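For anyone who wants to sanity-check those figures, the usual formula is bandwidth = bus width / 8 × effective data rate. A quick Python sketch; the bus widths and data rates are the commonly listed specs for the 9070 XT and 5070 Ti (which the quoted numbers appear to refer to), so treat them as assumptions:

```python
# Theoretical peak memory bandwidth: (bus width in bits / 8) * data rate.

def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s for a given bus width and effective data rate."""
    return bus_width_bits / 8 * data_rate_gbps

xt = bandwidth_gbs(256, 20.1)  # 9070 XT: 256-bit GDDR6 at ~20 Gbps effective
ti = bandwidth_gbs(256, 28.0)  # 5070 Ti: 256-bit GDDR7 at 28 Gbps effective

print(f"9070 XT: ~{xt:.0f} GB/s, 5070 Ti: ~{ti:.0f} GB/s")
print(f"On-paper advantage: {ti / xt - 1:.0%}")  # ~39% on paper, ~4% real-world
```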
 
9060 XT is faster than these cards:

RX 6800 16GB
RTX 2080 Ti 11GB
RTX 4060 Ti 16GB
GTX 1080 Ti 11GB
Arc A770 16GB
Radeon VII 16GB

A 5-year-old GPU with markedly reduced clock speeds and only 75% of its resources enabled
A 7-year-old GPU with several clusters disabled, reduced memory bandwidth and less than half the capacity of its full variant
Very debatable; it's within a few percent, give or take. Except that the 4060 Ti 16 GB has been on the market for 2 years and was utterly rejected by customers, who favored the 8 GB variant at the time
Ancient GPU that gramps got back in 2017, "I'll have you know it's still the GOAT" even though it was always just a further cut down version of the 2016 flagship
Intel's first crack at a dedicated GPU in ~25 years
The GPU that AMD abandoned in record time and never really supported well

It's a lot less impressive once you put that into perspective.
 
A 5-year-old GPU with markedly reduced clock speeds and only 75% of its resources enabled
A 7-year-old GPU with several clusters disabled, reduced memory bandwidth and less than half the capacity of its full variant
Very debatable; it's within a few percent, give or take. Except that the 4060 Ti 16 GB has been on the market for 2 years and was utterly rejected by customers, who favored the 8 GB variant at the time
Ancient GPU that gramps got back in 2017, "I'll have you know it's still the GOAT" even though it was always just a further cut down version of the 2016 flagship
Intel's first crack at a dedicated GPU in ~25 years
The GPU that AMD abandoned in record time and never really supported well

It's a lot less impressive once you put that into perspective.
The fact is that the 9060 XT is faster than all those cards, as it should be for a 2025 card.
So how is it that a 2025 card is supposedly not powerful enough to utilize 16 GB, but these older and slower cards were?
Do you think a 2080 Ti with a doubled 22 GB buffer (or the 24 GB Titan RTX) was able to fully utilize that VRAM in games 7 years ago?

It's a BS argument. 16 GB is a perfectly reasonable pairing for a 9060 XT-class GPU. We're not talking about 32 GB or more here.
 
Semantics; it's functionally equal to the 4060 Ti 16 GB, a 2-year-old card that received lukewarm reviews at best. That answers your question perfectly: 8 or 16, it didn't matter then, and it still doesn't matter today. The games that act up on 8 GB are, frankly, beyond what this level of hardware can reasonably run, unless you like to play games at (often sub-)30 fps. In any case, a turd is still a turd; tech tubers pushing 1440p and 4K ultra-quality gaming on these dinky cards and then insisting that the 8 GB model is useless will never not be funny. If anything, the 16 GB versions are useful for LLMs, so at least there's that silver lining. But focusing on gaming, employment is a reliable fix if one can't afford to step up beyond this price range and absolutely demands to play UE5 games at higher settings.

It's not really a BS argument; I'm pointing out that we shouldn't be striving to compare today's midrange to 7-to-9-year-old hardware. Maybe I'm too harsh? Whatever. I'm not really impressed with the state of GPUs right now, especially in the midrange. Other than the RTX 5090 (may God have mercy on your wallet, because Jensen won't), it's basically just choosing which one is the least mediocre, substandard rip-off.
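On the LLM silver lining: the rough rule of thumb is weight memory ≈ parameters × bits per weight / 8, plus a couple of GB of headroom for the KV cache. A quick Python sketch; the model sizes, the ~4.5 bits/weight Q4 figure, and the 2 GB headroom are generic assumptions, not specific to any runtime:

```python
# Estimate which quantized model sizes fit in a 16 GB card.
# Rule of thumb: weight memory (GB) ~= params (billions) * bits_per_weight / 8.

def weights_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate VRAM needed for model weights alone, in GB."""
    return params_billions * bits_per_weight / 8

KV_HEADROOM_GB = 2  # assumed allowance for KV cache and runtime overhead

for params in (7, 13, 24, 32):
    need = weights_gb(params, 4.5) + KV_HEADROOM_GB  # ~4.5 bits for a typical Q4 quant
    verdict = "fits" if need <= 16 else "too big"
    print(f"{params}B @ Q4: ~{need:.1f} GB total -> {verdict} in 16 GB")
```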
 
I got my XFX RX 9060 XT 16 GB (twin-fan version) for MSRP ($350) and honestly I adore it. FPS is good, even on my 1440p ultrawide. It would probably be even better if I had PCIe 5.0 x16 instead of the PCIe 4.0 x16 I have on my B650M.


Also handy that there are several shortish card options, down to the 200 mm PowerColor Reaper model. I only wish we had some single-fan or low-profile options for AMD, but so far nothing.

Either way, it's my favorite midrange card by a good clip, and the most exciting one we've had in that space in some time.
 
I got my XFX RX 9060 XT 16 GB (twin-fan version) for MSRP ($350) and honestly I adore it. FPS is good, even on my 1440p ultrawide. It would probably be even better if I had PCIe 5.0 x16 instead of the PCIe 4.0 x16 I have on my B650M.

The 9060 XT should run without losses on any Gen 3+ slot since it has all 16 lanes wired. Even 2.0 x16 slots on old systems should be OK, for what it's worth. On this subject it's only a shame that AMD removed legacy BIOS support this generation, so you can't install it in some old X58 machine; that's the only use case I personally find interesting for it (beyond a transcoding engine in a server).
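To put numbers behind that: usable per-lane throughput roughly doubles each PCIe generation, so a full x16 link has bandwidth to spare even on old boards. A quick Python sketch using the commonly cited post-encoding figures (rounded, so treat them as approximations):

```python
# Approximate usable per-lane throughput by PCIe generation (GB/s,
# after encoding overhead; commonly cited figures, rounded).
LANE_GBS = {"2.0": 0.5, "3.0": 0.985, "4.0": 1.969, "5.0": 3.938}

def link_gbs(gen: str, lanes: int) -> float:
    """Total one-direction link bandwidth for a given generation and width."""
    return LANE_GBS[gen] * lanes

for gen in LANE_GBS:
    print(f"PCIe {gen} x16: ~{link_gbs(gen, 16):.1f} GB/s")

# Even PCIe 2.0 x16 (~8 GB/s) matches PCIe 4.0 x8, which is why a card
# with all 16 lanes wired degrades far more gracefully on old boards
# than the x8 cards in this segment.
```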
 
Yes it is. Check the 9060 XT 8 GB vs. 16 GB benchmarks. 8 GB holds it back so much that 16 GB can sometimes double the FPS.
The 5060 Ti is also powerful enough to benefit from 16 GB.
It benefits from more VRAM, but not from the whole 16 GB. A 12 GB buffer would be better suited for that card; for games, I mean. It would be more "future-proof", as they say.

HUB discussing a potential 5060 Super with 12 GB @ 09:23.

^Mmhh, that one's a bit too weak for that buffer, but in a few scenarios it would probably help.
That's a pretty meaningless difference. Yes, on paper the 5070 Ti has 39% higher bandwidth (896 GB/s vs. the XT's 644 GB/s), but it's only 4% faster. And this was at launch; now (with the exception of path tracing) performance is pretty much equalized between the two.

So AMD is not losing much by using G6. Sure, they would be faster with G7, but not to any meaningful degree that would sway potential buyers, and G6 is not degrading the experience for existing owners.
Cache magic, gotta love it!

My guess is that MFG needs a lot of bandwidth, which is presumably why Nvidia gave the 50 series GDDR7 and its higher bandwidth. But given that MFG is trash (inb4 the haters: I mean the 3x/4x modes, not regular FG, which is legit, so holster those guns) because most of the imagery is approximated, the whole "muh GDDR7 is so superior" line doesn't hold much water.
The fact that RDNA4 cards are really strong in Topaz AI is crazy. Nvidia still gets more wins overall when looking at a very large sample of programs, but the cracks in the facade are getting bigger and more numerous.
the 4060 Ti 16 GB has been on the market for 2 years and was utterly rejected by customers, who favored the 8 GB variant at the time
Yes, but this can be (at least) partially attributed to the disgustingly large price gap between the two variants.
 
Yes, but this can be (at least) partially attributed to the disgustingly large price gap between the two variants.

Yeah, it was bad from a performance standpoint and cost $500. Also, when it released there were maybe 2-3 games (and not very good ones) that benefited from more than 8 GB.

Now you can use very reasonable settings, even with upscaling, and break the 8 GB cards if you just compare them to their 16 GB variants.

It's wild that we have $300+ cards, half a decade after the PS5/Series X release, with less available VRAM.

I agree, though; this whole segment should have been designed around 12 GB, but they gots to keep them margins up. Can't have gaming dragging down their 50-70% margins...
 
Yes it is. Check the 9060 XT 8 GB vs. 16 GB benchmarks. 8 GB holds it back so much that 16 GB can sometimes double the FPS.
The 5060 Ti is also powerful enough to benefit from 16 GB.

9060 XT is faster than these cards:

RX 6800 16GB
RTX 2080 Ti 11GB
RTX 4060 Ti 16GB
GTX 1080 Ti 11GB
Arc A770 16GB
Radeon VII 16GB

I don't remember anyone saying back then that the 2080 Ti etc. would run out of steam before it could utilize its VRAM properly.

As for the 5060 I'm unsure, but considering it's "only" 10% slower than the 9060 XT, I'm pretty sure it would be able to flex itself too, going by the examples of old cards I posted above.

Unfortunately, the only way to know is if someone mods it to 16 GB somehow. Right now we can only speculate on how much 8 GB is really holding it back.
3-5 years is exactly what most people want. I doubt many are buying higher-VRAM models in the hopes of using them for 10 years until driver support ends.

Higher VRAM can mean the difference between needing to upgrade in two years when the next series comes out (and suffering through those two years) and skipping the next series entirely, upgrading in four years with smooth sailing all the way.
IIRC the 2080 Ti is from 2018 and still has more than enough VRAM for its processing power. It's actually very simple: if the VRAM isn't enough at launch, it will remain insufficient, like the 3070 (Ti) or the 3080 10 GB. For my 3060 Ti, 8 GB was and still is absolutely sufficient. VRAM requirements rise together with raster performance needs, so for a newer, "heavy" game I have to find the right settings not because of VRAM, but due to limited processing power.
 
Absolute no-brainer... Nvidia have totally messed up again... If the 5060 had been a 12 GB GDDR6 card, it would have flown off the shelves.
 
Absolute no-brainer... Nvidia have totally messed up again... If the 5060 had been a 12 GB GDDR6 card, it would have flown off the shelves.

Jensen is looking at that "total mess-up" outperforming the competition in sales volume in pretty much each and every market that isn't a tech forum, right about now...

The 5060 sells, and it sells exceptionally well; I must remind you that Nvidia has the OEM and system integrator market by the proverbial jewels. And 12 GB is completely irrelevant at this performance level: 99 out of 100 games will run at high settings at 1080p on an 8 GB GPU.
 