
RX 9060 XT 16 GB GPU Synthetic Benchmarks Leak

Well, unless you've upgraded since you set your forum specs, you have a 3080, which means you can run DLSS 4's super sampling component at DLAA level (which is the purest image right now)
Is there a separate way to enable it, or does it have to be implemented on a per-game basis?
 
Is there a separate way to enable it, or does it have to be implemented on a per-game basis?

The easiest way is DLSS Swapper plus Nvidia Inspector. The new Nvidia App is the other option, but not every game is supported in there, and for me it's just easier to do it manually.

I use DLSS Swapper to move every game to the latest DLSS version, although I personally don't like DLSS 4 in third-person games. Still, it lets you try different versions and see what your preference is.
 
Is there a separate way to enable it, or does it have to be implemented on a per-game basis?

Per game, has to be implemented by the developer. Use Nvidia App or DLSS Swapper to replace DLLs if you want to upgrade them before the developers do. You can download the latest DLLs from TPU's download section and replace them manually as well.

Special K can also be used for DLSS upgrades and render preset control in games that have no anti-cheat protection.
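If you do go the fully manual route, here's a rough sketch of what the swap amounts to. The paths and game folder are hypothetical placeholders, nvngx_dlss.dll is the DLSS Super Resolution DLL, and keeping a backup lets you roll back if a game misbehaves:

[CODE=python]
import shutil
from pathlib import Path

# Hypothetical paths -- point these at your own game install and at the
# newer DLL you downloaded (e.g. from TPU's download section).
GAME_DIR = Path(r"C:\Games\SomeGame")            # placeholder game folder
NEW_DLL  = Path(r"C:\Downloads\nvngx_dlss.dll")  # newer DLSS Super Resolution DLL

# The game may keep the DLL in a subfolder, so search recursively.
for old_dll in GAME_DIR.rglob("nvngx_dlss.dll"):
    backup = old_dll.with_name(old_dll.name + ".bak")
    if not backup.exists():
        shutil.copy2(old_dll, backup)   # keep the shipped version for rollback
    shutil.copy2(NEW_DLL, old_dll)      # drop in the newer DLL
    print(f"Replaced {old_dll} (backup: {backup})")
[/CODE]

This is only a sketch of the manual approach; DLSS Swapper and the Nvidia App do the same bookkeeping for you.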
 
Per game, has to be implemented by the developer. Use Nvidia App or DLSS Swapper to replace DLLs if you want to upgrade them before the developers do. You can download the latest DLLs from TPU's download section and replace them manually as well.

Special K can also be used for DLSS upgrades and render preset control in games that have no anti-cheat protection.
Special K is also a great cereal.

I'll check it out.
 
The 5070 is a low-end xx60 trash GPU? You do realise it will run circles around the 9060 XT, right?
Yes, because Nvidia isn't even trying to be competitive; most people will buy it anyway. If Nvidia were more competitive, AMD would be behind, but it's not like that, because AMD is usually better p/p-wise. RTX 5060 and RTX 5070 prices only look good because the gen-for-gen performance isn't there, and thank god people aren't that dumb and are seeing it. I'm not saying the RTX 5070 is a bad GPU, just that its hardware is heavily cut down and Nvidia will make big money on it.

Intel ARC B580 Die Size 272 mm² @260€
NVIDIA RTX 5070 Die Size 263 mm² @550€
 
Yes, because Nvidia isn't even trying to be competitive; most people will buy it anyway. If Nvidia were more competitive, AMD would be behind, but it's not like that, because AMD is usually better p/p-wise. RTX 5060 and RTX 5070 prices only look good because the gen-for-gen performance isn't there, and thank god people aren't that dumb and are seeing it. I'm not saying the RTX 5070 is a bad GPU, just that its hardware is heavily cut down and Nvidia will make big money on it.

Intel ARC B580 Die Size 272 mm² @260€
NVIDIA RTX 5070 Die Size 263 mm² @550€
Don't use die size to compare GPUs; use performance results in real scenarios. All this talk of cut-down silicon and stuff like "a 5070 is really a 5060 because of the 192-bit bus, the xx60 cards were 192-bit for generations" is dumb. The 5070 is 50% faster than the 3070 in raster, 100% faster in RT, has 50% more VRAM, and it comes at a significantly lower MSRP when you adjust for inflation. Comparing Intel's die size and bus width to AMD's or Nvidia's is also crazy, because Intel doesn't have the same level of refinement or experience in the GPU space, so they're brute-forcing their solution by throwing more compute resources and bandwidth at the problem for now to reach the performance of smaller, more cost-effective (to manufacture) GPUs from AMD and Nvidia.

Two GPUs of similar size on the same architecture can have vastly different performance, the same way two vehicle engines of the same displacement can have vastly different power outputs, so judging a car's performance based solely on the size of the engine would make you wrong almost every time. That analogy really does apply to GPUs across different generations, architectures, process nodes, bus widths, etc.
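To put rough numbers on that argument, here's a sketch comparing euros per mm² of silicon against euros per unit of delivered performance. The die sizes and prices are the ones quoted above; the relative-performance index is a hypothetical placeholder you would fill in from actual reviews:

[CODE=python]
# Sketch: why "price per mm^2 of die" and "price per unit of performance"
# tell very different stories. Perf index values are hypothetical placeholders.
cards = {
    #  name        (die mm^2, price EUR, relative perf placeholder)
    "Arc B580": (272, 260, 1.00),
    "RTX 5070": (263, 550, 1.60),
}

for name, (die_mm2, price_eur, perf) in cards.items():
    print(f"{name:9s}  {price_eur / die_mm2:5.2f} EUR/mm^2   "
          f"{price_eur / perf:6.1f} EUR per perf unit")
[/CODE]

Whatever the real perf index turns out to be, the point stands: the cost-per-area and cost-per-performance rankings don't have to agree.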
 
Yes, because Nvidia isn't even trying to be competitive; most people will buy it anyway. If Nvidia were more competitive, AMD would be behind, but it's not like that, because AMD is usually better p/p-wise. RTX 5060 and RTX 5070 prices only look good because the gen-for-gen performance isn't there, and thank god people aren't that dumb and are seeing it. I'm not saying the RTX 5070 is a bad GPU, just that its hardware is heavily cut down and Nvidia will make big money on it.

Intel ARC B580 Die Size 272 mm² @260€
NVIDIA RTX 5070 Die Size 263 mm² @550€

So I should be expecting the 9060 XT to walk all over the 5070 then. Well, a week left until reviews.
 
The 5070 is 50% faster than the 3070 in raster, 100% faster in RT, has 50% more VRAM, and it comes at a significantly lower MSRP when you adjust for inflation.
Numbers don't lie.

The RX 9070 XT at 4K is ~44% faster than the RX 7800 XT (average fps) and about ~84% faster in ray tracing. Both GPUs have very similar die sizes and VRAM amounts; the RX 9070 XT's die is slightly bigger, 357 mm² vs the RX 7800 XT's 346 mm².

The RTX 5070 at 4K is only ~24.9% faster than the RTX 4070 (average fps) and only about ~16% faster in ray tracing. The older RTX 4070 uses a bigger die, 294 mm² vs the RTX 5070's 263 mm².

The RX 9070 XT dropped in price again; the best price is now 661€ vs 441€ for the RX 7800 XT. That's 49.9% more money for the 9070 XT, which overall is a decent upgrade performance-wise but definitely not great value in general, as the GPU technically doesn't cost that much more to make than the RX 7800 XT.
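For what it's worth, the percentages above are just straightforward ratios. A quick sketch of the arithmetic, using the 661€/441€ prices quoted above; the fps pair is a hypothetical example of the same formula applied to performance:

[CODE=python]
def pct_more(new, old):
    """How much bigger 'new' is than 'old', in percent."""
    return (new / old - 1) * 100

# Price premium quoted above: RX 9070 XT at 661 EUR vs RX 7800 XT at 441 EUR.
print(f"Price premium: {pct_more(661, 441):.1f}%")   # ~49.9%

# Same formula for the performance deltas, e.g. 72 fps vs 50 fps
# (hypothetical numbers) gives a 44% uplift.
print(f"Perf uplift:   {pct_more(72, 50):.1f}%")      # 44.0%
[/CODE]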
 
The die size comparison is funny. Apples to oranges, and as always, completely excludes the cost of research and development, as well as the software ecosystem's entry price.
 
So I should be expecting the 9060 XT to walk all over the 5070 then. Well, a week left until reviews.
What? Why would you think that? The RX 7600 and RX 7600 XT were never good GPUs... Just saying.
 
What? Why would you think that? The RX 7600 and RX 7600 XT were never good GPUs... Just saying.
Since the 5070 is a lowly garbage xx60 GPU (your words), why shouldn't the 9060 XT walk all over it?
 
completely excludes the cost of research and development, as well as the software ecosystem's entry price.
That's irrelevant if, in the end, the p/p ratio is weak.
 
That's irrelevant if, in the end, the p/p ratio is weak.

No, it is certainly not irrelevant from a business' perspective. Neither AMD nor Nvidia are running a charity here. To think otherwise is delusional.
 
No, it is certainly not irrelevant from a business' perspective. Neither AMD nor Nvidia are running a charity here. To think otherwise is delusional.
So are you just a GPU consumer, or do you work directly for AMD or Nvidia? It's just hilarious.
 
So are you just a GPU consumer, or do you work directly for AMD or Nvidia? It's just hilarious.

How on God's green earth have you come to the conclusion that I work for GPU vendors just for stating the obvious? Driver engineers and GPU research and development aren't free, and you are paying for that. You're coming from a perspective of emotion, not reason.
 
No, it is certainly not irrelevant from a business' perspective. Neither AMD nor Nvidia are running a charity here. To think otherwise is delusional.
As a customer I don't care about the business's perspective; neither company is a charity, and to think anyone should be paying for anything more than the hardware is delusional.
Also, neither company is Apple. I know Nvidia is trying to outdo Apple at fleecing their customers by upcharging for software, but for games I don't care about some "ecosystem".
 
As a customer I don't care about the business's perspective; neither company is a charity, and to think anyone should be paying for anything more than the hardware is delusional.
Also, neither company is Apple. I know Nvidia is trying to outdo Apple at fleecing their customers by upcharging for software, but for games I don't care about some "ecosystem".

Certainly understandable that you don't; I'm not a shareholder in either, so I don't either. But it remains a fact that we have to deal with.

I would hardly accuse Apple of fleecing customers considering the iPhone XR, their 2018 budget model, is still getting iOS updates. God knows how many years ago Samsung, which is still the premier Android vendor overall, discontinued its 2018 Galaxy A series phones. So to that end, you get what you pay for. In the same vein, actually.
 
As a customer I don't care about the business's perspective; neither company is a charity, and to think anyone should be paying for anything more than the hardware is delusional.
Also, neither company is Apple. I know Nvidia is trying to outdo Apple at fleecing their customers by upcharging for software, but for games I don't care about some "ecosystem".
DLSS 4 works on 2018 GPUs.

FSR 4 doesn't work on 2023 GPUs.

Yet Nvidia is the one fleecing its customers? You are immune to reason and logic, aren't you?

Certainly understandable that you don't; I'm not a shareholder in either, so I don't either. But it remains a fact that we have to deal with.

I would hardly accuse Apple of fleecing customers considering the iPhone XR, their 2018 budget model, is still getting iOS updates. God knows how many years ago Samsung, which is still the premier Android vendor overall, discontinued its 2018 Galaxy A series phones. So to that end, you get what you pay for. In the same vein, actually.
Part of being a customer is having unreasonable, unachievable demands. Otherwise you are either a fanboy or a shareholder...
 
Numbers don't lie.

The RX 9070 XT at 4K is ~44% faster than the RX 7800 XT (average fps) and about ~84% faster in ray tracing. Both GPUs have very similar die sizes and VRAM amounts; the RX 9070 XT's die is slightly bigger, 357 mm² vs the RX 7800 XT's 346 mm².

The RTX 5070 at 4K is only ~24.9% faster than the RTX 4070 (average fps) and only about ~16% faster in ray tracing. The older RTX 4070 uses a bigger die, 294 mm² vs the RTX 5070's 263 mm².

The RX 9070 XT dropped in price again; the best price is now 661€ vs 441€ for the RX 7800 XT. That's 49.9% more money for the 9070 XT, which overall is a decent upgrade performance-wise but definitely not great value in general, as the GPU technically doesn't cost that much more to make than the RX 7800 XT.
I think you misread my post in multiple ways:

1. I didn't mention the 9070 XT; I explicitly said 5070.
2. I compared the 5070 to the 3070. If I'd meant the 4070, I'd have typed 4070. The additional clue was in the "50% more VRAM", i.e. the 3070's 8 GB vs the 5070's 12 GB.
 
Numbers don't lie.

The RX 9070 XT at 4K is ~44% faster than the RX 7800 XT (average fps) and about ~84% faster in ray tracing. Both GPUs have very similar die sizes and VRAM amounts; the RX 9070 XT's die is slightly bigger, 357 mm² vs the RX 7800 XT's 346 mm².

The RTX 5070 at 4K is only ~24.9% faster than the RTX 4070 (average fps) and only about ~16% faster in ray tracing. The older RTX 4070 uses a bigger die, 294 mm² vs the RTX 5070's 263 mm².

The RX 9070 XT dropped in price again; the best price is now 661€ vs 441€ for the RX 7800 XT. That's 49.9% more money for the 9070 XT, which overall is a decent upgrade performance-wise but definitely not great value in general, as the GPU technically doesn't cost that much more to make than the RX 7800 XT.
Lol at 4 nm vs 5/6 nm, lol at 53.9 billion transistors vs. 36.3 billion (I reckon this is the total value for the GCD + MCDs of the 7800 XT).
Of course it has considerably more performance, given the 9070 XT is the successor of the 7800 XT, but at a higher MSRP thanks to good guy AMD.

As for the 5070 vs 4070, they're both 5 nm; the GB205 has 31.1 billion transistors vs the AD104's 35.8 billion. If the die size and transistor count were similar, then the 5070 would've performed better.
At 1440p it's very close to the 4070 Ti, which has the full AD104 die, whereas the 5070 uses a slightly gimped GB205 die. A 5070 Super would have slightly more performance with a smaller die, fewer transistors, and the same node.

So even if Blackwell gets spat on continuously, at least it manages to improve on Ada within the same technological constraints. It's a tock; why do people hate it for not being a tick?
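As a rough sanity check on the node point, here's a quick density calculation using only the die sizes and transistor counts quoted in this thread (the Navi 32 figure is the poster's own GCD + MCD estimate, not an official number):

[CODE=python]
# Sketch: transistor density from the figures quoted in this thread.
dies = {
    "Navi 48 (RX 9070 XT)": (53.9e9, 357),   # transistors, die area in mm^2
    "Navi 32 (RX 7800 XT)": (36.3e9, 346),   # poster's GCD + MCD estimate
    "GB205 (RTX 5070)":     (31.1e9, 263),
    "AD104 (RTX 4070)":     (35.8e9, 294),
}

for name, (transistors, area_mm2) in dies.items():
    mtr_per_mm2 = transistors / area_mm2 / 1e6   # millions of transistors per mm^2
    print(f"{name:22s} {mtr_per_mm2:6.1f} MTr/mm^2")
[/CODE]

The gap in density between the 4 nm and 5/6 nm parts is exactly why comparing generations by die area alone is misleading.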
 
DLSS 4 works on 2018 GPUs.

FSR 4 doesn't work on 2023 GPUs.

Nvidia locked all Pascal and earlier-generation GPU owners out of DLSS.

Nvidia locked frame generation out for Ampere owners, even though the FSR variant works decently enough and no longer requires the optical flow BS.

Nvidia locked MFG out for Ada owners.

How is this any different?

It's nice that Nvidia has updated the SR portion for Turing and beyond, but don't be surprised when Lovelace gets locked out of 6x frame gen or some other technology.

That's just business 101 when neither company can deliver meaningful gains in actual raw performance.
 
Nvidia locked all Pascal and earlier-generation GPU owners out of DLSS.

Nvidia locked frame generation out for Ampere owners, even though the FSR variant works decently enough and no longer requires the optical flow BS.

Nvidia locked MFG out for Ada owners.

How is this any different?

It's nice that Nvidia has updated the SR portion for Turing and beyond, but don't be surprised when Lovelace gets locked out of 6x frame gen or some other technology.

That's just business 101 when neither company can deliver meaningful gains in actual raw performance.
Uhm, that's the point: it's not. So how does the claim "Nvidia is fleecing their customers" hold any merit when AMD has done the same or worse? Because I swear to god, I've never heard anyone around here claiming left and right on every goddamn thread that AMD is fleecing their customers.

Why is that, in your opinion? :p

E.g., FSR exists for Pascal owners thanks to Nvidia. We both know that FSR wouldn't have existed without Nvidia introducing DLSS, right?
 
DLSS 4 works on 2018 GPUs.

FSR 4 doesn't work on 2023 GPUs.

Yet Nvidia is the one fleecing its customers? You are immune to reason and logic, aren't you?
As mentioned, Nvidia Pascal doesn't get DLSS, even though FSR runs fine on older GPUs.
Why does Nvidia restrict Ada cards from running MFG? The Ampere cards should be capable of MFG as well; 8 GB cards like the 3060 and 3070 run out of VRAM in newer titles, so MFG would be helpful for those who can't upgrade or don't want to settle for paying inflated prices for a new 8 GB card.
And you can run the utility Lossless Scaling on any GPU for $6, so why can't a multi-trillion-dollar company come up with an upscaling tool that works on every GPU brand?
Nvidia keeps fleecing their customers for software they don't even update for older cards; the jacket man would rather you buy a new card.

E.g., FSR exists for Pascal owners thanks to Nvidia. We both know that FSR wouldn't have existed without Nvidia introducing DLSS, right?
No, Nvidia didn't invent upscaling. Also, FSR is an open source API.
 
As mentioned, Nvidia Pascal doesn't get DLSS, even though FSR runs fine on older GPUs.
Why does Nvidia restrict Ada cards from running MFG? The Ampere cards should be capable of MFG as well; 8 GB cards like the 3060 and 3070 run out of VRAM in newer titles, so MFG would be helpful for those who can't upgrade or don't want to settle for paying inflated prices for a new 8 GB card.
And you can run the utility Lossless Scaling on any GPU for $6, so why can't a multi-trillion-dollar company come up with an upscaling tool that works on every GPU brand?
Nvidia keeps fleecing their customers for software they don't even update for older cards; the jacket man would rather you buy a new card.
So let me get this straight: instead of focusing your criticism on the GPUs you buy by your own admission (AMD ones) and complaining that you're not getting FSR 4 because AMD is greedy as hell and wants you to upgrade, you focus your complaints on GPUs that you have zero intention of ever buying. Yeah, checks out, man.
 
Why is FSR 4 locked behind these newer cards? I thought AMD said that wouldn't end up being the case?

Guess the future for previous-gen RX cards will be XeSS and Lossless Scaling :(
'AI Engine' things.
More or less; FSR4 was built around RDNA4's RT/AI hardware (IIRC, the source of the 9070 XT beating the 7900 XTX in RT).

AMD will have to 'backport' and re-optimize FSR4 for RDNA3's 'AI hardware'.

Reminds me of the RX Vega 56 (at least those with Samsung HBM chips) and the R9 Fury. It could be poor scaling at the hardware level, could be driver halfassery, could be a lot of things. That's where Chips and Cheese's technical deep dives tend to bring in the goods.
Same

Wrong/bad timings, cache misses, IF:VRAM oddities, memory compression engine things? :confused:

If VRAM OC doesn't help much, then it seems something in the core could be getting overwhelmed.
IIRC, on Vega 10 there were 'sweet spots' in core clocks that clocked up the SoC or FCLK or something, helping to alleviate that.

(Assuming '2/3 the performance, 1/2 the hardware' on the 9060 XT has validity.)
Perhaps AMD left Navi 44's memory compression hardware 'full fat' from Navi 48, but tasked it with half the bus width?

DLAA is amazing. If FSR 4 has a similar mode, I highly suggest you ditch native resolution with its Vaseline TAA and use that instead. And at least since DLSS 3.5, distinguishing Quality from Native has gotten very hard. More often than not, it's worth enabling for the power and heat savings alone.
You got me there. Temporal post-processing AA sucks.
I have been seeing "Native AA" and "Native" as active options for FSR and XeSS in more games lately.
It certainly does look better vs. TAA/TXAA/TMAA/TSSAA.

p/p ratio is weak
:roll:
I am a child, apparently.
Sorry-not-sorry.
 