RX 9060 XT 16 GB GPU Synthetic Benchmarks Leak

Ampere (2080 ---> 3080) was over 50% as well, especially in 4k.
True! I have both. Like everything, it depends on the game in question, but yeah, the performance uptick from the 2080 to the 3080 was about 50%.

EDIT: I went back and looked at the reviews and the average was more like 33% to 45%. So not quite 50% but in many cases, it's close.
For those who want to see for themselves.
 
No, nothing beats native. But because every developer has to use Unreal Crap 5 and optimize it on a Casio speak-and-spell computer, most games that have the same graphics as Croc 2 require upscaling of some sort to make them playable.
I would say FSR4 and DLSS without scaling, aka DLAA, beat native, especially when native is using TAA. I would even say DLSS/FSR4 upscaled beats native + TAA; TAA is just that bad. These techs have basically saved us from trash AA.
 
True! I have both. Like everything, it depends on the game in question, but yeah, the performance uptick from the 2080 to the 3080 was about 50%.

EDIT: I went back and looked at the reviews and the average was more like 33% to 45%. So not quite 50% but in many cases, it's close.
For those who want to see for themselves.
It's 66% at 4k (2080 ---> 3080)



[Attached chart: relative-performance_3840-2160.png]
 
That the difference between them is 66% at 4k, just like your screenshots showed. I'm not sure what you are disagreeing with, to be fair. Going from 60 to 100 is a 66% increase.
I'm not debating this. You need to brush up on how to read graph percentages.
 
I mean, even then at this scale it's mostly semantics, also considering the 3080 is a badly cut-down part and the 2080 is a tier-down chip (while also slightly cut). It was around this time that generational, per-SKU comparisons became completely useless, because getting an x80 SKU no longer meant you had a full chip, even if it was a step-down type (for example, the 980's full GM204 or the 1080's full GP104). The 1080 Ti used the 2016 Titan X's core with one memory channel disabled, and the full GP102 was only sold as the Titan Xp. The 2080 was TU104 with a couple of SMs shaved off, and the 2080 Ti was similarly TU102 with one memory channel shaved off, no clamshell, and a bunch of disabled cores vs. the Titan RTX.

The last time buying an x80 meant you got the fully enabled flavor of the biggest, meanest chip that Nvidia offered is now 15 years in the past - the GTX 580 and its full GF110.
 
I checked this scaling: in the Geekbench Vulkan test the RX 9070 has 91% of the RX 9070 XT's performance, and in TechPowerUp's game reviews at 1440p it is also 91%, so it looks like 1:1 scaling. In this test the RX 9060 XT shows 68.5% of the RX 9070 XT's performance. Looks promising.
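For what it's worth, that kind of relative-scaling check is easy to reproduce; a minimal sketch in Python, assuming made-up Geekbench-style scores (the values below are placeholders, not the actual leaked numbers):

```python
# Minimal sketch of the relative-scaling math; the scores are hypothetical
# placeholders, not the actual Geekbench Vulkan or TPU numbers.
scores = {
    "RX 9070 XT": 100_000,  # reference card (hypothetical score)
    "RX 9070":     91_000,  # hypothetical score
    "RX 9060 XT":  68_500,  # hypothetical score
}

reference = scores["RX 9070 XT"]
for card, score in scores.items():
    print(f"{card}: {score / reference:.1%} of the RX 9070 XT")
```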
 
Yes, it has 50% more VRAM, but that still doesn't change Nvidia's poor performance improvements gen to gen. You cannot escape the obvious when the facts are in front of you.
How’s that AMD performance doing again? Their current top end card just barely squeaks by Nvidia’s top end from two generations ago. That’s progress, I guess?
 
I'm not debating this.
There's nothing to debate here. It's in your own screenshot and TPU review links. The 2080 is 60% of the performance of the 3080; the 3080 is 1.66x as fast as the 2080. This shouldn't be that tough to wrap your head around. It's slightly confusing at first glance, and you could be forgiven for making this mistake, but you have to see that you're objectively wrong here. It's okay to admit it.
You need to brush up on how to read graph percentages.
Irony.
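For reference, the two readings of that chart come from the same ratio; a minimal sketch using the chart's normalized values (3080 = 100, 2080 = 60):

```python
# Two ways to read the same relative-performance chart (3080 = 100, 2080 = 60).
rtx_2080 = 60
rtx_3080 = 100

print(f"2080 as a share of the 3080: {rtx_2080 / rtx_3080:.0%}")              # 60%
print(f"3080 relative to the 2080:   {rtx_3080 / rtx_2080:.2f}x")             # ~1.67x
print(f"3080 is faster by:           {(rtx_3080 - rtx_2080) / rtx_2080:.1%}") # ~66.7%
```

Same data, two framings: "the 2080 is 40% slower" and "the 3080 is ~66% faster" are both true.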
AMD will get a pass as usual. Nvidia fans aren’t nearly as toxic as AMD diehards from what I’ve seen, so there won’t be an audience to rile up for the clicks.
We shouldn't be "fans" of brands, although I think it's fine to have a preference. I generally have a preference toward AMD products, but I have zero problem criticizing them if I think it's warranted. Irrespective of brand preference, I think we should all be cheering for AMD and Intel to succeed in gaining GPU market share. We're the consumer, and competition is good for us. Nobody should be defending any of these companies when they do misleading or straight-up anti-consumer shit. They all do it. Some more often and more egregiously than others. Being an apologist for any brand because you're a "fan" is pathetic. The whataboutism from diehard brand partisans is annoying as shit too.
How’s that AMD performance doing again? Their current top end card just barely squeaks by Nvidia’s top end from two generations ago. That’s progress, I guess?
The 9070XT has a comparable performance and die size to the 5070 Ti. RDNA4 is much more competitive in ray tracing than previous architectures have been. That's progress. RDNA4 is a step in the right direction. If they can take lessons from the relative swing and a miss that RDNA3 was, and build on the relative success that RDNA4 is, then they could be on to something next time around. If they can make the RDNA3 multi-die thing work on a better architecture that could be a huge advantage for them on flagship tier cards.

Does AMD need to do a lot more to be truly competitive and start gaining market share? Absolutely. Their architecture and feature set are still lagging behind, but it's good that they're closing the gap. They also need to ramp up supply if possible. I think most would agree that their OC cards being for sale at major retailers for 40% above MSRP isn't a good look. They need to fix that ASAP. They routinely step on their dicks in one way or another when it comes to MSRP pricing on their products. Their marketing department is bush league.

I also think AMD shit the bed by not seizing the opportunity to put out a flagship card this cycle. Even if a huge monolithic die would have killed margin, it still would have probably helped them gain some mind share.
 
The 9070XT has a comparable performance and die size to the 5070 Ti.
How does die size matter? The transistors are what matters. Die size depends on the node; a denser node will make for a smaller die, but then you are paying more for the node. In terms of transistors - so if both cards were made on the same node - the 9070xt is a lot larger than a 5080 but is slower than a 5070ti.
 
How does die size matter? The transistors are what matters. Die size depends on the node; a denser node will make for a smaller die, but then you are paying more for the node. In terms of transistors - so if both cards were made on the same node - the 9070xt is a lot larger than a 5080 but is slower than a 5070ti.
The die size matters for yields. The yield matters for the cost to produce the die. The transistor density is meaningless to the end consumer. All we should care about is price/performance.
 
The die size matters for yields. The yield matters for the cost to produce the die. The transistor density is meaningless to the end consumer. All we should care about is price/performance.
Yes, the die size matters when it comes to yields, but a similar die might be more expensive because it's on a denser node. The production cost is determined by the transistors at the end of the day, because the more transistors you have, the bigger the die you need. With all else being equal, the 5070ti is a lot cheaper to make because it has far fewer transistors, and yet it is actually faster than the 9070xt.

Arguing that the 9070xt has the same die size as the 5070ti completely misses the point of why die size matters. We use it to determine the cost of production - or better put - how much performance you get per the resources you need. And those resources are the transistors.
 
Yes, the die size matters when it comes to yields, but a similar die might be more expensive because it's on a denser node. The production cost is determined by the transistors at the end of the day, because the more transistors you have, the bigger the die you need. With all else being equal, the 5070ti is a lot cheaper to make because it has far fewer transistors, and yet it is actually faster than the 9070xt.

Arguing that the 9070xt has the same die size as the 5070ti completely misses the point of why die size matters. We use it to determine the cost of production - or better put - how much performance you get per the resources you need. And those resources are the transistors.
Both dice are built on variants of the same node. We determine how much a die costs to produce based on how many usable dice can be produced on a wafer, and based on how much that wafer costs. We have no idea what kind of per-wafer pricing TSMC is giving either company, although I suspect the wafer pricing and yield of these dice are close enough to be moot. They're roughly the same size die, on variants of the same node, from the same foundry. AMD doesn't have a node advantage here. Their architecture is just more transistor dense.
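A rough sketch of why die area (rather than transistor count directly) feeds into cost per good die, using the classic dies-per-wafer approximation and a simple Poisson yield model; the wafer price, defect density, and die areas below are illustrative assumptions, not actual TSMC figures:

```python
import math

# Illustrative cost-per-good-die model; every constant here is an assumption,
# not actual TSMC wafer pricing, defect density, or exact die sizes.
def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> float:
    """Classic gross dies-per-wafer approximation."""
    r = wafer_diameter_mm / 2
    return (math.pi * r ** 2) / die_area_mm2 - (math.pi * wafer_diameter_mm) / math.sqrt(2 * die_area_mm2)

def yield_rate(die_area_mm2: float, defects_per_cm2: float = 0.1) -> float:
    """Simple Poisson yield model."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100)

def cost_per_good_die(die_area_mm2: float, wafer_cost_usd: float = 15_000) -> float:
    good_dies = dies_per_wafer(die_area_mm2) * yield_rate(die_area_mm2)
    return wafer_cost_usd / good_dies

for name, area in [("~357 mm^2 (Navi 48-ish)", 357), ("~378 mm^2 (GB203-ish)", 378)]:
    print(f"{name}: ~${cost_per_good_die(area):.0f} per good die (illustrative)")
```

The point is just that two dice of roughly the same area, on the same node, end up with roughly the same cost per good die, regardless of how densely each vendor packed its transistors.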
 
How’s that AMD performance doing again? Their current top end card just barely squeaks by Nvidia’s top end from two generations ago. That’s progress, I guess?
I mean **70 series progress over time, not the whole lineup. Technically, AMD at least is not cutting its hardware in that sector.

GPU shrinkflation mainly comes from Nvidia...
 
I mean **70 series progress over time, not the whole lineup. Technically, AMD at least is not cutting its hardware in that sector.

GPU shrinkflation mainly comes from Nvidia...
But your comparison is fundamentally flawed - or should I say ridiculous?

The 6700 XT had an MSRP of $479, and you are comparing it to the 9070 XT, which has a fake MSRP of $600 and an actual MSRP of $699.

Then you compared a 3070 ($499 MSRP) to the 5070 ($549 MSRP). So obviously the jump from the 6700 XT to the 9070 XT will be larger, since it costs a whole goddamn lot more money, lol. If you equalize for price, Nvidia has given more performance for the money, by a long shot.

Nvidia 3070 to 5070 = 60% more performance for 10% more money.

AMD 6700 XT to 9070 XT = 210% more performance for 46% more money.

60% / 10% = 6% more performance for each 1% of extra money for Nvidia
210% / 46% = 4.56% more performance for each 1% of extra money for AMD

So Nvidia is giving 32% more performance per price compared to AMD. 32 freaking %. AMD's GPU division is a freaking disaster, but here we are talking about Nvidia yet again :D
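Plugging the post's own percentages into a quick script (the perf/price deltas are this post's claims taken at face value, not independently verified figures):

```python
# Reproduces the arithmetic above; the perf/price deltas are the post's
# claims taken at face value, not independently verified numbers.
def perf_per_price(perf_gain_pct: float, price_gain_pct: float) -> float:
    """Percentage points of extra performance per 1% of extra money."""
    return perf_gain_pct / price_gain_pct

nvidia = perf_per_price(perf_gain_pct=60, price_gain_pct=10)    # 3070 -> 5070
amd = perf_per_price(perf_gain_pct=210, price_gain_pct=46)      # 6700 XT -> 9070 XT

print(f"Nvidia: {nvidia:.2f}% performance per 1% extra money")  # 6.00
print(f"AMD:    {amd:.2f}% performance per 1% extra money")     # 4.57
print(f"Nvidia advantage: {nvidia / amd - 1:.0%}")              # ~31-32%, depending on rounding
```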
 
I am appalled by the level of sheer ignorance being displayed by people who are very clearly intelligent enough to work out the facts of reality... What the hell people, you're smarter than this! Good fricken grief.. :rolleyes:
 
Depends on the region, but right now:

RX 9070 XT @4k is ~21% faster than RTX 5070 but for 24% more money.
RTX 5070 Ti @4k is ~5% faster than RX 9070 XT but for 21% more money.
RX 7900 XTX @4k is ~7% faster than RX 9070 XT but for 14.4% more money. RT may be an issue here as it's around 20%+ slower in this area @4k on average.

The RTX 5070 has a good p/p ratio, but its actual performance may not be enough for everybody coming from an RTX 3080, RTX 4070, RX 6800 XT, RX 7800 XT, etc., especially if you have an ultrawide or 4k screen.

The RX 9070 XT and RTX 5070 are very close to each other when it comes to p/p ratio, but the RTX 5070 Ti is clearly behind both, and the RX 7900 XTX is simply outdated, with poor RT performance and no FSR4.
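For anyone who wants to reproduce those kinds of figures for their own region, a minimal sketch; the average-fps and price values below are hypothetical placeholders, not any specific region's street prices or review results:

```python
# Sketch of deriving "X% faster for Y% more money"; the fps and price values
# are hypothetical placeholders, not real street prices or review results.
def compare(base_fps: float, base_price: float, other_fps: float, other_price: float) -> str:
    faster = other_fps / base_fps - 1
    pricier = other_price / base_price - 1
    return f"{faster:+.0%} performance for {pricier:+.0%} money"

# Hypothetical 4k average fps and local prices: (fps, price).
rtx_5070 = (60, 620)
rx_9070_xt = (73, 770)
rtx_5070_ti = (77, 930)

print("RX 9070 XT vs RTX 5070:   ", compare(*rtx_5070, *rx_9070_xt))
print("RTX 5070 Ti vs RX 9070 XT:", compare(*rx_9070_xt, *rtx_5070_ti))
```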
 