
NVIDIA GeForce RTX 5060 Ti 16 GB SKU Likely Launching at $499, According to Supply Chain Leak

Assuming that the provisional specs for the 5060 Ti end up being real, the configuration is basically the full AD106 chip but "ported" to Blackwell.
Nvidia never gave us a desktop GPU with the full chip; if they had, it would've been a 4060 Ti Super.

Looking at the render config table, there is a ~6% increase in everything but ROPs and cache, and the boost clock is up by less than 2%. All of this would amount to about 7-8% more performance. However, the bandwidth then comes into play.
Personally, I am convinced that the 4060 Ti was bottlenecked by its bandwidth, but I don't know to what degree, whether only slightly or perhaps severely.
The bandwidth is about 56% higher on the 5060 Ti, so whatever bottleneck the 4060 Ti had should be completely eliminated now thanks to such a significant bump.
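For reference, here's a rough back-of-the-envelope sketch of where that ~56% figure comes from, assuming the rumored 128-bit GDDR7 at 28 Gbps for the 5060 Ti versus the 4060 Ti's 128-bit GDDR6 at 18 Gbps (the GDDR7 speed is a leak-based assumption, not a confirmed spec):

```python
# Rough memory-bandwidth sketch; the 5060 Ti figures come from leaks, not confirmed specs
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps."""
    return bus_width_bits / 8 * data_rate_gbps

rtx_4060_ti = bandwidth_gb_s(128, 18)  # 128-bit GDDR6 @ 18 Gbps -> 288 GB/s
rtx_5060_ti = bandwidth_gb_s(128, 28)  # 128-bit GDDR7 @ 28 Gbps (assumed) -> 448 GB/s

print(f"4060 Ti: {rtx_4060_ti:.0f} GB/s, 5060 Ti: {rtx_5060_ti:.0f} GB/s, "
      f"uplift: {rtx_5060_ti / rtx_4060_ti - 1:.0%}")  # ~56%
```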

Regarding performance: on the 4060 Ti GPU specs page, where that card is taken as the baseline, three cards sit about 15% ahead of it: the RTX 3070 Ti, the RX 7700 XT and the RX 6800. The RTX 4070 has 29% more performance.
My most favorable guess is that the 5060 Ti is going to have roughly 15% more performance than the 4060 Ti, and thus it will slot in halfway between that card and the RTX 4070.
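As a rough sketch of how I get there (purely multiplicative scaling from the rumored config, so treat it as an estimate rather than a prediction):

```python
# Naive scaling estimate; all inputs are rumored/leaked figures, not confirmed specs
shader_scaling = 1.06   # ~6% more SMs/shaders/TMUs than the 4060 Ti
clock_scaling  = 1.02   # <2% higher boost clock
bandwidth_gain = 0.56   # ~56% more memory bandwidth

compute_uplift = shader_scaling * clock_scaling - 1  # ~8% if purely compute-bound
print(f"Compute-bound uplift: ~{compute_uplift:.0%}, bandwidth uplift: ~{bandwidth_gain:.0%}")
# If the 4060 Ti really was bandwidth-starved, the real-world gain could land closer
# to ~15%; if it wasn't, expect something nearer the ~8% compute-bound figure.
```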

Reminds me of the 1660 Super which had a huge 75% more bandwidth than the regular 1660. This only translated into about 9% faster performance overall though (the clocks being identical).
 
What an absolute joke.
The 5070 is a lower tier GPU than the 1650 Super was in the Turing days and this is even worse. Selling it for anything above $150 is an insult.
 
What an absolute joke.
The 5070 is a lower tier GPU than the 1650 Super was in the Turing days and this is even worse. Selling it for anything above $150 is an insult.

Repeating this over and over again may make it believable to you, but it does not necessarily make that true.
 
Where is the cost to design, engineer, produce and sell?

You keep saying it's too expensive without any facts about what the product costs to produce, let alone the value provided to the customer.

What an absolute joke.
The 5070 is a lower tier GPU than the 1650 Super was in the Turing days and this is even worse. Selling it for anything above $150 is an insult.
Thanks my ignore list was hungry for a new reg.
 
Efficiency with 600W GPUs and melting connectors?! lol sure :roll:
20% gen over gen after 2 years is ridiculous. Nvidia used to release new GPUs every year with 20-30% more performance, but now it's every 2 years and, except for the x90, the rest is barely improving... So yes, Gamers who have been buying and using Nvidia GPUs for so many years kinda have their word to say. Without Gamers, Nvidia would never even have existed!
I don't think I would want annual GPU releases with at least 30% uplift; talk about fast obsolescence. Was it really that quick before?

On the subject of the 16 gig 5060 Ti vs the limited-VRAM 5070, the former is the better buy in my opinion. I got a 3080 FE 10G, which I initially thought was a good deal considering market conditions, but oh man, that VRAM was so crippling.
 
I don't think I would want annual GPU releases with at least 30% uplift; talk about fast obsolescence. Was it really that quick before?
Yup. Back in the early to mid 00s the GPU market was insane. Forget “annual”, sometimes less time could pass before something completely blew your “new and cutting edge” video card out of the water. These days, one can comfortably skip a gen or even two depending on his needs and/or wants, hell, people on Pascal are only now feeling the sting and those cards are almost a decade old. Back then, your card could be straight up unusable without some software modifications in newer titles. I still remember crunchy as hell ghetto mods to run, say, Half-Life 2 or Oblivion on unsupported cards.
 
$499 is beyond DOA. No way it's gonna be $499. NVIDIA isn't the brightest sometimes, but they're not THAT stupid, right?
Nvidia doesn't care. They will extract as much $$$ as they can from whoever will buy the slop, while focusing on the markets they actually care about: Eyy Eye and data center.
 
I've been a gamer since the early 1980s. My first home gaming was on a Commodore 64, then an Amiga. In real terms these things cost a fortune compared to the GPUs today which have astronomical performance which would have been considered extreme science fiction back then. But nowadays it doesn't seem to matter how much things keep improving, someone will always say it isn't enough or it's too expensive. People are spoiled, that's the reality. Well maybe not most people, who are happy to buy these cards and are delighted with them. There is a small but very loud minority who will always complain though.

I had Pong at home too and it was amazing back then, but we're not in the same era anymore. Sure, we can (and should) be amazed by how far technology has come, but younger people also take technology for granted. Back in the day we didn't even have unlimited or fast Internet (DSL/fiber), and modems would need a minute to connect with those weird noises like a fax or something lol. But people have changed and become accustomed to all these things because that's the way it is now, the same way you wouldn't understand how someone could have a house without a fridge, dishwasher, tap water, AC, TV, etc. All those things have become the norm.
The problem we are facing is that we're getting closer and closer to that "nanometer wall" every day, and Moore's Law is dying too, so things become more and more expensive to make... but the truth is that we all knew RT/PT was too demanding, and with engines like UE5 that run poorly on most PCs, with a lot of traversal stutters and huge performance drops (maybe too demanding for current hardware too?), a lot of gamers feel it's not always worth it. And Nvidia are not making things better with each new generation, RTX 50 being the worst ever: almost 0% IPC increase in Raster and RT/PT vs Lovelace, most GPUs barely got a performance bump (and have had crippled hardware over the years), Nvidia didn't use TSMC 3nm so Blackwell efficiency isn't really better either, there are still melting connectors 4 years after the connector's release, Nvidia drivers are getting worse and worse (even though they used to be extremely reliable), etc.

I really think people don't mind paying a premium if it's worth it, but when you see the PS5 and PS5 Pro at $500 and $700 while an RTX 5080 alone is $1,000 and you still need to buy all the other parts of the system too, you wonder if it's really worth it.

They can only do so much now to increase performance, plus there are wafer costs and inflation. I don't know what to expect here except gradually increasing prices, diminishing returns, yield issues on large chips and encroaching on absolute silicon limits.

As for RT and tensor cores, last I checked they account for under 10% of die size and disproportionately increase the capability relative to that size. Consumers also rarely dictate innovation, most just want more faster better cheaper of the same.

But lastly, I don't know why anyone says nobody asked for them; gamers have been asking continually for leaps in rendering realism since the dawn of video games, and RT was the next big leap in lighting, which helps developers too. So if I said I asked for them, that's enough to completely undo the notion that nobody asked for them; that appears to just be something a small fraction of an already vocal minority says because they don't like the way the industry is going.

We all know what their end goal is... Streaming! They want Gamers to Stream their games from the Cloud and have them pay a Subscription for it, that's all. PC Hardware is more and more expensive each generation and sooner or later most people won't be able to afford it. Therefore Games will be optimized for Cloud Gaming and that's it.

Regarding RT/PT, we did not "ask for them" because we all knew it was too demanding. Look at PT games (Cyberpunk 2077, Alan Wake 2, Indiana Jones, Black Myth: Wukong, etc.) if you try to run them at Native 4K on a 5090 (which is $2000 at MSRP) you get ~30fps... You need DLSS Performance (1080p) to get a good framerate and then use FG to get more fluidity, but it adds Latency and Artifacts... then there's MFG but it's even worse since there are even more artifacts. So yes we all want RT/PT games because it looks amazing, but the performance cost is too big as of now, that's the problem.
 
I don't think I would want annual GPU releases with at least 30% uplift; talk about fast obsolescence. Was it really that quick before?

It was, but I agree with you, it's a double-edged sword at best. Developers have ended up targeting whatever the particular mainstream card is and calling it a day. So you end up having to upgrade every few years just to stand still. From a consumer's point of view it would be better if GPUs stopped getting quicker, and you just replaced them when they wear out, like you do your washing machine or fridge. We've reached the point where the existing hardware is good enough to produce incredible games and graphics already. In fact we passed that point many years ago, which is why the only thing left to do was shoot for ever higher resolutions (largely pointless in my view, going beyond 1080p or especially 1440p quickly falls into diminishing returns).

We all know what their end goal is... Streaming! They want Gamers to Stream their games from the Cloud and have them pay a Subscription for it, that's all. PC Hardware is more and more expensive each generation and sooner or later most people won't be able to afford it. Therefore Games will be optimized for Cloud Gaming and that's it.

Regarding RT/PT, we did not "ask for them" because we all knew it was too demanding. Look at PT games (Cyberpunk 2077, Alan Wake 2, Indiana Jones, Black Myth: Wukong, etc.) if you try to run them at Native 4K on a 5090 (which is $2000 at MSRP) you get ~30fps... You need DLSS Performance (1080p) to get a good framerate and then use FG to get more fluidity, but it adds Latency and Artifacts... then there's MFG but it's even worse since there are even more artifacts. So yes we all want RT/PT games because it looks amazing, but the performance cost is too big as of now, that's the problem.

I'm a luddite or naysayer where RT/PT is concerned, I don't think we need it, I think it benefits developers more than end users. Many of our best looking games produce incredible worlds and lighting without those features, and aren't nearly as demanding on the hardware. nvidia took us down this road to sell ever more expensive GPUs and convinced people it was the next big thing, like 3D televisions, or 4K for that matter - unnecessary and wasteful on resources.
 
I'm a luddite or naysayer where RT/PT is concerned, I don't think we need it, I think it benefits developers more than end users. Many of our best looking games produce incredible worlds and lighting without those features, and aren't nearly as demanding on the hardware. nvidia took us down this road to sell ever more expensive GPUs and convinced people it was the next big thing, like 3D televisions, or 4K for that matter - unnecessary and wasteful on resources.

Yeah, some games like these look and run great even without RT:
The Last of Us Part II
Horizon Forbidden West
God of War: Ragnarök
Red Dead Redemption 2
Kingdom Come: Deliverance II
Forza Horizon 5
Microsoft Flight Simulator 2024

And:
Cyberpunk 2077 (looks great even without any RT/PT)
Metro Exodus: Enhanced Edition (runs great on most PCs even if it has RT by default)


But Nvidia, as well as Epic Games with their UE5 engine and Unity with their engine, are all pushing for games with RT and even Path Tracing, so the industry is going that way too...
 
This isn't really shocking, Nvidia is looking at the big picture with the most profits, and that isn't gamers

They also know the market very well, and they see what people are willing to pay, so, they take advantage of this

the only real solution is for people to stop spending so much on vid cards, and the prices will eventually come down
 
You keep saying it's too expensive without any facts about what the product costs to produce, let alone the value provided to the customer.
It might have been in that very video that HUB also go into the costs of manufacturing over the years and conclude that while Nvidia are making more per card, costs have also significantly increased, especially for the chips themselves, and that it's all fairly reasonable. Just looking at a few cherry-picked charts and product costs and going "SEEEE!!" is disingenuous.
 
Where is the cost to design, engineer, produce and sell?

You keep saying it's too expensive without any facts about what the product costs to produce, let alone the value provided to the customer.

GPUs definitely cost more to manufacture today than before, but when you look at the specs of GPUs of the same tier, most of them have a lot fewer CUDA Cores and smaller memory buses compared to GPUs back then, which had the full chip enabled... Also, Nvidia did NOT use TSMC 3nm but pretty much the same 4nm node that Lovelace is using, hence the poor performance increase and almost the same efficiency too. Blackwell doesn't even have better IPC (in Raster, RT or PT); it's the same as Lovelace, it's just a Refresh. They just rely on AI to do the heavy lifting for them.

Nvidia were already making huge margins back then and now they're extremely high; no company except maybe Apple has such high margins. So yes, they could definitely afford to lower their prices.
 
GPUs definitely cost more to manufacture today than before, but when you look at the specs of GPUs of the same tier, most of them have a lot fewer CUDA Cores and smaller memory buses compared to GPUs back then, which had the full chip enabled... Also, Nvidia did NOT use TSMC 3nm but pretty much the same 4nm node that Lovelace is using, hence the poor performance increase and almost the same efficiency too. Blackwell doesn't even have better IPC (in Raster, RT or PT); it's the same as Lovelace, it's just a Refresh. They just rely on AI to do the heavy lifting for them.

Nvidia were already making huge margins back then and now they're extremely high; no company except maybe Apple has such high margins. So yes, they could definitely afford to lower their prices.

End of the day, the x80 class has stayed relatively the same since the GTX 600 line if we factor in inflation and "MSRP". (That's ignoring current market prices for AIB vendors.)

Massive generational gains are pretty much done with, but it's not like they're pushing smaller dies; it's just that the focus shifted to RT and AI. Pre-RTX peaked out at 20 SM for the same segment of x80 cards.

VRAM stagnation is real, I agree. It's intentional, to upsell and make the lineup more variable, as price/performance used to favor the 60/70 class in most situations. Now it's ironically the "$750" 5070 Ti for this generation, which lines up closer to a legacy 70 card than anything else, with 83% of the die enabled.

Die sizes haven't really changed; it's just that the SM/CUDA count favors a flagship 600mm2+ config.

AMD could only fit 64 CU in a 357mm2 config via Navi 48, but there's probably more room with regard to the RT config here; the design is denser than GB203.
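On the density point, a rough check using the published transistor counts and die areas (figures as I remember them from the spec pages, so worth double-checking):

```python
# Transistor density in MTr/mm^2 from published transistor counts and die areas (approximate)
dies = {
    "Navi 48 (9070 XT)":    (53.9e9, 357),
    "GB203 (5070 Ti/5080)": (45.6e9, 378),
}
for name, (transistors, area_mm2) in dies.items():
    print(f"{name}: {transistors / 1e6 / area_mm2:.0f} MTr/mm^2")
# Navi 48 lands around 151 MTr/mm^2 vs ~121 for GB203.
```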
 
End of the day, the x80 class has stayed relatively the same since the GTX 600 line if we factor in inflation and "MSRP". (That's ignoring current market prices for AIB vendors.)

Massive generational gains are pretty much done with, but it's not like they're pushing smaller dies; it's just that the focus shifted to RT and AI. Pre-RTX peaked out at 20 SM for the same segment of x80 cards.

VRAM stagnation is real, I agree. It's intentional, to upsell and make the lineup more variable, as price/performance used to favor the 60/70 class in most situations. Now it's ironically the "$750" 5070 Ti for this generation, which lines up closer to a legacy 70 card than anything else, with 83% of the die enabled.

Die sizes haven't really changed; it's just that the SM/CUDA count favors a flagship 600mm2+ config.

AMD could only fit 64 CU in a 357mm2 config via Navi 48, but there's probably more room with regard to the RT config here; the design is denser than GB203.
They packed 20 SM but the cores were a lot bigger... look at the 512 CUDA cores of the GTX 580 and you'll see what I mean lol. Process nodes, packaging, software and optimization of the whole thing have improved a lot too! Games are a lot more demanding than they used to be, and with UE5 and Path Tracing we can barely run some games properly! Gamers are probably the most demanding people, but they are also the reason why technology has evolved so quickly and why Nvidia are ahead of the competition too.

VRAM capacity has been an issue for a while and even 12GB is not great either, 16GB is the minimum for future proofing your GPU (at least for a few years). Next-Gen consoles will probably have 24GB VRAM (if not 32GB) so Next-Gen GPUs will probably need 32GB+ to run games at 4K Ultra settings etc. on PC. GDDR7 will probably need to have 4GB chips by then.

AMD RDNA 4 is a great architecture, and I believe UDNA (ex RDNA 5) will bring a good performance bump again (Raster + RT/PT). Let's hope UDNA will also include High-End GPUs to compete with Nvidia x90 GPUs too.
 
Repeating this over and over again may make it believable to you, but it does not necessarily make that true.
My apologies, it's a 28.2% cut of the flagship instead of a 27.8% cut.
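For anyone wondering where those percentages come from, it's presumably the shader count relative to each generation's flagship SKU; a quick sanity check, assuming 6,144 CUDA cores for the 5070, 21,760 for the 5090, 1,280 for the 1650 Super and 4,608 for the full TU102 (Titan RTX):

```python
# Shader count as a share of the flagship SKU (core counts from spec pages, worth double-checking)
rtx_5070_share  = 6144 / 21760   # vs RTX 5090 (cut GB202)   -> ~28.2%
gtx_1650s_share = 1280 / 4608    # vs Titan RTX (full TU102) -> ~27.8%

print(f"RTX 5070: {rtx_5070_share:.1%} of the flagship's shaders")
print(f"GTX 1650 Super: {gtx_1650s_share:.1%} of the flagship's shaders")
```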
 
I don't think I would want annual GPU releases with at least 30% uplift; talk about fast obsolescence. Was it really that quick before?

On the subject of the 16 gig 5060 Ti vs the limited-VRAM 5070, the former is the better buy in my opinion. I got a 3080 FE 10G, which I initially thought was a good deal considering market conditions, but oh man, that VRAM was so crippling.
From GTX 900 all the way to RTX 30 series, each generation's 60-tier card was a little bit slower than the previous 80-tier card. The 2060 was even an outlier by beating out the 1080 in some tests.
Then the 4060 was barely faster than the 3060 Ti and it looks like history's about to repeat itself.

Nvidia has been selling smaller and smaller GPUs in each product tier except its top SKU for around the same prices after accounting for inflation. That's why you still see similar performance improvements per generation at the top end (5090 vs 4090, 4090 vs 3090/Ti etc) but further down it really doesn't pan out (5070 vs 4080, 4060 vs 3060 Ti etc).
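To put rough numbers on that, here's the 60-class card's shader count as a share of its generation's flagship (counts are from memory/spec pages, and the 5060 Ti figure is the rumored config, so treat this as approximate):

```python
# 60-class shader count vs the generation's flagship (approximate, partly rumored figures)
tiers = {
    "3060 Ti vs 3090": (4864, 10496),
    "4060 Ti vs 4090": (4352, 16384),
    "5060 Ti vs 5090": (4608, 21760),   # 5060 Ti count is the rumored full-chip config
}
for name, (card, flagship) in tiers.items():
    print(f"{name}: {card / flagship:.0%} of the flagship's shaders")
# Roughly 46% -> 27% -> 21%: the same tier name keeps buying a smaller slice of the top chip.
```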
 
My apologies, it's a 28.2% cut of the flagship instead of a 27.8% cut.

This... is meaningless. It still does not justify your claim the 5070 is worse than a 1650 Super on the stack overall, and that it should cost below $150.

It's just idealism - I've called out the "shrinkflation" for years at this point; the unfortunate reality is that they don't have enough perfect wafers to satisfy demand throughout.
 
It was, but I agree with you, it's a double-edged sword at best. Developers have ended up targeting whatever the particular mainstream card is and calling it a day. So you end up having to upgrade every few years just to stand still. From a consumer's point of view it would be better if GPUs stopped getting quicker, and you just replaced them when they wear out, like you do your washing machine or fridge. We've reached the point where the existing hardware is good enough to produce incredible games and graphics already. In fact we passed that point many years ago, which is why the only thing left to do was shoot for ever higher resolutions (largely pointless in my view, going beyond 1080p or especially 1440p quickly falls into diminishing returns).



I'm a luddite or naysayer where RT/PT is concerned, I don't think we need it, I think it benefits developers more than end users. Many of our best looking games produce incredible worlds and lighting without those features, and aren't nearly as demanding on the hardware. nvidia took us down this road to sell ever more expensive GPUs and convinced people it was the next big thing, like 3D televisions, or 4K for that matter - unnecessary and wasteful on resources.
Just to reply saying in 100% agreement, with both comments, its rare I find someone who is totally in line with my own views.
 
This... is meaningless. It still does not justify your claim the 5070 is worse than a 1650 Super on the stack overall, and that it should cost below $150.

It's just idealism - I've called out the "shrinkflation" for years at this point; the unfortunate reality is that they don't have enough perfect wafers to satisfy demand throughout.
I didn't say it's worse, I said it's the same tier - which it is. I know there's no way they'd ever price it at $150, but there's also no way anyone should buy it at $550.
As for yields, I'm willing to bet they've just put most of their eggs in the enterprise basket and GeForce just gets a footnote of "oh yeah, we should make some of those too". Yes, the chips all come from the same wafers, but GeForce is probably down to a single-digit percentage of their revenue at this point, and having all of it get scalped means higher margins for them anyway, so why bother making a good amount of them when the enterprise stuff has even bigger margins? Nvidia is "no longer a graphics company", after all...
 
From GTX 900 all the way to RTX 30 series, each generation's 60-tier card was a little bit slower than the previous 80-tier card. The 2060 was even an outlier by beating out the 1080 in some tests.
Then the 4060 was barely faster than the 3060 Ti and it looks like history's about to repeat itself.

Nvidia has been selling smaller and smaller GPUs in each product tier except its top SKU for around the same prices after accounting for inflation. That's why you still see similar performance improvements per generation at the top end (5090 vs 4090, 4090 vs 3090/Ti etc) but further down it really doesn't pan out (5070 vs 4080, 4060 vs 3060 Ti etc).

That's what I've been saying for days but some people prefer to live in denial... Maybe they have some Nvidia stock or something lol :confused:

It was, but I agree with you, it's a double-edged sword at best. Developers have ended up targeting whatever the particular mainstream card is and calling it a day. So you end up having to upgrade every few years just to stand still. From a consumer's point of view it would be better if GPUs stopped getting quicker, and you just replaced them when they wear out, like you do your washing machine or fridge. We've reached the point where the existing hardware is good enough to produce incredible games and graphics already. In fact we passed that point many years ago, which is why the only thing left to do was shoot for ever higher resolutions (largely pointless in my view, going beyond 1080p or especially 1440p quickly falls into diminishing returns).
GPUs, and computer hardware as a whole, have come a very long way for sure! The real problem these days is optimization. Most video game studios don't even optimize their games; they release them in a broken state with crashes, stutters, lots of bugs, etc. Gone are the days when most studios would take their time to polish their games instead of rushing them out as a beta and then taking people's money to fix them later. I wouldn't mind investing a bit of money to finance a game like Star Citizen does, i.e. labeled as a beta but still playable as an early-access version (even though the game should have been released years ago; at this point it will probably land on next-gen consoles).
With current hardware we could already do amazing things if games were really optimized on PC... And the next generation of consoles will probably have a 12c/24t Zen 6 CPU with 3D V-Cache, 24GB (or maybe 32GB) of GDDR7 and maybe the equivalent of a 7900 XTX but on the UDNA (ex RDNA 5) architecture!
 
Might be of interest when comparing this to a lower VRAM card that is a higher SKU. If you want a 5060ti, deffo get the 16 gig version.


Of interest: the software monitoring didn't detect the delayed frames (a common issue when VRAM-bottlenecked), low-quality textures, or random triangle symbols. He put in a weaker card with more VRAM and it was far more playable.
 
They packed 20 SM but the cores were a lot bigger... look at the 512 CUDA cores of the GTX 580 and you'll see what I mean lol. Process nodes, packaging, software and optimization of the whole thing have improved a lot too! Games are a lot more demanding than they used to be, and with UE5 and Path Tracing we can barely run some games properly! Gamers are probably the most demanding people, but they are also the reason why technology has evolved so quickly and why Nvidia are ahead of the competition too.

VRAM capacity has been an issue for a while and even 12GB is not great either, 16GB is the minimum for future proofing your GPU (at least for a few years). Next-Gen consoles will probably have 24GB VRAM (if not 32GB) so Next-Gen GPUs will probably need 32GB+ to run games at 4K Ultra settings etc. on PC. GDDR7 will probably need to have 4GB chips by then.

AMD RDNA 4 is a great architecture, and I believe UDNA (ex RDNA 5) will bring a good performance bump again (Raster + RT/PT). Let's hope UDNA will also include High-End GPUs to compete with Nvidia x90 GPUs too.

I mean, it's obviously a larger node, but the die size is still representative of generational segmentation. The GTX 580 was a flagship full-die 40nm GF110 @ 520mm2... (Fermi wasn't that great).

The generation following that moved down to a full-die 28nm GK104 @ 294mm2. The SM count got halved, but there were actual architectural improvements, given that 8 SMX on the 680 outclassed 16 SM on the 580. These huge gaps in uplift don't really exist anymore.

I agree with you if this is your main point.

---

The argument with AMD is simply that they're still behind NVIDIA at the same class of die. 357mm2 is close to 378mm2 (64 vs 84 CU/SM). Big Navi (the GB202 competitor) was obviously canceled.

*I would still favor the 9070 XT if MSRPs actually existed*

Nvidia has been selling smaller and smaller GPUs in each product tier except its top SKU for around the same prices after accounting for inflation. That's why you still see similar performance improvements per generation at the top end (5090 vs 4090, 4090 vs 3090/Ti etc) but further down it really doesn't pan out (5070 vs 4080, 4060 vs 3060 Ti etc).

Except they're not. 20 series is a big exception. Whole generation was overly large. An outlier if you will.

Compare x80 going back to GTX680 and you'll see what I mean.
 
We all know what their end goal is... Streaming! They want Gamers to Stream their games from the Cloud and have them pay a Subscription for it, that's all.
Never gonna happen. How many times has this been tried, twice? People are too fucking broke for this shit, I feel like NVIDIA can only keep it afloat because they literally give themselves their own GPUs.
In fact we passed that point many years ago, which is why the only thing left to do was shoot for ever higher resolutions (largely pointless in my view, going beyond 1080p or especially 1440p quickly falls into diminishing returns).
Nah. Increasing PPI is extremely noticeable. Past 300 is when returns diminish, so 8K-class monitors are about where it ends. Definitely NOT pointless.
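For reference, a quick PPI check (assuming a 27-inch panel; swap in whatever size you actually use):

```python
# Pixel density for a given resolution and panel diagonal
from math import hypot

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    return hypot(width_px, height_px) / diagonal_in

for name, (w, h) in {"1440p": (2560, 1440), "4K": (3840, 2160), "8K": (7680, 4320)}.items():
    print(f'27" {name}: {ppi(w, h, 27):.0f} PPI')
# Roughly 109, 163 and 326 PPI: only the 8K-class panel clears the ~300 PPI mark at 27 inches.
```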
I'm a luddite or naysayer where RT/PT is concerned, I don't think we need it, I think it benefits developers more than end users.
I'm happy PT is in consumer GPUs. Renders take half the time and it actually does look awesome.
This... is meaningless. It still does not justify your claim the 5070 is worse than a 1650 Super on the stack overall, and that it should cost below $150.

It's just idealism - I've called out the "shrinkflation" for years at this point; the unfortunate reality is that they don't have enough perfect wafers to satisfy demand throughout.
I really wonder how a new arch on 28nm would fare, or whatever the best cost/performance planar process is now. Would be something if there was a real reason for product segmentation instead of "no VRAM for you".
 
I really wonder how a new arch on 28nm would fare, or whatever the best cost/performance planar process is now. Would be something if there was a real reason for product segmentation instead of "no VRAM for you".


GM200 was the last 28nm flagship @ 600mm2. Only had room for 24 SM. 250W TDP via Titan X. Don't think it would be too smart to backtrack that far on die config.

The base AD107 (4060) @ 159mm2 from last gen had 24 SM and it's like 2x stronger just from architectural improvement. 6.691 TFLOPS vs 15 TFLOPS.
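Those TFLOPS numbers fall straight out of shader count × 2 (FMA) × boost clock; a quick sketch, assuming boost clocks of roughly 1.09 GHz for the Titan X (GM200) and 2.46 GHz for the 4060 (AD107):

```python
# Peak FP32 = shaders * 2 (FMA ops per clock) * boost clock; clocks are approximate
def fp32_tflops(shaders: int, boost_ghz: float) -> float:
    return shaders * 2 * boost_ghz / 1000

titan_x  = fp32_tflops(3072, 1.089)  # GM200, 28nm -> ~6.7 TFLOPS
rtx_4060 = fp32_tflops(3072, 2.46)   # AD107, 4nm  -> ~15.1 TFLOPS, same shader count

print(f"Titan X (GM200): {titan_x:.1f} TFLOPS vs RTX 4060 (AD107): {rtx_4060:.1f} TFLOPS")
```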

Might be more logical to go with a different fab (Intel/Samsung) and work out a pricing deal for lower end stuff. They actually did this for the 1050 TI IIRC (Samsung).

As for VRAM, the 5060/Ti and 5070 will inevitably get 3GB modules to replace the current 2GB ones. The base 9070 really cuts into the 5070 in every aspect, at least per "MSRP": more CUs (56 vs 48 SM), 4GB more VRAM, etc.
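The capacity math is just the number of 32-bit memory channels times the density per module; a quick sketch using the current 192-bit 5070 and 128-bit 5060 Ti bus widths (the 16 GB 5060 Ti runs its chips in clamshell):

```python
# VRAM capacity = (bus width / 32) memory chips * GB per chip; clamshell doubles the chip count
def vram_gb(bus_width_bits: int, gb_per_chip: int, clamshell: bool = False) -> int:
    chips = bus_width_bits // 32 * (2 if clamshell else 1)
    return chips * gb_per_chip

print("5070, 192-bit:", vram_gb(192, 2), "GB today ->", vram_gb(192, 3), "GB with 3GB modules")
print("5060 Ti 16GB, 128-bit clamshell:", vram_gb(128, 2, True), "GB today ->", vram_gb(128, 3, True), "GB with 3GB modules")
```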

---

AMD and NVIDIA are actually really close in SM/CU count ATM, and it's pretty damn linear for the FP32 TFLOP metric. I would say NVIDIA still has leverage given they can cram 84 SM into a similar-sized die (GB203 vs Navi 48).

The 9070 XT (64 CU) is better than the 5070 Ti in raw FP32, but the 5070 Ti (70/84 SM, 83% of GB203) still edges it out in games. Could be an AMD fine wine situation where drivers inevitably improve things. Who knows.
 