
NVIDIA GeForce RTX 5060 Ti 16 GB SKU Likely Launching at $499, According to Supply Chain Leak

I could say many things about your arguments, even this so-called argument that you post here. But I think I will really feel stupid if I continue this conversation with someone like you.
You're creating your own little imaginary world that makes no sense, with consumers that don't exist, to feed your ego and the narrative that "you know better", and when your flawed logic is cornered, those are all the arguments you've got.

:kookoo:
 
GPUs are a lot more expensive than they used to be when based on a Performance/Price ratio at a given time (given the year and performance available at that time). Look at the graphs I just posted...

I already addressed this multiple times. The graphs are skewed to the current flagships.

The 5080 (GB203) as a full die (378mm2) is one of the better x80 tier specs based on legacy metrics.. I provided all the information above.

The actual improvement per generation at the same die size has been significantly lower once you factor die size against SM config. That makes the pricing look worse than it actually is, especially once inflation is factored in.

Now.. if we're talking about lower-end cards.. yeah, the x60 class went back to a sub-200 mm² die after bouncing around in the upper 200 mm² range for two generations.

6GB 1060 = 200 mm². TSMC 16. Full die GP106. 10/10 SM
---
6GB 1660 Ti = 284 mm². TSMC 12 (overly large generation). Full die TU116. 24/24 SM
6GB 2060 = 445 mm². TSMC 12 (overly large generation). Cut-down TU106. 30/36 SM. 83% of die.

1660 and 2060 were technically one generation.. just released side by side due to people not taking well to RTX.. lol
---
3060 = 276 mm². Samsung 8. Cut-down GA106, 28/30 SM. Kind of an oddball card. I think NVIDIA panic-released this one due to the 12GB (2 GB IC) spec... They couldn't release a 6GB card with the looming console threat.

NVIDIA ended up canceling 16GB 3070 since crypto miners bought out every card regardless..

Both the 20 and 30 series were a significant divergence in their own ways, as the die size was closer to a legacy x70/x80 than a legacy x60.
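To put the "die size vs SM config" point in rough numbers, here's a quick pass over just the figures above (illustrative only; SMs are not identical across architectures):

```python
# Rough "SM per die area" comparison using only the figures listed above.
# Illustrative only: SMs changed between architectures (64 vs 128 FP32 lanes,
# added RT/Tensor hardware), so this is not apples-to-apples.
cards = [
    # (name, die_mm2, enabled_sm, total_sm)
    ("GTX 1060 6GB (GP106)", 200, 10, 10),
    ("GTX 1660 Ti (TU116)",  284, 24, 24),
    ("RTX 2060 (TU106 cut)", 445, 30, 36),
    ("RTX 3060 (GA106 cut)", 276, 28, 30),
]
for name, mm2, enabled, total in cards:
    print(f"{name:22s} {mm2:3d} mm²  {enabled}/{total} SM enabled  "
          f"{mm2 / enabled:.1f} mm² per enabled SM")
```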

Again, the actual performance gain per generation is just smaller. The only way to improve is to scale up physical size/SM config and push the power envelope further, hence why a 5080 is pushing closer to 400 W. EE design is a factor too.. PCB layers, VRM, etc., plus the PCIe 5.0 spec. This costs money.

The generational improvement at the same die size has more or less stagnated. I won't defend the current x60 releases. They seem excessively overpriced, especially since a console offers a 36-60 CU AMD based APU with everything built in for 500-700 MSRP.

My argument was targeted at the 5080 specifically. It's not too much different than legacy pricing all things considered if buying "MSRP" via FE model. The lower end is obviously shafted.
 

If a $1000 MSRP for a 5080 using 1/2 of the full die is okay to you when compared to a GTX 1080 that had 71% of the full die but sold for only $600... then I don't know what you need, man.


I already explained that GPU die size on RTX GPUs is irrelevant, since:

1) GTX GPUs did NOT have RT & Tensor Cores!!!

2) Since Ampere, half (50%) of the CUDA cores can do either FP32 or INT32 (and on Blackwell ALL CUDA cores can do one or the other), whereas on GTX GPUs the CUDA core count referred ONLY to FP32 cores (aka real performance)!
The problem with RTX 30/40/50 is that ~1/3 of all their CUDA cores end up being used as INT32 cores to run games! (NVIDIA confirmed as much when they launched Ampere, hence why the performance increase was far from 2x vs Turing in gaming, and why TFLOPS numbers no longer mean anything.)
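A quick sketch of what that means for "paper" specs (the ~1/3 INT32 share is the ballpark from the point above, not a fixed constant):

```python
# Why the advertised CUDA core count overstates gaming FP32 throughput on
# RTX 30/40/50: part of the shared lanes end up doing INT32 work in games.
# The 1/3 share below is a ballpark assumption, not a measured constant.
def usable_fp32_cores(advertised_cores: int, int32_share: float = 1 / 3) -> float:
    """Cores effectively left doing FP32 once INT32 work occupies shared lanes."""
    return advertised_cores * (1.0 - int32_share)

# Example: RTX 3080 advertises 8704 CUDA cores.
print(f"~{usable_fp32_cores(8704):.0f} of 8704 cores effectively on FP32")
```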
 

You're skewing again.

The 1660 Ti was designed without RT cores, but increased physical size to gain SM leverage relative to the GTX 1060 it technically replaced: GP106 @ 200 mm² → TU116 @ 284 mm², 10 SM → 24 SM, and TSMC 16 nm → 12 nm respectively.

The GTX 1080 was a 314 mm² full-die GP104 with 20/20 SM enabled. You're talking relative to the flagship GP102 design, which has no real relevance when it comes to yield and dies per wafer. This was also 9 years ago.. Inflation exists. $600 then ≈ $800 today.

My point is.. NVIDIA wants to make money. They could make a 1000mm2 die with lower yield and you would still find a reason to complain because you're viewing this whole ordeal completely backwards. Imagine a GB201 with 30k CUDA. Is 5070 a 5030 now? Lmao.

  1. A bigger die costs more money.
  2. Inflation is real.
  3. TSMC is charging an arm and a leg for 5nm 4N wafers relative to previous fabrications. It's offsetting the yield benefits of going smaller.
  4. VRM design is innately 2-3x more expensive, along with PCB layer counts increasing for PCIe 4.0/5.0 signaling. A lot more cards are using SMT EE too.
Tech tube is making people stupid.
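Rough back-of-envelope on points 1 and 3 (the wafer price and defect density below are stand-in assumptions for illustration, not TSMC's actual figures):

```python
import math

# Bigger dies give fewer candidates per wafer AND lower yield per candidate.
WAFER_DIAMETER_MM = 300
WAFER_PRICE_USD   = 17_000   # assumed ballpark for a leading-edge wafer
DEFECT_DENSITY    = 0.07     # assumed defects per cm^2

def dies_per_wafer(die_area_mm2: float) -> float:
    """Standard approximation: usable wafer area minus edge loss."""
    r = WAFER_DIAMETER_MM / 2
    return (math.pi * r ** 2 / die_area_mm2
            - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(die_area_mm2: float) -> float:
    """Poisson yield model: larger dies catch more defects each."""
    yield_rate = math.exp(-(die_area_mm2 / 100) * DEFECT_DENSITY)
    return WAFER_PRICE_USD / (dies_per_wafer(die_area_mm2) * yield_rate)

for name, area in [("GB203-sized die (378 mm²)", 378),
                   ("hypothetical 1000 mm² die", 1000)]:
    print(f"{name}: ~{dies_per_wafer(area):.0f} candidates/wafer, "
          f"~${cost_per_good_die(area):.0f} per good die")
```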
 
3. Perhaps this might baffle you, but... absolutely no explanation is owed to anyone over what someone chooses to do with their money.
Nah man, don't you realise? They know better than everyone how their money should be spent :rolleyes:
And this is the reason, kids, why seething about “NV bad” and constructing elaborate arguments explaining WHY is completely pointless, a waste of time and will not turn the dial on the market at all. Thanks for coming to my TED Talk.
Sometimes I wonder when the crusade will stop, imagine putting this much wasted effort into trying to turn people off a company.

Personally, I'll wait till the product launches to properly make my mind up, and try my gosh darndest to resist having a go at people for what they end up choosing to buy, even if it's not what I'd buy.
 
My opinion? Lol...
Yes completely your opinion. Unless you can cite a credible source?

Yeah, I didn’t think you could.

I could say many things about your arguments, even this so-called argument that you post here. But I think I will really feel stupid if I continue this conversation with someone like you.
Does that mean you’ll stop trolling Nvidia threads?
 
It just seems like everything regarding Nvidia is more salt in the wound.
 
Nvidia have not improved their Rasterization
Not sure how many times you have to be told rasterization is obsolete.
 
Not sure how many times you have to be told rasterization is obsolete.
That isn't true. DLSS and the like have major issues, including latency. Even an old game like Control shows that different things are factors, like the amount of memory, which is also mixed in with the speed and bandwidth of the memory. DLSS and the like cannot fix bad frames. They just replicate bad frames.

The problem is Nvidia says one thing in public via their CEO and their spec sheets say otherwise.

For instance they talk about frame gen and DLSS and how it can make all these games so playable. Yet their engineers on their spec sheets literally say it is advised to not use DLSS unless you are already getting a minimum of 100-120fps in the game.

Bottom line is you have to have good to amazing performance before you should use any of the other features. Otherwise the results won't be good.
 
Yet their engineers on their spec sheets literally say it is advised to not use DLSS unless you are already getting a minimum of 100-120fps in the game.
Can you link me to this documentation?
 

You're the one skewing again and defending a company that loves to screw you more and more each generation... I've been gaming and using GPUs since the early 2000s, and yes, Nvidia has become a scummy company with misleading marketing and terrible practices.

Smaller process nodes mean more chips per wafer and more transistors on each of them. Don't you think Intel, TSMC, IBM and GloFo also spent a lot of money on their process nodes, and could also have terrible yields back then?!

Even with inflation the 5080 is a ripoff... 84 SM is literally the spec of a full GA102, aka the 3090 Ti (in 2022) and almost the 3090 (82 SM) in 2020, except on a Samsung 8nm node compared to a TSMC 4nm node (which was already the case for the RTX 40 series, even though Blackwell was originally supposed to be made on TSMC 3nm; since AMD canceled Navi 41 they changed their plans)... The 3090/Ti had 28.3B transistors whereas the 5080 has 45.6B, but the 3090/Ti also had a 384-bit bus and 24GB of VRAM, which the 5080 clearly doesn't, and the 5080 is only half a 5090 spec-wise... what a joke.
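For reference, the density math on those transistor counts, using the die sizes discussed elsewhere in the thread (approximate figures):

```python
# Transistor density from the counts quoted above and the die sizes discussed
# elsewhere in the thread (GA102 ~628 mm², GB203 ~378 mm²). Approximate.
for name, transistors, mm2 in [("GA102 (3090/3090 Ti, Samsung 8)", 28.3e9, 628),
                               ("GB203 (5080, TSMC 4N)",           45.6e9, 378)]:
    print(f"{name}: {transistors / mm2 / 1e6:.0f} MTr/mm²")
# Roughly 2.7x the transistor density per mm² between the two nodes.
```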

The GTX 1070/1080/1080 Ti had plenty of VRAM back then, but nowadays Nvidia barely gives the minimum possible. They added more L2 cache on the RTX 4060/Ti, pretending it was enough to bypass the 8GB limit, but they also cut the memory bus of their GPUs to 128-bit, etc. That's just wrong; more L2 cache can help, but it will not replace the lack of VRAM.

Think whatever you want man, I have a 4090 so I'm not an Nvidia hater, but between the huge MSRP increases, false advertising (VRAM vs L2 cache, crippled memory buses, FG/MFG being "free performance" while omitting the latency increase and artifacts/smearing/ghosting/etc.), selling GPUs with missing ROPs, melting connectors even 4 years later, 600W GPUs (when they used to be 250W back then), drivers bricking GPUs and being unstable (with some studios even recommending older drivers), etc. If you think Nvidia really care about you then you're delusional.

Not sure how many times you have to be told rasterization is obsolete.

No it's not; 99% of games are using rasterization for raw performance... RT/PT is the future, but that doesn't mean the rest is already dead. IPC increases, higher CUDA core counts, higher frequencies, larger caches, etc. are still a thing in 2025! We don't do that only for RT & PT lol.

Yes completely your opinion. Unless you can cite a credible source?

Yeah, I didn’t think you could.

Gamers Nexus are still legit and stating facts, and JayzTwoCents, Hardware Unboxed, der8auer, etc. have all said the same thing... If you decide not to believe them all, then fine, stay in denial. But don't complain if all RTX GPUs have poor value nowadays.
 
I’m not the one complaining, you are!

Rightfully so yes. Being a gamer since the 90s, I can tell you that this generation sucks real bad...
 
Rightfully so yes. Being a gamer since the 90s, I can tell you that this generation sucks real bad...
Being a gamer since the 70’s I think you need to get used to prices.
 
Being a gamer since the 70’s I think you need to get used to prices.

I have a 4090, so I don't mind paying a premium, but it had better be worth it... The 4090 was the best value in the whole RTX 40 lineup: it was $100 more than the 3090 but 60% to 2x more powerful, and more efficient too. The 5090, meanwhile, is 25% more expensive than the 4090 for 30-40% more performance and consumes almost 600W, which is not a great improvement after 2 years... The rest of the lineup is just awful and should be at least $150 less per SKU.
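Putting rough numbers on that value comparison (launch MSRPs, with the performance deltas as stated above; both approximate):

```python
# Relative performance-per-dollar of a new card vs its predecessor,
# using launch MSRPs and the performance gains quoted above.
def value_change(old_price: float, new_price: float, perf_gain: float) -> float:
    return (1 + perf_gain) / (new_price / old_price)

print(f"4090 vs 3090: {value_change(1499, 1599, 0.60):.2f}x perf/$ (at +60%)")
print(f"5090 vs 4090: {value_change(1599, 1999, 0.35):.2f}x perf/$ (at +35%)")
```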
 
The rest of the lineup is just awful and should be at least $150 less per SKU.

Facts not in evidence. Do you know the difference between fact and opinion?
 
[three attached charts]
Here are some facts...
 
Here are some facts...
While I agree that the disparity between the top and lower dies has increased starting with Ada, let's also recognise that, starting with Ada, the top chip increased in shader count by more than double the single largest increase of any gen listed here before it; and if you average the shader increases of those generations (similar to how HUB have shown averages), the increase is more than 3.5x relative to those.

Note that from the 30 series onward, where they doubled the advertised core number based on the increase in capability, I've used half that number.

[attached table of shader counts per generation]


They're absolutely pushing that top chip into bonkers-level halo/prosumer/titan/whateveryouwannacallit territory, but I'd say with the 4090 they also kinda earned it, less so with the 5090; so far that whole generation has suffered, largely from stagnant node advancement. Ampere, if built on TSMC 7nm like RDNA2, would have been a good chunk faster; instead it was held back by Samsung 8nm, and then the jump back to TSMC for Ada netted them an insane increase that they capitalised on. At the end of the day performance is what matters most. I find die size, shader count etc. discussions interesting, but the real letdown is when price to performance isn't getting any/much better; the rest just feels like such an academic thing to get hung up on.
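A rough re-creation of that table, using full "big chip" FP32 core counts from public spec listings (halved from Ampere onward as described; treat the exact figures as approximate):

```python
# Top-chip FP32 core counts per generation, halved from Ampere onward since
# the advertised count doubled when FP32/INT32 lanes became shared.
# Figures are from public spec listings; treat them as approximate.
top_chips = [
    ("GK110 (780 Ti era)",  2880, False),
    ("GM200 (980 Ti era)",  3072, False),
    ("GP102 (1080 Ti era)", 3840, False),
    ("TU102 (2080 Ti era)", 4608, False),
    ("GA102 (3090 era)",   10752, True),
    ("AD102 (4090 era)",   18432, True),
    ("GB202 (5090 era)",   24576, True),
]
prev = None
for name, cores, halve in top_chips:
    adj = cores // 2 if halve else cores
    gain = f"+{adj / prev - 1:.0%} vs previous" if prev else ""
    print(f"{name:22s} {adj:6d} adjusted cores  {gain}")
    prev = adj
```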
 
I don't understand. How is 5060 Ti supposed to perform similarly to the 4060 Ti when it has almost double the bandwidth?
 

I agree that it went up by a good amount, but CUDA core count ≠ performance, and that's the problem. The 4090 is only ~30% more powerful than the 4080 SUPER even though it has 60% more CUDA cores... so there's a lot of performance left on the table! And it's due to the crippled L2 cache (72MB out of 96MB on the 4090, and even the 5090 has 96MB out of 128MB). Also, the 4090 should have gotten 23 or 24 Gbps GDDR6X chips too (not 21), and the 5090 has 28 Gbps when the max available is 32 Gbps as of now!
Plus, Blackwell was originally designed to be made on TSMC 3nm, but they canceled that after Navi 41 got canceled... And the 5090 is sold at $2000 (mostly due to the higher core count, 512-bit bus and VRAM capacity), yet gen over gen the performance increase is not great, and it's also hugely power limited even though it's using almost 600W (which is crazy!).

Sure, die sizes since RTX are bigger and RT + Tensor cores are not free, but nobody asked for them; Nvidia forced them on us...
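Rough math on that scaling point (core counts from public specs; the ~30% gap as above):

```python
# How much of the extra hardware actually turns into performance,
# using the 4090 vs 4080 SUPER comparison above.
cores_4080s, cores_4090 = 10240, 16384   # public spec listings
perf_gain = 0.30                         # ~30% faster, as stated above
core_gain = cores_4090 / cores_4080s - 1          # +60% cores
scaling   = (1 + perf_gain) / (1 + core_gain)     # delivered perf per extra core
print(f"+{core_gain:.0%} cores -> +{perf_gain:.0%} perf (~{scaling:.0%} per-core scaling)")
```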
 
I don't understand. How is 5060 Ti supposed to perform similarly to the 4060 Ti when it has almost double the bandwidth?

If it does then it will show the 4060 Ti was never that badly hamstrung by its 128-bit bus after all. (It's 55% more btw, not double, but still a lot.)

Rightfully so yes. Being a gamer since the 90s, I can tell you that this generation sucks real bad...

I've been a gamer since the early 1980s. My first home gaming was on a Commodore 64, then an Amiga. In real terms those things cost a fortune compared to the GPUs of today, which have astronomical performance that would have been considered extreme science fiction back then. But nowadays it doesn't seem to matter how much things keep improving; someone will always say it isn't enough or it's too expensive. People are spoiled, that's the reality. Well, maybe not most people, who are happy to buy these cards and are delighted with them. But there is a small and very loud minority who will always complain.
 
I agree that it went up by a good amount... Sure, die sizes since RTX are bigger and RT + Tensor cores are not free, but nobody asked for them; Nvidia forced them on us...
They can only do so much now to increase performance, plus there are wafer costs and inflation. I mean, I don't know what to expect here but gradually increasing prices, diminishing returns, yield issues on large chips, and encroaching on absolute silicon limits.

As for RT and tensor cores, last I checked they account for under 10% of die size and disproportionately increase capability relative to that size. Consumers also rarely dictate innovation; most just want more/faster/better/cheaper of the same.

But lastly, I don't know why anyone says nobody asked for them. Gamers have been asking continually for leaps in rendering realism since the dawn of video games, and RT was the next big leap in lighting, which helps developers too. So if I said I asked for them, that's enough to completely undo the notion that nobody asked for them; that appears to just be something a small fraction of an already vocal minority says because they don't like the way the industry is going.
 
Optimizing GPUs by pairing faster-binned memory with narrower memory buses could reshape the market from the bottom up. This strategy boosts performance in lower-tier products, benefiting consumers with better accessibility and value while empowering AIBs to create simpler, cost-effective designs. For high-end GPUs, selectively integrating High Bandwidth Memory (HBM) beyond a certain performance threshold offers a pragmatic alternative to widening memory buses, which can be resource-intensive. HBM not only delivers unmatched bandwidth but also conserves the limited supply of faster-binned memory, ensuring scalability across the entire product stack. Together, these approaches drive innovation, address market challenges, and create a more balanced GPU ecosystem that caters to all user segments.
 

I'm not defending the company because I want to; I'm defending it because there's no logic here. The CUDA core translation is completely arbitrary, as I explained above.
Smaller process nodes mean more chips per wafer and more transistors on each of them. Don't you think Intel, TSMC, IBM and GloFo also spent a lot of money on their process nodes, and could also have terrible yields back then?!

TSMC 4N is much more expensive. Don't worry, AMD is probably paying more than NVIDIA is due to contracting 357 mm² Navi 48 dies, relative to the 378 mm² GB203 (the direct comparison on the NVIDIA side).

I'll agree with you that NVIDIA is overpriced per chip sold, but my argument was targeted at legacy SKUs. NVIDIA hasn't really changed their game plan since the GTX 600 series..

The GTX 680 was the first x80 card that diverged from the "flagship" die.

Even with inflation the 5080 is a ripoff... 84 SM is literally the spec of a full GA102, aka the 3090 Ti (in 2022) and almost the 3090 (82 SM) in 2020, except on a Samsung 8nm node compared to a TSMC 4nm node

Samsung 8 was significantly cheaper, and the GA102 was 628 mm².. This is a completely different tier of GPU and shows how much more advanced 4N is. NVIDIA was able to cram 60 more SM units into AD102 (the 4090 was cut down) while making it smaller than GA102: AD102 is 609 mm².

The current flagship dies/nodes scale better per die size vs SM count, but also draw 400-600 W as the trade-off.. The 5080 runs around 100 W lower than the 3090 Ti on average via its ~400 mm² GB203 die.

84 SM vs 84 SM.

5080 is clocked higher.

The 3090/Ti had 28.3B transistors whereas the 5080 has 45.6B, but the 3090/Ti also had a 384-bit bus and 24GB of VRAM, which the 5080 clearly doesn't, and the 5080 is only half a 5090 spec-wise... what a joke.
The bus width is irrelevant and has no direct impact on performance in this situation; bandwidth does. The 5080 is slightly lower (960 GB/s), but has a bigger cache pool, which will significantly offset this.
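The bandwidth math, for reference (memory speeds are the publicly listed ones):

```python
# Bandwidth = bus width (in bytes) x data rate per pin.
def bandwidth_gb_s(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

print(f"3090 Ti: {bandwidth_gb_s(384, 21):.0f} GB/s (384-bit GDDR6X @ 21 Gbps)")
print(f"5080:    {bandwidth_gb_s(256, 30):.0f} GB/s (256-bit GDDR7  @ 30 Gbps)")
```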

24GB of VRAM is a byproduct of using double-stacked 1GB chips on each side of the PCB: 12+12 chips.

It's funny you mention this, since the 5080 can actually be bumped up to 3GB dies in a year or so... ending at the same 24GB spec (or 32GB with 4GB dies).

The plausible max spec "could be" 48 or 64GB if they double-stack a PCB, but it won't happen for obvious reasons.
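Capacity math behind those figures (a 256-bit bus means eight 32-bit channels, i.e. eight memory chips per side):

```python
# VRAM capacity options for a 256-bit card as memory chip densities change.
BUS_BITS = 256
chips_per_side = BUS_BITS // 32   # one chip per 32-bit channel
for chip_gb in (2, 3, 4):
    single = chips_per_side * chip_gb
    print(f"{chip_gb}GB chips: {single}GB single-sided, {single * 2}GB clamshell")
```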

The GTX 1070/1080/1080 Ti had plenty of VRAM back then, but nowadays Nvidia barely gives the minimum possible. They added more L2 cache on the RTX 4060/Ti, pretending it was enough to bypass the 8GB limit, but they also cut the memory bus of their GPUs to 128-bit, etc. That's just wrong; more L2 cache can help, but it will not replace the lack of VRAM.

The goal is to revise the product stack in a year or two once 3/4 GB dies enter mass production and get cheaper.

Nvidia has been stagnating VRAM since the 20 series.. This isn't new... There was a rumored 16GB 3070 but it got scrapped. Same with the 20GB 3080; they launched a 12GB 3080 instead.

I personally believe the 36 SM 5060 Ti will be a terrible value in both 8GB and 16GB configs. Pretty sure they want to transition this card to 12GB during a refresh.

600W GPUs (when they used to be 250W back then)
Yet you're complaining about a ~400 mm² GB203 5080 only having 84 SM?

I don't think you quite understand what you're arguing... especially using a flagship 128 (4090) / 144 (AD102) or 170 (5090) / 192 (GB202) SM die as some kind of reference for "getting screwed".

If you think Nvidia really care about you then you're delusional.
I think Nvidia is cancer incarnate. You're completely missing the point lol
 
I don't understand. How is 5060 Ti supposed to perform similarly to the 4060 Ti when it has almost double the bandwidth?
Assuming that the provisional specs for the 5060 Ti end up being real, the configuration is basically the full AD106 chip but "ported" to Blackwell.
nVidia didn't give us a desktop GPU with the full chip; it would've been a 4060 Ti Super.

Looking at the render config table, there is a ~6% increase in everything but ROPs and cache. The boost clock is increased by less than 2%. All of this would amount to about 7-8% more performance. However, the bandwidth then comes into play.
Personally I am convinced that the 4060 Ti was bottlenecked by the bandwidth but I don't know to what degree, if only slightly or perhaps severely.
The bandwidth is about 56% higher on the 5060 Ti, so whatever the bottleneck was on the 4060 Ti, it is completely eliminated now thanks to such a significant bump.

Regarding performance: on the 4060 Ti GPU specs page, where that card is taken as a baseline, we have three cards with about 15% more performance -> the RTX 3070 Ti, the RX 7700 XT and the RX 6800. The RTX 4070 has 29% more performance.
My most favorable guess is that the 5060 Ti is going to have roughly 15% more performance than the 4060 Ti and thus it will slot in halfway between that and the RTX 4070.
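Putting that guess into numbers (the ~6% and ~2% figures are from the spec comparison above; the bandwidth contribution is my own assumption):

```python
# Multiplying the individual uplift factors for a ballpark estimate.
unit_gain  = 0.06   # ~6% more shading units vs the 4060 Ti
clock_gain = 0.02   # <2% boost clock bump
bw_relief  = 0.06   # assumed gain from removing the bandwidth bottleneck
est = (1 + unit_gain) * (1 + clock_gain) * (1 + bw_relief) - 1
print(f"Estimated uplift over the 4060 Ti: ~{est:.0%}")
```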
 