Friday, April 4th 2025

NVIDIA GeForce RTX 5060 Ti 16 GB SKU Likely Launching at $499, According to Supply Chain Leak
NVIDIA's unannounced GeForce RTX 5060 Ti 16 GB and 8 GB models are reportedly due for an official unveiling midway through this month; previous reports have suggested an April 16 retail launch. First leaked late last year, the existence of lower-end "Blackwell" GPUs was "semi-officially" confirmed by system integrator specification sheets; two days ago, reportage pointed out another example. Inevitably, alleged launch pricing information has come to light as we close in on release time, courtesy of Board Channels, an inside-track den of some repute. The "Expert No. 1" account has alluded to fresh Team Green rumors; they reckon that the company's incoming new model pricing will be "relatively aggressive."
Supply chain whispers indicate that NVIDIA will repeat its (previous-gen) MSRP guide policies, due to the GeForce RTX 5060 Ti cards offering "estimated similar performance" to GeForce RTX 4060 Ti options. Speculative guide price points of $499 and $399 are anticipated—according to industry moles—for the GeForce RTX 5060 Ti 16 GB and RTX 5060 Ti 8 GB SKUs (respectively). Expert No. 1 has tracked recent GeForce RTX 4060 Ti price cuts; intimating the clearing out of old-gen stock. Team Green's GeForce RTX 5060 design is reportedly a more distant prospect—slated for arrival next month—so supply chain leakers have not yet picked up on pre-release MSRP info.
Sources:
Board Channels, VideoCardz, Notebookcheck
182 Comments on NVIDIA GeForce RTX 5060 Ti 16 GB SKU Likely Launching at $499, According to Supply Chain Leak
The 5080 (GB203) as a full die (378mm2) is one of the better x80 tier specs based on legacy metrics.. I provided all the information above.
The actual improvement per generation at the same die size has been significantly smaller once you weigh die size against SM config. It looks worse than it actually is, especially once you adjust for inflation.
Now.. if we're talking about lower-end cards.. yeah, the x60 class went back to a sub-200 mm² die after bouncing around in the upper 200s for two generations.
6 GB 1060 = 200 mm². TSMC 16. Full-die GP106. 10/10 SM
---
6 GB 1660 Ti = 284 mm². TSMC 12 (an overly large generation). Full-die TU116. 24/24 SM
6 GB 2060 = 445 mm². TSMC 12 (an overly large generation). Cut-down TU106. 30/36 SM, 83% of the die.
The 1660 and 2060 were technically one generation.. just released side by side because people didn't take well to RTX.. lol
---
3060 = 276 mm². Samsung 8. Cut-down GA106, 28/30 SM. Kind of an oddball card. I think NVIDIA panic-released this one due to the 12 GB (2 GB IC) spec... They couldn't release a 6 GB card with the looming console threat.
NVIDIA ended up canceling the 16 GB 3070 since crypto miners bought out every card regardless..
Both the 20 and 30 series were significant divergences in their own ways, as the die sizes were closer to a legacy x70/x80 than a legacy x60.
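To put the "same tier, shifting die sizes" point in numbers, here's a back-of-envelope sketch of die area per enabled SM for the x60-class cards listed above. The figures are the ones quoted in this thread, and note that an SM is not an architecture-stable unit (a Turing SM is not a Pascal SM), so this only illustrates the density trend, not performance:

```python
# Back-of-envelope: die area (mm^2) per enabled SM for the x60-class
# cards discussed above. Figures are the ones quoted in this thread.
cards = {
    "GTX 1060 6GB (GP106)":  (200, 10),   # (die area mm^2, enabled SMs)
    "GTX 1660 Ti (TU116)":   (284, 24),
    "RTX 2060 (TU106 cut)":  (445, 30),
    "RTX 3060 (GA106 cut)":  (276, 28),
}

for name, (area, sms) in cards.items():
    print(f"{name}: {area / sms:.1f} mm^2 per SM")
```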
Again, the actual performance gain per generation is just smaller. The only way to improve is to scale up physical size/SM config and push the power envelope further, hence why a 5080 is pushing closer to 400 W. EE design is a factor too: PCB layers, VRM, the PCIe 5.0 spec. This costs money.
The generational improvement at the same die size has more or less stagnated. I won't defend the current x60 releases; they seem excessively overpriced, especially since a console offers a 36-60 CU AMD-based APU with everything built in for $500-700 MSRP.
My argument was targeted at the 5080 specifically. It's not too much different than legacy pricing all things considered if buying "MSRP" via FE model. The lower end is obviously shafted.
I already explained that GPU die size on RTX GPUs is irrelevant, since:
1) GTX GPUs did NOT have RT and Tensor cores!
2) since Ampere, half (50%) of the CUDA cores can do FP32 or INT32 (and all CUDA cores on Blackwell can do one or the other), whereas on GTX GPUs the CUDA core count referred only to FP32 cores (aka real performance)!
The problem with RTX 30/40/50 is that ~1/3 of all their CUDA cores need to be used as INT32 cores to run games! (NVIDIA already confirmed this when they launched Ampere, hence why the gaming performance increase over Turing was far from the 2x the TFLOPS numbers suggested; TFLOPS numbers no longer mean anything.)
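As a rough illustration of that claim (the ~1/3 INT32 figure is the poster's number, not an official one, and the example core count is purely illustrative), the "usable" FP32 core count under that assumption can be sketched as:

```python
# Rough effective-FP32 estimate under the claim above: if a fraction of
# the advertised CUDA cores is busy issuing INT32 work in games, the
# usable FP32 count is lower than the headline number.
def effective_fp32_cores(advertised_cores: int, int32_fraction: float) -> float:
    """Cores left for FP32 when int32_fraction of them handle INT32."""
    return advertised_cores * (1.0 - int32_fraction)

# Example with the ~1/3 figure quoted in the post, applied to a
# 10,752-core Ampere-class part (illustrative numbers only):
print(effective_fp32_cores(10752, 1 / 3))  # ~7168 cores left for FP32
```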
1660 TI was designed without RT cores, but increased physical size to gain SM leverage relative to the GTX1060 it technically replaced. GP106 @ 200mm2 > TU116 @ 284mm2. 10 SM > 24SM and TSMC 16nm to 12nm respectively.
The GTX 1080 was a 314 mm² full-die GP104 with 20/20 SM enabled. You're comparing it to the flagship GP102 design, which has no real relevance when it comes to yield and dies per wafer. This was also 9 years ago; inflation exists. $600 then is roughly $800 today.
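The "$600 then is $800 today" figure is just CPI scaling. A minimal sketch, assuming roughly a 1.33x cumulative CPI ratio between 2016 and 2025 (an approximate, illustrative figure, not an official statistic):

```python
def inflation_adjust(price: float, cpi_then: float, cpi_now: float) -> float:
    """Scale a historical price by the ratio of CPI index values."""
    return price * cpi_now / cpi_then

# Illustrative: an assumed ~1.33x cumulative CPI ratio turns the
# GTX 1080's $599 launch price into roughly $800 in today's money.
print(round(inflation_adjust(599, 100.0, 133.0)))  # ~797
```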
My point is.. NVIDIA wants to make money. They could make a 1000 mm² die with lower yield and you would still find a reason to complain, because you're viewing this whole ordeal completely backwards. Imagine a GB201 with 30k CUDA cores. Is the 5070 a 5030 now? Lmao.
- A bigger die costs more money.
- Inflation is real.
- TSMC is charging an arm and a leg for 5 nm-class 4N wafers relative to previous nodes, offsetting the yield benefits of going smaller.
- VRM design is innately 2-3x more expensive, and PCB layer counts are increasing for PCIe 4.0/5.0 signaling. A lot more cards are using SMT components too.
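The "bigger die costs more" bullet follows directly from wafer geometry. A sketch using the standard first-order dies-per-wafer estimate (gross dies only; it ignores scribe lines, defect yield, and actual wafer pricing), applied to die sizes discussed in this thread:

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """First-order estimate: wafer area over die area, minus an edge-loss
    term for partial dies at the wafer's circumference."""
    r = wafer_diameter_mm / 2.0
    return int(math.pi * r * r / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

# Illustrative: a 300 mm wafer with die sizes mentioned in the thread
# (200 mm^2 GP106, 378 mm^2 GB203, 609 mm^2 AD102).
for area in (200, 378, 609):
    print(f"{area} mm^2 die -> ~{dies_per_wafer(300, area)} gross dies")
```

Halving the die area roughly doubles the gross die count, which is why a 600 mm²-class flagship die is priced in a different league from a sub-200 mm² x60 die.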
Tech tube is making people stupid. Personally, I'll wait till the product launches to properly make my mind up, and try my gosh darndest to resist having a go at people for what they end up choosing to buy, even if it's not what I'd buy.
Yeah, I didn’t think you could. Does that mean you’ll stop trolling Nvidia threads?
The problem is Nvidia says one thing in public via their CEO, and their spec sheets say otherwise.
For instance, they talk about frame gen and DLSS and how it can make all these games so playable, yet their engineers' spec sheets literally advise not using DLSS unless you are already getting a minimum of 100-120 FPS in the game.
The bottom line is you have to have good-to-amazing performance before you should use any of the other features; otherwise the results won't be good.
Smaller process nodes mean more chips per wafer and more transistors per chip. Don't you think that Intel, TSMC, IBM, and GloFo also spent a lot of money on their process nodes, and also had some terrible yields back then?!
Even with inflation the 5080 is a rip-off... 84 SM is literally the spec of a full GA102, aka the 3090 Ti (in 2022), and almost a 3090 (82 SM) in 2020, but on a Samsung 8 nm node compared to a TSMC 4 nm node (which was already the case for the RTX 40 series, even though Blackwell was originally supposed to be made on TSMC 3 nm; after AMD canceled Navi 41, they changed their plans)... The 3090/Ti had 28.3B transistors, whereas the 5080 has 45.6B, but the 3090/Ti also had a 384-bit bus and 24 GB of VRAM, which the 5080 clearly doesn't, and the 5080 is only half a 5090 spec-wise... what a joke.
The GTX 1070/1080/1080 Ti had plenty of VRAM for back then, but nowadays Nvidia is barely giving the minimum possible. They added more L2 cache on the RTX 4060/Ti, pretending it was enough to bypass the 8 GB limit, but they also cut the memory bus of those GPUs to 128-bit. That's just wrong; more L2 cache can help, but it will not replace the missing VRAM.
Think whatever you want, man. I have a 4090, so I'm not an Nvidia hater, but between the huge MSRP increases, false advertising (VRAM vs. L2 cache, crippled memory buses, FG/MFG sold as "free performance" while omitting the latency increase and artifacts/smearing/ghosting/etc.), GPUs sold with missing ROPs, connectors still melting four years later, 600 W GPUs (when they used to be 250 W back then), drivers bricking GPUs and being unstable (with some studios even recommending older drivers), etc., if you think Nvidia really cares about you then you're delusional.
No it's not: 99% of games use rasterization for raw performance... RT/PT is the future, but that doesn't mean the rest is already dead. IPC increases, higher CUDA core counts, higher frequencies, larger caches, etc. are still a thing in 2025! We don't do that only for RT and PT, lol. Gamers Nexus is still legit and stating facts, and JayzTwoCents, Hardware Unboxed, der8auer, etc. have all said the same thing... If you decide not to believe them all, then fine, stay in denial. But don't complain that all RTX GPUs are poor value nowadays.
Here are some facts...
Note that starting from the 30 series, where they doubled the core count based on the increase in capability, I've used half that number.
They're absolutely pushing that top chip into bonkers-level halo/prosumer/Titan whatever-you-wanna-call-it territory, but I'd say with the 4090 they also kinda earned it. Less so with the 5090, but so far that whole generation has suffered, largely from stagnant node advancement. Ampere, if built on TSMC 7 nm like RDNA2, would have been a good chunk faster; instead it was held back by Samsung 8 nm, and then the jump back to TSMC for Ada netted them an insane increase that they capitalized on. At the end of the day, performance is what matters most. I find die size, shader count, etc. discussions interesting, but the real letdown is when price-to-performance isn't getting any/much better; the rest just feels like such an academic thing to get hung up on.
Plus, Blackwell was originally designed to be made on TSMC 3 nm, but they canceled that after Navi 41 got canceled... The 5090 is sold at $2,000 (mostly due to the higher core count, 512-bit bus, and VRAM capacity), but gen-over-gen the performance is not great, and it's hugely power-limited even though it's drawing almost 600 W (which is crazy!).
Sure, die sizes since RTX are bigger, and RT and Tensor cores are not free, but nobody asked for them; Nvidia forced them on us...
As for RT and tensor cores, last I checked they account for under 10% of die size and disproportionately increase the capability relative to that size. Consumers also rarely dictate innovation, most just want more faster better cheaper of the same.
But lastly, I don't know why anyone says nobody asked for them, gamers have been asking continually for leaps in rendering realism since the dawn of video games, RT was the next big leap in lighting, which helps developers too. So if I said I asked for them, that's enough to completely undo the notion that nobody asked for them, that appears to just be something that a small fraction of an already vocal minority say because they don't like the way the industry is going.
I'll agree with you that NVIDIA is overpriced per chip sold, but my argument was targeted at legacy SKUs. NVIDIA hasn't really changed their game plan since the GTX 600 series..
The GTX 680 was the first x80 card that diverged from the "flagship" die. Samsung 8 was significantly cheaper, and GA102 was 628 mm².. This is a completely different tier of GPU and shows how much more advanced 4N is: NVIDIA was able to cram 60 more SM units into AD102 (the 4090 was cut down) while keeping it smaller than GA102. AD102 is 609 mm².
The current flagship dies/nodes scale better per die size vs. SM count, but also draw 400-600 W as the trade-off.. The 5080 runs around 100 W lower than the 3090 Ti on average via the 400 mm² GB203 die.
84 SM vs 84 SM.
The 5080 is clocked higher. The bus width by itself is irrelevant and has no direct impact on performance in this situation; bandwidth does. The 5080's is slightly lower (960 GB/s), but it has a bigger cache pool, which will significantly offset this.
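The bus-width vs. bandwidth point comes down to simple arithmetic: bandwidth is bus width times per-pin data rate, so a narrower bus with faster memory can land close to a wider, slower one. A quick check against the 960 GB/s figure quoted above, using the publicly listed per-pin rates (30 Gbps GDDR7 on the 5080, 21 Gbps GDDR6X on the 3090 Ti):

```python
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps_per_pin: float) -> float:
    """Memory bandwidth in GB/s = (bus width in bits / 8) * per-pin Gbps."""
    return bus_width_bits / 8 * data_rate_gbps_per_pin

# RTX 5080: 256-bit GDDR7 at 30 Gbps per pin -> the 960 GB/s quoted above.
print(bandwidth_gbs(256, 30))  # 960.0
# RTX 3090 Ti: 384-bit GDDR6X at 21 Gbps per pin.
print(bandwidth_gbs(384, 21))  # 1008.0
```

So the 384-bit card comes out only ~5% ahead despite the 50% wider bus, which is the sense in which bus width alone tells you little.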
24 GB of VRAM is a byproduct of double-stacking 1 GB chips on each side of the PCB: 12+12 chips.
It's funny you mention this, since the 5080 can actually be bumped up to 3 GB dies in a year or so, ending at the same 24 GB spec (or 32 GB with 4 GB dies).
The plausible max spec "could be" 48 or 64 GB if they double-stack the PCB, but it won't happen for obvious reasons. The goal is to revise the product stack in a year or two, once 3/4 GB dies reach mass production and get cheaper.
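The 24/32/48 GB figures above all fall out of the same chips-times-density arithmetic: each GDDR chip sits on a 32-bit slice of the bus, and clamshell mounting (chips on both sides of the PCB) doubles the chip count. A minimal sketch:

```python
def vram_capacity_gb(bus_width_bits: int, chip_density_gb: int,
                     clamshell: bool = False) -> int:
    """Each GDDR chip occupies a 32-bit slice of the memory bus;
    clamshell (chips on both PCB sides) doubles the chip count."""
    chips = bus_width_bits // 32
    if clamshell:
        chips *= 2
    return chips * chip_density_gb

# A 5080-style 256-bit bus:
print(vram_capacity_gb(256, 2))                  # 16 GB today (2 GB chips)
print(vram_capacity_gb(256, 3))                  # 24 GB with 3 GB chips
print(vram_capacity_gb(256, 4))                  # 32 GB with 4 GB chips
print(vram_capacity_gb(256, 3, clamshell=True))  # 48 GB double-stacked
# The 3090's 24 GB: 384-bit clamshell with 1 GB chips (12+12).
print(vram_capacity_gb(384, 1, clamshell=True))  # 24 GB
```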
Nvidia has been stagnating VRAM since the 20 series.. This isn't new... There was a rumored 16 GB 3070, but it got scrapped. Same with the 20 GB 3080; they launched a 12 GB 3080 instead.
I personally believe the 36 SM 5060 Ti will be a terrible value in both 8 and 16 GB configs. Pretty sure they want to transition this card to 12 GB during a refresh. Yet you're complaining about a 400 mm² GB203 5080 only having 84 SM?
I don't think you quite understand what you're arguing... especially using a flagship 128 (4090) / 144 (AD102) or 170 (5090) / 192 (GB202) SM die as some kind of reference for "getting screwed". I think Nvidia is cancer incarnate. You're completely missing the point lol
www.techpowerup.com/gpu-specs/geforce-rtx-5060-ti.c4246
Assuming that the provisional specs for the 5060 Ti end up being real, the configuration is basically the full AD106 chip but "ported" to Blackwell.
www.techpowerup.com/gpu-specs/nvidia-ad106.g1014
NVIDIA didn't give us a desktop GPU with the full chip; it would've been a 4060 Ti Super.
Looking at the render config table, there is a ~6% increase in everything but ROPs and cache, and the boost clock is increased by less than 2%. All of this would amount to about 7-8% more performance. However, the bandwidth then comes into play.
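The 7-8% figure is just the two scaling factors compounded, assuming performance scales roughly linearly with each (a simplification, since real games rarely scale perfectly with either units or clocks):

```python
# Compounding the scaling factors quoted above: ~6% more execution
# units and ~2% higher boost clock, assuming roughly linear scaling.
unit_scaling = 1.06
clock_scaling = 1.02
combined = unit_scaling * clock_scaling
print(f"{(combined - 1) * 100:.1f}% estimated uplift")  # ~8.1%
```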
Personally I am convinced that the 4060 Ti was bottlenecked by the bandwidth but I don't know to what degree, if only slightly or perhaps severely.
The bandwidth is about 56% higher on the 5060 Ti, so whatever the bottleneck was on the 4060 Ti it is completely eliminated now thanks to such a significant bump.
Regarding performance: on the 4060 Ti GPU specs page, where that card is taken as a baseline, we have three cards with about 15% more performance: the RTX 3070 Ti, the RX 7700 XT, and the RX 6800. The RTX 4070 has 29% more performance.
My most favorable guess is that the 5060 Ti is going to have roughly 15% more performance than the 4060 Ti and thus it will slot in halfway between that and the RTX 4070.