
NVIDIA GeForce RTX 4060, 4060 Ti & 4070 GPU Refreshes Spotted in Leak

T0@st

News Editor
NVIDIA completed its last round of GeForce RTX 40-series GPU refreshes at the very end of January, and new evidence suggests that another wave is scheduled for imminent release. MEGAsizeGPU has acquired and shared a tabulated list of new Ada Lovelace GPU variants; the trusted leaker's post presents a timetable that kicks off within the second half of this month. First up is the GeForce RTX 4070, with a current designation of AD104-251: the leaked table suggests that a new variant, AD103-175-KX, is due very soon (or is already overdue). Wccftech pointed out that the new ID was previously linked to NVIDIA's GeForce RTX 4070 SUPER SKU. Next up, moving into April, is the GeForce RTX 4060 Ti, jumping from the current AD106-351 die to a new unit, AD104-150-KX. The third adjustment (allegedly) affects the GeForce RTX 4060, going from AD107-400 to AD106-255, also timetabled for next month. MEGAsizeGPU reckons that Team Green will be swapping chips rather than rolling out broadly adjusted specifications, although a best-case scenario could include higher CUDA, RT, and Tensor core counts. According to VideoCardz, the new die designations have already popped up in freshly released official driver notes, which suggests the variants are getting an "under the radar" launch treatment.
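Since these variants are apparently getting a quiet launch, one rough way to check what a given card reports is to query its name, PCI device ID, and VBIOS version through nvidia-smi. Whether a refreshed board actually exposes a different device ID or VBIOS than the original SKU is an assumption here, and the AD10x die codename itself is not reported; this is only a minimal sketch:

Code:
import subprocess

# Query the reported GPU name, PCI device ID, and VBIOS version via nvidia-smi.
# Assumption: a refreshed die variant ships with a different PCI device ID or
# VBIOS than the launch SKU; nvidia-smi does not expose the AD10x codename.
result = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=name,pci.device_id,vbios_version",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())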



View at TechPowerUp Main Site | Source
 
Wait wtf isn't there already a 4070 Super? Or is "refresh" not what that means?
 
The specifications remain the same. You don't get more shaders or anything else; only the physical die underneath the vapour chamber (if present) is different.

- Yep.

NV does this from time to time, cutting down garbage dies from a higher product tier to match the specs of a lower tier part normally made with a smaller die.

Every once in a while there are some weird effects with performance due to some underlying changes in core arch or how things are cut to arrive at the right shader and bus numbers.
 
RTX 4070 Ti Super Ultra incoming

Jokes aside, like ARF and Godisan said, this is probably just NV doing some inventory trimming/die reallocation. Though I would be interested to see the change in thermal performance from using a super-cut-down AD103 vs its corresponding AD104 die. More dark silicon to soak up and distribute heat.
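For a rough sense of that effect, here is a back-of-the-envelope heat-flux comparison, a sketch assuming an RTX 4070-class 200 W board power spread uniformly over approximate public die areas (a big simplification, since real dies have hotspots):

Code:
# Approximate heat flux if the same board power were spread over each die.
# Die areas (~295 and ~379 mm^2) and the 200 W figure are rough public
# numbers; uniform spreading is an assumption for illustration only.
BOARD_POWER_W = 200

for die, area_mm2 in [("AD104 (~295 mm^2)", 295), ("AD103 (~379 mm^2)", 379)]:
    print(f"{die}: ~{BOARD_POWER_W / area_mm2:.2f} W/mm^2")

# Roughly 0.68 vs 0.53 W/mm^2: the larger, partly dark AD103 spreads the same
# heat over more silicon, which is the thermal upside being speculated about.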
 
No thanks, I'll just wait on the Ti-X5-Uber-Mega-Duper-Quadruple-Omega-particle-powered versions to come out, hahaha :)

Or by that time, hopefully the 5x series will be out, and the vicious greed-mongering, cash-cow milking cycle can start all over once again...with yet anutha round of minuscule, puny, dweeb-inspiring 3-7% performance increases for about 1.5-2x the price of current cards...

Oh yea, I can't wait !

n.O.t.. /s
 
It seems to have come out of the SSD manufacturers' playbook; I wonder why companies are free to do this without facing any lawsuits.
 
It seems to have come out of the SSD manufacturers' playbook; I wonder why companies are free to do this without facing any lawsuits.
Unlike the SSD manufacturers, this should have no discernable effect for the end users.
 
Years ago I stated that the 1050 Ti was an amazing GPU, but IMHO it was deliberately crippled by Ngreedia so it could not outperform the rest of its product line at the time. Now we have people from Brazil tweaking older video cards and, what do you know, they are getting some very good results in performance increases.

I am stating once again that my previous comment now describes part of Ngreedia's standard practice: squeeze every last dollar out of their product and then some. They will tweak firmware and make some hardware changes, as well as software changes.

Just be aware of this. We are never going to get the cost vs performance that we used to get.
 
Unlike the SSD manufacturers, this should have no discernable effect for the end users.
If I'm not mistaken, there were cases where it made a difference. Be that as it may, it still sounds dirty to change the physical characteristics of a product without changing the name.
 
Chaos is a leader, confusion is the way.
 
One wouldn't know what sits under the hood anyway. NV could use AD102 and disable 66% of the chip for all I care, just to get rid of it in time for the 50 series; we don't want them sitting on piles and piles of that. Dual NVENC encoders would be nice, but no such luck.
 
I'm waiting for the next RTX 8080 Ti Super refresh, thank you very much.
 
There must be so much waste on these if Nvidia releases cut down versions of already cut down chips. With Ampere, everybody blamed the Samsung foundry, but what's going on now?
 
Just give us the 5090 so we can all bitch about a $2,500 GPU.
 
Unlike the SSD manufacturers, this should have no discernable effect for the end users.
The overall performance should be the same.
Every once in a while there may even be an edge case where there are benefits, too.

There must be so much waste on these if Nvidia releases cut down versions of already cut down chips. With Ampere, everybody blamed the Samsung foundry, but what's going on now?
This is nothing new; they even did this with the great Pascal generation, like cutting down "1080" to "1060".

This is what they usually do towards the end of the production cycle: assess which chips they have left over that are too low quality for the normal bins but still working. The alternative would be to discard these chips, which is wasteful and a bit sad, as they could still provide usefulness and fun to many users.

And they probably don't earn a lot from it either, as they must sell these dirt cheap to the AIB vendors for it to be worth developing a separate PCB, etc.
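To make the salvage idea concrete, here is a minimal sketch of the sorting decision, assuming AD104's physical 60 SMs and the active-SM counts of the retail SKUs; the thresholds and the decision logic are illustrative assumptions, not NVIDIA's actual binning rules:

Code:
# Illustrative salvage binning for AD104 dies. Active-SM targets match the
# retail SKUs, but the sorting logic itself is an assumption for illustration.
AD104_PHYSICAL_SMS = 60

# (SKU, active SMs it needs), from highest to lowest bin.
BIN_TARGETS = [
    ("RTX 4070 Ti", 60),
    ("RTX 4070 SUPER", 56),
    ("RTX 4070", 46),
    ("RTX 4060 Ti (AD104-150-KX variant)", 34),
]

def assign_sku(working_sms: int) -> str:
    """Place a tested die into the highest bin its working SM count allows."""
    for sku, required in BIN_TARGETS:
        if working_sms >= required:
            return sku
    return "scrap / held back"

for sms in (60, 50, 40, 30):
    print(f"{sms} working SMs -> {assign_sku(sms)}")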
 
This is a very common practice that happens every generation for Nvidia: dies that haven't binned fully (defective/cut down) are placed into new SKUs and branded as the SKU that aligns with the shader count they were cut to. It's just another way for Nvidia to make money and not lose profits on defective chips.

EDIT: there is no difference in the specs except the chip on the PCB; however, since the die area is bigger and there are fewer active cores, they will probably perform better and cool better, maybe a 1-5% difference in performance at best (sometimes the ROPs are different because of how they are coupled with the configuration). It is what it is.

In the rare case that the ROPs are different, there can be a significant performance uplift in certain benchmarks/games; an example is the RTX 3060 GA104 with 64 ROPs vs 48.
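As a back-of-the-envelope illustration of the ROP point: theoretical pixel fill rate scales with ROP count times clock. A sketch using the 48 vs 64 ROP figures above, assuming both variants hold the same ~1.78 GHz reference boost clock (the clock figure and the equal-clocks assumption are mine):

Code:
# Theoretical pixel fill rate = ROPs * clock. ROP counts are from the post
# above; the shared 1.78 GHz boost clock is an assumed reference figure.
BOOST_CLOCK_GHZ = 1.78

for rops in (48, 64):
    print(f"{rops} ROPs -> ~{rops * BOOST_CLOCK_GHZ:.0f} Gpixel/s")

# 64 vs 48 ROPs is a ~33% theoretical fill-rate advantage, which is why
# ROP-bound tests can show an uplift even when shader counts are identical.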
 
This is a very common practice that happens every generation for Nvidia: dies that haven't binned fully (defective/cut down) are placed into new SKUs and branded as the SKU that aligns with the shader count they were cut to. It's just another way for Nvidia to make money and not lose profits on defective chips.

EDIT: there is no difference in the specs except the chip on the PCB; however, since the die area is bigger and there are fewer active cores, they will probably perform better and cool better, maybe a 1-5% difference in performance at best (sometimes the ROPs are different because of how they are coupled with the configuration). It is what it is.

In the rare case that the ROPs are different, there can be a significant performance uplift in certain benchmarks/games; an example is the RTX 3060 GA104 with 64 ROPs vs 48.
Isn't it also true they aren't the best bins to begin with, so they'll likely also need a bit more power for the exact same clock as a fully enabled OG chip?

The 1070 Ti wasn't exactly great, for example: worse metrics across the board even if you omit the shader deficit. I'm not sure those GP104 1060s did great either, but I never heard of them doing a lot better due to 'dark silicon'. The main issue is probably that these aren't high-TDP cards to begin with, so they weren't missing out on cooling, but they still did sip more power, however little.

There must be so much waste on these if Nvidia releases cut down versions of already cut down chips. With Ampere, everybody blamed the Samsung foundry, but what's going on now?
The node might be great, but the yields probably aren't stellar, just palatable. I guess they're priced accordingly...
At the same time, Samsung's node just wasn't great, nor were the yields.
 
The node might be great, but the yields probably aren't stellar, just palatable. I guess they're priced accordingly...
At the same time, Samsung's node just wasn't great, nor were the yields.
Sure, but why does only Nvidia suffer from bad yields when they are far from being the only company ordering from Samsung/TSMC?
 
EDIT: there is no difference in the specs except the chip on the PCB; however, since the die area is bigger and there are fewer active cores, they will probably perform better and cool better, maybe a 1-5% difference in performance at best (sometimes the ROPs are different because of how they are coupled with the configuration). It is what it is.
So this is interesting to know. Would these versions more likely be "golden samples" for stuff like overclocking and undervolting then?
 
Sure, but why does only Nvidia suffer from bad yields when they are far from being the only company ordering from Samsung/TSMC?
They don't? But now look at the volume Nvidia is working with.

Also, RDNA3 is a chiplet GPU and isn't on a cutting-edge node either. AMD's biggest GPU die is 304 mm²; Nvidia's biggest is 609 mm². Basically, AMD's top end has the same size/yield risk as Nvidia's 4070 Ti, but offers +30% raw perf.
 
They don't? But now look at the volume Nvidia is working with.

Also, RDNA3 is a chiplet GPU and isn't on a cutting-edge node either. AMD's biggest GPU die is about 300 mm²; Nvidia's biggest is 609 mm².
Sure, but the Navi 31 GCD has about the same number of transistors as the whole AD103 chip. They're both made on a 5 nm node at TSMC, so I would expect the same amount of waste.
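A rough way to put numbers on the size-vs-yield point is a simple Poisson defect model, yield ≈ exp(-D0 × area); the defect density below is an assumed ballpark for a mature N5-class node, not a published figure for these products:

Code:
import math

# Defect-limited yield under a simple Poisson model: yield = exp(-D0 * A).
# D0 = 0.07 defects/cm^2 is an assumed ballpark, not a published figure.
DEFECT_DENSITY = 0.07  # defects per cm^2 (assumption)

def poisson_yield(area_mm2: float, d0: float = DEFECT_DENSITY) -> float:
    return math.exp(-d0 * area_mm2 / 100.0)  # mm^2 -> cm^2

for name, area_mm2 in [("Navi 31 GCD (~304 mm^2)", 304),
                       ("AD102 (~609 mm^2)", 609)]:
    print(f"{name}: ~{poisson_yield(area_mm2):.0%} defect-free dice")

# Roughly 81% vs 65% under these assumptions: the larger monolithic die throws
# off far more partially defective chips, which is exactly the supply that
# ends up salvaged into cut-down SKUs.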
 