Tuesday, May 9th 2023

NVIDIA GeForce RTX 4060 Ti Available as 8 GB and 16 GB This Month, RTX 4060 in July

In what could explain the greater attention leaky taps have paid to the GeForce RTX 4060 Ti compared to its sibling, the RTX 4060, NVIDIA is preparing a staggered launch for its RTX 4060 series. We're also learning that there are as many as three SKUs in the series—the RTX 4060 Ti 8 GB, the RTX 4060 Ti 16 GB, and the RTX 4060. All three will be announced later this month; however, only the RTX 4060 Ti 8 GB will be available to purchase at that time. The RTX 4060 Ti 16 GB and RTX 4060 will be available from July.

At this point, little is known about what separates the 8 GB and 16 GB variants of the RTX 4060 Ti besides memory size. The RTX 4060 Ti 8 GB is rumored to feature 34 of the 36 streaming multiprocessors (SM) physically present on the 5 nm "AD106" silicon, which gives NVIDIA some theoretical headroom to enable a few more shaders. Those 34 SM work out to 4,352 CUDA cores, while a fully unlocked AD106 has 4,608. The RTX 4060 is a significantly different SKU that's based on a maxed-out "AD107" silicon, with 30 SM, or 3,840 CUDA cores, although it should be possible for some RTX 4060 cards to be based on a heavily cut-down AD106.
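Since each Ada Lovelace SM carries 128 CUDA cores, the rumored core counts are easy to sanity-check; here is a minimal sketch of that arithmetic (the SM counts are the leaked figures above, not confirmed specifications):

```python
# Ada Lovelace: 128 CUDA cores per streaming multiprocessor (SM).
CORES_PER_SM = 128

# Rumored SM counts from the leak -- not confirmed by NVIDIA.
configs = {
    "RTX 4060 Ti (34 of 36 SM, AD106)": 34,
    "Fully unlocked AD106": 36,
    "RTX 4060 (full AD107)": 30,
}

for name, sm_count in configs.items():
    print(f"{name}: {sm_count * CORES_PER_SM} CUDA cores")
# 4352, 4608, and 3840 -- matching the figures above.
```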
Sources: MEGAsizeGPU (Twitter), VideoCardz

120 Comments on NVIDIA GeForce RTX 4060 Ti Available as 8 GB and 16 GB This Month, RTX 4060 in July

#76
Paranoir andando
MarsM4NPretty sure most people understand the Nvidia issue. ;) Memory capacity >< Bus size >< Greed.

Or do you wanna tell me Nvidia, the absolute market leader of GPUs, is incapable of developing GPUs with adequate bus sizes for adequate memory sizes? Come on.
Apple (a company that I hate): The M1 ULTRA has 128GB for GPU+CPU at 800 GB/s. All done with extra-cheap DDR5 (LPDDR5)

Nvidia can't do something like that.
#77
ModEl4
Good choice if the 4060 Ti 8GB is $399 and the 16GB version is $449; otherwise uninteresting.
At this performance class, 16GB makes exactly as much sense as 8GB did on the RX 580 six years ago, which means a lot of sense... (The only problem is that the RX 580 had a $239 SRP, and the 4060 Ti will be $449 at best.)
If the 4060 Ti is only ~10% slower than the RX 6800 (starting at $479 atm) in QHD raster, I will certainly prefer it at $449 over the RX 6800, given all the other advantages it has.
#78
Dragokar
JimmyDoogs7700XT 16GB please come out so we can finally make those under $1k beasts again like back in 2019 and save PC gaming.
I would love to get a good-value GPU that doesn't lack VRAM or give up too much performance.
#79
R0H1T
Paranoir andandoApple (a company that I hate): The M1 ULTRA has 128GB for GPU+CPU at 800 GB/s. All done with extra-cheap DDR5 (LPDDR5)

Nvidia can't do something like that.
LPDDR5 is cheap :rolleyes:
#80
watzupken
I won't believe it until the product is out in the market. We were led to believe that the RTX 3070 Ti might sport a 16GB VRAM config, but instead we got a meaningless GDDR6X upgrade, still stuck at 8GB. If 16GB is true, that's great, but it will depend on the cost. It also doesn't change the fact that the GPU's specs are very underwhelming for an xx60 Ti class card, which will likely struggle with new titles at 1440p (forget about RT).
#81
ValenOne
MarsM4N16GB on a 4060 Ti is just wasted potential. You pay for something extra that benefits you in maybe 0.01% of games. :rolleyes: Nvidia's lineup doesn't make any sense.
If they copied AMD (who have been more generous for years, without senseless overstacking), their lineup would look like this:

RTX 4090 (24GB) / RTX 4090ti (24GB)
RTX 4080 (20GB) / RTX 4080ti (20GB)
RTX 4070 (16GB) / RTX 4070ti (16GB)
RTX 4060 (12GB) / RTX 4060ti (12GB)
RTX 4050 (10GB) / RTX 4050ti (10GB)
The problem is 4GB GDDR6 chip density; GDDR6 and GDDR6X top out at 2GB per chip.

Samsung announced 4GB GDDR6W chips in November 2022. GDDR6W has 64-bit IO pins instead of GDDR6 / GDDR6X's 32-bit IO pins.
Paranoir andandoApple (a company that I hate): The M1 ULTRA has 128GB for GPU+CPU at 800 GB/s. All done with extra-cheap DDR5 (LPDDR5)

Nvidia can't do something like that.
M1 Ultra has two M1 Max dies. Each M1 Max has 400 GB/s. With LPDDR5-6400, that requires a 512-bit bus per M1 Max.

Apple attaches the LPDDR5-6400 chips to the chip package, like HBM implementations do.
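A minimal sketch of that bandwidth arithmetic, assuming LPDDR5-6400 (6,400 MT/s per pin):

```python
def peak_bandwidth_gbs(bus_width_bits: int, mtps: int) -> float:
    """Peak memory bandwidth in GB/s: bus width in bytes times transfer rate."""
    return (bus_width_bits / 8) * mtps / 1000

print(peak_bandwidth_gbs(512, 6400))   # one M1 Max: 409.6 GB/s (marketed as 400 GB/s)
print(peak_bandwidth_gbs(1024, 6400))  # M1 Ultra:   819.2 GB/s (marketed as 800 GB/s)
```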
-----------

Recent GDDR6W chips have 64-bit IO pins instead of GDDR6 / GDDR6X's 32-bit IO pins.

12 GDDR6W chips can offer 768-bit and 48 GB.

8 GDDR6W chips can offer 512-bit and 32 GB.

4 GDDR6W chips can offer 256-bit and 16 GB.

Not factoring in clamshell configurations.

GDDR6W didn't make it in time for Ada's initial release window.

Intel, AMD, NVIDIA, Hynix, Micron, and Samsung need to gang up and figure out a way to significantly advance GDDRx standards in a timely manner. We need a VESA-type group for PC memory standards instead of the slower JEDEC. The stakeholders in the PC clone industry cooperated when they crushed IBM's PS/2.
watzupkenI won't believe it until the product is out in the market. We were led to believe that the RTX 3070 Ti might sport a 16GB VRAM config, but instead we got a meaningless GDDR6X upgrade, still stuck at 8GB. If 16GB is true, that's great, but it will depend on the cost. It also doesn't change the fact that the GPU's specs are very underwhelming for an xx60 Ti class card, which will likely struggle with new titles at 1440p (forget about RT).
GA104-based RTX A4000 has 16 GB VRAM.
JimmyDoogs7700XT 16GB please come out so we can finally make those under $1k beasts again like back in 2019 and save PC gaming.
With existing GDDR6-20000 at 2GB chip density:

256 bit = 8 chips, 16 GB
192 bit = 6 chips, 12 GB
128 bit = 4 chips, 8 GB

With late-Nov-2022-era GDDR6W at 4GB density (twice the IO pins, twice the density):

12 GDDR6W chips can offer 768-bit and 48 GB.

8 GDDR6W chips can offer 512-bit and 32 GB.

4 GDDR6W chips can offer 256-bit and 16 GB.

Not factoring in clamshell configurations.

GDDR6W didn't make it in time for Ada's and RDNA 3's initial release windows.
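For anyone who wants to verify those configurations, a minimal sketch of the arithmetic (assuming 32-bit IO / 2GB per GDDR6 chip and 64-bit IO / 4GB per GDDR6W chip, as stated above):

```python
def board_config(chips: int, io_bits: int, gb_per_chip: int) -> str:
    """Bus width and capacity for a non-clamshell memory configuration."""
    return f"{chips} chips -> {chips * io_bits}-bit, {chips * gb_per_chip} GB"

for n in (8, 6, 4):      # GDDR6: 32-bit IO, 2 GB per chip
    print("GDDR6: ", board_config(n, 32, 2))
for n in (12, 8, 4):     # GDDR6W: 64-bit IO, 4 GB per chip
    print("GDDR6W:", board_config(n, 64, 4))
```

Clamshell mode would double each capacity at the same bus width by putting two chips on every channel.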
#83
AusWolf
arnold_al_qadr16GB in a midrange card is good news.
What about the 4070 and 4070 Ti? It looks to me like you'll have to choose between VRAM and performance once again, just like with the 3060 and 3070 (Ti).
#84
wheresmycar
16GB at the lower end of the spectrum, alongside consoles' growing memory, sounds about right.

Doesn't matter how effective these SKUs will be by current norms; more VRAM standardised across the board is no longer preventable but inevitable. The earlier we adopt it, the greater the possibilities going forward. While some of us are more than content with the compromised shit-show graphics made available today, others simply fancy tapping into something north of "made available"... you know, the enhanced visual eye-candy that perpetually fails to land on our screens. The possibilities are endless; all that's needed is a wider paint palette alongside compute progression (which is already abundant) and a boatload more time (years) for things to develop into a more immersively stunning picture. Oh, and the big elephant in the room, price: nothing's perfect!

Let's just hope Leaky Leaks is taking a leak with precision (no wet trousers pls).
#85
dirtyferret
Solaris17I'm willing to bet it was a late SKU they had no intention of initially releasing until the backlash regarding their VRAM practices.

It will give them the original $499 price point they wanted the RTX 4060 Ti 8GB to sell for.
#86
Paranoir andando
ValenOneM1 Ultra has two M1 Max dies. Each M1 Max has 400 GB/s. With LPDDR5-6400, that requires a 512-bit bus per M1 Max.

Apple attaches the LPDDR5-6400 chips to the chip package, like HBM implementations do.
-----------
....
...
Yes, I know... He asked whether Nvidia is incapable of developing GPUs with adequate bus sizes for adequate memory sizes, and I answered that it's possible, and with cheap RAM, but Nvidia doesn't want to. Nothing else.

I'm not saying Nvidia has to redesign AD106 or AD107 now; I know that's not possible.

And yes, I know they would need wider buses and more memory controllers in the GPU, i.e. a bigger, more expensive chip. But which is better and cheaper: big silicon + lots of cheap RAM, or small silicon + a little expensive RAM?
#87
Dahak390
Now why, pray tell, did Nvidia not do that with the 3070 Ti? It only has an 8 gig version, yet it seems all the other midrange cards come in two flavours of VRAM. Nvidia cheaped out because they knew the 40 series was coming and just could not be bothered. Shame on you, Nvidia. lol. I did pay a lot for that GPU, so I kinda have a right to be a little pissed about it. That being said, the 4060 Ti 16 gig version would be OK for higher resolutions. But I still think the bus widths for the 40 series on everything below the 4090 should match the previous generation's specs. Maybe they did it this way so the 50 series will get more "wows" when it returns to the same bandwidth as the 30 series, with more RT cores and a DLSS 3 revamped with more AI involvement. I think the 50 series will be a smashing success. But if it is better than the 40 series like the 40 was better than the 30, we could have a problem, as 8 gig GPUs will be useless in a couple more years. My next upgrade will be a GPU with at least 16 gigs of VRAM.
#88
ValenOne
Dahak390Now why, pray tell, did Nvidia not do that with the 3070 Ti? It only has an 8 gig version, yet it seems all the other midrange cards come in two flavours of VRAM. Nvidia cheaped out because they knew the 40 series was coming and just could not be bothered. Shame on you, Nvidia. lol. I did pay a lot for that GPU, so I kinda have a right to be a little pissed about it. That being said, the 4060 Ti 16 gig version would be OK for higher resolutions. But I still think the bus widths for the 40 series on everything below the 4090 should match the previous generation's specs. Maybe they did it this way so the 50 series will get more "wows" when it returns to the same bandwidth as the 30 series, with more RT cores and a DLSS 3 revamped with more AI involvement. I think the 50 series will be a smashing success. But if it is better than the 40 series like the 40 was better than the 30, we could have a problem, as 8 gig GPUs will be useless in a couple more years. My next upgrade will be a GPU with at least 16 gigs of VRAM.
NVIDIA's Ada generation has a large L2 cache (think of AMD's Infinity Cache) that reduces how often the GPU has to go out to external memory.
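To illustrate the point, a minimal sketch with purely hypothetical hit rates (illustrative numbers, not NVIDIA's figures):

```python
def dram_traffic_gb(requests_gb: float, l2_hit_rate: float) -> float:
    """Traffic that actually reaches external memory after the L2 filters requests."""
    return requests_gb * (1.0 - l2_hit_rate)

# A hypothetical frame issuing 100 GB of memory requests:
print(dram_traffic_gb(100, 0.30))  # smaller L2 (Ampere-like): 70 GB reaches DRAM
print(dram_traffic_gb(100, 0.60))  # bigger L2 (Ada-like):     40 GB reaches DRAM
# The bigger cache lets a narrower memory bus serve the same workload.
```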

The fully enabled GA104 ships as both the RTX A4000 16GB and the RTX 3070 Ti 8GB. The RTX A4000 16GB is priced similarly to the GA102-based RTX 3080 Ti 12GB.
R0H1TLPDDR5 is cheap :rolleyes:
Apple Mac Studio M1 Ultra is not cheap.

www.apple.com/shop/buy-mac/mac-studio
The Apple Mac Studio M1 Ultra with a 20-core/20-thread CPU, 128 GB of memory, and a 1 TB SSD has a $4,799.00 USD asking price.
#89
Dahak390
I know, but show me a 3070 Ti with 16 gigs of VRAM. I was unable to locate one, and I was told NVIDIA did not make one and probably wouldn't either. Guess what: only the 8 gig version was produced. Not long after I purchased my 3070 Ti, Nvidia was announcing the 40 series. And if games are made to run with more than 8 gigs, then GPU manufacturers should be producing video cards with enough VRAM, and what do you know, now they are. At least AMD and Intel figured it out and cared enough to put the necessary amount of VRAM on their cards, whereas Nvidia figured they'd wait for a few thousand more people to spend exorbitant amounts of money to play the games. Look where it got them today. Now they listen to us gamers, sort of.
#90
Hyderz
Dahak390I know, but show me a 3070 Ti with 16 gigs of VRAM. I was unable to locate one, and I was told NVIDIA did not make one and probably wouldn't either. Guess what: only the 8 gig version was produced. Not long after I purchased my 3070 Ti, Nvidia was announcing the 40 series. And if games are made to run with more than 8 gigs, then GPU manufacturers should be producing video cards with enough VRAM, and what do you know, now they are. At least AMD and Intel figured it out and cared enough to put the necessary amount of VRAM on their cards, whereas Nvidia figured they'd wait for a few thousand more people to spend exorbitant amounts of money to play the games. Look where it got them today. Now they listen to us gamers, sort of.
It's just damage control for Nvidia, and business as usual; they will be back to their usual tricks.
A while back, Nvidia was caught with performance-lowering drivers to force users to upgrade.
#91
Dahak390
Oh wow, I forgot about that. Not like they got in trouble for it. I am seriously leaning towards AMD for my next upgrade. At least they try to be honest with us. Maybe even Intel, if they can manage to get some more cards with competitive performance against NVIDIA.
#92
AusWolf
Dahak390I know, but show me a 3070 Ti with 16 gigs of VRAM. I was unable to locate one, and I was told NVIDIA did not make one and probably wouldn't either. Guess what: only the 8 gig version was produced. Not long after I purchased my 3070 Ti, Nvidia was announcing the 40 series. And if games are made to run with more than 8 gigs, then GPU manufacturers should be producing video cards with enough VRAM, and what do you know, now they are. At least AMD and Intel figured it out and cared enough to put the necessary amount of VRAM on their cards, whereas Nvidia figured they'd wait for a few thousand more people to spend exorbitant amounts of money to play the games. Look where it got them today. Now they listen to us gamers, sort of.
That's just the normal Nvidia product cycle since the 20-series:
1. Release nerfed product as the new high-end for ridiculous prices. Make sure everybody knows it's the best of the best.
2. Release slightly less nerfed refresh for ludicrous prices. Tell everybody that it's even better than the last one.
3. Generation change: rinse and repeat.

The 16 GB 3070 Ti would have broken this cycle by not being nerfed. That cannot happen these days.
#93
Dahak390
Indeed. Don't get me wrong, I like NVIDIA. Over the years they have produced some amazing technology and applied it to their GPUs, and voilà, you have a blazing-fast video card that rocks. But AMD has been able to produce some great cards and technology themselves, and that has allowed them to stay competitive with Nvidia. Something we seem to forget is that AMD is fighting two different wars on two fronts. On one side, they are competing with Intel in the CPU market and doing a bang-up job of it. Yay AMD, go get them. But on the other side, they are also competing with NVIDIA in the graphics market, and not doing a bad job of that either. Admittedly, they could do better if they could put more man-hours into R&D for the GPU division. I believe that, given enough time, they will persevere and give NVIDIA a long run in GPU land.
#95
FlyingHopes
Yep, NVIDIA is definitely trolling us. LMAO.
#96
cbb
For my use case (older MMOs at 3840x2160; no RT, no DLSS, but 4K with all options maxed), I plan to get the cheapest 16GB card of this gen with decent horsepower that I can. My 2070 8GB bogs down pretty hard regularly, although my ancient CPU is probably part of the problem (planning to do a whole new rig). This might do the trick, depending on price/perf vs the higher tiers or the upcoming offerings from team red. C'mon AMD, step up already! :)

Literally nothing I play supports either RT or DLSS, nor is it likely to be added. And for anything other than games (office, browsing, etc.), even an iGPU is good enough, so the card is really just for my MMOs. And I doubt I'm the only one.
#97
krisdee
solarmysticAfter the precedent they set with the 3060 12 GB and 2060 12 GB VRAM editions having more VRAM than cards higher up the stack, nothing surprises me anymore.

A 4060 Ti/4060 with 16 GB VRAM could unironically be the most future-proof budget card in the Ada lineup compared to the 4070/4070 Ti with just 12 GB VRAM, once games require 16 GB VRAM for maximum texture quality.

Which will be the reality in a few years time.

What happened with the 3070 will happen with the 4070 again down the road.
Future-proof with 288 GB/s of memory bandwidth?
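For reference, that figure presumably assumes the rumored 128-bit bus with 18 Gbps GDDR6; a quick check of the arithmetic (both specs are rumors at this point):

```python
bus_width_bits = 128  # rumored RTX 4060 Ti bus width
gbps_per_pin = 18     # rumored GDDR6 data rate
# Peak bandwidth in GB/s = bus width in bytes * per-pin rate in Gbps
print(bus_width_bits / 8 * gbps_per_pin)  # 288.0
```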
#98
Paranoir andando
R0H1TLPDDR5 is cheap :rolleyes:
No, GDDR7 is the cheapest
Dahak390Now why, pray tell, did Nvidia not do that with the 3070 Ti? It only has an 8 gig version, yet it seems all the other midrange cards come in two flavours of VRAM. Nvidia cheaped out because they knew the 40 series was coming and just could not be bothered. Shame on you, Nvidia. lol. I did pay a lot for that GPU, so I kinda have a right to be a little pissed about it. That being said, the 4060 Ti 16 gig version would be OK for higher resolutions. But I still think the bus widths for the 40 series on everything below the 4090 should match the previous generation's specs. Maybe they did it this way so the 50 series will get more "wows" when it returns to the same bandwidth as the 30 series, with more RT cores and a DLSS 3 revamped with more AI involvement. I think the 50 series will be a smashing success. But if it is better than the 40 series like the 40 was better than the 30, we could have a problem, as 8 gig GPUs will be useless in a couple more years. My next upgrade will be a GPU with at least 16 gigs of VRAM.
Lovelace has the same buses as previous generations, but new names that don't correspond to each chip's percentage of the full CUDA count (and likewise TMUs, RT cores, tensor cores, etc.). If you look at the real GPU instead of the fake new name, you'll understand it. For example:

Turing   - 47%-55% CUDA - mid-range - 256-bit - TU106 - RTX 2060 Super & 2070
Ampere   - 45%-57% CUDA - mid-range - 256-bit - GA104 - RTX 3060 Ti & 3070 & 3070 Ti
Lovelace - 53% CUDA     - mid-range - 256-bit - AD104 - RTX 4080

It's so easy to understand.

Another example, at 33% of the maximum CUDA count:

Pascal   - 33% CUDA - mid-low range - 192-bit - GP106 - GTX 1060
Turing   - 33% CUDA - mid-low range - 192-bit - TU116 - GTX 1660 Ti
Ampere   - 33% CUDA - mid-low range - 192-bit - GA106 - RTX 3060
Lovelace - 32% CUDA - mid-low range - 192-bit - AD104 - RTX 4070

Nvidia changed the names: the fake AD104 is the real AD106, and the real AD104 got the new fake name AD103.

The 4060 Ti is not midrange; it's the new name for the x50 range. A GPU with 24% of the maximum CUDA count is a low-end GPU (and of course, 128-bit). Sad but true.
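The die-percentage figures in those tables can be reproduced from public core counts; here's a minimal sketch (each SKU divided by the full flagship die of its generation):

```python
# (SKU, CUDA cores, CUDA cores of the full flagship die that generation)
skus = [
    ("GTX 1060 (GP106)",    1280,  3840),   # vs. full GP102
    ("GTX 1660 Ti (TU116)", 1536,  4608),   # vs. full TU102
    ("RTX 3060 (GA106)",    3584, 10752),   # vs. full GA102
    ("RTX 4070 (AD104)",    5888, 18432),   # vs. full AD102
    ("RTX 4060 Ti (AD106)", 4352, 18432),   # vs. full AD102
]
for name, cores, full_die in skus:
    print(f"{name}: {cores / full_die:.0%} of the flagship die")
# ~33% for the 1060 / 1660 Ti / 3060, ~32% for the 4070, ~24% for the 4060 Ti.
```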
#99
ir_cow
64KAll I can tell you is that Nvidia works in mysterious ways. They do oddball things from time to time and their logic is unfathomable when they do it.

Edit: I could reference all the way back to the Kepler series. The GTX 780 came in a 6 GB VRAM version, while the GTX 780 Ti, though considerably faster, only came with 3 GB VRAM.
My theory on that is they wanted to sell more 780s for SLI. Giving 6GB to the Titans was also a selling point; the $1,000 Titan Black was basically a 780 Ti with 6GB.
#100
Mahboi
bugIf this sells well, we may get new 4070 16GB models. But there are still many unknowns.
Ofc not, WTF lol???
Adding 4GB of VRAM means redrawing the bus. That's redrawing the I/O, which means redrawing the chip, and that's something they never, ever do. They prefer doubling the VRAM on the existing bus (3060 12GB) to ever remaking a chip. And it's the same for AMD. We'll never see any 4070s with 16GB. We may see 4060 Tis with 16GB or 4070 Tis with 24GB (highly doubtful on the latter), though.
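For reference, a minimal sketch of why a fixed bus constrains the possible capacities (GDDR6 uses 32-bit channels; clamshell mode hangs two chips off each channel; 2 GB chips assumed throughout):

```python
def capacity_gb(bus_bits: int, gb_per_chip: int = 2, clamshell: bool = False) -> int:
    """VRAM capacity for a GDDR6 board built from 32-bit channels."""
    chips = (bus_bits // 32) * (2 if clamshell else 1)
    return chips * gb_per_chip

print(capacity_gb(192))                  # 12 GB -- e.g. RTX 3060, six 2 GB chips
print(capacity_gb(128))                  #  8 GB -- RTX 4060 Ti 8 GB
print(capacity_gb(128, clamshell=True))  # 16 GB -- the rumored 16 GB variant
print(capacity_gb(192, clamshell=True))  # 24 GB -- a 192-bit 4070 doubles to 24 GB, never 16 GB
```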