Wednesday, June 14th 2023

NVIDIA GeForce RTX 4060 to Release on June 29

NVIDIA appears to be pulling the launch of its GeForce RTX 4060 (non-Ti) graphics card forward from the July 2023 date the company originally announced. Leaked documents shared by MEGAsizeGPU suggest that NVIDIA could make the RTX 4060 available on June 29, which means reviews could go live on June 28 for the MSRP cards and on June 29 for the premium ones priced above MSRP. The card was earlier expected to launch alongside the 16 GB variant of the RTX 4060 Ti in July.

The RTX 4060 is a significantly different product from the RTX 4060 Ti the company launched in May; it is based on the smaller AD107 silicon. The card is expected to feature 3,072 CUDA cores, 24 RT cores, 96 Tensor cores, 96 TMUs, and 32 ROPs, compared to the 4,352 CUDA cores, 34 RT cores, 136 Tensor cores, 136 TMUs, and 48 ROPs of the RTX 4060 Ti. The memory configuration is similar, with 8 GB of GDDR6 memory across a 128-bit wide memory bus; however, the memory speed is slightly lower, at 17 Gbps versus the Ti's 18 Gbps. The RTX 4060 has a TGP of just 115 W. The company hasn't finalized its price yet.
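As a rough sanity check on those figures, peak memory bandwidth falls straight out of the data rate and bus width. The sketch below assumes the 17 Gbps and 18 Gbps speeds quoted above and ignores Ada's enlarged L2 cache, which offsets some of the raw-bandwidth gap:

```python
# Peak memory bandwidth = data rate per pin (Gbps) * bus width (bits) / 8 bits per byte
def peak_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Return theoretical peak memory bandwidth in GB/s."""
    return data_rate_gbps * bus_width_bits / 8

print("RTX 4060   :", peak_bandwidth_gb_s(17, 128), "GB/s")  # 272.0 GB/s
print("RTX 4060 Ti:", peak_bandwidth_gb_s(18, 128), "GB/s")  # 288.0 GB/s
```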

Update Jun 14th: NVIDIA confirmed the launch date on Twitter:
NVIDIA: "The GeForce RTX 4060 will now be available to order starting June 29, at 6 AM Pacific."
Sources: MEGAsizeGPU (Twitter), NVIDIA Twitter

76 Comments on NVIDIA GeForce RTX 4060 to Release on June 29

#51
bug
Bomby569just buy a 3060 Ti, they are the same thing, almost. If you don't care about the VRAM issue.
Or a 3060 with more VRAM.
Or check the alternatives from AMD or Intel. The 6700 XT, for example.
I'm really torn (hence my leaning on "pass"). If I upgrade, I really want to test RTX myself, so it's going to be Nvidia. My main monitor is 4K, so I would need DLSS 3 as well. Though I figure I can circumvent DLSS and just upscale FHD in the drivers/monitor.
Posted on Reply
#52
Why_Me
tussinmanI think it will be like the 4060 Ti in the sense that it's barely faster.

Wouldn't be surprised if the 4060 is only like 4-8% faster at 1080p than the vanilla 3060 and equal/slightly worse at 1440p. Pretty embarrassing considering they're asking the 2+ year old 3060's price for basically the same performance and worse RAM/bandwidth (reviewers will play up the power draw and DLSS 3.0 but be forced to admit that the actual real-world performance is embarrassing).
Who in their right mind would purchase this card for a 1440P gaming build other than someone who plays nothing other than strategy games?
Posted on Reply
#53
Chrispy_
AusWolfI think the mindset here is that nothing is wasteful that serves a purpose. Sure, if you always play the latest games, and you always want maximum graphical settings, and/or you already have a 3060 or something better, then sure, the 4060 is wasteful. But if you're upgrading from a 1060, and just want to play current games (that is, you're not interested in new releases), and/or you don't mind turning texture resolution down a notch, and the 4060 isn't more expensive than the 3060, then I see the point behind it.

Similarly to the way the Radeon RX 7600 isn't meant for people who have a 6600 XT or better, the 4060 won't be an upgrade path for Ampere owners, either. It'll have to be reflected in its price, though.
This is the crux, yes.

It'll make a good budget esports card for 1080p medium/competitive settings for years to come. As long as it is actually "budget", there's no problem with this. If it's selling from $299 upwards, though, it's not really going to offer a significantly better esports experience than a vanilla RX 6600, which you can get brand new for $179 now.

The 3050 / 3060 8GB / 6650XT / 7600 were all given negative review coverage not because they're bad cards in isolation, but because they're too expensive for the target audience they're serving (esports and casual gamers), whilst having insufficient VRAM for the more demanding AAA gamer looking for a new GPU. Even the 3050 4GB has a purpose and an audience it will satisfy, just not at the ridiculous asking price of $249.
Posted on Reply
#54
tussinman
Why_MeWho in their right mind would purchase this card for a 1440P gaming build other than someone who plays nothing other than strategy games?
For enthusiasts like us who are really informed, then logically, yes, it's probably not a good fit. But at the same time, if you're going to charge in the $300 to $350 range for a video card in 2023, it should realistically play 1440p decently (especially since a 4060 should be offering 2080 Super/3060 Ti-level performance, no excuses).

It's not like that's a unicorn either (the 6700 and 6700 XT currently sell in the low $300s and do well at 1440p).
Posted on Reply
#55
Artie
In a way, this slow price-to-performance development is nice. I don't feel much need to buy a graphics card every year or so. In July 2018 I got a 1060 6GB; in December 2022 I got a 3060 Ti.
Posted on Reply
#56
Am*
ArkzGenerational improvements too, of course. The 4070 and 3070 have the same number of cores, but the 4070 is about 30% faster, and it has a fraction of the Render Output Units too. The core is clocked a good bit faster, though.

The 4060 has 16% fewer shaders, but the same kind of clock speed uplift as the 4070 has over the 3070, so I reckon it will still be a bit faster than a 3060. It also has fewer ROPs, like the 4070 versus the 3070, and slightly fewer TMUs too. I wouldn't be surprised if it comes in at 5-10% faster when the reviews come in.


Still a bad product for the price. It should have more RAM and a lower price, but people keep buying them, so... meh. Other than the 3060, Nvidia are so stingy with RAM. I wish my 3080 had more than 10GB.

And the 4060 Ti and 4060 only having 8 lanes is pure BS. I pity anyone on a PCIe 3.0 board buying one of these cards.
But that analogy is terrible, because a big part of the performance gains the RTX 4070 got over its predecessor was from gaining 50% more VRAM. The RTX 3070 was severely hampered from launch day with its terrible 8GB VRAM framebuffer, so the generational improvements are massively inflated on any website where the minimum frame rates are counted.

Meanwhile the RTX 4060 loses 33% of the VRAM that its predecessor had, in addition to halving the number of PCIe lanes. There are already benches up on YT where the 4060 Ti gets its ass handed to it by the RTX 3060 in minimum frame rates at 4K purely due to the loss of VRAM (RE4 Remake at 4K maxed out, for example). The 3060 gets minimums in the 20s whilst the 4060 Ti drops to low single digits, freezes and stutters on even a very up-to-date rig. It's dead on arrival.
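(For a sense of how punishing that spillover is, here is a rough sketch comparing local VRAM bandwidth with what an x8 link can feed the card once data overflows into system RAM; these are nominal link rates, and real-world throughput is lower still.)

```python
# Nominal one-way bandwidth of an x8 slot vs. the card's local VRAM (all figures in GB/s).
# Per-lane rates account for 128b/130b encoding overhead.
pcie = {
    "PCIe 3.0 x8": 8 * 0.985,   # 8 GT/s per lane -> ~0.985 GB/s per lane
    "PCIe 4.0 x8": 8 * 1.969,   # 16 GT/s per lane -> ~1.969 GB/s per lane
}
vram = 18 * 128 / 8             # RTX 4060 Ti: 18 Gbps GDDR6 on a 128-bit bus = 288 GB/s

for name, bw in pcie.items():
    print(f"{name}: {bw:.1f} GB/s, roughly {vram / bw:.0f}x slower than local VRAM")
```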

The only place where the RTX 4060 should be released is a recycling centre -- it's literal e-waste. After seeing this crap show of a GPU market, I thank god every day that the leather jacket-wearing clown at Ngreedia never managed to get his grubby hands on ARM, otherwise he would have gutted the entire technology sector and turned it into the same dumpster fire.
Posted on Reply
#57
bug
Am*But that analogy is terrible, because a big part of the performance gains the RTX 4070 got over its predecessor was from gaining 50% more VRAM. The RTX 3070 was severely hampered from launch day with its terrible 8GB VRAM framebuffer, so the generational improvements are massively inflated on any website where the minimum frame rates are counted.
That's not true. The games that supposedly hamper the 3070 were not out when it was released. Review sites that do an apples-to-apples comparison don't test those titles on the 4070 either. Even if they did, you can count those titles on the fingers of one hand; they wouldn't change the score much.

That said, while the 4060 Ti sits just about in what I'd call an acceptable price/perf zone, I'm quite sure reviews will show the plain 4060 is a good way off.
And I think I know what happened: Nvidia (and AMD, for that matter) launched their initial high-end SKUs and were horrified to notice that people aren't willing to pay those prices anymore. So they went into damage-control mode to get their lower-end SKUs into a more palatable range. Only, in order to achieve that, they were forced to cut into the silicon way more than originally planned, resulting in very unbalanced hardware. If I'm right, their next gen will look much saner than what we're getting this iteration.
Posted on Reply
#58
Am*
bugThat's not true. The games that supposedly hamper the 3070 were not out when it was released. Review sites that do an apples-to-apples comparison don't test those titles on the 4070 either. Even if they did, you can count those titles on the fingers of one hand; they wouldn't change the score much.

That said, while the 4060 Ti sits just about in what I'd call an acceptable price/perf zone, I'm quite sure reviews will show the plain 4060 is a good way off.
And I think I know what happened: Nvidia (and AMD, for that matter) launched their initial high-end SKUs and were horrified to notice that people aren't willing to pay those prices anymore. So they went into damage-control mode to get their lower-end SKUs into a more palatable range. Only, in order to achieve that, they were forced to cut into the silicon way more than originally planned, resulting in very unbalanced hardware. If I'm right, their next gen will look much saner than what we're getting this iteration.
It is true. The fact that you haven't noticed it just means you weren't paying attention. I've had my Titan X Maxwell since launch and have seen multiple last gen console ports reach 10GB+ VRAM usage around 5 years before the 3070 was even launched -- COD AW, Batman Arkham Knight, etc. With that being the case, no 8GB GPU was going to be acceptable for the following generation of games long term. You then had COD Warzone that was already out at the time and eating all 12GB VRAM and 10GB+ of RAM on top of that. It was deliberately released with planned obsolescence in mind.
Posted on Reply
#59
bug
Am*It is true. The fact that you haven't noticed it just means you weren't paying attention. I've had my Titan X Maxwell since launch and have seen multiple last gen console ports reach 10GB+ VRAM usage around 5 years before the 3070 was even launched -- COD AW, Batman Arkham Knight, etc.
It's been discussed to death already: if you haven't seen performance drops (and you didn't, except for a select few titles), you can't tell whether VRAM was actually used. You can only tell it was allocated, which is a big difference.
Am*With that being the case, no 8GB GPU was going to be acceptable for the following generation of games long term. You then had COD Warzone that was already out at the time and eating all 12GB VRAM and 10GB+ of RAM on top of that. It was deliberately released with planned obsolescence in mind.
If the performance really dipped, then yes. Otherwise, yes, it was acceptable.
Posted on Reply
#60
Arkz
Am*It is true. The fact that you haven't noticed it just means you weren't paying attention. I've had my Titan X Maxwell since launch and have seen multiple last gen console ports reach 10GB+ VRAM usage around 5 years before the 3070 was even launched -- COD AW, Batman Arkham Knight, etc. With that being the case, no 8GB GPU was going to be acceptable for the following generation of games long term. You then had COD Warzone that was already out at the time and eating all 12GB VRAM and 10GB+ of RAM on top of that. It was deliberately released with planned obsolescence in mind.
If it was using all the VRAM and spilling into system RAM, the framerate would absolutely tank. Games allocating as much as they can but not using all of it don't tank. The few games out there that really do eat massive amounts of RAM are usually unoptimized crap, or being run on max settings where the ultra textures eat all they can and hardly look different from high textures.

The 3070 was fine back when it came out. The few titles back then that suffered could usually be helped just by not using ultra textures. People complain a lot about the perf and VRAM usage at 4K on cards that are sold as 1080p cards anyway, like the 3060. The goalposts move constantly with new games and new hardware too. Nothing is future-proof. I do think that cards like the 4060 should be launching with 12GB minimum though, just to keep them relevant and useful for longer. AMD are at least more generous with their VRAM. Just a shame they often have driver issues with various cards.
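(Side note: the allocated-versus-used distinction bug raised above is easy to demonstrate outside of games. A minimal sketch, assuming a machine with an NVIDIA GPU and a CUDA build of PyTorch; the 2 GiB buffer size is arbitrary.)

```python
import torch  # assumes a CUDA build of PyTorch and an NVIDIA GPU (illustration only)

# Grab ~2 GiB of VRAM without ever reading or writing most of it.
buf = torch.empty(2 * 1024**3, dtype=torch.uint8, device="cuda")

print(f"{torch.cuda.memory_allocated() / 1024**3:.2f} GiB allocated by this process")
# nvidia-smi (or any in-game VRAM overlay) now reports that memory as "used",
# even though nothing has touched it -- allocation alone says little about actual need.
input("Check nvidia-smi, then press Enter to release the buffer...")
del buf
torch.cuda.empty_cache()
```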
Posted on Reply
#61
AusWolf
ArkzAMD are at least more generous with their VRAM. Just a shame they often have driver issues with various cards.
Not with RDNA 2 cards. They are generally as solid as Nvidia. Other than that, I agree.
Posted on Reply
#62
Arkz
AusWolfNot with RDNA 2 cards. They are generally as solid as Nvidia. Other than that, I agree.
Tell that to my 6700 XT. It's had numerous issues over the last few years; they've ironed many out, but a corrupted flicker that can happen when driving triple monitors is still an issue.
Posted on Reply
#63
Am*
bugIt's been discussed to death already: if you haven't seen performance drops (and you didn't, except for a select few titles), you can't tell whether VRAM was actually used. You can only tell it was allocated, which is a big difference.

If the performance really dipped, then yes. Otherwise, yes, it was acceptable.
The performance did dip massively in Batman Arkham Knight during launch -- that's one of the main reasons why it was review-bombed. At QHD maxed, it was eating almost all 8GB VRAM of the R9 390. For me at 4K, it was eating up to 10.5GB before it was removed from the storefronts and then patched several times.

Also, whether those games needed that much VRAM or not isn't really the point I was getting at -- newer, poorly made console ports are only going to increase their requirements. My point was that this card just wasn't made with longevity in mind like the GTX 1070 and similar past 70-class cards were, and that seems to explain the complete stagnation the PC GPU market is in now. The 3070 has the performance to run for at least the next 5 years but is way too kneecapped by that 8GB framebuffer to do so. The reason it seems to have remained usable over the past 3 years is that most console ports have been last-gen re-releases and "remasters" (Spiderman, God of War, TLOU Re-remastered, etc.).
Posted on Reply
#64
Arkz
Am*The performance did dip massively in Batman Arkham Knight during launch -- that's one of the main reasons why it was review-bombed. At QHD maxed, it was eating almost all 8GB VRAM of the R9 390. For me at 4K, it was eating up to 10.5GB before it was removed from the storefronts and then patched several times.

Also, whether those games needed that much VRAM or not isn't really the point I was getting at -- newer, poorly made console ports are only going to increase their requirements. My point was that this card just wasn't made with longevity in mind like the GTX 1070 and similar past 70-class cards were, and that seems to explain the complete stagnation the PC GPU market is in now. The 3070 has the performance to run for at least the next 5 years but is way too kneecapped by that 8GB framebuffer to do so. The reason it seems to have remained usable over the past 3 years is that most console ports have been last-gen re-releases and "remasters" (Spiderman, God of War, TLOU Re-remastered, etc.).
poor ports seem to be more and more common. So many publishers seem to think a buggy mess with awful optimization is good enough and kick it out the door. DRS and DLSS piss me off with this too, cause it seems to be an excuse for it too. Oh it runs like crap? Just let it scale the res down to work better. Oh you can't run it natively in that res? Just render it much lower and upscale. Oh we made a mistake and it tries to eat 13GB of vram? Do 7 patches over the next 3 months to fix it.

It's why I rarely play anything day1 any more.
Posted on Reply
#65
bug
Arkzpoor ports seem to be more and more common.
Sad, but, unfortunately, far from surprising.
ArkzSo many publishers seem to think a buggy mess with awful optimization is good enough and kick it out the door.
Since consoles rake in most of the cash, they get most of the attention. Consoles are closed hardware, not upgradable, and exist in a handful of variants. They set the baseline for the level of effort a developer or a publisher is willing to put in. Almost everything that is a console port gets a hefty translation layer on top, and if the extra muscle of the PC can handle that, good. If not, we get these lame ports instead.
ArkzDRS and DLSS piss me off with this too, cause it seems to be an excuse for it too. Oh it runs like crap? Just let it scale the res down to work better. Oh you can't run it natively in that res? Just render it much lower and upscale. Oh we made a mistake and it tries to eat 13GB of vram? Do 7 patches over the next 3 months to fix it.
I disagree on this one. Ever since PC gaming went 3D, everything has become an approximation. How the video card approximates, I don't really care, as long as the end result is good enough.
I mean, forget DLSS and friends. Two decades ago, antialiasing was all about supersampling. Then, in order to boost performance, multisampling was added. Most reviews started showing how MSAA results are inferior to SSAA. Fast forward two decades and we have TAA, FXAA, and other approximations that can blur everything or shimmer. And we're happy when we can use MSAA instead. Nobody even mentions SSAA anymore; people have accepted that MSAA is good enough.
Anisotropic filtering and its optimizations have gone through the same process as well. Now it's resolution's turn, that's all.
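(For anyone who never ran SSAA: its cost shows up straight in the arithmetic, since 2x2 supersampling shades four times the pixels and then averages them down. A rough numpy sketch of the resolve step, purely as an illustration rather than any driver's actual implementation:)

```python
import numpy as np

def ssaa_resolve(supersampled: np.ndarray, factor: int = 2) -> np.ndarray:
    """Box-filter a frame rendered at factor x the target resolution down to target size.

    supersampled: (H*factor, W*factor, 3) array -- the GPU had to shade factor^2 as many pixels.
    """
    h, w, c = supersampled.shape
    return supersampled.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

# 2x2 SSAA at 1080p means shading a full 3840x2160 frame just to display 1920x1080.
frame = np.random.rand(2160, 3840, 3).astype(np.float32)
print(ssaa_resolve(frame).shape)  # (1080, 1920, 3)
```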
ArkzIt's why I rarely play anything day1 any more.
I don't really game anymore, but even before that I almost never played anything on day 1. Didn't lose anything in the process either ;)
Posted on Reply
#66
AusWolf
bugI disagree on this one. Ever since PC gaming went 3D, everything has become an approximation. How the video card approximates, I don't really care, as long as the end result is good enough.
I mean, forget DLSS and friends. Two decades ago, antialiasing was all about supersampling. Then, in order to boost performance, multisampling was added. Most reviews started showing how MSAA results are inferior to SSAA. Fast forward two decades and we have TAA, FXAA, and other approximations that can blur everything or shimmer. And we're happy when we can use MSAA instead. Nobody even mentions SSAA anymore; people have accepted that MSAA is good enough.
Anisotropic filtering and its optimizations have gone through the same process as well. Now it's resolution's turn, that's all.
Except that AA was invented to use the extra GPU power to make things look better. When you couldn't do SSAA, you had MSAA which isn't as good, but still better than no AA.

DLSS/FSR on the other hand, were invented to make things look slightly worse to compensate for the GPU power that you don't have.

Anti-aliasing adds to the experience at the cost of some performance, but upscaling takes away from it to gain back some performance.
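(To put numbers on the trade-off: upscalers claw back performance by shading far fewer pixels before reconstruction. A quick sketch using the commonly quoted per-axis render scales; treat the exact ratios as approximate.)

```python
# Commonly quoted per-axis render scales for upscaler quality presets (approximate).
scales = {"Native": 1.0, "Quality": 0.667, "Balanced": 0.58, "Performance": 0.5}
target_w, target_h = 3840, 2160  # 4K output

for mode, s in scales.items():
    w, h = round(target_w * s), round(target_h * s)
    print(f"{mode:>11}: renders {w}x{h} ({w * h / 1e6:.1f} MP), then reconstructs to 4K")
```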
Posted on Reply
#67
Arkz
AusWolfExcept that AA was invented to use the extra GPU power to make things look better. When you couldn't do SSAA, you had MSAA which isn't as good, but still better than no AA.

DLSS/FSR on the other hand, were invented to make things look slightly worse to compensate for the GPU power that you don't have.

Anti-aliasing adds to the experience at the cost of some performance, but upscaling takes away from it to gain back some performance.
Bingo.
Posted on Reply
#68
Bwaze
Except there's tons of marketing on how DLSS and other upscaling tricks "look better than native", even from reviewers.
Posted on Reply
#69
AusWolf
BwazeExcept there's tons of marketing on how DLSS and other upscaling tricks "look better than native", even from reviewers.
I call bullshit on that. They can look good, but never better than native res.
Posted on Reply
#70
bug
AusWolfExcept that AA was invented to use the extra GPU power to make things look better. When you couldn't do SSAA, you had MSAA which isn't as good, but still better than no AA.

DLSS/FSR on the other hand, were invented to make things look slightly worse to compensate for the GPU power that you don't have.

Anti-aliasing adds to the experience at the cost of some performance, but upscaling takes away from it to gain back some performance.
You can be open minded or you can nitpick, I guess.
Posted on Reply
#71
AusWolf
bugYou can be open minded or you can nitpick, I guess.
I'm not nitpicking - I merely stated the difference between the design philosophies of anti-aliasing and upscaling. Whatever else AMD's and Nvidia's marketing makes it out to be doesn't concern me.

My take on the matter is that if I can run native, I will. Upscaling can look good, very good even, but it will never be more than an aid to boost framerates when needed.
Posted on Reply
#72
Bwaze
A friend of mine was nearly furious at the suggestion that I can see the difference between native 4K and 4K upscaled with DLSS - "No chance, I'm using it on a 55" OLED, and there is NO DIFFERENCE!"

Then we established that he mainly games as he would on a gaming console - from a comfortable distance, viewing the whole screen, playing fast-paced shooters and role-playing games mostly with a controller, not keyboard and mouse. At that distance, even 1080p looked good, and by focusing on a larger area of the screen there's very little chance of noticing minor differences in image quality on small details.

Then we compared it with my usage: a 27" 4K screen at close distance, playing flight and racing sims where you are often focusing on a small detail on screen, like a target, a runway, an enemy, or a curve in the far distance - you notice every small difference in anti-aliasing, every drop in native resolution due to DLSS... It's similar with real-time strategies and other games with lots of small detail.

PC gaming is much more varied than console gaming. For my friend's usage, it mostly held true that even "performance DLSS" looked good.
Posted on Reply
#73
ThaneX
Looking at this trend, the 40 series will most likely phase out next year lol.
Posted on Reply
#74
EsliteMoby
It has the same CUDA core count, the same VRAM, and the same bandwidth as the laptop 4060M, so I'm guessing around a 5% difference in FPS considering it's 100 W vs. 115 W. Pretty underwhelming.
BwazeExcept there's tons of marketing on how DLSS and other upscaling tricks "look better than native", even from reviewers.
In my experience, DLSS 2.0 has bad blurriness in motion even in quality mode as it's basically a temporal upscaler depending on the in-game TAA to function. Of course, most reviewers used static screenshots for comparison.
Posted on Reply
#75
bug
BwazeA friend of mine was nearly furious at the suggestion that I can see the difference between native 4K and 4K upscaled with DLSS - "No chance, I'm using it on a 55" OLED, and there is NO DIFFERENCE!"

Then we established that he mainly games as he would on a gaming console - from a comfortable distance, viewing the whole screen, playing fast-paced shooters and role-playing games mostly with a controller, not keyboard and mouse. At that distance, even 1080p looked good, and by focusing on a larger area of the screen there's very little chance of noticing minor differences in image quality on small details.

Then we compared it with my usage: a 27" 4K screen at close distance, playing flight and racing sims where you are often focusing on a small detail on screen, like a target, a runway, an enemy, or a curve in the far distance - you notice every small difference in anti-aliasing, every drop in native resolution due to DLSS... It's similar with real-time strategies and other games with lots of small detail.

PC gaming is much more varied than console gaming. For my friend's usage, it mostly held true that even "performance DLSS" looked good.
What I take away from this is that even this early in its lifetime, DLSS already works fine for your friend.
It's not for everybody, of course. But that's why it's an option in the settings menu; it's not turned on automatically for anyone.
Posted on Reply