Friday, February 2nd 2024
NVIDIA GeForce RTX 3050 6GB Formally Launched
NVIDIA today formally launched the GeForce RTX 3050 6 GB as its new entry-level discrete GPU. The RTX 3050 6 GB is a significantly different product from the original RTX 3050 that the company launched as a mid-range product back in January 2022. The original RTX 3050 launched on the 8 nm GA106 silicon, with 2,560 CUDA cores, 80 Tensor cores, 20 RT cores, 80 TMUs, and 32 ROPs, paired with 8 GB of 14 Gbps GDDR6 memory across a 128-bit memory bus. Since these specs also matched the maximum core configuration of the smaller GA107 silicon, the company re-based the RTX 3050 on GA107 toward the end of 2022, with no change in specs but a slight improvement in energy efficiency from the switch to the smaller chip. The new RTX 3050 6 GB is based on the same GA107 silicon, but with significant changes.
To begin with, the most obvious change is memory. The new SKU features 6 GB of 14 Gbps GDDR6 across a narrower 96-bit memory bus, for 168 GB/s of memory bandwidth. That's not all: the GPU itself is significantly cut down, with just 16 SMs instead of the 20 found on the original RTX 3050. This works out to 2,048 CUDA cores, 64 Tensor cores, 16 RT cores, 64 TMUs, and an unchanged 32 ROPs. The GPU also comes with a lower boost clock of 1470 MHz, compared to 1777 MHz on the original RTX 3050. The silver lining with this SKU is its total graphics power (TGP) of just 70 W, which means cards can do away with power connectors entirely and rely on PCIe slot power alone. NVIDIA hasn't listed its own MSRP for this SKU, but last we heard it was supposed to go for $179 and square off against the likes of the Intel Arc A580.
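The quoted bandwidth follows directly from the bus width and per-pin data rate; as a quick sanity check (the formula is the standard one for GDDR memory, the numbers are the ones quoted above):

```python
def memory_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width in bytes times per-pin data rate."""
    return (bus_width_bits / 8) * data_rate_gbps

# RTX 3050 6 GB: 96-bit bus at 14 Gbps per pin
print(memory_bandwidth_gb_s(96, 14))   # 168.0 GB/s, matching NVIDIA's figure
# Original RTX 3050: 128-bit bus at 14 Gbps per pin
print(memory_bandwidth_gb_s(128, 14))  # 224.0 GB/s
```

The narrower bus alone costs the new SKU a quarter of the original card's memory bandwidth, before any of the SM cuts are counted.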
65 Comments on NVIDIA GeForce RTX 3050 6GB Formally Launched
Upgrade a decent (Core i7) older office PC with a proprietary PSU (common for years now). Hey, that's good: keep something outta the landfill, and even a 10-year-old Haswell i7 will be a good match in most games. But:
It's a 3040, just freakin' name it that and stop the obfuscation.
Too expensive, needs to be $30-50 cheaper.
Also why the hell doesn't the FTC slap Nvidia around for these terribly deceptive naming practices?
Also, who in their right mind would consider this at 55% of the price of an entire 8-core APU? The cheapest R7 7700 non-X costs $315 on Amazon. Any other card for $180 would be better (a PowerColor RX 6600 costs $200 at Amazon). This is a product for inveterate fanboys. Yeah, this is a complete scam. There's no way to sell this card outside prebuilts, short of deliberately taking advantage of non-savvy people. Might as well push RTX bulls**t narratives for this purpose. The 4060 is already a rebadged 4050. But you're right, at this pace nVidia will rebrand the 4060 as its bottom card and slap on an even higher price tag, just because it's a 5000-series. LOL. Indeed. This is the first reason that comes to mind. Capitalizing on the ray tracing feature doesn't make sense for cards lower than the 4070 Super. The entire fact of releasing a dGPU of RX 570-level performance in 2024 is an absolute slap in the face. There should be progress, not regress. At this point, the absolute lowest cards that nVidia and AMD should have released are the 3060/6600, not this waste of precious silicon. Actually, both of them are at fault. AMD is as guilty as nVidia. Even worse, because AMD is gouging its trustful fanboys, who make up a much smaller market share and mostly are not as wealthy as Nvidia's. The red flag was when AMD dropped the RX 6500 XT on the market.
For what it is worth, this crap is actually stronger than anything else in its power envelope (and although RT is completely unreasonable on such low-end hardware, it can run any upscaler out there, especially DLSS, but without FG). However, it is priced (and named) with absolute lunacy. RTX 3040, $120 at most.
My old streaming rig had a gtx 1080 so it's still an upgrade xD
Aside from that, not really, no reason to bother. HTPC? Just get a GT 1030, it can decode 4K30 HDR just fine... or just go the iGPU route directly...
Anytime there is an article about low-end GPUs, there are many who fail to understand there are uses for GPUs beyond demanding games, including (but not limited to):
- Replacing defective GPUs in an old system, whether it's an old OS or new. (Ampere is the last generation to support Win 7.)
- Upgrading a GPU to gain newer API support, but not play demanding games.
- Workstations or even servers which have no integrated graphics.
- Adding support for more monitors/high-resolution displays.
- Getting newer codec support (which is almost a must for 4K YouTube), luckily Ampere supports AV1 decode, which is becoming more important.
- Someone building a system for low-end gaming, emulation, etc.
- Anyone who wants a smooth desktop experience without any need for high GPU performance.
I would argue the RTX 4060 ($300) and RX 7600 ($270) fit a similar price point. Yes, that's a little more than the ~$250 you calculated with official inflation, but those inflation estimates from the US and several other central banks are more than a little skewed by subjective metrics. If inflation were calculated the way the US did it in the 1970s (which is the way you probably think it works), the official numbers would be much higher, possibly ~70-80%.
Just look at pricing on most kinds of hardware, especially motherboards and accessories like fans, and in the past 3-4 years cases have become pricey. I'm not excusing Nvidia here, but just ascertaining that everything has gotten a lot more expensive lately (without diving into politics).
However, if you look at the $229 1 GB 460's reviews, the tests are dominated by older games, so it's dishing out over 100 fps in many of them at 1680x1050; but in the newer, taxing games it's getting about 30 fps. In the 4060 review, the card gets ~54 fps at 1080p in 2 games and everything else is over 60. Even at 1440p, only a single game dips below 40 fps.
The 1 GB $229 GTX 460 would be $320 today, making it more expensive than the 4060, and it was slower in the demanding games of its day. The $199 768 MB GTX 460 ($280 in today's money) is behind the 1 GB 460 by 5-10%, making it relatively even slower than the $270 RX 7600 (2% behind the 4060), as another comparison.
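The inflation adjustment behind those figures can be sketched as a simple multiplier; the cumulative CPI multiplier of roughly 1.4 for 2010 to 2024 is my assumption, chosen to match the numbers quoted above ($229 → ~$320, $199 → ~$280):

```python
def adjust_for_inflation(price_usd: float, multiplier: float = 1.4) -> int:
    """Scale a historical launch price by a cumulative CPI multiplier
    (assumed ~1.4x for 2010 -> 2024)."""
    return round(price_usd * multiplier)

print(adjust_for_inflation(229))  # ~321, close to the $320 quoted for the 1 GB GTX 460
print(adjust_for_inflation(199))  # ~279, close to the $280 for the 768 MB GTX 460
```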
The 4060 is at least as solid as the 460. We just have soooo many tiers above the regular '60 level nowadays (8 levels!) and nothing below, so it feels like entry-level. Entry level is now used or previous gen.
Another thing is, in 2010 1920x1080/1200 was about the highest resolution available*. In 2024, that would be 4k.
*just like today, there were higher res monitors, albeit rare/expensive
Shocked that it's not only coming back in 2024, but they're actually going to make it even slower?? (Might as well just start reselling the 2060 again, or just call the 2060 the 3050 6GB.)
www.nvidia.com/en-us/geforce/graphics-cards/30-series/rtx-3050/
Nvidia slow card. Silence.
RX 6500XT close to RX 6600 pricing? "What a garbage card".
RTX 3050 6GB close to RX 6600 pricing? "There is NO RX 6600. RTX 3050 is faster than RX 6500XT, so it is good"!!!!
It's really funny until you realize that this card will become a sales hit, selling to those dreaming "ray tracing from Nvidia". More funny that those will end up using AMD's FSR 3 Frame Generation to somehow get acceptable fps in some games with raytracing enabled.
Comparing this to the current lineup, an RTX 4060 performs at about ~33% of an RTX 4090, and the price difference is huge. The big question then becomes: should we judge the RTX 4060 based on how "weak" it is compared to its bigger brother, or should we consider what kind of gaming experience it will actually offer for the money?
Another important aspect since you're bringing up Fermi is longevity. Fermi wasn't just prone to overheating, it quickly became "obsolete" both due to performance demands, changes in performance characteristics and API support. This was actually pretty much the norm for every generation up to then; if someone bought a pretty good GPU, then 2-3 years later there would be many games they couldn't play or at least play well. And I had many cards in that era, like GTX 680, GTX 580, Radeon HD 4870, GTX 275 (traded), 9800 GT, 7800 GT, 6600 GT(SLI), and many before that all the way back to the Riva TNT2. And all these cards have one thing in common, they become "obsolete" long before they deserved to, either due to performance or API support/features.
But starting from Maxwell (900 series), the cards have had remarkably good mileage in them. Pascal especially (like the GTX 1060 I see in your profile) will probably go down as legendary in terms of long-term value. Granted, the performance difference isn't as great per generation, but the APIs don't change erratically either. Maxwell, Pascal, Turing and Ampere all seem to have lasting value, especially for anyone who bought xx70 models or up. In retrospect it's easy to see that during this time-span buying one or two tiers up would have gotten people more mileage, vs. back in the GTX 460 days when everything was quickly obsolete anyway. It's also worth mentioning that anything from Maxwell onward still has mainline driver support with features and improvements, which would not have been the case for 10-year-old cards back in 2010.
But is it fair to demand 4K from a ~$250 card today?
The main reason the high-end cards today can do 4K well is that they're using massive GPUs, drawing huge amounts of power, that are several "tiers" higher than anything back in the GTX 460 days. We also have higher refresh rates, so the reality is that a lower mid-range card today can usually do 1080p at a good frame rate, or 1440p at a lower frame rate or lower details.
My assessment is that RTX 4060 is a little too poor of a deal, but not terrible.
If I were looking to build a gaming PC today, and wondering how far to stretch my budget (how expensive a GPU I can justify), I would probably go a good step up. The RTX 4060 is already a little weak today; the RTX 4060 Ti is of course better, but the jump up to the RTX 4070 is huge, and especially now with the price cuts, I'd say go for the RTX 4070 or the two compelling models from AMD: the RX 7700 XT and RX 7800 XT. Those three models, I believe, are the "sweet spot" GPUs in terms of getting a great experience today plus some longevity.
Of course, it's pretty clear that those who defend Nvidia are dependent, both financially and maybe ideologically.
It's nice that they do exist, but I would only consider them for a heavy professional workload that justifies that.
As for the realistic options: back with Ampere, the RTX 3060 / Ti models were very well positioned at the time (only you couldn't find them at MSRP back then). Currently the 4060/Ti models fall a little short, but are still decent products in terms of what they offer, and I believe you can at times find them below MSRP.
But I still think RTX 4070 along with RX 7700 XT and RX 7800 XT are the models to consider for a "decent" gaming PC today.
Ti (8 GB), no. I don't have anything against the device itself, it's just horribly overpriced. For $330, would be 10/10.
Ti (16 GB) makes no sense for anyone but those who do some professional workloads where this speed is plenty but 8 GB is "are you kidding me."