
NVIDIA Launches GeForce RTX 5050 for Desktops and Laptops, Starts at $249

Not sure what folks are expecting. Since we love to compare everything to the flagship, let's go ahead and do that. Here's how the 5050 stacks up to the 5090:

[Image: RTX 5050 vs RTX 5090 spec comparison table]


In everything but VRAM, the 5050 is 1/8 of the 5090. Not-so-coincidentally, so's the price. Should the 5050 be less than USD200? Absolutely. But we showed Nvidia that we're perfectly willing to shell out four figures for high-end cards, so the outrage that they'd dare sell entry-level cards at 2-1/2 Benjamins is rather amusing at this point.
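
Just to put rough numbers on that 1/8 claim, here's a quick back-of-the-envelope check. The figures are the commonly reported launch specs, not taken from the attached table, so treat them as assumptions:

```python
# Rough ratio check between RTX 5090 and RTX 5050 using commonly
# reported launch specs (assumed figures, not taken from the attachment).
specs = {
    "CUDA cores": (21760, 2560),   # 5090, 5050
    "VRAM (GB)":  (32, 8),
    "MSRP (USD)": (1999, 249),
}

for name, (rtx5090, rtx5050) in specs.items():
    print(f"{name:12s}: 5090/5050 = {rtx5090 / rtx5050:.1f}x")

# CUDA cores come out around 8.5x, MSRP around 8.0x, VRAM 4.0x.
```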
 
Unspecified ROP count? That sounds shady as heck.
Will this continue the 'missing ROP' tradition on other 5000 series cards? :roll:
 
Really shameless, comparing new cards with those old DLSS-incapable ones. I thought AMD's slides were already quite exaggerated, but Nvidia doesn't even tell us which 20 games these are, and gives no exact fps whatsoever. Where's the 20 series in the graph, huh? Oh, it says 50 million gamers are using Pascal, Turing and Ampere GPUs, so Nvidia knows the 20 series was a disaster, right? And the title claims "more than 50 times" the performance for tens of millions of "gamers", rather than "games". This is a total rip-off.
[Image: NVIDIA RTX 5050 performance comparison slide]


Maybe I can fix this graph. Here it is. Looks a lot better.
[Image: poster's edited version of the graph]
 
I'd consider one at $99 and even then, only for video encoding
 
Ooof those charts are something...something terrible lol
 
Nice benchmarks, Nvidia. Last I knew, the 3050 didn't have frame gen.
 
Skewed graph. Base it on die size and wafer cost vs the previous generation and come back. I'm tired of defending NVIDIA when I have no reason to.
Exactly. The spike in the 30 series was because Samsung was desperate for business and cut Nvidia a good deal (also why we're still seeing so much Ampere silicon coming out, e.g. Switch 2, RTX 2050, RTX 3050 6GB slot-powered, etc.). But 5nm for Ada and RDNA 3 was expensive, and 2nm is so expensive that both AMD and Nvidia elected to refresh on 5nm instead of moving to a smaller node. Also, AMD chose the cheaper N4P option for RDNA 4, while Nvidia chose the 4N option for Blackwell to push clocks and power higher.

Also OP is completely forgetting yields. The smaller the process, the higher the likelihood of faulty dies. The 4090 and 5090 only have about 90% of the full die enabled because the number of "perfect" dies is extremely small, and Nvidia would prefer to use those in $20,000 RTX cards and have a high supply of GeForce cards instead.
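
To make the die-size/yield point concrete, here's a minimal sketch using the classic negative-binomial yield approximation. The defect density and clustering factor are illustrative guesses, not real foundry data, and the die areas are just approximate public figures:

```python
# Minimal die-yield sketch: negative-binomial yield model.
# D0 and alpha are illustrative guesses, NOT real foundry data;
# die areas are approximate public figures.
def yield_rate(die_area_mm2: float, d0_per_cm2: float = 0.1, alpha: float = 3.0) -> float:
    """Fraction of dies with zero killer defects."""
    area_cm2 = die_area_mm2 / 100.0
    return (1 + d0_per_cm2 * area_cm2 / alpha) ** -alpha

for name, area in [("~150 mm^2 (5050-class die)", 150),
                   ("~750 mm^2 (5090-class die)", 750)]:
    print(f"{name}: ~{yield_rate(area):.0%} defect-free dies")

# With these made-up numbers the small die is roughly 87% clean,
# the big one closer to 51% -- hence heavy binning at the top end.
```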
 
Bad card for such a price. In our eyes only, unfortunately.

It's a 5000 series card, the new stuff. Cheapest model available. It has DLSS and MFG. The target customer will happily buy it, fortunately for NVIDIA. It's that simple; I cannot blame NVIDIA for such a move. Their business is to make as much money as possible. If there are buyers, it's a good decision.

From another perspective, to AMD: just make the 9060 non-XT better and cheaper. There is a great number of people willing to buy your product. Just don't make it suck as much as NVIDIA did. Let's see.
 
Underpowered and overpriced. A hopeless product with far better alternatives.
 
A shame they didn't make it weaker. It would've been nice if it was something that didn't need a power connector.

As it stands right now, if their performance chart is correct, it's looking to me like a 4060 with a price cut and some new features.
 
Amazing. I'm especially excited about the RTX 5050 mobile. It has 8 GB of VRAM; low VRAM was a true pain point of those mobile GPUs in the past. Now we have an entry-level GPU with enough VRAM. Nvidia's low end is looking good; shame about their high end, though.

As for the desktop version, it will serve its niche, especially if it is comparable to the RTX 4060; then it will be a great product. What I'm most interested in is a smaller version of this card with a lower TDP, to fit it into low-profile builds with no power cables.

As for all the negativity, it is just people getting old. People have to change their attitudes, or they will be old men yelling at the cloud about how things used to be back in their day.
 
I'm starting to wonder if they actually 'need' that many CUDA/shader cores at the lower end... less compute power will make these less attractive to anyone for any potential crypto/AI BS as well as make a big dent in die size/cost/heat and power requirements.... stick with me on this one...

Going back to their Blackwell release graphics:
[Image: NVIDIA Blackwell architecture release graphic]


Since they've combined the shader core operation types/capabilities into each core (compared to, say, Turing, where they were only FP or INT in a 50/50 split)... why not just drop 50% of them (at this low end)? If you are going to leverage DLSS + FG, you might as well go 'all in' on scaling tech for higher numbers - if you were playing with a low-end RTX 3050/3060/2060 right now you'd probably be grudgingly turning on some form of FSR/DLSS - some games essentially need it right now.

Instead of dropping from 36 to 20 SMs going from the 5060 to the 5050, pack in, say, 28 SMs, but with 50% fewer shader cores per SM while keeping the same number of tensor cores, so you'll have no real drop-off in DLSS feature capability and you maintain relative RT performance scaling per SM count (which at this level is still crap, but Nvidia must RT).
ROPs and TMUs do not take a huge amount of die space in comparison so keep those counts the same per SM to maintain some level of performance - in gaming terms any very compute heavy games will take a hit, but more low-end 'e-sports' type games will likely be fine as they are more bound by rendering performance which the ROPs/TMUs dictate.
At this point, most people / game engines have already likely defaulted to lower detail shader settings anyway.
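
Putting rough numbers on that layout, assuming Blackwell's published per-SM counts (128 FP32 cores, 4 tensor cores, 4 TMUs) and taking the SM counts from the post at face value:

```python
# Back-of-the-envelope comparison of the hypothetical SM layout above.
# Per-SM counts assume Blackwell's published figures; SM counts follow
# the post (36 / 20 / hypothetical 28 "half" SMs).
CORES_PER_SM, TENSOR_PER_SM, TMU_PER_SM = 128, 4, 4

configs = {
    "5060-class (36 SMs, per the post)": (36, CORES_PER_SM),
    "5050 as announced (20 SMs)":        (20, CORES_PER_SM),
    "Hypothetical 28 'half' SMs":        (28, CORES_PER_SM // 2),
}

for name, (sms, cores_per_sm) in configs.items():
    print(f"{name}: {sms * cores_per_sm} shader cores, "
          f"{sms * TENSOR_PER_SM} tensor cores, {sms * TMU_PER_SM} TMUs")
```

The hypothetical part gives up shader throughput versus the announced 20-SM 5050 (1792 vs 2560 cores) but gains tensor cores and TMUs (112 vs 80), which is exactly the trade being argued for.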

Nvidia have done this before with Turing where the RTX TU10x products and GTX TU11x products had different SM/core configurations and capabilities within the same family - but I guess making a more render output based chip might be a bit too radical.
This would also serve as a better basis for people who still want an HTPC GPU, where cut-down dies will do the job and nobody gives a crap about shading capability so long as NVDEC/NVENC work, or who just need a lowest-of-the-low 2nd/3rd output card for banks of displays but want feature/driver parity with the latest products.
 
Exactly. The spike in the 30 series was because Samsung was desperate for business and cut Nvidia a good deal (also why we're still seeing so much Ampere silicon coming out, e.g. Switch 2, RTX 2050, RTX 3050 6GB slot-powered, etc.). But 5nm for Ada and RDNA 3 was expensive, and 2nm is so expensive that both AMD and Nvidia elected to refresh on 5nm instead of moving to a smaller node. Also, AMD chose the cheaper N4P option for RDNA 4, while Nvidia chose the 4N option for Blackwell to push clocks and power higher.

Also OP is completely forgetting yields. The smaller the process, the higher the likelihood of faulty dies. The 4090 and 5090 only have about 90% of the full die enabled because the number of "perfect" dies is extremely small, and Nvidia would prefer to use those in $20,000 RTX cards and have a high supply of GeForce cards instead.

It started with the RTX 20 series, to be fair.

The 20 series was also on an older TSMC 12nm node (7nm was ready by late 2017) and significantly bigger than it should have been due to the RT push and core layout.

If people were to go back over the GTX 600 to GTX 1000 run, they would realize NVIDIA went back to a standard TSMC sizing model when factoring in inflation.

x80 cards ALWAYS fell into the 300-400mm² segment, ignoring RTX 20/30. The exceptions were G80 and GF100 at 500mm²+, which ATI/AMD was actually competing against with smaller 200-300mm² dies. Hilarious to think about in retrospect.

The difference today is that TSMC is charging $20K+ per wafer on 4/5nm, and EE design plus PCB layer count/proper PCIe 5.0 signaling is a lot more expensive these days. The days of $3,000 28nm wafers are over.

People can't even argue that they get more dies per wafer either: a 28nm 400mm² die (GTX 970/980 via GM204) on a 300mm wafer yields the same count as a 5nm 400mm² die (RTX 5070 Ti/5080) on a 300mm wafer. It's just yield rate vs actual TSMC wafer cost these days.
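
As a rough illustration of that point, here's a common dies-per-wafer approximation applied to the 400mm² example, with the wafer prices mentioned above treated as assumptions (yield and edge-exclusion details ignored):

```python
import math

# Rough dies-per-wafer approximation (ignores yield, scribe lines,
# and edge-exclusion details).
def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

die = 400.0  # mm^2, the ~GM204 / GB203-class example above
dpw = dies_per_wafer(die)
for node, wafer_cost in [("28nm (~$3,000/wafer)", 3000),
                         ("4/5nm (~$17,000+/wafer)", 17000)]:
    print(f"{node}: ~{dpw} candidate dies/wafer, "
          f"~${wafer_cost / dpw:.0f} of raw silicon per die")

# Same ~143 candidate dies either way; the raw silicon cost per die
# jumps from roughly $21 to roughly $119 purely from the wafer price.
```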

AMD's Navi 48 (9070 XT) competes in the same segment as the 5080, but doesn't hit $600 MSRP without kickbacks from AMD themselves. NVIDIA has significantly higher margins due to order size. Market price on a full-die 64 CU 9070 XT is the same as a defect-harvested 70-of-84-SM GB203... $700-800 USD in US shops.

To go back to what I was saying about a SKEWED graph: the RTX 5090 is 750mm² on a relatively CURRENT node. Not quite N4P, but close enough. The closest thing in size was the RTX 2080 Ti, but on an older and cheaper TSMC 12 process ($4,000?). Sauce: https://www.tomshardware.com/news/t...aled-300mm-wafer-at-5nm-is-nearly-dollar17000

Either way, that's my rant.

Edit: Wanna know why people are mad? Rasterization is on the back burner for NV. SM/CU counts have increased significantly and are priced accordingly vs legacy generations.

The GTX 1080 @ $599 ($800 adjusted for inflation) only had 20 SMs. The current RTX 5080 has 84... $999 reference model.

On topic for this thread, the RTX 5050 should perform close to the GTX 1080 from 2016. Similar 320 GB/s bandwidth, same 20 SM count. Seems to be spec-linear, though FP32 is probably lower.
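
The bandwidth match checks out if you run the numbers, assuming the commonly reported memory configs (256-bit 10 Gbps GDDR5X on the 1080, 128-bit 20 Gbps GDDR6 on the 5050):

```python
# Memory bandwidth check: bus width (bits) * data rate (Gbps) / 8 = GB/s.
# The configs below are the commonly reported ones and are assumptions here.
cards = {
    "GTX 1080 (256-bit, 10 Gbps GDDR5X)": (256, 10),
    "RTX 5050 (128-bit, 20 Gbps GDDR6)":  (128, 20),
}
for name, (bus_bits, gbps) in cards.items():
    print(f"{name}: {bus_bits * gbps / 8:.0f} GB/s")

# Both come out to 320 GB/s: half the bus width at twice the data rate.
```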
 
On topic for this thread, the RTX 5050 should perform close to the GTX 1080 from 2016. Similar 320 GB/s bandwidth, same 20 SM count. Seems to be spec-linear, though FP32 is probably lower.
Nvidia has it matching the 3060, which was much closer to the 1080-Ti than the 1080 in performance. That seems to pass the "sniff test," at least. The core count might be the same, but the 5050 has much higher clock speeds than the 1080. We'll have to wait for TPU benchmarks to see for sure, though.
 
The key issue behind high prices is the TSMC monopoly. Nobody can build a high-performance chip without accepting their pricing. That pricing will inevitably be passed down to you. This is why Intel 18A should be everyone's hope and dream. Samsung should also get their issues fixed up and deliver something competitive. Next year, manufacturing options will become a lot better. However, the next generation is likely already planned on some existing TSMC 3 nm FinFET (N3) node, which is going to retain elevated pricing but will bring a good performance jump. We are also only getting a second-hand node, because the main players are already moving away from it to 2 nm, so we can have cheaper leftovers.

I predict that AMD and Nvidia will land there with some variation of a specific node type. Intel might finally release their next-gen cards on the 18A node, which would be a massive win for them. Things finally seem to be moving in a better direction again after a long winter.
 
Remember guys, there's still a massive market of people with low-end 10 and 16 series GPUs. Take a look at how many GTX 10, 16, and even GTX 900 GPUs are still on the Steam Hardware Survey. The 1650 is the 4th most popular card, the 1060 is the 12th, and the 1660 Super is the 14th (discrete GPU). Even the 1050-Ti still has a 1.7% market share.

This GPU is intended for those people, to give them a decent performance boost (~3060-level performance is believable, and that would be a significant upgrade over all the 1650s, 1060s, etc.) and give them good DLSS. Even for people with 2060s and 3050s, those GPUs have paltry AI capabilities. Turing and Ampere Tensor cores don't go below FP16 for floating-point math, so Blackwell's FP4 support will be a significant improvement. And for people with a 1060 who have to make do with FSR, getting the FP4 DLSS Transformer model and some frame generation will make a world of difference.

The most important thing Nvidia can do for these people is ensure a worldwide, low-cost supply of the 5050 to OEMs. If the 5050 can start appearing in $500 prebuilts with a 12100F, it's going to make a lot of people happy.
 
Even as GPU upgrades, old CPUs are easily bottlenecked by new GPUs. I have an i5-4670, and when I upgraded from a GTX 760 to a GTX 1060 6 GB, it was fully utilized: CPU and GPU running at 100%. For an old motherboard and CPU it doesn't take much to hit a bottleneck, so recommending a 60-class GPU is out of touch and amounts to giving people what they don't need. For those cases you need the weakest and cheapest gaming GPUs available, as the rest of the system simply can't handle more. And people often aren't looking to spend a lot of money either, nor do they have high expectations. I feel that tech YouTubers and tech enthusiasts simply lack perspective and a grasp of what users really need and want.

It was also why my home PC got an RX 6500 XT: CPU bottleneck, and wanting to spend as little as possible on a barely used gaming PC. Those parts serve a very important niche, but people try to bend every segment to themselves. The low end has to provide the best value per dollar; the 90 class is too expensive and those buyers can't afford it. PC enthusiasts' view of the GPU market is very narrow.

I'm personally most excited about the RTX 5050 mobile. Last-gen Nvidia GPUs were often quite crappy. They are still crap at the high end. However, it is the 50-class tier where I find myself recommending most laptops to people. One such laptop also saved my skin back in the day when I was a poor student.
 
Remember guys, there's still a massive market of people with low-end 10 and 16 series GPUs. Take a look at how many GTX 10, 16, and even GTX 900 GPUs are still on the Steam Hardware Survey. The 1650 is the 4th most popular card, the 1060 is the 12th, and the 1660 Super is the 14th (discrete GPU). Even the 1050-Ti still has a 1.7% market share.

This GPU is intended for those people, to give them a decent performance boost (~3060-level performance is believable, and that would be a significant upgrade over all the 1650s, 1060s, etc.) and give them good DLSS. Even for people with 2060s and 3050s, those GPUs have paltry AI capabilities. Turing and Ampere Tensor cores don't go below FP16 for floating-point math, so Blackwell's FP4 support will be a significant improvement. And for people with a 1060 who have to make do with FSR, getting the FP4 DLSS Transformer model and some frame generation will make a world of difference.

The most important thing Nvidia can do for these people is ensure a worldwide, low-cost supply of the 5050 to OEMs. If the 5050 can start appearing in $500 prebuilts with a 12100F, it's going to make a lot of people happy.
$250 in the US means €300 in Europe, without taxes. Still way too expensive for a lot of people out there, especially considering its beyond-crappy performance and specs.
Even at €200, this POS is way too expensive.
 
$250 in the US means €300 in Europe, without taxes. Still way too expensive for a lot of people out there, especially considering its beyond-crappy performance and specs.
Even at €200, this POS is way too expensive.
For discrete purchases, maybe. But I think barely anyone will buy this as a discrete GPU, outside of narrow use cases like HTPC or emulation PCs. The OEM and prebuilt market is much, much more important, and Nvidia should be more flexible with pricing there. Based on the language of the announcement slides, they're explicitly pushing this as the upgrade to the vast number of people still on GTX 900, 10, and 16 GPUs. And those people are probably mostly using prebuilt PCs.

Also, this comes at the same time as Windows 10 is losing support. If you bought a prebuilt in 2017 with a 7th gen Intel CPU and a GTX 1060, this gives you another reason to upgrade.
 
Good grief with the whining.. It would seem many are missing the point of having lower end GPUs. It's like people can't understand the full scope of market needs.

This GPU fills a market sector that needs options. Anyone failing to understand that is only doing themselves a disservice.

Overpriced waste of sand.
Much like your comment..

Not sure what folks are expecting. Since we love to compare everything to the flagship, let's go ahead and do that. Here's how the 5050 stacks up to the 5090:

[Image: RTX 5050 vs RTX 5090 spec comparison table]

In everything but VRAM, the 5050 is 1/8 of the 5090. Not-so-coincidentally, so's the price. Should the 5050 be less than USD200? Absolutely. But we showed Nvidia that we're perfectly willing to shell out four figures for high-end cards, so the outrage that they'd dare sell entry-level cards at 2-1/2 Benjamins is rather amusing at this point.
Someone who did the math and understands market economies. :thumbs_up:
 
$250 in the US means €300 in Europe, without taxes. Still way too expensive for a lot of people out there, especially considering its beyond-crappy performance and specs.
Even at €200, this POS is way too expensive.

Actually, European prices are around a 1:1 conversion. You have to multiply the MSRP by the exchange rate and then add sales tax. In EU regions where sales tax is high it costs more, but in some regions it can be about the same.
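
A tiny worked example of that conversion; the exchange rate and the VAT rates below are illustrative assumptions:

```python
# Worked example of US MSRP -> EU shelf price: MSRP * exchange rate * (1 + VAT).
# The exchange rate and VAT figures are illustrative assumptions.
msrp_usd = 249
usd_to_eur = 0.92          # assumed exchange rate
for country, vat in [("Germany", 0.19), ("Finland", 0.255), ("Luxembourg", 0.17)]:
    price_eur = msrp_usd * usd_to_eur * (1 + vat)
    print(f"{country} (VAT {vat:.0%}): ~{price_eur:.0f} EUR")

# With these assumptions the shelf price lands roughly between 270 and 290 EUR.
```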

Then there is the issue (or perk) that Nvidia availability is often better. You can find plenty of stock, and Nvidia puts its GPUs at 'competitive prices'. In this case, they just mirror the MSRP in Europe too. For example, 60-class cards always start at 300 euros at the cheapest, and then you pay whatever percentage your retailer wants on top of what they get. Though that is not the case for their high end, which is often slightly more expensive than it should be.
 
This GPU fills a market sector that needs options. Anyone failing to understand that is only doing themselves a disservice.
Idk man, I'm a random guy on the internet and I think the $3 trillion company is totally wrong about the largest market for gaming GPUs. They're just intentionally wasting TSMC 4N wafers to make a GPU that nobody will buy, instead of making a bazillion $ by turning them into GB200 chips. It's obvious they haven't done any market research or anything, there's absolutely 0 logic behind their decisions.
 
The low-end market needs affordable options; this just isn't it at $250, and it still needs a PCIe power connector.
Although, it's Nvidia, so people will defend this as somehow being a "budget" option. The budget market needs a $100-150 card.
 