
NVIDIA GeForce RTX 5050 8 GB

The small PCB and oversized cooler tell me a couple of things.

As mentioned earlier, it comes in two more variants - dual-fan and low-profile. But it's not that much cheaper. The only thing missing is the single-fan variant.
 
Lots of cheap “gaming laptops” will feature this GPU, but it will probably be marketed as something more than a 5050.
RTX 5050 laptops are already hitting the market, so naming does not seem to be an issue. The mobile chip uses GDDR7 VRAM rather than the GDDR6 of the discrete graphics cards.
 
I wish you guys would do a more in-depth review of video encoding quality between the different GPU vendors. Maybe this thing is worth the $275 just for NVENC encoding and decoding support?
It's annoying that you mention it in the conclusion but have no testing of it.
 
Hmmmm. $250 USD ≈ $340 CAD.

Hmmmm. However...
[attached image: IMG_8489.png]
 
Huge triple-fan cooler on a low-end card, really Gigabyte? This could easily be cooled with a passive cooler.
 
The Colorful Battle-Ax and Zotac Gaming models are selling retail here for a little over USD 300, including 7% tax.
 
5060 is certainly a better price/perf option in the current Blackwell lineup.
True, fully aware of that. One of the reasons I want a slot-powered unit is the PSU in the target system. There is no PCIe cable. I'd have to buy a whole new PSU or SFF system for a 5060 to work. If they can't do it I'd rather do a 3050, but a newer system is tempting me.
I doubt there will be a slot powered variant unless the card is significantly nerfed on clocks/board draw.
I'd bet real money they can pull it off to good effect. If they can do it with the 3050, they can do it with the 5050.
 
People are going to criticize it no matter what. It's got 8GB of RAM. It's too expensive they say. They expect Nvidia to "give away" their products. Not gonna happen.

The vast majority of the PC gaming market is still using 1080p monitors. This will work just fine for most buyers.

I'd also buy Nvidia before AMD simply because I prefer Nvidia's hardware support.

Thus far I've had no issues with my 5090 FE, nor did I have any with my 4090, 3090 FTW3 or my 2080Ti.

I'm willing to pay more for the quality.
 
So $100 cheaper than 4060
Yup, $100 cheaper for basically the same performance, plus DLSS and MFG. Not seeing the huge issue. It's also so much faster than a 3050, the cheapest of which I could find was $260.
 
Huge triple-fan cooler on a low-end card, really Gigabyte? This could easily be cooled with a passive cooler.

What is wrong with people these days? More choice is always better. There are alternatives with one and two fans. Simple.
If someone for some reason wants three fans and a smaller fin stack, why not? Or just three fans for the bling; it's on them.
Even with that stupid price, people pick on the stupidest of things. It's almost like they're doing it on purpose.

You are covered in literal shit from head to toe, but "man, why don't you comb your hair?"
 
I wish you guys would do a more in-depth review of video encoding quality between the different GPU vendors. Maybe this thing is worth the $275 just for NVENC encoding and decoding support?
It's annoying that you mention it in the conclusion but have no testing of it.
There are many tests of gaming performance.
There are few tests of encoders and encoding quality.
Decoder tests simply do not exist.
 
Upscaling lowers VRAM usage, because the game is rendered at a lower resolution. Only FG increases VRAM usage.
However, there is still an increase in usage compared to native resolution without DLSS.
It can also be seen from your tests that the difference in VRAM usage from 720p to 1080p is small, and that what matters is something else, so it is not clear why you are now trying to make it decisive, and always decisive.

The 3060 12 GB is kinda similar to the B580's situation. You do have more VRAM, which will let you run bigger workloads, but your GPU horsepower is so limited that you will never get decent FPS in any of them, so you'll never run them in real life. This means you'll use upscaling for those resolutions, which brings the VRAM usage back down.

Compared to the B580, the 3060 has MUCH less raw GPU perf, which amplifies the problem. For DLSS Transformer upscaling specifically, the performance hit is much smaller on Blackwell than on older architectures, so on the 3060 this will cost you even more FPS, and the extra VRAM won't help you one bit. But the 3060 is $220; I'd still pick the 5050 for $250 (if I had to choose between those two cards).
Tests of similar GPUs should not be done at high/maximum settings. You seem to ignore that the setting that most affects VRAM is texture quality, especially once you exclude RT and other things not suitable for this range.
One thing is clear: with the 3060 you can still keep that setting higher.

The overall weight of a game is not necessarily linked to its VRAM occupation; that depends primarily on the quality of the assets, and, again, with the 3060 you can afford better assets.

There are cases of relatively light games that occupy more than 8 GB of VRAM, just as there are heavy games that might not reach it, so it is not clear what you are forcing here; it seems like a bottleneck, a sort of vade mecum to defend the small quantity...

Playing with settings, there are also cases in which you can aim for 1440p with DLSS, and when you do that and VRAM does not reach the 12 GB threshold, it will work better, because it is predictable that if the upscaler starts with better textures, the result will also be better.

Asset quality is one of the parameters that even "laymen" notice well; in this case, as I said, it has little influence on the overall weight, but VRAM is needed.

Is Stellar Blade a heavy game? No. Is Hogwarts Legacy without RT? Not even, so why treat as an axiom something that is not one?

You are effectively calling W1z a liar. You need to stop. Your shtick is coming off like it's full on mindless drivel.
I think he meant that they are not enough to show the whole picture. If so, I agree. You can't capture VRAM usage with ephemeral 5-minute tests, among other things, without considering the annoyance of abnormal stutters that may have little impact on the overall average while being annoying in practice.
That's without even considering the cases in which textures are not loaded, or are loaded late. In those cases, the average frame rate will matter even less.
 
This is a bad card, and it shouldn't be on the market, partly because of the price. It doesn't make sense, and at a much lower level it's a step back from what this segment used to be. It draws too much power; it should draw 75-100 W max and little at idle. Then it would be good, and then there's the question of price. Even if the price drops, it will never be a good card. The cooling is fine: it's a 130 W card, and good cooling for silence is correct; only the power demand is not. It's strange that at 60 Hz v-sync it draws 131 W while the 5060 draws 77 W. What does that say about this card? Scrap.
 
that the difference in VRAM usage from 720p to 1080p is small

Because it is. 1080p has 2.25× the pixel count of 720p, but both are well within the capabilities of budget hardware today.
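For reference, the exact pixel counts back this up (a quick sketch in Python):

```python
# Exact pixel counts for 720p vs 1080p.
pixels_720p = 1280 * 720     # 921,600 pixels
pixels_1080p = 1920 * 1080   # 2,073,600 pixels

print(pixels_1080p / pixels_720p)  # 2.25
```

So the framebuffer-related part of VRAM usage scales by 2.25×, which on modern cards amounts to a small absolute difference.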

Tests of similar GPUs should not be done at high/maximum settings. You seem to ignore that the setting that most affects VRAM is texture quality, especially once you exclude RT and other things not suitable for this range.
One thing is clear: with the 3060 you can still keep that setting higher.

It has neither the shader power nor the memory bandwidth to keep a stable frame rate with the setting higher, so... I don't see how that invalidates the initial assertion.

The overall weight of the game is not necessarily linked to the occupation of VRAM, that depends primarily on the quality of the assets, and, again, with the 3060 you can afford better assets.

Not necessarily true; it always depends on the title you're running. However, being able to load higher-quality assets, as long as you don't know or care what frame rate means, is not exactly something to gloat about. By the time 12 GB is required, the RTX 3060 has long since stopped being relevant. It's a half-decade-old low-end card.

The combined Ada and Blackwell advancements are well worth those 4 GB of VRAM, not to mention the lower power consumption. I just don't understand this weird defense of the 3060, when it has never, ever managed to be relevant or outperform the RTX 3060 Ti, with 8 GB, in any real world use case.

There are cases of relatively light games that occupy more than 8 GB of VRAM, just as there are heavy games that might not reach it, so it is not clear what you are forcing here; it seems like a bottleneck, a sort of vade mecum to defend the small quantity...

W1zz's suite covers a large number of graphics engines, and even some bad games no one cares about on that merit alone (for example, Veilguard). If it's a vade mecum, it's one that covers almost every use case, as intended?

Playing with settings, there are also cases in which you can aim for 1440p with DLSS, and when you do that and VRAM does not reach the 12 GB threshold, it will work better, because it is predictable that if the upscaler starts with better textures, the result will also be better.

Asset quality is one of the parameters that even "laymen" notice well; in this case, as I said, it has little influence on the overall weight, but VRAM is needed.

Keeping the internal resolutions of DLSS in mind is a careful consideration indeed, but since it lowers the render resolution, it also reduces VRAM usage, making the 12 GB card relatively irrelevant here, not because it's 12 GB but because the core is so much weaker. This is a tradeoff I often deal with when playing on my laptop, where VRAM is in exceptionally short supply. I've a 4 GB RTX 3050M in it, and VRAM is almost never a problem before the core itself becomes an issue when DLSS is involved. If you reduce it to Performance, even games like Metro Exodus will run at medium settings with ray tracing enabled. The Taiga level apart, you can expect a solid 30 fps experience most of the time. There is a single game I have been completely unable to run on it, and that's Monster Hunter Wilds. 4 GB is not enough, regardless of resolution or settings. But that game can and will run on an 8 GB GPU.

Is Stellar Blade a heavy game? No. Is Hogwarts Legacy without RT? Not even, so why treat as an axiom something that is not one?

I've seen my brother playing it on his RTX 3070 and it was running quite decently at 1440p, fwiw. Looks like it's a bit above the pay grade of lesser cards such as these, but no further comment here, I haven't picked it up yet.

I think he meant that they are not enough to show the whole picture. If so, I agree. You can't capture VRAM usage with ephemeral 5-minute tests, among other things, without considering the annoyance of abnormal stutters that may have little impact on the overall average while being annoying in practice.
That's without even considering the cases in which textures are not loaded, or are loaded late. In those cases, the average frame rate will matter even less.

Or... get to playing games, you know, at reasonable settings within what's realistic for the hardware. At the end of the day, pricing is the quibble that leads to these discussions anyway. It's natural to want the most for your hard-earned dollar, but at the same time... it's not even conformism at this point, it is what it is: the wooden level at $250, and there is no one able or willing to step up. Time to accept that reality...
 
I just realized that the 9070XT is only 16% slower than the 5080 at 2.5K now. For the going price of $700-$800, that's a killer deal over the $1300-$1500 5080.
 
I just realized that the 9070XT is only 16% slower than the 5080 at 2.5K now. For the going price of $700-$800, that's a killer deal over the $1300-$1500 5080.
Yes, the RTX 5080's cost per frame is weak, and the RTX 5090's is even worse. The only decent GPU from Nvidia is the RTX 5070 12 GB, where you are not burning money.
 
Yes, the RTX 5080's cost per frame is weak, and the RTX 5090's is even worse. The only decent GPU from Nvidia is the RTX 5070 12 GB, where you are not burning money.
The RTX 5080 was weak at launch even when considering the $999 MSRP. Now it's just plain ridiculous. I mean double the price for barely any additional performance doesn't sound like a good buy to me.
 
I wish you guys would do a more in-depth review of video encoding quality between the different GPU vendors. Maybe this thing is worth the $275 just for NVENC encoding and decoding support?
It's annoying that you mention it in the conclusion but have no testing of it.
This is also of interest to me. The Wikipedia page says:

Eighth generation, Ada Lovelace AD10x

Nvidia announced the next-gen NVENC with an 8K 10-bit 60 FPS AV1 fixed-function hardware encoder in Ada Lovelace GPUs.

Ninth generation, Blackwell GB20X

Introduced support for 4:2:2 chroma subsampling and the AV1 Ultra High Quality mode. It is also said to be 5% more efficient than its predecessor.
https://en.wikipedia.org/wiki/NVENC

Edit: still no single-slot model available that I can find.
 
Yes RTX 5080 cost per frame is weak and RTX 5090 is even way worse than that. Only decent gpu from nvidia is RTX 5070 12GB where you are not burning money.
What's funny is I have a 5070 and a 5090 (a 5070 Ti as well).

Sorry, but 99% of people getting a 5090 aren't giving a shit about cost per frame. They want the best. My 5070 is weak, plain and simple. But nice argument to convince yourself to never get a 5090.
 
Sorry, but 99% of people getting a 5090 aren't giving a shit about cost per frame. They want the best. My 5070 is weak, plain and simple. But nice argument to convince yourself to never get a 5090.
That's great, if you have a 4K 240 Hz screen. Next stop: the RTX 6090 at $3,500.
 
True, fully aware of that. One of the reasons I want a slot-powered unit is the PSU in the target system. There is no PCIe cable. I'd have to buy a whole new PSU or SFF system for a 5060 to work. If they can't do it I'd rather do a 3050, but a newer system is tempting me.

I'd bet real money they can pull it off to good effect. If they can do it with the 3050, they can do it with the 5050.

They can definitely get it to work; it's just that performance would end up in a completely different tier of cards. Kinda like how a mobile 5080 with 60 SMs ends up closer to the 36-SM desktop 5060 Ti.

Yes, the RTX 5080's cost per frame is weak, and the RTX 5090's is even worse. The only decent GPU from Nvidia is the RTX 5070 12 GB, where you are not burning money.

The 5070 Ti is also quite cost effective at MSRP, and you're more or less getting more SM units per dollar relative to the 5070. The card OCs extremely well.

If you don't OC, it has the highest efficiency on average. It's the better 4K card in terms of FPS/$, especially since it's not bottlenecked at 12 GB.

In my opinion, both 70-class variants cover a wide range of gamers and exploit the Blackwell arch effectively per SM count.

The 5060 Ti variants have obvious tradeoffs and downsides.
The 5080 isn't price effective, especially at 16 GB. Too close to the 5070 Ti.
The 5060 is "okay", but they really need a 12 GB variant (3 GB ICs) at a similar $300-330 price point.
The 5050 has the worst SM/$ ratio, but $200-230 would more or less balance it out against other NV cards; at $200 it would top the chart for frames per USD at 1080p.

You buy a 5090 because you want the best; no one is buying that card to be cost effective.
 
What's funny is I have a 5070 and a 5090 (a 5070 Ti as well).

Sorry, but 99% of people getting a 5090 aren't giving a shit about cost per frame. They want the best. My 5070 is weak, plain and simple. But nice argument to convince yourself to never get a 5090.
We were actually talking about the 5080; I noticed you own almost every mid-to-high-end Blackwell card but that one.
 
The 5070 Ti is also quite cost effective at MSRP, and you're more or less getting more SM units per dollar. The card OCs extremely well.

If you don't OC, it has the highest efficiency on average. It's the better 4K card in terms of FPS/$, especially since it's not bottlenecked at 12 GB.
@1080p RTX 5070 Ti is 19% faster than RTX 5070 but costs 51% more.
@1440p RTX 5070 Ti is 23% faster than RTX 5070 but costs 51% more.
@4k RTX 5070 Ti is 27% faster than RTX 5070 but costs 51% more.

The RTX 5070 Ti is not at the same level, definitely not when it comes to cost per frame.
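To make that concrete, here's a quick sketch turning the percentages above into relative cost per frame. The speedup and price figures are the ones quoted in this post, not independently verified:

```python
# Relative cost per frame of the 5070 Ti vs the 5070, from the quoted figures.
# A value above 1.0 means the 5070 Ti costs more per frame than the 5070.
price_premium = 1.51  # 5070 Ti quoted at 51% more than the 5070

speedups = {"1080p": 1.19, "1440p": 1.23, "4K": 1.27}

for res, speedup in speedups.items():
    rel_cost_per_frame = price_premium / speedup
    print(f"{res}: {rel_cost_per_frame:.2f}x the 5070's cost per frame")
```

Even in the best case (4K), the 5070 Ti works out to roughly 19% more expensive per frame than the 5070 at these prices.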
 