
NVIDIA GeForce RTX 5060 8 GB

So with all this talk about the 8 GB on this card not being enough, how come the 5060 Ti 16 GB gets ONLY like 12 and 14 more fps in Cyberpunk and Elden Ring, for example. I don't get it. When does the double VRAM start to make a difference?
The 16GB card is able to stretch its legs to its full potential, whereas the 8GB card cannot. That's my personal problem with this design; it's just weird. The card is stronger than what it's limited to. This isn't just about gaining 10 more fps, but also about not crashing and getting better 1% lows.

There should never be a scenario where a 2-generations-old 3060 12GB can handle a workload that a 5060 cannot, but such scenarios exist if you didn't buy the 16GB model. Likewise for AMD, for the record. The 8GB XT is weird also.

Billion dollar companies doing billion dollar things. :rolleyes:
 
I don't think this has been mentioned in this thread, but one of the other big issues with Nvidia right now is their removal of PhysX support in the 50xx series. So not only are we dealing with cards that have issues with power connectors, bad firmware, bad drivers, and bad pricing, but now we have also lost performance in older games. My back catalogue is huge. I don't need to lose performance in older games. I have heard some users are getting around this by buying old 1030 cards just for PhysX. But I shouldn't need any added card.
 
Because it's the market leader who always sets the trend for the rest of the market.
Did Intel set the trend in the x86 CPU market, or has AMD done their own thing?

I don't think this has been mentioned in this thread, but one of the other big issues with Nvidia right now is their removal of PhysX support in the 50xx series.
It's been mentioned ad nauseam in every other Nvidia product thread, and regurgitating it in this one adds no insight or value to the conversation.
 
Did Intel set the trend in the x86 CPU market, or has AMD done their own thing?


It's been mentioned ad nauseam in every other Nvidia product thread, and regurgitating it in this one adds no insight or value to the conversation.
It does though, because playing older games is more important to users in the budget bracket who know they can't play new games with cranked settings. The last thing they can afford is to lose value or get worse performance in older games. It is a double whammy, really.
 
Double regurgitation. I hope you’re in the bathroom and not making a mess in the family room.
 
Double regurgitation. I hope you’re in the bathroom and not making a mess in the family room.
So we're at the end of useful discussion then?
 
Here you go. Your prayers might be answered after all:
"Nvidia should just go off to make AI & datacenter cards"

It is wishful thinking that you, YouTubers, or buyers can dictate what cards those companies make. First of all, there are manufacturing reasons why we got so little VRAM from Nvidia and why the performance uplift is so small. Since it is the same node, Lovelace and Blackwell should be treated as one big generation of GPUs that was constantly refreshed and upgraded over its lifespan.

You must've missed my post where I said:

It's been done before. Look at the 780 Ti vs the 980 Ti. The 980 Ti is much faster despite being on the same 28nm node. Zen 3 was much better than Zen 2 despite both being on 7nm. I'm sure it isn't easy, but there is proof it can be done.

Nvidia released a lackluster generation because their focus (and most of their revenue) is elsewhere.
 

Is this accurate? Seems strange that the 4060 would run better than the 5060 Ti (Page 36, 1080p Alan Wake 2)
 
It does though, because playing older games is more important to users in the budget bracket who know they can't play new games with cranked settings. The last thing they can afford is to lose value or get worse performance in older games. It is a double whammy, really.
So that explains why Nvidia was heavily outselling AMD. Because PhysX was very important.
 
Outside of a few games, PhysX was useless. CPUs got powerful enough to handle it.
Yeah, but now that Nvidia removed 32-bit support, it suddenly became important. It specifically became important for AMD buyers, though, who never had access to it anyway. I'm trying to wrap my head around it.
 
waste management isn't ideal and at some point you run out of your 8 GB buffer and the game goes from butter smooth to shaky at best
Oh, I see! That explains the stutters and freezes I'm all too familiar with on my 2 GB GTX 960... Also, I see it's already mentioned on the Value and Conclusion page: I should've read the whole review carefully instead of jumping straight to the benchmarks.
 
You must've missed my post where I said:



Nvidia released a lackluster generation because their focus (and most of their revenue) is elsewhere.

Mentioning the GTX 700 and GTX 900 generations only proves my point. The difference between the 760 and the 960 was just 10%, or several frames. Sure, there was the good GTX 970 in that generation, and we saw improvements in cost and power efficiency. However, it was a very lackluster generation when it came to performance gains, and where it did gain performance, it was at the cost of bigger dies. We saw the same thing with the RTX 5090. Instead of power efficiency, we saw improvements in software support; tensor cores increased significantly, allowing more AI stuff to happen. Pricing decreased in some places, and in general we saw the introduction of much better GDDR7 memory and a PCIe Gen 5 interface. It is a lot of small improvements everywhere, so it isn't like Blackwell isn't holistically better than Lovelace.

I predict that we might see something similar to what we saw with the GTX 600/700/900 generations. People easily forget what kind of dog years those were for GPU development, when we were stuck on the same node. We are stuck on one now for two generations instead of three. Nvidia is also in the same position of dominance, with no competition from AMD. I'm predicting that Rubin will use a 3 nm node and the next generation will gain a lot more performance than we have seen in recent years. TSMC's N3E node offers nearly 50% greater transistor density than the 4N FinFET node Blackwell uses, and up to 35% better power efficiency. Just switching to that node could yield up to 40% more performance with significantly reduced power consumption.

Though, that is the most optimistic scenario. It also depends on where Nvidia allocates this silicon. If they are going to add an integrated CPU into their GPUs, that alone might use up a considerable portion of the die space and increase pricing. In other words, AI, AI, AI might eat up the margin for pure performance. I personally would like to see that performance going into considerably beefed-up RT cores. A lot of people would rather see the quantity of tensor cores stagnate and the freed-up space go to more CUDA cores. We are likely to see a 30% increase in performance from a new node and hopefully another 10% from architectural changes. With that, an RTX 6080 could match RTX 5090 performance if they increased the die area to around 434 mm²! It would be slightly below it, and a binned Ti model would be above it! A man can dream...
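Just to sanity-check my own guesses, here is a rough back-of-the-envelope sketch of how those numbers compound. The 30% node gain, 10% architecture gain and ~434 mm² die are my speculation from above; the RTX 5080 die size and the linear area scaling are extra assumptions, not figures from the review.

```python
# Back-of-the-envelope compounding of the guesses above. Assumptions, not
# review data: RTX 5080 die ~378 mm² (GB203) and linear scaling of
# performance with die area, which real GPUs never quite reach.

node_gain = 1.30   # ~30% from a 3 nm-class node (my guess)
arch_gain = 1.10   # ~10% from architectural changes (my guess)
die_now   = 378.0  # assumed RTX 5080 die area, mm²
die_next  = 434.0  # die area I floated above, mm²

same_die   = node_gain * arch_gain              # ~1.43x an RTX 5080
bigger_die = same_die * (die_next / die_now)    # ~1.64x, best case

print(f"same-size die: ~{same_die:.2f}x")
print(f"{die_next:.0f} mm² die: ~{bigger_die:.2f}x (if scaling were linear)")
```

Whether ~1.6x of an RTX 5080 actually lands in RTX 5090 territory depends on how far short of linear the scaling really falls.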
 
where it did gain performance, it was at the cost of bigger dies.
GTX 780: 561 sq mm; 250 W TDP.
GTX 970, a faster card: 398 sq mm; 150 W TDP.
Apples-to-apples: GTX 980 Ti with 601 sq mm and 250 W TDP. And... more than 50% bonus performance.

Maxwell was a real step up even if we completely ignore the fact both these generations used the same node.
 
Mentioning the GTX 700 and GTX 900 generations only proves my point. The difference between the 760 and the 960 was just 10%, or several frames. Sure, there was the good GTX 970 in that generation, and we saw improvements in cost and power efficiency. However, it was a very lackluster generation when it came to performance gains, and where it did gain performance, it was at the cost of bigger dies. We saw the same thing with the RTX 5090. Instead of power efficiency, we saw improvements in software support; tensor cores increased significantly, allowing more AI stuff to happen. Pricing decreased in some places, and in general we saw the introduction of much better GDDR7 memory and a PCIe Gen 5 interface. It is a lot of small improvements everywhere, so it isn't like Blackwell isn't holistically better than Lovelace.

You are comparing the low-end cards, the same low-end cards where Nvidia today provides a very small improvement. In the higher-end cards, back then just like today, the difference is more pronounced.
 
Is this accurate? Seems strange that the 4060 would run better than the 5060 Ti (Page 36, 1080p Alan Wake 2)
Retested several times. The memory management is probably different on Blackwell, probably better optimized for scenarios with lots of VRAM, could be a driver optimization thing, too.
 
GTX 780: 561 sq mm; 250 W TDP.
GTX 970, a faster card: 398 sq mm; 150 W TDP.
Apples-to-apples: GTX 980 Ti with 601 sq mm and 250 W TDP. And... more than 50% bonus performance.

Maxwell was a real step up even if we completely ignore the fact both these generations used the same node.

An apples-to-apples comparison would be GPUs with similar die sizes and intended roles.

GTX 780 Ti: 561 mm²; 250 W TDP.
GTX 980 Ti: 601 mm²; 250 W TDP.

After looking it up, the 980 Ti has 40% more performance despite having the same TDP and being only 7% bigger. I also looked at the RTX 4090 vs the RTX 5090. Despite a much bigger die and price, the performance gain is about 30%. I would say that is in line with what we should historically expect, especially considering all the AI and RT hardware that takes up space on the dies. If we compare workloads where it makes sense, the RTX 5090 shows a 40% increase in path-traced games, up to 2.5x in synthetic results, and over 60% gains in various VR games. This card is awesome, and it is telling that we need the 90-class cards for the next frontier of gaming.
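To put that in per-mm² and per-watt terms, here is a quick sketch using only the figures above; the performance number is an index relative to the 780 Ti, not taken from any specific benchmark run:

```python
# Normalising the same-node comparison above: GTX 980 Ti vs GTX 780 Ti,
# ~40% more performance, same 250 W TDP, ~7% larger die.
# Performance is an index (780 Ti = 1.00), not a measured result.

kepler  = {"perf": 1.00, "die_mm2": 561, "tdp_w": 250}  # GTX 780 Ti
maxwell = {"perf": 1.40, "die_mm2": 601, "tdp_w": 250}  # GTX 980 Ti

per_area_gain = (maxwell["perf"] / maxwell["die_mm2"]) / (kepler["perf"] / kepler["die_mm2"])
per_watt_gain = (maxwell["perf"] / maxwell["tdp_w"])   / (kepler["perf"] / kepler["tdp_w"])

print(f"perf per mm²:  +{(per_area_gain - 1) * 100:.0f}%")  # ~+31%
print(f"perf per watt: +{(per_watt_gain - 1) * 100:.0f}%")  # ~+40%
```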

If we take dies which retained similar dimensions through generations:
GTX 660 Ti: 294 mm²; 150W; 300 USD.
GTX 760: 294 mm²; 170W; 250 USD.

Here we get a performance increase of under 10%, with the same issues as today. In some games, these GPUs perform identically! So, about a 20% improvement in cost and a 10% improvement in performance. Identical to what we saw with the RTX 80-class cards these past two generations!

If we look throughout the stack, it is more difficult, because people compare the Super series with the RTX 5000 series. We didn't have that back then. If we take the original RTX 4080 and the RTX 5080, there is an improvement of about 15% in performance and about a 20% reduction in price. If we take the original RTX 4070 and the RTX 5070, there is an improvement of about 20% in performance and a reduction of about 10% in price. The die also shrunk, and Nvidia managed to get more performance out of their custom 4N node despite its lower density.

Again, back then we had more impressive gains, but they are still comparable to what we are getting today. People act like Nvidia is doing something criminal, but they ignore lackluster generations like the jump from the GTX 760 to the GTX 960. And the reason why we are not seeing the same growth is here:

GTX 770: 294 mm²; 230W TDP; 400 USD.
GTX 970: 398 mm²; 150W TDP; 330 USD.

We got a 40% increase in the cost efficiency of the die, without accounting for inflation, more VRAM, or anything else that was added to the GPU. This is the crux of the issue. Back then, silicon was becoming faster and cheaper. If we didn't get faster silicon, we got it cheaper. Now silicon is becoming faster and more expensive. This is why we are unable to achieve significant performance gains by supersizing dies. And when compared across dies of similar size, the performance gain then is similar to the performance gain now, especially keeping in mind that we now have RT cores, tensor cores, and cache all competing for space on the die.
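One way to read that 40% figure is MSRP dollars per mm² of die, using only the numbers above; it is a crude proxy that ignores inflation, VRAM and everything else on the board:

```python
# MSRP dollars per mm² of die for GTX 770 vs GTX 970, from the numbers above.
# Crude proxy for "cost efficiency of a die": ignores inflation, VRAM,
# board components and actual wafer costs.

gtx_770 = {"die_mm2": 294, "msrp_usd": 400}
gtx_970 = {"die_mm2": 398, "msrp_usd": 330}

usd_per_mm2_770 = gtx_770["msrp_usd"] / gtx_770["die_mm2"]  # ~1.36 $/mm²
usd_per_mm2_970 = gtx_970["msrp_usd"] / gtx_970["die_mm2"]  # ~0.83 $/mm²

drop = 1 - usd_per_mm2_970 / usd_per_mm2_770
print(f"$ per mm² fell by ~{drop * 100:.0f}%")              # ~39%
```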

You are comparing the low-end cards, the same low-end cards where Nvidia today provides a very small improvement. In the higher-end cards, back then just like today, the difference is more pronounced.

But people are acting like the RTX 5060 is a scam, when it is quite typical of Nvidia even back in their glory days. So, do they have any argument to complain at all? All I'm hearing is: "Nvidia should give us more, because we say so." Not to mention that they systematically ignore inflation. The GTX 660 Ti MSRP back in 2012 was 300 dollars. That is equivalent to about 420 dollars today, which lands squarely on the RTX 5060 Ti 16 GB. Yet people are adamant about what they should get for 300 bucks, despite the massive currency devaluation that happened after COVID, or simply that almost a decade and a half has passed since. In this comment section alone, I have read multiple times that people think this card is fine for 200 dollars max. Are they out of their minds? It is not 2012 anymore!
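The inflation math I'm relying on is simply the 2012 MSRP multiplied by a rough ~1.4x cumulative US inflation factor; that factor is my approximation, not an official CPI figure:

```python
# The inflation argument above, spelled out. The ~1.4x cumulative US
# inflation factor for 2012 -> today is an approximation, not official CPI data.

msrp_2012 = 300          # GTX 660 Ti launch MSRP, USD
inflation_factor = 1.40  # rough cumulative US inflation since 2012

print(f"~${msrp_2012 * inflation_factor:.0f} in today's money")  # ~$420
```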
 
If we take dies which retained similar dimensions through generations:
GTX 660 Ti: 294 mm²; 150W; 300 USD.
GTX 760: 294 mm²; 170W; 250 USD.
Do you realise the GTX 760 is just GTX 660 Super? They're exactly the same architecture.
 
Oh yes. However, how exactly does that matter? We had the same situation back in the glory days too: stuck on the same node with constant refreshes. We spent over 3 years without significant improvements and 3 generations re-releasing the same stuff. The same has happened now. Lovelace offered a great generational improvement at the high end over Ampere. The Super series fixed some cards. Then Blackwell struggles to offer meaningful performance gains without supersizing its dies. We also have to remember that back then GPUs were simpler and advanced much faster. Now they are a lot more complex, and the release cycle is naturally slower.

It is just cherry-picking when reviewers compare Super cards to Blackwell and complain about lackluster improvements, but then we can't look at history and point out that the exact same thing happened back then too. Nvidia could not give us meaningfully more than it does today. The only good GPUs would be ones with significant die-size increases, and that is only feasible when the cost of wafers drops significantly. We don't have that situation today. Nvidia built its custom node for Blackwell, and advanced chips are in high demand. As you can guess, AI is to blame, and we still don't see returns on newly constructed fabs. Until we gain more production capacity and the AI bubble pops, things will stay like they are. We should just be happy that supply wasn't as decimated as during the crypto eras. At least now we can buy average to sub-average GPUs and, in my region, at MSRP.
 
The same had happened with the Maxwell GTX 970.
I dunno how much more meaningful you want it to be with that one; it literally offered more than the GTX 780 at 60 percent of the TDP. And Maxwell GPUs overclocked like crazy, significantly better than the Kepler ones.

Imagine a 5060 Ti, a 180 W GPU, besting the 4080, a 320 W GPU, in current games, and beating it like it's nothing in not-yet-existing DirectX 13 titles. That is how much better Maxwell was compared to Kepler.

Now we have a total lack of incentive (AMD hasn't released anything disruptive for way too long), which lets Nvidia get away with making nothing special. They were literally granted permission to do whatever they want. And they just made a lame AI excuse and stuck to it.
 
Did Intel set the trend in the x86 CPU market, or has AMD done their own thing?
Uhh, yes, Intel set the trend... AMD literally started off as a company that reverse-engineered and cloned Intel's CPU designs. So what's your point, exactly?
 
I just looked at GTX 970 performance in benchmarks, and it was matching the GTX 780. In some games it even underperformed, while in most it was just slightly faster, with the same margins as we see today. I don't see it the way you put it, RTX 5060 Ti > RTX 4080. I see it more as RTX 5070 >= RTX 4070 Ti. It is not as great as it used to be, but it is comparable. The difficulty now is that we don't have an equivalent GPU comparison.

The GTX 970 had 74% of the transistors of the GTX 780 while offering equal or slightly better performance, at 51% of the price.
The RTX 5070 has 86% of the transistors of the RTX 4070 Ti while offering equal or slightly worse performance, at 68% of the price.
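Treating performance as roughly equal in both pairs, the two ratios above can be collapsed into "transistors per dollar", which is the cleanest way I can think of to compare them:

```python
# "Transistors per dollar" relative to each card's predecessor, using only
# the two ratios above and treating performance as roughly equal in both pairs.

pairs = {
    "GTX 970 vs GTX 780":      {"transistor_share": 0.74, "price_share": 0.51},
    "RTX 5070 vs RTX 4070 Ti": {"transistor_share": 0.86, "price_share": 0.68},
}

for name, p in pairs.items():
    silicon_per_dollar = p["transistor_share"] / p["price_share"]
    print(f"{name}: ~{silicon_per_dollar:.2f}x the transistors per dollar")
# ~1.45x then vs ~1.26x now: smaller, but the same ballpark.
```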

It is not as great as it used to be, but I don't think it is incomparable. Especially considering that these days, die space is tied up by ray tracing and tensor cores, and all those cores need more data, which takes up space as cache. We should also naturally expect the progress of any technology to slow down and the differences between products to become ever smaller. The same thing happened with smartphones. Each new generation was a revolution over the last. Now we are getting the same AI stuff and minor refinements each generation.

As a thought experiment, I think that in order to match the good old days, the RTX 5070 should cost 499 dollars MSRP, and we could call it even. I think Nvidia can do it; they have enough margin to lower it to that price point. The reason we are not seeing it is that they have no competition. AMD decided to just inflate the cost of their GPUs. So Nvidia, out of the kindness of their hearts, doesn't want AMD to be completely bankrupted by their greatness. So for the good of everyone, they are forced to sell their GPUs at elevated prices. ^_^
 
I just looked at GTX 970 performance in benchmarks, and it was matching the GTX 780.
4K. 17% advantage on average.


This isn't much, sure, but it's still faster than the 780.

In DX12 titles, it's up to 2x faster.
 
Why are you giving me 4K data, where those cards are getting tortured? Back then, 1440p was high end, and for 4K you realistically needed SLI. Look at benchmarks from back then. You can see that in Crysis 3 it even loses. You can see that it ties in a lot of games. So, it is not much faster than the GTX 780. In a select few titles it is slower, it ties within a few percentage points in a lot of games, and it is faster by up to 15% in the best-performing games.
 