
RTX 5060 8GB performance

The Bugatti Chiron is faster than a Ferrari. That doesn't mean that a Ferrari is slow.


That could be the difference in core config, too (3840 vs 3072 cores). If it was only the more modern VRAM, then we'd see the same increase all across the board.

Edit: I find it more likely that we don't see an increase where VRAM is the issue, and we do where it isn't, and the higher core count gets its chance to shine.
Cars don't get old, cards do. Cars you don't buy for their performance, cards you do. A 30-year-old GPU is useless, a 30-year-old Ferrari is a $2 million classic. Come on now...

Anyways, unless it's cheap as heck, I'd try to go with GDDR7. It's only a handful of games today that take advantage of it, but in the next 2 years, who knows.
 
Cars don't get old, cards do. Cars you don't buy for their performance, cards you do. A 30-year-old GPU is useless, a 30-year-old Ferrari is a $2 million classic. Come on now...
Don't shift the goalpost, I was talking about speed, not age or price or collector value.

Anyways, unless it's cheap as heck, I'd try to go with GDDR7. It's only a handful of games today that take advantage of it, but in the next 2 years, who knows.
Memory technology doesn't matter. Bandwidth combined with capacity and GPU speed does. I'd much rather choose 16 GB GDDR6 than 8 GB GDDR7. When that 8 GB overfills, the card will have to swap data through the PCI-e bus, which is infinitely slower than any GDDR6 config.
 
Don't shift the goalpost, I was talking about performance.
I am talking about performance too. 20-year-old supercars are still extremely fast, much faster than 99.9% of cars in existence. That is not the case with GPUs. The fastest 5-year-old GPU is at best middle of the pack.

I was comparing cards with equal amounts of VRAM. You are shifting it to 8 vs 16. Well, regardless of the amount, I'm saying buy GDDR7. If you need 8, buy 8 GB of GDDR7, as it's clear it's much faster, with the 4060 getting it handed to it by the 5060. Heck, the 5060 is getting close to the 4070, which has way more CUDA cores.

If you need 16, buy 16 GB of GDDR7. Why settle for slower, is my point, when you are overpaying for GPUs anyway. It will just improve longevity and resale value; don't settle for slow GDDR6.
 
Holy shit. Last time I was logged in there was just my first post in the thread.

It's probably a PCIe interaction with the test system - RTX 5000-series GPUs are known to have issues on many Gen 5 motherboards at Gen 5 speeds.


der8auer had the same issues - tuning BIOS settings / falling back to PCIe 4 fixed them.
Seems plausible. The RTX 4060 (Ti) has a PCIe x8 interface as well, but Gen 4.0. Yet, there are no similar performance problems.
I, too, think that Nvidia messed something up with the PCIe implementation in Blackwell, and this caused a lot of problems.
It's perfectly clear now that motherboard makers were not the problem.
Why? Because no similar problems occur on the RX 9000 series, which also leverages PCIe 5.0.

This is something they can't fix with drivers, I guess. It's obvious they're trying to fix it, since there's a new driver every week or so now,
but the fact is, whenever they fix something for the RTX 5000 series, it breaks something for older generations (mainly RTX 4000).
Nvidia themselves told RTX 4000 owners to stick to older (566.xx, if I remember correctly) drivers.

Something in the design was changed and it causes problems.
Maybe they implemented something that works well for AI workloads, but it turned out to be a problem for gaming.
This is most probably the reason why the release of the RTX 5060 (Ti) series was postponed.
Nvidia knew about the problem, so they tried to mitigate the bad press by limiting RTX 5060 Ti 8GB availability to reviewers.
Now, with the RTX 5060, they're providing drivers to selected (p)reviewers only and strictly limiting (p)review and comparison scenarios.
I think Nvidia should not release these GPUs in such an unpolished state. If it's broken, don't sell it.

What's very unfortunate is that the RTX 5060 will end up in most OEM and pre-built gaming machines in the $1000-$1500 price range.
Many buyers will be so disappointed/pissed when they find out the card can't handle more than 1080p, mostly because of VRAM.
(Retailers don't give a f*ck, they will say anything to promote and sell that RTX 5060 rig.)
Pairing the RTX 5060 (Ti) with 10 GB or 12 GB of (even cheaper GDDR6) VRAM would maybe be cheaper and would yield better performance.
Intel clearly demonstrated that a 12 GB card can be done for less than $300.
Just two more GB of VRAM would unlock the potential for using MFG in selected 1440p titles, and with 12 GB everything would be fine.

Ngreedia - turns out the 5060 is the best perf/$ GPU available.

[Attachment 400459: performance-per-dollar chart]
The thing is, it's not available at MSRP; even the RTX 4060 is being sold for >400€ incl. VAT nowadays.
MSRP prices of RTX 5000 and RX 9000 cards are just placeholders with no meaning.
Maybe the RX 9060 XT will beat the RTX 5060 in the graph you mentioned, but still, we all know the RX 9060 XT won't be sold at MSRP.
That is to say, this graph is pointless unless it provides fps/retail-price ratios.

But was this with MFG on or off..? Also, I can't tell if the card is garbage or the drivers are...
MFG was off.
 
The thing is, it's not available at MSRP; even the RTX 4060 is being sold for >400€ incl. VAT nowadays.
MSRP prices of RTX 5000 and RX 9000 cards are just placeholders with no meaning.
Maybe the RX 9060 XT will beat the RTX 5060 in the graph you mentioned, but still, we all know the RX 9060 XT won't be sold at MSRP.
That is to say, this graph is pointless unless it provides fps/retail-price ratios.


MFG was off.

First of all, the $330 in the graph isn't MSRP. MSRP is $299. Second of all, you can buy it right now; it's in stock for $319. You can also find it in the EU, multiple models, in stock for 319 euros.
 
I am talking about performance too. 20-year-old supercars are still extremely fast, much faster than 99.9% of cars in existence. That is not the case with GPUs. The fastest 5-year-old GPU is at best middle of the pack.
We were talking about memory, not GPUs. GDDR6 is not slow by any means.

I was comparing cards with equal amounts of VRAM. You are shifting it to 8 vs 16. Well, regardless of the amount, I'm saying buy GDDR7. If you need 8, buy 8 GB of GDDR7, as it's clear it's much faster, with the 4060 getting it handed to it by the 5060. Heck, the 5060 is getting close to the 4070, which has way more CUDA cores.

If you need 16, buy 16 GB of GDDR7. Why settle for slower, is my point, when you are overpaying for GPUs anyway. It will just improve longevity and resale value; don't settle for slow GDDR6.
If it's an equal amount of memory, then get the faster GPU. If they're both equal, then get the cheaper one. There is no point singling out any VRAM technology. GDDR7 is one more than GDDR6, but it won't give you "one more" gaming experience unless the faster GPU can utilise it. It's proven by the 5070 Ti being not much faster than the 9070 XT despite having 1.4x the VRAM bandwidth.

Tldr: Memory speed is massively overrated.
 
If it's an equal amount of memory, then get the faster GPU. There is no point singling out any VRAM technology. GDDR7 is one more than GDDR6, but it won't give you "one more" gaming experience unless the faster GPU can utilise it. It's proven by the 5070 Ti being not much faster than the 9070 XT despite having 1.4x the VRAM bandwidth.

Tldr: Memory speed is massively overrated.
In fact, I think your example is perfect. The 5070 Ti is a tiny chip compared to the 9070 XT, and yet it's faster in raster and loads faster in RT while simultaneously using less power.

I don't know why we are still arguing over this; HUB's review shows that some games like fast VRAM. As the years go by, there will be more of them, not less.
 
In fact, I think your example is perfect. The 5070 Ti is a tiny chip compared to the 9070 XT, and yet it's faster in raster and loads faster in RT while simultaneously using less power.

I don't know why we are still arguing over this; HUB's review shows that some games like fast VRAM. As the years go by, there will be more of them, not less.
By the TPU database, it's a whopping 5% faster. And it's not using less power to an extent that's worth mentioning, but we've discussed that elsewhere, so let's move on.

So basically, 896 GB/s achieves the same raster and somewhat better RT than 644 GB/s. You might need to lend me a magnifying glass, but I don't see the enormous GDDR7 advantage you mentioned.
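
(For reference, a quick back-of-the-envelope check of those bandwidth figures - a minimal sketch; the per-pin data rates and 256-bit bus widths are assumed typical values for these cards, not taken from this thread:)

```python
# Rough VRAM bandwidth: GB/s = per-pin data rate (Gbps) * bus width (bits) / 8
def vram_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

bw_5070ti = vram_bandwidth_gb_s(28.0, 256)   # ~896 GB/s (assumed 28 Gbps GDDR7)
bw_9070xt = vram_bandwidth_gb_s(20.1, 256)   # ~643 GB/s (assumed 20.1 Gbps GDDR6)
print(f"5070 Ti: {bw_5070ti:.0f} GB/s, 9070 XT: {bw_9070xt:.0f} GB/s, "
      f"ratio: {bw_5070ti / bw_9070xt:.2f}x")  # ~1.39x, i.e. the '1.4x' quoted earlier
```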
 
By the TPU database, it's a whopping 5% faster. And it's not using less power to an extent that's worth mentioning, but we've discussed that elsewhere, so let's move on.

So basically, 896 GB/s achieves the same raster and somewhat better RT than 644 GB/s. You might need to lend me a magnifying glass, but I don't see the enormous GDDR7 advantage you mentioned.
This is dishonest as crap, man. 'Cause by the same logic, 8 GB vs 16 GB are IDENTICAL in averages, yet you keep banging the '8 GB isn't enough' drum. A much larger chip being slower while consuming more power - yeah, we can't see the difference, we need a magnifying glass lol


I've made the point multiple times: there are some games that show a big difference. Obviously, 3 or 5 games won't make a large impact on averages. So using the averages is misleading, but if that's what you want to do, go on. I'm out, have the last word.
 
This is dishonest as crap, man. 'Cause by the same logic, 8 GB vs 16 GB are IDENTICAL in averages, yet you keep banging the '8 GB isn't enough' drum. A much larger chip being slower while consuming more power - yeah, we can't see the difference, we need a magnifying glass lol


I've made the point multiple times: there are some games that show a big difference. Obviously, 3 or 5 games won't make a large impact on averages. So using the averages is misleading, but if that's what you want to do, go on. I'm out, have the last word.
And I've made my point multiple times. Even if averages aren't affected much, and only a handful of games show a performance difference between 8 and 16 GB (not to mention performance isn't everything when you have texture pop ins and such), it's already enough to conclude that the 8 GB card won't last long if you want to play the newest games. There's nothing dishonest about recognising the pattern - the same pattern we had with the 2 vs 4 GB 960 and 3 vs 6 GB 1060. The 8 GB card might be enough for the games you're playing now, but it'll run out of grunt in newer ones faster than the 16 GB one.
 
What worries me is laptops. Upcoming 50-series laptops are 8GB all the way until you hit 5070Ti laptops which start at $2500 for the worst, nastiest examples. You can easily drop $2500 on a 5070 laptop and that's got an 8GB GPU in it. The billion-page forum threads over 8GB being enough or not are bad enough when it's just about a $400 5060Ti with only 8GB. How do you think laptop buyers are going to feel about the planned obsolescence of a Blackwell 8GB GPU at $2500?!!
Then you get the AMD 395 Max 64 GB laptop with a few FPS lower than the 4060, but at half the power consumption of 4060 laptops. By the end of the year, maybe we're gonna have an AMD Max APU at 4070 Super performance.
 
Yeah, it is still a bad deal (at least for me personally)...
I'm still running an 8 GB RTX 2070 Super (aka a very slightly cut-down 2080), which I bought for 220€ ($250), and that was two and a half years (basically 3 years) ago!!!
Then the 4060 8 GB came out and I laughed - because it was barely any faster (not really) than my 2-generations-old cut-down 2080...

Now this mess hits the shelves and it is what? Like 30% faster in rasterization (if that),
and it also has exactly the same memory bandwidth and VRAM as my 3-generations-old, 7-year-old 70(Ti/S)-class graphics card...

It's insane how much the mid-range GPUs from Nvidia and AMD have fallen over the last 6-7 years...
If for some reason I had to get up and purchase a new graphics card tomorrow, then
I would just get an RTX 3080 for $350 (maybe for $320 after some negotiations with the seller) on the second-hand market...

Also, the power consumption is not really that big of a deal, and it would not affect my bill in any meaningful way -
in my region a kWh costs only €0.07 ($0.08)
(also, I almost forgot, we have a "night-time tariff" discount, which is €0.05 from 10pm until 6am -
many times I'm gaming from 10pm to 1am, so it is also very relevant for me).
After some calculations, it would cost me roughly $50 annually
to run a graphics card with an average gaming consumption of 350 W if I gamed 365 days x 5 hours each day on it
(at the full $0.08 price all the time, which there is no chance of me doing),
so yeah, there is that - just in case you decide to throw in power consumption as a factor...
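
(For what it's worth, a rough sanity check of that annual figure, using the tariff and usage numbers from this post - a minimal sketch:)

```python
# Rough yearly cost of a 350 W average gaming load, 5 h/day, 365 days/year
power_kw = 0.350
energy_kwh = power_kw * 5 * 365          # ~639 kWh per year

day_tariff = 0.08    # $/kWh, the full price quoted above
night_tariff = 0.05  # €/kWh, the night-time discount quoted above

print(f"{energy_kwh:.0f} kWh per year")
print(f"At the full tariff:  ~${energy_kwh * day_tariff:.0f} per year")    # ~$51
print(f"At the night tariff: ~€{energy_kwh * night_tariff:.0f} per year")  # ~€32
```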

The 3080 is still a great GPU, about 70% faster than the 2070 Super.

But the lifespan of those used 3080s is kinda dubious though; what if it dies 6 months after you get it because the previous owner was mining on it 24/7 for the past 5 years? :wtf:

Also, if you look at it from the other perspective, the guy who bought a 3080 at $800 and used it for a good 5 years can now sell it at $350. What's stopping you from grabbing a 5070 now and selling it 2-3 years later for $300? :wtf: It's the same cost as if you buy the used 3080 and it dies 2 years later, but you get the much better efficiency and feature set of the 5070.
 
"Outdated, slow" GDDR6 is still infinitely faster than swapping data through the PCI-e bus (I don't understand where you got the idea that it's outdated and slow).

Also, frames per second isn't the only metric where more VRAM shows. Quite a few modern games have texture loading issues, pop-ins, slowdowns after long sessions, etc. when they exceed your VRAM, but act fine during a benchmark run, or when looking only at your FPS.
Memory technology doesn't matter. Bandwidth combined with capacity and GPU speed does. I'd much rather choose 16 GB GDDR6 than 8 GB GDDR7. When that 8 GB overfills, the card will have to swap data through the PCI-e bus, which is infinitely slower than any GDDR6 config.
Exactly. Having 8 GB of GDDR7 is not better than having 12 GB of GDDR6. Once the memory is not enough, you don't have space to load assets into.
If the GPU has to swap a lot of data frequently, your rig will consume more disk, RAM, and CPU resources as well.
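
(To put rough numbers on that swapping penalty - a minimal sketch; the PCIe and GDDR6 figures below are typical assumed values, not measurements from this thread:)

```python
# Ballpark one-way bandwidths (GB/s) for the swap path vs. on-card VRAM
pcie_swap_paths = {"PCIe 4.0 x8": 16, "PCIe 5.0 x8": 32}   # assumed typical figures
gddr6_128bit = 17 * 128 / 8   # ~272 GB/s, e.g. a modest 17 Gbps GDDR6 card on a 128-bit bus

for name, bw in pcie_swap_paths.items():
    print(f"{name}: {bw} GB/s -> roughly {gddr6_128bit / bw:.0f}x slower than "
          f"a {gddr6_128bit:.0f} GB/s GDDR6 config")
# Every asset that spills out of the 8 GB pool pays that penalty on each transfer.
```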

Experience from my work:
We have one computer at work that, for generational reasons, can't hold more than 4 GB of RAM. It's a computer embedded in a laboratory machine.
We can't simply replace the PC with a newer one, as the machine is extremely expensive. We're talking millions here.
Over time, the manufacturer has kept updating the software for years, to the point where it is very complex and heavy on memory.
Over the years, they added a lot of functions we required and tailored them for our use (of course, we paid them).
There's no software, and particularly no support for the laboratory software, on newer systems. So, laboratory workers struggle to keep RAM usage below 90%.
Whenever RAM usage hits around 94-96%, shit hits the fan. It starts swapping and everything is damn slow and unresponsive.
You'd say, why don't you just restart that PC? Well, there's a procedure: every time the PC restarts, the machine has to do a homing cycle on everything it contains.
This cycle takes about 20 minutes. It's a sophisticated machine.
Now imagine you have RAM usage already at 90% within 2 hours, and a restart takes 20 minutes. People need that machine to test samples, and they work for 8 hours a day.
There's no way to have this PC restarted every 2 hours, because the laboratory workers would not get all samples tested that day (there's a samples-per-day quota).
The software eats up most of the memory right after a restart, and as I said, within 2 hours, usage is roughly 90%.
We talked a lot with the machine manufacturer about optimizing the software for us. They said it eats a lot of memory because we keep testing more and more samples over the day.
It needs to hold that data in memory so it can quickly compute side tasks and provide measurement data to the workers.
Don't ask me why this machine was built this way; I don't know. What I know for sure is that even slower memory would save us from fighting with this machine, if there was more of it.


What's the point of my experience from work? It's similar to gaming. Game assets are getting more detailed, so they require more space than they did years ago.
The game engine needs some things stored in memory in order to work. When those things alone reach nearly full memory utilization, no memory speed increase can help you.


Cars don't get old, cards do. Cars you don't buy for their performance, cards you do. A 30-year-old GPU is useless, a 30-year-old Ferrari is a $2 million classic. Come on now...
This is quite unfortunate wording. You DO buy cars for their performance. No one would buy a super fast car for tons of money if it wasn't fast (powerful).
Unless it's a veteran car that was produced in a very limited quantity.

Cars do get old, but what's more important, they accumulate wear and tear with mileage.
A 30-year-old Ferrari is a veteran car with collector's value. A Ferrari or Lamborghini can't do 100k miles without engine maintenance (I'm not taking oil and filters into account here).
So, you buy cars for their performance (maybe sometimes it's not the primary focus, but you always consider it), and you buy GPUs for their performance.

First of all, the $330 in the graph isn't MSRP. MSRP is $299. Second of all, you can buy it right now; it's in stock for $319. You can also find it in the EU, multiple models, in stock for 319 euros.
You're right. So, it seems that in my country something is out of hand, with the RTX 4060 being much more expensive and the RTX 5060 not being in stock.
On Mindfactory, the RTX 5060 can be bought for 319-359 €. Right now, the RTX 5060 seems to have the best $/fps ratio. Let's wait for the RX 9060 XT.
 
I'd argue the ordinary consumer is concerned with marketing and price a lot more than performance and features. They don't know or understand the features.
There's nothing to understand from the provided features atm. Even more so for 5060 cards.
Also, I can ignore the "features" easily when I see no hot spot sensor, missing ROPs, and maybe by the end of the year we're gonna have missing CUDA cores, 'cause the drivers are hiding them or impurities in the silicon make them clog up the pipelines :D
I think it's too early to talk about the features or performance of these cards. All I can say is: it's not wise to commit to 5060 laptops or 5060 desktop versions atm.
 
I will say what I said for AMD: a 60-class card shouldn't really be a 1080p card, unless prices were much lower, obviously.
But hey, I guess people like me that said that for 1080p alone 8 GB was fine were right all along.
 
It's not even particularly interesting to watch reviews; it's clear that on GB206 it will be good, closer to the 4060 Ti and 5060 Ti than to the dull 4060 (4050).
But the price is very important, extremely important. If buyers of the 5070 Ti can complain and whine but still find an extra 200 dollars in their bins, it doesn't work like that here.
 
It's not even particularly interesting to watch reviews; it's clear that on GB206 it will be good, closer to the 4060 Ti and 5060 Ti than to the dull 4060 (4050).

Don't get me wrong, but you're just saying, in so many words, that you'd only be interested in watching reviews if the card flopped.
What the hell is wrong with people?
 
I will say what I said for AMD: a 60-class card shouldn't really be a 1080p card, unless prices were much lower, obviously.
But hey, I guess people like me that said that for 1080p alone 8 GB was fine were right all along.
8 GB is fine for 1440p as well. I don't understand what this whole 8 GB back and forth is about.

If we take, for example, Alan Wake 2 (you can take any other game, I'm just using this as an example because it is considered a good-looking game), 8 GB is fine to max it out at both 1080p and 1440p. So let's say 3 years down the line Alan Wake 3 comes out and you can only play it at medium because of VRAM. Even at medium, it still has to look better than Alan Wake 2 at ultra, else why in the world would it need the same VRAM? I guess that at SOME point even low textures will look better than today's ultra, and so they won't really fit in just 8 GB of VRAM, but I don't think we are anywhere near that point.
 
Man, when a small channel goes out and pays for a 5060 out of his own pocket, it makes me wonder about the integrity of those big and "influential" techtubers :wtf:
 
8 GB is fine for 1440p as well. I don't understand what this whole 8 GB back and forth is about.

If we take, for example, Alan Wake 2 (you can take any other game, I'm just using this as an example because it is considered a good-looking game), 8 GB is fine to max it out at both 1080p and 1440p. So let's say 3 years down the line Alan Wake 3 comes out and you can only play it at medium because of VRAM. Even at medium, it still has to look better than Alan Wake 2 at ultra, else why in the world would it need the same VRAM? I guess that at SOME point even low textures will look better than today's ultra, and so they won't really fit in just 8 GB of VRAM, but I don't think we are anywhere near that point.

I have a 3060 Ti, and in Forza Horizon 5, for example, my card could easily do better settings at 1440p if I had more VRAM.
That doesn't mean I can't play any game at 1440p with 8 GB (I haven't found a single one I can't, even if I don't play every game), but the card is being held back by the lack of VRAM, which is unfortunate.
 
I have a 3060 Ti, and in Forza Horizon 5, for example, my card could easily do better settings at 1440p if I had more VRAM.
That doesn't mean I can't play any game at 1440p with 8 GB (I haven't found a single one I can't, even if I don't play every game), but the card is being held back by the lack of VRAM, which is unfortunate.
Well sure, but that applies to every card. I have a 4090 and I could easily do better settings in Oblivion Remastered if my card was faster :D

The question is, does FH5 look bad at 1440p with high textures instead of ultra?
 
I have a 3060 Ti, and in Forza Horizon 5, for example, my card could easily do better settings at 1440p if I had more VRAM.
That doesn't mean I can't play any game at 1440p with 8 GB (I haven't found a single one I can't, even if I don't play every game), but the card is being held back by the lack of VRAM, which is unfortunate.

I knew a user with a 12 GB 6700 XT who always trash-talked about the lack of VRAM on Nvidia, yet he upgraded to a 6800 XT, then a 6900 XT, and then a 9070 XT.
I guess a lack of GPU power or DLSS-free performance is more annoying :roll:
 
Well sure, but that applies to every card. I have a 4090 and I could easily do better settings in Oblivion Remastered if my card was faster :D

The question is, does FH5 look bad at 1440p with high textures instead of ultra?

Sure mate, but the problem here is they are gimping the cards for what, saving 20/50 bucks. It's a bit stupid.
There is a balance between not wasting VRAM on a card that will never use it, and holding a card back for such a stupid saving.
It gets worse if you think of all the 3070s and 3080s out there now.
 
Sure mate, but the problem here is they are gimping the cards for what, saving 20/50 bucks. It's a bit stupid.
There is a balance between not wasting VRAM on a card that will never use it, and holding a card back for such a stupid saving.
It gets worse if you think of all the 3070s and 3080s out there now.
Well, it's not really $20/50 though. Cards with lots of VRAM are used for ML/AI and crap. We are in fact incredibly lucky that we are only being charged $50 extra for a 16 GB 5060 Ti. I can't explain why Nvidia is doing it, but it's really a gift. Just take a look at the new 9700 Pro: it's basically a 9070 XT with more VRAM, and it's going to cost you like $700 for that extra VRAM.

Lots of pros want to use AI locally (especially programmers), so cheap 16 GB+ VRAM cards are a godsend.

I knew a user with a 12 GB 6700 XT who always trash-talked about the lack of VRAM on Nvidia, yet he upgraded to a 6800 XT, then a 6900 XT, and then a 9070 XT.
I guess a lack of GPU power or DLSS-free performance is more annoying :roll:
People moved from RDNA3 to RDNA4 when the only things it offers are the better RT gimmick and the better upscaling gimmick. They even sacrificed VRAM, since RDNA3 high-end cards had more of it. I don't even know what the hell; they say one thing and then they do another :kookoo:
 
Well, it's not really $20/50 though. Cards with lots of VRAM are used for ML/AI and crap. We are in fact incredibly lucky that we are only being charged $50 extra for a 16 GB 5060 Ti. I can't explain why Nvidia is doing it, but it's really a gift. Just take a look at the new 9700 Pro: it's basically a 9070 XT with more VRAM, and it's going to cost you like $700 for that extra VRAM.

Lots of pros want to use AI locally (especially programmers), so cheap 16 GB+ VRAM cards are a godsend.

The LLM issue was not relevant when the 3000 series came out, I think. This has been a problem, at least with Nvidia, way before anyone knew what an LLM was.
And AMD is using GDDR6, so they don't face the same issue. Nvidia could do the same.
 