
Did Nvidia purposely gimp the performance of 50xx series cards with drivers?

Can you get it back to normal?
No, because the shaders/ROPs/TMUs are nerfed, aka laser-cut. If anything, the 1060 6GB should have been designated a Ti and the 3GB the regular model, and 6 GB of VRAM could have gone on the 3GB card to keep the products distinct, but they didn't.



Then it got nerfed as a 5GB card, with 8 ROPs missing.


Very deceptive of them.

I mean, heck, AMD with the RX 580: they had 4 and 8 GB models which weren't nerfed, then they had a China-only market 2048SP card, and it was actually labeled as such; it was a rebadged RX 570.

Then the 590 and the 590 GME: the 590 GME was Polaris 20, the 590 Polaris 30, and both had 2304 shaders like the 580.

Then there was the RX 560 and the 560D, the 470 and the 470D. There was magic in the 560D though: some could be unlocked to a full 560.
 
It doesn't even make sense. From just a physical transistor / SM / ROP / TMU / clock standpoint, how can there possibly be any massive hidden performance at the driver level? Sure, driver improvements over time should make for slight increases, but the idea of 40% being locked away is truly hilarious.

It's on the same process node as the 40-series.
With the exception of the 5090, the rest of the stack sees only a modest increase in SM / core counts (~5%).
Die sizes are practically the same as their 40-series predecessors (again, minus the 5090).

Factoring in IPC gains from the new architecture, G7, etc., there's no mystery as to why the uplift from, say, a 4080 Super to a 5080 sucks. Any time in the past when Nvidia had multiple generations on the same node, whether Kepler to Maxwell (both 28nm) or Pascal to Turing (both 14nm-class), the die sizes had to increase to accommodate additional SMs for decent generational gains. With Ada to Blackwell (both on 4N), they didn't do that except for the 5090.
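As a rough sanity check, and only using the commonly listed spec-sheet numbers (so treat these as approximate), peak FP32 compute is roughly 2 ops per FMA × shader count × boost clock:

4080 Super: 2 × 10,240 × ~2.55 GHz ≈ 52 TFLOPS
5080: 2 × 10,752 × ~2.62 GHz ≈ 56 TFLOPS

That's only about 8% more on paper, which is right in line with the underwhelming real-world uplift.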

 


What does Nvidia get out of nerfing their own cards? Blackwell is, performance-wise, not that impressive to begin with: too power hungry and too little improvement over last gen, not to mention too expensive. So nerfing their cards makes no sense. Also, take the competition from AMD into account; you sell the best by being the fastest...
 
A very small % buy the "fastest", but when it comes to the dumpster fire the RTX 5000 lineup has been, only true fans are getting them. Most buyers go with a mid-range or budget card, and that is where they have higher margins, whether AMD, Intel or Nvidia.
 
That's just how nearly every SKU these days is produced, man.


Industry-standard practice, more like. There is only one AMD SKU right now that isn't "nerfed", if that's how you define it.
The point is, a 1060 6GB and 3GB were both labeled 1060 when there was no label difference other than RAM. So if anything the 3GB should have had a marking like how LHR cards did, or the 6GB should have been called a Ti.
 
The point is, a 1060 6GB and 3GB were both labeled 1060 when there was no label difference other than RAM. So if anything the 3GB should have had a marking like how LHR cards did, or the 6GB should have been called a Ti.

The 3050 6GB and 3050 8GB are even worse, with the two GPUs not sharing a single spec. At least the 1060 3GB and 6GB still had the same VRAM bandwidth (which IMO is still not good enough to give them the same name with a 10% core count difference). And the solution is easy: the 3050 6GB is simply a 3040, as it's a different class of GPU. But the naming is intentionally misleading.

Funny thing is, I like the 3050 6GB; I have two and it's a great product for the slot-powered GPU crowd. I simply disagree 100% with the naming choice.
 
Okay, so...

4080 Super vs 5080... Similar number of cores + similar clock speed + minimal architectural changes = similar performance. Simple enough? How could you get 40% more out of it through a driver?

Also, what would Nvidia's reason be to limit the performance of a perfectly working chip through the driver and make it look worse than it is?

If someone can answer these for me, I might be willing to listen. If not, bullshit.
 
It doesn't even make sense. From just a physical transistor / SM / ROP / TMU / clock standpoint, how can there possibly be any massive hidden performance at the driver level?
LOL, maybe some Nvidia code had an inappropriate line like Sleep(0) to yield CPU time and that guy found it, but more than likely it's all BS.
 
The point is, a 1060 6GB and 3GB were both labeled 1060 when there was no label difference other than RAM. So if anything the 3GB should have had a marking like how LHR cards did, or the 6GB should have been called a Ti.
That's fair, I wasn't understanding your criticism fully, but that adds up.
 
Okay, so...

4080 Super vs 5080... Similar number of cores + similar clock speed + minimal architectural changes = similar performance. Simple enough? How could you get 40% more out of it through a driver?

Also, what would Nvidia's reason be to limit the performance of a perfectly working chip through the driver and make it look worse than it is?

If someone can answer these for me, I might be willing to listen. If not, bullshit.
For entertainment value I'm willing to wait and see how this pans out. Did you read his manifesto? It reads like a three-letter-agency honeypot recruitment love letter, like Nvidia hired some pros to go phishing for people who are gung-ho about circumventing Nvidia technology to leverage AI capabilities for the greater good, which might interfere with their profits. Of course, it's 2am and lack of sleep might not have me thinking clearly.

Here is the post by the person who found the issue:


I find it interesting that the possibility exists. He has been contacted by Nvidia... I'll be interested to see the outcome.
LOL, I was going to ignore this thread until I saw the account was suspended and the Reddit post removed. Life is stranger than fiction. My popcorn is ready, but I'm prepared for disappointment, like every Marvel movie and TV series after Endgame.
 
These days I wouldn't even be surprised if you had to pay for special drivers to fully unlock them.
Yeah, gimme a promo code if you find a way to get the Nvidia GAYME pass for free, please.
 
I checked CUDA-Z: the 5070 still shows no dedicated integer cores, same as the 4070, so 50% of the FP32 throughput.
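For anyone who wants to sanity-check that reading without CUDA-Z, below is a minimal sketch of the idea (this is not CUDA-Z's actual test; the kernel names, iteration counts and launch dimensions are arbitrary placeholder choices). It times a stream of FP32 FMAs against a stream of INT32 adds; if the INT32 rate comes out at roughly half the FP32 rate, that matches the "no dedicated integer cores / 50% of FP32" reading.

// Rough throughput sketch, build with nvcc. Not CUDA-Z's methodology, just a
// back-of-envelope comparison of how fast the GPU issues FP32 FMAs vs INT32 adds.
#include <cstdio>
#include <cuda_runtime.h>

constexpr int ITERS = 1 << 16;

__global__ void fp32_fma(float* out, float a, float b) {
    float x = threadIdx.x * 0.5f, y = a, z = b;
    for (int i = 0; i < ITERS; ++i) {
        x = fmaf(x, y, z);
        y = fmaf(y, z, x);
        z = fmaf(z, x, y);
    }
    if (x == 12345.0f) out[0] = x;   // keeps the compiler from deleting the loop
}

__global__ void int32_add(unsigned* out, unsigned a, unsigned b) {
    unsigned x = threadIdx.x, y = a, z = b;   // unsigned so overflow just wraps
    for (int i = 0; i < ITERS; ++i) {
        x += y;
        y += z;
        z += x;
    }
    if (x == 12345u) out[0] = x;
}

template <typename Kernel, typename T>
float time_ms(Kernel k, T* out, T a, T b, int blocks, int threads) {
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    k<<<blocks, threads>>>(out, a, b);        // warm-up launch
    cudaEventRecord(start);
    k<<<blocks, threads>>>(out, a, b);        // timed launch
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return ms;
}

int main() {
    const int blocks = 4096, threads = 256;   // enough warps to keep the SMs busy
    float* fout;
    unsigned* iout;
    cudaMalloc((void**)&fout, sizeof(float));
    cudaMalloc((void**)&iout, sizeof(unsigned));

    float fms = time_ms(fp32_fma, fout, 1.0001f, 0.9999f, blocks, threads);
    float ims = time_ms(int32_add, iout, 3u, 5u, blocks, threads);

    // 3 instructions per loop iteration per thread
    double instr = 3.0 * ITERS * (double)blocks * (double)threads;
    printf("FP32 FMA : %.1f Ginstr/s\n", instr / (fms * 1e6));
    printf("INT32 ADD: %.1f Ginstr/s\n", instr / (ims * 1e6));
    return 0;
}

On most recent GeForce parts the add rate should land somewhere around half the FMA rate, since only a subset of the FP32 lanes can also execute INT32 work; the exact ratio will wobble a bit with clocks and scheduling.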
 
Why not bring in the Nvidia...

Nvidia 1060 cards
Nvidia 970 cards

What else?

Do not lock this topic, please. Keep up the entertainment, please.

I'm surprised there's no "ngreedia" comment yet. Or maybe I missed it.

TL;DR... too long to read... people still write; I'm too lazy to read 4 pages. Most are short comments and memes.
 
Jesus Hernandez Christ.

I claim to be able to make you a 100x return on investment; are you gonna give me all your money?
YES, please. Send me your bank details. Lambo when?

It doesn't matter if it's compute performance; Nvidia has been known to nerf performance for years, and it wouldn't surprise me if Nvidia purposely limited cards to upsell people who want more performance to the 5090.
"Has been known" to the small circle of AMD priests; nobody else has noticed anything. I have a GTX 1060, a 1080 Ti and a 3090 available, and I'm willing to test your claims. I'll install a very old driver and test a game from 2018 (let's say SOTTR), then install the latest and test again. If performance is the same, are you going to admit you are full of... nothing? I'm willing to post videos of the whole process.
 
"Has been known" to the small circle of AMD priests; nobody else has noticed anything. I have a GTX 1060, a 1080 Ti and a 3090 available, and I'm willing to test your claims. I'll install a very old driver and test a game from 2018 (let's say SOTTR), then install the latest and test again. If performance is the same, are you going to admit you are full of... nothing? I'm willing to post videos of the whole process.

Yeah, a trusted user here actually tested those claims a while back... maybe it was rtwjunkie? Can't remember. Anyway, they ended up being bunk, because newer drivers actually helped the old cards' performance, lol.
 
@JustBenching @R-T-B
I am telling you, gentlemen, this is totally the legacy of when Kepler fell off a cliff due to architectural deficiencies, leading to quite terrible performance in newer games (i.e. ones developed for the PS4/XBone) versus Maxwell, and especially later Pascal, relative to what the earlier performance numbers suggested. That led to hilarious conspiracy theories of NV purposely nerfing older cards in the drivers. Even though it was disproven and definitively shown that the issue was simply Kepler being what it was, the idea still survives to this day, even though it doesn't make a lick of sense when one actually stops and thinks about it for a second.
 
@JustBenching @R-T-B
I am telling you, gentlemen, this is totally the legacy of when Kepler fell off a cliff due to architectural deficiencies, leading to quite terrible performance in newer games (i.e. ones developed for the PS4/XBone) versus Maxwell, and especially later Pascal, relative to what the earlier performance numbers suggested. That led to hilarious conspiracy theories of NV purposely nerfing older cards in the drivers. Even though it was disproven and definitively shown that the issue was simply Kepler being what it was, the idea still survives to this day, even though it doesn't make a lick of sense when one actually stops and thinks about it for a second.
Exactly. No one knows what the future will bring. It may favour a present-day architecture, or it may not. There's no planned obsolescence, just as there is no "fine wine" either. Marketing and tinfoil hats.
 
Exactly. No one knows what the future will bring. It may favour a present-day architecture, or it may not. There's no planned obsolescence, just as there is no "fine wine" either. Marketing and tinfoil hats.
AMD or Nvidia will optimize drivers for the new architectures first and won't care whether everything is fine for older cards, Pascal for example.

Edit: they optimized the newer 9000 series for lower power draw at idle and during video playback, but not the 7000 series. Why? They want you to upgrade...

With the new drivers Pascal cards consume less power, but there will be, for example, more shimmering on textures (fences, car radiators, etc.) in games like War Thunder (Dagor engine, in cooperation with Nvidia) or AC Syndicate (just to name a few).
If you want to get rid of the shimmering, you have to force your old Pascal card into heavy and demanding settings like SSAO 4, plus hidden settings in config files like clipmap scale and others.
These heavy settings push a lot of extra processing and power draw onto your old GPU.

At this point you have 3 options:
1. Deal with the shimmering and mess up your eyesight.
2. Deal with higher power consumption and heat, modifying the stock cooler at your own cost (and maybe risk) to mitigate the heat and degradation.
3. Upgrade.

Edit: a 1080 Ti with SSAO 4 and clipmap scale + background scale raised from 1 to 2 can increase temps from 49 to 65 °C depending on the map; this eliminates the shimmering almost entirely at a cost of +15-16 °C.
On a 1070 in AC Syndicate, to eliminate shimmering on textures and shadows you can play with the HBAO settings, but temps increase from 53 °C to 68 °C = a +15 °C cost.
My question to you is: what about the same cards with stock coolers and stock TIM in a silly prebuilt case, what temps are we talking about then? Are we going to sizzle some VRAM!? Is the GPU going to throttle??


The point is, at the end of the day you'll want to upgrade. So call it something else if you wish, not planned obsolescence, but the result is the same: you will upgrade.

At the hardware level, planned obsolescence is there. I've seen it too often.

A Corsair TX650 from 10-12 years ago is alive and well despite a heavy-smoker owner, with sticky tar and nicotine on the fan and components. The 12 V, 5 V and 3.3 V rails are all perfect, not even small oscillations.
A Corsair TX550 (7-year warranty) from 7 years ago, non-smoker owner: its 12 V rail has dropped by about 3.5%, which is still in tolerance (the ATX spec allows ±5%, so roughly 11.58 V against an 11.40 V floor), but maybe only for another year.

He has already set his 1070 to an 85% power limit because, even though the rail is in tolerance, games start crashing out to Windows or he gets black screens.

The user with the TX650 has roughly 25,000 hours of use, while the guy with the TX550 has roughly 3,000 hours: one uses his PC daily, while the other only uses it on weekends, about 4-8 hours total.
 
AMD or Nvidia will optimize drivers for the new architectures first and won't care whether everything is fine for older cards, Pascal for example.

Edit: they optimized the newer 9000 series for lower power draw at idle and during video playback, but not the 7000 series. Why? They want you to upgrade...

This is nonsense; Nvidia has been optimizing drivers and releasing new features for the entire RTX series, from 20 to 50. All of them got DLSS 4 support, all of them got the new video decode algorithm with improved power consumption, all of them are getting targeted bug fixes, etc.

There are no changes in how games display on GTX 10-series GPUs other than the cards becoming too weak to handle the workload. Newer games, and even existing games with updates applied, will not perform as well as they used to back then - and you're talking about hardware that is over 9 years old at this point. The first GP102-based GPU was released in 2016.

As for AMD, the reason you aren't seeing any changes in power draw etc. on RDNA 3 GPUs is that... the hardware can't do it. It's as simple as that: the hardware can't do it. RDNA 3 is still fully supported and in active development, as is RDNA 2 hardware. It's only the 5700 XT that hasn't seen much in the way of changes recently - again, because the hardware can't really do it.
 