
Will GPUs continue to have crazy TDP?

  • Thread starter: Deleted member 234478
My point here is that Pascal was exclusively a gaming architecture, optimized for DX11 to the fullest extent. An architecture that is made purely for gaming will always be much more efficient. RDNA 2 was AMD's Pascal.

Aren't there Pascal-based Tesla cards?
 
This is a good reference, but I can't help but wonder how many tens of thousands of times the performance of the hardware has improved since the Rage 128. Even if you disregard the functionality, that's quite fascinating, really.
Yeah, I was wondering that myself.
From 2008-2010 until today it's something like 250-300x. Going another 10 years back?

Memory bandwidth, though, has improved by about 1,000x since then (ATI Rage 128 GL).

Comparing these numbers doesn't do the subject justice...
 
This is a good reference, but I can't help but wonder how many tens of thousands of times the performance of the hardware has improved since the Rage 128. Even if you disregard the functionality, that's quite fascinating, really.
Yeah, I think it's hard to deny we've made massive progress since the first GPUs. And that progress has (every time...) reinforced our belief that it can't ever stop, but reality is starting to knock on the door at this point.
 
To provide some numbers for this statement: with the stock 200W power limit, my RTX 3060 Ti LHR averaged 26.2 FPS in Unigine Superposition (1440p ultra). After raising the power limit to 220W via a VBIOS flash, it averaged 27.2 FPS. Sample size is roughly 18 test passes at each power target. In both cases, the card was throttling against its power limit.

If you bought a fancier 3060 Ti, it would have come out of the box with the higher 220 W power limit. If you have one of those cards then, based on my data, you could shave roughly 9% off power consumption for only about a 4% performance loss - and that's without touching the V/F curve.
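For anyone who wants to redo the maths, here's that trade-off as a quick Python sketch. The FPS and wattage figures are just the ones from my runs above; your card and workload will land somewhere else on the curve.

```python
# Perf-per-watt arithmetic for the two power limits tested above
# (Unigine Superposition 1440p ultra, ~18 passes per power target).

def perf_per_watt(fps: float, watts: float) -> float:
    """Average frames per second delivered per watt of board power."""
    return fps / watts

stock   = perf_per_watt(26.2, 200)  # ~0.131 FPS/W at the 200 W stock limit
flashed = perf_per_watt(27.2, 220)  # ~0.124 FPS/W at the 220 W flashed limit

power_saving = 1 - 200 / 220        # ~9% less power at the lower limit
perf_loss    = 1 - 26.2 / 27.2      # ~4% lower average FPS

print(f"200 W: {stock:.3f} FPS/W | 220 W: {flashed:.3f} FPS/W")
print(f"220 W -> 200 W saves {power_saving:.1%} power for a {perf_loss:.1%} FPS loss")
```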
It's true that almost every CPU/GPU today is pushed to the edge by default, beyond its sweet spot on the power/efficiency curve, for the sake of competition.
Once upon a time, we had 'whole*' GPUs, (primarily) delineated by clocks and RAM (bus width/type/amount).
The Radeon 9800 series and GeForce 6800 series are decent examples. The GF4 MX, GF3, and GF2 are also great examples.
*At least in the US, "Whole Milk" has already had the butterfat and cream removed.

I just recently had the opportunity to play with a couple Dell OEM 6800s (Vanilla).
They're 100% bus-powered (~75 W or less), with 12 pipes and 256-bit, 256 MB DDR, and they'll overclock to 425 core and 390/780 DDR (the maximum 'factory' limits) with ease.
Contrast that with the 'steps up' that required auxiliary power and much beefier coolers. Performance is increased, but not in line with the additional heat and power.

IMHO, *that* is the reason we don't see 'fat', low-clocked GPUs. It'd be like the old days, where custom cooling and a quick OC were a 'free upgrade'.
With how chips are made today, it would cannibalize the rest of their own product stack.

Aren't there Pascal-based Tesla cards?
Correct.
 
My first GPU (2001) was a PowerVR KYRO II with 2x+ (?) the performance of the Rage 128.
And my next one was a 6800 GT (2004) with an "out of this world" performance uplift.
 
It still is physics; in the end, that's what it all comes down to. So yes, it really is. You won't change natural laws, and no one ever will.

Without AI we would have no Nvidia GPUs for gamers right now, lol. Ada GPUs are 100% AI; gaming GPUs are just the reject chips that couldn't make it as AI chips selling for $10k plus.

But yes, dedicated gaming GPUs would be the best. Wishful thinking, though.

So yes, unless they find a completely new way of making GPUs, the watts will never come down again. Unless you'd be happy with the same performance as now; then they could bring the watts down, but they would never release a next gen with only the same performance as before (which, actually, they already do: 3060 Ti to 4060 Ti, and RX 6800 XT to 7800 XT). That's how close we are to the wall.
AI (Tensor cores) doesn't take up as much die real-estate as you think. As far as I know, it's about 10-15% on Ada, and way, way less on RDNA 3. The huge majority of a GPU is still the rendering pipeline.
 
While that is true, I don't think you understood what my point was. Fucking language barrier, lol.
 
While that is true, I don't think you understood what my point was. Fucking language barrier, lol.
If you mean that Ada is wholly meant for AI work, then you might not be entirely wrong, although I'm not sure how long it'll stay feasible to carve AI and gaming GPUs out of the same silicon. I'm sure Nvidia could cram a lot more AI power into a chip by seriously cutting down the unnecessary 3D rendering bits. Actually, isn't this the direction they started with Hopper?
 
Nvidia is slowly but surely approaching Apple and Microsoft with it. You can even feel that Nvidia gives zero fucks about gamers anymore: terrible drivers (I've had more issues with Nvidia drivers in the last year than in the previous 15), late game-ready drivers, or plain no optimisation for new games for weeks. They usually were on time pretty much always.

AI comes first. Makes sense, right? From Nvidia's standpoint, why sell chips cheap as fuck to gamers when they can sell them for ten times the price to companies? Ada gaming GPUs are simply the reject chips from the AI segment. Nvidia is selling features, not power; rasterization performance is terrible on Nvidia cards if we go by value. A fucking high-end card should never need upscaling or frame gen to get playable frames, lmao.

Weirdly enough, my 7900 XT rarely needs it; it's always above 60 FPS at 4K native without upscaling, as it should be for 700€. Yet the 4070, which would have been 40 bucks cheaper, is often easily 30% slower at 4K native. No wonder, with a 192-bit bus (for 600+€, lmao, that's a 60-class card right there). Nvidia wants to dictate what resolution you play at. The 4070 Ti, a 1440p card by marketing, for 900€? I lost my shit. :slap: :laugh: That class of card cost 299€ years ago, and idiots pay three times the price now for something that's a VRAM cripple with crippled 4K performance at the same time.
 
Nvidia is slowly but surely approaching Apple and Microsoft with it. You can even feel that Nvidia gives zero fucks about gamers anymore: terrible drivers (I've had more issues with Nvidia drivers in the last year than in the previous 15), late game-ready drivers, or plain no optimisation for new games for weeks. They usually were on time pretty much always.

AI comes first. Makes sense, right? From Nvidia's standpoint, why sell chips cheap as fuck to gamers when they can sell them for ten times the price to companies? Ada gaming GPUs are simply the reject chips from the AI segment. Nvidia is selling features, not power; rasterization performance is terrible on Nvidia cards if we go by value. A fucking high-end card should never need upscaling or frame gen to get playable frames, lmao.

Weirdly enough, my 7900 XT rarely needs it; it's always above 60 FPS at 4K native without upscaling, as it should be for 700€. Yet the 4070, which would have been 40 bucks cheaper, is often easily 30% slower at 4K native. No wonder, with a 192-bit bus (for 600+€, lmao, that's a 60-class card right there). Nvidia wants to dictate what resolution you play at. The 4070 Ti, a 1440p card by marketing, for 900€? I lost my shit. :slap: :laugh: That class of card cost 299€ years ago, and idiots pay three times the price now for something that's a VRAM cripple with crippled 4K performance at the same time.
Those are fair concerns, not too far from my own. I don't know much about the driver issues you're talking about, as I refused to pay for the Ampere price hike, not to mention the Ada one. I have a 1660 Ti and a 2070 (which is half dead, unfortunately), but that's it; no more Nvidia for me until they get the pricing right. I'm 100% happy with my 7800 XT anyway.

The way I see the price hike is two-fold:
  1. For one, they charge way more money than they used to for chips of similar size and PCBs of similar complexity. Let's compare the 4060 and the 1060 6 GB. The 4060 has a roughly 20% smaller GPU die, a bone-simple PCB and VRM, and 4 memory chips instead of 6, but costs more than the 1060 did back in the day (rough numbers sketched after this list). Wtf, seriously!?
  2. Secondly, as you said, Nvidia is starting to become a datacentre/software company, so they charge more for features you never asked for in the first place. I mean, do we need AI cores for upscaling? I don't. I'd much rather have more performance so that I don't need to rely on upscaling in the first place. Unfortunately, that's not an option these days with AMD jumping onto the bandwagon with FSR. Luckily, that one doesn't need super-duper fancy AI cores, but then that begs the question: why does RDNA 3 have them? What's the point? I just want to play my freakin' games! :laugh: :ohwell:
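To put some rough numbers on point 1, here's a tiny Python sketch of the comparison. The die sizes and launch MSRPs are approximate public figures (worth double-checking against the TPU GPU database), not anything I've measured myself.

```python
# Ballpark GTX 1060 6 GB vs RTX 4060 comparison, using commonly cited figures.
# Treat the die sizes and launch prices as approximations.

cards = {
    "GTX 1060 6GB": {"die_mm2": 200.0, "mem_chips": 6, "launch_usd": 249},
    "RTX 4060":     {"die_mm2": 159.0, "mem_chips": 4, "launch_usd": 299},
}

old, new = cards["GTX 1060 6GB"], cards["RTX 4060"]

die_shrink = 1 - new["die_mm2"] / old["die_mm2"]        # ~20% smaller GPU die
price_hike = new["launch_usd"] / old["launch_usd"] - 1  # ~20% higher launch MSRP

print(f"Die area: {old['die_mm2']:.0f} mm^2 -> {new['die_mm2']:.0f} mm^2 ({die_shrink:.0%} smaller)")
print(f"Memory packages: {old['mem_chips']} -> {new['mem_chips']}")
print(f"Launch MSRP: ${old['launch_usd']} -> ${new['launch_usd']} ({price_hike:.0%} more)")
```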
 
Well, I can tell you GPU utilisation was rarely at 99% without a CPU limit in many games; that's an absolute no-go, and it was by far the worst issue. Frametimes weren't as smooth as they should be either, and no, it's not the game when it runs just fine on an AMD GPU with a flat line. But most of it comes from the beta features they release. RR was terrible at first (those faces, yuck) and frame gen still is, imo. Why should I ever use a technique that feels horrible? If you had told a PC gamer in 2015 that we would be upscaling and generating frames, they would have laughed at you, because it's garbage, and no, upscaling is not better than native; how blind are you, maybe on a phone screen. Yet the raw power barely went up, which is still the only important thing about a GPU besides the VRAM of course, and will be for many years. Because fuck that frame gen bullshit. Sure, I'll buy a 700-1000€ GPU to play horrible-feeling fake frames; and yes, I know all frames are fake, but in this case they really are just doubled. Absolute garbage feature from both vendors. Fuck off with that; give me TFLOPS and better RT core performance, that's all I need.

The price hike started with the 600 series from Nvidia, aka Kepler. That's where it all started. AMD didn't; they only started pushing hard in 2019 with Navi, aka RDNA 1.

The RX 5700 XT was an RX 480/580 successor at best, yet it was double the price.

People always cried about prices, even when 500 bucks got you the best GPU on the market, but back then we at least had awesome 130-buck GPUs too.

But since Turing it went really, really downhill. Even Pascal was expensive as fuck on release, but at least they cut the prices fast, and once it was readily available it was really good.

But Ampere and Ada, lmao, throw them in the trash bin. Nobody should buy any GPUs for one generation so these fuckers finally wake up, but people here think they got a good price when they get a 4090 for 1500€: completely brainwashed and out of touch. The 4090 is such a cut-down chip that it barely qualifies as an 80 Ti card; even with all the inflation and whatnot, 999€ would still be expensive as fuck and Nvidia's margin would still be gigantic. But keep on buying, folks. :laugh:
 
Well, I can tell you GPU utilisation was rarely at 99% without a CPU limit in many games; that's an absolute no-go, and it was by far the worst issue. Frametimes weren't as smooth as they should be either, and no, it's not the game when it runs just fine on an AMD GPU with a flat line. But most of it comes from the beta features they release. RR was terrible at first (those faces, yuck) and frame gen still is, imo. Why should I ever use a technique that feels horrible? If you had told a PC gamer in 2015 that we would be upscaling and generating frames, they would have laughed at you, because it's garbage, and no, upscaling is not better than native; how blind are you, maybe on a phone screen. Yet the raw power barely went up, which is still the only important thing about a GPU besides the VRAM of course, and will be for many years. Because fuck that frame gen bullshit. Sure, I'll buy a 700-1000€ GPU to play horrible-feeling fake frames; and yes, I know all frames are fake, but in this case they really are just doubled. Absolute garbage feature from both vendors. Fuck off with that; give me TFLOPS and better RT core performance, that's all I need.
All I can say is, I agree.

The price hike started with the 600 series from Nvidia, aka Kepler. That's where it all started. AMD didn't; they only started pushing hard in 2019 with Navi, aka RDNA 1.

The RX 5700 XT was an RX 480/580 successor at best, yet it was double the price.

People always cried about prices, even when 500 bucks got you the best GPU on the market, but back then we at least had awesome 130-buck GPUs too.

But since Turing it went really, really downhill. Even Pascal was expensive as fuck on release, but at least they cut the prices fast, and once it was readily available it was really good.

But Ampere and Ada, lmao, throw them in the trash bin. Nobody should buy any GPUs for one generation so these fuckers finally wake up, but people here think they got a good price when they get a 4090 for 1500€: completely brainwashed and out of touch. The 4090 is such a cut-down chip that it barely qualifies as an 80 Ti card; even with all the inflation and whatnot, 999€ would still be expensive as fuck and Nvidia's margin would still be gigantic. But keep on buying, folks. :laugh:
Oh man, how much fun I had with my 750 Ti, or my passively cooled 1050 Ti, back in the day! :cool: Now, anybody tell me how much fun you're having with your current-gen x50 series card! Anyone? No one? :laugh:
 
Efficiency will pretty much always improve over time with new GPU generations, but TDP relative to the performance target can change quite a bit: sometimes chips aren't pushed very aggressively, and other times they're pushed over-aggressively beyond the efficient part of the power curve, and TDP relative to performance skyrockets accordingly.
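To illustrate why pushing past the sweet spot costs so much, here's a toy Python model of a voltage/frequency curve. Dynamic power scales roughly with frequency times voltage squared, and voltage has to climb as clocks go up; the exact numbers below are made up purely for illustration, not measured from any real GPU.

```python
# Toy model: why the last few hundred MHz cost disproportionate power.
# Dynamic power ~ f * V^2, and the voltage needed rises with frequency.
# All constants are invented for illustration only.

def required_voltage(freq_ghz: float) -> float:
    """Hypothetical V/F curve: voltage needed to hold a given clock."""
    return 0.70 + 0.25 * freq_ghz

def relative_power(freq_ghz: float) -> float:
    """Relative dynamic power (switching capacitance folded into the constant)."""
    v = required_voltage(freq_ghz)
    return freq_ghz * v * v

base = relative_power(2.0)
for f in (2.0, 2.4, 2.8):
    print(f"{f:.1f} GHz: {f / 2.0:.0%} clocks -> {relative_power(f) / base:.0%} power")
```

In this made-up curve, +20% clocks costs about +40% power and +40% clocks costs about +90% power, which is the general shape of the problem even if the real figures differ from chip to chip.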
 
I don't know, I know some people who enjoy their 300-buck 4050 (sold as a 4060). :laugh: We had 200-buck 8 GB GPUs 8 years ago, lmao. That RT and DLSS sure is worth it on a card like that, though. :slap: I actually hope it gets even more expensive; then people will cry and I will sip the tears, since the same people enabled these prices.

Look at this and start thinking; maybe some people will finally wake up from the marketing brainwash.

 
I would love to have a GPU that goes all out, no limits attached, with a power draw of 700 W and a water-cooling block preinstalled.
 
All Nvidia GPUs are going all out by default with their boosting behaviour, which has an algorithm behind it; there's a reason overclocking Nvidia GPUs has been almost pointless since Pascal. Stock, you already get something like 95% of the maximum performance the card has to offer, and 5% won't make any difference in any game at all. Especially without an FPS counter, you wouldn't even feel 10% or more.
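As a quick sanity check on how little that last 5% is worth in practice, here's the frame-time arithmetic, assuming performance scaled perfectly with clocks (it usually scales worse than that):

```python
# Best-case value of a 5% overclock at 60 FPS, assuming perfect clock scaling.

base_fps = 60.0
oc_fps = base_fps * 1.05                  # 63 FPS with a "perfect" 5% overclock

base_frametime_ms = 1000 / base_fps       # ~16.7 ms per frame
oc_frametime_ms = 1000 / oc_fps           # ~15.9 ms per frame

print(f"{base_fps:.0f} -> {oc_fps:.0f} FPS, frame time "
      f"{base_frametime_ms:.1f} ms -> {oc_frametime_ms:.1f} ms "
      f"({base_frametime_ms - oc_frametime_ms:.1f} ms saved per frame)")
```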
 
I don't know, I know some people who enjoy their 300-buck 4050 (sold as a 4060). :laugh: We had 200-buck 8 GB GPUs 8 years ago, lmao. That RT and DLSS sure is worth it on a card like that, though. :slap: I actually hope it gets even more expensive; then people will cry and I will sip the tears, since the same people enabled these prices.

Look at this and start thinking; maybe some people will finally wake up from the marketing brainwash.


The average Joe will never really care about the core % or any of that stuff.
To be honest, I don't care either, and it's not what I base my GPU upgrades on.
Price in my country, performance, power draw, and feature set are what I'm mainly interested in; if those are OK with me, then that's all I care about, and I couldn't care less what die size or core % it has compared to previous gens, or whatever else.

To be fair, I upgraded from a GTX 1070 to my second-hand 3060 Ti in September 2022, so it was close enough.
 
Very wise upgrade, man, from one 8 GB GPU to the next. :laugh: In my honest opinion, gamers are the dumbest consumers on the planet; video games prove it and hardware does too.
 
Very wise upgrade, man, from one 8 GB GPU to the next. :laugh: In my honest opinion, gamers are the dumbest consumers on the planet; video games prove it and hardware does too.

Except that I'm yet to play any game where 8 GB was the issue rather than a lack of raw GPU power, but sure, you do know everyone's use case, right? :rolleyes: 'I also don't plan on upgrading my 2560x1080 monitor anytime soon, which is fewer pixels than 1440p, so VRAM issues aren't pressing on me yet on High settings, even in current AAA games.'
+ I'm also using DLSS whenever possible because I like it better than native TAA.
 
The average Joe will never really care about the core % or any of that stuff.
To be honest, I don't care either, and it's not what I base my GPU upgrades on.
Price in my country, performance, power draw, and feature set are what I'm mainly interested in; if those are OK with me, then that's all I care about, and I couldn't care less what die size or core % it has compared to previous gens, or whatever else.

To be fair, I upgraded from a GTX 1070 to my second-hand 3060 Ti in September 2022, so it was close enough.
It doesn't matter whether you look at the number of cores vs the top-end model, PCB complexity, chip design, or just raw performance vs price: it's plain obvious that Ada is an artificial upsell of Ampere.

Except that I'm yet to play any game where 8 GB was the issue rather than a lack of raw GPU power, but sure, you do know everyone's use case, right? :rolleyes: 'I also don't plan on upgrading my 2560x1080 monitor anytime soon, which is fewer pixels than 1440p, so VRAM issues aren't pressing on me yet on High settings, even in current AAA games.'
+ I'm also using DLSS whenever possible because I like it better than native TAA.
That depends on what you play (the link below is time-stamped to the relevant point).

TLDR: 8 GB cards use low quality assets in Halo: Infinite, and they stutter in The Last of Us, even at 1080p.
 
It doesn't matter whether you look at the number of cores vs the top-end model, PCB complexity, chip design, or just raw performance vs price: it's plain obvious that Ada is an artificial upsell of Ampere.


That depends on what you play (the link below is time-stamped to the relevant point).

I'm familiar with that video; I've been watching HUB for years now.
Like I said, my resolution is somewhere between 1080p and 1440p in terms of pixel count and VRAM usage in games; at most I drop from Ultra to High, and it's 100% issue-free at this res.
+ I do not play competitive games either. :)

In fact, I'm starting to run out of raster performance on my GPU in the most demanding games lately, mainly the Unreal Engine 5 titles I've played: I was perfectly fine on VRAM, but that engine hammers my GPU completely and only DLSS saved me. 'Immortals of Aveum, for example: no VRAM issues at all, but it was killing my GPU natively on High settings.'

The Last of Us was more or less fixed a while ago; it's all good at my resolution and High settings, even natively. 'Hogwarts too; I can't really do RT in that game anyway, and no-RT High is not a problem now.'
No issues with The Last of Us at 1080p High here; heck, even 1440p is doable:
 
RT in Hogwarts is a useless gimmick anyway. Cyberpunk is the only game where RT really makes sense relative to the performance hit. Metro was nice too; every other implementation was very forgettable. Raster is 99% and will still be for many, many years; some of us won't be alive when RT overtakes raster completely.
 
Feels like my 4070 is the most efficient GPU I've ever had. Tons and tons of processing power, and it usually stays in the 180 W range. The CPU can pull more power depending on the scenario!
It's kind of weird.

I undervolt my 3080 and I play games capped at either 30 or 60 FPS. The result is I get very favourable wattage vs what I had on something like my 1080 Ti and even my 1070. So I agree with you.

My issue, I suppose, is that Nvidia, like Intel, has basically started treating the very inefficient part of the voltage/frequency curve as part of normal boost clocks. So the cards can peak at very high levels of power (and heat). Good hardware, bad firmware. Unless, of course, you don't care about power and heat efficiency and just want every ounce of performance you can get.

That's the issue: they are more efficient, but not enough to hit the performance targets without very high maximum power.
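For what it's worth, here's a rough sketch of how you could watch board power and cap it from a script instead of riding the top of the boost curve, assuming the pynvml bindings (the nvidia-ml-py package) are installed. Afterburner or nvidia-smi do the same job; the 80% figure is only an example, and actually changing the limit needs admin rights and a card/driver that allows it.

```python
# Read current board power and power limit via NVML (pip install nvidia-ml-py).
# Lowering the limit is left commented out: it needs admin/root and should be
# checked against your own card's allowed range first.

import pynvml

pynvml.nvmlInit()
try:
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

    draw_w  = pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000             # current draw, W
    limit_w = pynvml.nvmlDeviceGetPowerManagementLimit(gpu) / 1000   # current limit, W
    min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(gpu)

    print(f"Drawing {draw_w:.0f} W against a {limit_w:.0f} W limit "
          f"(allowed range {min_mw / 1000:.0f}-{max_mw / 1000:.0f} W)")

    # Example only: cap the card to ~80% of its current limit.
    # new_limit_mw = int(limit_w * 0.8 * 1000)
    # pynvml.nvmlDeviceSetPowerManagementLimit(gpu, new_limit_mw)
finally:
    pynvml.nvmlShutdown()
```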
 
Now start Metro Exodus; no matter your undervolt, you will sit at 300 watts or close to it. ;) That's why it's so funny reading stuff like "hur dur, my GPU is undervolted, 200 watts max", until you launch a game that really makes the shaders glow, and your undervolt, while it does sip less power, is significantly slower than stock.

Also, undervolting modern Nvidia GPUs is dumb and useless; Nvidia already does it by itself. Igor said this, not me. ;)

And when you see how small the difference is between reducing the power limit and undervolting via the frequency curve, you will see he is right.

Well, I won't, as I'm pretty sure I also lowered the power limit. :p But I don't own the game, so I can't test. You mean the Enhanced Edition, I assume? Probably a version that's had RT nonsense added to it to really work the card.
 
RT is no nonsense in Metro Exodus EE; the entire lighting is RT. It doesn't even work on non-RT-capable cards. It's also extremely performant for what it is. :) Even my AMD card can get 4K native 60-plus, and Nvidia owners can use DLSS.

One of the best RT implementations on the market.

This applies to more games; that was just the most intense example. Unreal Engine 4 games are usually very heavy too, but there you can at least get down to 270-290 watts with the 3080. I had one too, in 2020. Great card, 320-bit bus.

If it had had 16 or 20 GB of VRAM like it should have, it would have rendered all Ada cards with less VRAM useless, especially the 70-class cards with their garbage 192-bit bus. That's the reason the 3080 easily outperforms the 4070 in 4K.
 