
Will GPUs continue to have crazy TDPs?

Yep, seems power draw and physical size just go up and up. High-end cards used to take a single 4-pin power connector, and a single-slot cooling solution was just fine! Now double-slot coolers are the standard, even on midrange cards, and sky-high power requirements are fulfilled by connectors that can barely handle the load (and fail at times).

Still, you don't have to have the top of the line, especially if power draw is a concern. It's like complaining about the fuel efficiency achieved by a Ford Mustang when you could drive a Honda Civic instead.
 
This is not a fair comparison. A large cooler, whether air or water, hits diminishing returns much faster on a CPU than on a GPU.
  1. Modern consumer CPUs, especially AMD's, are designed to run up to 90 °C unless you manually limit them. AMD's boost algorithm and PBO will run the CPU as fast as the cooling allows, so better cooling can increase performance with Ryzen CPUs.
  2. Modern consumer CPUs are becoming so compact that it's challenging to transfer heat through the IHS to whatever cooler you have available. Run the CPU with direct-die cooling and temperature is a non-issue.
  3. GPUs are all direct-die cooled. A 4090 with a good cooler may run cooler than a 7800X3D, for example, even though the 4090 produces more than 3x as much heat. It is simply easier to manage heat with a GPU.
Otherwise I agree with you that despite the high power usage, the 4090 has rather impressive performance per watt even without undervolting or power limiting it.

I was kinda leading the question by saying "high power consumer CPUs", as that's almost all Intel at this point, but of course you're spot on about the higher power/energy/heat density of CPUs compared to GPUs. What I'm curious about is: are CPUs pushed harder past their efficiency point than GPUs, which is why they're even more power-dense? Or is it simply an effect of their different internal designs? Both CPUs and GPUs are made on similar nodes; GPUs may lead slightly, but not so different that the node explains it.

Nope. I've never once bought an AIO, and I overclock every CPU I have and run heavy workloads with no heat issues. I don't know what the hell you're talking about, and air coolers can only take up so much space in a predetermined area. GPUs, on the other hand, are so ugly, huge, and heavy, and they keep getting wider and thicker. You can't even have an HDD cage because the GPU just takes over that area of the case. Three slots, what's next, five slots? Let's start making motherboards just for a six-slot GPU so they can make them as ugly and as fast as possible. GPUs have far exceeded air coolers in size over time.

You've figured out that if you buy the right CPU, you don't need a huge cooler to keep it cool. Do the same with GPUs: buy a 250W one just like the 'old' days. 4070 Super is a normal sized card and is everything you're looking for and fits in every case.

Complaining about 350-400W GPUs is like complaining that Lambos exist when you own a BMW 3 Series or a Corvette. You have something good and available already and let other people enjoy their higher power enthusiast parts.
 
Just got my Deck OLED. So, we are talking about high-wattage GPUs...

Here's Dead Space (vanilla), max settings. 8.3 W system power, 1.4 W on the GPU. :rockout:

 
Undervolt or even underclock the 4090 a little and you have an incredibly efficient GPU. Even at 10% less performance than stock, it uses dramatically less power.

This! I just didn't want to spend $1600 + tax though.
 
As a side note, my 7800 XT can do anywhere between 212 and 280 W avg. TBP depending on where I move the power slider in the driver, and the change in performance is so minuscule that I'd say it's unnoticeable. It also plays everything fine in 1440p (sans RT of course), has a double-slot cooler and is super cool and quiet. You don't need the hungriest cards with crazy overclocks for decent gameplay.
 
The 4090 I have can be dropped 100 W+ and the 4070 I grabbed for my brother can be dropped 40-50 W while losing a negligible amount of performance, 5% or so, in a repeatable scenario.
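To put that into perf-per-watt terms (using the rough wattages above and the ~5% figure, nothing measured beyond that):

```python
# Back-of-the-envelope perf-per-watt change from a power limit drop.
# Assumed numbers: ~450 W stock for a 4090, ~100 W shaved off, ~5% perf lost.
stock_power = 450.0      # W (assumption)
limited_power = 350.0    # W (assumption)
perf_loss = 0.05         # ~5% slower in a repeatable scenario

stock_eff = 1.0 / stock_power                     # relative frames per watt at stock
limited_eff = (1.0 - perf_loss) / limited_power   # relative frames per watt limited

print(f"perf/W gain: {limited_eff / stock_eff - 1:.0%}")   # ~ +22%
```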

The majority of the cards, especially on the Nvidia side, are pushed beyond their sweet spots this generation, and last generation they used a crappy Samsung node.

My Ampere cards on the other hand didn't scale as well, but I could drop my 3080 Ti from the 400 W stock of the EVGA FTW3 Ultra to around 345 W with negligible performance loss.

My 2080 Ti Strix did not scale down much from its roughly 300 W stock usage, but it was my best overclocker of the past 4 generations, gaining around 15% performance at a 350 W OC.

My Titan Xp sat around 250w but did not scale down very well and also didn't really oc much.

I prefer progress at any cost; if that means power, so be it. If the 4070 Super/7800 XT at 220-250 W or the 4080/7900 XTX at 250-350 W were the flagship cards, that would be way more meh to me anyways.

I hope every generation has at least one card that really pushes the envelope cost/power be damned.
 
The majority of the cards, especially on the Nvidia side, are pushed beyond their sweet spots this generation, and last generation they used a crappy Samsung node.
Even the stock "sweet spot" isn't the true sweet spot, as all cards run higher voltages than they need, even for the worst chips made. The 4070 Super I have runs around 2700-2800 MHz stock at 1.1 V, yet I think most of them can run 2700 MHz at 0.95 V, which results in a decent drop in power and heat with almost zero loss, and if you spend time tuning, you can even gain performance. I think cards are set up too conservatively, which causes a lot more power draw than they could save if they went a tad more aggressive with voltage, even just dropping the 4070 Super to 1.0 V instead of 1.1 V.
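A crude way to see why the voltage matters so much: dynamic power scales roughly with frequency times voltage squared, so using the clocks and voltages above (and ignoring static/leakage power) you'd expect something like:

```python
# Crude dynamic-power estimate: P ~ f * V^2 (ignores static/leakage power,
# so treat the result as a rough upper bound on the saving).
f_stock, v_stock = 2800, 1.10   # MHz, V (stock, as quoted above)
f_uv,    v_uv    = 2700, 0.95   # MHz, V (undervolted, as quoted above)

ratio = (f_uv / f_stock) * (v_uv / v_stock) ** 2
print(f"clock loss: {1 - f_uv / f_stock:.1%}")         # ~3.6%
print(f"estimated core power saved: {1 - ratio:.0%}")  # ~28%
```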
 
Even the stock "sweet spot" isn't the true sweet spot, as all cards run higher voltages than they need, even for the worst chips made. The 4070 Super I have runs around 2700-2800 MHz stock at 1.1 V, yet I think most of them can run 2700 MHz at 0.95 V, which results in a decent drop in power and heat with almost zero loss, and if you spend time tuning, you can even gain performance. I think cards are set up too conservatively, which causes a lot more power draw than they could save if they went a tad more aggressive with voltage, even just dropping the 4070 Super to 1.0 V instead of 1.1 V.

I mentioned the 5% because that is the typical card-to-card variance I see. My Gigabyte Gaming OC clocks around 2800 MHz at stock vs others I've seen in the 2600 MHz range, and it's the same with the 4070: there is a slight silicon quality variance. Some get the short end of the stick.
 
Yep, seems power draw and physical size just go up and up. High-end cards used to take a single 4-pin power connector, and a single-slot cooling solution was just fine! Now double-slot coolers are the standard...
I've always preferred an overkill cooling solution. It either gives you more performance or a quieter fan. I like both.
I was kinda leading the question by saying "high power consumer CPUs", as that's almost all Intel at this point, but of course you're spot on about the higher power/energy/heat density of CPUs compared to GPUs. What I'm curious about is: are CPUs pushed harder past their efficiency point than GPUs, which is why they're even more power-dense? Or is it simply an effect of their different internal designs? Both CPUs and GPUs are made on similar nodes; GPUs may lead slightly, but not so different that the node explains it.
Maybe Intel's latest stuff is pushed far beyond its optimal power curve but AMD chips are very efficient. Maybe we can applaud Intel for having an architecture that scales to such high power usages, but Intel's market share has been declining.

I cannot speak authoritatively, but I think AMD's boost algorithm is more aggressive than GPUs'. AMD targets 90 °C: assuming power used is within defined limits, the boost algorithm will raise clocks until temperatures reach 90 °C. The more cooling you have, the more performance you get. I am less informed on how Intel's boost algorithm works.

Look up modern consumer CPUs running delidded. They run at very low temperatures. Igorslab has great content showing this.
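To make the "boost until you hit a limit" idea concrete, here's a toy sketch of a limit-driven boost loop. It is not AMD's actual Precision Boost logic, just the general shape of it, and the limit values are assumptions:

```python
# Toy limit-driven boost loop (not AMD's real algorithm, just the idea):
# keep raising clocks until temperature, package power, or the top of the
# voltage/frequency curve stops you. Better cooling pushes the temperature
# ceiling further away, so you settle at higher sustained clocks.
TEMP_LIMIT_C = 90.0       # e.g. a Ryzen-style temperature target (assumption)
POWER_LIMIT_W = 142.0     # e.g. a PPT-style package power limit (assumption)
FMAX_MHZ = 5050           # top of the V/F curve (assumption)

def next_clock(clock_mhz, temp_c, power_w):
    if temp_c >= TEMP_LIMIT_C or power_w >= POWER_LIMIT_W:
        return clock_mhz - 25              # back off when a limit is hit
    return min(clock_mhz + 25, FMAX_MHZ)   # otherwise keep boosting

print(next_clock(4800, temp_c=78.0, power_w=120.0))  # still below limits -> 4825
```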
 
I've always preferred an overkill cooling solution. It either gives you more performance or a quieter fan. I like both.

Maybe Intel's latest stuff is pushed far beyond its optimal power curve but AMD chips are very efficient. Maybe we can applaud Intel for having an architecture that scales to such high power usages, but Intel's market share has been declining.

I cannot speak authoritatively, but I think AMD's boost algorithm is more aggressive than GPUs'. AMD targets 90 °C: assuming power used is within defined limits, the boost algorithm will raise clocks until temperatures reach 90 °C. The more cooling you have, the more performance you get. I am less informed on how Intel's boost algorithm works.

Look up modern consumer CPUs running delidded. They run at very low temperatures. Igorslab has great content showing this.
AMD chips are actually significantly less efficient under low loads (90% of typical usage), due to the IO die and infinity fabric taking up a lot of power (30-40 W minimum).

Zen 6 will fix this with interposer tech/fan out etc. But till then AMD CPUs are only more efficient under full load.
 
It seems out of hand. Let's not focus on the monstrous sizes they have become, or the sag, but why aren't they doing anything about TDP? 300-450 W for a GPU is out of hand. CPUs have stayed pretty stable with theirs; maybe the higher-end Intel chips have gotten high, but those are just a few. If CPUs can stay pretty low, why do GPUs keep getting higher and higher? The jumps are quite large for the higher-end ones. Will this trend continue? At the pace we're at now, we'll be well over 600 W for a card in the next few years. And with electricity prices jumping, that's paying for the card twice over its lifetime, or at least a good chunk of it. It seems like they put no effort into efficiency. Hell, I remember a time when I thought 180 W was a lot for a GPU.
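Quick back-of-the-envelope on the electricity cost claim, with every input being an assumption you'd swap for your own usage and local prices:

```python
# Rough lifetime electricity cost of a high-TDP card.
# Every input here is an assumption; plug in your own usage and local price.
gpu_power_kw = 0.45      # a 450 W card under gaming load
hours_per_day = 3.0      # average gaming hours per day
price_per_kwh = 0.40     # local electricity price per kWh (pick your own)
years = 4                # how long you keep the card

energy_kwh = gpu_power_kw * hours_per_day * 365 * years
cost = energy_kwh * price_per_kwh
print(f"{energy_kwh:.0f} kWh over {years} years, roughly {cost:.0f} in electricity")
```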

Out of hand... more like you're out of touch... 180 W a lot? *rolls eyes*

And have you heard of undervolting...

AMD chips are actually significantly less efficient under low loads (90% of typical usage), due to the IO die and infinity fabric taking up a lot of power (30-40 W minimum).

Zen 6 will fix this with interposer tech/fan out etc. But till then AMD CPUs are only more efficient under full load.

So you are claiming you can game on a 14900k with it consuming less than 30w...
 
AMD chips are actually significantly less efficient under low loads (90% of typical usage), due to the IO die and infinity fabric taking up a lot of power (30-40 W minimum).
True... but it's more like 20-25 W tops for the IOD, at least on a properly configured system. Unless Zen 4 parts use more than that... I really miss this info.
It's the trade-off of having a scalable architecture. Chiplets have been used for 4.5 years now, and it took me a very long time to understand that AMD used chiplets not for cost reduction per se, but for scalability: the exact same architecture from 8-core up to 96-core parts.
So this may have some impact on general consumer products like Ryzen, but it becomes less and less of a negative as you progress up to higher (16+) core count CPUs.

It's a similar attempt (though not quite the same) with RDNA3. We'll see how that goes.

EDIT: typo
 
True... but it's more like 20-25 W tops for the IOD, at least on a properly configured system. Unless Zen 4 parts use more than that... I really miss this info.
It's the trade-off of having a scalable architecture. Chiplets have been used for 4.5 years now, and it took me a very long time to understand that AMD used chiplets not for cost reduction per se, but for scalability: the exact same architecture from 8-core up to 96-core parts.
So this may have some impact on general consumer products like Ryzen, but it becomes less and less of a negative as you progress up to higher (16+) core count CPUs.

It's a similar attempt (though not quite the same) with RDNA3. We'll see how that goes.

EDIT: typo
It's 100% for cost reduction.

The idle power wouldn't be an issue, except that the interposer technologies needed to do chiplets efficiently are expensive.

You can learn about how they'll likely fix this flaw here.


Just a shame we have to wait until the sixth generation of an otherwise good core for AMD to step up and pay for the fix.

You can also notice the chiplet problem with the chiplet-based, larger-die RDNA3 cards; they aren't good at low loads.
 
AMD chips are actually significantly less efficient under low loads (90% of typical usage), due to the IO die and infinity fabric taking up a lot of power (30-40 W minimum).

Zen 6 will fix this with interposer tech/fan out etc. But till then AMD CPUs are only more efficient under full load.
I do not really dispute this. It is similar with the multi-chiplet GPUs.
 
I've always preferred an overkill cooling solution. It either gives you more performance or a quieter fan. I like both.
This, I agree with, but within reason. If your GPU can run whisper quiet with a dual-slot, dual fan config, then there's no reason why it should have a 2 kg, 4-slot cooler held by a stand.

Maybe Intel's latest stuff is pushed far beyond its optimal power curve but AMD chips are very efficient. Maybe we can applaud Intel for having an architecture that scales to such high power usages, but Intel's market share has been declining.
I'm not gonna applaud Intel for giving us the same gaming performance as the 7800X3D with three times the power consumption.

I cannot speak authoritatively, but I think AMD's boost algorithm is more aggressive than GPUs'. AMD targets 90 °C: assuming power used is within defined limits, the boost algorithm will raise clocks until temperatures reach 90 °C. The more cooling you have, the more performance you get. I am less informed on how Intel's boost algorithm works.
No. They run either at the top of the voltage-frequency curve, or at the power consumption / electrical limits, or at the temperature limit, whichever they reach first, just like Intel CPUs do. The 14900K will run at 100 °C just like the 7700X will run at 95 °C if your cooler can't handle it. My 7800X3D, on the other hand, never goes above 82-83 °C under a be quiet! Dark Rock 4 because it reaches the top of its V-F curve before it could reach any other limit. If you reach a temperature limit with any CPU, you just need a better/different cooler (or a less powerful CPU), that's all.

Edit: AMD only said that their CPUs are "designed to run at max temp" (which I hugely disagree with) because compact chiplets are way harder to cool than less dense, monolithic chips are, and they're also way more fussy with the coldplate design for some reason.

AMD chips are actually significantly less efficient under low loads (90% of typical usage), due to the IO die and infinity fabric taking up a lot of power (30-40 W minimum).

Zen 6 will fix this with interposer tech/fan out etc. But till then AMD CPUs are only more efficient under full load.
That's also not true. AMD is less efficient at idle because of the IO die consuming 15-25 W (depending on RAM and IF speed and voltage), but once you put it under any kind of load, it's way more efficient than top-tier Intel. My 7800X3D is an example again: 25-28 W at idle (which is crap, I know), but 38 W in Cinebench ST at 5.05 GHz, and around 50-ish W in games.
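To put the idle-vs-load trade-off into numbers with a toy daily usage mix (the load wattages are roughly the figures quoted in this thread; the hours and the monolithic chip's idle figure are made-up assumptions):

```python
# Toy daily energy tally: high idle / low load vs low idle / high load.
# Load wattages are roughly the figures quoted in this thread; the idle
# figure for the monolithic chip and the hours are assumptions.
idle_hours, load_hours = 8.0, 3.0

chiplet_idle_w, chiplet_load_w = 27.0, 50.0     # ~7800X3D-ish numbers from above
mono_idle_w,    mono_load_w    = 10.0, 150.0    # assumed low idle, ~3x gaming power

chiplet_wh = chiplet_idle_w * idle_hours + chiplet_load_w * load_hours
mono_wh    = mono_idle_w * idle_hours + mono_load_w * load_hours
print(f"chiplet CPU:    {chiplet_wh:.0f} Wh/day")   # 366 Wh
print(f"monolithic CPU: {mono_wh:.0f} Wh/day")      # 530 Wh
```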

This is a must-watch on this topic:
 
It's 100% for cost reduction.
It is also about yields. You're always going to have imperfections in the silicon that render dies either DOA or defective.

Imagine what the yield would be trying to make monolithic versions of the current EPYC dies that are 100% working. It also means it's far easier to bin individual parts, as dies that don't make, say, 7950X turbo speeds may be fine for EPYC due to the lower intended clocks.

That flexibility is a great boon when you are a foundry customer, as the number of dies you can effectively harvest per wafer is so much higher vs monolithic dies. Has anyone noticed how they haven't had to pair two quad-core CCDs to make up 8-core Ryzen parts? I suspect this is because they have been able to get decent enough yields with this approach not to require it, as well as having enough demand in the EPYC lineup to use those dies there. A rough illustration of the yield effect is sketched below.
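The yield argument is easy to put rough numbers on with the simple Poisson defect model; the defect density and die areas below are illustrative assumptions, not foundry figures:

```python
import math

# Simple Poisson yield model: yield = exp(-defect_density * die_area).
# Defect density and die sizes are illustrative assumptions only.
defects_per_mm2 = 0.001          # 0.1 defects per cm^2
ccd_area_mm2 = 70                # one small CCD-sized chiplet
mono_area_mm2 = 70 * 12          # hypothetical monolithic 96-core die

ccd_yield = math.exp(-defects_per_mm2 * ccd_area_mm2)
mono_yield = math.exp(-defects_per_mm2 * mono_area_mm2)
print(f"chiplet yield:    {ccd_yield:.0%}")   # ~93%
print(f"monolithic yield: {mono_yield:.0%}")  # ~43%
```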


You can also see this movement with Intel with their tile-based approach and Foveros technology. I look forward to the possibility of Intel putting the I/O die below the higher-heat-output cores, hopefully meaning we don't see socket sizes following the same size increases we have seen with GPUs over the years. (Look at Threadripper Pro boards and the sizes we are already dealing with.)
 
oxrufiioxo said:
The 4090 I have can be dropped 100 W+ and the 4070 I grabbed for my brother can be dropped 40-50 W while losing a negligible amount of performance, 5% or so, in a repeatable scenario.
AMD chips are actually significantly less efficient under low loads (90% of typical usage), due to the IO die and infinity fabric taking up a lot of power (30-40 W minimum).
Zen 6 will fix this with interposer tech/fan out etc. But till then AMD CPUs are only more efficient under full load.
The IO die seems to consume a pretty constant 12-13 W, at least in my experience with Ryzen 5000 and 7000 CPUs. Idle power usage of the CPU package is usually 35-40 W.
Efficiency under load comes from the same thing Intel used to beat AMD into submission with: a smaller manufacturing node. Intel is still on Intel 7 while Zen 4 CCDs are TSMC N5, which is a full node ahead.
I cannot speak authoritatively, but I think AMD's boost algorithm is more aggressive than GPUs'. AMD targets 90 °C: assuming power used is within defined limits, the boost algorithm will raise clocks until temperatures reach 90 °C. The more cooling you have, the more performance you get. I am less informed on how Intel's boost algorithm works.

Look up modern consumer CPUs running delidded. They run at very low temperatures. Igorslab has great content showing this.
I do not think this is about a more aggressive boost algorithm. It is more about power density: the small size of the CCD and the power pushed into it. It is difficult to get the heat out of there, especially compared to Intel, where the amount of power is roughly the same but the area is quite a bit larger. What this means for the boost algorithm is that in Intel's case temperature is not as important a factor as it is in AMD's case.
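Some ballpark heat-flux numbers to illustrate the density difference (die areas and sustained powers are rough assumptions, not measurements):

```python
# Ballpark heat-flux comparison: compact chiplets vs a larger monolithic die.
# Die areas and sustained package powers are rough assumptions for illustration.
ccd_area_mm2  = 2 * 70     # two small CCDs on a 16-core chiplet part
ccd_power_w   = 205        # ~230 W package limit minus ~25 W for the IO die
mono_area_mm2 = 257        # a larger monolithic desktop die
mono_power_w  = 253        # sustained boost power limit

print(f"chiplets:   {ccd_power_w / ccd_area_mm2:.2f} W/mm^2")    # ~1.46
print(f"monolithic: {mono_power_w / mono_area_mm2:.2f} W/mm^2")  # ~0.98
```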
 
I've always preferred an overkill cooling solution. It either gives you more performance or a quieter fan. I like both.
Sure, that's a fair point and that's why aftermarket coolers exist. My point was that in the past, single slot coolers were sufficient, not required, for high end cards, and these days we're having to use dual slot coolers even on midrange parts.
 
Sure, that's a fair point and that's why aftermarket coolers exist. My point was that in the past, single slot coolers were sufficient, not required, for high end cards, and these days we're having to use dual slot coolers even on midrange parts.

These days everyone has the extra PSU power for at least a 150 W card (see: 4060 and 7600), and there's no way you can cool that with a single-slot cooler. 75 W is probably your maximum, using the 51 W single-slot, half-height RX 6400 as a guide. There's no demand for single-slot coolers except under rare, specific, restrictive conditions.
 
Every little bit counts. I look at these things ahead of time: I set a line and try not to pass it. How can they make one card decently efficient like the 6650 XT, then go bonkers with the 6700 XT? Nvidia is far worse with TDP. I was thinking of getting the 5700 XT in the past, but that TDP is trash, and then the 6600 XT showed the mistake they had made; it's evident they realized this. Since Covid, all hell broke loose with TDP. Efficient cards are not a thing, and it's the same thing with buying PC hardware in general: you can always just add another $10 and jump a little bit higher, a little bit more power, a little more speed, etc., but I set criteria and try to find products that fit within them.

Remember that more and more people want higher frame rates too. In some games I am using less power than my old 390X did, and that was at 1080p, not 4K.

 
Sure, that's a fair point and that's why aftermarket coolers exist. My point was that in the past, single slot coolers were sufficient, not required, for high end cards, and these days we're having to use dual slot coolers even on midrange parts.
Well, and then there were times when GPUs were passively cooled.

It's simple physics why we got where we are.
 
Did you check your memory temp? Mine goes over 80 °C with an undervolt and no OC on the VRAM. Last summer, when I had over 30 °C in my room, it got to 88-90 °C. The Asus Dual does not seem to have as good a solution for cooling the VRAM as other coolers do.

Just ran 3DMark Time Spy and the memory junction temp looks alright, not the best but not the worst: 68 °C max, with the GPU at 61.7 °C and the GPU hot spot at 78.2 °C. This is with the stock fan curve, nothing custom, may I add.

And I saw the card pulling 195 W, with HWiNFO showing 195.9 W, so not bad really. I haven't had time to really test undervolting; my card wasn't happy with the quick test I did, but the memory OC was stable instead.
 