
NVIDIA GeForce RTX 4070 has an Average Gaming Power Draw of 186 W

12GB is a repeat of the 3070's situation. That card could only just fit new AAA games into its VRAM buffer at launch, but less than two years later we're already seeing it having to drop settings and suffering stuttering issues. The same is likely to happen to the 4070 / 4070 Ti. This kind of price for a card that will last less than two years is not what I'd call acceptable, and it'll kill PC gaming, as the vast majority of people cannot afford to drop that kind of money on just their GPU every two years.
I think you are right but this time around it will be much longer than two years, so it won't matter as much:

The Xbox and PS5 both have 16GB of RAM, usually allocating 10-12GB as VRAM, with both consoles targeting 4K. Both consoles are "current" for the next 3-4 years, and when their successors appear in 2027 (rumoured), game devs won't instantly switch to optimising for the newest consoles; they tend to move away from the outgoing generation over a year or so, while the vast majority of their paying customers are still on the older hardware.

IMO 12GB is enough for at least 3 years, maybe even 5. Meanwhile, the 10GB of the 3080 and 8GB of the 3070 were widely questioned at launch - I forget whether it was a Sony or Microsoft presentation that claimed up to 13.5GB of the shared memory could be allocated to graphics, but the point is that we had entire consoles with 13.5GB of VRAM costing less than the GPUs in question that were hobbled out of the gate by miserly amounts of VRAM.

The 3070 in particular has been scaling poorly with resolution for a good year now, but it's only in the last couple of months that the 3070 has really struggled. In 2023 we've had four big-budget AAA games which run like ass at maximum texture quality on 8GB cards, with 3070 and 3070Ti owners given the no-win choice between stuttering or significantly lower graphics settings.
 

Yeah 12GB is the bare minimum but it should be ok through the console generation except for with terrible ports.

Also, the Series S allocates around 8.5 GB for games and targets 1080p, so all the sub-$400 GPUs should be fine for 1080p... although $400 for an 8GB GPU is kinda sad.
 
RX 6800 can be had for around $500 on Amazon in the US so it's going to be cheaper. It also has more VRAM, which is important given games are already using more than the 4070 Ti's 12GB frame buffer.

$485 on Newegg atm
 
Frequency and voltage automatically go down as the GPU approaches its power limit, so it becomes voltage-limited because there is no more headroom at those frequency steps; this happens on pretty much every GPU. If you look at TPU's power figures, practically all GPUs run at exactly their power limit.
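The behaviour being argued about can be sketched with a toy model (illustrative Python only, not vendor firmware; the V-F curve steps and the constant in the power formula are made-up numbers): a governor walks down a voltage-frequency curve until the resulting power fits under the configured limit, which also shows how a generous limit leaves the card voltage-limited at the top of the curve instead.

```python
# Toy model of GPU DVFS near a power limit. Assumes dynamic power
# P = C * f * V^2 and a fixed voltage-frequency curve; all numbers
# are invented for illustration, not taken from any real card.

# (frequency in MHz, voltage in V) steps of a hypothetical V-F curve, ascending
VF_CURVE = [(1500, 0.70), (1800, 0.80), (2100, 0.90),
            (2400, 1.00), (2700, 1.05)]

def power_w(freq_mhz, volts, c=1.5e-7):
    # Simple dynamic-power model: P = C * f * V^2 (f converted to Hz)
    return c * (freq_mhz * 1e6) * volts ** 2

def pick_operating_point(power_limit_w):
    # Walk the curve from the top down until a step fits the power budget,
    # mimicking how frequency and voltage are shed together.
    for freq, volts in reversed(VF_CURVE):
        if power_w(freq, volts) <= power_limit_w:
            return freq, volts
    return VF_CURVE[0]  # floor of the curve

# A tight limit forces the card below the top step (power-limited);
# a generous one lets it sit at the top of the curve (voltage-limited).
print(pick_operating_point(200))  # -> (1800, 0.8)
print(pick_operating_point(450))  # -> (2700, 1.05)
```

With the same curve, only the chosen power limit decides which regime the card lands in, which is the crux of the disagreement in the posts below.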



Because that card is fast enough for games to become CPU limited in a lot more scenarios, a card like the 4070 will almost certainly be power limited 100% of the time.
The maximum stock voltage on a 4090 is 1050 mV, and it will not hit the power limit in most games at that voltage. In general, if a GPU is constantly hitting its power limit, it just means that the chosen power limit is too low for the programmed voltage-frequency curve. That's the case with Ampere and RDNA3, but if your power limit is high enough you'll just hit the top of the voltage-frequency curve and the limit will be voltage.

I mean, my 4090 isn't CPU-bottlenecked in Port Royal and it'll still draw less than 400W in some scenes. I don't think it ever hits 450W in that test. Why? Because it's hitting the top of the voltage-frequency curve before it hits the power limit. That's a voltage limit. That's also how most games work. I can show you some screenshots at DSR 4K resolutions if you want.

The Igor's Lab results are not because of a CPU bottleneck. A 4070 Ti isn't going to be CPU-bottlenecked at 4K.

You don't seem to understand that a power limit is just a number chosen by Nvidia or AMD. The 4090 could've easily been a 200W TDP card if Nvidia wanted to but then it would've indeed always been power limited. With 450W that's not the case and the same might be true for 200W on the 4070.
 
Because it's hitting the top of the voltage-frequency curve before it hits the power limit. That's a voltage limit. That's also how most games work.
You realize this makes no sense, right?

If the GPU is limited by voltage then it's always going to be that way; it makes no sense to design a card that hits a voltage limit before the power limit is reached.
In general if a GPU is constantly hitting the power limit it just means that the chosen power limit is actually too low for the programmed voltage-frequency curve.
You have this completely backwards: the reason the voltage-frequency curve exists in the first place is to regulate power and temperature. GPUs are specifically designed to hit their designated power targets; it's the power and temperature limits which dictate how high the frequencies go, not the other way around.
 
Well, that's how Nvidia designed it. I have a 4090, so I know what I'm talking about: I can see it has trouble hitting 450W in most things. It makes things less efficient, but plenty of hardware works like that; it's not that uncommon. A lot of AMD CPUs work like that in games: they'll hit the top of the frequency curve before they hit the power limit.

I can assure you there's no CPU bottleneck happening here:
[Chart: Cyberpunk 2077 average power draw, 2160p, DX12, Ultra, RT on]
[Chart: Marvel's Guardians of the Galaxy average power draw, 2160p, DX12, max settings, RT on]

I can show you Spider-Man with RT at 8K at sub-40 fps not hitting 450W if you want a personal example. I don't get why you keep denying reality when there are so many examples you can check. It's fine to think it makes no sense, but I think a CPU that hits TJmax before hitting the power limit or voltage limit makes even less sense, and yet that also exists. It's really weird how you act as if you know better than people who actually own the card and can test it.

Also, I'm pretty sure a voltage-frequency curve is mostly dependent on the process node in combination with the architecture.

This is my stock voltage curve, and once I hit 2790 MHz it'll not go up no matter how little power I'm using, as the maximum voltage Nvidia allows at stock is 1050 mV.
[Screenshot: stock voltage-frequency curve]
 
They'll hit the top of the frequency curve before they hit the power limit.
I mean, I am not going to go over this a million times, but that's absolutely not how most cards are designed to operate. I am 100% sure that if you simply increase the power target, your card will run at higher average clock speeds even though nothing about the frequency curve would change, proving that I am right and these GPUs are designed to adhere to the power limit above all else.

I can guarantee you the 4070 will run at its designated power target all the time, the same way the 4070 Ti does:

[Chart: RTX 4070 Ti power consumption]


I can show you Spider-Man with RT at 8K at sub 40 fps not hitting 450W if you want a personal example
No, because it wouldn't prove anything: RT cores would be the bottleneck in that case, leaving shaders underutilized, and it's well known that RT workloads result in lower power consumption versus pure rasterized workloads on Nvidia cards. Also, Spider-Man is known to be pretty heavy on the CPU under any circumstances, so it would be a bad choice anyway.
 