Thursday, May 9th 2024

NVIDIA Testing GeForce RTX 50 Series "Blackwell" GPU Designs Ranging from 250 W to 600 W

According to Benchlife.info insiders, NVIDIA is reportedly testing designs with various Total Graphics Power (TGP) levels, ranging from 250 W to 600 W, for its upcoming GeForce RTX 50 series "Blackwell" graphics cards. The designs span from a 250 W configuration aimed at mainstream users to a more powerful 600 W configuration tailored for enthusiast-level performance. The 250 W cooling system is expected to prioritize compactness and power efficiency, making it an appealing choice for gamers seeking a balance between capability and energy conservation. This design could prove particularly attractive for those building small form-factor rigs, or for AIBs looking to offer smaller coolers. On the other end of the spectrum, the 600 W cooling solution represents the highest TGP of the stack and may exist only for testing purposes. Other SKUs with different power configurations fall in between.

We previously saw NVIDIA test a 900 W version of the Ada Lovelace AD102 GPU, which never saw the light of day, so we should take this testing phase with a grain of salt. Often, engineering silicon is the first batch made to enable software and firmware development, while the final silicon is more efficient, optimized to draw less power, and aligned with regular TGP structures. The current highest-end SKU, the GeForce RTX 4090, carries a 450 W TGP. So, take this report with some reservations as we wait for more information to come out.
Source: Benchlife.info

84 Comments on NVIDIA Testing GeForce RTX 50 Series "Blackwell" GPU Designs Ranging from 250 W to 600 W

#1
Kn0xxPT
pumping power trend continues.
#2
dgianstefani
TPU Proofreader
If efficiency (work done/power used) increases, which it has significantly with RTX 40xx compared to RTX 30xx, the power limits don't bother me.
#3
Kn0xxPT
dgianstefani: If efficiency (work done/power used) increases, which it has significantly with RTX 40xx compared to RTX 30xx, the power limits don't bother me.
But... 600 W for a GPU is a bit scary... it should have an "Electric Hazard" sticker on it...
#4
dgianstefani
TPU Proofreader
Kn0xxPT: But... 600 W for a GPU is a bit scary... it should have an "Electric Hazard" sticker on it...
It's not scary at all.
#5
grammar_phreak
dgianstefani: It's not scary at all.
It won't be scary if it has a functional power connector.
#6
N/A
Just add a second 2x6 pin to the flagship. All the current designs are reusable; they just keep recycling things, pin-to-pin compatible even. Nothing changes. Same N4/N4P node, same difference.
#7
oxrufiioxo
Kn0xxPT: pumping power trend continues.
They tested up to 900 W for Ada; that doesn't mean they will actually release a GPU capable of pulling that wattage.
#8
cvaldes
Guys, companies like Apple, Nvidia, AMD, Intel, etc. don't ship everything that passes POST in their labs.

We all have to wait to see what Nvidia decides to productize for the RTX 50 series. But it most certainly won't be every single configuration they have tested.
#9
natr0n
MSI AB power limit slider - oh, how much I appreciate you.

Nice card render. So they will all have HBM?
#10
cvaldes
natr0n: MSI AB power limit slider - oh, how much I appreciate you.

Nice card render. So they will all have HBM?
I read GPU rumor articles pretty frequently and I've seen nothing about any expectation that the next generation of consumer videocards will be using HBM.

The benefits of HBM are better harnessed in the datacenter so I assume almost all of the HBM parts will go into AI accelerators for enterprise customers. Typical consumer workloads like gaming won't exploit the full HBM performance envelope. I don't see that changing anytime soon. If a videogame is going to sell well, it needs to run decently on a wide variety of hardware including more modestly specced systems like mid-range notebook GPUs and consoles.

From a business standpoint, there is very little logic in putting HBM in some consumer cards and regular GDDR memory in others. The more sensible solution has been used before: put better specced GDDR in the high end cards and less expensive memory in the entry-level and mid-level cards.
#11
Nordic
dgianstefani: If efficiency (work done/power used) increases, which it has significantly with RTX 40xx compared to RTX 30xx, the power limits don't bother me.
While the 4090 is fairly efficient in performance per watt, I personally don't want a 600 W card; I would rather not have roughly 600 W of heat in my office.

A 250 W Blackwell card with 2x the performance of my 6750 XT would be compelling. I doubt it would be anything resembling a reasonable price, given how far ahead of AMD they are.
#12
cvaldes
Kn0xxPT: But... 600 W for a GPU is a bit scary... it should have an "Electric Hazard" sticker on it...
There are appliances in your house that draw far more current from a standard outlet.

Toasters, microwave ovens, hair dryers, and more. And you've probably had them for years, maybe even decades.

The device in question needs to be carefully designed and properly used. It's the same whether it's a graphics card or a stand mixer.

That said, 600 W is probably nearing the practical limit for a consumer-grade GPU. Combined with other PC components, display, peripherals, etc., that's using up a lot of what's available on a standard household circuit (15 A @ 120 V here in the USA). And it's not really wise to push the wiring and breaker in a standard household circuit to the limit for long periods of time.
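For anyone curious, here is the back-of-the-envelope version of that, with assumed (illustrative) figures for everything except the rumored 600 W TGP:

```python
# Rough circuit-budget math for a US 15 A / 120 V household circuit.
# All component figures below are assumptions for illustration, not measurements.
circuit_w = 15 * 120                 # 1800 W nominal capacity
continuous_w = 0.8 * circuit_w       # ~1440 W, the common 80% continuous-load guideline

gpu_w = 600       # rumored top TGP
cpu_w = 250       # assumed high-end CPU under load
rest_w = 150      # assumed motherboard, drives, fans, PSU losses
display_w = 60    # assumed monitor

total_w = gpu_w + cpu_w + rest_w + display_w
print(f"{total_w} W of a {continuous_w:.0f} W continuous budget "
      f"({total_w / continuous_w:.0%} used)")
```

Even with those generous assumptions you are already around three quarters of the continuous budget, which is why 600 W feels like a practical ceiling.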
#13
RogueSix
The RTX 5000 series needs a performance boost so they MUST go with the same 600W envelope as the RTX 4090 on the top SKU. No doubt about it.

Remember: These new cards will once again be built on a 5nm process just like the RTX 4000 series. It will be a slightly optimized process over "4N" but the gains from that optimization will be minimal. Any and all performance gains will have to come from the new Blackwell architecture and, if the rumors are true, from a broader 512-bit memory interface.
In fact, pure desperation to achieve performance gains is probably the sole reason why the RTX 5090 will receive a 512-bit bus. If the RTX 5000 series were built on 3nm, we would probably get the same 384-bit bus to save on costs and power requirements.

We can thank AI for all of that. It will be quite a disappointment in H2/2024 and early 2025. Looks like both nVidia and AMD have dedicated all of their 3nm capacity solely to AI/datacenter. I would even go so far as to bet that they have made a non-public arrangement behind the scenes about leaving their consumer stuff on 5nm. Zen 5 will also be another 5nm CPU, and RDNA 4 is also bound to be produced on 5nm.

What we will be getting in the consumer space in the next few months and into next year will most likely be pretty damn incremental and iterative. In a bit of a twist of irony, the real next big thing could actually come from Intel in the form of Panther Lake in mid or H2/2025 (if Intel 18A is at least half as good as Pat is trying to make us believe).
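For context, the raw bandwidth math behind that 512-bit argument; the GDDR7 speed here is an assumption, not a confirmed spec:

```python
# Memory bandwidth = (bus width in bits / 8) * per-pin data rate in Gbps -> GB/s.
def bandwidth_gb_s(bus_bits: int, pin_gbps: float) -> float:
    return bus_bits / 8 * pin_gbps

print(bandwidth_gb_s(384, 21))   # 1008.0 GB/s - RTX 4090 class (384-bit GDDR6X at 21 Gbps)
print(bandwidth_gb_s(512, 28))   # 1792.0 GB/s - a hypothetical 512-bit card on 28 Gbps GDDR7
```

So a wider bus (plus faster memory) is one of the few levers left when the process node barely moves.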
#14
dgianstefani
TPU Proofreader
RogueSix: The RTX 5000 series needs a performance boost so they MUST go with the same 600W envelope as the RTX 4090 on the top SKU. No doubt about it.

Remember: These new cards will once again be built on a 5nm process just like the RTX 4000 series. It will be a slightly optimized process over "4N" but the gains from that optimization will be minimal. Any and all performance gains will have to come from the new Blackwell architecture and, if the rumors are true, from a broader 512-bit memory interface.
In fact, pure desperation to achieve performance gains is probably the sole reason why the RTX 5090 will receive a 512-bit bus. If the RTX 5000 series were built on 3nm, we would probably get the same 384-bit bus to save on costs and power requirements.

We can thank AI for all of that. It will be quite a disappointment in H2/2024 and early 2025. Looks like both nVidia and AMD have dedicated all of their 3nm capacity solely to AI/datacenter. I would even go so far as to bet that they have made a non-public arrangement behind the scenes about leaving their consumer stuff on 5nm. Zen 5 will also be another 5nm CPU, and RDNA 4 is also bound to be produced on 5nm.

What we will be getting in the consumer space in the next few months and into next year will most likely be pretty damn incremental and iterative. In a bit of a twist of irony, the real next big thing could actually come from Intel in the form of Panther Lake in mid or H2/2025 (if Intel 18A is at least half as good as Pat is trying to make us believe).
Desperation?

My dude, their main competitor is supposedly not even trying the high end this upcoming gen.

NVIDIA are competing with themselves, and I don't think they're going to find it difficult.

Reminder that most 4090 cards use around 450 W out of the box. People like to throw the "600 W" figure around but that's just the max power limit, and only for some cards.

#15
damric
I would like chip designers to get back to shipping processors with default clocks dialed in at peak efficiency, and leaving plenty of meat on the bone for overclockers.
#16
bonehead123
Kn0xxPT: it should have an "Electric Hazard" sticker on it
More like a radiation hazard sticker, for the reactor that you will need to provide the juice for it, hehehe :)
#17
cvaldes
damric: I would like chip designers to get back to shipping processors with default clocks dialed in at peak efficiency, and leaving plenty of meat on the bone for overclockers.
I would like chip designers to continue putting the OC headroom in the boost clock so I don't have to spend hours fiddling with various tools to do what GPU engineers can do better than me. They have advanced degrees in electrical engineering, mathematics, physics, etc., plus years of experience as paid professionals, not dilettantes. I don't. And I already paid them when I bought my card.
#18
LabRat 891
600 W design power?
Can we have active phase-change heat pump loops yet (again)?
#19
Nordic
damric: I would like chip designers to get back to shipping processors with default clocks dialed in at peak efficiency, and leaving plenty of meat on the bone for overclockers.
It really isn't hard to undervolt down to peak efficiency. You can even use the power slider if you want a quick and dirty way of doing it. My 6750 XT is most efficient at around a 50% power level, or alternatively at 2000 MHz with a heavy undervolt. It took me less than an hour to find that. The 4090 is also incredibly efficient when undervolted.
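If you'd rather script it than drag sliders, here is a minimal sketch of the quick-and-dirty power-limit route on the NVIDIA side (the 250 W value is just an example; nvidia-smi needs admin rights and the limit has to sit within the card's allowed range):

```python
import subprocess

# Cap GPU 0 at an example 250 W board power limit (quick-and-dirty efficiency tuning).
subprocess.run(["nvidia-smi", "-i", "0", "-pl", "250"], check=True)

# Read the limit back to confirm it stuck.
subprocess.run(["nvidia-smi", "-i", "0",
                "--query-gpu=power.limit", "--format=csv"], check=True)
```

A proper undervolt (lower voltage at the same clock) still tends to beat a plain power cap, but the cap is the zero-effort option.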
#20
Lew Zealand
damric: I would like chip designers to get back to shipping processors with default clocks dialed in at peak efficiency, and leaving plenty of meat on the bone for overclockers.
"Peak efficiency" changes with different jobs and frequently you continue to get more efficiency per watt as you continue to lower clocks. Are you saying you want to run at 2000 MHz all the time? I don't and nobody here does either. And this is coming from someone using their i7-9700F right now at 3.5 GHz (thanks @unclewebb for the fantastic Throttlestop). I turn it up in games that need it but most don't.

I choose to run at this efficient speed but guess what? If I run at 3.0 GHz it's even (a little) more efficient. Maybe you could run your CPU with Turbo off at all times to maximize your efficiency but most people want the best performance out of the box. The rest of us weirdos can tune for efficiency afterwards.

And if Blackwell has better efficiency than Ada (it should), then bring on the 600 W GPUs. You can tune those for efficiency, and if you don't like the power draw, it sounds like Nvidia will have a 250 W option for you. Which you can tune as well!
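To put numbers on that, a tiny sketch with made-up clock/power/score points, purely to illustrate how efficiency keeps improving as clocks drop:

```python
# Made-up (clock MHz, package power W, benchmark score) points, for illustration only.
samples = [
    (4600, 95, 100),   # stock boost
    (3500, 45, 82),
    (3000, 34, 73),
    (2500, 26, 63),
]

for clock, watts, score in samples:
    print(f"{clock} MHz: {score / watts:.2f} score per watt")
# Work per watt keeps rising as clocks fall, so "peak efficiency" depends on how much
# performance you're willing to trade away, not on one magic frequency.
```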
#21
sephiroth117
450 W is the max I'm willing to pick, be it a 5080 or 5080 Ti.

600 W, lol. There are PSUs with just 600 W available for the whole damn PC, and now that's just the GPU.
#22
LabRat 891
sephiroth117: 450 W is the max I'm willing to pick, be it a 5080 or 5080 Ti.

600 W, lol. There are PSUs with just 600 W available for the whole damn PC, and now that's just the GPU.
At this point, I'd be all for a separate PSU (on-card or external, à la the multi-chip Voodoo cards).
-48 VDC, converted to 12 V on-card?
#23
Assimilator
250W is pretty high for the lowest product on the stack.
#24
cvaldes
It's only a rumor, not a verified eyewitness account. As we know, Nvidia has many GPUs with lower power requirements, both for mobile and for certain use cases that don't require much 3D performance (like signage).

Just because 250 W is what these people purport to have "seen" today doesn't mean there aren't other chip designs that Nvidia simply hasn't gotten around to benchmarking yet.

As we know, Nvidia tends to work from the top of the stack downward, from their largest and most powerful GPUs to smaller ones with fewer transistors.
#25
RogueSix
dgianstefani: Desperation?

My dude, their main competitor is supposedly not even trying the high end this upcoming gen.
I was obviously (but apparently not obviously enough) not referring to desperation with regard to their non-existent competitor ;) but to desperation in finding the much-needed performance gains to motivate people to buy an RTX 5090.
As I said, it's not going to be an easy task. The boost from the optimized 5nm process will be minimal, the Blackwell architecture will only provide so much of a boost, and the 512-bit memory interface (if true) will contribute to a higher power envelope as well as a higher cost.

I would be very positively surprised if nVidia can manage to squeeze more than a +30% gain out of the RTX 5090 over the RTX 4090. Maybe they can in ray tracing by slashing rasterization performance (even more), but overall I believe they will face difficulties making the RTX 5090 an attractive upgrade for RTX 4090 owners. The 512-bit interface reeks of desperation to be able to advertise at least some innovation (instead of real innovation like a 3nm process).

As I said in my previous post, I'm convinced that both nVidia and AMD will be more or less half-assing their upcoming consumer generations in favor of AI. Can't really blame them either. There are billions to be made from AI/datacenter, while consumer stuff is comparatively boring and limited.
They have long since moved their top hardware and software talent to AI. We consumers will have to take a backseat until the AI curve flattens.