
NVIDIA GeForce RTX 5090 Features 16+6+7 Phase Power Delivery on 14-Layer PCB

AleksandarK

News Editor
Staff member
Fresh details have surfaced about NVIDIA's upcoming flagship "Blackwell" graphics card, the GeForce RTX 5090, suggesting power delivery and board design changes compared to its predecessors. According to BenchLife, the new Blackwell-based GPU will feature a new 16+6+7 power stage design, departing from the RTX 4090's 20+3 phase configuration. The report confirms earlier speculation about the card's power requirements, indicating a TGP of 600 watts. This specification refers to the complete power allocation for the graphics subsystem, though the actual TDP of the GB202 chip might be lower. The RTX 5090 will ship with 32 GB of next-generation GDDR7 memory and utilize a 14-layer PCB, possibly due to the increased complexity of GDDR7 memory modules and power delivery. GPU boards usually max out at 12 layers, even on high-end overclocking designs.
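For scale, here is a back-of-the-envelope split of the rumored 600 W across a 16+6+7 phase layout. The per-rail wattages and output voltages below are illustrative guesses, not leaked figures:

```python
# Rough VRM load per phase for a rumored 600 W TGP and 16+6+7 phase split.
# The per-rail wattage and output-voltage assignments are assumptions
# made for illustration only.

rails = {
    # name: (phase count, assumed rail watts, assumed output voltage)
    "core":   (16, 450.0, 1.00),  # GPU core rail
    "memory": (6,   90.0, 1.35),  # GDDR7 rail, voltage is a guess
    "aux":    (7,   60.0, 1.80),  # miscellaneous rails, pure placeholder
}

for name, (n, watts, vout) in rails.items():
    amps = watts / vout  # total output-side current on this rail
    print(f"{name}: {watts:.0f} W / {vout} V = {amps:.0f} A "
          f"-> ~{amps / n:.0f} A per phase across {n} phases")
```

Even under these guessed numbers, each core phase carries under 30 A, comfortably within what typical 50-70 A smart power stages can handle.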

The upcoming GPU will fully embrace modern connectivity standards, featuring a PCI Express 5.0 x16 interface and implementing the 12V-2×6 power connector. We spotted an early PNY RTX 5090 model with 40 capacitors but an unclear power delivery setup. With additional power phases and more PCB layers, NVIDIA is pushing the power delivery and signal integrity boundaries for its next-generation flagship. While these specifications paint a picture of a powerful gaming and professional graphics solution, questions remain about the broader RTX 50 series lineup. The implementation of the 12V-2×6 connector across different models, particularly those below 200 W, remains unclear, so we will have to wait for the rumored CES launch.
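As a sanity check on the connector side (simple arithmetic, assuming the rumored 600 W figure):

```python
# 600 W delivered over the six 12 V power pins of a 12V-2x6 connector,
# plus up to 75 W from the PCIe slot itself.

W_CONNECTOR, V, PINS = 600.0, 12.0, 6
amps_total = W_CONNECTOR / V              # total current through the connector
print(f"{amps_total:.0f} A total, {amps_total / PINS:.2f} A per power pin")
print(f"~{W_CONNECTOR + 75:.0f} W available to the card including the slot")
```

That works out to roughly 8.3 A per pin at the full 600 W, which is why connector seating and pin contact quality matter so much at this power level.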



View at TechPowerUp Main Site | Source
 
Yes, well, I watch a lot of videos from "NorthWestRepair" on YouTube; many of the AIBs could learn to put some damn fuses on their boards, near the core/memory and the power connectors. Seems like only MSI does it with the 4000 series.

@W1zzard I am sure I won't be the only one who would appreciate it if you could mention which PCBs have them in the upcoming reviews. They expect us to pay top dollar; we'd best expect good warranties (5 years) and components that last.
 
That's an extra $1,000 for those 2 layers. Thank you for your purchase.
 
That's an extra $1,000 for those 2 layers. Thank you for your purchase.
How much would you like to spend on the design of such a board, even just on proper calculations of the locations and characteristics of the components, so that they don't drown in induced currents and there are no short circuits or eddy currents? Even installing the power components this close together is a difficult manufacturing problem.
 
PCIe 5.0 x16 does not surprise me for the top dog. However, I hope this does not mean solutions like 5.0 x8 (or even x4) for lower-spec cards, as this might be an issue with the (low) adoption of PCIe 5.0 motherboards...
 
Hmm... I'm interested in the pricing of this card.
 
It's for those who don't have any better choice. :laugh:
 
Man, Nvidia just goes overkill on everything, huh: die size, VRAM, PCB layers, VRM, and very possibly prices :cool:
 
PCIe 5.0 x16 does not surprise me for the top dog. However, I hope this does not mean solutions like 5.0 x8 (or even x4) for lower-spec cards, as this might be an issue with the (low) adoption of PCIe 5.0 motherboards...
I doubt that. Even if the GPU right after the 5090 (5080, 5080 Ti, or whatever Nvidia calls it) is 4090-level, you'd be hard-pressed to notice a difference between 4.0 x16 and x8.
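The theoretical link-rate numbers behind that (standard per-lane figures after 128b/130b encoding; real-world payload overhead shaves a bit more off):

```python
# Approximate maximum PCIe throughput per generation and link width.
GBPS_PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}  # GB/s per lane, post-encoding

for gen in (3, 4, 5):
    for lanes in (4, 8, 16):
        print(f"PCIe {gen}.0 x{lanes}: {GBPS_PER_LANE[gen] * lanes:5.1f} GB/s")
```

A 5.0 x8 link matches 4.0 x16 at ~31.5 GB/s; narrow links only sting on older boards, where the card falls back a generation and runs at, say, 4.0 x8.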
 
How much would you like to spend on the design of such a board, even just on proper calculations of the locations and characteristics of the components, so that they don't drown in induced currents and there are no short circuits or eddy currents? Even installing the power components this close together is a difficult manufacturing problem.
I would break it down into chiplets o_O

It's not that bad, try making a Tokamak :nutkick:
 
High end as we knew it is dead. It's either "HI-FI" or "MID-FI" if we compare GPUs to headphones or speakers. Either you pay A LOT to get true high end (5090), or just a lot and get mid end advertised as high end (5080). There is nothing in between, and that's by design. Nvidia wants to be a luxury brand. I would have laughed at anyone writing that a GPU could be a luxury 10 years ago, but here we are :confused:
 
Man, Nvidia just goes overkill on everything, huh: die size, VRAM, PCB layers, VRM, and very possibly prices :cool:
They are stuck on 4 nm (5 nm) and brute-force everything so they get some kind of decent performance boost over the past generation. I was looking forward to upgrading, but I'm somewhat sceptical right now.
 
very possibly prices :cool:

"Possibly" :roll:

I did get a good chuckle, I'll give you that. ^_^

Where is that awesome cat avatar you used to have? Every time I saw it, I wanted to pinch those cheeks. :P
 
Given the number of power stages dedicated to the memory, GDDR7's power efficiency at least needs to be questioned, coming from a 3-phase memory design on the 4090.
 
High end as we knew it is dead. It's either "HI-FI" or "MID-FI" if we compare GPUs to headphones or speakers. Either you pay A LOT to get true high end (5090), or just a lot and get mid end advertised as high end (5080). There is nothing in between, and that's by design. Nvidia wants to be a luxury brand. I would have laughed at anyone writing that a GPU could be a luxury 10 years ago, but here we are :confused:
Wat. GPUs were always a luxury, today more than ever. One doesn't need a dGPU, especially a powerful one, for most needs.

And no, the 5080 would not be "mid end", lol. It will be the second-fastest GPU in the world on release. That is by no definition of the word a "mid" product. It's absolutely flagship performance. It's just that the 5090 is a ridiculous halo product, essentially a Titan and dual-GPU-card replacement (with some compromises, true) that straddles the line between consumer and pro products. Nobody needs a 5090 to play games, whatever the unhinged enthusiasts for whom it's "Ultra with PT at 4K or nothing" might tell you.
 
it can't be cheap to manufacturer what's going on above and under all those layers, a work of art
 
Wat. GPUs were always a luxury, today more than ever. One doesn't need a dGPU, especially a powerful one, for most needs.

And no, the 5080 would not be "mid end", lol. It will be the second-fastest GPU in the world on release. That is by no definition of the word a "mid" product. It's absolutely flagship performance. It's just that the 5090 is a ridiculous halo product, essentially a Titan and dual-GPU-card replacement (with some compromises, true) that straddles the line between consumer and pro products. Nobody needs a 5090 to play games, whatever the unhinged enthusiasts for whom it's "Ultra with PT at 4K or nothing" might tell you.
Well, it won't be. Given the number of shaders on the same node, with only the memory having ~30% more bandwidth, the 4090 will be faster in rasterization unless Ngreedia has learned how to defy the laws of physics. The 5080 is a shitshow: an $800-value GPU price-gouged to $1,400 or more.
 
@RedelZaVedno
It absolutely will be. No reason to think otherwise. The 4080 was faster than the 3090 Ti. The 3080 was faster than the Titan RTX. And so it goes every generation. There is a less-than-25% delta between the 4080/4080S and the 4090. A 5080 with less than a 25-30% uplift just will not make sense. I am willing to actually bet on it.
 
@RedelZaVedno
It absolutely will be. No reason to think otherwise. The 4080 was faster than the 3090 Ti. The 3080 was faster than the Titan RTX. And so it goes every generation. There is a less-than-25% delta between the 4080/4080S and the 4090. A 5080 with less than a 25-30% uplift just will not make sense. I am willing to actually bet on it.
I'd accept the bet. Math doesn't lie. The 4080S had 10240 shading units vs the 3090 Ti's 10752, with the 3090 Ti being on a shitty Samsung node. The 5080 has 10752 shading units vs the 4090's 16384 on similar nodes. No way can GDDR7 speed close that gap in rasterization. Sure, it will have better RT (who really cares?) and maybe support new frame-gen tech in DLSS (again, who really cares), but in raw performance the 4090 will be the 2nd best, only behind the 5090.
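The arithmetic behind that claim, using the known counts and the (unconfirmed) leaked 5080 figure:

```python
# Shading-unit ratios across the generations discussed above.
# The RTX 5080 count is a leak, not a confirmed specification.
su = {
    "RTX 3090 Ti": 10752,
    "RTX 4080 Super": 10240,
    "RTX 4090": 16384,
    "RTX 5080 (leak)": 10752,
}

# Last generation: the 4080S nearly matched the 3090 Ti's SU count.
print(f"4080S / 3090 Ti: {su['RTX 4080 Super'] / su['RTX 3090 Ti']:.2f}")

# This generation: the leaked 5080 has ~2/3 of the 4090's SUs, so matching
# the 4090 would require ~52% more throughput per shading unit.
ratio = su["RTX 5080 (leak)"] / su["RTX 4090"]
print(f"5080 / 4090: {ratio:.2f} -> needs {1 / ratio - 1:.0%} more per-SU throughput")
```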
 
Well, it won't be. Given the number of shaders on the same node, with only the memory having ~30% more bandwidth, the 4090 will be faster in rasterization unless Ngreedia has learned how to defy the laws of physics.
Shieeet, I guess we all hallucinated when the 970 was faster than the 780 with less of everything on the same node. Or when the 2070S was a bit faster than the 1080 Ti with less of everything on the same node (16 nm and 12 nm were the same node). Guess NV regularly breaks the laws of physics.

I'd accept the bet. Math doesn't lie. The 4080S had 10240 shading units vs the 3090 Ti's 10752, with the 3090 Ti being on a shitty Samsung node. The 5080 has 10752 shading units vs the 4090's 16384 on similar nodes. No way can GDDR7 speed close that gap in rasterization. Sure, it will have better RT (who really cares?) and maybe support new frame-gen tech in DLSS (again, who really cares), but in raw performance the 4090 will be the 2nd best, only behind the 5090.
Sure, what are you willing to bet? I am confident in my assessment. A 5080 that can’t catch the 4090 just doesn’t make sense stack-wise. It will match it or be faster.
 
Sure, what are you willing to bet? I am confident in my assessment. A 5080 that can’t catch the 4090 just doesn’t make sense stack-wise. It will match it or be faster.
Ngreedia doesn't need to advertise the 5080 as being slower. The black-leather-jacket one will come on stage and brag that the 5080 is X% faster than the 4090 when using super-duper new DLSS frame generation the 4090 won't support. Same goes for RT and the new tensor cores. But he'll conveniently forget to mention that in raw rasterization performance the 4090 still wipes the floor with the 5080. And raw rasterization really is all that counts at the high end. I'm not spending $1,500+ to use frame-gen mess.
 
@RedelZaVedno
Cool, cool. I don’t care for ravings about NGreedia, leather jackets and fake frames. I heard that before. I am sticking to my point. Again. WHAT. ARE. YOU. WILLING. TO. BET?
 
@RedelZaVedno
Cool, cool. I don’t care for ravings about NGreedia, leather jackets and fake frames. I heard that before. I am sticking to my point. Again. WHAT. ARE. YOU. WILLING. TO. BET?
Yes, I am. I'll buy you a 5080 if it's faster in pure raw-raster average performance at 4K than the 4090 (on the condition that the leaked 5080 shader count of 10752 SUs is correct), and vice versa.
 