
NVIDIA GeForce RTX 5090 PCB Pictured, Massive GPU Die and 16-Chip Memory Configuration

Sorry, but your goal is not to "fit", but to have a PCB of approximately the same size as the heatsink above it.

More nonsense. Is your CPU heatsink the same size as your motherboard?
 
Oh boy, we got a qualified PCB designer here.

The 3090 Ti didn't melt.

Not if you plug it in right.

So it's simultaneously too cramped but also too big?

Why, so you can cut the bus size in half and kneecap the chip's performance?

You heard him, this big GPU is too big, don't even think about it!


If you see complaints about poor-quality games and hardware as coming from "right-wing incels", it may be time for you to go outside and touch grass.

Just because you buy GeForce cards to run some commercial software and don't play games doesn't mean that's how other people use them. People need to accept that.



3090/3090 Ti: different size/number of memory ICs, same bus width.

Oof.
 
So close to making a comment without making yourself look like a fool. So close, bro.

3090/3090 Ti: different size/number of memory ICs, same bus width.

Oof.
The 3090 Ti came out when 16Gb GDDR6X became available, allowing Nvidia to forgo the 3090's clamshell design of two memory ICs per channel in favor of a single IC with double the density; it still had 12 of those, whereas the 3090 had 24.
For the 5090, with its 512-bit bus and 32-bit controllers, you need a minimum of 16 memory chips, period. Currently the smallest GDDR7 ICs available are 16Gb ones; there are no 8Gb ones.

So yeah, to use fewer chips you'd need a smaller memory bus, which would decrease performance.
 
The 3090 Ti came out when 16Gb GDDR6X became available, allowing Nvidia to forgo the 3090's clamshell design of two memory ICs per channel in favor of a single IC with double the density; it still had 12 of those, whereas the 3090 had 24.
For the 5090, with its 512-bit bus and 32-bit controllers, you need a minimum of 16 memory chips, period. Currently the smallest GDDR7 ICs available are 16Gb ones; there are no 8Gb ones.

So yeah, to use fewer chips you'd need a smaller memory bus, which would decrease performance.
Agreed. However, your comment came across as a universal truth. I was pointing out how there are instances where this isn't true.

Good response though. Obviously you know what you're talking about.
 
your comment came across as a universal truth.
It is not, since this depends on the memory controller design. GPUs usually use 32-bit controllers, so you can divide the bus width by 32 to get the number of memory chips needed on the PCB (double that for clamshell designs that want to increase capacity, like your 3090 example).
Your regular x86 desktop has 64-bit controllers (or 2x32-bit ones that more or less work as a single unit), so for a regular consumer CPU with a 128-bit bus (the so-called "dual channel"), you need a minimum of 2 sticks.
Apple's Mx lineup uses 16-bit controllers, and so on.
I was pointing out how there are instances where this isn't true.
Yeah, but you ended up talking about a different thing. The comment you were replying to was still correct: decreasing the number of memory modules on the 5090 would imply a smaller bus, and thus lower performance.
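For what it's worth, the arithmetic here fits in a few lines. A minimal sketch in Python, using the bus and controller widths mentioned in this thread (the function name is just for illustration):

```python
def min_memory_chips(bus_width_bits: int, controller_width_bits: int,
                     clamshell: bool = False) -> int:
    """Minimum number of memory ICs needed to populate a given bus."""
    if bus_width_bits % controller_width_bits != 0:
        raise ValueError("bus width must be a multiple of the controller width")
    chips = bus_width_bits // controller_width_bits
    # Clamshell doubles capacity by putting two ICs on each channel.
    return chips * 2 if clamshell else chips

print(min_memory_chips(512, 32))                  # RTX 5090: 16 chips
print(min_memory_chips(384, 32, clamshell=True))  # RTX 3090: 24 chips
print(min_memory_chips(384, 32))                  # RTX 3090 Ti: 12 chips
print(min_memory_chips(128, 64))                  # dual-channel desktop: 2 sticks
```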
 
Turing had a true Titan card - https://www.techpowerup.com/gpu-specs/titan-rtx.c3311 - with professional features unlocked; with Ampere they brought back the x090 series instead and kept the professional stuff locked.

Huge déjà vu of all the arguments from 4 years ago about how the 3090 was not a Titan.
And still, all Turing products with tensor cores, from the 2060 up to the 2080 Ti, had the same tensor fp16:fp32 rates as the Titan RTX.

The only "locked" feature on Ampere and Ada was tensor fp16 with fp32 acc, as I had said before, which is not really a "professional" stuff given how other stuff is "unlocked", but it is a market segmentation tactic nonetheless.

Regular fp16 performance has the same rates in both the GeForce and the professional lineup; that's the point I wanted to correct in your original statement.
 
Quite a low-quality design. The thermal density will be high - 600 watts in such a small area will be tough to keep cool.

1. The PCB will melt;
2. The single power connector will melt;
3. Wrong PCB size;
4. Too many memory chips - this needs either 3 GB or 4 GB chips.

Overall, given the $3000-4000 price tag - it is a meh. Don't buy.
No, just 2GB Micron GDDR7 chips, no 3GB or 4GB ones. If there were room to add 32x 1GB modules it would be far better for memory efficiency, but here is an image of the Gigabyte Aorus Extreme air version of the RTX 5090:
 

Attachments

  • NVIDIA-GeForce-RTX-5090-Blackwell-GPU-GDDR7-Memory-PCB-rotated.jpg
x090 cards are Titan-class things. Ultra-high-end stuff regardless of naming. The features have varied and will vary based on what is available and what makes sense for the manufacturer to sell. FP64 (or FP16) ratios were dropped so as not to cannibalize the expensive Quadro/Tesla cards, which makes perfect sense from Nvidia's point of view. That still left performance, VRAM and other bits and pieces to somewhat justify the price, in addition to bragging rights. Whether these things are worth the price is wholly up to the buyer. There are still professional use cases for x090 cards where their price is a bargain, and it also looks like everyone underestimates the wealth and "want" of the enthusiast gamer crowd, who did and will buy these for gaming.

If you find these to be obscenely overpriced and useless, that is OK. Others will have a different opinion, and that is also OK.
 
Is the large size supposed to be impressive?
Both the die size (744mm²) and the pin count are actually impressive. The die size makes it hard to produce dies with zero defects, greatly diminishing the yield, and the number of I/Os makes the PCB far more complex, with many layers needed to route the 512-bit memory bus, so a more expensive PCB as well.
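To put a rough number on the yield hit, here's a quick sketch using the simple Poisson yield model; the defect density is an assumed illustrative figure, not actual foundry data:

```python
import math

def poisson_yield(die_area_mm2: float, defects_per_cm2: float) -> float:
    """Fraction of dies expected to be defect-free under a Poisson defect model."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

# Assumed defect density of 0.1 defects/cm^2, purely for illustration.
for area in (300, 744):
    print(f"{area} mm^2 die: {poisson_yield(area, 0.1):.1%} defect-free")
```

Under those assumptions, roughly three quarters of 300mm² dies come out clean versus less than half at 744mm², and that's before counting dies lost at the wafer edge.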

Regular fp16 performance has the same rates in both the GeForce and the professional lineup; that's the point I wanted to correct in your original statement.
That's also what I pointed out in his original statement. I don't see what's so different between a Titan and a xx90 card.
 
FP64 (or FP16) ratios were dropped so as not to cannibalize the expensive Quadro/Tesla cards, which makes perfect sense from Nvidia's point of view.
FP64 has been dropped in all products bar the x100 chips after Kepler. The latest consumer-facing product to have proper FP64 hardware was the Titan V, which used a V100 chip.
All other chips, be it GeForce, Quadro or Tesla, lack proper FP64. You'll only be seeing that in the x100-based chips.
Regular FP16 has not been capped between products. Tensor FP16 with FP32 accumulate has, which is clearly a market segmentation tactic, as I had said before.
I do agree with your points.
That's also what I pointed out in his original statement. I don't see what's so different between a Titan and a xx90 card.
I guess there's a point in that the Titan used to have no artificial limitations whatsoever, while the 3090/4090 did; even if those differences were mostly irrelevant, they were still differences nonetheless.
Still, complaining about this is kinda moot IMO: the GPU delivers the performance of the highest-end available product, has a price tag to match, and fits prosumer use perfectly, like a halo product should.
 
I guess there's a point in that the Titan used to have no artificial limitations whatsoever, while the 3090/4090 did; even if those differences were mostly irrelevant, they were still differences nonetheless.
Still, complaining about this is kinda moot IMO: the GPU delivers the performance of the highest-end available product, has a price tag to match, and fits prosumer use perfectly, like a halo product should.
Yes, those are exactly my thoughts. The only time a Titan was noticeably different was, as you pointed out, the Titan V, and that was because of the architecture it was based on.
 
While looking into the melting connector issues I came across this post. I just want to point out that, two months later, 3valatzy and everyone else who commented that the PCB was poorly designed and would melt/catch fire have been proven absolutely correct, in one of the earliest posts about the 5090 PCB.
 
While looking into the melting connector issues I came across this post. I just want to point out that, two months later, 3valatzy and everyone else who commented that the PCB was poorly designed and would melt/catch fire have been proven absolutely correct, in one of the earliest posts about the 5090 PCB.
It would be nice to point out that the initial set of comments did not say or speculate why things would melt. Power density and the connector itself really ain't it, this time around.
 
While looking into the melting connector issues I came across this post. I just want to point out that, two months later, 3valatzy and everyone else who commented that the PCB was poorly designed and would melt/catch fire have been proven absolutely correct, in one of the earliest posts about the 5090 PCB.

Is there any report of burning regarding the FE model, or is it all partner models?

the connector itself really ain't it

One of the cases (the one analysed by der8auer) was an unbalanced load in the connector.
 
One of the cases (the one analysed by der8auer) was an unbalanced load in the connector.
Pumping 20A through a single pin probably won't end well for other connectors either. IIRC his PSU side - the one that was a bit worrying at 150°C - was dual 8-pins.
 
Pumping 20A through a single pin probably won't end well for other connectors either. IIRC his PSU side - the one that was a bit worrying at 150°C - was dual 8-pins.

If the connector is designed for it, it's fine. The 8-pin Mini-Fit has a huge load capacity, to the point that PSUs commonly use cables that split one connector on the PSU side into 2x 8-pin connectors on the GPU side, and it was never problematic. The 150W limit is only a PCIe historical thing; the standard was not designed with today's monstrosities in mind.

Two 8-pin Mini-Fit connectors also have the same number of active conductors as the 12-pin 12VHPWR Micro-Fit. The difference really is the safety margin: the 12-pin 12VHPWR Micro-Fit is rated at, and being pushed to, the limit of what's possible with a good Micro-Fit-type connector, while we never even had to use top-notch Mini-Fit connectors, because the Mini-Fit is simply bigger and has more current capacity.

The overcurrent on a single pin has to be a problem on the load side - the GPU, that is. Just another 12VHPWR failure to add to the list; can't wait for the next episode, I mean revision.
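To make the safety-margin point concrete, here's a back-of-the-envelope sketch. The per-pin current ratings are assumed typical figures for these connector families, not quotes from the actual specs:

```python
# Assumed typical per-pin ratings: Mini-Fit Jr HCS ~9 A/pin,
# 12VHPWR Micro-Fit ~9.5 A/pin. Illustrative, not spec values.

def connector_margin(power_pins: int, amps_per_pin: float,
                     rated_watts: float, volts: float = 12.0) -> float:
    """Ratio of raw pin capacity to the connector's assigned power rating."""
    return (power_pins * amps_per_pin * volts) / rated_watts

# 2x 8-pin PCIe: 6 power pins total, rated 2x 150W = 300W combined.
print(f"2x 8-pin: {connector_margin(6, 9.0, 300):.2f}x headroom")
# 12VHPWR: 6 power pins, rated 600W.
print(f"12VHPWR:  {connector_margin(6, 9.5, 600):.2f}x headroom")
```

Under these assumptions the old pair of 8-pins has roughly double the headroom of 12VHPWR, which is the "pushed to the limit" point in numbers.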
 
The overcurrent on a single pin has to be a problem on the load side - the GPU, that is. Just another 12VHPWR failure to add to the list; can't wait for the next episode, I mean revision.
Load side - GPU and 12VHPWR failure seem to be two quite different things, no?

If the connector is designed for it, it's fine. The 8-pin Mini-Fit has a huge load capacity, to the point that PSUs commonly use cables that split one connector on the PSU side into 2x 8-pin connectors on the GPU side, and it was never problematic.
Yup, if you define huge as the 13A that the terminals mostly used for the 300W 8-pin connections on the PSU side are rated for. Which is all good and fine. In this case, though, we are talking about 20A or a bit more coming down one of these pins...
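And the reason 20A is so much worse than 13A is that contact heating goes with the square of the current. A quick sketch; the contact resistance is an assumed ballpark for a worn or poorly mated terminal, purely illustrative:

```python
# P = I^2 * R per contact. 6 mOhm is an assumed ballpark for a worn or
# poorly mated terminal, not a measured value.
CONTACT_RESISTANCE_OHMS = 0.006

def pin_dissipation_watts(amps: float, r: float = CONTACT_RESISTANCE_OHMS) -> float:
    return amps ** 2 * r

# Balanced 12VHPWR share, the 13A HCS rating, and der8auer's ~20A reading.
for amps in (8.3, 13.0, 20.0):
    print(f"{amps:5.1f} A -> {pin_dissipation_watts(amps):.2f} W at the contact")
```

Same connector, but the overloaded pin dumps nearly six times the heat of a balanced share into a single contact.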
 
Load side - GPU and 12VHPWR failure seem to be two quite different things, no?

Not really. The card dumps the connector's 6 hot and 6 common pins into the same pad, so it's on the connector to keep things balanced. It wasn't like this with previous GPUs like the 3090, but the 4090 also did something like this, according to der8auer.

[Image: voltage-area-memory.jpg]


It's the cost of cost-cutting
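For anyone wondering how a single pin ends up carrying 20A out of ~50A total: with every pin tied to one pad at both ends, the six contacts are just parallel resistors, and current divides in inverse proportion to each contact's resistance. A toy model with made-up resistance values:

```python
# Six power pins tied to a single pad at each end behave as parallel
# resistors; each pin's share of the current is proportional to its
# conductance. Resistance values below are invented for illustration:
# one fresh, low-resistance contact and five degraded ones.
resistances_mohm = [7, 20, 20, 25, 25, 30]
total_amps = 50  # roughly 600W at 12V

conductances = [1.0 / r for r in resistances_mohm]
g_total = sum(conductances)
for r, g in zip(resistances_mohm, conductances):
    print(f"{r:3d} mOhm contact carries {total_amps * g / g_total:5.1f} A")
```

With these made-up values the best contact ends up near 20A while the worst carries under 5A - the same sort of imbalance der8auer measured, and the kind of thing per-pin monitoring on the card could have caught.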
 
Not really. The card dumps the connector's 6 hot and 6 common pins into the same pad, so it's on the connector to keep things balanced. It wasn't like this with previous GPUs like the 3090, but the 4090 also did something like this, according to der8auer.

[Image: voltage-area-memory.jpg]


It's the cost of cost-cutting

I wouldn't be surprised if it's also partly to blame for the massive rise in coil whine with 4090s vs 3090s.
 