
NVIDIA GeForce RTX 5090 PCB Pictured, Massive GPU Die and 16-Chip Memory Configuration

both a 512-bit GDDR7 bus and 32 GB of VRAM are overkill and stupidity.
For gaming? Could be.
For prosumers? It's the opposite of overkill.

The RTX 5090 is capable of gaming almost by accident; it should really only be considered a GPU for serious math.
 
It means that the design violates common sense.

Sorry, but your goal is not to "fit", but to have a PCB approximately the size of the heatsink above it.

task accomplished



There is a fan at the end that moves air upward for more efficient airflow; the PCB is shortened to fit that goal. You don't need a paddle.


 
I must be doing something wrong, because my connector is still fine. It's been 2 years...

I always really like those comments. You can search for them. I was pretty sure I'd find them before I even started reading the comments.

I do not have an issue with
  • Windows 11
  • Windows 10
  • the Nvidia PSU connector
  • * fill in whatever you want *

Some people are unable to see that. The topic is old enough; enough has been said about that power connector.

--

Let's wait for the whole hardware assembly to see the constraints.

smarter than Nvidia engineers

Nope. They are not smarter. Work experience, most likely. Education. And a job at Nvidia. I'm not sure if it's robots, humans, aliens or something else doing that job. Maybe artificial intelligence? Or autorouting software for the printed circuit board design?

If that were true, I would never see any flaw in any Nvidia product in all the time the company has existed, right?

Smart is always up to definition. From my point of view - bad cars and a bad car brand.

The problem people have with the connector is that it is simply worse than the last one.

I disagree - if you have bad manufacturing quality. I did not want to write my story again, but I have to in the context of the other quotes in this post.

You cannot judge those connectors with the naked eye; you need a special measuring machine. That's something I learnt in my work experience so far at different companies (more than one).

My Corsair RM750 PSU died. I replaced it with an Enermax Revolution D.F. 750 power supply. I just checked the emails and the conversation I had with Enermax on Thu, 2 Mar 2023.
On the power supply side, some connectors came loose by themselves while I was doing cable management. I wrote an email to Enermax and got those cables replaced. These cables had no retention force. You cannot see those injection molding defects with the naked eye; you need an OGP machine. I was lucky that, on the power supply side, the cables for the 24-pin connector and the CPU connectors both disconnected completely at the same time in my Fractal Design Meshify 2 case.

I'm not allowed to talk about my previous employer. We tested those assembled connectors, even the pull-out forces.

Visually, you cannot see those differences with the naked eye. When you have both the replacement cables and the bad cables with the power supply, you can mechanically test the retention force quite easily by hand.

You get bad quality when you have a "bad process" with your injection molding machine and the mold.

I recommend you "gently" pull with a little force on those connectors to check if they fit.

I'm more upset about Corsair and their shoddy quality. I have no problem with Enermax; cables and connectors are replaceable. But a dead Corsair power supply that was never used above 50% of its 750 W in 1.5 years is unacceptable. While having a full-time job.

Plugged in right means once you hear the click, you are there, just like the old style.

It's about the retention force of the connector. I do not have that spec for this connector. I can tell you other connectors have those specifications, and that is also tested for certain customers and certain connectors.

You cannot visually see the differences between a "bad" and a known "good" connector with the naked eye. You need machines to reliably test that.
You need a decent factory with a good injection molding process, and another factory with a good assembly process.

--

I do not think it's a valid argument to say people will complain and will not buy it.

If I can see beforehand that the build quality is worse, I won't buy it. I look at the information before I buy something.
 
Kinda nuts. The GPU market really needs to move to chiplets. Finally a good amount of memory, but yet again the x080 will probably be a terrible deal. Supply will also be terrible as always with scalpers running the show; might be a nice buy when the 6090 is close to launch lol

Quite a low-quality design. The thermal density will be high - 600 watts in such a small area will be tough to keep cool.

1. The PCB will melt;
2. The single power connector will melt;
3. Wrong PCB size;
4. Too many memory chips - this needs either 3 GB or 4 GB chips.

Overall, given the $3000-4000 price tag, it is a meh. Don't buy.

3 GB or 4 GB chips don't exist yet. I doubt they jump to 600 watts; people said the same thing about the 3090 Ti and 4090. I doubt it will happen for standard models; it will probably be the same 450 W as the 3090 Ti and 4090. The price tag will also certainly stay well below $3000; my bet is $1799, given the previous generations' $1499 and $1599 launch prices. I could see it go to $1999, but not more.

Can we please get the USB-C port back?
One cable monitor connection for both display signal and USB data should be a thing on desktops too, not just laptops.

It was cool for VR too. Now that USB4 is getting more common (not really sure why but whatever) it's getting less necessary, but it was still nice to have.

The x090 series are the new Titans

No they aren't; x090 cards have their FP16 and FP32 severely cut down. It might not matter to you, but it's still there as a reminder that they're not Titans.
 
They make a "click" sound when fully inserted.
I'm just saying it doesn't have the distinctive click of the old connector.
What can I say, there are a lot of people who don't understand how this stuff works, and just talk a lot of shit. Who knows. Noobs making videos? Dunno.
So all of the major tech channels are just talking out of their asses? Everyone was talking about how it was flawed compared to the 8-pin: fewer insertions until failure, very little or no click, the connector falling out, it can't be angled or bent to fit in many cases, and there's no easy way to tell if it's connected all the way in. And there are plenty of videos of repair techs replacing melted power connectors, or ones that got so hot they melted the solder on the PCB. I'm just pointing out examples of videos that are out there.
Don't act like I have never used one before :D
Again, it's just my observation compared to the old connector.
You can feel it, and hear it. Maybe people are using crappy power supplies or something, don't know what to say.
Then every company was making crappy power supplies until the power connector was quietly changed to the updated 12V-2x6 connector.
The xx90 users don't complain about the price; anyone who buys a flagship knows the price of admission. Whether it is in your pay grade is up to you. Don't forget, people have credit, so it isn't like they are just dumping all that cash at once, though a lot of people do, and they still don't complain, because they know.

The people who do complain are the ones with no intention at all of buying one. Though they may want it.
I see xx90 users complaining about the price of the next xx90 card, then they go ahead and buy it anyway and blame the other team for the pricing. Having credit is a whole other issue; it's why so many people are in debt, spending beyond their budget.
This post was edited by a mod. If you have to be so passive-aggressive towards me, maybe just put me on ignore instead.
 
Quite a low-quality design. The thermal density will be high - 600 watts in such a small area will be tough to keep cool.

1. The PCB will melt;
2. The single power connector will melt;
3. Wrong PCB size;
4. Too many memory chips - this needs either 3 GB or 4 GB chips.

Overall, given the $3000-4000 price tag, it is a meh. Don't buy.

This takes armchair engineering to a whole new level. In case you hadn't realized, a small PCB is actually an engineering feat; it is much better due to signal loss concerns. The fact that they pulled off a full 512-bit interface with such a humongous GPU on a PCB this size is remarkable. Shame my country went to hell in a handbasket just weeks before this launched; with the real at almost 7 to the dollar, I have no possible chance to buy one of these, at least until the currency settles down. Just a couple of weeks ago it was about 5 to the dollar...
 
I'm just saying it doesn't have the distinctive click of the old connector.
No, but it's still there, and you can hear it.

So all of the major tech channels are just talking out of their asses?
If they can't operate a piece of equipment, then yeah...
Everyone was talking about how it was flawed compared to the 8-pin: fewer insertions until failure, very little or no click, the connector falling out, it can't be angled or bent to fit in many cases, and there's no easy way to tell if it's connected all the way in. And there are plenty of videos of repair techs replacing melted power connectors, or ones that got so hot they melted the solder on the PCB. I'm just pointing out examples of videos that are out there.
Just because you have a YouTube channel doesn't mean you know what you are talking about. And just because you saw it on YouTube doesn't make it the bible. What you just described is an idiot in action.
Then every company was making crappy power supplies until the power connector was quietly changed to the updated 12V-2x6 connector.
Maybe. I just know what I have. And I know it has many insertion cycles, and I know when it's plugged in... because I can hear it click. It sits tight in the socket, so if it falls out, you are an idiot, sorry.
I see xx90 users complaining about the price of the next xx90 card, then they go ahead and buy it anyway and blame the other team for the pricing. Having credit is a whole other issue; it's why so many people are in debt, spending beyond their budget.
Blame the other team for pricing? The other team is not even in the competition at that end.
 
Quite a low-quality design. The thermal density will be high - 600 watts in such a small area will be tough to keep cool.

1. The PCB will melt;
2. The single power connector will melt;
3. Wrong PCB size;
4. Too many memory chips - this needs either 3 GB or 4 GB chips.

Overall, given the $3000-4000 price tag, it is a meh. Don't buy.

$3.43T company with R&D budget of $8.675B versus random TPU member making corrections on the design of the PCB.
 
Quite a low-quality design. The thermal density will be high - 600 watts in such a small area will be tough to keep cool.

1. The PCB will melt;
2. The single power connector will melt;
3. Wrong PCB size;
4. Too many memory chips - this needs either 3 GB or 4 GB chips.

Overall, given the $3000-4000 price tag, it is a meh. Don't buy.
Nah, this looks about on-brand for Nvidia in terms of PCB design. You really only need it to be big enough to support the circuitry on the board; engineering the PCB around being a structural component of an add-in card is a bad idea if you need to calculate a stress model at all in the first place. The heatsink and the cute little shell around it will be the rigid, load-bearing part, as it should be.
4. Too many memory chips - this needs either 3 GB or 4 GB chips.
More chips + more bus width is cheaper, easier, and just plain better than fewer, denser, faster memory chips, ESPECIALLY when you need to cool the desktop space heater right next to them.
2. The single power connector will melt;
I don't get why people think the connector pins can't handle a piddly ~8.3 A each. It'll get warm, sure, but the issue with the original connector meltdowns was poor contact: forcing that amperage through a contact patch that was way smaller than designed, because connectors don't behave as well around users as they do around engineers. 12V-2x6 fixed this by having actually good pins and refusing to energize unless you JAMMED that f*cker all the way in there, as you were supposed to do with the original connector... and every other power connector in your PC.
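For reference, the ~8.3 A figure above is just arithmetic: a minimal sketch, assuming the connector's nominal 600 W rating split evenly across its six 12 V current-carrying pins (the even-split assumption idealizes away contact-resistance imbalance, which is exactly what the meltdown cases violated):

```python
# Per-pin current through a 12V-2x6 connector at its rated load.
# Assumes the nominal 600 W limit is split evenly across 6 live 12 V pins.
power_w = 600.0     # connector power rating
voltage_v = 12.0    # supply rail voltage
live_pins = 6       # 12 V pins carrying current

total_current_a = power_w / voltage_v            # 50 A total
current_per_pin_a = total_current_a / live_pins  # ~8.33 A per pin

print(f"{current_per_pin_a:.2f} A per pin")  # -> 8.33 A per pin
```

If one pin makes poor contact, its share is redistributed to the others, which is why the real-world failures concentrated heat far beyond this nominal figure.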
 
More chips + more bus width is cheaper, easier, and just plain better than fewer, denser, faster memory chips

Not really; it's not like there's an alternative, since higher-density chips don't exist yet, but that thing looks crammed as fuck. Probably better than what they did back with the 3090, where they had memory on both sides of the board - which brings all sorts of complications - but this will be replaced with higher-density chips as soon as they're available.

I don't get why people think the connector pins can't handle a piddly ~8.3A each. It'll get warm

Because it doesn't make sense. This was debated to hell already, but to sum up: all the built-in safety margins are gone, and there's simply no need to have a teeny-tiny connector on a 3- or 4-slot cinder block of a GPU.
 
They do, but why let facts get in the way of talking crap about Nvidia?

We are talking about human beings here; it basically has to be idiot-proof. They do have to print "do not drink" on bleach, after all.

I actually ran my 4090 for 8 months with the clip broken by accident. No flames...
 
Stick to discussing the topic and not who works for who and whether they would or would not agree to anything.
 
No they aren't; x090 cards have their FP16 and FP32 severely cut down. It might not matter to you, but it's still there as a reminder that they're not Titans.
FP32 is not cut down, and tensor FP16 is indeed locked to half instead of the 1:2 we had in Turing, but that's only for tensor FP16 with FP32 accumulate; tensor FP16 with FP16 accumulate, as well as the smaller data types, are not capped.
If you know your way around it, it's not an issue whatsoever for inference, and you can work around it for training by using FP16 accumulate for your passes. I'd still consider it pretty much a Titan substitute for all intents and purposes, even in pricing, given how expensive the 5090 is supposed to be.
 
Well it's official, Jensen's overcompensating. Poor guy.

Guys with a $122B net worth don’t have to do any compensating.

If true, this card will instantly hit 100°C.
I don't know why they squeezed the PCB so much.
What for? Money? Again...

The VRMs need to be away from the memory chips/GPU core, and the same goes for the memory.
At least 2x 16-pins, 3 fans, and a 2-kilo cooler.

Good luck repairing it when it's dead from a cracked or bent board.
We'll see.

Where did you go to school to learn VLSI circuit board design that the PNY engineers didn’t?

What click? Every tech channel was saying to just cram the connector in until you can't see the edge of the connector itself. MSI was the only one who did something to help, by making the connector side yellow.
The Molex 6+2 and 8-pin have a loud click, and you can feel when it's plugged in right; not so with this one.

What click? You just outed yourself as having no personal experience. Have you ever misinstalled a component? How about the AMD Athlons that lacked built-in thermal protection and fried themselves if you turned them on without a fan connected, or had a fan failure? That was an awesome time: a $5 fan killing $250 CPUs. How many years did you rage about that?

How about the first lot of 7900 XTX cards that blew up because AMD screwed up the cooling on them? Do you post about that non-stop?
 
Maybe they could put more L3 cache and stay on a 384-bit bus. Not to mention that both a 512-bit GDDR7 bus and 32 GB of VRAM are overkill and stupidity.
This is a no-holds-barred GPU. It does have a large enough cache, but it still makes no excuses.
A 512-bit bus is something the enthusiast crowd has been asking for ever since the last card that had one; that was a long time ago, and GPUs have become more bandwidth-hungry since.
32 GB comes with a 512-bit bus, is more than the previous gen, there is a current fascination with more VRAM, and this segment gets a lot anyway, so that is simply logical.
Is it expensive? Sure. But this is the ultra-high-end segment, where cost is not the question; getting everything that is technically possible is.

No, it is not overkill or stupidity.
 
It means that the design violates common sense.
There is no such thing as "common sense" when it comes to designing a complex PCB like this. This is called engineering, and you clearly have no clue about it, seeing your comments about memory among other nonsense.

Sorry, but your goal is not to "fit", but to have a PCB approximately the size of the heatsink above it.
Not at all. The PCB's main purpose is to host an electronic circuit; being the base support of a heatsink is secondary. For some heatsink designs you may want air to come from both sides, so it's even better if the PCB is shorter than the heatsink. Edit: I would add that a shorter PCB is less prone to bending or breaking; considering the expected size of the heatsink, that's not a bad thing either. I assume the heatsink assembly will be built in such a way that it can sustain its own weight on the PC case rather than putting too much on the PCB, but we can't discuss that just by looking at this PCB anyway.

Have you not seen how the AMD engineers do it? I can show you.

RX 7900 XTX PCB size:

View attachment 377249
There are boards like this at Nvidia too, lol, nothing new... Are you implying the bigger the better? Usually it's the contrary.

This here size:

View attachment 377250
Maybe they could put more L3 cache and stay on a 384-bit bus.
Do you realize how big the GPU die already is? The bigger the die, the lower the yield, which makes the price exponentially higher. Anyway, 744 mm² is close to the maximum doable size.

Not to mention that both a 512-bit GDDR7 bus and 32 GB of VRAM are overkill and stupidity.
In terms of bandwidth-to-rendering ratio, the 5090 is on par with previous devices. If you mean the whole device is overkill and "stupid", well, for pure gaming that's not wrong. This is either a tech showcase or a prosumer product, for 3D graphics artists, small AI workstations and the like. It's not aimed at average gaming.

Send them many greetings, and tell them to learn more.
Go tell them yourself, dude. In particular, go do yourself a favor and tell them to use 8 chips for their 512-bit bus. Nvidia is the world leader in AI, GPU computing and gaming GPUs, but they definitely should ask 3valatzy's opinion for their next design; makes sense.






Exactly.

View attachment 377251
The fact that it's possible to fail at correctly plugging in their previous power connectors does mean the previous connector type wasn't perfect, but it doesn't make their current PCB a bad design. It doesn't make the next-generation connector bad either. There is literally zero relation between PCB design and this.

Nope. They are not smarter. Work experience, most likely. Education. And a job at Nvidia. I'm not sure if it's robots, humans, aliens or something else doing that job. Maybe artificial intelligence? Or autorouting software for the printed circuit board design?

If that were true, I would never see any flaw in any Nvidia product in all the time the company has existed, right?

Smart is always up to definition. From my point of view - bad cars and a bad car brand.
Well, you're playing with words here... Let's say "knowledgeable"? But posting a judgemental comment about the engineering quality of a complex PCB based on nothing but a small photo is not smart; that's a fact. And his comments are not knowledgeable either, based on the suggestions he made.
 
It was cool for VR too. Now that USB4 is getting more common (not really sure why but whatever) it's getting less necessary, but it was still nice to have.
Considering there are zero USB4 monitors (afaik) and Thunderbolt monitors are all "pro" stuff, it still makes sense to have a 20 Gbps USB-C port on graphics cards.
Not a huge fan of looping DP from the graphics card to the motherboard either, not that I have a motherboard with USB4 support...
 
Kinda nuts. The GPU market really needs to move to chiplets.
As far as I know, advanced packaging tech such as CoWoS has its production facilities really busy right now, so it may not be available enough at decent prices. Not even for $2000 products.

3 or 4gb chips don't exist yet.
24Gb/3GB chips will never exist...

No they aren't; x090 cards have their FP16 and FP32 severely cut down. It might not matter to you, but it's still there as a reminder that they're not Titans.
Hmm? What do you mean? x090 cards don't have their FP16 severely cut down. Compared to FP32, it's the same 1:1 ratio as other 40x0 GPUs. And what do you mean by FP32 severely cut down? FP32 is the base number; then there is FP16, which may be 1:1 or 2:1, and FP64, which is usually 1:32 or 1:64, but can be 1:2 for specialized Tesla architectures like Volta and the Titan V, which is an exception. I'm not sure what you mean, but I do think the xx90 cards are indeed the new Titan boards.
 
Considering there are zero USB4 monitors (afaik) and Thunderbolt monitors are all "pro" stuff, it still makes sense to have a 20 Gbps USB-C port on graphics cards.
Not a huge fan of looping DP from the graphics card to the motherboard either, not that I have a motherboard with USB4 support...

My point was that you can use USB4 in place of the 20 Gbps USB-C port Nvidia is no longer giving you. It would be nice if it came back, of course. Routing DP from the GPU to the motherboard is just a silly solution that should be solved with a DMA copy, if manufacturers could get together and agree on a cool way to do it securely.

24Gb/3GB chips will never exist...

Both JEDEC and Micron says different: https://videocardz.com/newz/first-g...ticking-to-16gbit-2gb-modules-3gb-on-roadmaps

Hmm? What do you mean? x090 cards don't have their FP16 severely cut down. Compared to FP32, it's the same 1:1 ratio as other 40x0 GPUs. And what do you mean by FP32 severely cut down? FP32 is the base number; then there is FP16, which may be 1:1 or 2:1, and FP64, which is usually 1:32 or 1:64, but can be 1:2 for specialized Tesla architectures like Volta and the Titan V, which is an exception. I'm not sure what you mean, but I do think the xx90 cards are indeed the new Titan boards.

Just what I've seen in benchmarks, and what has always happened with consumer retail versions of the GPU.
 
That's actually really interesting, and I have to admit my mistake here. Densities that are not powers of 2 - I have never seen such a thing so far except on paper. I know it's doable, but it forces some odd addressing. I guess a typical memory controller can do that without too much of a problem, but it still feels odd. In practice, for the current GPU generation, it might help circumvent the lack of memory on some mid-range devices whose makers can't afford a 256-bit bus to get 16 GB but would still like to have those 16 GB, like the xx70 cards that typically have a 192-bit bus with 6 chips. With 3 GB chips they would get 18 GB, which would make them more future-proof.
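The capacity math above follows directly from bus width and per-chip density. A quick sketch, assuming one memory chip per 32-bit slice of the bus (the standard GDDR arrangement); `vram_gb` is just an illustrative helper name:

```python
# Total VRAM from bus width and per-chip density.
# Assumes one memory chip per 32-bit bus slice, standard for GDDR.
def vram_gb(bus_width_bits: int, chip_density_gb: int) -> int:
    chips = bus_width_bits // 32
    return chips * chip_density_gb

print(vram_gb(512, 2))  # 16 chips x 2 GB = 32 (the pictured 5090 config)
print(vram_gb(192, 3))  # 6 chips x 3 GB = 18 (the hypothetical xx70 upgrade)
```

The same arithmetic shows why the rumored 3 GB chips matter: they raise capacity without widening the bus or resorting to clamshell (both-sides) mounting.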

Just what I've seen in benchmarks, and what has always happened with consumer retail versions of the GPU.
I don't see such a thing, even double-checking the GPU specs here. The Titans are just the top premium GPUs of their architectures, without noticeable differences compared to lower-priced devices other than the number of units. I might have missed something, but I don't see where; for me it works the same as the xx90 cards.
 
Just what I've seen in benchmarks, and what has always happened with consumer retail versions of the GPU.
It only started being a thing with Ampere and Ada; Turing had no such restriction. For all older models this did not apply, since they didn't have tensor cores to begin with.

That's actually really interesting, and I have to admit my mistake here. Densities that are not powers of 2 - I have never seen such a thing so far except on paper. I know it's doable, but it forces some odd addressing. I guess a typical memory controller can do that without too much of a problem, but it still feels odd. In practice, for the current GPU generation, it might help circumvent the lack of memory on some mid-range devices whose makers can't afford a 256-bit bus to get 16 GB but would still like to have those 16 GB, like the xx70 cards that typically have a 192-bit bus with 6 chips. With 3 GB chips they would get 18 GB, which would make them more future-proof.
24 Gb modules are already a thing in your 24 GB and 48 GB DDR5 sticks.
 
Is the large size supposed to be impressive?
 