
Parts of NVIDIA GeForce RTX 50 Series GPU PCB Reach Over 100°C: Report

Don't these guys do a thermal test for temperature when designing these cards?
They probably do, but risk assessments involve a lot more than just looking at temperatures. They also involve the chance of failure, the cost of a recall/redesign, and the potential cost of RMA requests and/or lawsuits. If the chance of failure is low and the cost of redesigning the card is high, they're not gonna do it.
 
Yet its loyal customers keep paying more and getting less. Serves them right.

NEW AMAZING OFFER!!!!!!!
DON'T MISS OUT!!!!!!
QUICKLY ONLY LIMITED SUPPLY AT MSRP!!!!

PAY FOR TWO, GET HALF!

DON'T MISS THIS OPPORTUNITY!!!!
 
More speculation, elaborating on what I've posted previously.

In this video @14:10, der8auer talks about power density. Taking that and extrapolating further, I present the following:

RTX 5090:
Die Size: 750 mm2
TDP: 575 W
Power Density: 0.77 W/mm2
Avg. nr. of VRM phases for the GPU (based on 7 TPU reviews, including the FE): 22.71
Watts per phase: 25.31 W

RTX 5080:
Die Size: 378 mm2
TDP: 360 W
Power Density: 0.95 W/mm2
Avg. nr. of VRM phases for the GPU (based on 11 TPU reviews, including the FE): 15.9
Watts per phase: 22.64 W
Regarding cooler size compared to the 5090:
360 is 62.61% of 575, but 0.95 divided by 0.77 is 1.2338; multiplying 62.61 by 1.2338, we get 77.25% of the cooler size, higher than what the TDP difference would suggest.

RTX 5070 Ti:
Die Size: 378 mm2
TDP: 300 W
Power Density: 0.79 W/mm2
Avg. nr. of VRM phases for the GPU (based on 7 TPU reviews): 15
Watts per phase: 20 W
Regarding cooler size compared to the 5090:
300 is 52.17% of 575, but 0.79 divided by 0.77 is 1.026; multiplying 52.17 by 1.026, we get 53.53% of the cooler size.

RTX 5070:
Die Size: 263 mm2
TDP: 250 W
Power Density: 0.95 W/mm2
Avg. nr. of VRM phases for the GPU (based on 4 TPU reviews, including the FE): 9.75
Watts per phase: 25.64 W
Regarding cooler size compared to the 5090:
250 is 43.48% of 575, but 0.95 divided by 0.77 is 1.2338; multiplying 43.48 by 1.2338, we get 53.65% of the cooler size, almost the same value as the 5070 Ti.

RTX 5060 Ti:
Die Size: 181 mm2
TDP: 180 W
Power Density: 0.99 W/mm2
Avg. nr. of VRM phases for the GPU (based on 7 TPU reviews, one is an 8 GB variant): 5.57
Watts per phase: 32.32 W
Regarding cooler size compared to the 5090:
180 is 31.3% of 575, but 0.99 divided by 0.77 is 1.2857; multiplying 31.3 by 1.2857, we get 40.24% of the cooler size, higher than what the TDP difference would suggest.

We see here that the 5080 fares pretty well and the 5070 Ti is the best. Repeating my original post, I argue that if one can get a 5070 Ti that has the same PCB and cooler (see the power density result) as the corresponding 5080 from the same MFG, that card is the safest bet in terms of PCB hotspot temps.
Of course there will be differences when comparing a 5070 Ti from one MFG to a 5070 Ti from another MFG; it depends on the implementation, some are better than others, and usually the price reflects that. However, if the thermal behaviour of a tested 5080 is considered good at the very least, then the thermal behaviour of a corresponding untested 5070 Ti with the same PCB and cooler will be better.

Even though the 5090 does not look very good in this comparison, the number of phases is not something that worries me; the power connector is. It truly should have had two, and possibly some safeties added on top of that.

Going further with the 5070, the comparison shows signs that on average there is more strain per phase than in the case of the 5080; couple that with a more crammed PCB and the potential for issues is greater.

The results for the 5060 Ti speak for themselves: a lot of cost-cutting is present here, but it's not reflected in the price (% over MSRP) for some aftermarket variants.

Obviously this is just speculation and does not replace actual results; still, as a napkin-math exercise, I would say it's a good enough starting point for purchase decisions in the absence of actual thermal-imaging results.
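
Since this is only arithmetic, here is the same napkin math as a small Python sketch. The die sizes, TDPs, and average phase counts are the figures quoted above; the "cooler vs 5090" metric is the TDP share of the 5090 scaled by the power-density ratio, exactly as in the paragraphs above (small differences come from the densities being rounded to two decimals there).

```python
# Napkin-math sketch of the figures above: power density, watts per VRM phase,
# and the "relative cooler size" metric (TDP share of the 5090 scaled by the
# power-density ratio). Die size in mm^2, TDP in W, phases = avg. from TPU reviews.
cards = {
    "RTX 5090":    {"die_mm2": 750, "tdp_w": 575, "phases": 22.71},
    "RTX 5080":    {"die_mm2": 378, "tdp_w": 360, "phases": 15.9},
    "RTX 5070 Ti": {"die_mm2": 378, "tdp_w": 300, "phases": 15.0},
    "RTX 5070":    {"die_mm2": 263, "tdp_w": 250, "phases": 9.75},
    "RTX 5060 Ti": {"die_mm2": 181, "tdp_w": 180, "phases": 5.57},
}

ref = cards["RTX 5090"]
ref_density = ref["tdp_w"] / ref["die_mm2"]           # ~0.77 W/mm^2

for name, c in cards.items():
    density = c["tdp_w"] / c["die_mm2"]                # W/mm^2
    w_per_phase = c["tdp_w"] / c["phases"]             # W per VRM phase
    tdp_share = c["tdp_w"] / ref["tdp_w"]              # fraction of 5090 TDP
    rel_cooler = tdp_share * (density / ref_density)   # the post's cooler-size metric
    print(f"{name:12s} {density:.2f} W/mm^2  {w_per_phase:5.2f} W/phase  "
          f"cooler vs 5090: {rel_cooler * 100:.2f} %")
```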
 
After Igor's Lab's review, I sent an e-mail to Gainward and asked them about the temperature issue in that review: whether there was a temperature issue in the VRM components of the RTX 5080 and 5070 Ti, since Gainward and Palit use the same circuit board, and what the upper temperature limit of those components is. I also sent the thermal-camera images and article links from both the Palit and PNY reviews on Igor's Lab.

They responded to the e-mail; this is the answer for everyone who uses a Palit or Gainward RTX 5080 / 5070 Ti:

Dear customer,
Thank you for the mail and sorry for my incorrect typing.

The graphics card that has the hotspot 107.3C is the PNY RTX 5070, but not the Palit RTX 5080.
In this article, the Palit RTX 5080 has the hotspot 80.5C.

The board power of the reviewed RTX 5080 GamingPro OC is 360W, while the board power of the RTX 5070 Ti Phoenix is a smaller 300W.
The power components of RTX 5070 Ti Phoenix are all located on the front of the PCB, which all have thermal pads that conduct heat to the heat dissipation module.

Moreover, these power components are highly heat-resistant up to 125 degrees Celsius.
All the power component temperatures are within the normal operation temperature range and comply with NVIDIA's regulations.

The overall heat dissipation of the RTX 5070 Ti Phoenix graphics card has been carefully evaluated and designed, please rest assured to purchase and use it.

Please kindly be notified.
Thank you!

Best regards,
Gainward Support.

 
The MOSFETs can withstand it, but not the poor capacitors, which at 105 °C have only a 5,000 h rated lifetime, and they sit right next to this hotspot.
Not to mention that thermal cycling from 20 °C to 107 °C over time will destroy the PCB vias and make these GPUs unfixable without a new PCB.
It all looks like a deliberate design for them to fail after the warranty ends.
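
As a rough illustration of the capacitor point (and nothing more): capacitor datasheets give a rated life at a rated temperature, and a common Arrhenius-based rule of thumb is that life roughly doubles for every 10 °C the part runs below that rating. The sketch below just applies that rule to the 5,000 h / 105 °C figure mentioned above; the operating temperatures and the 4 h/day usage are assumptions for illustration, not measurements of any real card.

```python
# Rule-of-thumb capacitor life estimate (Arrhenius "10 °C doubling" rule).
# Purely illustrative: the 5,000 h @ 105 °C rating is the figure quoted above;
# the operating temperatures and 4 h/day usage are assumptions, not measurements.
rated_life_h = 5_000      # datasheet rated life, hours
rated_temp_c = 105        # datasheet rated temperature, °C

def estimated_life_hours(operating_temp_c: float) -> float:
    """Life roughly doubles for every 10 °C below the rated temperature."""
    return rated_life_h * 2 ** ((rated_temp_c - operating_temp_c) / 10)

for temp in (105, 95, 85, 75):
    hours = estimated_life_hours(temp)
    years_at_4h_per_day = hours / 4 / 365
    print(f"{temp} °C: ~{hours:,.0f} h (~{years_at_4h_per_day:.1f} years at 4 h/day)")
```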
 
Sounds like a plausible risk for the manufacturer to accept: with 5,000 hours of rated life and 4 hours of daily exposure it would live over 3 years (5,000 h ÷ 4 h/day ≈ 1,250 days ≈ 3.4 years), and by then most people who play games that much have likely moved on already. This would mean the cards would, on average, live for 5-ish years under most circumstances?

And they can charge extra for professional cards ;)
 
That depends on how many thermal cycles the PCB can survive, maybe 1,000?
It's simply a metal-fatigue issue, and we have copper vias and solder here that can fail due to mechanical fatigue.
 

At this scale? Try a million. It's not like the copper is being heated to the point of melting and hardened all over again, and even if it were, it would largely be a connection issue between the circuit board and the components installed on it; after all, PCBs are copper sheets encased in fiberglass.

They are never the root cause of failure on a GPU board; if a failure involves the PCB, it's always a physical defect, either a manufacturing issue or damage caused by the user.
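
On the "1,000 vs. a million cycles" question, a common first-order way to reason about solder/via fatigue is a Coffin-Manson style relation, where cycles to failure scale with the temperature swing raised to a negative exponent. The sketch below is purely illustrative: the exponent and the reference point are assumed, textbook-style numbers rather than data for these cards, but it shows how strongly the answer depends on the assumed swing and constants.

```python
# Coffin-Manson style scaling for solder-joint / via fatigue:
#   N_f ∝ ΔT^(-n)
# Everything below is illustrative: the exponent n and the reference point
# (cycles to failure at a reference swing) are assumed values, not data for
# these specific cards.
n = 2.0                      # assumed fatigue exponent (often ~1.9-2.5 for solder)
ref_delta_t = 40.0           # assumed reference temperature swing, °C
ref_cycles = 1_000_000       # assumed cycles to failure at the reference swing

def cycles_to_failure(delta_t_c: float) -> float:
    """Scale cycles to failure from the reference point using N_f ∝ ΔT^(-n)."""
    return ref_cycles * (ref_delta_t / delta_t_c) ** n

for delta_t in (40, 60, 87):   # 87 °C ≈ the 20 °C -> 107 °C swing discussed above
    print(f"ΔT = {delta_t:>3} °C: ~{cycles_to_failure(delta_t):,.0f} cycles")
```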


5,000 hours is just about the most conservative estimate that component manufacturers are willing to warranty under the most extreme conditions. I have motherboards that ran without a heatsink on their MOSFETs, pushing a 95 W CPU on a 3-phase design, for upwards of 100k hours, and they still work. It's honestly not relevant.

I am sure you have noticed by now that much of this argumentation is not exactly rooted in the science of it, and is just another reason to try and hate on Nvidia.
 
Sounds like a plausible risk for the manufacturer to accept: with 5,000 hours of rated life and 4 hours of daily exposure it would live over 3 years, and by then the warranty period will be over, meaning the consumer will be forced to buy again, even if the card could have served them for 5 or more years ;)
fixed
 
Igor's Lab pointed out that the power vias will fail within a few years on these GPUs, probably because temperatures inside the PCB will be higher than the 107.5 °C measured on the surface.
 

Personally, I'm not losing sleep over it, and while it is true that aging etc. is not taken into account, I don't think this will realistically degrade within the hardware's lifetime. It's... copper. Unless so much current is put through it that it could internally melt, it'll be safe. Needless to say, it is quite manageable.
 