
Will undervolting a 4090 keep the connector from melting? A discussion about electrical theory.

Joined
Feb 18, 2025
Messages
81 (0.49/day)
Location
Spain
System Name "Nave Espacial"
Processor AMD Ryzen 7 9800X3D
Motherboard MSI B650M Project Zero
Cooling Corsair H150i Elite LCD XT (360mm) + 9 Corsair fans + Thermal Grizzly Duronaut
Memory Corsair Vengeance RGB 64GB (2x32GB) DDR5 6000MT/s CL30
Video Card(s) MSI GeForce RTX 4090 GAMING X SLIM
Storage Samsung 990 PRO 4TB + Acer Predator GM7 4TB
Display(s) Corsair Xeneon 27QHD240 (27", 2560x1440, 240Hz, OLED)
Case Corsair 2500X (black)
Audio Device(s) Corsair HS80 Wireless Headset
Power Supply Corsair RM1200x Shift
Mouse Corsair Darkstar Wireless
Keyboard Corsair K65 Pro Mini 65%
Software Windows 11, iCUE
I was trying to figure this out on my own, but I thought I might as well discuss it publicly here.
I own a 4090 and I'm (once again) anxiously concerned about a potential melting connector. My case is small, and the AIO tubes rest directly on top of the connector area, so in the worst-case scenario I'd be facing having to replace the GPU, PSU, and AIO. Really fun stuff.

I have come to consider undervolting the thing vs. power limiting it to 4080 territory to prevent heat build-up at the connector... but then I remembered the IT lessons from high school, and how Joule heating depends only on current (amps), not on watts (amps times volts) as a whole. The formula is
Heat = I² · R · t
...meaning that the real culprits for the heating would be the square of the current flowing through the cable, multiplied by the resistance of the mating point and the time under load. All this tells me that voltage is completely irrelevant, which would make undervolting not helpful for preventing this at all.
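To put numbers on my own reasoning, here is a minimal sketch of that relationship, assuming a purely illustrative 5 mΩ of total contact resistance (a made-up figure; real mating resistance varies per pin and per seating):

```python
# Joule heating at the connector: Q = I^2 * R * t  (A^2 * ohm = W, and W * s = J)
V_RAIL = 12.0      # volts on the 12V rail, fixed regardless of GPU core voltage
R_CONTACT = 0.005  # ohms, illustrative total contact resistance (made-up figure)

def connector_heat_watts(board_power_w: float) -> float:
    """Steady-state heat dissipated in the contact resistance at a given board power."""
    current = board_power_w / V_RAIL  # I = P / V on the input side
    return current ** 2 * R_CONTACT  # power lost as heat in the contacts

for power in (450, 350, 300):
    amps = power / V_RAIL
    print(f"{power} W -> {amps:.1f} A -> {connector_heat_watts(power):.2f} W of contact heating")
```

If the formula is right, heat should fall with the square of the current, so cutting board power by a third should cut contact heating by more than half.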

That being said, I'm not quite confident in my conclusions. Any mind brighter than mine that might chime in? Any electrical engineers in the room? Thanks in advance.
 
Last edited:
To undervolt, you have to install an OS, log into it, install the drivers, install a tuning app (because Nvidia can't be bothered to provide an official one), and do your adjustments. And if said tuning app accidentally forgets to load your profile on startup one day...

I don't trust software solutions to control hardware. Never have, never will.
 
The voltage is relevant. You can't have current flow without it. Voltage is a difference in electric potential. Ohm's Law still applies.

By adjusting the power limit, you are essentially adjusting a rheostat that changes the resistance. The 12 V is still there; you are just adjusting how much current goes through the card. It is an inverse relationship: as you lower the resistance, current goes up, and vice versa.
 
The problem is that undervolting doesn't necessarily mean the GPU will draw any less power; rather, it will try to boost even more freely, which could mean even more spikes. You have to limit the power to reduce the risk; otherwise, these cards are designed to overclock (i.e. boost) themselves based on the available power and temperature budget, and they do so in milliseconds.

If these cards didn't have a boost clock, then you'd have something going for you. What I've done with my 6700 XT was lower the power limit and the voltage, and the card actually runs at boost speeds almost all the time while using less energy.
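For intuition, here's a toy model of that "boost until you hit a limit" behavior (not Nvidia's actual algorithm; the power model and every constant are made up):

```python
# Toy model of GPU boost vs. a power limit. P ~ k * f * V^2 is a rough CMOS
# power-scaling rule of thumb; K, clocks, and voltages are made-up numbers.
K = 0.146  # chosen so that ~2800 MHz at 1.05 V lands near 450 W

def boost(voltage_v: float, power_limit_w: float, max_clock_mhz: float = 2800.0):
    """Find the highest clock whose estimated power fits under the limit."""
    clock = max_clock_mhz
    while clock > 0 and K * clock * voltage_v ** 2 > power_limit_w:
        clock -= 15  # step down one boost bin
    return clock, K * clock * voltage_v ** 2

for volts in (1.05, 0.95):  # stock voltage vs. an undervolt
    clock, power = boost(volts, power_limit_w=350.0)
    print(f"{volts:.2f} V -> {clock:.0f} MHz at {power:.0f} W")
```

Same 350 W limit in both cases, but the undervolted run sustains a noticeably higher clock, which matches what I see on my card.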
 
The amps going through the connector are on the 12 V power line coming from the PSU to the on-card DC-DC converters. It is 12 V in all environments (except the most exotic experimental labs; actually, I tried running GPUs on 13 V out of curiosity, and it works fine, since the card's DC-DC converter elements are actually rated for ~15-16 V. You draw ~8% lower amps at 13 V, btw).

The voltages you actually control during undervolting/limiting power usage are the voltages on the DC-DC converter output (0.5-1.1 V range).

The switching DC-DC converter used in a GPU has quite high efficiency, so when the output wattage goes down (no matter whether via volts or amps), the input wattage goes down too. And given that the input voltage is always 12 V, this means the input amps going through the connector go down as well.

So, in a nutshell: lowering the output voltage of the GPU's on-board DC-DC buck converter leads to lower input current passing into the card.
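A quick sanity check of that chain in code, assuming a 90% converter efficiency (a typical ballpark, not a measured figure for any specific card):

```python
# Input current on the 12V connector for a given core-side load,
# assuming a fixed converter efficiency (illustrative 90%).
V_IN = 12.0
EFFICIENCY = 0.90  # assumed; real VRM efficiency varies with load

def input_amps(core_volts: float, core_amps: float) -> float:
    """P_in = P_out / efficiency, then I_in = P_in / 12 V."""
    p_out = core_volts * core_amps
    return p_out / EFFICIENCY / V_IN

# Same 400 A of core current at stock vs. undervolted core voltage:
for v_core in (1.05, 0.95):
    print(f"{v_core:.2f} V core -> {input_amps(v_core, 400):.1f} A on the connector")
```

Lower core voltage at the same core current means lower output wattage, hence fewer amps on the 12 V side.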
 
My 4090 is pretty chill at stock clocks: 2730 MHz @ 950 mV undervolt (the default is 1050 mV); power usage is around 300-350 W at 4K.

Undervolting your chip so it uses less power = fewer amps running over the 12 V wires
 
I believe Darmok is correct in saying power limiting is your best bet. That said you should keep your ear to the ground on this issue as we still don't have any hard confirmation on the exact cause / causes of the problem. There are quite a few potential culprits: Poor pin contact, lack of current balancing, low number of mate cycles, improper pin plating, low safety margin, and pin current distribution changing on re-seat or simply if the wire is bent. Several of these factors cannot be fully mitigated by lowering power limits, only delayed.
 
The amps going through the connector are on the 12 V power line coming from the PSU to the on-card DC-DC converters. It is 12 V in all environments (except the most exotic experimental labs; actually, I tried running GPUs on 13 V out of curiosity, and it works fine, since the card's DC-DC converter elements are actually rated for ~15-16 V. You draw ~8% lower amps at 13 V, btw).

The voltages you actually control during undervolting/limiting power usage are the voltages on the DC-DC converter output (0.5-1.1 V range).

The switching DC-DC converter used in a GPU has quite high efficiency, so when the output wattage goes down (no matter whether via volts or amps), the input wattage goes down too. And given that the input voltage is always 12 V, this means the input amps going through the connector go down as well.

So, in a nutshell: lowering the output voltage of the GPU's on-board DC-DC buck converter leads to lower input current passing into the card.

This is one of the most educational replies I've read on the matter. I had no clue about how voltages differ across the different converter stages within the system. I am quite new to PC building and have no experience in overclocking/undervolting whatsoever, so I was simplifying it all down to a single voltage/current/power.

What this plus Darmok's reply tells me is that undervolting and power limiting the card at the same time would probably be the safest bet to reduce the current load through the connector.

I have one doubt, though: does your explanation mean the input power to the DC-DC converter coming from the PSU is roughly equal (minus efficiency losses) to the output power sent to the GPU die? This would mean that under a standard 450 W TDP the amps flowing into the card would be 450 W / 12 V = 37.5 A, but then after the conversion at the DC-DC module they would become ~450 W / (0.5-1.1 V range) ≈ 409-900 A, which seems like an insane amount of current that would fry any electronics. I would appreciate it if someone could clarify this.

I believe Darmok is correct in saying power limiting is your best bet. That said you should keep your ear to the ground on this issue as we still don't have any hard confirmation on the exact cause / causes of the problem. There are quite a few potential culprits: Poor pin contact, lack of current balancing, low number of mate cycles, improper pin plating, low safety margin, and pin current distribution changing on re-seat or simply if the wire is bent. Several of these factors cannot be fully mitigated by lowering power limits, only delayed.

I believe this is a phenomenal summary of the known issues. It really is a dumpster fire.
 
What this plus Darmok's reply tells me is that undervolting and power limiting the card at the same time would probably be the safest bet to reduce the current load through the connector.
Yes, both changes have some effect. Undervolting lowers the "wanted power" for a given load. Power limiting lowers the maximum allowed power. If the card at some load wants more power than the limit, it is immediately throttled a bit.

So power limiting is the most efficient way to limit peak amps through the connector. But with undervolting, the card can sit below the power limit a lot of the time, so adding an undervolt on top of an existing power limit lowers the average amps over the connector (but does not change the peak amps). Also, when hitting the power limit while undervolted, the frequency (and FPS) is better than when hitting the same power limit non-undervolted.
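A toy illustration of that peak-vs-average distinction, using a made-up load trace and an assumed ~12% demand reduction from the undervolt:

```python
# Peak vs. average connector current under a power limit, with and without an
# undervolt. The load trace and the 12% demand reduction are made-up numbers.
V_RAIL = 12.0
LIMIT_W = 400.0

wanted_stock = [300, 450, 520, 380, 600, 350]        # watts the card "wants" per step
wanted_undervolt = [w * 0.88 for w in wanted_stock]  # assume ~12% lower demand

def connector_amps(wanted_watts):
    """Clamp each step at the power limit, then convert to 12 V rail amps."""
    amps = [min(w, LIMIT_W) / V_RAIL for w in wanted_watts]
    return max(amps), sum(amps) / len(amps)

for label, trace in (("power limit only", wanted_stock),
                     ("power limit + undervolt", wanted_undervolt)):
    peak, avg = connector_amps(trace)
    print(f"{label}: peak {peak:.1f} A, average {avg:.1f} A")
```

Same peak (the limit clamps both), but a lower average with the undervolt on top.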


This would mean that under a standard 450 W TDP the amps flowing into the card would be 450 W / 12 V = 37.5 A, but then after the conversion at the DC-DC module they would become ~450 W / (0.5-1.1 V range) ≈ 409-900 A,
That's mostly true: hundreds of amps go from the power system into the chip. 900 A is not the case, since 0.5 V is the no-load state with minor power usage, but 400+ A, yes.
The 4090 (as a simpler example) has ~20-24 power phases for NVVDD, each providing ~20-30 A of current. The GPU die is quite big, so the current does not enter at a single point: it goes under the chip via extremely wide traces/planes, then through hundreds of solder balls into the many semi-independent blocks of the GPU die.
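Putting those phase numbers together (ballpark figures only, not from any board schematic):

```python
# Rough check on core-side current and per-phase load for a 4090-class card.
# Phase count, efficiency and core voltage are ballpark, not schematic data.
BOARD_POWER_W = 450.0
VRM_EFFICIENCY = 0.90  # assumed converter efficiency
V_CORE = 1.0           # volts under load, illustrative
PHASES = 22            # NVVDD phases, within the ~20-24 range quoted above

core_power = BOARD_POWER_W * VRM_EFFICIENCY  # watts delivered past the VRM
core_current = core_power / V_CORE           # hundreds of amps at ~1 V
per_phase = core_current / PHASES            # each phase's share if balanced

print(f"core current: {core_current:.0f} A, per phase: {per_phase:.1f} A")
```

That lands around 18 A per phase, comfortably inside the ~20-30 A per-phase capability mentioned above.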
 
Yes, both changes have some effect. Undervolting lowers the "wanted power" for a given load. Power limiting lowers the maximum allowed power. If the card at some load wants more power than the limit, it is immediately throttled a bit.

So power limiting is the most efficient way to limit peak amps through the connector. But with undervolting, the card can sit below the power limit a lot of the time, so adding an undervolt on top of an existing power limit lowers the average amps over the connector (but does not change the peak amps). Also, when hitting the power limit while undervolted, the frequency (and FPS) is better than when hitting the same power limit non-undervolted.



That's mostly true: hundreds of amps go from the power system into the chip. 900 A is not the case, since 0.5 V is the no-load state with minor power usage, but 400+ A, yes.
The 4090 (as a simpler example) has ~20-24 power phases for NVVDD, each providing ~20-30 A of current. The GPU die is quite big, so the current does not enter at a single point: it goes under the chip via extremely wide traces/planes, then through hundreds of solder balls into the many semi-independent blocks of the GPU die.

Once again, thanks for your insightful reply.

:respect:
 

I saw this guy do some undervolting and measure the amps, and they were indeed brought down, so that answers my question.
My original assumption was (fortunately) wrong, and undervolting does help with the issue.
 
Study BIOS files and figure out a way to make them limit current and voltage.
 
No, undervolting will not have any bearing on this issue, as the failure is mechanical in nature and occurs at a stage that precedes the GPU core's operation. However, power limiting can and will help. The issue will not be gone, but less load means the cable will heat less, and it might avoid a meltdown if you give it a safety margin yourself.
 
No, undervolting will not have any bearing on this issue, as the failure is mechanical in nature and occurs at a stage that precedes the GPU core's operation. However, power limiting can and will help. The issue will not be gone, but less load means the cable will heat less, and it might avoid a meltdown if you give it a safety margin yourself.

If undervolting means reducing the intensity of current (amps) flowing through the cable, then it helps in the same way that power limiting does.
 
I only provided a suggestion, no need to be rude

I was not being rude, I was being humorous and laughing at my own ineptitude. But the fact some mod deleted my post tells me this place is no different from Reddit. Shame.
 
I was not being rude, I was being humorous and laughing at my own ineptitude. But the fact some mod deleted my post tells me this place is no different from Reddit. Shame.
Ok sorry I misunderstood you. I will remove my above message. Maybe you should repost your statement lol.
 
That's one tool I've been using to reduce that risk, yes.
 
Last edited:
Yes, it will. 400 W TDP is the sweet spot for safety on the RTX 5090.
 
If undervolting means reducing the intensity of current (amps) flowing through the cable, then it helps in the same way that power limiting does.

As I understand it, the input side (where the physical damage always occurs) is always fed a mostly static ~12 volts. VRMs are not 100% efficient, but the most important factor in their conversion efficiency is current draw. Remember, power (W) = current (A) * voltage (V): if you push the same 600 W at a lower input voltage, the current is necessarily higher to compensate, and likewise a higher input voltage at a lower current results in the same wattage. This of course simplifies the equation and disregards inefficiencies such as resistance, conversion losses, etc., but it's the gist of it.

There's a thread I can recommend if you wish to understand exactly how VRMs operate, but I have to admit it's above my pay grade.
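The fixed-wattage trade-off as plain arithmetic (no card-specific numbers, just I = P / V):

```python
# Same power at different input voltages: I = P / V, so fewer volts means more amps.
POWER_W = 600.0
for volts in (12.0, 11.4, 13.0):  # nominal rail, a sagging rail, and the 13 V experiment above
    print(f"{volts:4.1f} V -> {POWER_W / volts:.1f} A for the same {POWER_W:.0f} W")
```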


Yes, it will. 400 W TDP is the sweet spot for safety on the RTX 5090.

Way too conservative, IMHO. If we'd like to leave some margin (for no other than feel-good reasons), I'd personally go with 525 W at a minimum. Please refer to the Amphenol specification for the connector. If done strictly by the book, no issue should occur... after all, electricity is physics, and physics is math... in a perfect world. One where everyone is using 16 AWG cables, has verified and re-verified their installation, there are no crimping issues, everything is wired correctly, etc.

This is why I do not believe it is acceptable to cut back on safety features such as shunts and load balancing, as Nvidia did. That was a reckless move. This doesn't mean "cards are doomed!1" like the people spreading FOMO on Reddit are going on about, but it does mean there is far less tolerance for poorly built cables and the human skill factor (or lack thereof).
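For reference, here is the raw connector arithmetic under ideal current sharing. I am quoting the 9.5 A per-pin figure from the Amphenol/Molex 12VHPWR spec from memory, so treat it as an assumption:

```python
# Per-pin current on a 12VHPWR connector, assuming perfectly balanced sharing
# across the six 12 V pins. The 9.5 A/pin rating is quoted from memory.
PINS = 6
PIN_RATING_A = 9.5

for board_power in (600, 525, 400):
    per_pin = board_power / 12.0 / PINS
    margin = PIN_RATING_A / per_pin
    print(f"{board_power} W -> {per_pin:.2f} A/pin ({margin:.2f}x pin rating)")
```

The moment the sharing is not balanced, those margins evaporate, which is exactly the failure mode discussed further down.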
 
Shunts are not a safety feature. A shunt is just a low-value resistor used for measurement.
You may use it to determine a current or a voltage. You need some sort of surrounding circuit or mechanism to make a safety feature out of it.

A fuse is a safety feature, in some cases.

power (W) = current (A) * voltage (V)

Watts = Volts × Amps = Amps² × Resistance = Volts² / Resistance

I'm still not sure about the Joule formula from post #1.
Sometimes I wish I had not grown up in a poor environment and had a better education. I cannot say whether the joule is the unit of "heat" like post #1 suggests.
 
If undervolting means reducing the intensity of current (amps) flowing through the cable, then it helps in the same way that power limiting does.
Power limiting gives you the safety. Undervolting just brings the performance back up to where it was without the power limit. If you just undervolt, the card will simply chase higher clocks, which will result in the same power being pulled. I'm running mine at +100 MHz core / +1400 memory, restricted to 70% power (which is 320 W).
 
I don't see how shoving a little less power through the cables fixes a cost-optimization issue in the actual circuit between the PSU and GPU.
Having six cables go into one piece of metal, with no way of knowing which cable delivers how much power, is the problem. Once you end up with two cables having lower resistance than the rest (no matter whether it's crooked, badly manufactured, or whatever), your card will pull its power through one or two thin 18 AWG wires, and it will melt.

These cards will melt forever and there is nothing you can do about it... except selling it.
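The current-divider math behind that: with wires in parallel, each wire's share of the current is inversely proportional to its resistance, so one low-resistance path hogs the load. A sketch with made-up per-wire resistances:

```python
# Current sharing across parallel 12V wires: I_k = I_total * G_k / sum(G), G_k = 1/R_k.
# Resistances are made up: one well-seated contact among five poor ones.
TOTAL_AMPS = 50.0  # roughly 600 W / 12 V
resistances = [0.005, 0.020, 0.020, 0.020, 0.020, 0.020]  # ohms per wire + contact

conductances = [1.0 / r for r in resistances]
g_total = sum(conductances)
for i, (g, r) in enumerate(zip(conductances, resistances), start=1):
    amps = TOTAL_AMPS * g / g_total
    print(f"wire {i}: {amps:.1f} A ({amps ** 2 * r:.2f} W of heat)")
```

With those made-up numbers, the best-seated wire carries over 22 A, more than double the per-pin rating quoted earlier, while the rest idle along at ~5.6 A.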
 
As I understand it, the input side (where the physical damage always occurs) is always being fed a mostly static ~12 volts. VRMs are not 100% efficient, but the most important factor regarding its conversion efficiency is current draw. Remember, power (W) = current (A) * voltage (V), if you have the same 600 W at a lower input voltage, the current is necessarily higher to compensate. As would higher input voltage at a lower current result in the same wattage. This of course simplifies the equation and disregards inefficiencies such as resistance, conversion efficiencies etc. but it's the gist of it.

There's a thread I can recommend if you wish to understand how VRMs operate exactly, but I have to admit it's above my pay grade




Way too conservative IMHO. If we'd like to leave some "margin" (for no other than feel good reasons), I'd personally go with 525 W at a minimum. Please refer to the Amphenol specification for the connector. If done strictly by the books, no issue should occur... after all, electricity is physics, and physics is math... in a perfect world. One where everyone is using 16 AWG cables, has verified and reverified their installation, there are no crimping issues, everything is wired correctly, etc.

This is why I do not believe it to be acceptable to cut back on safety features such as shunts and load balancing as Nvidia did. That was a reckless move. This doesn't mean "cards are doomed!1" like people spreading FOMO are going on about on Reddit, but it does mean that there is far less tolerance for poorly built cables and the human skill factor (or lack thereof).
No, it's not. 585 W of Nvidia power is literally 525 W through the 12-pin connector once you subtract the PCIe slot power, so it already melts at that power. And the 4090, which was 450 W by design, melted too.
So 400 W TDP is the sweet spot, because you will get 350 W from the 12-pin and 50 W from the PCIe slot, so you will be safe.
350 W on a 600 W-rated connector is a safety factor of about 1.7, which is good.
All you lose is about 5% performance going down from 525 W. But the TDP is 25% less!
And the heat through the connector is about 1.7× lower:
P = U · I
I1 = 525 W / 12 V ≈ 43.8 A. I2 = 400 W / 12 V ≈ 33.3 A.
Q = I·I·R = 43.8 · 43.8 / 33.3 / 33.3 ≈ 1.72
I will only go above 400 W if I ever dip below my monitor's refresh rate, not sooner. Overwatch/Valorant is an easy 500 FPS at 4K.
 
Last edited:
These cards will melt forever and there is nothing you can do about it... except selling it.

* or just not buying it.

-- Good Summary

I don't see how shoving a little less power through the cables fixes a cost-optimization issue in the actual circuit between the PSU and GPU.
Having six cables go into one piece of metal, with no way of knowing which cable delivers how much power, is the problem.

Nvidia still has not made a single statement or a real 8D report, so we still don't know the real issue. They just sit it out: the warranty period runs out and the products are sold.

Q = I·I·R = 43.8 · 43.8 / 33.3 / 33.3 ≈ 1.72

What are you calculating? Please clarify. Check my post above and try again.

So you say the resistance is 33.3 ohms? Seriously? And you divide by it twice, even though I took the effort to write the formula above with units.

These are the basics from school that any 12-year-old knows.

Always, always write: number, decimal comma, number, plus the technical unit; then a mathematical operator; then number, decimal comma, number, plus unit... then an equals sign with number, decimal comma, number, plus unit.
 
Last edited:
Undervolting will keep the connector from melting.
 