
Reverse engineering a 12VHPWR cable

1) Remove the 12VHPWR connector from the GPU's PCB
2) Take the '8-pin to 12VHPWR' adapter cable supplied by Nvidia and cut the 12VHPWR connector off
3) Solder the adapter cable to the GPU's PCB so that you now have only four 8-pin connectors for the GPU
* (optional) If you have a 12VHPWR connector on the PSU side and you want to bypass it, just cut that end of the cable off and solder it directly to the PSU's PCB.
 
1) Remove the 12VHPWR connector from the GPU's PCB
2) Take the '8-pin to 12VHPWR' adapter cable supplied by Nvidia and cut the 12VHPWR connector off
3) Solder the adapter cable to the GPU's PCB so that you now have only four 8-pin connectors for the GPU
* (optional) If you have a 12VHPWR connector on the PSU side and you want to bypass it, just cut that end of the cable off and solder it directly to the PSU's PCB.
Now you invented a bomb. Thanks.
 
So...let's talk failure mode.
1) Where are the failures occurring?
2) Why are they failing?
3) How are they failing?
The failures occur at the highest-resistance point in the whole chain which is typically at the connector, but not always. Plenty of examples exist where the wire insulation has melted instead (probably AWG18).

The failure is a symptom of the problem, and the problem is a design fault of the GPUs themselves. The cable is just six 12V circuits in a one-piece connector. The wiring gauge, the Micro-Fit connector, and the pins in the connector aren't at fault, since the 16-pin standard rates the six circuits at 8.3A per wire for a total of 600W, which is well within the tolerance of the wire gauge and connector pins.

The 16-pin cables aren't failing because they are faulty themselves, they are failing because the device (GPU) that is drawing current is asking for far more than 8.3A across one of the circuits. If the weakest link in the 16-pin standard is the pins themselves, then that's where they'll fail. The pins are supposed to handle no more than 8.3A each in the 12VHPWR or 12V6X2 standard. They are rated, by Amphenol (the manufacturer), to 10.5A each - and that includes a safety margin most likely, so the actual limit before stuff starts to melt is probably in excess of 16A.
It is not the fault of Amphenol, Molex, or the wire manufacturer if the device that is supposed to be drawing 8.3A from that circuit is actually able to draw 25A from that circuit because it's been designed without the necessary load-balancing. You can't blame a cable or a connector for failing when the device using the cable exceeded the specification by 200%!!
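Just to put those numbers in one place, here's a quick back-of-the-envelope sketch in Python (the 600W / 8.3A / 10.5A figures are the ones quoted above; the 25A case is the hypothetical runaway circuit, not a measurement):

```python
# Per-circuit numbers for the 16-pin (12VHPWR / 12V-2x6) connector,
# using the figures quoted above.
RAIL_VOLTAGE = 12.0   # V
CIRCUITS = 6          # six 12V supply circuits in the connector
SPEC_POWER = 600.0    # W, connector rating

spec_amps = SPEC_POWER / RAIL_VOLTAGE / CIRCUITS
print(f"Spec current per circuit: {spec_amps:.2f} A")          # ~8.33 A

PIN_RATING = 10.5     # A, Amphenol per-pin rating quoted above
print(f"Headroom at spec load:    {PIN_RATING / spec_amps:.2f}x")

worst_case = 25.0     # A, the hypothetical runaway circuit mentioned above
print(f"A 25 A circuit is {worst_case / spec_amps:.1f}x the per-circuit spec")
```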

Since the PSU end of the circuits is a common rail, all six of the 12V circuits need to be monitored and balanced at the load end, not the PSU end. Doing what Nvidia did with the 5090FE, for example, and connecting all six pins together in another common rail is just dumb:

[image attachment]


What Nvidia have done is basically made a single 600W (50A) cable and individually split it out six times with no way to know how much current is going down each branch. The wire, the connector, the pins - all of them will fail at 50A, and because the six circuits sit in parallel between two common rails, the current divides according to resistance: the circuit with the lowest resistance (the best contact) carries the most current and runs the hottest, while a degraded, high-resistance contact dumps its share onto the others. It doesn't take a genius to realise that's a positive feedback cycle, aka a "vicious circle": once the first wire fails entirely the current is 20% higher in the remaining wires, accelerating the heat buildup for an even faster failure of the next-hottest wire.
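To illustrate that cascade with rough numbers (a sketch only - it assumes the full 600W load is held constant and split evenly across whichever circuits remain, which a real card without per-circuit balancing won't do exactly):

```python
# Cascade illustration: total load held constant while circuits fail
# open one by one and the survivors pick up the slack.
TOTAL_AMPS = 50.0   # 600 W / 12 V across the whole connector
CIRCUITS = 6
PIN_RATING = 10.5   # A per pin (rating quoted above)

for failed in range(CIRCUITS):
    per_circuit = TOTAL_AMPS / (CIRCUITS - failed)
    status = "OVER PIN RATING" if per_circuit > PIN_RATING else "ok"
    print(f"{failed} circuit(s) failed -> {per_circuit:5.2f} A per remaining circuit ({status})")
```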

[image attachment]


This connector design on the 5090 (and copied for other GPUs) is a class-action lawsuit waiting to happen, IMO - and any electrical engineer would shake their head in despair if you showed them this and said it was the same common rail at both ends...
 
And if I just solder two 8-pin connectors there it wouldn't work, right?
[image attachment]
 
No, I won't use a 5090 or anything remotely close in terms of wattage. I'm targeting a 5070 at a roughly 225 W power budget. That's why I'm adamant my cable will work, unless there's some voodoo magic I need to apply to it that you wouldn't normally apply to a regular power cable.
At 225W why is reverse engineering a 12VHPWR cable any consideration at all?
 
They have solutions for this you know. Bigger pins
The bigger pins still have to fit in the connector WITHOUT the risk of shorting out with (or arcing over to) an adjacent pin. So I don't see how bigger pins can be used without replacing both male and female connectors. Certainly possible on the PSU cable - not the other.
I thought the cable had the problem of inconsistent contact with the pins and not much to do with the gauge of wires.
You are right about it not being the size of the cable. 18awg is plenty big (see NOTE below) with lots of headroom for these applications - assuming quality wire (a big assumption) that is not otherwise damaged. The problem is often lousy connections between the wire and the pins.

The best mechanical, and thus best electrical, connection is a proper solder joint. But soldering wires to pins is more complicated, time-consuming, and labor-intensive, and thus a more expensive process than crimping. So most connectors use pins where the wire is terminated by crimping. Crimping is great, when done right. But a poor crimp, unless the wire just falls out, is harder to spot than a poor solder joint.

Often the bigger problem is just the mechanical connection (or lack of a good one) between the two halves of the connector. If it isn't tight for each and every pin, there may be poor continuity. But worse, if it is loose on day one, it will only get worse, as a loose connection will allow dirt and other contaminants between the contacts, and may promote arcing and thus carbon buildup, and/or oxidation/corrosion.
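A quick I²R sketch shows why the termination matters so much more than the wire itself at these currents (the milliohm figures below are illustrative assumptions, not measured values):

```python
# Heat dissipated in the termination itself: P = I^2 * R.
# Contact resistances below are illustrative assumptions only.
current = 8.33   # A, per-circuit spec current
contacts = {
    "good crimp, tight mate":    0.002,   # ohms (a couple of milliohms)
    "marginal crimp":            0.020,
    "loose / oxidised contact":  0.080,
}
for name, ohms in contacts.items():
    watts = current ** 2 * ohms
    print(f"{name:25s} {ohms*1000:4.0f} mOhm -> {watts:.2f} W in one tiny contact")
```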

Plenty of examples exist where the wire insulation has melted instead (probably AWG18).
But again, with a quality wire, 18awg is plenty big (again, see NOTE below) for these applications. So in those instances where the insulation melted (and I totally agree, there are examples out there), it happened for one of two reasons: (1) poor/cheap wire with insulation that did NOT meet industry standards was used to save a few pennies during production, or (2) there was a partial short somewhere in the circuit causing an increase in current and thus too much heat.

NOTE: Before someone jumps in and claims 18awg is too small for today's monster graphics cards, that is wrong!!!! Why? Because the motherboard ALWAYS supplies 75W through the PCIe slot and the remaining power demand is divided among additional wires in the 6, 6+2, 8 or 12 pin connectors.

No single wire will ever be required to handle more current than any 18awg wire is capable of handling (assuming undamaged and manufactured to industry standards for 18awg).
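For anyone who wants to check the NOTE's arithmetic, here's a rough per-wire breakdown at each connector's spec limit (a sketch; it assumes an even split across the 12V wires, and two 12V wires for the 6-pin, though many implementations wire three):

```python
# Per-wire current at each connector's spec power, assuming an even split
# across the 12V supply wires. Wire counts are the nominal ones.
RAIL = 12.0
connectors = {                 # name: (spec watts, 12V supply wires)
    "6-pin PCIe":      (75.0, 2),
    "8-pin PCIe":      (150.0, 3),
    "16-pin 12VHPWR":  (600.0, 6),
}
AWG18_SHORT_RUN = 10.0         # A, the rough short-run figure cited in this thread

for name, (watts, wires) in connectors.items():
    amps = watts / RAIL / wires
    verdict = "within" if amps <= AWG18_SHORT_RUN else "over"
    print(f"{name:15s} {amps:4.2f} A per wire ({verdict} ~{AWG18_SHORT_RUN:.0f} A for 18awg)")
```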

Anyone tried Stabilant 22 on these connectors?
Only in some mobile devices - devices subject to knocks and bumps and rough handling and/or outdoor weather.
 
The failures occur at the highest-resistance point in the whole chain which is typically at the connector, but not always. Plenty of examples exist where the wire insulation has melted instead (probably AWG18).

The failure is a symptom of the problem, and the problem is a design fault of the GPUs themselves. The cable is just six 12V circuits in a one-piece connector. The wiring gauge, the Micro-Fit connector, and the pins in the connector aren't at fault, since the 16-pin standard rates the six circuits at 8.3A per wire for a total of 600W, which is well within the tolerance of the wire gauge and connector pins.

The 16-pin cables aren't failing because they are faulty themselves, they are failing because the device (GPU) that is drawing current is asking for far more than 8.3A across one of the circuits. If the weakest link in the 16-pin standard is the pins themselves, then that's where they'll fail. The pins are supposed to handle no more than 8.3A each in the 12VHPWR or 12V6X2 standard. They are rated, by Amphenol (the manufacturer), to 10.5A each - and that includes a safety margin most likely, so the actual limit before stuff starts to melt is probably in excess of 16A.
It is not the fault of Amphenol, Molex, or the wire manufacturer if the device that is supposed to be drawing 8.3A from that circuit is actually able to draw 25A from that circuit because it's been designed without the necessary load-balancing. You can't blame a cable or a connector for failing when the device using the cable exceeded the specification by 200%!!

Since the PSU end of the circuits is a common rail, all six of the 12V circuits need to be monitored and balanced at the load end, not the PSU end. Doing what Nvidia did with the 5090FE, for example, and connecting all six pins together in another common rail is just dumb:

View attachment 402612

What Nvidia have done is basically made a single 600W (50A) cable and individually split it out six times with no way to know how much current is going down each branch. The wire, the connector, the pins - all of them will fail at 50A, and because the six circuits sit in parallel between two common rails, the current divides according to resistance: the circuit with the lowest resistance (the best contact) carries the most current and runs the hottest, while a degraded, high-resistance contact dumps its share onto the others. It doesn't take a genius to realise that's a positive feedback cycle, aka a "vicious circle": once the first wire fails entirely the current is 20% higher in the remaining wires, accelerating the heat buildup for an even faster failure of the next-hottest wire.

View attachment 402613

This connector design on the 5090 (and copied for other GPUs) is a class-action lawsuit waiting to happen, IMO - and any electrical engineer would shake their head in despair if you showed them this and said it was the same common rail at both ends...

Note, I did not touch on failure root cause. I touched on the failure mode. Rather significant difference.

Additionally, the instances where wires failed were almost exclusively (based upon my investigation) when wires were bent at 90 degrees and compressed. I.e., the cross-sectional area was forcibly decreased, causing the point of greatest resistance to suddenly be at that severe bend. That doesn't change the failure mode (heating at areas of limited cross-sectional area). It absolutely does change the root cause (bending leading to breaks in the wiring, or other decreases in cross-section).
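Putting that in numbers: the resistance of a stretch of wire goes as R = ρL/A, so squeezing the cross-section at a sharp bend raises the local resistance (and the local I²R heating) in direct proportion. A trivial sketch, with the remaining-area fractions purely illustrative:

```python
# R = rho * L / A: for the damaged stretch, resistance (and I^2*R heat at
# a given current) scales with 1 / (remaining cross-sectional area).
for area_left in (1.00, 0.75, 0.50, 0.25):
    relative = 1.0 / area_left
    print(f"{area_left*100:3.0f}% of the cross-section intact -> "
          f"{relative:.1f}x the resistance and heat in that stretch")
```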


While I believe this is all sorts of fun to have a discussion on... the point was that the OP wanted to solder heavy-gauge wiring to address the heating failure... and the answer to their inquiry remains the same from both of us. This would not address the failure mode. Cool, I think we're on the same page, if not in the same paragraph.
 
And if I just solder two 8-pin connectors there it wouldn't work, right?
View attachment 402614
It would work just fine.
What I'm saying is that it wouldn't really be any safer than the existing 16-pin cable because the design flaw is in the power delivery and power monitoring circuitry of the GPU itself.
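For what it's worth, this is the sort of per-circuit supervision being asked for on the GPU side, sketched as toy Python. It is purely hypothetical - no shipping card's firmware is being described here - and the thresholds are just the figures quoted earlier in the thread:

```python
# Hypothetical per-circuit supervision on the load (GPU) side.
# Illustrates the "monitor and balance all six circuits" idea only.
SPEC_LIMIT = 8.33   # A per circuit (16-pin spec)
TRIP_LIMIT = 10.5   # A, per-pin rating quoted earlier

def check_circuits(per_circuit_amps):
    """Return ('ok' | 'throttle' | 'shutdown', index of the worst circuit)."""
    worst = max(range(len(per_circuit_amps)), key=lambda i: per_circuit_amps[i])
    amps = per_circuit_amps[worst]
    if amps > TRIP_LIMIT:
        return "shutdown", worst
    if amps > SPEC_LIMIT:
        return "throttle", worst
    return "ok", worst

print(check_circuits([8.1, 8.2, 8.0, 8.3, 8.2, 8.1]))   # ('ok', 3) - balanced
print(check_circuits([5.0, 5.1, 4.9, 5.0, 5.2, 24.0]))  # ('shutdown', 5) - one hog
```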
 
[nitpick mode on]

Just because, IMO, this topic has played out, I'll stray a little and point out that this is not "reverse engineering". Reverse engineering is, essentially, tearing apart something we don't already understand. That may be dismantling a piece of hardware (ET's flying saucer, or an adversary's missile guidance system), or decoding a competitor's software, or separating components in a secret pharmaceutical formula.

This is done to learn the science behind the object, learn how it works and how it was made - typically to recreate it then improve it, or learn how to defeat it.

We already know the science behind these connectors, how they work, and how they are made. We may want to re-engineer them, but no need to reverse engineer them.

[nitpick mode off]
 
But again, with a quality wire, 18awg is plenty big (again, see NOTE below) for these applications.
18AWG is fine. It's a sign of cost-cutting from the PSU manufacturer but it meets H+/H++ spec of 8.33A per wire.
For short distances, 18AWG is supposed to handle 10A, so there's room to spare.
NOTE: Before someone jumps in and claims 18awg is too small for today's monster graphics cards, that is wrong!!!! Why? Because the motherboard ALWAYS supplies 75W through the PCIe slot
Incorrect - and easy to disprove with modern GPUs, which have sensors that report slot power and connector power separately.
Here's a 5060Ti, for example - capped out by its power limit but only pulling 40W from the slot, which is typical of most GeForce 30, 40, 50 and RDNA 2, 3, 4 cards, IME:

[image attachment]


Realistically, AMD were the last I'm aware of to make the "full slot power" mistake, with the RX480: it had a 6-pin (good for 75W) and the slot (good for 75W) but was measured pulling 165W or so, of which 85W was coming from the slot. AMD issued a hasty driver update, and subsequent testing by reviewers using power-monitoring hardware showed that the update had dropped the actual TDP by ~10W and that around 65W was being pulled from the slot, which got them off the hook for any potential "you damaged my motherboard" lawsuits. That meant 85-90W was being pulled over a 6-pin cable, but nobody cared because this was the era of "dual 6-pin" sharing a single cable from the PSU, so the PSU cable was rated to 150W at a bare minimum anyway - and we all know that the exact same wire gauge and connector standard became "dual 6+2-pin", capable of 300W on a cable.

Historical anecdotes aside, you can probably ballpark 30-40W of 12V power from the GPU slot for any modern GPU that isn't slot powered. I don't know why that is, it's just what I've observed over hundreds of cards from the last half decade.
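Applying that observation to the 225W card discussed earlier (a rough sketch - the 35W slot figure is just the midpoint of the 30-40W range observed above, not a measurement of any specific card):

```python
# Rough split for a ~225 W card using the 30-40 W slot-power observation above.
CARD_POWER = 225.0   # W, the OP's stated target
SLOT_POWER = 35.0    # W, assumed midpoint of the observed 30-40 W range
RAIL = 12.0
CIRCUITS = 6         # 12V circuits in a 16-pin cable

connector_watts = CARD_POWER - SLOT_POWER
total_amps = connector_watts / RAIL
print(f"Connector load: {connector_watts:.0f} W -> {total_amps:.1f} A total, "
      f"{total_amps / CIRCUITS:.2f} A per circuit if shared evenly")
```

Which is why a 225W card has so much margin compared with a 575-600W one.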
 
1) Remove the 12VHPWR connector from the GPU's PCB
2) Take the '8-pin to 12VHPWR' adapter cable supplied by Nvidia and cut the 12VHPWR connector off
3) Solder the adapter cable to the GPU's PCB so that you now have only four 8-pin connectors for the GPU
* (optional) If you have a 12VHPWR connector on the PSU side and you want to bypass it, just cut that end of the cable off and solder it directly to the PSU's PCB.

The only real option. So sad graphics card makers have stupid warranty terms.

As in the first 4 or 5 posts: assuming the wiring is correct, I do not see any problems. I assume most output pins of the power supply unit are soldered to the same plane anyway.
Back to the roots - wires coming directly out of the power supply unit.
 
Incorrect - and easy to disprove with modern GPUs, which have sensors that report slot power and connector power separately.
No, I was correct, but I apparently didn't phrase it so all could understand. My bad.

What I should have said is the PCIe slot is always, as required by the standard, "capable" of supplying 75W of power. Now if the card demands that much through the slot, or not, is a different story - which your image illustrates.

That said, the main point I was making stands as is and is absolutely correct,
No single wire will ever be required to handle more current than any 18awg wire is capable of handling (assuming undamaged and manufactured to industry standards for 18awg).

18AWG is fine. It's a sign of cost-cutting from the PSU manufacturer

Not necessarily. You said so yourself, 18awg is fine. Cost-"cutting", to me, carries a negative connotation. "IF" the supply was capable of delivering more than 18awg could "safely" handle, and the manufacturer chose to cut corners and costs to make more profit, that would be bad - at least for the consumer.

But if the 18awg can safely handle what the supply was designed to deliver, then the cost savings is a win-win for both the maker and the consumer. Now, of course, this also assumes the user selected a supply big enough to support the load without choking (or burning up) too. So some of the onus is on the consumer as well. We must do our homework and buy a decent supply, from a reputable maker, capable of meeting demand.

Just looking on Amazon and Newegg at replacement cables for Corsair and Seasonic (OEM and upgrades). Most are 18awg. There are a few 16awg, not many.

But remember, this discussion is about the OP planning on using 14awg!
 
Another option would be to build motherboards around GPUs and slot in the CPU instead.
At 225W why is reverse engineering a 12VHPWR cable any consideration at all?

You know, beat a dead horse to death, then bury it, dig it up, beat the shit out of it some more.

On topic -

I'd keep the wire size and build a new connector with larger-diameter male/female pins. But this would mean rebuilding the male ends on the GPU itself.

But I can tell you a 5070/Ti will have no issues. My 220W 4070 Super doesn't have any issues. And the tens of actual reported meltdowns out of thousands and thousands of cards make this pure hobbyist fun only. Just to do it. And then you can say you are now smarter than the engineer that designed the connectors, lol. What a great thread. It's full of good laughs.
 
If you are going to splice together cables you may as well go one step further and put some fuses in there too. Then instead of the connectors melting you just have the fuses fail.
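If someone actually tried the per-wire fuse idea, the sizing logic would look something like this (a sketch only - standard blade-fuse values assumed, and it says nothing about whether inline fuses are practical in a GPU cable, or about how far above their rating real fuses actually blow):

```python
# Sketch of picking a per-circuit fuse: above the spec current so it
# doesn't nuisance-trip, at or below the pin rating so it opens first.
SPEC_CURRENT = 8.33          # A per circuit (16-pin spec)
PIN_RATING = 10.5            # A, pin rating quoted earlier in the thread
STANDARD_FUSES = [5, 7.5, 10, 15, 20]   # common blade-fuse ratings, A

candidates = [f for f in STANDARD_FUSES if SPEC_CURRENT < f <= PIN_RATING]
print("Per-circuit fuse ratings that fit the window:", candidates)   # [10]
```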
 
PCI-SIG. Not Nvidia.
PCI-SIG handle the cable and connector standard, but Nvidia are (ir)responsible for the card design and how it (fails to) manage the current draw from each of the 6 circuits the cable carries.

If you have a faulty appliance that keeps blowing fuses, do not blame the fuse. This much, I believe everyone understands. So here is the same exact reasoning applied to the infamous 12VHPWR connector:

If a cable with 12V circuits designed to handle 8.3A each keeps getting melted and it takes at least 16A or more to melt any given circuit, do not blame the cable. It is not the cable's fault that the device on the end of it asked for more than double the current it is supposed to.
 
PCI-SIG handle the cable and connector standard, but Nvidia are (ir)responsible for the card design and how it (fails to) manage the current draw from each of the 6 circuits the cable carries.
Then the card is not compliant and should be reported.

Or, people are dumb and don’t know how to plug things in.
 
Then the card is not compliant and should be reported.
That is correct, as far as I understand, and what I've been saying.

This is why I suggested there might be a class-action lawsuit against Nvidia over this particular design at some point. The tipping point to start most class-action lawsuits is usually when someone gets hurt, or the financial damage to someone becomes significant enough to incentivise their need for compensation/revenge.
 
That is correct, provable, and what I've been saying, yes.
This is why I'm guessing there might be a class-action lawsuit against Nvidia over this particular design at some point.
Yep, because thousands upon thousands of cards are having problems.

Funny - AMD is a PCI-SIG member; you would think they would say something - even through back channels - that there is a problem with Nvidia's implementation of the specification.

Or, it could be there’s not an issue and people are falling for internet sensationalism.
 