
Design analysis discussion of Nvidia-supplied 12VHPWR connectors

The hot-button (ha!) issue of the moment is melting 12VHPWR adapters provided with RTX 4090 graphics cards. Many of us (hi!) downplayed the trouble to begin with, chalking the furor up to new-launch hype and anti-NV sentiment. It turns out there is a real issue, and I'd like to discuss it specifically, and hopefully rationally. Igor's Lab has an excellent teardown of the connector and gives a top-level view of what the problems with it are. Please read that first.

I'd like to dive a bit further into why the connector is designed the way it is, and fill gaps in my own understanding, of which there are many. I don't have any credentials in electronics engineering, and am in fact an EE dropout. What I do have is a reasonable grasp of DC circuit fundamentals (P=I*V, P=I^2*R, etc.) and a strong analytical bent. Anyway, based on Igor's article, I drew a crude block diagram in MS Paint of how Nvidia's PCIe-to-12VHPWR connector is configured, and it doesn't make a lot of sense at first blush:

[Attached image: block diagram of the PCIe-to-12VHPWR adapter]


The questions that pose themselves include:
1) Why 4-14ga rather than 6-16ga, including two 1-into-2 nodes?
2) Why a monolithic contact array instead of individual contacts?

To the 14ga question, there are some conflicting pro/con factors. 4-14 can carry as much or more current than 6-16. The drawback is that it is thicker, and thus necessarily less flexible. One of my hypotheses is that the extra bending resistance encountered when routing cabling puts more strain on weak solder joints than 16ga would, increasing the risk of broken joints. The existence of only four 12V leads helps answer a question I didn't list above: why soldered rather than crimped? Because you can't crimp one wire onto two connectors, and soldering is faster and cheaper to implement. The other factor is that, given the adapter draws from four 8-pins, combining those four groups of three wires (twice, once for hot and once for ground) into six leads rather than four is unnecessarily complicated given that it's all going to one node anyway.

The contact array/block is honestly my biggest question. Something I originally typed up in this post but deleted was a point about the GPU not knowing that a connection had been lost when one or more 12V leads break, since all current is running through the same node. I deleted it due to a lack of knowledge of how things work on the GPU side. Igor posted a block diagram of the connector that indicates that all 12V leads run to the same node by design. Why would this be? If a graphics card isn't treating each 12V connection as an individual circuit but rather all together as a single node, what's the point of a 12-pin connector? It would be just (or almost) as space-efficient to implement a two- or four-pin connector with nice robust contacts. Two pins is maybe optimistic because it would require 50A capacity per contact, but 25A for a four-pin seems doable. If, by contrast, those separate circuits are meaningful, why run all the supply circuits through a single node?

This is meant to be a technical discussion. Do not flame, complain, troll, or hijack. Report button will be employed without hesitation or remorse.
 
The questions that pose themselves include:
1) Why 4-14ga rather than 6-16ga, including two 1-into-2 nodes?
2) Why a monolithic contact array instead of individual contacts?
I figure it's cost saving for both.

This is meant to be a technical discussion. Do not flame, complain, troll, or hijack. Report button will be employed without hesitation or remorse.
Yeah... you aren't going to get a lot of replies because of the fear of being reported just for having an opinion.
 
About the 14ga x4 vs 16ga x6, my opinion is that it's purely a cost-saving thing: less wire and less soldering required.
 
14 gauge is rated at 24 amps, 16 gauge at 15 amps; these are recommended max amps, any higher and the insulation melts. Size matters: insulation thickness is based on wire gauge. Also, the number of cores (strands) matters; more strands mean more flex in the wire without breaking a strand. That also means that when you bend a higher-strand-count wire, it should not hold the shape it was bent into, so to keep a shape you'd need to heat and cool it.

Four 14 gauge wires would carry 96 amps max, well over the 900 W (originally reported). Six 16 gauge wires carry 90 amps max, but run a higher risk of melting at over 900 W. As @ir_cow said, this would come down to cost.
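Just to sanity-check those figures, here's a quick back-of-the-envelope sketch in Python (the 24 A and 15 A per-wire values are the free-air ratings quoted above, taken at face value, not measurements):

```python
# Rough check of the wattage each wiring scheme could deliver at 12 V,
# using the free-air ampacity figures quoted in this thread (assumptions,
# not measured values): ~24 A for 14 AWG, ~15 A for 16 AWG.
SUPPLY_VOLTAGE = 12  # volts on the 12V rail

for label, wires, amps_per_wire in [("4 x 14 AWG", 4, 24), ("6 x 16 AWG", 6, 15)]:
    total_amps = wires * amps_per_wire
    total_watts = total_amps * SUPPLY_VOLTAGE
    print(f"{label}: {total_amps} A -> {total_watts} W")

# 4 x 14 AWG: 96 A -> 1152 W
# 6 x 16 AWG: 90 A -> 1080 W   (both comfortably above 600 W, and above 900 W)
```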

As for the contact block, I assume it's for distribution of incoming power. I'd even say there was maybe an optional design for a 6-wire input.
 
If a graphics card isn't treating each 12V connection as an individual circuit but all together as a single node, what's the point of a 12-pin connector?
You gain some redundancy. One or two pins may become bad over time, and I don't mean complete failure here, just an increased resistance. As a consequence, those pins carry less current and other pins take up more, and everything still works in the long term. The total resistance of all six parallel circuits remains low.

In the connectors discussed here, two or three pins lost contact completely, for the wrong reason: moderate (but maybe repeated) mechanical strain. What followed was a chain reaction of ever-increasing resistance and temperature. The pin(s) that apparently survived intact were those which lost contact first.
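For anyone who wants to see that current-sharing argument in numbers, here's a minimal sketch (Python; the 1 mΩ healthy-contact resistance and the fixed 600 W / 50 A draw are purely illustrative assumptions, not measurements):

```python
# Six 12V pins modelled as parallel contact resistances sharing a fixed
# 600 W / 50 A load. Resistance values are illustrative assumptions only.
def pin_currents(resistances_mohm, total_current=50.0):
    """Split a fixed total current across parallel contacts by conductance."""
    conductances = [1.0 / r for r in resistances_mohm]
    total_g = sum(conductances)
    return [total_current * g / total_g for g in conductances]

scenarios = {
    "all six pins healthy":      [1.0] * 6,
    "two pins at 5x resistance": [1.0, 1.0, 1.0, 1.0, 5.0, 5.0],
    "two pins lost entirely":    [1.0] * 4,
}
for label, pins in scenarios.items():
    print(label, [round(i, 1) for i in pin_currents(pins)])

# all six pins healthy:      [8.3, 8.3, 8.3, 8.3, 8.3, 8.3]
# two pins at 5x resistance: [11.4, 11.4, 11.4, 11.4, 2.3, 2.3]
# two pins lost entirely:    [12.5, 12.5, 12.5, 12.5]
```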

14 gauge is rated at 24 amps, 16 gauge at 15 amps
Is that for single wires in free air? Bunches of wires? In a sleeve/tube? For mains power installations, ratings depend somewhat on things like that (but I'm not familiar with any details). Anyway, it's not the wires and their insulation that melted here. In the worst case they made a small contribution to high temps in the connector.
 
Yeah... you aren't going to get a lot of replies because of the fear of being reported just for having an opinion.
All people have to do is deliver their opinion without doing those things, which frankly should apply across most threads. In this instance, it's his thread and his rules, and I welcome it; there's far too much sensationalist toxicity, complaining, hijacking and trolling going on all over these forums lately for my taste. Someone had to be told by the mods to stop those attitudes in the 4090 Owners club, for goodness' sake. Some serious axe-grinding going on.

Now, on topic.

My 2c so far about the adapter: Nvidia themselves didn't really think too hard about it (hindsight is 20/20). It's likely made by a 3rd party, not internally (I could obviously be very wrong on that, but it's my current feeling), and Nvidia gave them a spec to build to: 4x 8-pin to 1x 12VHPWR capable of 600 W. They ordered those in quantity, tested them when they received them, had no issues in testing, and rolled them out to customers. I'd wager they didn't dissect one or thoroughly evaluate its internals, just validated that it did the job they wanted when used the exact way they wanted. After all, they'd recently gone through a very similar exercise without significant issue.

Obviously with the ensuing fiasco come lessons learned, of which there will be multiple. I'm sure they're not at all happy that a $10 adapter is generating massive negative press for an otherwise blockbuster product.
 
Having solder joints on a connector meant for 600 W is unbelievable. Why didn't they crimp the wires to the connectors like in classic PCIe connectors?
 
Having solder joints on a connector meant for 600 W is unbelievable. Why didn't they crimp the wires to the connectors like in classic PCIe connectors?
Because it would mean the wires on the PSU side would be crammed together onto just one pin on the card, and there is not enough space to do that.
And it would need 6x 8-pins, too complicated; 3 of them take care of 3x 12V.
 
Yeah... you aren't going to get a lot of replies because of the fear of being reported just for having an opinion.

I'm willing to take that risk.

You gain some redundancy. One or two pins may become bad over time, and I don't mean complete failure here, just an increased resistance. As a consequence, those pins carry less current and other pins take up more, and everything still works in the long term. The total resistance of all six parallel circuits remains low.

In the connectors discussed here, two or three pins lost contact completely, for the wrong reason: moderate (but maybe repeated) mechanical strain. What followed was a chain reaction of ever-increasing resistance and temperature. The pin(s) that apparently survived intact were those which lost contact first.


Is that for single wires in free air? Bunches of wires? In a sleeve/tube? For mains power installations, ratings depend somewhat on things like that (but I'm not familiar with any details). Anyway, it's not the wires and their insulation that melted here. In the worst case they made a small contribution to high temps in the connector.

Let's presume two pins go bad. Your six hot terminals that handled (up to) 8.3A each have become four responsible for 12.5A each. Are they rated for that? Can't find that spec at the moment. (EDIT: Yes, just; see below.) I still wonder about the GPU side, though: on non-12VHPWR cards, are individual 12V circuits monitored for integrity or in some way treated individually, or are all 12V conductors joined to a single node basically right at input, much the way the Nvidia adapter does it?

Now, was it the pins that lost contact, or the solder connection failing? I'd interpreted it as the latter.

You're correct that it's not the wires that melted, but one of the questions I had was about 4x14 instead of 6x16. Charts differ on how many amps solid-core wire can handle, but agree that 7-to-24-strand wire is good for 10A and 7A for 14ga and 16ga respectively. That supports 480W and 504W. Neither of those numbers is 600W. I'm clearly missing something.
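Spelling out that arithmetic (a Python sketch; the 10 A and 7 A per-wire figures are the stranded-wire chart values mentioned above, not measurements):

```python
# Wattage supported by each wiring scheme at 12 V using the stranded-wire
# chart figures quoted above: ~10 A per 14 AWG wire, ~7 A per 16 AWG wire.
V = 12          # volts
TARGET_W = 600  # what the adapter is supposed to deliver

for label, wires, amps_per_wire in [("4 x 14 AWG", 4, 10), ("6 x 16 AWG", 6, 7)]:
    total_amps = wires * amps_per_wire
    print(f"{label}: {total_amps} A -> {total_amps * V} W (target {TARGET_W} W)")

# 4 x 14 AWG: 40 A -> 480 W (target 600 W)
# 6 x 16 AWG: 42 A -> 504 W (target 600 W)
```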

My 2c so far about the adapter: Nvidia themselves didn't really think too hard about it (hindsight is 20/20). It's likely made by a 3rd party, not internally (I could obviously be very wrong on that, but it's my current feeling), and Nvidia gave them a spec to build to: 4x 8-pin to 1x 12VHPWR capable of 600 W. They ordered those in quantity, tested them when they received them, had no issues in testing, and rolled them out to customers. I'd wager they didn't dissect one or thoroughly evaluate its internals, just validated that it did the job they wanted when used the exact way they wanted. After all, they'd recently gone through a very similar exercise without significant issue.

You're probably correct on all of this. The chances these were made in-house are as close to zero as makes no nevermind.

Because it would mean the wires on the PSU side would be crammed together onto just one pin on the card, and there is not enough space to do that.
And it would need 6x 8-pins, too complicated; 3 of them take care of 3x 12V.

The PSU side doesn't seem like it would have anything to do with it. The adapter connects to four 8-pin leads, each with three 12V pairs. For each 8-pin, those three pairs are combined into a single 14ga 12V pair. These four pairs are then merged at the 12VHPWR connector. But there are two problems. First, the 12VHPWR is 12-pin and you're trying to connect 8 conductors; you can either split them back out into 12 outside the connector, or do what the mfr did and feed them in as-is. Second, the connector is based (I believe) on the Molex Micro-Fit+ system, which employs crimp connectors and only supports up to 16ga wire. Even if you wanted to, you couldn't feed 14ga into the system as designed. The system also treats all conductors as individual circuits; the type of block connector that NV is using isn't in their catalog AFAICT, nor are solder terminals, so it looks to be something custom-designed.

EDIT: Micro-Fit+ maxes out at a rated 13A per connection. 6 * 13A = 78A, 78A * 12V = 936W, so the system hypothetically has headroom.
 
Here is a possible solution: solder 2 leads onto 3 pads, which should make it harder to break off the outlier pad.

[Attached image: sketch of the proposed solder layout]
 
Is that for single wires in free air?
Yes.

Anyway, it's not the wires and their insulation that melted here. In the worst case they made a small contribution to high temps in the connector.
I imagine if this were a single-wire supply there wouldn't be such an issue with the connector.
 
The problem with Nvidia's adapter is that all 12V inputs from the separate 8-pin adapters get merged into a single bus bar which is made of woefully inadequate material, amounting to nothing more than some thin foil as a base for the solder pad.

The solution is to just beef up this thin foil bus bar into a substantial piece of steel, and then the cracking/weak/overheating connections go away.
 
The problem with Nvidia's adapter is that all 12V inputs from the separate 8-pin adapters get merged into a single bus bar which is made of woefully inadequate material, amounting to nothing more than some thin foil as a base for the solder pad.

The solution is to just beef up this thin foil bus bar into a substantial piece of steel, and then the cracking/weak/overheating connections go away.

Bus bar. That's the jargon term that's been eluding me this whole time.

Even if the bus bar is beefed up and all the connections solid, is that an acceptable solution? I so far haven't gotten/found an answer to circuit topology on the GPU side. Does a GPU always see the aggregate of the 12V leads as a single source, or if the contacts were true discrete connections (at the connector level, anyway; they obv. all merge back at the PSU), is there any sort of integrity monitoring per connection?

Maybe I'm the only person that's this interested in these questions...
 
Bus bar. That's the jargon term that's been eluding me this whole time.

Even if the bus bar is beefed up and all the connections solid, is that an acceptable solution? I so far haven't gotten/found an answer to circuit topology on the GPU side. Does a GPU always see the aggregate of the 12V leads as a single source, or if the contacts were true discrete connections (at the connector level, anyway; they obv. all merge back at the PSU), is there any sort of integrity monitoring per connection?

Maybe I'm the only person that's this interested in these questions...
Yes, Igor's Lab basically looked at the foil solder-pad bar and said that the ends the outside pins are connected to are snapping internally under the strain of bends.

By itself, that "bus bar" breaking changes nothing about the poor design of that foil-based common rail, but the problem is that the end pins on the HPWR connector take a whole 150W (12.5A) load from one of the 8-pin cables all by themselves. Rather than 100W per pin with the 6 pins taking 600W between them, it's now dealing with 150W for those snapped-off, isolated end pins. 12.5A is too much for a thin piece of foil that has already been damaged enough to snap away from the rest of the solder pad. That 13A rating is valid for the steel pins of the microfit+ connector. There's no way in hell the thin little piece of foil is going to handle 12.5A at sensible temperatures, and clearly it's getting hot enough to melt the plastic. I'm not particularly pleased that the foil pad is being asked to carry 10A on average even when the adapter is undamaged.
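In rough numbers (a quick sketch, assuming the snapped-off end pin really is left carrying the full 150 W from its 8-pin on its own):

```python
# Per-pin current in the intact case (six pins sharing 600 W) versus a
# snapped-off end pin left to carry one 8-pin's 150 W on its own.
V = 12  # volts

shared = 600 / 6 / V   # ~8.3 A per pin with the bus bar intact
isolated = 150 / V     # 12.5 A through an isolated end pin and its foil stub

print(f"intact bus bar:   {shared:.1f} A per pin")
print(f"isolated end pin: {isolated:.1f} A")
```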

Distributing the four incoming 8-pins more evenly across the six pins of the HPWR connector is one fix, but a better fix is to solder to something more substantial than foil. Honestly, there's no place for foil in a connector with this much current going through it, so that's the idiotic decision that needs to be rectified, IMO.
 
It never even occurred to me to use the adapter included with any card; just buy a compatible power supply or modular cable.
And on the subject of a single 12V node: yes, it has become known that the card only sees one 12V source, to the extent that the PCIe slot is not even connected as a source at all. Imagine the disaster if it did.
 
Overall, I feel it is just a bad design that comes from a place of cost savings as opposed to reliability. As someone else noted, for a $10 adapter to cause this seems like a really bad case of "falling through the cracks" on a top-end product.

If you look at the 30 Series plug, it is one-to-one as far as pins to ground: six 12V pins to six GND pins. There is no bus bar splitting off an input. They should have kept that same plug for this card; don't fix what isn't broken. In this case you have a 12-pin connector being fed by 32 pins of input. If you look at a proper 12VHPWR cable, all four sense pins are used and not just two. Looking at Igor's diagram, if done properly and without cut corners, the far-left two pins of 12V and GND would talk to the sense IC and communicate with the card and power supply. Again, the power supply needs to have the proper configuration for this communication to take place.

Having done this, you would have 24 pins going into the 12-pin connector and the other 8 pins going into the 4-pin sense connector on top.

Remember, you don't have the sense pins on the 30 series cards and adapters. I would say most power supplies on the market currently don't meet the PCIe 5.0 spec.

This reminds me of when PCIe first came about and there were those two-Molex-to-one-6-pin adapters. I had issues with Molex pins being pushed out or sometimes just falling out of the connector. I would never trust a cheap adapter, let alone on a top-end product. I would consult my power supply manufacturer for the appropriate cable and avoid the adapter at all costs.
 
The problem with Nvidia's adapter is that all 12V inputs from the separate 8-pin adapters get merged into a single bus bar which is made of woefully inadequate material, amounting to nothing more than some thin foil as a base for the solder pad.

The solution is to just beef up this thin foil bus bar into a substantial piece of steel, and then the cracking/weak/overheating connections go away.
It should never have been a bus bar in the first place; that's ridiculous and stupid.

And definitely, absolutely, positively dangerous, even with 1000x the material.
 
Has anyone... made their own cable??? :rockout::fear:
 
In short: this adapter is ugly and a fire hazard from what I have seen so far. I value my hardware, life and home, so I am not going to use this garbage for much longer.

Solution for me, at least: I just ordered a replacement cable from Cablemods.

End of this sad story for my part.

Has anyone... made their own cable??? :rockout::fear:
Nope, not made my own, but I just ordered one from Cablemods. I don't trust this adapter one bit.
 
Coming soon: RGB fans on the cable for cooling...


I'll wait for the water cooled version. :laugh:
 
It appears it's not just nVidia cables causing this: I've just spotted a post on the AnandTech forums where a second PSU with non-nVidia cables had the problem.

It always seems to be the outer-left pins (looking at the GPU connector), which are always the ones under the most tension from the natural cable bend towards the cable-management channels at the front of almost any PC case.

Either these connectors simply CANNOT handle the common stresses and bends of modern PCs because they're too small to hold the pins straight, or there's another problem on the GPU side - perhaps the connection from the socket to the PCB has uneven-length traces, so one set of pins in the socket has lower resistance and more current flows through that side?
 
When I raised the topic organically last month he downplayed the whole thing though. Even the experts are clueless!
Scientist revised his stance in light of new evidence; must be clueless! /s

It probably didn't help his opinion that WCCF was drama-farming by outright fabricating parts of their stories on the matter.
 