
PC Enthusiast's Next Stop is... 12VHPWR Power Connector with Active Fan Cooling?

Here we go, inventing something that shouldn't exist in the first place if someone had done their job right... Not to mention it costs money, too...
 
Imagine a 12VHPWR connector that, instead of a fan and an OLED, had a shunt per cable...

Given the high technology they are using, even a PTC X_D
 
I mean, if I owned a 5090 and this was a foolproof fix, $30 is a small price to pay for the security of a $2k card. 'Tis sad to see nonetheless.

The problem is, how do you monitor the PSU side?
 
Whilst this stupid thing with a fan to cool the connector is a farce, what would be useful is a 12V-2x6 adapter with six shunt resistors to measure the current in each wire pair. The cost to manufacture would be very, very low.

It could link via a 2-wire cable to a similar adapter placed between the ATX 24-pin power connector and the motherboard, with the sole functionality of opening the circuit on pin 16 (PS_ON#) to immediately shut off the PSU if the current in any wire pair exceeds 12A or something like that.
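For illustration, a minimal sketch of the trip logic such an adapter could run, assuming a small microcontroller with an ADC per shunt (read_shunt_amps and open_ps_on are hypothetical stand-ins for the real hardware):

```python
import time

TRIP_AMPS = 12.0   # per-wire-pair shutdown threshold suggested above
NUM_PAIRS = 6      # a 12V-2x6 cable carries current over six 12V wire pairs

def monitor(read_shunt_amps, open_ps_on):
    """Poll each shunt and kill the PSU if any wire pair runs away."""
    while True:
        for pair in range(NUM_PAIRS):
            if read_shunt_amps(pair) > TRIP_AMPS:
                # Opening the circuit on pin 16 of the ATX 24-pin (PS_ON#)
                # tells the PSU to shut down immediately.
                open_ps_on()
                return
        time.sleep(0.001)  # re-check every millisecond
```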

Solve the problem at the source, don't engage in a futile fight against the symptoms!
 
wow. how stupid. i thought this was a joke at first.
 
I would rather have a fire extinguisher
 
If the connector is the real problem, and the reason that particular connector is used is space constraints on the card, why not have a pigtail off the card to a large, heavy-duty connector that will not melt?
 
All just because the industry isn't willing to recall and replace the flawed 12VHPWR cables with the newer 12V-2x6 cables.
What little trust remained in this cable is completely gone, but that was only a symptom of the issue for the 5000 series.
It's the rail configuration on a densely packed card. I went from a single-rail PSU to a multi-rail one, just to have the option to run it.
Then the card takes whatever jacked input comes through 12VHPWR and condenses it back into an unchecked single rail.
What are we doing, Nvidia? Stop that.
 
If the connector is the real problem, and the reason that particular connector is used is space constraints on the card, why not have a pigtail off the card to a large, heavy-duty connector that will not melt?

Warranty. My AMD PowerColor card's RMA rules prohibit any modification.

A new card should not need a "ghetto mod" or a bad fix. It should work out of the box.
 
What little trust remained in this cable is completely gone
Yes, because the industry has failed to recall all the flawed cables. There is no trust any longer.

... but that was only a symptom of the issue for the 5000 series.
It's the rail configuration on a densely packed card. I went from a single-rail PSU to a multi-rail one, just to have the option to run it.
Then the card takes whatever jacked input comes through 12VHPWR and condenses it back into an unchecked single rail.
What are we doing, Nvidia? Stop that.
That's a nice-to-have protection, but it is itself only treating a symptom. If the cables don't work, the protection only serves to highlight the failure (and prevent damage, too). It doesn't fix anything. The loss of trust is now becoming the big issue.

If the connector is the real problem, and the reason that particular connector is used is space constraints on the card, why not have a pigtail off the card to a large, heavy-duty connector that will not melt?
No need. The corrected 12V-2x6 cables fixed the issue back in 2023 - https://www.techpowerup.com/314066/...-to-handle-full-load-while-partially-inserted

The problem now is there is a pile of flawed 12VHPWR cables still floating around. These older flawed cables have to be recalled to regain trust.
 
Only time will tell if those "corrected" cables have actually solved the issue. I had the impression that some Nvidia-based graphics card printed circuit boards have a design flaw, and that it also involves the connector, the cables, and the connector on the power supply unit side.

I would not want to bet money on that statement: "the corrected cables ... fixed that issue"

I also wonder how you can make the bold statement that only the old cables are affected. No one has access to all the bad hardware and issues; without data, I would not make such statements. Or do you trust only those YouTubers, like the boss of Thermal Grizzly, and some random, heavily censored Reddit page? Those 5000-series Nvidia graphics cards are only on the market in small numbers. Very, very, very small numbers in comparison with other series.
 
True, I haven't proved my position beyond doubt. I've gained more confidence as each case, one by one, fits my position. But even if I'm incorrectly assigning the root cause (a badly designed spring contact inside the plugs, presumably from one plug manufacturer), there clearly is a problem with at least some 12VHPWR plugs. It can't really be anything else.

The thing I'm most worried about is the problem could exist in 12V-2x6 plugs too. I dearly hope that hasn't happened.

Those 5000-series Nvidia graphics cards are only on the market in small numbers. Very, very, very small numbers in comparison with other series.
Less power means lower-quality pins are still enough to keep a card running. There are possibly a lot of flawed plugs that aren't showing any problem for the moment. Those could be ticking time bombs.
 
It's amazing the lengths everyone is willing to go to with excuses before flat-out calling this crap a failure.

Of course this is because Ngreedia is untouchable, because if it was AMD pushing this sh!t, it would be banned and removed immediately.
 
Need to change the name of TechPowerUp to something more modern, like "TechGoBoom" or "WeJustWantYourMoney".

Disclaimer

TPU are legendary, THIS IS HUMOUR.
 
All just because the industry isn't willing to recall and replace the flawed 12VHPWR cables with the newer 12V-2x6 cables.

12V-2X6 is only a very minor improvement over 12VHPWR. The fundamental issue is that Nvidia shrunk the connector, reducing its current-carrying capacity, and then doubled the current from 4.2A to 8.3A because they could blame Intel and wash their hands of any responsibility.

If we're giving standards their safety ratings based on safety margins:
  • 12VHPWR = 1/10 (8.3A through a connector designed only for 9.5A with teeny tiny pins known to wiggle in their housing)
  • 12V-2x6 = 1.5/10 (8.3A through a slightly longer pin that hopefully mates better but still has the same 9.5A limit)
  • EPS 12V = 6/10 (8A through a chonky pin designed for 13A, lots of mating surface over a nice long pin using thicker steel)
  • PCIe 8-pin (or 6+2pin) = 9/10 (Same as EPS 12V but the standard only asks for 4.2A per 13A pin so there's a huge 200% safety margin compared to the 14% of Nvidia's connector)
In theory, 12V-2x6 is okay, as long as the graphics card has a way to monitor current per wire and cut power draw if it exceeds a safety threshold, but Nvidia's board designs don't even have that basic safety monitoring anymore, so they're at risk of vastly exceeding the already pathetic 14% safety margin. A PCIe 8-pin, with its 200% safety margin, can have a fault in two of its three 12V wire pairs and STILL be within spec at safe temperatures.
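As a quick sanity check of those margins, treating headroom as pin rating over actual per-pin current, minus one:

```python
# Safety margin per pin, using the ratings and currents listed above.
connectors = {
    "12VHPWR / 12V-2x6": (9.5, 8.3),   # (pin rating A, actual A)
    "EPS 12V":           (13.0, 8.0),
    "PCIe 8-pin":        (13.0, 4.2),
}

for name, (rating, actual) in connectors.items():
    print(f"{name}: {rating / actual - 1:.0%} headroom")
# -> roughly 14%, 63%, and 210% respectively
```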

The optimum solution to safely deliver more power than a PCIe 8-pin would be a 12-pin Mini-Fit Jr connector with 75W per pair (6.25A) and a MANDATORY requirement for at least three independent shunt resistors on the board, effectively meaning that even in a worst-case scenario, a wire would only be carrying 150W, which is 12.5A and close to the pin's 13A limit. Yeah, it would only provide 450W, but holy crap, that's already too much power for a single connector soldered to a single-copper-layer PCB trace. Use two connectors across two traces for the (ridiculous) 5090.
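Rough numbers behind that proposal, assuming 12V and three shunt groups of two wire pairs each:

```python
VOLTS = 12.0
PAIRS = 6                  # a 12-pin Mini-Fit Jr = six 12V/GND wire pairs
WATTS_PER_PAIR = 75.0      # proposed budget, i.e. 6.25 A nominal per pin

total_watts = PAIRS * WATTS_PER_PAIR            # 450 W per connector
nominal_amps = WATTS_PER_PAIR / VOLTS           # 6.25 A

# Worst case with three shunts: one shunt group spans two pairs, and a
# fault dumps the whole group's 150 W onto a single wire.
worst_case_amps = 2 * WATTS_PER_PAIR / VOLTS    # 12.5 A, within the 13 A rating
print(total_watts, nominal_amps, worst_case_amps)
```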
 
Whilst this stupid thing with a fan to cool the connector is a farce, what would be useful is a 12V-2x6 adapter with six shunt resistors to measure the current in each wire pair. The cost to manufacture would be very, very low.

It could link via a 2-wire cable to a similar adapter placed between the ATX 24-pin power connector and the motherboard, with the sole functionality of opening the circuit on pin 16 (PS_ON#) to immediately shut off the PSU if the current in any wire pair exceeds 12A or something like that.

Solve the problem at the source, don't engage in a futile fight against the symptoms!
Really? Now the PSU makers have to solve a problem that is not caused by the PSU?
I do not think so. My 1600-watt PSU can manage a huge load, so a 5xxx isn't even able to make its fan spin a second faster than it does now, which is whirring slowly every 15 minutes.
The fault lies with an unevenly distributed current flowing over one of the cables, for some reason.
Now we have to wait and see if someone can find what is causing this, as the other cables clearly carried almost nothing.
It is not even certain that the cable itself was flawed, so again, it is way too soon to start looking for a culprit that is not known yet.
 
Yes, let's add more possible failure points to an already questionable connection. Makes sense...
 
The fault lies with an unevenly distributed current flowing over one of the cables, for some reason.
Not "some reason"

If you have a single-rail source like modern PSUs, and a single-rail load like the stupid 5090/5080 FE design, then the total current is split across all of the individual wires in inverse proportion to their resistance. If one connection degrades (invisible strand damage, a poor crimp, a worn contact), its resistance rises and its share of the current shifts onto the remaining good wires. But the total current drawn is locked by the single rail of the GPU connector, so those remaining wires now carry MORE than their rated share, they get hot, and the heat degrades their contacts in turn.

It's a vicious circle of heat degrading connections, pushing ever more current through ever fewer wires and making them hotter still.
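A toy current-divider example of that imbalance, with purely illustrative resistances: six parallel wires sharing a fixed ~50A load, one of them with a bad crimp at ten times the resistance of a healthy path:

```python
# Fixed total current (single rail) splits across parallel wires in
# inverse proportion to their resistance. Values are illustrative only.
TOTAL_AMPS = 50.0                      # ~600 W at 12 V
resistances = [0.01] * 5 + [0.10]      # five healthy wires, one bad crimp

conductances = [1 / r for r in resistances]
total_g = sum(conductances)
currents = [TOTAL_AMPS * g / total_g for g in conductances]
print(", ".join(f"{a:.1f} A" for a in currents))
# -> ~9.8 A in each healthy wire, ~1.0 A in the damaged one: the healthy
#    pins now run above their 9.5 A rating, and nothing on the card notices.
```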

The solution is shunt resistors to monitor current across individual wires, or at least groups of wires, so that if any one wire starts to succumb to this thermal runaway, the GPU can pull less current from that particular wire or group of wires to restore the balance of current across all of the wires from the PSU.
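In code terms, the kind of check a shunt-monitored board could run (hypothetical helper names; a sketch under those assumptions, not any vendor's actual firmware):

```python
SAFE_AMPS_PER_GROUP = 19.0   # e.g. two 9.5 A-rated pins per shunt group

def rebalance(read_group_amps, reduce_power_limit, groups=3):
    """Throttle the card if any shunt group carries too much current."""
    readings = [read_group_amps(g) for g in range(groups)]
    if max(readings) > SAFE_AMPS_PER_GROUP:
        # Pull the whole card's power limit down rather than letting one
        # group of wires run away thermally.
        reduce_power_limit()
    return readings
```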
 
The linked article from 2023 demonstrated a 12V-2x6 having no problems running above its max rating while not fully mated. Here it is again - https://www.techpowerup.com/314066/...-to-handle-full-load-while-partially-inserted
I'm struggling to see why that isn't being paid attention to.
So even the dubious 12VHPWR is fine if all 12 pins are making contact and all cables are physically undamaged. At 100W per pair, that's 8.3A per pin, and the pins are rated for 9.5A. Even if the connector is not fully mated, there should at least be contact.

Where connectors are melting and thermal imaging shows hot cables (der8auer, for example), that's because both the PSU and the load are 'dumb' designs that aren't balanced, and you can get the vicious circle I mentioned in my last post. All it takes is some invisible damage to the internal wire strands under the sheathing, or a bad crimp job on one of the Amphenol connector pins in the cable's plug: that connection's resistance rises, its current shifts onto the remaining wires, and those overloaded wires overheat. The problem is self-feeding because it's a positive feedback loop, and countering it requires shunt resistors and monitoring, not a single dumb rail with zero monitoring.
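To put numbers on why concentrated current melts plastic, assume (purely illustratively) 10 mΩ of contact resistance at one pin; dissipation scales with the square of the current:

```python
# P = I^2 * R at a single contact; 10 mΩ is an assumed, degraded value.
R_CONTACT = 0.010  # ohms

for amps in (8.3, 20.0):
    print(f"{amps:>4} A -> {amps**2 * R_CONTACT:.1f} W at the contact")
# 8.3 A -> ~0.7 W, 20 A -> 4.0 W: a few watts concentrated in a tiny
# plastic housing with no airflow is enough to soften and melt it.
```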
 
12V-2X6 is only a very minor improvement over 12VHPWR. The fundamental issue is that Nvidia shrunk the connector, reducing its current-carrying capacity, and then doubled the current from 4.2A to 8.3A because they could blame Intel and wash their hands of any responsibility.

If we're giving standards their safety ratings based on safety margins:
  • 12VHPWR = 1/10 (8.3A through a connector designed only for 9.5A with teeny tiny pins known to wiggle in their housing)
  • 12V-2x6 = 1.5/10 (8.3A through a slightly longer pin that hopefully mates better but still has the same 9.5A limit)
  • EPS 12V = 6/10 (8A through a chonky pin designed for 13A, lots of mating surface over a nice long pin using thicker steel)
  • PCIe 8-pin (or 6+2pin) = 9/10 (Same as EPS 12V but the standard only asks for 4.2A per 13A pin so there's a huge 200% safety margin compared to the 14% of Nvidia's connector)
In theory, 12V-2x6 is okay, as long as the graphics card has a way to monitor current per wire and cut power draw if it exceeds a safety threshold, but Nvidia's board designs don't even have that basic safety monitoring anymore, so they're at risk of vastly exceeding the already pathetic 14% safety margin. A PCIe 8-pin, with its 200% safety margin, can have a fault in two of its three 12V wire pairs and STILL be within spec at safe temperatures.

The optimum solution to safely deliver more power than a PCIe 8-pin would be a 12-pin Mini-Fit Jr connector with 75W per pair (6.25A) and a MANDATORY requirement for at least three independent shunt resistors on the board, effectively meaning that even in a worst-case scenario, a wire would only be carrying 150W, which is 12.5A and close to the pin's 13A limit. Yeah, it would only provide 450W, but holy crap, that's already too much power for a single connector soldered to a single-copper-layer PCB trace. Use two connectors across two traces for the (ridiculous) 5090.

How many of the new PSUs have multiple 12VHPWR connectors? My guess is Nvidia considers it worse to jump to two connectors, as PSUs would need redesigning again so soon. One of those moments where someone goes all-in on a bad decision.
 
How many of the new PSUs have multiple 12VHPWR connectors? My guess is Nvidia considers it worse to jump to two connectors, as PSUs would need redesigning again so soon. One of those moments where someone goes all-in on a bad decision.
Pass. Why would anyone except the few hundred 5090 owners even care? Nvidia includes adapters in the box, so I don't think anything would change if the connector were changed. New adapter(s) would be included in the new boxes if there were ever a change in the connector standard.

600W for a single GPU is horrifyingly ridiculous. This is a world that's trending towards laptops and tablets. Even in the datacenter, performance/Watt is the key motive for purchases, and 99.9x% of people do not want a 600W GPU.
 