
It's happening again, melting 12v high pwr connectors

Status
Not open for further replies.
Problem solved :roll:
 

Attachments

  • N6HSTIy.jpeg (33.9 KB)
Another 3rd party cable, this one from MOD-DIY. The most likely cause seems to be that the ends were crimped instead of soldered: high resistance on 2 pins caused the others to exceed 9 A.

Once again, don't buy 3rd party cables...
Have you even followed the news?

No amount of cable can fix the shitty board design NVIDIA came up with. If 600 watts can flow through 1 pin because of the board design, you fucked up badly. NVIDIA did this knowingly and they simply don't give a fuck.

If you want to buy a new GPU, it's best to dig through Buildzoid's YT channel to find information about the board design of the card you want. For example, the Sapphire 9070 XT is one to avoid as well, because they don't give a shit about balancing either.

In this video, Buildzoid explains how simple balancing actually is.

 
Another 3rd party cable, this one from MOD-DIY. The most likely cause seems to be that the ends were crimped instead of soldered: high resistance on 2 pins caused the others to exceed 9 A.

Once again, don't buy 3rd party cables...
I heard from some other very smart people on this forum that Roman (Der8auer) is a lying scumbag salesman.


What are you talking about? Show me the connector spec.

[image: 1746427850253.png]

[image: 1746427895259.png]


Also, the spec is behind a paywall on PCI-SIG.

 
I get ir_cow's point: it was a cable that isn't authorised by Nvidia.
However, on the 4000 series the supplied cables were awful: a 3 cm straight run required before any bend, yet the cables were stiff rather than flexible, plus tall GPUs with the connector coming out at 90 degrees. Brain-dead design.
This leaves people with a few hard choices: replace the cable with something more suited to the case, replace the case and rebuild the system, or ignore the issue and hope bending the Nvidia cable close to the connector causes no problems.
I'm also still of the opinion that the connector is a flawed design, so even though blame could be attributed to 3rd party cables because they aren't manufactured exactly as Nvidia intended, that doesn't absolve the flawed connector.

There is also the issue where Nvidia decided to require three 8-pins on their cable for a 320 W TDP card, while they were happy enough with two on the 3000 series. That leaves owners of PSUs with only two 8-pin cables in a tough position as well. In theory, connecting only two would cap the power at 400 W, but in practice I believe it prevents the card from posting (this happened to me, and to others who reported the same). Then it's a choice of either pigtailing a cable, meaning a 66/33 load split, or using a 3rd party cable to remedy it, or a new PSU.

Nvidia put their customers in a position of having to pick from a number of bad options.

The whole saga could be avoided by simply keeping the perfectly working old connector.
 
Have you even followed the news?

No amount of cable can fix the shitty board design NVIDIA came up with. If 600 watts can flow through 1 pin because of the board design, you fucked up badly. NVIDIA did this knowingly and they simply don't give a fuck.
This is what happens when people who don't understand even an inkling of power engineering start believing YouTube videos. If the supply cable is out of spec, there are still countless failure modes where per-pin current sensing will register no problem, yet the cable will still overheat and fail. It wouldn't even fully solve this failure mode. In a lab setting, I've used power cables that supply 250,000+ watts -- with no per-pin sensing. But the cables were in spec.

By the way, did you know that Seattle is having an outright epidemic of automobile windshield pitting? Apparently all the H-Bombs we've tested have caused unacceptable radiation levels in the area:

 
For safety: 600 W / 12 VDC = 50 A.
50 A / 6 12V pins = 8.33 A per pin; round that up to 9 or 9.5 A. By wire gauge that would be 14 AWG at the largest per conductor, 16 AWG minimum.

The square contact (female conductor) would need its surface area and thickness worked out to support 9.5 A safely, same with the square pin (which might just be 14 or 16 AWG), for 24/7/365 use.

Of course there also needs to be current monitoring and regulation on cards and PSUs so the cards cannot draw more than 50 A in total or 8.33 A on each pin.
When some load (electrical, mechanical, whatever) is distributed across several equal parts, you can't assume perfect balancing of that load. If six parallel wires and parallel contacts are designed to carry 50 A together, each of them should be designed for... what, 12 A? 14 A? I don't know what a reasonable safety factor would be, but surely it's significantly more than 1.
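For anyone who wants to check the arithmetic in the posts above, here's a quick back-of-the-envelope sketch. The 600 W draw, 12 V rail, and 9.5 A per-pin rating are the figures quoted in this thread, and the "margin" it prints assumes perfectly balanced pins, which is exactly the assumption being questioned:

```python
# Back-of-the-envelope check of the per-pin figures discussed above.
watts = 600.0   # sustained draw over the connector
volts = 12.0    # supply rail
pins = 6        # 12V current-carrying pins

total_current = watts / volts   # 50 A total
per_pin = total_current / pins  # ~8.33 A per pin, assuming perfect balance

# With a 9.5 A per-pin rating, the headroom over the balanced load:
rating = 9.5
headroom = rating / per_pin     # ~1.14, i.e. only ~14% margin

print(f"total: {total_current:.1f} A, per pin: {per_pin:.2f} A, margin: {headroom:.2f}x")
```

That ~1.14x figure is what people in this thread mean when they say the margin is thin compared to the old 6/8-pin connectors.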
 
This is what happens when people who don't understand even an inkling of power engineering start believing YouTube videos. If the supply cable is out of spec, there are still countless failure modes where per-pin current sensing will register no problem, yet the cable will still overheat and fail. It wouldn't even fully solve this failure mode. In a lab setting, I've used power cables that supply 250,000+ watts -- with no per-pin sensing. But the cables were in spec.

By the way, did you know that Seattle is having an outright epidemic of automobile windshield pitting? Apparently all the H-Bombs we've tested have caused unacceptable radiation levels in the area:

"The Seattle windshield pitting epidemic is a phenomenon which affected Bellingham, Seattle, and other communities of Washington state in April 1954; it is considered an example of mass panic."

"Although natural windshield pitting had been going on for some time, it was only when the media called public attention to it that people actually looked at their windshields and saw damage they had never noticed before."

I can assure you, people were not en masse ignoring melting cables before today.
 
Watch the video above my post.
What I meant is that it is a crimped-on connector. It's designed to be crimped. It's supposed to be crimped. They don't make them in solder cup style.
 
Have you even followed the news?

No amount of cable can fix the shitty board design NVIDIA came up with. If 600 watts can flow through 1 pin because of the board design, you fucked up badly. NVIDIA did this knowingly and they simply don't give a fuck.
Exactly. Many people here think that a 300 W card is perfectly fine with the 12V-2x6 connector because, well, 300 W is not 500+ W as seen with the RTX 5090.
A 300 W card will have smaller overall current flowing through the connector than a 500+ W card.
But that says nothing about the fact that even on a 300 W card, more than 9.3 amps can go through 1 out of the 6 pins in that shitty connector.
300 W requires 25 A at 12 V. That can be 4.16 A per pin, or it can be 14 A + 2 A + 0.7 A + 7 A + 1.3 A + 0 A. Current takes the path of least resistance, so...

When some load (electrical, mechanical, whatever) is distributed across several equal parts, you can't assume perfect balancing of that load. If six parallel wires and parallel contacts are designed to carry 50 A together, each of them should be designed for... what, 12 A? 14 A? I don't know what a reasonable safety factor would be, but surely it's significantly more than 1.
SF should be at least 1.3. 1.5 would be optimal.
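The "path of least resistance" point can be made concrete with a toy model. The contact resistances below are made-up illustrative values, not measurements: six parallel contacts share 25 A (a 300 W / 12 V card), one contact happens to be much lower resistance than the rest, and since parallel paths split current in proportion to conductance, that one pin hogs the load while heat scales as I²R:

```python
# Toy model: 25 A (300 W at 12 V) shared across six parallel contacts.
# Resistances in ohms -- made-up values; one contact is much better
# than the others, so it draws a disproportionate share.
r = [0.005, 0.030, 0.030, 0.030, 0.030, 0.030]
total_current = 25.0

# Parallel paths split current in proportion to conductance (1/R).
g = [1.0 / ri for ri in r]
g_sum = sum(g)
currents = [total_current * gi / g_sum for gi in g]
heat = [i * i * ri for i, ri in zip(currents, r)]  # watts per contact, P = I^2 R

for n, (amps, w) in enumerate(zip(currents, heat)):
    print(f"pin {n}: {amps:5.2f} A, {w:5.2f} W")
```

With these numbers the low-resistance pin ends up carrying about 13.6 A, well over the 9.5 A rating, even though the card only draws 300 W in total.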
 
"The Seattle windshield pitting epidemic is a phenomenon which affected Bellingham, Seattle, and other communities of Washington state in April 1954; it is considered an example of mass panic."

"Although natural windshield pitting had been going on for some time, it was only when the media called public attention to it that people actually looked at their windshields and saw damage they had never noticed before."

I can assure you, people were not en masse ignoring melting cables before today.
Because -- either today or before it -- there has been no "en masse" melting. It is, like the SWPE described above, yet another example of mass hysteria, with the proximal cause in both cases being a media campaign aimed at the credulous.

SF should be at least 1.3. 1.5 would be optimal.
If a pin is rated for 12A, it doesn't mean it fails if it carries 12.1A. Any reputable manufacturer designs in (and tests for) a degree of safety factor. What we're seeing now is which cable makers are reputable, and which are not.
 
Another burnt 12VHPWR cable. Zotac 4090 + FSP Hydro G Pro 1000 W ATX 3.0 + the 12VHPWR cable that came with the PSU.
Minor melting on the GPU side, major damage on the PSU side.
[image: 1746469514075.jpeg]

[image: 1746469471240.jpeg]

Source: https://quasarzone.com/bbs/qf_vga/views/6688606
edit: discoloration -> melting on GPU side.
 
Finally we're also talking about power supply units. I mentioned that ages ago already. Nvidia could sell their cards as-is if the power supply units, cables, and connectors were properly made. I wouldn't mind more expensive power supply units for Nvidia graphics cards only.

Restrict the current on each pin circuit-wise in the power supply unit. PSUs are cheaply made inside.

This leaves people with a few hard choices: replace the cable with something more suited to the case, replace the case and rebuild the system, or ignore the issue and hope bending the Nvidia cable close to the connector causes no problems.

* return the Nvidia GPU model and buy another GPU.

I'm also returning something in a few days because the product is not as described in the marketing materials and on the product packaging.
 
Any reputable manufacturer designs in (and tests for) a degree of safety factor.
The root of the problem we are experiencing right now is that this connector spec as a whole has a terribly small safety factor compared to its predecessors.
 
The root of the problem we are experiencing right now is that this connector spec as a whole has a terribly small safety factor compared to its predecessors.
No, what you're seeing is an increase in power demands. When a cable supplying 200 or 300 watts is out of spec, it merely prevents the card from booting, or some more innocuous error. When a cable supplying 600 watts is out of spec, it can generate enough heat to melt. This is also why we're only seeing these issues with third-party cables -- and only those from certain manufacturers.
 
No, what you're seeing is an increase in power demands. When a cable supplying 200 or 300 watts is out of spec, it merely prevents the card from booting, or some more innocuous error. When a cable supplying 600 watts is out of spec, it can generate enough heat to melt. This is also why we're only seeing these issues with third-party cables -- and only those from certain manufacturers.

We're seeing more than one problem. But one problem, without a doubt, is that the margin of safety of the spec has dropped from roughly 200% previously to around 25% now with this connector.

There is no "No" you can respond with to what I said.
 
2 pins, not 4... unless I am mistaken. All these cables that burn look like they have the 2-wire dealio.

Bunk.
 
We're seeing more than one problem. But one problem, without a doubt, is that the margin of safety of the spec has dropped from roughly 200% previously to around 25% now with this connector.
Your calculation is based on ignoring the safety factor designed into the cables and connectors themselves, rather than the specification. That's the proper place for it, anyway. If you need 50A, you specify 50A -- you don't specify 75, on the assumption that parts will fail at the slightest overage.

In any case, that's not even the point I'm refuting, but rather the asinine belief that the lack of "per pin sensing" is some sort of design fault on a peripheral card. If it was required, the proper place for it would be on the PSU itself, since that captures failure modes that occur before the card could sense it. But it's not any sort of requirement.
 
Your calculation is based on ignoring the safety factor designed into the cables and connectors themselves, rather than the specification.
I don't think so. The same connectors have a very different rating in sane applications where it matters, such as automotive and aviation. It's only in these garbage consumer products that they are magically able to supposedly handle way more current. And that comes down to the safety factor in the spec being trash compared to before.
 
I don't think so. The same connectors have a very different rating in sane applications where it matters. Such as automotive and aviation. It's only in these garbage consumer products that they are magicly able to supposedly handle way more [sic] current.
You're contradicting yourself in your own post. If "the same connector" is rated differently in another application, that proves the manufacturer of that connector is manipulating the rated figure. A specification standard doesn't apply to particular connectors, it merely states that the connector itself should be rated for a specific load.

And yes, your figure ignores the safety factor designed into the cables and connectors themselves. Post your calculation and I'll prove it to you. There is absolutely zero reason that a specification standard should have to itself add a further safety standard, beyond what is (or should be) already within the components themselves.
 
You're contradicting yourself in your own post. If "the same connector" is rated differently in another application, that proves the manufacturer of that connector is manipulating the rated figure. A specification standard doesn't apply to particular connectors, it merely states that the connector itself should be rated for a specific load.

And yes, your figure ignores the safety factor designed into the cables and connectors themselves. Post your calculation and I'll prove it to you. There is absolutely zero reason that a specification standard should have to itself add a further safety standard, beyond what is (or should be) already within the components themselves.
No.... NVIDIA and PCI-SIG.
The spec lacks a safety factor. Individual vendors don't create specs.
PCI-SIG created the spec for video cards. NVIDIA accepted it and continues to accept it despite the results.
PCI-SIG didn't invent the terminal pins and sockets. They used off-the-shelf parts. That off-the-shelf stuff has a very different rating in automotive and aviation products. Consumer electronics are class 1. Crap. I guess PCI-SIG decided crap electronics deserve a crap safety factor. Why? Who knows. The previous PCI-E aux power connector spec was far more reasonable and safe.
The old PCI-E connectors had around a 200% safety factor. This new connector (and revisions of it) has a safety factor of around 25%, thanks to PCI-SIG's wishes.
Crap.
It's not "manufacturers" (vendors) who are fudging the safety factor. It's a bad spec from the source (when it comes to safety factor and resilience against error and imperfection).
Let's be clear: PCI-SIG established the safety factor. Not industry. Not the original designers and manufacturers of those contacts and pins, which were already being used in the real world with more reasonable safety factors. Why PCI-SIG chose what they did, who knows. Probably they were catering to sexiness, not safety. Were they coerced? Who knows.

And what is that [sic] up there? What did you take exception to? That I said current, not power? Current is the proper consideration.
 
PCI-SIG created the spec for video cards. NVIDIA accepted it...PCI-SIG didn't invent the terminal pins and sockets. They used off-the-shelf stuff. That off-the-shelf stuff has a very different rating when it comes to automotive and aviation
You couldn't possibly be more wrong. Neither the 12VHPWR nor the 12V-2x6 connector was "off the shelf" -- both first appeared in use with NVIDIA graphics cards. Nor does PCI-SIG "set ratings" for individual components. The standard specifies a minimum specification, whereas manufacturers like Molex and Amphenol specify a maximum rating for the components used to meet that specification. I rather doubt your claim that 12VHPWR-type connectors are being used in "automotive and aviation", but even if true, if the manufacturer rates them differently there, that's on them, and has nothing to do with PCI-SIG or their standard.

It's not "manufacturers" (vendors) who are fudging with the safety factor. It's a bad spec from the source .... Let's be clear: PCI-SIG established the safety factor. Not industry
Again, you fail to understand. I'll illustrate with actual numbers. The standard specifies a per-pin rating of 9.5A and a sustained draw of 600W. If a pin fails when subjected to 9.51A, or a connector fails when supplying 601 watts, the safety factor is zero. Any reputable manufacturer will ensure a FoS substantially above 1 -- 1.3 or higher. In components where a failure could potentially result in the loss of hundreds of lives (like aviation, for instance), they might demand a FoS of 2 or even 3. But again -- that's a manufacturer rating.

Some YouTube lackwits simple-mindedly multiply 9.5 x 12 x 6 = 684, divide by 600, and thus conclude the "safety factor" here is the 14% overage. But, despite your belief otherwise, the actual safety factor lies within the components themselves. Which is where it should be.

And what is that sic stuff up there? What did you take exception to? I said current, not power? Current is the proper consideration.
Actually, I was taking exception to the grammar and spelling, which was beneath me. I apologize -- however the proper consideration is indeed power. The standard specifies 9.5A @ 12V .. because V x I = power. These connectors could handle far more current at 1.5 volts than they can at 12.
 
You couldn't possibly be more wrong. Neither the 12VHPWR nor the 12V-2x6 connector was "off the shelf" -- both first appeared in use with NVIDIA graphics cards. Nor does PCI-SIG "set ratings" for individual components. The standard specifies a minimum specification, whereas manufacturers like Molex and Amphenol specify a maximum rating for the components used to meet that specification. I rather doubt your claim that 12VHPWR-type connectors are being used in "automotive and aviation", but even if true, if the manufacturer rates them differently there, that's on them, and has nothing to do with PCI-SIG or their standard.


Again, you fail to understand. I'll illustrate with actual numbers. The standard specifies a per-pin rating of 9.5A and a sustained draw of 600W. If a pin fails when subjected to 9.51A, or a connector fails when supplying 601 watts, the safety factor is zero. Any reputable manufacturer will ensure a FoS substantially above 1 -- 1.3 or higher. In components where a failure could potentially result in the loss of thousands of lives (like aviation, for instance), they might demand a FoS of 2 or even 3. But again -- that's a manufacturer rating.

Some YouTube lackwits simple-mindedly multiply 9.5 x 12 x 6 = 684, divide by 600, and thus conclude the "safety factor" here is the 14% overage. But, despite your belief otherwise, the actual safety factor lies within the components themselves. Which is where it should be.


Actually, I was taking exception to the grammar and spelling, which was beneath me. I apologize -- however the proper consideration is indeed power. The standard specifies 9.5A @ 12V .. because V x I = power. These connectors could handle far more current at 1.5 volts than they can at 12.
The contacts and pins that were used already existed. In other applications they have more sane safety factors. That's simply fact. If you think PCI-SIG designed them, we are off to a bad start. No other logic will make sense to you if your premise is that far off. The fact that they already existed and were used in a much more sane way is core to understanding why PCI-SIG messed this up so badly.

These pins have a max current rating from their manufacturer. I'm comparing apples to apples. PCI-SIG didn't design them or rate them. All PCI-SIG did was decide to use them and decide how far to push their limit. All those YouTube lackwits are similarly comparing apples to apples. They are right to draw their conclusions. They are comparing the Mini-Fit Jr max current rating to the Micro-Fit+, and seeing how crazily PCI-SIG underrates the Mini-Fit Jr and overrates the Micro-Fit+.

If you look at the safety margin of the 6- and 8-pin connectors that preceded it, you will see exactly my point. Look at the max rating of each pin, the number of connections in the circuit, and the power rating. The safety margin of this connector is absolute crap in comparison, and it's not hard to verify. The 6-pin was rated at 75 watts, but the connector could handle 324 watts max (assuming proper wire gauge and insulation quality): a safety factor of 4.32x. The 8-pin: a safety factor of 2.16x. This 12VHPWR connector is rated for 600 watts, but the connector can handle 936 watts maximum (assuming best conditions, wire gauge, etcetera): a safety factor of only 1.56x. No comparison. You can play with the numbers; there are all sorts of different coatings, ratings, and scenarios, but that is one comparison. Each scenario will give you different numbers, but the same conclusion. I kept everything the same in this example and gave 12VHPWR the benefit of the doubt. One version of the 12VHPWR connector (using 9 A max contacts) can only handle 648 watts. PCI-SIG dropped the ball. This connector sucks. Did someone twist their arm, or did they drop this ball on their own? In the past, PCI-SIG would have rated this connector at 300 watts maximum; now it's 600 watts instead.
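The comparison above is easy to reproduce. The "max" figures (324 W and 936 W) are the poster's assumed best-case connector capabilities, not official limits, and the 150 W figure is the standard PCIe 8-pin rating; with those inputs the ratios work out as claimed:

```python
# Ratio of claimed best-case capability to spec rating, using the figures
# quoted in the post above. The "max" numbers are assumptions about
# best-case contacts and wire gauge, not official limits.
connectors = {
    "PCIe 6-pin": (75, 324),    # (rated W, claimed max W)
    "PCIe 8-pin": (150, 324),
    "12VHPWR":    (600, 936),
}
for name, (rated_w, max_w) in connectors.items():
    print(f"{name}: rated {rated_w} W, max {max_w} W -> {max_w / rated_w:.2f}x")
```

So under these assumptions the headroom falls from 4.32x (6-pin) and 2.16x (8-pin) to 1.56x for 12VHPWR.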

Mod-DIY, Asus, Corsair, NVIDIA, and similar vendors do not manufacture the contacts and pins. They are off-the-shelf components manufactured in a supply chain that existed previously.


Current is the proper consideration. Current is what creates the heat, not voltage. Power needlessly drags voltage into a discussion of current.
 
The root of the problem we are experiencing right now is that this connector spec as a whole has a terribly small safety factor compared to its predecessors.
The safety margin is only a small side problem compared to the much bigger problem, which is the GPU board design NVIDIA made up and forces partners to use.

As it stands now, it's only safe to use this connector on a 100-watt GPU, because then you know that even if 5 pins fail, the one good connection isn't carrying over 100 W. And by that logic, with the way NVIDIA has cooked their board design, even 10 connectors could be unsafe, because 59 out of 60 pins could make a bad connection and the 1 remaining pin would still melt... that's how stupid the engineers at NVIDIA are.

I think only PCI-SIG could force new rules onto NVIDIA, AMD and Intel. But that's outside my field of knowledge.
 
Current is the proper consideration. Current is what creates the heat, not voltage. Power needlessly gets voltage involved in a discussion of current.

I doubt this fact has changed:

Watt = voltage multiplied by current

W = V * A

plus the sub-formulas when you substitute, so you get W = U^2 / R = I^2 * R

for direct-current circuits, of course.

0 voltage = 0 power.

The standard specifies 9.5A @ 12V .. because V x I = power.

which seems to be explained above anyway. Do not ignore the maths of electronics AND physics, else I will have to use the ignore button on the user profile.
Maths and physics are among the few fields where most things are facts. Language and some other fields are more nonsense fields.

No, what you're seeing is an increase in power demands. When a cable supplying 200 or 300 watts is out of spec, it merely prevents the card from booting, or some more innocuous error. When a cable supplying 600 watts is out of spec, it can generate enough heat to melt. This is also why we're only seeing these issues with third-party cables -- and only those from certain manufacturers.

I suspect some people are overclocking. See e.g. the Gamers Nexus overclocking video of the Astral.

Opinion: Nvidia created a fait accompli with cables which cannot supply more power in "some cases" for "some reason".

I also do not know if a graphics card under a typical gaming load, e.g. The Last of Us Part I, will always run at 100% = 600 watts. I also do not know if every Nvidia 4090 and similar card is heavily used. I also own a Radeon 7800 XT which barely has work to do. Except for 3 games, that card has never been pushed over 100 watts since December 2023. My card always uses half its rated watts.
 
Regarding connector and pin/socket ratings, they are typically handled separately.

The pins/sockets are rated in current because there is an expected contact resistance that is acceptable. I^2*R = W means the heat generated in the contact is impacted most by current. The R is the contact (pin-socket) resistance. The rating given for contacts like these, which go into a plastic housing, is intended to keep the wattage (heat) low enough not to melt the plastic connector.

A connector often has a voltage rating that has more to do with creepage and clearance spacing (i.e. it prevents arcing or shorts, with margin for standardized hi-pot testing at the rated voltage). When you run a 12V circuit through this connector, it isn't dropping 12V across the contact; it's dropping I*R of the contact point. So if the contact resistance is 12 mOhms and the current is 8 A, the voltage drop is 0.096 V even though you're operating at 12 V. So assuming that the application rating of 12 V and the contact current rating of 9.5 A (times 6 connections) mean the connector is rated for 684 W is wrong.

In the context of the connector ratings and contact ratings and what we're seeing here, you'd have the same problem if the operating voltage was 5V or 18V. It's the current and the contact resistance that are causing a local voltage drop in the connection that are resulting in heat melting the connector.

Anticipating a "why do they rate the cables 600W?" question: it's because the application of these cables is very specific to graphics cards that operate at 12 V, and the wattage here relates to the current rating of the contacts. If you go back to the connector specification, the contacts are rated in current because the operating voltage doesn't matter; what matters is the current times the voltage drop across the contacts.
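The worked example in that post can be checked in a couple of lines. The 12 mOhm contact resistance and 8 A current are the example figures from the post above, not measured values; the point is that the drop and heat at the contact depend on I and R, not on the 12 V rail:

```python
# Heat at a contact depends on current and contact resistance,
# not on the 12 V rail itself (example figures from the post above).
contact_resistance = 0.012   # 12 mOhm
current = 8.0                # A through one contact

drop = current * contact_resistance       # V = I * R
heat = current ** 2 * contact_resistance  # P = I^2 * R, watts dissipated at the contact

print(f"drop: {drop:.3f} V, heat: {heat:.3f} W")
```

That gives a 0.096 V drop at the contact, matching the post; double the contact resistance (a worn or poorly crimped contact) and the heat at that one spot doubles too.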
 