
It's happening again: melting 12VHPWR connectors

Again...

Engineering controls are not effective in preventing damage to the less-robust design.

 
That ASUS idea of R002 shunts (0.002 ohm per wire) on the GPU board side to balance and monitor the power wires is quite interesting. For more safety, something similar could be included on the PSU side as well.
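Purely as an illustration of what per-wire shunts buy you (a hypothetical Python sketch with made-up names and values, not ASUS's actual implementation): with a 2 mOhm "R002" shunt in series with each wire, the current in that wire is simply the measured voltage drop divided by the shunt resistance, and the power burned in the shunt stays negligible.

# Illustrative only; names and numbers are hypothetical, not ASUS's design.
SHUNT_OHMS = 0.002  # "R002" = 2 milliohm in series with each 12 V wire

def wire_current(shunt_drop_volts: float) -> float:
    """Ohm's law: the current through the wire is V_drop / R_shunt."""
    return shunt_drop_volts / SHUNT_OHMS

# Example: a 16 mV drop across the 2 mOhm shunt means ~8 A in that wire.
i = wire_current(0.016)
print(f"current: {i:.1f} A, shunt loss: {i**2 * SHUNT_OHMS:.2f} W")  # -> 8.0 A, 0.13 W

Once each wire's current is known, the board (or the PSU) can compare the six readings and react to any wire that is hogging the load.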
 
Ah, so one person using a third-party cable in a space-constrained SFX build is evidence of an endemic problem. Got it!

Buildzoid used to be a reputable source; now he's become just another clickbait YouTuber spreading FUD.
 
The fact that he used a third-party connector is not an excuse for this recurring issue.

If manufacturers seem unable, time and time again, to make products that work fine with these connectors, then clearly it's not a problem just on their end; the design still sucks just as it did before.
 
Engineering controls are not effective in preventing damage to the less-robust design....
What engineering? Every good engineer knows you can "flow" a maximum of about 10 A through pins as thin as the ones in this "12V high power" connector.
That is the maximum; where is the safe value? Divide it by at least 2, because you have to take the contact surface into account, and you are already at roughly 300 W as a safe maximum, not 600 W anymore.
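Running the post's own numbers (6 current-carrying pin pairs at 12 V, a 10 A per-pin ceiling, and a derating factor of at least 2 for contact quality; a rough sanity check, not official spec math):

PINS = 6               # 12 V pin/return pairs in the connector
VOLTS = 12.0
I_MAX_PER_PIN = 10.0   # absolute per-pin ceiling cited above
DERATING = 2.0         # margin for contact surface / real-world conditions

absolute_max = PINS * I_MAX_PER_PIN * VOLTS   # 720 W with perfect contacts
safe_max = absolute_max / DERATING            # 360 W with a 2x margin
print(absolute_max, safe_max)                 # 720.0 360.0

However you slice the margin, the comfortable number lands in the same ballpark as the ~300 W above and well under the 600 W the connector is sold for.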
 
It was a third-party cable, and it had been used on a 4090 first. I am not surprised that it melted.

Why would anyone use a third-party cable and not the one that comes in the box with the GPU or the power supply? Humans make no sense to me.
 
Yeah, I read his comment on the NVIDIA subreddit.

"I am not distant from the PC-building world and know what I'm doing."

"Third party cable"

:rolleyes:
 
Interesting.

None of the third-party 8-pin cables I've ever used have melted.


Hmmmm.....maybe the shitty connector with no safety overhead....is still shitty?
 
der8auer is probably laughing his ass off; he already said those connectors SUCK years ago.

Thank the lord my 3080 uses traditional PCIe 8-pin connectors.
 
PC hardware used to be foolproof, at least in everything related to power.
Not anymore, I guess.

Statistically, mistakes (one way or the other) happen everywhere, and you cannot rely on the human factor. That's a major principle in a lot of cases.
In this case, when a connector is pushed close to the edge of what a physical connection of this size can sustain, like 600 W, and there is little or no room for mistakes, you have to force the user to do the right thing. You can't just hope for the best.
Adding monitoring to each of the pins is a sensible thing to do, just like @buildzoid is suggesting in this video, especially on a $2000+ product.
I don't think he is unreasonable here.

It's simple: the second the GPU sees a deviation above a certain level between the currents on the pins, it prevents the user from running anything that can damage the card.
This is the simplest way to force the user to make a proper connection.
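For what it's worth, the check being described is almost trivial to express. Here is a hypothetical Python sketch of it (pin count, limits, and thresholds are made up, and this is not how any shipping GPU firmware actually works): per-pin current readings feeding a simple imbalance guard.

# Hypothetical illustration of per-pin imbalance detection; thresholds are invented.
MAX_DEVIATION = 0.25     # trip if any pin is more than 25% away from the average
PIN_CURRENT_LIMIT = 9.5  # amps, nominal per-pin rating

def connector_ok(pin_currents_a: list[float]) -> bool:
    """Return False if any pin is overloaded or the load is badly unbalanced."""
    if any(i > PIN_CURRENT_LIMIT for i in pin_currents_a):
        return False                  # a single pin is carrying more than it should
    avg = sum(pin_currents_a) / len(pin_currents_a)
    if avg < 1.0:
        return True                   # negligible load, nothing to check
    return all(abs(i - avg) / avg <= MAX_DEVIATION for i in pin_currents_a)

print(connector_ok([8.3] * 6))                        # True: well-seated, load shared evenly
print(connector_ok([12.5, 12.5, 12.5, 12.5, 0, 0]))   # False: two dead pins, the rest overloaded

Per-pin shunts like the ones mentioned earlier in the thread are one way to get those current readings in the first place.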
 
Why would anyone use a third-party cable and not the one that comes in the box with the GPU or the power supply? Humans make no sense to me.

Because what's included in the box is a pretty bad adapter you want to avoid using - not a cable.
 
PC hardware used to be foolproof, at least in everything related to power.
Not anymore, I guess.

Statistically, mistakes (one way or the other) happen everywhere, and you cannot rely on the human factor. That's a major principle in a lot of cases.
In this case, when a connector is pushed close to the edge of what a physical connection of this size can sustain, like 600 W, and there is little or no room for mistakes, you have to force the user to do the right thing. You can't just hope for the best.
Adding monitoring to each of the pins is a sensible thing to do, just like @buildzoid is suggesting in this video, especially on a $2000+ product.
I don't think he is unreasonable here.

It's simple: the second the GPU sees a deviation above a certain level between the currents on the pins, it prevents the user from running anything that can damage the card.
This is the simplest way to force the user to make a proper connection.

Indeed, this would have been a non-issue with PCIe cables... but NVIDIA, in their infinite wisdom, of course had to invent a new problem...
 
Also, the PSU he used appeared to have 12VHPWR on it and not the revised 12V-2x6, which has slightly longer conductor pins and shorter sense pins.

Indeed, this would have been a non-issue with PCIe cables... but NVIDIA, in their infinite wisdom, of course had to invent a new problem...
It was Intel's idea; NVIDIA chose to adopt it.
 

Yeah, wow, it happened to one guy, and we don't know the circumstances... was it fully seated, was it running stock or a custom 1000 W BIOS, etc., etc...

Totally the same as the shitty 12-pin connector...

Also, the PSU he used appeared to have 12VHPWR on it and not the revised 12V-2x6, which has slightly longer conductor pins and shorter sense pins.


It was Intel's idea; NVIDIA chose to adopt it.

No, NVIDIA chose to enforce it... at least with the 3000 series they only used it on their own FE cards, while AIBs stuck with PCIe (obviously) - but with the 4000 series NVIDIA demanded that AIBs use that shitty connector as well...
 
Yeah, wow, it happened to one guy, and we don't know the circumstances... was it fully seated, was it running stock or a custom 1000 W BIOS, etc., etc...

Totally the same as the shitty 12-pin connector...



No, NVIDIA chose to enforce it... at least with the 3000 series they only used it on their own FE cards, while AIBs stuck with PCIe (obviously) - but with the 4000 series NVIDIA demanded that AIBs use that shitty connector as well...
Plenty of 8-pins have melted throughout the years; you just never heard much about it.

But feel free to stay in denial.
 
Why would anyone use a third-party cable and not the one that comes in the box with the GPU or the power supply? Humans make no sense to me.
The user chose to use a directly-connecting PSU-to-GPU cable rather than the included 'hydra head' adapter.
Because, logically: adapter(s) = more impedance/resistance.

Whether that's flawed thinking or not, I can 100% see why the user chose a short +12VHPWR/2x6-to-+12VHPWR/2x6 cable over the included 4x 8-pin to +12VHPWR/2x6 adapter.
Space constraints aside, it would *seem* to be the better choice, electrically.


(PCIe) 8-pins have burned, yes.
Still, that doesn't change the fact that it takes *considerably less* 'going wrong' for +12VHPWR/2x6 to catastrophically fail than it does for PCIe 8-pin or EPS 8-pin.
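To put a rough number on "considerably less going wrong", here is a hedged comparison using commonly cited per-terminal ratings (roughly 8 A for the Mini-Fit pins of a PCIe 8-pin, roughly 9.5 A for the Micro-Fit pins of 12VHPWR/12V-2x6); actual ratings vary by terminal vendor and crimp quality, so treat the exact figures as illustrative:

# Illustrative safety-margin comparison; per-pin ratings are approximate and vendor-dependent.
def headroom(pairs, amps_per_pin, rated_watts, volts=12.0):
    capacity_watts = pairs * amps_per_pin * volts
    return capacity_watts / rated_watts

print(f"PCIe 8-pin: {headroom(3, 8.0, 150):.2f}x")   # ~1.92x its 150 W rating
print(f"12VHPWR:    {headroom(6, 9.5, 600):.2f}x")   # ~1.14x its 600 W rating

The old connector carries its rated load with nearly half of its pin capacity in reserve; 12VHPWR at 600 W runs within roughly 15% of its combined terminal rating, so a single poor contact can eat the entire margin.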
 
Plenty of 8-pins have melted throughout the years; you just never heard much about it.

But feel free to stay in denial.

Well, duh - PCIe has been in use for MANY years, and for a ton of stuff. There is obviously going to be product failure and user error from time to time. But PCIe has never at any point had a failure rate anywhere near the 12-pin's...

The user chose to use a directly-connecting PSU-to-GPU cable rather than the included 'hydra head' adapter.
Because, logically: adapter(s) = more impedance/resistance.

Whether that's flawed thinking or not, I can 100% see why the user chose a short +12VHPWR/2x6-to-+12VHPWR/2x6 cable over the included 4x 8-pin to +12VHPWR/2x6 adapter.
Space constraints aside, it would *seem* to be the better choice, electrically.


(PCIe) 8-pins have burned, yes.
Still, that doesn't change the fact that it takes *considerably less* 'going wrong' for +12VHPWR/2x6 to catastrophically fail than it does for PCIe 8-pin or EPS 8-pin.

Agree 100%! Even if they did want to use a single connector, there was absolutely zero reason to go with such a small gauge, which is what is causing all the issues...
 
Let me get this straight: anyone who points out Nvidia's mistake loses credibility, regardless of whether it's true? This just keeps getting better. Jensen for president! :p
nVidia truly has ascended to "Apple-dom". :laugh:

15 years, and still not much has changed...
 
Again... another one of these... people really do like to keep beating the horse long after it's dead and pummeled beyond recognition.
 
Also, the PSU he used appeared to have 12VHPWR on it and not the revised 12V-2x6, which has slightly longer conductor pins and shorter sense pins.


It was Intel's idea; NVIDIA chose to adopt it.

Nvidia and Dell were the sponsors. Intel just introduced it.

Regarding the use of third-party cables: that should be irrelevant if the cable meets the spec. The whole point of having a spec is so that everyone can produce cables that are interoperable. If cables meet the spec and are still failing, it's pretty obvious the issue is the spec.

It'd be one thing if this were a one-off specific to one manufacturer, but as we are all well aware, it's a long-running, ongoing issue.

Again... another one of these... people really do like to keep beating the horse long after it's dead and pummeled beyond recognition.

Until the spec is fixed, or until NVIDIA drops its power draw to more sane levels, this will keep occurring as it has.

You shouldn't have to babysit your video card because the connector is the least robust thing in your rig.
 