
It's happening again, melting 12v high pwr connectors

Status
Not open for further replies.
It's insane to me that Nvidia continues to move forward on this and gets no actual pushback. All the influencers who destroyed Intel with such ease must really like Nvidia, wink wink :rolleyes:
I'm surprised that NVIDIA isn't even being forced to recall all RTX 50s! We all knew that RTX 40s could have melting issues, and they did nothing to prevent the issue on RTX 50s (even though the 3090 Ti design seems to have it right...) :shadedshu:
They just did their best to have the smallest PCB ever on the 5090, so adding extra capacitors would have forced Nvidia to redesign the PCB and cooler and drive up R&D costs again. They probably didn't want that, so they went ahead with it anyway. Nvidia should be sued and forced to fix all their RTX 50 GPUs before being able to sell them.
 
They just did their best to have the smallest PCB ever on the 5090, so adding extra capacitors would have forced Nvidia to redesign the PCB and cooler and drive up R&D costs again. They probably didn't want that, so they went ahead with it anyway. Nvidia should be sued and forced to fix all their RTX 50 GPUs before being able to sell them.
Which part of "it's a cable and/or connector problem" do you not understand? "Redesigning the PCB" isn't going to change that.

Buy a good quality cable and don't abuse it. You'll be fine.
 
It's insane to me that Nvidia continues to move forward on this and gets no actual pushback. All the influencers who destroyed Intel with such ease must really like Nvidia, wink wink :rolleyes:

A lot of these people rely on Nvidia to get samples. Given that it holds 90%+ of the market, Nvidia has a lot of leverage over what is said.
 
I take it no one spotted this on Wikipedia?

48VHPWR connector

PCIe 5.1 CEM introduced a connector for 48 V with just two current-carrying contacts and four sense pins.[41] The contacts are rated for 15 A continuous current.

From the PCIe article.
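For context, the quoted spec implies a sizeable power budget. Here's a back-of-the-envelope check, assuming the two current-carrying contacts form a single supply/return pair (the article doesn't spell that out):

```python
# Back-of-the-envelope power budget for the 48VHPWR connector quoted
# above, using only the figures from the PCIe article (48 V rail,
# two current-carrying contacts rated 15 A continuous each).
# Assumption: the two contacts form one supply/return pair, so the
# supply current is limited by a single 15 A contact.
volts = 48.0
amps_per_contact = 15.0

max_power_w = volts * amps_per_contact
print(f"Implied continuous power budget: {max_power_w:.0f} W")  # 720 W
```

That works out to 720 W continuous through just one pair of contacts, which is presumably the point of moving to 48 V.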
 
What???

If some cables are unsafe, it's because the connector terminals in the plugs, due to damage or manufacturing flaws, don't grip the pins firmly enough.

The female terminals in the plugs may be too loose, or shifted back in the plug body.
But even a perfectly good cable, making contact at the specified resistance, can end up with an unsafe amount of current through a single lead if no load balancing is applied.

Broken connectors and user error are a different matter.
 
Bad card design is bad. This failure mode is a violation of UL requirements; anybody claiming the cable was misused or abused is a moron. We have UL for exactly this kind of reason.
 
Bad card design is bad. This failure mode is a violation of UL requirements; anybody claiming the cable was misused or abused is a moron. We have UL for exactly this kind of reason.
For those in the US, contact the FTC; everyone else, contact your country's equivalent agency.


Has Louis Rossmann even been notified of this crap? He's the one all about Right to Repair.
 
Perhaps because most 5090 purchasers are able to think for themselves and look at the actual data, rather than buying into "influencer" generated hype.
Influencer-generated hype versus day-one 5090 purchasers who are able to think for themselves... do you even halfway realize how strange what you just said sounds?
 
because if it did, you wouldn’t be able to operate the 5090 at all.
Methinks this is a feature :slap:
For those in the US, contact the FTC; everyone else, contact your country's equivalent agency.
Probably because no one knows if there'll be an FTC tomorrow or not, certainly given what that half-a-trillion-dollar man is doing to all "govt" agencies, or whatever this falls under!
 
I'm glad we're at page 25 and we can still quite simply conclude the cable is underspecced for its job and far too prone to manufacturing variance. Something, again, that a child could tell you given the size of the older cable and its rated wattage. A simple game of spot-the-ten-differences.

Let's go guys, page 50 you can do it. Don't forget to keep buying in the meantime.
The cable and the connector are fine. The main issue here is that the graphics cards themselves don't monitor the pins like the 3090 Ti did. The cable getting wear and tear over time is normal, and it most likely affected 8-pins as well; the difference is due to the 150 W limit on the 8-pin cables - the max that could go through a pin is what, 12 amps? Now we are looking at over 20.
 
The cable and the connector are fine. The main issue here is that the graphics cards themselves don't monitor the pins like the 3090 Ti did. The cable getting wear and tear over time is normal, and it most likely affected 8-pins as well; the difference is due to the 150 W limit on the 8-pin cables - the max that could go through a pin is what, 12 amps? Now we are looking at over 20.
You do realize that with a proper connector design (safety factor well above 1.1) those safeguards (such as per-pin current monitoring) are completely unnecessary?
Was per-pin current monitoring ever implemented for the 8-pin cable? No. Was it ever needed? No.
Because the 8-pin cable and connector were properly designed (robust, with a high enough safety factor) for the rated 150 W.
Not to mention that one 8-pin cable will outlive something like ten 12-pin connectors in terms of (dis)connection cycles.

Nvidia forced the use of a new connector with a low safety factor and, along with it, omitted the safeguards. To save costs. Pathetic.
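The safety-factor argument above can be sketched with quick per-pin arithmetic. The terminal ratings used here (~8 A for a Mini-Fit-style 8-pin terminal, ~9.5 A for a 12VHPWR terminal, the figure also cited later in this thread) are illustrative assumptions, not official numbers - check the actual connector datasheets:

```python
# Rough per-pin current comparison between an 8-pin PCIe cable and a
# 16-pin 12VHPWR cable. Terminal ratings are assumptions for
# illustration (~8 A Mini-Fit-style, ~9.5 A 12VHPWR).
def per_pin_amps(watts, volts, live_pins):
    """Nominal current per pin if the load splits evenly."""
    return watts / volts / live_pins

pcie8 = per_pin_amps(150, 12, 3)   # 8-pin: 150 W over three 12 V pins
hpwr = per_pin_amps(600, 12, 6)    # 12VHPWR: 600 W over six 12 V pins

print(f"8-pin:   {pcie8:.2f} A/pin, safety factor ~{8.0 / pcie8:.2f}")
print(f"12VHPWR: {hpwr:.2f} A/pin, safety factor ~{9.5 / hpwr:.2f}")
```

Under those assumptions the 8-pin sits near a 1.9x margin while the 12VHPWR sits around 1.14x, which is exactly the "barely above 1.1" gap being argued about here.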
 
Nvidia (and AMD) is working hard against overclocking; my speculation is that this "16-pin problem" is deliberate and partly a goal, which is why they are exacerbating it rather than solving it. With the older cards, it wasn't a problem to do a shunt mod and throw 800 W+ at the card.
Both companies went to great lengths to lock performance into the percentages they wanted, and no more.
So it's all about money and control here, because control over that makes more money.
 
Well, I've read through all of this thread. While I see there are users here that have used the connector for an extended period of time without issue, I don't understand the need to hand-wave away the repeated problems this connector is having as if it's no big deal.

You are spending thousands of dollars on a piece of hardware that could potentially and catastrophically fail on you. Yes, going by the statistics the failure rate is currently low. But the fact of the matter is we are seeing more cases of this coming to light, and to avoid them you seemingly need to meet an ambiguous set of conditions which don't apply to the 8-pin connector. Don't buy third-party cables, use the current generation only, ensure your PSU is modern enough, ensure the connector is seated correctly, don't bend or flex the cable too far, monitor the voltage in HWiNFO to make sure there are no drops on the problematic rail, monitor the connector heat, and so on and so forth. And even with those conditions met there's no guarantee it won't just happen to you.

With the amount of money you are spending on this hardware and the horror stories of the modern RMA/warranty processes users have to go through, why is this OK? Would it not be better to take the manufacturer to task or at least vote with your wallet until they do? The company is not your friend, they won't do you any favors so why are you running defence for them, for free?

The acceptable failure rate for a multi-thousand-dollar product should be zero or as close to zero as possible. It doesn't matter that the current amount of failures is statistically low, they should not be happening. And the manufacturer should absolutely not be allowed to get away with it because "it works fine on my machine".
 
Nvidia (and AMD) is working hard against overclocking; my speculation is that this "16-pin problem" is deliberate and partly a goal, which is why they are exacerbating it rather than solving it. With the older cards, it wasn't a problem to do a shunt mod and throw 800 W+ at the card.
There is no conspiracy against overclockers. What prevents them from directly soldering the cables, especially on the FE? One big blob.
 
The cable and the connector are fine. The main issue here is that the graphics cards themselves don't monitor the pins like the 3090 Ti did. The cable getting wear and tear over time is normal, and it most likely affected 8-pins as well; the difference is due to the 150 W limit on the 8-pin cables - the max that could go through a pin is what, 12 amps? Now we are looking at over 20.
Nope, you're wrong. The cable/connector is very prone to damage from just normal use, and you can't even tell it has degraded. Even just reseating the cable already changes the per-pin amperage (and whether it exceeds 9.5 A), so it's clear this is a wobbly, incompetent piece of crap. It's just not good enough - and that still leaves plenty of room for the fact that GPUs also need some kind of load-balancing measures, or perhaps you even need something on the PSU side.

Basically every aspect of 12VHPWR needs a re-review.
 
There is no conspiracy against overclockers. What prevents them from directly soldering the cables, especially on the FE? One big blob.
What about the BIOS lock? :)
Why do you think there is no MPT for the 7000 series? ;)
Why do you think there is telemetry for in/out watts on all cards?

There is no conspiracy - it's just reality.
 
A lot of these people rely on Nvidia to get samples. Given that it holds 90%+ of the market, Nvidia has a lot of leverage over what is said.

conflict of interest
 
Nope, you're wrong. The cable/connector is very prone to damage from just normal use, and you can't even tell it has degraded. Even just reseating the cable already changes the per-pin amperage (and whether it exceeds 9.5 A), so it's clear this is a wobbly, incompetent piece of crap. It's just not good enough - and that still leaves plenty of room for the fact that GPUs also need some kind of load-balancing measures, or perhaps you even need something on the PSU side.

Basically every aspect of 12VHPWR needs a re-review.
But all of that might, and probably does, apply to the 8-pin as well. The difference is the 8-pins weren't allowed to pull more than 150 W each, which means that even when the pins are damaged, the amperage going through them stays relatively safe.
 
But all of that might, and probably does, apply to the 8-pin as well. The difference is the 8-pins weren't allowed to pull more than 150 W each, which means that even when the pins are damaged, the amperage going through them stays relatively safe.
It's more a case of the pins never getting more than 150 W each. Lots of high-power GPUs were made without load balancing.
Instead, the problem was mitigated by just adding more connectors: 2x/3x 8-pin already does load balancing mechanically.

Moreover, you're just assuming the 12VHPWR's robustness is equal to that of the 6/8-pin PCIe connector. Clearly this is not the case; even the size and amount of material is different, PCIe has far more mating cycles, etc.
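The worry about uneven current sharing can be illustrated with a toy current-divider model: pins wired in parallel share current in inverse proportion to their contact resistance, so one degraded (high-resistance) contact pushes extra current onto its neighbours. All the resistance values below are invented for illustration:

```python
# Toy current-divider model of parallel pins sharing a load.
def pin_currents(total_amps, resistances_mohm):
    """Split a total current across parallel paths by conductance."""
    conductances = [1.0 / r for r in resistances_mohm]
    total_g = sum(conductances)
    return [total_amps * g / total_g for g in conductances]

# Six 12 V pins carrying 50 A (~600 W) total; one worn contact has
# five times the resistance of the others (hypothetical values).
currents = pin_currents(50.0, [6.0] * 5 + [30.0])
for i, amps in enumerate(currents):
    print(f"pin {i}: {amps:.2f} A")
```

With these made-up numbers the five healthy pins climb to about 9.6 A each while the worn pin drops to ~1.9 A, so even one bad contact pushes its neighbours past the ~9.5 A figure cited earlier in the thread.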
 
Bad card design is bad. This failure mode is a violation of UL requirements
For those in the US, contact the FTC; everyone else, contact your country's equivalent agency.
Sure, let's take the inane fearmongering up another notch. Why not skip the middleman and just grab a torch and pitchfork and head to Santa Clara to burn down evil Doctor NVidia's fortress of power? That's about as sensible as telling people to contact the FTC because they read a few stories on the Interwebs.

As for this "violation of UL" idiocy: there is no legal requirement for UL certification, nor does UL certify "standards", only individual products, meaning it doesn't affect NVidia whatsoever.
 
Nope, you're wrong. The cable/connector is very prone to damage from just normal use, and you can't even tell it has degraded. Even just reseating the cable already changes the per-pin amperage (and whether it exceeds 9.5 A), so it's clear this is a wobbly, incompetent piece of crap. It's just not good enough - and that still leaves plenty of room for the fact that GPUs also need some kind of load-balancing measures, or perhaps you even need something on the PSU side.

Basically every aspect of 12VHPWR needs a re-review.

Just cut the sense0 cable and call it a day. No need for evaluation; the writing is on the wall. But it looks pretty sturdy to me.
On the other hand, the female terminals I could find are 20 AWG; that's 5 A at 60 °C, i.e. an RTX 5080 at most. It seems to me they're trying to mount monster-truck tires on a buggy by putting 16 AWG wire on those terminals. The contact area is just two small bumps on each side that I can't even see.
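The wire/terminal gauge mismatch described here is easy to quantify with the standard AWG diameter formula; note that real current ratings also depend on insulation, terminal design, and ambient temperature, so this only compares copper cross-sections:

```python
import math

# Cross-sectional area by AWG gauge, using the standard formula
# d_mm = 0.127 * 92 ** ((36 - n) / 39). This only compares copper
# cross-sections; it says nothing about terminal contact quality.
def awg_area_mm2(n):
    """Cross-sectional area of an AWG-n solid conductor in mm^2."""
    d = 0.127 * 92 ** ((36 - n) / 39)
    return math.pi * (d / 2) ** 2

a16 = awg_area_mm2(16)
a20 = awg_area_mm2(20)
print(f"16 AWG: {a16:.2f} mm^2, 20 AWG: {a20:.2f} mm^2")
print(f"16 AWG wire has ~{a16 / a20:.1f}x the copper of a 20 AWG terminal")
```

So the 16 AWG wire carries roughly two and a half times the copper of the 20 AWG terminal it lands on, which is the "monster truck tires on a buggy" picture in numbers.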
 
But all of that might and probably does apply to the 8pin as well. The difference is the 8pins weren't allowed to pull more than 150w each, which means that even when the pins are damaged the amount of amperage going through them is relatively safe.
You got none of what he said. Besides the 8-pin being able to transfer way more (2x or more) than 150 W per pin, the connector itself is also less prone to overheating on bad contact, or to bad contact in general, plus it survives 100 times more mating cycles.
 
You got none of what he said. Besides the 8-pin being able to transfer way more (2x or more) than 150 W per pin, the connector itself is also less prone to overheating on bad contact, or to bad contact in general, plus it survives 100 times more mating cycles.
Huh? You think the 8-pin can transfer 300 W per pin? Oh kay.
 
Bad wording, I'll give you that. Probably it can. But I meant per unit ^^
 
Bad wording, I'll give you that. Probably it can. But I meant per unit ^^
Sure then, but so can the 12VHPWR. We don't see burned connectors on the 4070; it's mostly the 90-class parts and some 80s (which can also be pushed to 450 W and beyond, btw).
 