
It's happening again, melting 12v high pwr connectors

There is more than one cause to this problem. Current balancing and the inability to sense faults is one problem. The spec using a woefully underspecced connector is another problem.
Saying one problem is not a problem because another problem is a problem is just crazy. They are both problems. Together, it's just laughable; especially after such a length of time.
 
Could you elaborate on what type of hardware you were working with?
Materials science lab with a variety of equipment ... the specific cabling I was referring to were those to charge ultracapacitor banks -- 250,000+ watts. The cables didn't have per-pin sensing ... but they were intended to be regularly inspected for damage and proper insertion.

When someone plugs in the connector as it should be plugged and it then melts due to uneven distribution of current (design flaw),
Stop. Just stop with the insanity. The specification requires even flow. Uneven current distribution happens -only- if the cable is faulty or improperly mated.

This cable is not suitable for delivering 600+ watts in the first place. It works on paper, but in reality it can pose a fire hazard.
And yet there hasn't been one single fire from it. Meanwhile, there are several million home fires a year worldwide (250K in the US alone) due to faulty 120v cables, and tens of thousands of deaths. Odd you don't seem outraged over that.

I beg to differ. The correct place for load balancing is on the GPU imho. The GPU asks for the power...
And what happens when the failure occurs before the GPU? As just one example, a crimp or damaged insulation can cause a ground short within the cable itself, causing extreme flow from one or more pins on the PSU. No amount of GPU-side sensing will detect that.
 
And yet there hasn't been one single fire from it. Meanwhile, there are several million home fires a year worldwide (250K in the US alone) due to faulty 120v cables, and tens of thousands of deaths. Odd you don't seem outraged over that.
This is faulty logic. This is a tech forum, not a home improvement forum. There are numerous things, many tragic, to be outraged about, but this is just false equivalency. Besides, “starting a fire” is not the sole benchmark for faulty equipment. It’s certainly one indicator, but when there are other protective measures in place to prevent an issue from going from “melting connector” to “house fire,” it’s not the only indicator or measure. Losing a PSU or $3000 GPU is certainly a frustrating unexpected consequence. If loss of life or dwelling is the benchmark, then we can expect quality to decline on everything even more than it already has.

This is not to say that I think this is a massive issue, but 120V household wiring is an entirely different issue with many potential causes.
 
Works good for me, my card only does 400w at the tippy top though.
 
120V cables only handle 15 Amps... Maybe 20 Amps on the bigger circuits worst case.

These 12VHPWR connectors handle 600W at 12V, or 50 Amps, while being a small fraction of the size.

And remember, the heat generated inside of wires is current^2 * resistance.
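
A rough back-of-the-envelope illustration of that relationship (a sketch; the contact resistances below are assumed, illustrative values, not measurements from either connector):

```python
# Rough I^2 * R comparison: a household 15 A plug contact vs. one 12VHPWR pin.
# The milliohm figures are illustrative assumptions, not measured values.

def heat_watts(current_a: float, resistance_ohm: float) -> float:
    """Power dissipated in a contact: P = I^2 * R."""
    return current_a ** 2 * resistance_ohm

# Household outlet: 15 A through one large contact (assume ~5 mOhm).
household = heat_watts(15, 0.005)

# 12VHPWR: 600 W / 12 V = 50 A shared by six 12 V pins -> ~8.3 A each, if balanced
# (assume ~6 mOhm per small pin).
pin_balanced = heat_watts(50 / 6, 0.006)

# The same pin carrying half the total current because of poor balance.
pin_skewed = heat_watts(25, 0.006)

print(f"household contact:       {household:.2f} W")     # ~1.1 W
print(f"12VHPWR pin (balanced):  {pin_balanced:.2f} W")   # ~0.4 W
print(f"12VHPWR pin (25 A):      {pin_skewed:.2f} W")     # ~3.8 W, ~9x the balanced case
```

Because heating scales with the square of the current, a modest imbalance across pins turns into a large difference in heat at the worst pin.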

----------

We've shrunk the cables, dropped the cost of pins to fractions of a penny and then increased the current by multiples over normal cables. It's an absurd situation.
And what happens when the failure occurs before the GPU? As just one example, a crimp or damaged insulation can cause a ground short within the cable itself, causing extreme flow from one or more pins on the PSU. No amount of GPU-side sensing will detect that.

It's fine to have cable specifications

The problem is that MilSpec cable and connector specifications run hundreds of dollars per connection. They aren't these tiny, fraction-of-a-penny plastic housings.

They're cheaping out on the cable and connector. Not necessarily a bad thing but it does seem like we've gone too small and too cheap.
 
I get ir_cow's point, it was a cable that isn't authorised by Nvidia.

Whether a cable is authorized or not misses the point of having a standard in the first place. The whole point of a standard is so that anyone can build a cable to spec and have it be interoperable in the ecosystem.

If the prevailing logic of those defending this nonsense connector is "3rd party cables are bad", that in and of itself is an admission of the failure of both Dell and Nvidia being the two sponsors of the standard. PCI-SIG requires those sponsors to submit the initial draft for the standard and let's be honest, Nvidia was the one pushing this through the whole time.

Not all third-party cables are bad; many are likely higher quality than Nvidia's stock adapter. The prevailing commonality between all these failures of both 1st party and 3rd party cables / adapters is the crap standard.

If the supply cable is out of spec

You'd first have to prove that it was indeed out of spec. You tried that earlier by saying the cable was crimped until it was pointed out that the official spec calls for crimping and that crimping produces a better connection.

Because -- either today or before it -- there has been no "en masse" melting. It is, however, like the SWPE described above, yet another example of mass hysteria, with the proximal cause in both cases being a media campaign aimed at the credulous.

And do you have any proof to back up this additional conspiracy theory?

At this point your comments amount to gish gallop. Making up as many things as possible without proving any of them.

Finally, we should also talk about power supply units. I mentioned that ages ago already. Nvidia could sell their cards as-is if the power supply units, cables and connectors were properly made. I do not mind more expensive power supply units for Nvidia graphics cards only.

Restrict, circuit-wise, the current on each pin at the power supply unit. PSUs are cheaply made inside.

Buildzoid made a video a few weeks back on why doing per pin balancing on the PSU makes no sense, it's a good watch. It's linked earlier in this thread.

FYI, the plastic on the PSU end isn't designed to take temps as high as the GPU end, as it's not exposed to as much heat. The PSU end should not be getting hot; that is not a result of lower-quality PSUs.

In any case, that's not even the point I'm refuting, but rather the asinine belief that the lack of "per pin sensing" is some sort of design fault on a peripheral card. If it was required, the proper place for it would be on the PSU itself, since that captures failure modes that occur before the card could sense it. But it's not any sort of requirement.

It's not really a question of whether it's useful; clearly it was, given Nvidia did it for a long time (3000 series and earlier). The baffling part is why they stopped precisely when power requirements were exploding.

No one said anything about just sensing; the point is balancing. You cannot have one without the other either way, given one has to be able to sense current in order to balance it.

Your calculation is based on ignoring the safety factor designed into the cables and connectors themselves, rather than the specification. That's the proper place for it, anyway.


The safety factor of the individual components is irrelevant, the only number that matters is the specification safety factor as it takes all considerations into account.

If a hypothetical wire has a safety factor of 2.0, it means jack diddly if the connector is only rated for 1.2 or if some other component or factor would reduce that. You are pretty much asking people to ignore the actual safety margin an end user will see in practice and to instead look at safety factors of individual parts that tells them nothing useful. It's like saying an awful tasting cheeseburger is actually delicious because when you remove the stale bun, soggy lettuce, and rancid mayonnaise the meat patty isn't totally a shriveled puck.

FYI, the prior commenter's numbers were generous. 12VHPWR has a paltry safety factor of 1.14: https://overclock3d.net/reviews/gpu_displays/asus-saved-our-bacon-we-had-12vhpwr-12v-2x6-cable-issues/#:~:text=Cable wear is an issue that PC,safety factor of 1.1. That's pitifully small.

Not that 1.2 is much better.
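
For context, that 1.14 falls out of simple arithmetic (a sketch; the ~9.5 A per-terminal rating and the ~8 A figure for HCS Mini-Fit pins are commonly cited numbers used here as assumptions):

```python
# Safety factor = per-pin current rating / per-pin current the spec actually demands.
# Pin ratings are commonly cited figures, used here as assumptions for illustration.

def safety_factor(power_w: float, volts: float, power_pins: int, pin_rating_a: float) -> float:
    per_pin_a = power_w / volts / power_pins
    return pin_rating_a / per_pin_a

# 12VHPWR / 12V-2x6: 600 W over six 12 V pins, terminals rated ~9.5 A each.
print(f"12VHPWR:    {safety_factor(600, 12, 6, 9.5):.2f}")  # ~1.14

# 8-pin PCIe: 150 W over three 12 V pins, HCS terminals rated ~8 A each.
print(f"8-pin PCIe: {safety_factor(150, 12, 3, 8.0):.2f}")  # ~1.92
```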

If you need 50A, you specify 50A -- you don't specify 75, on the assumption that parts will fail at the slightest overage.

Both the rated and maximum values are specified. The rated value is the customer-facing value that everyone sees, while the maximum is the value that should not be exceeded, period. The maximum of 12V2X6 is not that much higher than the rated spec. You seem to be implying that the maximum value is fine to exceed when that is not the case. You want to stay at or under the rated value.

You're contradicting yourself in your own post. If "the same connector" is rated differently in another application, that proves the manufacturer of that connector is manipulating the rated figure. A specification standard doesn't apply to particular connectors, it merely states that the connector itself should be rated for a specific load.

No silly, that just means different applications have different safety factors due to various conditions and variables.

What you call manipulation is common practice and common sense.

No.... NVIDIA and PCI-SIG.
The spec lacks a safety factor. Individual vendors don't create specs.
PCI-SIG created the spec for video cards. NVIDIA accepted it and continues to accept it despite the results.

PCI-SIG = part manufacturers and in this instance Nvidia sponsored the connector with Dell. By extension, that means they created the initial draft (as required of sponsors) and were very likely the one that pushed it through to the end.

I think only PCI-SIG could force new rules onto NVIDIA, AMD and Intel. But that's out of my field of knowledge.

PCI-SIG isn't a separate body with the power to enforce things on its members. Members can choose whether or not to adopt certain standards, and standards are entirely developed by members. In the case of 12VHPWR and 12V2X6, the initial draft came from Nvidia and Dell, a requirement of them being the two sponsors. Likely with Nvidia being the main one pushing it.

If what you wrote is true, what did Nvidia do in this direction to issue certificates to manufacturers of plugs and cables and to check that they work?
Did the cables and plugs pass punishing stress tests for at least a few days?
Should it include with its products, for free (taking into account the astronomical prices of these GPUs), cables tested and recommended from external manufacturers to whom it issued appropriate certificates and other papers?
Should it take responsibility and cover repairs or replacement of equipment?
Should it blame users who have been building and upgrading their computers for several decades and have never had as many problems as they do now, at least with the 4090 and 5090 GPUs: melting plugs, disappearing ROPs, missing hotspot sensor, drivers, etc.?

Who screwed up? Certainly not the user, but the one who came up with it and did not even bother to test it thoroughly, correct it, and only then allow it into production under strict quality control. You're right, it's a design defect, but everyone washes their hands of it and it's hard to prove that it's not the user's fault.

The PC ecosystem is hands off. PCI-SIG (and particularly Nvidia being the sponsor of the connector) designed it and it's up to manufacturers to follow it. This has been a very successful model for the PC industry for a long time. There's no check by Nvidia or anything like that, such a thing would increase costs. It's up to the manufacturer to do the testing.

At the end of the day, it comes down to safety margin and bad card design. The connector doesn't have enough margin and Nvidia stripped its GPUs of the ability to balance load across pins. Both issues were created by Nvidia. Frankly, if either one of them didn't exist, the other likely would never have been found out.

As for who takes the blame, the common party here is Nvidia. It's a standard drafted by Nvidia and a poor card design by Nvidia.

Some are quite able to do so. Others are able -- but prefer to generate higher profits by cutting corners. And even when they don't cut corners, parts and materials do occasionally fail. I'll point out that, in just the USA alone, there are more than a quarter million house fires every year caused by failed 120v cabling, despite that standard being a century old.

This is nothing new and yet this issue only exists with this connector. Pretty obvious what that tells us.

Very true. You just can't get around the laws of physics. Higher current flows require higher-quality materials and manufacturing. This is why I believe that, at some point in the not-too-distant future, consumer PSUs will include a 24v or 48v supply line. It's just impractical to supply kilowatt-level wattages on 12v lines.
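
The arithmetic behind that point (a quick sketch assuming a 1 kW load and identical wiring, since conduction loss scales with the square of the current):

```python
# Current needed to deliver 1 kW at different rail voltages, and the relative
# I^2 * R loss in the same wiring compared with doing it at 12 V.
power_w = 1000
base_amps = power_w / 12

for volts in (12, 24, 48):
    amps = power_w / volts
    rel_loss = (amps / base_amps) ** 2
    print(f"{volts:>2} V rail: {amps:5.1f} A, relative wiring loss {rel_loss:.2f}x")
```

Doubling the voltage halves the current and cuts conduction loss to a quarter; 48 V needs roughly a fifth of the current of 12 V for the same power.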

That is simply not going to happen anytime soon. People are not going to install high voltage lines for their PC, that's expensive and very extreme just to get a PC going.

Good question. Why are only 3rd party cables failing and not Nvidia's? How come all 3rd party cables tend to fail and not Nvidia's original cable? Either Nvidia is making its cable differently (out of spec), or the specification is wrong and all the other brands adhering to it are releasing in-spec products made to a bad specification. Anyway ... the truth is ... Nvidia's cable sucks, too:

Nvidia cables are failing too. This hasn't only been 3rd party cables.

There are just many factors to consider:

- How many people are actually using the Nvidia provided adapter as compared to 3rd party / PSU provided cables? I'd imagine very few as a percentage of the whole use the stock adapter, if not just for the looks. People buying 4090s / 5090s tend to be more picky about those sorts of things.
- Is there a bias in the group of people using the Nvidia adapter vs the stock cable / 3rd party? I'd argue it's possible that the group of people using the Nvidia adapter may be less technically inclined as a whole compared to the other group and thus less capable of narrowing down the issue to the adapter in the first place.

Correlation doesn't equal causation. It'd be akin to saying OEM PCs are better because there are fewer reported issues with them. The basis for that conclusion may be correct, but it ignores the fact that OEM PC owners are less likely to report issues in general because they may not know they have an issue, and they are much less likely to be able to diagnose the root cause of an issue as well. They may blame software, Windows, or something else.

NVME drives are rated at only 60 mating cycles. My system has a hot-swappable bank, and I've had more than one fail after 10-20 swaps. (Interestingly enough, if they don't fail early, they generally last to several hundred.)

You mean M.2. NVMe is a protocol. For example, my U.2 drives use NVMe but they are not rated for the same number of cycles as M.2.
 
This is faulty logic. This is a tech forum, not a home improvement forum.
You missed the point. Electricity is electricity. If engineering a cable to carry 50 amps with zero failures is such a minor problem, then why are 15 amp cables destroying so many homes and killing so many people?

We've shrunk the cables, dropped the cost of pins to fractions of a penny and then increased the current by multiples over normal cables. ... The problem is that MilSpec cable and connector specifications run hundreds of dollars per connection. They aren't these tiny, fraction-of-a-penny plastic housings.
Very true. A more reliable cable could obviously be engineered, but it would be either (a) much more expensive, (b) significantly larger, or (c) have to operate at 24V or higher. Ultimately what matters is the overall failure rate. And so far that seems extremely low.

You'd first have to prove that it was indeed out of spec. You tried that earlier by saying the cable was crimped until it was pointed out that the official spec calls for crimping
No, you simply failed to understand plain English. The spec calls for pins to be crimped to terminate to the cable. My statement was in reference to cable crimping or damage to the point it caused an internal short.

The safety factor of the individual components is irrelevant, the only number that matters is the specification safety factor as it takes all considerations into account
You have it entirely backwards. Take for instance the specification for home wiring: the NEMA 5-15R/15P standard for general-use 120V plugs and receptacles. The specification requires them to handle 15 amps -- that's it. Nothing beyond. Individual component manufacturers engineer in safety factors above that specification.

- Is there a bias in the group of people using the Nvidia adapter vs the stock cable / 3rd party? I'd argue it's possible that the group of people using the Nvidia adapter may be less technically inclined as a whole compared to the other group and thus less capable of narrowing down the issue to the adapter in the first place.
So your theory is that NVidia cable owners aren't reporting melted cables, because, despite seeing a mass of molten plastic on one end, they have no clue why it stopped working? :)
 
You missed the point. Electricity is electricity. If engineering a cable to carry 50 amps with zero failures is such a minor problem, then why are 15 amp cables destroying so many homes and killing so many people?
Eh?

Designing for 50 amps is easy. Designing for overkill is easy. It's real simple mechanical stuff. The hard part is maximizing profit and designing the bare minimum excess to get the job done as fast as possible and as inexpensive as possible.

It all comes down to cost and complacency.

In this case, a cheap, small connector, with no power monitoring by default design, pushed 4x harder than its predecessor. On cards that could afford something better.
 
* Return the Nvidia GPU model and buy another GPU.

I, too, will be returning something again in a few days because the product is not as described in the marketing papers and on the product packaging.

That about sums up what truly needs to happen, because reputable PSU makers have plenty of protections like OCP, OPP, OVP and SCP.

I believe thermistors and line VRMs need to be implemented alongside current limiting on the PSU side and GPU side so affected GPUs don't become fire hazards.

Also, the pins and barrels need more contact surface area. Maybe even a cannon-plug connector needs to be used...

There is more than one cause to this problem. Current balancing and the inability to sense faults is one problem. The spec using a woefully underspecced connector is another problem.
Saying one problem is not a problem because another problem is a problem is just crazy. They are both problems. Together, it's just laughable; especially after such a length of time.
You get it. Others here need to quit deflecting; this is an Nvidia problem and they aren't fixing it.
 
You missed the point. Electricity is electricity. If engineering a cable to carry 50 amps with zero failures is such a minor problem, then why are 15 amp cables destroying so many homes and killing so many people?


Very true. A more reliable cable could obviously be engineered, but it would be either (a) much more expensive, (b) significantly larger, or (c) have to operate at 24V or higher. Ultimately what matters is the overall failure rate. And so far that seems extremely low.


No, you simply failed to understand plain English. The spec calls for pins to be crimped to terminate to the cable. My statement was in reference to cable crimping or damage to the point it caused an internal short.


You have it entirely backwards. Take for instance the specification for home wiring: the NEMA 5-15R/15P standard for general-use 120V plugs and receptacles. The specification requires them to handle 15 amps -- that's it. Nothing beyond. Individual component manufacturers engineer in safety factors above that specification.


So your theory is that NVidia cable owners aren't reporting melted cables, because, despite seeing a mass of molten plastic on one end, they have no clue why it stopped working? :)

Most cable fires occur where there is a discontinuity. A cable rated for 50 amps is fine, until it gets crimped, bent, and generally starts to fatigue over time. Because there is no standard for replacement of 50 amp cables over time you get wear causing hot spots where internal conductor area decreases, which causes heating and fires. This is also why most people gut houses 30-40 years old and replace wiring...because even in walls there are wear conditions like heating, cooling, and making outside coatings become brittle.

You miss the point entirely on the mil-spec versus PCI-SIG spec. Milspec requires aggressive testing, wear conditions, and puts durability above cost, period. The specification provided for the 12 volt high power doesn't test for things like an unbalanced load...which we know happens. Back to the start of this thread people posited it was user error. It was faulty 3rd party cables. Thing is, multiple instance of people failing these were within spec...so crappy spec produces crappy results. Period.

No comment on this. What I do see is you deciding cable crimping is a potential issue, then pretending it isn't an issue when you claim burning down houses. It's almost like you've decided on a magical wall that prevents failure from wear only when it'd make your point invalid. So...are you being selective in thinking or just wanting to pretend that the obvious answer you give isn't an obvious retort to your point about how well specified items can fail (due to going out of spec)? You can't have this both ways.

-don't want to argue specifications not written for computer hardware, because apples are not oranges-

-I've got nothing. This stupid argument that people are always the problem is so much an engineering fallacy. I get it, but claiming failures are all down to user error is silly when you design something that your fellow engineers can bastardize...if you even got the design right in the first place. Layer that with a cost savings mentality, and I'm just out of any reasonable world in which these arguments are had by anybody who isn't seeking blame before seeking a solution.-
 
All I got to say is - if this was a design issue - we should be seeing a LOT of incidents. Like, hundreds - possibly thousands of them. I can't fathom how it's a design issue (which means it should plague ALL cables) with failure rates at 0.05% or something.
 
Designing for 50 amps is easy.... It's real simple mechanical stuff.
How many such cables have you designed? A number somewhere between 0 and none? If 50A plug-and-play cabling was easy, automakers wouldn't be now shifting to 48V systems, to reduce the amperage and cable mass that modern vehicles are now requiring.

Milspec requires aggressive testing, wear conditions, and puts durability above cost, period.
And Milspec cabling and connectors would add a few hundred dollars to the price of these cards. Is that what you're advocating?

What I do see is you deciding cable crimping is a potential issue, then pretending it isn't an issue when you claim burning down houses.
Eh? I can't even begin to fathom how you transmogrified my plain English into that. I stated that per-pin sensing was far from a requirement, but if one wanted such a feature, it makes far more sense on the supply side than the demand side, because of all the failure modes that supply can sense (like an internal cable short), but demand cannot.

All I got to say is - if this was a design issue - we should be seeing a LOT of incidents. Like, hundreds - possibly thousands of them. I can't fathom how it's a design issue (which means it should plague ALL cables) with failure rates at 0.05% or something.
It's like claiming every auto fatality is a design flaw, because we could have better engineered the vehicle. I once saw a calculation demonstrating we could eliminate more than 99% of all such fatalities -- but the resultant vehicles would be far larger and heavier, have a fraction of the fuel efficiency, and cost several million dollars each. Does that mean there's a "design flaw" in every passenger car made?
 
Most cable fires occur where there is a discontinuity. A cable rated for 50 amps is fine, until it gets crimped, bent, and generally starts to fatigue over time. Because there is no standard for replacement of 50 amp cables over time you get wear causing hot spots where internal conductor area decreases, which causes heating and fires. This is also why most people gut houses 30-40 years old and replace wiring...because even in walls there are wear conditions like heating, cooling, and making outside coatings become brittle.

You miss the point entirely on the mil-spec versus PCI-SIG spec. Milspec requires aggressive testing, wear conditions, and puts durability above cost, period. The specification provided for the 12 volt high power doesn't test for things like an unbalanced load...which we know happens. Back to the start of this thread people posited it was user error. It was faulty 3rd party cables. Thing is, multiple instance of people failing these were within spec...so crappy spec produces crappy results. Period.

No comment on this. What I do see is you deciding cable crimping is a potential issue, then pretending it isn't an issue when you claim burning down houses. It's almost like you've decided on a magical wall that prevents failure from wear only when it'd make your point invalid. So...are you being selective in thinking or just wanting to pretend that the obvious answer you give isn't an obvious retort to your point about how well specified items can fail (due to going out of spec)? You can't have this both ways.

-don't want to argue specifications not written for computer hardware, because apples are not oranges-

-I've got nothing. This stupid argument that people are always the problem is so much an engineering fallacy. I get it, but claiming failures are all down to user error is silly when you design something that your fellow engineers can bastardize...if you even got the design right in the first place. Layer that with a cost savings mentality, and I'm just out of any reasonable world in which these arguments are had by anybody who isn't seeking blame before seeking a solution.-
Well written. In my line of work I use a megger for insulation testing, to see if it finds a discontinuity between two separate conductors. I've seen arcs in BBQ pit igniters cut across the wire and not hit the electrode, so it wouldn't function properly, because the insulation was crushed/stretched to the point it was too thin in that one spot.
 
How many such cables have you designed? A number somewhere between 0 and none? If 50A plug-and-play cabling was easy, automakers wouldn't be now shifting to 48V systems, to reduce the amperage and cable mass that modern vehicles are now requiring.


And Milspec cabling and connectors would add a few hundred dollars to the price of these cards. Is that what you're advocating?


Eh? I can't even begin to fathom how you transmogrified my plain English into that. I stated that per-pin sensing was far from a requirement, but if one wanted such a feature, it makes far more sense on the supply side than the demand side, because of all the failure modes that supply can sense (like an internal cable short), but demand cannot.


It's like claiming every auto fatality is a design flaw, because we could have better engineered the vehicle. I once saw a calculation demonstrating we could eliminate more than 99% of all such fatalities -- but the resultant vehicles would be far larger and heavier, have a fraction of the fuel efficiency, and cost several million dollars each. Does that mean there's a "design flaw" in every passenger car made?
-

You are really trying to be obtuse here. There's a whole rainbow of specifications that exist between milspec and bad spec. We like to call them good specifications. Need I remind you that the existing 8 pin connector specification still exists, and the point of this new specification was to save money by having less connectors? Yes, it was a spec built on cost savings...that was pushed through because Nvidia wanted to save a few extra cents on not having 2 8 pins or an 8+6 pin design, or some other permutation therein. Maybe you forgot, but the argument was that the spec was bad, and not that we "need" milspec. Creating a false dichotomy, then slaying the opposing view point's answer is a silly attempt at making a bad argument sound somewhat good.

You see, at this point I can't tell if you are being obtuse, or cannot read. You respond to a person who said you had issues with cable crimping, that they thought was about the end connector crimp. You then say no, that was crimping in the cable...which is damaged cabling...which by definition is bringing the cable out of specification...which you plainly refuse to acknowledge when you start by stating that 50 amp cabling causes fires...and that if it was so easy to design it never should. Do you not see any thread where all of these things are interlinked? I mean, come on. You cannot pretend that cabling cannot be specified so as to never cause fires, want to blame users for connector installation, and want to pretend that fires aren't mostly caused by wear related items going out of spec. Let me hit you with the fact that a 20 year old house experiences more fires than a 2 year old one...and back it up with statistics rather than some BS hand waving 2025 housing fire statistics from Zebra

-This one gets me. In principle you are not wrong. You are also not right. There's a thing called six sigma, where you get x number of defects in y number of parts. Let's call it 3.4 in a million. Sounds like a small number, but you have thousands of components. If each one has a 99.99966% success rate then the actual rate of cars with 2000 parts and no defects is 0.9999966^2000. That's 0.9932231. Yes, about 6,777 defective cars in a million. That's only 2000 parts, and everyone's at six sigma level performance. If you over-engineer parts you can maybe reduce the 3.4 defects in a million, but when a single circuit board has hundreds of components you functionally have to have a 0 failure rate for any commercially produced car to actually "not have any failures." Add in checking, add in overspeccing, and theoretically with cost no object...like milspec...you could get something that doesn't fail based upon initial construction. Thing is, wear absolutely will eventually cause failure. Lightbulbs burn out, capacitors dry out, thermal cycling destroys mechanical connections. Yes, if cost were no object we could all own stuff like the DUCKS from WWII that are still in use as tourist vehicles today...but arguing that, because it'd cost a lot to remove all failures, you're not obliged to create a spec that isn't demonstrably bad in the real world, is...silly. You guys seem to want to pretend that the failure rate is fractions of a percentage...when some cards include some form of monitoring to prevent this. Fine. Let me suggest that instead of trying to make this about vanishing percentages you ask yourself what the failure rate is required to be across all 12 pins at both connectors. x^24 = 0.9995
x = 0.9999792 -> if the failures were purely down to the process variation expected from a well-controlled process, each contact would be allowed only about 21 failures per million...so you'd expect the supplier to not even meet the minimum of six sigma, let alone the "net zero defects" most companies require of suppliers now, under the auspices that they pawn off all of their issues to subsuppliers, who have to both quote them a good rate and follow specifications, because a few pennies of connector is going to have a profit margin so low that any defect rate worse than a few in a million would mean they are not profitable.
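
A quick check of that arithmetic, treating every pin/terminal as an independent chance of a defect (a sketch under the same assumptions as the post above):

```python
# If an assembly with 24 contact points (12 pins at each of 2 connectors) must hit
# a 99.95% overall success rate, what per-contact reliability does that imply?
target_assembly = 0.9995
contacts = 24

per_contact = target_assembly ** (1 / contacts)
print(f"required per-contact reliability: {per_contact:.7f}")                  # ~0.9999792
print(f"allowed per-contact defect rate:  {(1 - per_contact) * 1e6:.0f} ppm")  # ~21 ppm

# For comparison: classic six sigma quality is ~3.4 defects per million, and a
# 2000-part assembly built entirely at that level still has ~0.68% with a defect.
six_sigma_ok = 1 - 3.4e-6
print(f"2000-part assembly, all six sigma: {six_sigma_ok ** 2000:.4f}")        # ~0.9932
```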
 
-

You are really trying to be obtuse here. There's a whole rainbow of specifications that exist between milspec and bad spec. We like to call them good specifications. Need I remind you that the existing 8 pin connector specification still exists, and the point of this new specification was to save money by having less connectors? Yes, it was a spec built on cost savings...that was pushed through because Nvidia wanted to save a few extra cents on not having 2 8 pins or an 8+6 pin design, or some other permutation therein. Maybe you forgot, but the argument was that the spec was bad, and not that we "need" milspec. Creating a false dichotomy, then slaying the opposing view point's answer is a silly attempt at making a bad argument sound somewhat good.

You see, at this point I can't tell if you are being obtuse, or cannot read. You respond to a person who said you had issues with cable crimping, that they thought was about the end connector crimp. You then say no, that was crimping in the cable...which is damaged cabling...which by definition is bringing the cable out of specification...which you plainly refuse to acknowledge when you start by stating that 50 amp cabling causes fires...and that if it was so easy to design it never should. Do you not see any thread where all of these things are interlinked? I mean, come on. You cannot pretend that cabling cannot be specified so as to never cause fires, want to blame users for connector installation, and want to pretend that fires aren't mostly caused by wear related items going out of spec. Let me hit you with the fact that a 20 year old house experiences more fires than a 2 year old one...and back it up with statistics rather than some BS hand waving 2025 housing fire statistics from Zebra

-This one gets me. In principle you are not wrong. You are also not right. There's a thing called six sigma, where you get x number of defects in y number of parts. Let's call it 3.4 in a million. Sounds like a small number, but you have thousands of components. If each one has a 99.99966% success rate then the actual rate of cars with 2000 parts and no defects is 0.9999966^2000. That's 0.9932231. Yes, about 6,777 defective cars in a million. That's only 2000 parts, and everyone's at six sigma level performance. If you over-engineer parts you can maybe reduce the 3.4 defects in a million, but when a single circuit board has hundreds of components you functionally have to have a 0 failure rate for any commercially produced car to actually "not have any failures." Add in checking, add in overspeccing, and theoretically with cost no object...like milspec...you could get something that doesn't fail based upon initial construction. Thing is, wear absolutely will eventually cause failure. Lightbulbs burn out, capacitors dry out, thermal cycling destroys mechanical connections. Yes, if cost were no object we could all own stuff like the DUCKS from WWII that are still in use as tourist vehicles today...but arguing that, because it'd cost a lot to remove all failures, you're not obliged to create a spec that isn't demonstrably bad in the real world, is...silly. You guys seem to want to pretend that the failure rate is fractions of a percentage...when some cards include some form of monitoring to prevent this. Fine. Let me suggest that instead of trying to make this about vanishing percentages you ask yourself what the failure rate is required to be across all 12 pins at both connectors. x^24 = 0.9995
x = 0.9999792 -> if the failures were purely down to the process variation expected from a well-controlled process, each contact would be allowed only about 21 failures per million...so you'd expect the supplier to not even meet the minimum of six sigma, let alone the "net zero defects" most companies require of suppliers now, under the auspices that they pawn off all of their issues to subsuppliers, who have to both quote them a good rate and follow specifications, because a few pennies of connector is going to have a profit margin so low that any defect rate worse than a few in a million would mean they are not profitable.
Wait, you actually think that Asus implemented (only on its high-end, super expensive model) a way to measure the cable amperage because they are actually worried about the cables melting, or just because they wanted to take advantage of the mass hysteria?

It's like claiming every auto fatality is a design flaw, because we could have better engineered the vehicle. I once saw a calculation demonstrating we could eliminate more than 99% of all such fatalities -- but the resultant vehicles would be far larger and heavier, have a fraction of the fuel efficiency, and cost several million dollars each. Does that mean there's a "design flaw" in every passenger car made?
The same principle applies to every facet of the human life. But it always comes down to cost and practicality. The safer something is, the more impractical and expensive it usually is.

The Asus ROG model is the perfect example. All the naysayers worrying about cables melting: why, there is a model that solves your issue. What, "it's too expensive" I hear you say? Oh well... That's the point.
 
Wait, you actually think that asus implemented (only on it high end super expensive model) a way to measure the cable amperage because they are actually worried about the cables melting or just because they wanted to take advantage of the mass hysteria?
I believe they wanted to at least partially reduce RMA rates on the GPU side.
 
I believe they wanted to at least partially reduce RMA rates on the GPU side.
Then they would have implemented it on every model. It's the cheaper models that cost them more in RMA anyways, since they are making huge bucks on the Astral.
 
Then they would have implemented it on every model. It's the cheaper models that cost them more in RMA anyways, since they are making huge bucks on the Astral.
You gotta up-sell people on Astral in some way.
 
Most cable fires occur where there is a discontinuity. A cable rated for 50 amps is fine, until it gets crimped, bent, and generally starts to fatigue over time. Because there is no standard for replacement of 50 amp cables over time you get wear causing hot spots where internal conductor area decreases, which causes heating and fires. This is also why most people gut houses 30-40 years old and replace wiring...because even in walls there are wear conditions like heating, cooling, and making outside coatings become brittle.

You miss the point entirely on the mil-spec versus PCI-SIG spec. Milspec requires aggressive testing, wear conditions, and puts durability above cost, period. The specification provided for the 12 volt high power doesn't test for things like an unbalanced load...which we know happens. Back to the start of this thread people posited it was user error. It was faulty 3rd party cables. Thing is, multiple instance of people failing these were within spec...so crappy spec produces crappy results. Period.

No comment on this. What I do see is you deciding cable crimping is a potential issue, then pretending it isn't an issue when you claim burning down houses. It's almost like you've decided on a magical wall that prevents failure from wear only when it'd make your point invalid. So...are you being selective in thinking or just wanting to pretend that the obvious answer you give isn't an obvious retort to your point about how well specified items can fail (due to going out of spec)? You can't have this both ways.

-don't want to argue specifications not written for computer hardware, because apples are not oranges-

-I've got nothing. This stupid argument that people are always the problem is so much an engineering fallacy. I get it, but claiming failures are all down to user error is silly when you design something that your fellow engineers can bastardize...if you even got the design right in the first place. Layer that with a cost savings mentality, and I'm just out of any reasonable world in which these arguments are had by anybody who isn't seeking blame before seeking a solution.-
Many fires are unfortunately the fault of people - too many devices connected to one socket via extension cords, DIY modifications during renovations, poorly selected cable parameters, bad wire connections, etc. That's if we are talking about fires in houses or apartments; in the case of the 4090/5090 GPUs it can only be the fault of the manufacturers of the plugs and sockets.
 
.Need I remind you that the existing 8 pin connector specification still exists, and the point of this new specification was to save money by having less connectors? ...that was pushed through because Nvidia wanted to save a few extra cents on not having 2 8 pins ....
Your conspiracy theory has it exactly backwards. NVidia spent far more designing, implementing, and promoting the new standard -- then redesigning it again for 12v-2x6 -- than the trivial cost of a second connector, especially when nearly all of that latter cost would have been borne not by NVidia but by the AIB makers who produce 95%+ of boards. They avoided the multi-connector approach because it is cumbersome, adds a failure mode, and requires an even larger PCB, on a board that already struggles to fit many cases.

You are really trying to be obtuse here. You respond to a person who said you had issues with cable crimping, that they thought was about the end connector crimp. You then say no, that was crimping in the cable...which is damaged cabling...which by definition is bringing the cable out of specification
Talk about obtuse. That's not even remotely close to correct:

E: "If "per pin sensing" [were a requirement], the proper place for it would be on the PSU"
D: "I beg to differ. The correct place for load balancing is on the GPU imho."
E: "And what happens when the failure occurs before the GPU? As just one example, a crimp or damaged insulation can cause a ground short within the cable itself"

You've misinterpreted that exchange three times now in three different ways. Will you go for four?
 
Your conspiracy theory has it exactly backwards. NVidia spent far more designing, implementing, and promoting the new standard...
Trolling here can't help with Nvidia's crappy "new" (a few years old) standard.
 
You missed the point. Electricity is electricity. If engineering a cable to carry 50 amps with zero failures is such a minor problem, then why are 15 amp cables destroying so many homes and killing so many people?
Because household wiring is inside walls made of wood and other flammable materials, and the age of all that hardware varies from brand new to maybe 100 years old. There are various causes and points of failure, from a mouse chewing through insulation, bad outlets, bad wiring, faulty breaker panels (or even fuse panels), to incorrect installations and more. None of those have to do with cable or connector specs. There is a crap ton of variability in household electrical setups, but there is exactly one connection type that we’re actually talking about here, and it’s a newborn compared to what you keep trying to bring into the discussion. Just because electricity is involved doesn’t make for correlation. You’re asking us to be equally outraged by household wiring issues, which is not the subject of this thread.
 
How many such cables have you designed? A number somewhere between 0 and none? If 50A plug-and-play cabling was easy, automakers wouldn't be now shifting to 48V systems, to reduce the amperage and cable mass that modern vehicles are now requiring.
I work on cables all the time.

Ever seen 0000?
 
Just throwing this out there.. because maybe it needed to be mentioned.

Your wall socket, if you live in NA, is capable of like 1200 W, maybe 1500... and 12 amps. 12 of 'em.
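
Worth spelling out, since it underlines the thread's point (a quick sketch; the 80% continuous-load derating is the usual NA rule of thumb):

```python
# A standard North American 15 A / 120 V outlet, derated to 80% for continuous load,
# versus a single 12VHPWR connector delivering 600 W at 12 V.
outlet_w = 120 * 15 * 0.8          # ~1440 W continuous from a big, robust plug
gpu_amps = 600 / 12                # 50 A squeezed through a tiny 12-pin connector

print(f"NA outlet, continuous: {outlet_w:.0f} W at 12 A")
print(f"12VHPWR at 600 W:      {gpu_amps:.0f} A total, ~{gpu_amps / 6:.1f} A per 12 V pin")
```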
 