
It's happening again, melting 12v high pwr connectors

No, what you're seeing is an increase in power demands. When a cable supplying 200 or 300 watts is out of spec, it merely prevents the card from booting, or produces some other innocuous error. When a cable supplying 600 watts is out of spec, it can generate enough heat to melt. This is also why we're only seeing these issues with third-party cables -- and only those from certain manufacturers.
The thing is, it's really hard to make a proper 12VHPWR or 12V-2x6 cable. Its robustness is so low that any larger deviation in pin thickness will cause a problem. On paper, sure, it works. And you're saying that PSU makers are so inept that they can't manufacture a proper cable?

While manufacturing such cables, deviations appear due to process drift. This is normal in manufacturing; it depends on where you set the tolerance limits. Tighter limits increase costs significantly, as tools and molds need to be replaced much more often. For plastics, molds are the biggest problem, as they degrade over time.

In other words, you have to manufacture that 12-pin cable with far higher precision than standard 6/8-pin cables, because there is not enough robustness to compensate for manufacturing deviations. It is so delicate that it even has very limited plug-in cycles. I'd call it a design flaw, because it requires unreasonable precision and manufacturing cost compared to all the cables and connectors previously used.
 
The thing is, it's really hard to make a proper 12VHPWR or 12V-2x6 cable. Its robustness is so low that any larger deviation in pin thickness will cause a problem. On paper, sure, it works. And you're saying that PSU makers are so inept that they can't manufacture a proper cable?

While manufacturing such cables, deviations appear due to process drift. This is normal in manufacturing; it depends on where you set the tolerance limits. Tighter limits increase costs significantly, as tools and molds need to be replaced much more often. For plastics, molds are the biggest problem, as they degrade over time.

In other words, you have to manufacture that 12-pin cable with far higher precision than standard 6/8-pin cables, because there is not enough robustness to compensate for manufacturing deviations. It is so delicate that it even has very limited plug-in cycles. I'd call it a design flaw, because it requires unreasonable precision and manufacturing cost compared to all the cables and connectors previously used.
If what you wrote is true, what did Nvidia actually do here to certify and verify the manufacturers of these plugs and cables?
Did the cables and plugs pass grueling stress tests lasting at least a few days?
Should it bundle with its products, for free (given the astronomical GPU prices), cables tested and recommended by external manufacturers to whom it issued the appropriate certificates?
Should it take responsibility and cover repair or replacement of the equipment?
Should it blame users who have been building and upgrading their computers for decades and have never had as many problems as they do now, at least with the 4090/5090 GPUs: melting plugs, disappearing ROPs, missing hotspot sensor, driver issues, etc.?

Who screwed up? Certainly not the user, but whoever came up with the design and didn't bother to test it thoroughly, correct it, and only then release it to production under strict quality control. You're right, it's a design defect, but everyone washes their hands of it, and it's hard to prove it isn't the user's fault.
 
And you're saying that PSU makers are so inept that they can't manufacture a proper cable?
Some are quite able to do so. Others are able -- but prefer to generate higher profits by cutting corners. And even when they don't cut corners, parts and materials occasionally fail. I'll point out that, in the USA alone, there are more than a quarter million house fires every year caused by failed 120 V wiring, despite that standard being a century old.

In other words, you have to manufacture that 12-pin cable with far higher precision than standard 6/8-pin cables
Very true. You just can't get around the laws of physics. Higher current flow requires higher-quality materials and manufacturing. This is why I believe that, at some point in the not-too-distant future, consumer PSUs will include a 24 V or 48 V supply line. It's just impractical to supply kilowatt-level power on 12 V lines.
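The 12 V vs. higher-voltage argument is just Ohm's law arithmetic. Here's a quick sketch; the per-contact resistance is an illustrative assumption, not a measured figure:

```python
# Back-of-the-envelope: total current and per-pin heating for a 600 W
# GPU feed at different supply voltages. The 12V-2x6 connector carries
# the load on six 12 V pins; the 5 milliohm contact resistance below
# is an assumed round number for illustration only.

def feed_current(watts, volts):
    """Total current drawn at a given supply voltage (I = P / V)."""
    return watts / volts

def per_pin_loss(watts, volts, pins=6, r_contact=0.005):
    """I^2 * R heat per pin, assuming an even split across pins."""
    i_pin = feed_current(watts, volts) / pins
    return i_pin ** 2 * r_contact

for v in (12, 24, 48):
    total = feed_current(600, v)
    loss_mw = per_pin_loss(600, v) * 1000
    print(f"{v:2d} V: {total:5.1f} A total, {loss_mw:6.1f} mW heat per contact")
```

Doubling the voltage halves the current and quarters the I²R heating at every contact, which is the whole case for a 24 V or 48 V GPU rail.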
 
In other words, you have to manufacture that 12-pin cable with far higher precision than standard 6/8-pin cables, because there is not enough robustness to compensate for manufacturing deviations. It is so delicate that it even has very limited plug-in cycles. I'd call it a design flaw, because it requires unreasonable precision and manufacturing cost compared to all the cables and connectors previously used.
This is an example of the mass hysteria endymio was explaining before. Limited mating cycles have been a thing for every connector that has ever existed, including your PCIe slots, DIMM slots, etc. It was never brought up. And yet it gets brought up for 12VHPWR, as though the fact that you can't mate it 1000 times proves anything, since you couldn't do that with any other connector anyway.
 
If what you wrote is true, what did Nvidia actually do here to certify and verify the manufacturers of these plugs and cables?
Did the cables and plugs pass grueling stress tests lasting at least a few days?
Should it bundle with its products, for free (given the astronomical GPU prices), cables tested and recommended by external manufacturers to whom it issued the appropriate certificates?
Should it take responsibility and cover repair or replacement of the equipment?
Should it blame users who have been building and upgrading their computers for decades and have never had as many problems as they do now, at least with the 4090/5090 GPUs: melting plugs, disappearing ROPs, missing hotspot sensor, driver issues, etc.?

Who screwed up? Certainly not the user, but whoever came up with the design and didn't bother to test it thoroughly, correct it, and only then release it to production under strict quality control. You're right, it's a design defect, but everyone washes their hands of it, and it's hard to prove it isn't the user's fault.
Good question. Why are only 3rd-party cables failing and not Nvidia's? How come all the 3rd-party cables tend to fail and not Nvidia's original cable? Either Nvidia is making its cable differently (out of spec), or the specification is wrong, and all the other brands adhering to it are releasing in-spec products built on a bad specification. Anyway ... the truth is ... Nvidia's cable sucks, too:

[Images: three photos of melted Nvidia connectors]


Maybe they fail less often than 3rd-party ones due to much stricter tolerance limits on the manufacturing process. That makes the product more expensive.
But yes, keep blaming the 3rd-party cable makers anyway. Seasonic, Corsair, be quiet!, Super Flower ... they're all just dumb because they don't know how to make a cable properly.

This is an example of the mass hysteria endymio was explaining before. Limited mating cycles have been a thing for every connector that has ever existed, including your PCIe slots, DIMM slots, etc. It was never brought up. And yet it gets brought up for 12VHPWR, as though the fact that you can't mate it 1000 times proves anything, since you couldn't do that with any other connector anyway.
Everything has plug-in cycles -- but not just 30-50. No one ever bothered about those cycles because it was never important; connectors were robust enough. Such a low number of cycles indicates that the connector is delicate, with a low safety factor. If it weren't, there would be hundreds of plug-in cycles instead.
 
Everything has plug-in cycles -- but not just 30-50. No one ever bothered about those cycles because it was never important; connectors were robust enough. Such a low number of cycles indicates that the connector is delicate, with a low safety factor. If it weren't, there would be hundreds of plug-in cycles instead.
So what are the plug in cycles on dimms, pcie slots, the 8pin power connectors? Cause some of them are in the single digits AFAIK.
 
So what are the plug in cycles on dimms, pcie slots, the 8pin power connectors? Cause some of them are in the single digits AFAIK.
I can only speak for the ATX connector, because I've reused my current PSU a few times. So far, never an issue. But a single-digit number of cycles for any ATX connector would be too few, I'd say. I read somewhere at least 100 cycles for a gold-plated Molex Mini-Fit connector suitable for 16 AWG and 18 AWG wires. Based on the number of re-installs a single machine sees, I'd say even DIMM and PCIe slots can last some time.

Plug-in cycles are not the problem with this connector; they're a symptom of insufficient robustness. Thinner pins require stronger mating force (than connectors with thicker pins) because the mating surface is smaller. Higher mating force causes more wear and tear. With this 12-pin connector, you can plug it in for the first time and it may still burn shortly after. It's more about luck than plug-in cycles. On that basis, yes, I agree, the cycle-count concern may have been brought to the public somewhat unfounded.
 
So what are the plug in cycles on dimms, pcie slots, the 8pin power connectors? Cause some of them are in the single digits AFAIK.
NVMe drives are rated for only 60 mating cycles. My system has a hot-swappable bank, and I've had more than one fail after 10-20 swaps. (Interestingly enough, if they don't fail early, they generally last several hundred.)

Anyway ... the truth is ... Nvidia's cable sucks, too:
The mere fact you believe a single such instance is relevant demolishes your argument. The number "one" does not constitute a statistical universe. And that's even without considering the environment, prior use or damage, possible overclocking, etc. All parts and materials can fail. But ones that carry 50 amps often melt when they do.
 
The mere fact you believe a single such instance is relevant demolishes your argument. The number "one" does not constitute a statistical universe. And that's even without considering the environment, prior use or damage, possible overclocking, etc. All parts and materials can fail. But ones that carry 50 amps often melt when they do.
I posted 3 pics; that's just what I could quickly find on the internet. I'm not going to spend all day browsing the internet to widen your statistical universe. Feel free to do it yourself.
Do Nvidia connectors suffer from the same problem as well? Yes, they do. Period.
 
There is another problem with this topic, and with a lot of what goes on in the PC market: the clarity of specifications. Trying to find something about mating cycles, I found that Zotac stated in its brochure about 30 mating cycles for their 4090 adapter.
If you don't know where to look, you know nothing; and even if you manage to find the figure for one card, it can be another story for a different manufacturer.
 
I posted 3 pics...Do Nvidia connectors suffer from the same problem as well? Yes, they do. Period.
Only if you define "the problem" as "all cables can fail, especially if damaged, abused, and repeatedly misused". One of those doesn't even appear to be melted, and the other very well may be a 12VHPWR, rather than 12V-2x6. Here, I'll give you a random image to demonstrate how poorly Ferraris are engineered -- they can actually split in two when driven!

[Image: a Ferrari split in two]


For cables carrying 600+ watts, certainly tolerances are tighter and the potential for heat damage higher. Not even Nvidia can escape the laws of physics. However, I defy you to find an example of another cable -- in PCs or any other industry in the world -- rated for 50 amps with higher reliability. And not just the raw current, but 4 sideband signal channels as well, all in a compact package that size. Is it perfect? No ... but it's orders of magnitude better than the Youtube lackwit FUD brigade believes.
 
I can only speak for the ATX connector, because I've reused my current PSU a few times. So far, never an issue. But a single-digit number of cycles for any ATX connector would be too few, I'd say. I read somewhere at least 100 cycles for a gold-plated Molex Mini-Fit connector suitable for 16 AWG and 18 AWG wires. Based on the number of re-installs a single machine sees, I'd say even DIMM and PCIe slots can last some time.

Plug-in cycles are not the problem with this connector; they're a symptom of insufficient robustness. Thinner pins require stronger mating force (than connectors with thicker pins) because the mating surface is smaller. Higher mating force causes more wear and tear. With this 12-pin connector, you can plug it in for the first time and it may still burn shortly after. It's more about luck than plug-in cycles. On that basis, yes, I agree, the cycle-count concern may have been brought to the public somewhat unfounded.
Well, I didn't ask how many times you can mate them in practice; I asked how many you can based on spec, since you are criticizing the 12VHPWR based on spec.
 
No ... but it's orders of magnitude better than the Youtube lackwit FUD brigade believes.
But the cable is only part of the story; the problem lies with the overall design of GPU power delivery. It's like having four-wheel drive but one wheel getting almost all the power (when the other wheels see higher resistance).
This cable could be really great if not for the problem on the GPU, which joins all the 12 V wires into one rail and all the ground wires into another on the PCB.
 
My personal experience with an original 12VHPWR from CableMod (not the 12V-2x6): after mating it a gazillion times (testing the GPU on different machines), zero issues thus far, and I've had the GPU since day one.
 
But the cable is only part of the story; the problem lies with the overall design of GPU power delivery. It's like having four-wheel drive but one wheel getting almost all the power (when the other wheels see higher resistance).
This cable could be really great if not for the problem on the GPU, which joins all the 12 V wires into one rail and all the ground wires into another on the PCB.
Mindless repetition of Internet memes is one of society's largest problems today. There's no design flaw in such cross-coupling: the cable wires themselves are composed of cross-coupled stranded copper. If, due to bending, other damage, or simply poor-quality materials, some strands break or fail, those remaining will "get almost all power" and melt destructively. I've used cables that supplied 250,000 watts in perfect safety, without per-pin sensing. Nor would such a change make the design inherently safer; it closes one failure mode but opens others in its place.

Ultimately what matters is the overall failure rate, which, by all accounts, is substantially below 1%. I have no doubt that PCI-SIG will continue to refine the standard to reduce that further, but I'd lay odds the only real improvement will come by requiring PSUs to add a 24v or 48v supply specifically for GPUs.
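The "one wheel gets almost all the power" scenario that keeps coming up can be sketched as a plain current divider: with both ends of the six 12 V pins tied to a single rail, each pin's share of the total current is set purely by its contact resistance. The resistance values below are assumptions for illustration:

```python
# Current divider across parallel pins tied to one rail at both ends.
# Each pin's current is proportional to its conductance (1/R).

def pin_currents(total_amps, resistances):
    """Split a total current across parallel paths by conductance."""
    conductances = [1.0 / r for r in resistances]
    g_total = sum(conductances)
    return [total_amps * g / g_total for g in conductances]

# Healthy cable: six contacts at an assumed 5 milliohm each
# -> an even split of roughly 8.3 A per pin.
print(pin_currents(50.0, [0.005] * 6))

# One pristine contact among five degraded ones (assumed 20 milliohm):
# the good pin now carries a disproportionate share of the 50 A.
print(pin_currents(50.0, [0.005, 0.020, 0.020, 0.020, 0.020, 0.020]))
```

With these assumed numbers the pristine pin ends up near 22 A, which is the imbalance mechanism both sides of this argument are describing; they disagree only on whether the board should detect it.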
 
So what are the plug in cycles on dimms, pcie slots, the 8pin power connectors? Cause some of them are in the single digits AFAIK.
8-pin: 75 cycles
 
This mindless repetition of Internet memes is one of society's largest problems today. There's no design flaw in such cross-coupling: the cable wires themselves are composed of stranded copper. If, due to bending, other damage, or simply poor-quality materials, some strands break or fail, those remaining will "get almost all power" and melt destructively. I've used cables that supplied 250,000 watts in perfect safety, without per-pin sensing. Nor would such a change make the design inherently safer; it closes one failure mode but opens others in its place.
I would love to agree with you if these failures only happened to people who don't know what they're doing with a PC, but the connector is failing even when plugged in correctly on new hardware. That's just strange, and I can't understand why the design isn't the problem in that case. Or maybe I'm missing something.
edit: What's strange to me is not per-pin sensing, but per-pin load balancing.
Could you elaborate on what type of hardware you were working with?
 
Good question. Why are only 3rd-party cables failing and not Nvidia's? How come all the 3rd-party cables tend to fail and not Nvidia's original cable? Either Nvidia is making its cable differently (out of spec), or the specification is wrong, and all the other brands adhering to it are releasing in-spec products built on a bad specification. Anyway ... the truth is ... Nvidia's cable sucks, too:

[Images: three photos of melted Nvidia connectors]

Maybe they fail less often than 3rd-party ones due to much stricter tolerance limits on the manufacturing process. That makes the product more expensive.
But yes, keep blaming the 3rd-party cable makers anyway. Seasonic, Corsair, be quiet!, Super Flower ... they're all just dumb because they don't know how to make a cable properly.


Everything has plug-in cycles -- but not just 30-50. No one ever bothered about those cycles because it was never important; connectors were robust enough. Such a low number of cycles indicates that the connector is delicate, with a low safety factor. If it weren't, there would be hundreds of plug-in cycles instead.
I didn't write anywhere that Nvidia makes good cables; and secondly, what do connection cycles have to do with it? How many times did new users connect their 4090/5090 GPUs before the plug was damaged? Maybe 3 to 10 times.

I should enjoy "plug and play" gaming, not keep checking whether a burning-plastic smell is coming from the case.
In my opinion, and only mine, the plug and connector on the GPU are crap.
 
If it was required ("per pin sensing"), the proper place for it would be on the PSU itself
I beg to differ. The correct place for load balancing is on the GPU, imho. The GPU asks for the power, gets it, and is too "dumb" (let's say it's made dumb by Nvidia through poor design decisions) to sense that something is wrong, so it keeps asking for full power from the PSU. Checking the load balance is an essential GPU task. Shunt resistors come to mind -- they should be on the GPU.

Also, I personally feel like we are too often deviating from the real responsibility for those issues. Let's not do the dirty work for Nvidia.

We have those problems NOT because of
  • PSU makers skipping protection standards
  • cable vendors not understanding how to execute 12VHPWR/12V-2x6 specs on 16 AWG wires
  • noob users unable to connect cables and adapters (that was a myth from the beginning; the 12V-2x6 revision was presented like a permanent solution, but it did not fix the essential risks of this standard at all)

IMHO we have all those problems because of Nvidia alone! It's that simple.

The design decisions that led to this mess:
- Nvidia ditched the shunt resistors on the 4090 and 5090 to save a tiny amount of cost, thereby increasing the risk for customers. The 3090/3090 Ti were fine in that respect (shunts).
- Then Nvidia designed a toy cable standard that is unable to cope with high power draws of 600 W, or even less when combined with a tiny problem on a pin or wire (whatever the cause: a production flaw, one too many reseats, or even foreign debris like a tiny speck of dust on one of the pins causing the imbalance). Things can start heating up and will melt in the worst case because of Nvidia's design decisions and lack of safety margins.
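For what it's worth, the shunt-based safeguard being argued for could look something like the sketch below. This is a hypothetical illustration, not anyone's actual firmware or board design; the shunt value, per-pin limit, and imbalance threshold are all made-up numbers:

```python
# Hypothetical per-pin monitoring via shunt resistors: a shunt in
# series with each pin drops a small voltage proportional to current
# (I = V / R_shunt), so firmware can flag overcurrent or imbalance.
# All constants below are illustrative assumptions.

R_SHUNT = 0.002          # 2 milliohm shunt per pin (assumed)
PIN_LIMIT_A = 9.5        # per-pin current ceiling (assumed)
IMBALANCE_RATIO = 1.5    # worst pin vs. average before flagging

def check_pins(shunt_millivolts):
    """Convert per-pin shunt drops to amps and report any fault."""
    amps = [mv / 1000 / R_SHUNT for mv in shunt_millivolts]
    avg = sum(amps) / len(amps)
    faults = []
    for i, a in enumerate(amps):
        if a > PIN_LIMIT_A:
            faults.append(f"pin {i}: {a:.1f} A over limit")
        elif avg > 0 and a / avg > IMBALANCE_RATIO:
            faults.append(f"pin {i}: {a:.1f} A vs {avg:.1f} A average")
    return amps, faults

# Even load: six pins near 8.3 A -> no faults reported.
print(check_pins([16.7] * 6))
# One pin hogging current while the others sag -> a fault is reported,
# and a real board could throttle power draw in response.
print(check_pins([44.4, 11.1, 11.1, 11.1, 11.1, 11.1]))
```

The point of the sketch is only that detection is cheap once shunts exist; what the board does with a fault (throttle, shut down, warn) is a separate design decision.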

I assume even Asus engineers understood the problem (check the design of the Matrix/Astral GPUs). And even Corsair engineers know it (check the avoidance of 12VHPWR/12V-2x6 where possible on the higher-end PSUs). 600 W+ is not ideal terrain for 12VHPWR.

So it's Nvidia itself (and only them) that should come up with a better connector standard OR with improved failsafe technology and pin load management on the GPU.

I must also say: I had no issues so far with my 4090 and the native Corsair 12VHPWR adapter cable from my HXi. But I feel empathy for anybody having those issues and I think, we as customers deserve something better from Nvidia (at least for the flagship GPU price tag nowadays).
 
Only if you define "the problem" as "all cables can fail, especially if damaged, abused, and repeatedly misused". One of those doesn't even appear to be melted, and the other very well may be a 12VHPWR, rather than 12V-2x6. Here, I'll give you a random image to demonstrate how poorly Ferraris are engineered -- they can actually split in two when driven!

[Image: a Ferrari split in two]
When someone intentionally takes risks and drives like an idiot, they'll end up like an idiot (or dead), along with a car split in two.
When someone plugs in the connector as it should be plugged and it then melts due to uneven current distribution (a design flaw),
because the pin mating surface varies that much from pin to pin, that's something different.

12VHPWR and 12V-2x6 are practically the same. There's no difference in pin thickness; the 12 pins are just 0.25 mm longer.
That extends the mating surface by less than 10%, which is still not enough once manufacturing deviations and tolerances are taken into account.
That's why 12V-2x6 cables from other makers burn as well.

Ultimately what matters is the overall failure rate, which, by all accounts, is substantially below 1%. I have no doubt that PCI-SIG will continue to refine the standard to reduce that further, but I'd lay odds the only real improvement will come by requiring PSUs to add a 24v or 48v supply specifically for GPUs.
It's not a failure like any other failure. It depends: even in workplace risk assessment, you have multipliers that represent the severity of an accident.
So even a <1% failure rate with a potential fire-hazard outcome is unacceptable. Normally, things that pose a life hazard must have an occurrence of less than one in a million.
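The arithmetic behind that point is trivial but worth spelling out; the install base below is an assumed round number, not a real sales figure:

```python
# Expected-incident arithmetic for the "<1% is still too many" point:
# multiply a fleet size by a per-unit failure rate.

def expected_incidents(units_shipped, failure_rate):
    """Expected number of failures across a fleet of a given size."""
    return units_shipped * failure_rate

units = 1_000_000                                 # assumed install base
print(expected_incidents(units, 0.01))            # a "sub-1%" failure rate
print(expected_incidents(units, 1 / 1_000_000))   # one-in-a-million target
```

At a sub-1% rate, a million cards implies incidents in the thousands, versus roughly one at the one-in-a-million occurrence level the post cites for life-safety hazards.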

For cables carrying 600+ watts, certainly tolerances are tighter and the potential for heat damage higher. Not even Nvidia can escape the laws of physics. However, I defy you to find an example of another cable -- in PCs or any other industry in the world -- rated for 50 amps with higher reliability. And not just the raw current, but 4 sideband signal channels as well, all in a compact package that size. Is it perfect? No ... but it's orders of magnitude better than the Youtube lackwit FUD brigade believes.
I don't care whether this is rated for 50 A or not; this cable is not suitable for delivering 600+ watts in the first place. It works on paper, but in reality it can pose a fire hazard.
Current sensing and distribution control are required for this connector to even come close to being safe.
However compact it is, whatever current it's rated for, when there's a hazard, it's unsafe to use. There's no excuse for making a bad connector that poses fire hazards.
"Does it melt? Well, yeah, but only in like 2-3% of cases, that's not that bad, you know. Show me a better connector with similar dimensions and compactness." There's none. Ask yourself why.

This connector could be safe when paired with appropriate electronics that measure and control amperage. Nvidia decided to cheap out on users: after the RTX 3090 Ti, they removed any protection that could handle uneven current distribution. By doing this, Nvidia put all the "reliability" (of ensuring safe current distribution) on a cable whose connectors have a safety margin (to compensate for manufacturing deviations) that is nearly non-existent.

Well I didn't ask how many times you can mate them in practice, I asked how many you can based on spec, since you are criticizing the 12vh based on spec.
So, it's 75 cycles for a 4.2 mm pitch Molex Mini-Fit connector with tin plating and 100 cycles with gold. Criticizing the 12-pin connector for just 30 cycles is therefore justified. But let me repeat myself: the problem is that one can get into trouble pretty randomly with the 12-pin connector. Most people have connected the card only once, during first installation. Some still ended up with melted connectors. Does that mean the connector can be rated for as few as zero cycles?

My personal experience with an original 12VHPWR from CableMod (not the 12V-2x6): after mating it a gazillion times (testing the GPU on different machines), zero issues thus far, and I've had the GPU since day one.
Hand on your heart, how often do you look there and check for potential problems?

I didn't write anywhere that Nvidia makes good cables; and secondly, what do connection cycles have to do with it? How many times did new users connect their 4090/5090 GPUs before the plug was damaged? Maybe 3 to 10 times.
My reaction to your post wasn't confrontational; it was meant to extend your point. A guy before you stated that only 3rd-party connectors get melted.

I should enjoy "plug and play" gaming, not keep checking whether a burning-plastic smell is coming from the case.
In my opinion, and only mine, the plug and connector on the GPU are crap.
Fully agree. It should be plug in and forget, as it always was before the 12-pin connector was introduced.
The fact that you experience such problems on a $2,500 GPU is royally disastrous.
 
So, we've seen a total of SIX confirmed issues (10 total suspicious/unconfirmed). While this is a problem, I'd say it isn't as rampant as some YouTubers and forum members make it out to be...


10(6, really) out of THOUSANDS of cards in the wild. I'm more worried about Gigabyte's GPU gloop and my mount in a SUP01 than this, lol.
 
Hand on your heart, how often do you look there and check for potential problems?
Every time I'm replugging it. And since I've changed quite a few PCs, I've checked it at least 10 times.

I'm not using an ATX 3.0 PSU, so maybe that helps, since the 12VHPWR is split on the PSU side into 2 sockets, each providing 300 W max.
 
10(6, really) out of THOUSANDS of cards in the wild. I'm more worried about Gigabyte's GPU gloop and my mount in a SUP01 than this, lol.
10 out of an unknown number of cables possibly melted. The concern is that many people could have what happened with the MSI cables with the yellow connector: burnt, yet still functional.
And the only way to know whether the cable load is balanced is to have an ROG Astral GPU; most people aren't going to constantly check their cables either. I think the connector should be something you plug in and never worry about, as the 8-pin cable was, because it was robust enough that you didn't have to worry about plug cycles or bending.
 
10 out of an unknown number of cables possibly melted. The concern is that many people could have what happened with the MSI cables with the yellow connector: burnt, yet still functional.
And the only way to know whether the cable load is balanced is to have an ROG Astral GPU; most people aren't going to constantly check their cables either. I think the connector should be something you plug in and never worry about, as the 8-pin cable was, because it was robust enough that you didn't have to worry about plug cycles or bending.
To be fair, it's SIX... the others are unconfirmed and/or confirmed user error so it's not even 10.

We can all speculate until the cows come home about unreported issues (on any part, mind you). Perhaps we'll start to see that pop up in the future.
I think the connector should be something you plug in and never worry about,
I agree, and, when used properly (12v2x6, connected properly without immediate sharp bends), it is (to me). :)
 
I've said it before and I'll say it again, this screams of designers not considering key variables and different teams making too many assumptions.

If you build one of these cables perfectly to spec (12VHPWR or 12V-2x6), and test it under perfect conditions, it will likely work fantastically and will have no trouble with quite a bit more current too.
Things they didn't seem to consider or factor in on the connector/cable design side:
1. Where they're used... Every PC case and build is slightly different; the GPUs put the connectors in a variety of orientations and locations, often pointed straight at the side panel of the case, so the cable needs to bend immediately. It's like they assumed nobody would ever bend the cables, hence the 35 mm straight-run requirement, which is impossible in some cases. They didn't factor in how hard it is to see/hear/feel the latch click in when the connector is buried under/inside a 12-pound GPU heatsink.
2. Manufacturing tolerances... The pins, sockets, and even plastic moldings are all made as fast and cheaply as possible, and this leads to some pretty wide tolerances. Most of the time everything is fine, but when you put more current on smaller contacts and reduce the contact area (and insertion distance), you increase the risk of issues if anything goes out of tolerance anywhere in the stack (from manufacture, to assembly, to use). They didn't think about how the latch still allows the connector to be pulled backwards in many situations, further reducing said contact area.
3. Who's using these?... Regular schmoes is the answer, and I see time and time again how people design something in CAD, use it in the lab, and say "it worked great for me" even though they designed it (or had direct influence on the design); they just don't consider the end-user at all.
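Point 2 above is easy to sketch as a Monte Carlo experiment: draw per-pin contact resistances from a tolerance band, split the current by conductance, and see how often the worst pin exceeds a per-pin limit. Every parameter here is an assumption for illustration, not measured data:

```python
# Monte Carlo sketch of tolerance stack-up: per-pin contact resistances
# vary around a nominal value, current splits by conductance, and we
# count how often the hottest pin exceeds an assumed per-pin limit.
import random

random.seed(1)

def worst_pin_current(total_amps=50.0, pins=6, r_nom=0.005, spread=0.5):
    """One simulated cable: resistances vary +/- spread around nominal."""
    rs = [r_nom * random.uniform(1 - spread, 1 + spread) for _ in range(pins)]
    gs = [1 / r for r in rs]
    g_total = sum(gs)
    return max(total_amps * g / g_total for g in gs)

def over_limit_fraction(trials=10_000, limit=9.5, **kw):
    """Share of simulated cables whose worst pin exceeds the limit."""
    return sum(worst_pin_current(**kw) > limit for _ in range(trials)) / trials

# Tight tolerances (+/-10%) vs. sloppy ones (+/-50%): the same nominal
# design shifts from rarely exceeding the limit to doing so routinely.
print(over_limit_fraction(spread=0.1))
print(over_limit_fraction(spread=0.5))
```

That's the whole "works perfectly on paper" argument in numeric form: the nominal design is fine, and widening the tolerance band alone creates the failing population.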

Then on the GPU (both Nvidia and AIB sides), they just said "oh, there's this new cable we're supposed to use and it's supposedly great. Let's not do anything with our design to make sure it goes smoothly". Everything about the connector location on the boards, the lack of load balancing, the lack of sensing at all, the single connector for board, etc. all screams that they just assumed the connector/cable is fine and there's nothing to worry about. They didn't consider any limitations, risks, or precautions.

And nobody at a management or corporate level looked at the whole plan and said "hey, these two sides are not considering some things...we might have a problem here". Everybody just said "hey, we have this new cable that we tested thoroughly (under ideal conditions and build-quality), we can market it as a single-cable solution for our high-powered GPUs" and repercussions be damned, they just went full-steam ahead.

You can point at any particular part of the chain and blame any of the pieces, but it's the whole thing. All of it is just something that works 100% perfectly fine...on paper, but as any good engineer knows, the real world is not the same as what's on paper (or CAD). Does any of this mean that every cable will fail or is a ticking time bomb? Nope. Most will be fine. However, there's a much higher risk of problems than we're accustomed to even if you don't have any "user error" because they didn't leave enough room for real world conditions. That wouldn't be as big of a deal if you weren't then stuck in a situation where "oh, the connectors are now melted on my PSU, cable, and $$$$$$ GPU...who's responsible for this and how can I get replacement parts?"
 