I get ir_cow's point: it was a cable that isn't authorised by Nvidia.
Whether a cable is authorized or not misses the point of having a standard in the first place. The whole point of a standard is so that anyone can build a cable to spec and have it be interoperable in the ecosystem.
If the prevailing logic of those defending this nonsense connector is "3rd party cables are bad", that in and of itself is an admission of failure by Dell and Nvidia, the two sponsors of the standard. PCI-SIG requires the sponsors to submit the initial draft of the standard, and let's be honest, Nvidia was the one pushing it through the whole time.
Not all third-party cables are bad; many are likely higher quality than Nvidia's stock adapter. The common thread between all these failures of both 1st-party and 3rd-party cables / adapters is the crap standard.
If the supply cable is out of spec
You'd first have to prove that it was indeed out of spec. You tried that earlier by saying the cable was crimped until it was pointed out that the official spec calls for crimping and that crimping produces a better connection.
Because -- either today or before it -- there has been no "en masse" melting. It is, however, like the SWPE described above, yet another example of mass hysteria, with the proximal cause in both cases being a media campaign aimed at the credulous.
And do you have any proof to back up this additional conspiracy theory?
At this point your comments amount to a Gish gallop: making up as many things as possible without proving any of them.
Finally, we should also talk about power supply units. I mentioned that ages ago already. Nvidia could sell their cards as-is if the power supply units were properly made and the cables and connectors were properly made. I do not mind more expensive power supply units just for Nvidia graphics cards.
Limit the current on each pin with circuitry in the power supply unit. PSUs are cheaply made inside.
Buildzoid made a video a few weeks back on why doing per-pin balancing on the PSU makes no sense; it's a good watch. It's linked earlier in this thread.
FYI, the plastic on the PSU end isn't designed to take temperatures as high as the GPU end, since it isn't exposed to as much heat. The PSU end should not be getting hot, and that is not a result of lower-quality PSUs.
In any case, that's not even the point I'm refuting, but rather the asinine belief that the lack of "per-pin sensing" is some sort of design fault on a peripheral card. If it were required, the proper place for it would be on the PSU itself, since that captures failure modes that occur before the card could sense them. But it's not any sort of requirement.
It's not really a question of whether it's useful; clearly it was, given that Nvidia did it for a long time (3000 series and earlier). The baffling part is why they stopped precisely when power requirements were exploding.
No one said anything about just sensing; the point is balancing. You cannot have one without the other anyway, given that you have to be able to sense current on each pin in order to balance it.
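As a rough sketch of that relationship (pin names, thresholds, and currents below are purely illustrative; no particular card or PSU works exactly this way), the balancing decision is impossible without the per-pin measurements:

```python
# Illustrative only: balancing presupposes sensing.
# Per-pin limits and thresholds here are assumptions, not spec values.

PIN_LIMIT_A = 9.5          # assumed per-pin current rating, in amps
IMBALANCE_THRESHOLD = 0.2  # flag pins more than 20% above the average

def find_overloaded_pins(pin_currents):
    """Return pins carrying too much current.

    pin_currents maps pin name -> measured amps (the 'sensing' step).
    Without these measurements there is nothing to balance against.
    """
    avg = sum(pin_currents.values()) / len(pin_currents)
    return {
        pin: amps
        for pin, amps in pin_currents.items()
        if amps > PIN_LIMIT_A or amps > avg * (1 + IMBALANCE_THRESHOLD)
    }

# Example: one degraded contact forces another pin to carry far more current.
measured = {"12V_1": 5.2, "12V_2": 5.0, "12V_3": 5.1,
            "12V_4": 5.3, "12V_5": 1.1, "12V_6": 12.4}
print(find_overloaded_pins(measured))  # {'12V_6': 12.4} -> throttle or shut down
```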
Your calculation is based on ignoring the safety factor designed into the cables and connectors themselves, rather than the specification. That's the proper place for it, anyway.
The safety factor of the individual components is irrelevant, the only number that matters is the specification safety factor as it takes all considerations into account.
If a hypothetical wire has a safety factor of 2.0, it means jack diddly if the connector is only rated for 1.2, or if some other component or factor would reduce that. You are pretty much asking people to ignore the actual safety margin an end user will see in practice and to instead look at safety factors of individual parts that tell them nothing useful. It's like saying an awful-tasting cheeseburger is actually delicious because, once you remove the stale bun, soggy lettuce, and rancid mayonnaise, the meat patty isn't a totally shriveled puck.
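To put that in numbers (using the hypothetical component figures above, not measurements of anything real), the margin the end user actually gets is set by the weakest link:

```python
# Illustrative only: the effective safety factor of a power path is
# bounded by its weakest component, not its strongest.
component_safety_factors = {
    "wire": 2.0,       # hypothetical
    "connector": 1.2,  # hypothetical
    "pcb_trace": 1.5,  # hypothetical
}
effective = min(component_safety_factors.values())
print(f"Effective safety factor: {effective}")  # 1.2 -- the connector
```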
FYI, the prior commenter's numbers were generous. 12VHPWR has a paltry safety factor of about 1.14:
https://overclock3d.net/reviews/gpu_displays/asus-saved-our-bacon-we-had-12vhpwr-12v-2x6-cable-issues/#:~:text=Cable wear is an issue that PC,safety factor of 1.1. That's pitifully small.
Not that 1.2 is much better.
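For reference, here's roughly where a figure in that ballpark comes from, assuming the commonly cited ~9.5 A per-pin terminal rating (treat the exact numbers as approximate; the linked article lands on ~1.1):

```python
# Back-of-envelope safety factor for 12VHPWR / 12V-2x6.
# The 9.5 A per-pin figure is the commonly cited terminal rating;
# the actual connector datasheet is the authoritative source.
pins = 6            # six 12 V supply pins
per_pin_amps = 9.5  # assumed per-pin rating
volts = 12
rated_watts = 600   # what the spec lets the connector deliver

max_watts = pins * per_pin_amps * volts   # 684 W of raw pin capacity
safety_factor = max_watts / rated_watts   # ~1.14
print(f"{max_watts:.0f} W capacity vs {rated_watts} W rated -> {safety_factor:.2f}x")
```

For comparison, the old 8-pin PCIe connector is rated at 150 W while its pins can physically carry roughly twice that, which is why its commonly cited safety factor sits around 1.9 and is a big part of why it never had this kind of melting problem.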
If you need 50A, you specify 50A -- you don't specify 75, on the assumption that parts will fail at the slightest overage.
Both the rated and maximum values are specified. The rated value is the customer-facing value that everyone sees, while the maximum is the value that should not be exceeded, period. The maximum for 12V-2x6 is not that much higher than the rated spec. You seem to be implying that the maximum value is fine to exceed, when that is not the case. You want to stay at or under the rated value.
You're contradicting yourself in your own post. If "the same connector" is rated differently in another application, that proves the manufacturer of that connector is manipulating the rated figure. A specification standard doesn't apply to particular connectors, it merely states that the connector itself should be rated for a specific load.
No, silly. That just means different applications have different safety factors due to various conditions and variables.
What you call manipulation is common practice and common sense.
No.... NVIDIA and PCI-SIG.
The spec lacks a safety factor. Individual vendors don't create specs.
PCI-SIG created the spec for video cards. NVIDIA accepted it and continues to accept it despite the results.
PCI-SIG = its member manufacturers, and in this instance Nvidia sponsored the connector together with Dell. By extension, that means they created the initial draft (as required of sponsors) and Nvidia was very likely the one that pushed it through to the end.
I think only PCI-SIG could force new rules onto NVIDIA, AMD and Intel. But that's out of my field of knowledge.
PCI-SIG isn't a separate body with the power to enforce things on its members. Members can choose whether or not to adopt certain standards, and standards are entirely developed by members. In the case of 12VHPWR and 12V-2x6, that was Nvidia and Dell, a requirement of them being the two sponsors, likely with Nvidia being the main one pushing it.
If what you wrote is true, what did Nvidia do in this regard to issue certificates to manufacturers of plugs and cables and to verify that they hold up in operation?
Did the cables and plugs pass grueling stress tests for at least a few days?
Should it include with its products, for free (taking into account the astronomical prices of these GPUs), cables that were tested and recommended by external manufacturers to whom it issued the appropriate certificates and other paperwork?
Should it take responsibility and cover repairs or replacement of equipment?
Should it blame users who have been building and upgrading their computers for several decades and have never had as many problems as they do now, at least with the 4090 and 5090 GPUs: melting plugs, disappearing ROPs, the missing hotspot sensor, driver problems, etc.?
Who screwed up? Certainly not the user, but whoever came up with it, did not even bother to test it properly, correct it, and only then release it for production under strict quality control. You're right, it's a design defect, but why does everyone wash their hands of it, and why is it so hard to prove that it's not the user's fault?
The PC ecosystem is hands-off. PCI-SIG (and particularly Nvidia, as the sponsor of the connector) designed it, and it's up to manufacturers to follow it. This has been a very successful model for the PC industry for a long time. There's no certification or checking by Nvidia or anything like that; such a thing would increase costs. It's up to the manufacturer to do the testing.
At the end of the day, it comes down to safety margin and bad card design. The connector doesn't have enough margin, and Nvidia stripped its GPUs of the ability to balance load across pins. Both issues were created by Nvidia. Frankly, if either one of those issues didn't exist, the other likely would never have been noticed.
As for who takes the blame, the common party here is Nvidia. It's a standard drafted by Nvidia and a poor card design by Nvidia.
Some are quite able to do so. Others are able -- but prefer to generate higher profits by cutting corners. And even when they don't cut corners, parts and materials do occasionally fail. I'll point out that, in the USA alone, there are more than a quarter million house fires every year caused by failed 120V cabling, despite that standard being a century old.
This is nothing new and yet this issue only exists with this connector. Pretty obvious what that tells us.
Very true. You just can't get around the laws of physics. Higher current flows require higher-quality materials and manufacturing. This is why I believe that, at some point in the not-too-distant future, consumer PSUs will include a 24V or 48V supply line. It's just impractical to supply kilowatt-level wattages on 12V lines.
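To put numbers on that (plain I = P / V, nothing vendor-specific):

```python
# Current needed to deliver a given wattage at different supply voltages.
for watts in (600, 1000):
    for volts in (12, 24, 48):
        amps = watts / volts  # I = P / V
        print(f"{watts} W at {volts} V -> {amps:.1f} A")
# 600 W needs 50 A at 12 V, but only 12.5 A at 48 V;
# 1000 W needs ~83 A at 12 V vs ~21 A at 48 V.
```

And since resistive losses scale as I²R, cutting the current also dramatically cuts the heat dissipated in the same wires and contacts.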
That is simply not going to happen anytime soon. People are not going to install high-voltage lines for their PC; that's expensive and very extreme just to get a PC going.
Good question. Why are only 3rd-party cables failing and not Nvidia's? How come all 3rd-party cables tend to fail and not Nvidia's original cable? Either Nvidia is making its cable differently (out of spec), or the specification is wrong and all the other brands adhering to it are releasing in-spec products built on a bad specification. Anyway ... the truth is ... Nvidia's cable sucks, too:
Nvidia cables are failing too. This hasn't only been 3rd party cables.
There's just many factors to consider:
- How many people are actually using the Nvidia-provided adapter as compared to 3rd-party / PSU-provided cables? I'd imagine very few, as a percentage of the whole, use the stock adapter, if only for the looks. People buying 4090s / 5090s tend to be more picky about those sorts of things.
- Is there a bias in the group of people using the Nvidia adapter vs a PSU-provided / 3rd-party cable? I'd argue it's possible that the group of people using the Nvidia adapter is less technically inclined as a whole compared to the other group, and thus less capable of narrowing the issue down to the adapter in the first place.
Correlation doesn't equal causation. It'd be akin to saying OEM PCs are better because there are fewer reported issues with them. The basis for that conclusion may be correct, but it ignores the fact that OEM PC owners are less likely to report issues in general, because they may not know they have an issue, and they are much less likely to be able to diagnose the root cause of an issue as well. They may blame software, Windows, or something else.
NVMe drives are rated at only 60 mating cycles. My system has a hot-swappable bank, and I've had more than one fail after 10-20 swaps. (Interestingly enough, if they don't fail early, they generally last to several hundred.)
You mean M.2. NVMe is a protocol. For example, my U.2 drives use NVMe but they are not rated for the same number of cycles as M.2.