
RTX 4000 series burning cables thread

I'm sorry, but it's not an anecdote; Johny Guru says it all:

Basically the connector was tested at 12 volts, 55 amps, for 10 hours. Johny says those are unrealistic conditions, but let's do the math (12 volts * 55 amps) and we get a total of 660 watts. The 4090 FE is rated at 450 watts and nVidia recommends an 850 watt power supply, while basically every other card maker recommends a 1000 watt PSU, though they don't list how much power the card is expected to consume. MSI actually states that the Suprim Liquid X can consume 480 watts. But let's get back to reality: TPU did test some 4090s and one of them was the Suprim X model (https://www.techpowerup.com/review/msi-geforce-rtx-4090-suprim-x/39.html). It was "only" rated at 480 watts, yet it drew a maximum of 471 watts with the worst spike being 505 watts. The Strix OC model (https://www.techpowerup.com/review/asus-geforce-rtx-4090-strix-oc/39.html) had a maximum of 517 watts and a spike of 547 watts. However, the lower-end Palit 4090 model (https://www.techpowerup.com/review/nvidia-rtx-4090-450w-vs-600w/), if paired with an ATX 3.0 PSU, consumes nearly 700 watts, yikes. Johny said that at those 660 watts some connectors performed poorly thermally, and it seems that, sadly, 660 watts is becoming a realistic number for these cards.
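For anyone who wants to sanity-check the arithmetic, here's a rough sketch; the figures are the ones quoted above from the reviews, so treat them as approximate:

```python
# Back-of-the-envelope check: the 660 W connector test load vs. the
# measured card draws quoted above (TPU reviews). Figures are approximate.

test_volts = 12.0
test_amps = 55.0
test_watts = test_volts * test_amps  # 660 W test load
print(f"Connector test load: {test_watts:.0f} W")

# Maximum draws / spikes from the linked TPU reviews (watts)
cards = {
    "MSI Suprim X (max)": 471,
    "MSI Suprim X (spike)": 505,
    "ASUS Strix OC (max)": 517,
    "ASUS Strix OC (spike)": 547,
    "Palit, 600 W limit + ATX 3.0 PSU": 700,  # "nearly 700 W" in the review
}

for name, watts in cards.items():
    print(f"{name}: {watts} W ({watts - test_watts:+.0f} W vs. the 660 W test load)")
```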

Also, Johny said that the connector can have internal damage after 30 plug/unplug cycles, and some have shorted in the lab. Johny also says that adapters add resistance, and therefore a higher chance of heat-related failure. If the cable is bent too tightly (within a couple of bend radiuses), the connector can show thermal variance too. So maybe not a full meltdown, but suboptimal temperatures at least. Anyway, these cards are cutting way too close to spec, or over spec, as they are, and I think Johny was too optimistic about those new cables/adapters. After all, we had burned cables with 3090s and 3090 Tis, but we also had cards like the R9 295X2 and they didn't burn connectors. My take is that the jury is still out and this thread is more like a database of events regarding potentially burning cables/adapters. Anyway, the issue is of a potentially dangerous nature and it's only natural to be biased a bit toward the safety side.
I think you need to reread the headline of that YouTube video.

It's Jonny Guru btw, you debated this point with him earlier here on this very forum. You seem to like misconstruing his points.

@jonnyGURU
 
I think you need to reread the headline of that YouTube video.
I'm only arguing that what he says isn't as much of a corner case as he thinks it is. I mean, the data is out in TPU's own review: with an ATX 3.0 PSU the card drew more than 660 watts. At 660 watts it was already concerning; anything more should be dangerous. Also, that's just the 4090; there's the rumored 4090 Ti, which is said to consume a whopping 800 watts. At that point, Corsair will have to ship fire extinguishers with their PSUs. I guess I sound alarmist too, but come on, no real investigation of an issue that seems to matter to quite a lot of people? I don't find it unreasonable to worry about an issue that could literally make your PC catch fire.
 
I'm sorry, but it's not an anecdote; Johny Guru says it all:

Basically the connector was tested at 12 volts, 55 amps, for 10 hours. Johny says those are unrealistic conditions, but let's do the math (12 volts * 55 amps) and we get a total of 660 watts. The 4090 FE is rated at 450 watts and nVidia recommends an 850 watt power supply, while basically every other card maker recommends a 1000 watt PSU, though they don't list how much power the card is expected to consume. MSI actually states that the Suprim Liquid X can consume 480 watts. But let's get back to reality: TPU did test some 4090s and one of them was the Suprim X model (https://www.techpowerup.com/review/msi-geforce-rtx-4090-suprim-x/39.html). It was "only" rated at 480 watts, yet it drew a maximum of 471 watts with the worst spike being 505 watts. The Strix OC model (https://www.techpowerup.com/review/asus-geforce-rtx-4090-strix-oc/39.html) had a maximum of 517 watts and a spike of 547 watts. However, the lower-end Palit 4090 model (https://www.techpowerup.com/review/nvidia-rtx-4090-450w-vs-600w/), if paired with an ATX 3.0 PSU, consumes nearly 700 watts, yikes. Johny said that at those 660 watts some connectors performed poorly thermally, and it seems that, sadly, 660 watts is becoming a realistic number for these cards.

Also, Johny said that the connector can have internal damage after 30 plug/unplug cycles, and some have shorted in the lab. Johny also says that adapters add resistance, and therefore a higher chance of heat-related failure. If the cable is bent too tightly (within a couple of bend radiuses), the connector can show thermal variance too. So maybe not a full meltdown, but suboptimal temperatures at least. Anyway, these cards are cutting way too close to spec, or over spec, as they are, and I think Johny was too optimistic about those new cables/adapters. After all, we had burned cables with 3090s and 3090 Tis, but we also had cards like the R9 295X2 and they didn't burn connectors. My take is that the jury is still out and this thread is more like a database of events regarding potentially burning cables/adapters. Anyway, the issue is of a potentially dangerous nature and it's only natural to be biased a bit toward the safety side.
You are lying, after all Nvidia showed us a blurry screenshot of a green line with the text: RTX 4090, proving to us they massively improved on power spikes.

Ergo, you are wrong and Nvidia is right. These users are all lusers, L2P your 4090 pls, one does not simply walk into Ada, etc.

:rolleyes:
 
We can only assume innocence in that the PCI-SIG usually does due diligence.
No, this thread is the type of commentary that helps no one. It's baseless hysteria.

There are always going to be a few users with issues. It's also more likely to be dramatic at higher wattages. A couple of anecdotal cases and a fake WCCF article do not make a trend.

Want another anecdote? My adapter works fine and has been for months, bends included. Why is one anecdote worth more than another? We need stats.

Let me stop you right there: it's not baseless hysteria. There is plenty of factual evidence to support this being a problem, although it may not affect every user.

This is from the article I linked when I said this has been known since last month, when PSU vendors and the PCI-SIG were doing validation testing.
Three different manufacturers and 10 different units with melted connectors. Also of note: this is talking specifically about the 12VHPWR connector itself, not improperly made adapters.

As an electrician who routinely builds high-current control systems for industrial machinery, I am extremely familiar with failures due to improperly terminated connections, so what Buildzoid and WCCFTECH are claiming and what these "leaked slides" are showing holds up to reality.

In a perfect world this isn't an issue, but nothing is perfect: people and manufacturing are fallible. The problem is that this design doesn't take that into account, unlike PCIe power connectors, which are oversized and underrated, and therein lies the problem with 12VHPWR.
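To put rough numbers on the "oversized and underrated" point, here's a quick per-pin comparison; the terminal ratings in it are ballpark assumptions on my part, not datasheet values:

```python
# Rough per-pin current comparison: 8-pin PCIe vs. 12VHPWR, assuming the
# load splits evenly across the 12 V pins. Terminal ratings are assumed
# ballpark figures, not datasheet values.

def per_pin_amps(watts: float, volts: float, power_pins: int) -> float:
    """Current carried by each 12 V pin if the load splits evenly."""
    return watts / volts / power_pins

pcie_8pin = per_pin_amps(150, 12, 3)   # ~4.2 A per pin at the 150 W rating
hpwr_600w = per_pin_amps(600, 12, 6)   # ~8.3 A per pin at the 600 W rating

pcie_pin_rating = 8.0   # assumed Mini-Fit-class terminal rating (A)
hpwr_pin_rating = 9.5   # assumed Micro-Fit-class terminal rating (A)

print(f"8-pin PCIe: {pcie_8pin:.1f} A/pin, ~{pcie_pin_rating / pcie_8pin:.1f}x headroom")
print(f"12VHPWR   : {hpwr_600w:.1f} A/pin, ~{hpwr_pin_rating / hpwr_600w:.1f}x headroom")
```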
 
So the TL;DR seems to be a mix of user error and manufacturing to tolerances so tight that they fail to account for an "acceptable" amount of user error, plus some people probably using bootleg-looking setups with PSUs that have no business running these cards.
 
As an electrician who routinely builds high-current control systems for industrial machinery, I am extremely familiar with failures due to improperly terminated connections, so what Buildzoid and WCCFTECH are claiming and what these "leaked slides" are showing holds up to reality.
Except they were completely fabricated. That's the issue. You can't claim to know anything when half your sources are outright fabricating data.

Let me stop you right there: it's not baseless hysteria.
This thread pretty much is.

There might be something resembling a small issue with the base port standard at extreme bends (and that's a maybe), but you won't find that discussed here amongst the screeching about adapters.

We also can't really be certain of that, courtesy of another round of quality water-muddying by wccftech.

You are lying, after all Nvidia showed us a blurry screenshot of a green line with the text: RTX 4090, proving to us they massively improved on power spikes.

Ergo, you are wrong and Nvidia is right. These users are all lusers, L2P your 4090 pls, one does not simply walk into Ada, etc.

:rolleyes:
Yeah, let's summarize the argument with buzzwords, not logic. Great post, man.
 
What a well-thought-out product...
Make it more complex than ever and worse in every way possible... GJ, Jensen.
Well, well, well, if it isn't the top scammers at CableMod trying to blame someone else (whether it's Nvidia, the user, or even the PSU manufacturer) for THEIR low-quality, overpriced products.
And they "solve" it by milking gullible gamers even more by selling an "adapter" for the cards. Unbelievable.

Why are people still falling for their BS? You can get **the same** Chinese cables on wholesale sites for like 1/10th of the price. Again, the same cables.

Another problem is the connector being too small: you're pulling 50+ amps through something that looks designed for 10, with no decent locking mechanism. Given how big the card is, they could've gone for a sturdier connector with fewer pins, but muh standards and all that, I know.
 
Regardless of your stance:

Surely the simple fact that NO 4090 can be fitted conventionally in the vast majority of PC cases (going by the manufacturers' own recommendations) is a problem in itself,

with the card's size plus the 35mm relief length making it a big issue.

That's going to make this a real possibility for a fair few upgraders.

I would absolutely buy a 90° fitting, but I do think it's horseshit to have to.
 
This is kind of a fail, doubled over. The cabling on the 3090 was already annoying as it is; now they do this on the 4090. smh
 
Yeah, let's summarize the argument with buzzwords, not logic. Great post, man.
Sorry, I can't help myself; the product has exceeded the level at which I take it seriously. These topics come up every time. If it's big, we'll hear about it; the best we have now is inconclusive, but it's clear we're treading a fine line with all these beautiful improvements in technology lately. Three generations into RT and we're looking at largely unconvincing results and a crapload of sacrifices to make it all 'just' work.

Whatever happened to just solid engineering on the gaming front, anyway?
 
Except they were completely fabricated. That's the issue. You can't claim to know anything when half your sources are outright fabricating data.


This thread pretty much is.

There might be something resembling a small issue with the base port standard at extreme bends (and that's a maybe), but you won't find that discussed here amongst the screeching about adapters.

We also can't really be certain of that, courtesy of another round of quality water-muddying by wccftech.


Yeah, let's summarize the argument with buzzwords, not logic. Great post, man.

The official specification calls for 35mm of straight cable from the connector base, which isn't possible without support in most cases, given that gravity pulls the weight of the cable down.

Not sure I'd call it a serious issue, but whether it's only a problem in extreme cases remains to be seen. We need to see how long the cables last in the wild, as a lot of people are likely technically running them out of spec.

In any scenario though, this shouldn't be an issue in the first place. They could have easily added a 35mm boot to the cable to relieve the stress and keep everything in spec.
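As a rough illustration of why the 35mm rule bites in practice, here's a quick clearance check; every dimension except the 35mm figure is a made-up example number, not a measurement of any particular card or case:

```python
# Quick check: is there room for the 35 mm straight run the spec calls for
# before the cable is allowed to bend? All dimensions other than the 35 mm
# figure are hypothetical example values in millimetres.

SPEC_STRAIGHT_RUN_MM = 35

def connector_clearance(panel_to_board_mm: float, card_height_mm: float,
                        plug_body_mm: float) -> float:
    """Free space between the top of the plug body and the side panel."""
    return panel_to_board_mm - card_height_mm - plug_body_mm

# Hypothetical mid-tower: 170 mm from motherboard to side panel,
# 150 mm tall card, 10 mm of plug body sticking out of the socket.
space = connector_clearance(170, 150, 10)
status = "within spec" if space >= SPEC_STRAIGHT_RUN_MM else "cable must bend inside the 35 mm zone"
print(f"Free space above connector: {space} mm -> {status}")
```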
 
Looking at the new 12-pin connector, I would have to say that if the plastic housing is melting, then there must be some serious temperature involved for that to happen. I would have to look at the type of plastic housing being used, even if there is a possible issue with the pin contacts.
 
Looking at the new 12-pin connector, I would have to say that if the plastic housing is melting, then there must be some serious temperature involved for that to happen. I would have to look at the type of plastic housing being used, even if there is a possible issue with the pin contacts.
The answer is simple. The connector was loose or not seated correctly and the pins were arcing.
 
So the new AMD (7000 series) cards are going to have the exact same problem, as they're using an identical connector, right?
 
The answer is simple. The connector was loose or not seated correctly and the pins were arcing.

Poor contact can be the culprit without an arc. The same current through less contact material means more heat.
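To illustrate the point, here's a rough I²R sketch; the contact resistance values are assumed example figures, not measurements:

```python
# I^2 * R heating at a single pin: the same current through a worn or
# partially mated contact dissipates noticeably more heat, no arcing needed.
# Resistance values are assumed example figures, not measurements.

current_per_pin = 8.3  # A, roughly 600 W split across six 12 V pins

contacts = {
    "healthy contact (assumed 5 mOhm)": 0.005,
    "worn/poorly seated contact (assumed 30 mOhm)": 0.030,
}

for label, resistance in contacts.items():
    heat = current_per_pin ** 2 * resistance  # P = I^2 * R
    print(f"{label}: {heat:.2f} W dissipated in the pin")
```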

So the new AMD (7000 series) cards are going to have the exact same problem, as they're using an identical connector, right?

Possibly, if they pull as many watts.
 
So the new AMD (7000 series) cards are going to have the exact same problem, as they're using an identical connector, right?

Wouldn't that depend on the location as well as the orientation of the connector on the card?
 
Sorry, I can't help myself; the product has exceeded the level at which I take it seriously. These topics come up every time. If it's big, we'll hear about it; the best we have now is inconclusive
Only some problems are inconclusive; otherwise, some things are already very clear:
1) It's the highest-wattage single-GPU card, with really high PSU requirements.
2) Both the FE variant and the aftermarket ones simply don't fit in most cases, or require an overly tight cable bend, which is a bit problematic.
3) It's beyond most people's budgets; it's effectively sold at 2 grand, while most people still buy xx50, xx60 and xx70 tier cards, which have been in the 300-600 dollar range.
4) It's clocked too high, or better said, way past the point of strong diminishing returns, to the point of doubling wattage while giving only around 10-20% more performance.
5) It was launched for the ATX 3.0 spec when no ATX 3.0 PSUs existed for sale, and they still don't exist.
 
Possibly, if they pull as many watts.
Aye... though even if they pull a little bit less, the bending is the more concerning issue here...
Wouldn't that depend on the location as well as the orientation of the connector on the card?
Pff... yeah, that's true, but even if you put that cable on the end it could still bend... that connector just looks too tiny and weak...
 
Pff... yeah, that's true, but even if you put that cable on the end it could still bend... that connector just looks too tiny/weak...

Which is why I also said orientation: location by itself may not be enough.
 
Which is why I also said orientation: location by itself may not be enough.
I get you, but I doubt that AMD is going to do anything different, as their cards are already being manufactured as we speak...
 
The ATX 3.0 spec is unrelated to the cable melting, btw.
Sure, but it's a product launched for things that don't exist and that you can't buy to make it natively compatible. It's just plain weird: you can't even use it to its full extent now, only perhaps in the future. To be compatible with the current standard, it keeps the power limit much lower than it otherwise could be. Basically like RT after the RTX 2080 Ti launch: here's some cool shit, but no games to try it out on yourself. It just works! :D

Important update: another 4090 popped:
 
Still has nothing to do with the connector melting. Having an ATX 3.0 PSU wouldn't make any difference.
It does, because due to using the adapter, cards have a limited TDP and already melt the connector; more TDP means even more melted connectors. Just take a look yourself:

Wattage peaks will be even worse than they are now, despite all the fancy marketing telling you that the connector is "smart" and will communicate with the PSU. TPU's review clearly shows it running almost 100 watts over spec with the "smart" connector. I mean, it's just laughable when a literal butt plug from a porn shop could handle more watts than this "smart" connector. Seriously, nV should release a Hard Leather edition with really hard and chonky plugs.
 