
Proposed new Power Connector

Why's that?
600W at 12V is a total of 50 Amps. A single cable of the correct gauge to handle 50A is quite chunky and inflexible - the conductor would need to be roughly 5mm thick. Even if a single-pair cable were made with 50A wires and big, strong, chunky connectors, it would be so stiff and hard to bend towards the graphics card's connector that it'd likely just rip the connector off the board at the solder joint.

Splitting the 50A across six wire pairs makes it 8.3A per wire, which means you can use much thinner wires and smaller connectors that are easier to manipulate, and whose bend radius exerts much less force on the connector. The alarming thing is that the small contacts and AWG16 wire used for 12VHPWR/12V-2x6 are only rated to 9.5A, and they're carrying up to 8.3A by default, with no other factors considered. The older PCIe 8-pin or 6+2-pin connectors are rated to 13A per wire and only carry 4.2A by default, so there's a huge difference in the safety margin.

What you gain by using more pairs of smaller wires is a cable that's easier to use and far more practical, but the risk is that the current isn't distributed evenly across all of the wire pairs. Using shunt resistors to monitor current on the individual 12V wire pairs allows the VRM controller to load-balance all of the wires, but the GPU designs that have caused melted cables haven't done this, so all 600W of power (50A) can go over fewer wires, or even just one wire, which vastly exceeds the max current rating and gets them hot enough to melt/ignite the plastic connector and wire sheathing.
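To put rough numbers on that, here's a quick back-of-the-envelope Python sketch. All the figures are the ones from this post; the worst case assumes a 12VHPWR cable with no load balancing at all:

```python
# Per-pair current for each connector family, using the figures above.
VOLTAGE_V = 12

# name: (power carried in watts, number of 12V wire pairs, per-pin rating)
connectors = {
    "12VHPWR / 12V-2x6 (600 W)": (600, 6, 9.5),
    "PCIe 8-pin (150 W)":        (150, 3, 13.0),
}

for name, (power_w, pairs, rating_a) in connectors.items():
    per_pair_a = power_w / VOLTAGE_V / pairs
    print(f"{name}: {per_pair_a:.2f} A per pair "
          f"({per_pair_a / rating_a:.0%} of the {rating_a} A pin rating)")

# Worst case with no load balancing: all 600 W down one surviving pair.
worst_a = 600 / VOLTAGE_V
print(f"All current down one pair: {worst_a:.0f} A vs a 9.5 A rating "
      f"- over 5x the limit.")
```

That prints 8.33A per pair at 88% of rating for 12VHPWR versus 4.17A at 32% for the 8-pin, which is the whole safety-margin argument in two lines of output.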

Edit:
If it hasn't been linked already, this video explains it all rather well:
 
I'm going to say it, "hold me back": why not make GPUs that use less power on the high end, eh? This higher-clocks-more-power runaway has no future. I spend most of my time undervolting most of my kit just to feel right with the world.
 
Because all you're doing is moving the problem to the motherboard, and nobody wants stupidly long PCIe slots that take up all the space on the board.
I would not call this "stupidly long"; it is the same length plus a little x2 slot, which is already space you can't use on the motherboard because the GPU is in the way. It literally does not impact anything. And this way, you don't have to worry about the pins in the GPU cable frying the connector and the card. I see no downside here. The issue is the pins in the PSU cable. There is no transferring of the issue to the motherboard.

[Image: BTF-style motherboard with the additional power slot behind the PCIe x16 slot]



Consumers don't want more connectors with thicker wires. They want fewer connectors with thinner wires that are less hassle to remember to plug in and can be more easily routed.
The BTF design eliminates the wire.
 
I would not call this "stupidly long"; it is the same length plus a little x2 slot, which is already space you can't use on the motherboard because the GPU is in the way. It literally does not impact anything. And this way, you don't have to worry about the pins in the GPU cable frying the connector and the card. I see no downside here. The issue is the pins in the PSU cable. There is no transferring of the issue to the motherboard.
So where do you think the motherboard is supposed to get its extra 600W from?
Answer: from a 12V-2x6 connector somewhere on the back of the board, using the same melty cables into the same melty socket. All you're doing is adding extra steps between the PSU cable and the GPU to hide the loop of power cable visible on the front side. BTF boards aren't using some magical wireless power transmission system; they just have all the usual connectors on the other side of the board.

The BTF design eliminates the wire.
Correction: the BTF design simply moves the wire to the back, where it's out of sight and jammed in with a tight bend radius against the rear case panel. The wire is still there, of course - whether it's a 12V-2x6 or four 8-pin PCIe cables is up to the motherboard manufacturer to decide.
 
600W at 12V is a total of 50 Amps. A single cable of the correct gauge to handle 50A is quite chunky and inflexible - the conductor would need to be roughly 5mm thick. Even if a single-pair cable were made with 50A wires and big, strong, chunky connectors, it would be so stiff and hard to bend towards the graphics card's connector that it'd likely just rip the connector off the board at the solder joint.

Splitting the 50A across six wire pairs makes it 8.3A per wire, which means you can use much thinner wires and smaller connectors that are easier to manipulate, and whose bend radius exerts much less force on the connector. The alarming thing is that the small contacts and AWG16 wire used for 12VHPWR/12V-2x6 are only rated to 9.5A, and they're carrying up to 8.3A by default, with no other factors considered. The older PCIe 8-pin or 6+2-pin connectors are rated to 13A per wire and only carry 4.2A by default, so there's a huge difference in the safety margin.

What you gain by using more pairs of smaller wires is a cable that's easier to use and far more practical, but the risk is that the current isn't distributed evenly across all of the wire pairs. Using shunt resistors to monitor current on the individual 12V wire pairs allows the VRM controller to load-balance all of the wires, but the GPU designs that have caused melted cables haven't done this, so all 600W of power (50A) can go over fewer wires, or even just one wire, which vastly exceeds the max current rating and gets them hot enough to melt/ignite the plastic connector and wire sheathing.

Edit:
If it hasn't been linked already, this video explains it all rather well:
Thanks, I didn't even know Buildzoid had made yet another video about these connectors; I had to check the date posted lmao

How much current would be going through 2 8-pins at 600W?
 
How much current would be going through 2 8-pins at 600W?
There are three 12V contacts per 8-pin connector, so 6 in total, against 50A total draw = ~8.33A/contact.
 
Thanks, I didn't even know Buildzoid had made yet another video about these connectors; I had to check the date posted lmao

How much current would be going through 2 8-pins at 600W?
8-pin connectors use three 12V pairs each, for a total of six pairs across two connectors. That's 600W over six pairs, so 100W per pair.

100W / 12V = 8.33A, just like 12VHPWR and 12V-2x6.

The difference is that the pins in an 8-pin are larger, stronger, and rated to 13A each, rather than the puny 9.5A of the 12VHPWR and 12V-2x6 connectors.
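To quantify that headroom difference at the same 8.33A per pin (same hypothetical 600W-over-two-8-pins scenario as the question above):

```python
# Ratio of pin rating to actual per-pin current at 600 W over six pairs.
PER_PIN_A = 8.33

for name, rating_a in (("PCIe 8-pin", 13.0), ("12VHPWR / 12V-2x6", 9.5)):
    print(f"{name}: {rating_a / PER_PIN_A:.2f}x headroom")
# -> PCIe 8-pin: 1.56x headroom
# -> 12VHPWR / 12V-2x6: 1.14x headroom
```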
 
If you want more graphics fidelity and frames, you need to increase the GPU's power consumption; look at the amount of detail in games, 600W is what you would expect. We will no doubt move up to 1000W; we're gonna need it for 8K 120fps. All GPUs are power-hungry, inefficient beasts nowadays since we are seeing the full effects of the end of Dennard scaling finally kick in; really, I'm amazed they have gotten this far. When you complain about power-hungry GPUs and expensive GPUs, you are seeing the slow, final demise of Moore's law; this is the scary shit we have been hearing about for years.

History has shown again and again that when new standards are introduced, they often don't work so well and get botched and bloated by the omnipresent incompetence and filthy greed in our world.
 
If you want more graphics fidelity and frames, you need to increase the GPU's power consumption; look at the amount of detail in games, 600W is what you would expect. We will no doubt move up to 1000W; we're gonna need it for 8K 120fps. All GPUs are power-hungry, inefficient beasts nowadays since we are seeing the full effects of the end of Dennard scaling finally kick in; really, I'm amazed they have gotten this far. When you complain about power-hungry GPUs and expensive GPUs, you are seeing the slow, final demise of Moore's law; this is the scary shit we have been hearing about for years.

History has shown again and again that when new standards are introduced, they often don't work so well and get botched and bloated by the omnipresent incompetence and filthy greed in our world.

This has little if anything to do with "filthy greed", and everything to do with attempting to make their cards unique and pushing a standard that was poorly designed and implemented.
 
What about keeping it simple and reliable and putting two 8-pins on any GPU, and the world moves on?
You can easily draw 350-400W out of a single 8-pin without any problems (if you don't have a low-end junk PSU), and then there's another 66W from the PCIe slot... should be enough.
 
What about keeping it simple and reliable and putting two 8-pins on any GPU, and the world moves on?
You can easily draw 350-400W out of a single 8-pin without any problems (if you don't have a low-end junk PSU), and then there's another 66W from the PCIe slot... should be enough.

The issue is the rapid rise of resistance with temperature. Copper's resistance rises about 24% from 20°C to 80°C, so it only takes a small defect in a conductor - or even less when fewer pins are carrying the load - to cause thermal runaway. Nickel-plated contacts are 5X less conductive at their contact point than pure copper. Stainless steel is more than 10X less conductive.
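A toy feedback model shows why that becomes runaway rather than just "a bit warmer". The contact resistance (10 mΩ) and thermal resistance (50°C per watt) below are made-up illustrative values, not measurements; only the 0.393%/°C copper coefficient is a real material constant:

```python
# Current heats the contact, heat raises resistance, higher resistance
# dissipates more power at the same current - and around it goes.
ALPHA_CU = 0.00393   # copper tempco, per deg C (referenced to 20 C)
R20 = 0.010          # assumed contact resistance at 20 C, ohms
THETA = 50.0         # assumed temperature rise per watt, deg C/W
T_AMBIENT = 20.0

def settle(current_a, steps=200, runaway_c=300.0):
    """Iterate the heat/resistance loop until it settles or runs away."""
    temp = T_AMBIENT
    for _ in range(steps):
        r = R20 * (1 + ALPHA_CU * (temp - T_AMBIENT))
        temp = T_AMBIENT + THETA * current_a ** 2 * r
        if temp > runaway_c:
            return None  # no stable operating point
    return temp

for amps in (8.3, 25.0):
    result = settle(amps)
    print(f"{amps:4.1f} A:",
          f"settles near {result:.0f} C" if result else "thermal runaway")
```

With these assumed numbers, 8.3A settles around 60°C while 25A (half the pins lost) never finds an equilibrium at all - the feedback diverges.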
 
What about keeping it simple and reliable and putting two 8-pins on any GPU, and the world moves on?
You can easily draw 350-400W out of a single 8-pin without any problems (if you don't have a low-end junk PSU), and then there's another 66W from the PCIe slot... should be enough.
That won't fly with the standards bodies. Intentionally drawing more than 150 watts per 8-pin PCI Express graphics power connector can cause the card to fail PCI Express certification. That can cost OEM sales. Some OEMs will refuse to buy PCI Express graphics cards that fail certification, as a way to deflect defective-product lawsuits.
 
That won't fly with the standards bodies. Intentionally drawing more than 150 watts per 8-pin PCI Express graphics power connector can cause the card to fail PCI Express certification. That can cost OEM sales. Some OEMs will refuse to buy PCI Express graphics cards that fail certification, as a way to deflect defective-product lawsuits.
Yeah, but they're okay with burning your house down if ngreedia gives them a fat stack of GPUs lmao
 
Yeah, but they're okay with burning your house down if ngreedia gives them a fat stack of GPUs lmao
They just go by the certified stats, man; they don't have the time or the will to check whether those are accurate like we do.
 
The ATX standard needs a complete rework. The motherboard layout and its power connectors are stuck in the past. GPUs need a better power-delivery solution, as the high end is moving towards 1kW slowly but steadily. The 12V PSU standard is also stuck way in the past, and going to a 48V standard would solve many ongoing problems, as well as future ones. Everything else feels like brushing dirt under the carpet.
 
5090s are vanishingly rare right now; 4090s are reasonably common but still comfortably under 1% of gamers, and therefore likely under 0.1% of the wider home PC market. I don't have an exact figure, but 99.x% are served just fine by existing, compatible, ubiquitous 12V ATX PSUs.

The problem isn't that 48V wouldn't be a good idea; it's that 48V isn't needed by the mass market, so the mass market won't shoulder the cost of adoption or change. These are basic, proven economic behaviours that I have no control over; it's just the way the world works.
Keep in mind that 48V is great for marketing - here is a new computer with a new PSU that provides the native 48V needed by USB-C, which is 4x the 12V you got with your old computer and PSU! Moving to a higher voltage for the steppers was very popular with 3D printers.

And there's the problem. That scenario of fewer connectors with thinner wires is why they're melting, as per the laws of physics. We MUST have either more wires or thicker wires, otherwise it'll all overheat and melt/burn.
Not necessarily. You need thicker wires only if you transfer more current, but instead you can increase the voltage and make the wires thinner. That's why interstate power lines run at close to a megavolt.
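The arithmetic behind that, using the thread's 600W example and an assumed per-pair budget of 6.5A (half the 13A Mini-Fit Jr pin rating - a figure chosen just for illustration):

```python
import math

# How many 6.5 A wire pairs you'd need for 600 W at each bus voltage.
POWER_W = 600
AMPS_PER_PAIR = 6.5  # assumed conservative per-pair budget

for volts in (12, 24, 48):
    amps = POWER_W / volts
    pairs = math.ceil(amps / AMPS_PER_PAIR)
    print(f"{volts:2d} V: {amps:5.1f} A total -> {pairs} wire pair(s)")
# -> 12 V: 50.0 A -> 8 pairs; 24 V: 25.0 A -> 4; 48 V: 12.5 A -> 2
```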

The ATX standard needs a complete rework. The motherboard layout and its power connectors are stuck in the past. GPUs need a better power-delivery solution, as the high end is moving towards 1kW slowly but steadily. The 12V PSU standard is also stuck way in the past, and going to a 48V standard would solve many ongoing problems, as well as future ones. Everything else feels like brushing dirt under the carpet.
Agreed! There are many things that were done right with ATX, but it's been a while and it would be good to get an update. 48V for more power delivery and USB-C sounds like a very good idea.

I also think the ability to talk to the PSU's microcontroller, and more monitoring, would be very nice. The power supply should be able to tell us how much each cable is drawing, any power dropouts, temperatures and fan speed, as well as additional health information like how much capacitance is left in the big caps.
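Nothing like this exists in the ATX spec today, but here's a purely hypothetical sketch of what such a telemetry interface could report - every name and field in it is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class CableTelemetry:
    label: str       # e.g. "12V-2x6 #1" - hypothetical naming
    volts: float
    amps: float
    temp_c: float

@dataclass
class PsuHealth:
    cables: list[CableTelemetry]
    fan_rpm: int
    bulk_cap_health_pct: float  # estimated remaining bulk capacitance

    def overloaded(self, amp_limit: float) -> list[str]:
        """Names of cables drawing more than the given limit."""
        return [c.label for c in self.cables if c.amps > amp_limit]

report = PsuHealth(
    cables=[CableTelemetry("12V-2x6 #1", 12.1, 41.7, 58.0)],
    fan_rpm=900,
    bulk_cap_health_pct=96.5,
)
print(report.overloaded(amp_limit=40.0))  # -> ['12V-2x6 #1']
```

A PSU that could flag an overloaded cable like this would catch exactly the uneven-current failure discussed earlier in the thread.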
 
Not necessarily. You need thicker wires only if you transfer more current, but instead you can increase the voltage and make the wires thinner. That's why interstate power lines run at close to a megavolt.
48V is the easy solution to this thread, but I've said in earlier posts why it's practically impossible. Even less significant changes to ATX that were a good idea have utterly failed in the past - and today the entrenched ATX 12V market share is far greater than it was when those previous attempts to move to a new standard failed.
The ATX standard needs a complete rework.
That has been true for years, but ATX has been very resilient to change. The only changes that have taken hold survived because they offered full backward compatibility with the existing market at near-zero cost to the manufacturer and end user.

In an ideal world we'd have a redesign of ATX that gave GPUs priority, since they typically need more power and cooling than CPUs. We could scrap all the various low-voltage nonsense from the SATA and IDE era like ATX12VO did, moving to 20V, 24V, or 48V. And if we really wanted to get the most out of a standards change, we'd abandon the expansion-slot form factor for graphics cards and have them be more like second motherboards, with access to huge tower coolers and proper mounting holes in all four corners. GPU sag would be a thing of the past, as would the silly problem of GPUs covering up 25% of the motherboard and obstructing access to other slots and ports.

The existing ATX standard sucks, but I'm a realist and therefore damn-near certain it will endure with nothing more than minor updates. The negligible number of people affected by melty connectors isn't relevant enough to the market to trigger a change, and the fact that the existing 12V PCIe Mini-Fit Jr standard coexists alongside Nvidia's 12VHPWR without issues is all the more reason to just abandon the stupid, pointless, undersized connector and replace it with something pinned and wired appropriately for 600W using the existing, non-melting Mini-Fit Jr.
 
48V is the easy solution to this thread, but I've said in earlier posts why it's practically impossible. Even less significant changes to ATX that were a good idea have utterly failed in the past - and today the entrenched ATX 12V market share is far greater than it was when those previous attempts to move to a new standard failed.

That has been true for years, but ATX has been very resilient to change. The only changes that have taken hold survived because they offered full backward compatibility with the existing market at near-zero cost to the manufacturer and end user.

In an ideal world we'd have a redesign of ATX that gave GPUs priority, since they typically need more power and cooling than CPUs. We could scrap all the various low-voltage nonsense from the SATA and IDE era like ATX12VO did, moving to 20V, 24V, or 48V. And if we really wanted to get the most out of a standards change, we'd abandon the expansion-slot form factor for graphics cards and have them be more like second motherboards, with access to huge tower coolers and proper mounting holes in all four corners. GPU sag would be a thing of the past, as would the silly problem of GPUs covering up 25% of the motherboard and obstructing access to other slots and ports.

The existing ATX standard sucks, but I'm a realist and therefore damn-near certain it will endure with nothing more than minor updates. The negligible number of people affected by melty connectors isn't relevant enough to the market to trigger a change, and the fact that the existing 12V PCIe Mini-Fit Jr standard coexists alongside Nvidia's 12VHPWR without issues is all the more reason to just abandon the stupid, pointless, undersized connector and replace it with something pinned and wired appropriately for 600W using the existing, non-melting Mini-Fit Jr.

So there is no need to throw out ATX entirely. Just as many newer power supplies have a dedicated socket for the 12VHPWR connector, they could add two or three 48V ports. Because the ports are higher voltage, they need fewer pins and would require less space.

The first application would be powering the GPU, and then motherboards could be made that use a 48V port to power USB-C and talk to the PSU via the data lines in the proposed connector.
 
So there is no need to throw out ATX entirely. Just as many newer power supplies have a dedicated socket for the 12VHPWR connector, they could add two or three 48V ports. Because the ports are higher voltage, they need fewer pins and would require less space.

The first application would be powering the GPU, and then motherboards could be made that use a 48V port to power USB-C and talk to the PSU via the data lines in the proposed connector.
That is the only way it would maybe, possibly, potentially work: a slow migration from 12V to 48V.

The problem would be that 48V GPUs would be more expensive, and they would need to be sold alongside 12V GPUs, so almost nobody would buy them because they cost more. Nvidia aren't a charity, so they're not going to put money into making two different GPUs to cater for a new 48V standard alongside the 12V ones that 99.999999999999% of the market already wants. Dual 12V+48V PSUs would be expensive too, so although people who want to future-proof might buy them, the majority will keep buying the best bang-for-buck offerings, AKA the 12V ATX 3.x PSUs we already have - the ones sitting on shelves that work with absolutely everything except this new, expensive, niche GPU that nobody can buy because it's $3500+ and always out of stock, and that for the mainstream cards solves a problem that doesn't really exist.

Lowest common denominator always wins, no exceptions, with all human history as backing evidence.

Honestly, I like the idea of a 48V system, it's just never going to happen: The laws of economics and capitalism must be obeyed.

Here's a Mini-Fit Jr 10-pin; it's only 25% larger than a single 8-pin:

[Image: Molex Mini-Fit Jr 10-pin connector]

Each pin is rated to 13A, so let's say they can have 6.5A per wire pair for quadruple the safety margin of 12VHPWR, and call it 5 x 6.5A x 12V = a 390W connector. It's still a vast improvement on Nvidia's stupid piece of shit. A single connector will handle GPUs up to 450W (thanks to the PCIe slot power), and for the gobsmackingly stupid cards like the Asus Astral OMFG MOFO Championship Edition Turbo II SE OC, you can just use two of them to provide ~850W to your ridiculous nonsense of a GPU.
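Spelling that arithmetic out, with slot power taken as the 66W figure quoted earlier in the thread:

```python
PAIRS = 5            # a 10-pin Mini-Fit Jr has five 12V pairs
AMPS_PER_PAIR = 6.5  # half the 13 A pin rating
VOLTS = 12
SLOT_W = 66          # 12 V power from the PCIe slot, as quoted above

connector_w = PAIRS * AMPS_PER_PAIR * VOLTS
print(f"One connector:          {connector_w:.0f} W")             # 390 W
print(f"One connector + slot:   {connector_w + SLOT_W:.0f} W")    # 456 W
print(f"Two connectors + slot:  {2*connector_w + SLOT_W:.0f} W")  # 846 W
```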
 
I really hate math.
 
I really hate math.
There's not much of it here. Power in Watts (P) = Voltage (V) x Current in Amps (I)

That's the only equation you need; rearrange as required:

P = V × I   |   V = P / I   |   I = P / V

As for why current is "I", blame the Frenchies - it stands for intensité du courant.
 
In an ideal world we'd have a redesign of ATX...
An ideal world would start by not having an anti-competitive monopoly headed by Mr. Leather-jacket (who originally used to tell us how open standards are good for everyone), brainwashing his loyal subjects into believing that ever-increasing power consumption equals progress, and that badly designed fire-risk connectors without any safety margins (which are just the tip of the iceberg) are good.

That's what sucks ten times more than the ATX standard, and is the true problem.

Let's hope CPUs keep getting some other kind of progress than competing over which one first needs a nuclear reactor to power it, and the reactor's cooling system to prevent a meltdown.


...needed by USB-C...
Now that tiny connector, with its microscopic contacts, is another example of a connector definitely not designed with safety margins...
Also, even if a higher voltage moves the same power at a lower current, that higher voltage increases the risk of arcing if something doesn't work perfectly.
And once an electric arc ignites, DC is nasty in that it just keeps going and going like the Duracell bunny, instead of having the zero-voltage/current moments that help end arcing.
That's why switches and relay contacts have lower voltage and current ratings for DC than for AC.


I really hate math.
Electrical engineering is a fun subject:
It's possible to make a series circuit of three passive components with a higher voltage across the legs/poles of two of those components than what you feed into the whole circuit. :p
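That's the series RLC resonance trick: at the resonant frequency, the voltage across the inductor or capacitor is Q times the source voltage. A minimal sketch with arbitrary component values:

```python
import math

R, L, C = 5.0, 1e-3, 1e-6  # ohms, henries, farads - arbitrary values
V_SOURCE = 10.0            # volts driving the whole series circuit

f0 = 1 / (2 * math.pi * math.sqrt(L * C))  # resonant frequency, Hz
q = math.sqrt(L / C) / R                   # quality factor

print(f"Resonance at {f0:.0f} Hz, Q = {q:.1f}")
print(f"{V_SOURCE:.0f} V in -> {q * V_SOURCE:.0f} V across the capacitor")
# -> Resonance at 5033 Hz, Q = 6.3; 10 V in -> 63 V across the capacitor
```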

Also, if chemistry was something that made you want to sleep, you weren't reading the right book...
Yes, sand can be burned, just like concrete... or asbestos.


There's not much of it here. Power in Watts (P) = Voltage (V) x Current in Amps (I)
In this case we'd better dig into Ohm's law to see that V = I × R, and kick the voltage out of there to get the far more telling P = I² × R.

That's why it's a bad idea to push higher currents through fewer connections without making those connections beefier.
Instead of just making them flimsier, like Mr. Leather-jacket did...
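With an assumed 10mΩ of contact resistance (an illustrative figure, not a measurement), the squared term makes the point:

```python
R_CONTACT = 0.010  # ohms, assumed single-contact resistance

scenarios = (
    (8.33, "design load, one of six pairs"),
    (25.0, "half the pairs lost"),
    (50.0, "everything down one pair"),
)
for amps, case in scenarios:
    print(f"{case}: {amps**2 * R_CONTACT:5.1f} W in one tiny contact")
# 0.7 W -> 6.2 W -> 25.0 W: 6x the current means 36x the heat.
```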

Buildzoid actually forgot that part from his video...
 
Now that tiny connector, with its microscopic contacts, is another example of a connector definitely not designed with safety margins...
Also, even if a higher voltage moves the same power at a lower current, that higher voltage increases the risk of arcing if something doesn't work perfectly.
And once an electric arc ignites, DC is nasty in that it just keeps going and going like the Duracell bunny, instead of having the zero-voltage/current moments that help end arcing.
That's why switches and relay contacts have lower voltage and current ratings for DC than for AC.
I've used USB-C extensively over the past several years, as GaN power adapters are smaller and lighter than the manufacturers' own. No problems. Looking on Google, I see some strange reports of people having cables melt without a load. Odd.

In this case we'd better dig into Ohm's law to see that V = I × R, and kick the voltage out of there to get the far more telling P = I² × R.
Indeed!
 