Monday, July 3rd 2023

12VHPWR Connector Said to be Replaced by 12V-2x6 Connector

According to Igor's Lab, which has gotten its hands on a PCI-SIG draft engineering change notice, it looks like the not entirely uncontroversial 12VHPWR connector won't be long-lived. The PCI-SIG is getting ready to replace it with the 12V-2x6 connector, which will be part of the ATX 3.1 spec and the PCI Express 6.0 spec. The new connector doesn't appear to bring any major physical changes, but there are mechanical modifications, such as the sense pins being recessed further back, to make sure proper contact is made before higher power outputs can be requested by the GPU. The good news is that, at least in the draft spec, the 12V-2x6 connector will be backwards compatible with 12VHPWR connectors.

One of the bigger changes, at least when it comes to how much power the new connector can deliver, is that there will be new 150 and 300 Watt modes in addition to the 450 and 600 Watt modes for the sense pin detection. The 12V-2x6 connector is rated for at least 9.2 Amps per pin, and the new connectors will carry an H++ logo, with the older 12VHPWR connectors getting an H+ logo. The PCI-SIG has also added stricter requirements when it comes to cable design and quality, which should hopefully prevent some of the issues 12VHPWR implementations have suffered from. We should find out more details once the PCI-SIG has finalised the 12V-2x6 connector specification. In the meantime, you can hit up the source link for more technical drawings and details.
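The arithmetic behind those figures is straightforward. As a back-of-the-envelope sketch (assuming six 12 V supply pins and the minimum 9.2 A per-pin rating quoted above; the draft spec may define the modes differently):

```python
# Rough power-budget check for the 12V-2x6 connector.
# Assumption: six current-carrying 12 V pins at the minimum
# 9.2 A per-pin rating mentioned in the draft spec.

VOLTAGE = 12.0       # volts on the supply pins
PINS = 6             # current-carrying 12 V pins
AMPS_PER_PIN = 9.2   # minimum per-pin rating

max_power = VOLTAGE * PINS * AMPS_PER_PIN
print(f"Connector headroom: {max_power:.0f} W")  # 662 W, above the 600 W top mode

# Current actually drawn per pin in each sense-pin power mode:
for mode_watts in (150, 300, 450, 600):
    amps = mode_watts / VOLTAGE / PINS
    print(f"{mode_watts} W mode -> {amps:.1f} A per pin")
```

Even the 600 W mode stays just under the per-pin rating, which is why contact quality, rather than raw capacity, is the critical variable.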
Source: Igor's Lab

137 Comments on 12VHPWR Connector Said to be Replaced by 12V-2x6 Connector

#101
Scrizz
BorisDGIs your GB RTX 4090 with NTK or ASTRON?
Do you mean my power supply cable since the card side is pins?
I have no idea, but it is a decent Silverstone unit that has been running fine for more than half a year. :rockout:
#102
bug
Am*Except the "kink" in this case is a faulty connector that's a literal fire hazard on a $1600+ dollar GPU. "Ironing out the kinks" should've been done for at least a year before the connector was rolled out -- long before using it on your entire line of products. With that line of thinking, I guess the Galaxy Note 7 should've never been recalled -- after all, it was only a "small number of units" that were affected and Samsung were just "ironing out the kinks" with their batteries, right?

Also it's hilariously ironic how Ngreedia didn't have the "courage" to roll out the latest DisplayPort 2.1 standard on their overpriced $1600 GPU (using outdated DP1.4a ports and pocketing those few pennies), but beta-testing a flammable power connector was perfectly OK.
The "kink" I believe is something that wasn't specced specifically. Most implemented it properly, some decided to cut corners.
When the root cause is the length of a couple of sensing pins, that's almost literally a kink.
Panther_SeraphinWith the size that cards are becoming I am starting to think it would be worth looking at perhaps drawing power off two slots next to each other.

The other thing that could be done is an extension to the front of the PCI-e slot to give more dedicated power pins, so the slot could provide, say, 150 watts.

I worry about trying to pull 600+ watts on consumer boards purely due to the extra layers/copper content that would be required. We all complained about how DDR5 boards suddenly jumped up $100 due to the tighter restrictions on signalling. Trying to pull 600+ watts through the board on top of the CPU power draw will require more power planes, and probably a similar jump in pricing again due to the extra layers.
Yeah, I don't have a practical solution either. But that doesn't mean the port/connector doesn't seem to be getting long in the tooth.
#103
Panther_Seraphin
bugYeah, I don't have a practical solution either. But that doesn't mean the port/connector doesn't seem to be getting long in the tooth.
I agree with that. I would like to see backwards compatibility kept, if possible, via an addition rather than a complete change, as a complete change would most likely require platform, GPU and other peripheral changes, and we'd end up in the PCI and PCI-e era of yesteryear again.
#104
TheDeeGee
STSMinerThose look like ASTRON female pins with the dimples, there will be two seams in the pin (unless they have been revised), the NTK female pin only has one seam.



As for powering an RTX 4070 Ti with this cable, it should be fine, most of the RTX 4070 Ti cards are limited to 320 watts max in the BIOS, there are a few that have a max limit of 350 watts though.
Installed the Seasonic cable; it fits as snugly as the default 2-way adapter (which also had dimples).

There is zero wiggle, so I'm feeling good about it :)

Maybe my PNY card just uses a better quality connector on the PCB, because the wiggle JayzTwoCents showed with the CableMod adapter is non-existent on my end.
#105
BorisDG
ScrizzDo you mean my power supply cable since the card side is pins?
I have no idea, but it is a decent Silverstone unit that has been running fine for more than half a year. :rockout:
I was talking about the stock cable which came with your GPU. From what I'm reading and understanding, the issue is the cable itself and not necessarily the connector. Right?
#106
jonnyGURU
Didn't have a chance to read all of the replies, but...

It's already been replaced on NVIDIA's end, back maybe three months ago. It was submitted to PCI-SIG after the fact as a suggested rolling change to the spec. Someone just managed to see it and "report it" as something phenomenal to get users all riled up, just like they did with the whole "four spring vs. three dimple" bullshit.

Slow news day. Nothing here to see.
#107
Scrizz
BorisDGI was talking about the stock cable which came with your GPU. From what I'm reading and understanding, the issue is the cable itself and not necessarily the connector. Right?
I didn't use the adapter from NVidia because my PSU is an ATX 3.0 PSU with the correct cable.
#108
Eskimonster
WirkoSomebody didn't want to pay licensing costs to Amphenol (and they might even get a discount as the technology is at least a decade old).

Up to 36 amps per contact, 200 mating cycles, works in servers (meaning, it works).

www.amphenol-cs.com/product-series/hpce-cable-assembly.html
The idea was a smaller interface, making two cords for one GPU obsolete, rather than a monstrously huge dust trap.
#109
Assimilator
ZoneDymopretty amazing stuff, how hard is it to develop a plug... hard, apparently.

I'm still in favor of an update to the PCI-E slot so it can carry all the power needed.
You can be in favour of whatever you want, it doesn't make it practical. Requiring motherboards to handle up to another 600W on top of the 300+W they're already required to deal with, would massively increase board costs to the point of making them unaffordable - especially for people who never use a discrete GPU, or who use lower-powered ones. While I agree that it would be ideal if GPUs were completely plug-and-play, the actual problem is the need for that much power - we should be focusing on making GPUs more energy-efficient as opposed to faster.
#110
chrcoluk
I was thinking about these issues when installing the cables for my CPU two days ago on my new motherboard.

Those cables have strong tape which prevents bending, but the cable can still be bent sharply at either end of the tape, which I had to do due to the horrible placement of the connectors. Because of this issue, I was paranoid and made some adjustments to make the bends a little less sharp.
#111
sethmatrix7
AssimilatorYou can be in favour of whatever you want, it doesn't make it practical. Requiring motherboards to handle up to another 600W on top of the 300+W they're already required to deal with, would massively increase board costs to the point of making them unaffordable - especially for people who never use a discrete GPU, or who use lower-powered ones. While I agree that it would be ideal if GPUs were completely plug-and-play, the actual problem is the need for that much power - we should be focusing on making GPUs more energy-efficient as opposed to faster.
Then they'd have to do multiple different chipsets with different motherboard models and prices. Just like they already do today.

Why would we care about power usage over performance? We can't drive 4K at high frame rates, and some games don't even run well at 1440p, so why focus on saving $75 a year?
#112
R-T-B
caroline!They keep ignoring that for some reason. Bet this new one is gonna melt as well, it's physics.
Yeah, no. Different materials, different properties.
#113
jonnyGURU
After seeing the new connector and then looking back at the old one, I was like "GOOD LORD! Why did they make those sense pins so long?!" You can play them with your thumbnail like a Jew's harp.
#114
pavle
usinameThere is a fundamental problem with the 12/16/2x6-pin connector
Indeed - almost 10 A through such a small pin means it will heat up (especially with dimple-style contacts), and there is plastic around it. Is there any more to be said?
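The heating concern above comes down to I²R losses at the contact. A minimal sketch of the arithmetic, where the contact resistance values are assumptions for illustration only, not figures from any spec or datasheet:

```python
# Rough I^2 * R estimate of heat dissipated in a single pin contact.
# The resistance values are assumed for illustration; real figures
# depend on plating, wear, and mating quality.

contact_resistance = 0.005  # ohms (5 mOhm, assumed for a fresh, fully mated contact)
current = 9.2               # amps, the per-pin rating quoted in the article

heat_watts = current ** 2 * contact_resistance
print(f"~{heat_watts:.2f} W per contact")  # ~0.42 W

# A worn or partially mated contact with, say, 10x the resistance:
degraded = current ** 2 * contact_resistance * 10
print(f"~{degraded:.2f} W per contact")  # ~4.23 W
```

Since dissipation scales with the square of the current and linearly with contact resistance, a poorly seated pin can run an order of magnitude hotter at the same load, which is consistent with the melting reports being tied to mating quality.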
#115
80-watt Hamster
TheLostSwedeThe "modular" 6+2-pin connectors has this problem every time...
Interestingly enough, the 6+2 problem is solved, but nobody's bothering to implement the solution. There are 6+2 connectors available (or at least there were) that have sliding engagement slots. It was a bit more common on 20+4 headers, but almost everything's straight-up 24 now. Anyway, the only place I ever saw it on 6+2 was on an EVGA 400N1 of all things. The literal cheapest PSU one could buy from them. Why it didn't become standard (or at least common) I'll never understand.
#116
Panther_Seraphin
80-watt HamsterInterestingly enough, the 6+2 problem is solved, but nobody's bothering to implement the solution. There are 6+2 connectors available (or at least there were) that have sliding engagement slots. It was a bit more common on 20+4 headers, but almost everything's straight-up 24 now. Anyway, the only place I ever saw it on 6+2 was on an EVGA 400N1 of all things. The literal cheapest PSU one could buy from them. Why it didn't become standard (or at least common) I'll never understand.
My NZXT 850 watt from yesteryear has this as well.

I think the rise of purely 8-pin based GPUs put the manufacturing cost of 6+2s out of the equation for a lot of people. Quite a few reference boards used dual 6-pins, which were replaced on custom boards with a single 8-pin.

What I don't understand is why we had to go to this connector design when all we had to do was adopt the EPS12V standard for 8-pins to DOUBLE the available power draw per 8-pin. No reinventing the wheel, no massive changes to production. However, I think in the longer term there will be a shift to higher voltages to improve power-carrying capacity and, in THEORY, power conversion efficiency.
#117
80-watt Hamster
Panther_SeraphinMy NZXT 850 watt from yesteryear has this as well.

I think the rise of purely 8-pin based GPUs put the manufacturing cost of 6+2s out of the equation for a lot of people. Quite a few reference boards used dual 6-pins, which were replaced on custom boards with a single 8-pin.

What I don't understand is why we had to go to this connector design when all we had to do was adopt the EPS12V standard for 8-pins to DOUBLE the available power draw per 8-pin. No reinventing the wheel, no massive changes to production. However, I think in the longer term there will be a shift to higher voltages to improve power-carrying capacity and, in THEORY, power conversion efficiency.
24V PCI-E power would solve the problem beautifully. However, the solution I'd prefer would be to move away from >300W graphics cards.

The above would, I'd think, require either step-up from a 12V rail on the PSU side (probably trivial, though there's someone here who'll definitely be able to tell me if I'm wrong) or a dedicated 24V rail, and then step-down at the GPU, since the PCIe slot, and therefore presumably the rest of the design, would remain 12V.
#118
Assimilator
80-watt Hamster24V PCI-E power would solve the problem beautifully. However, the solution I'd prefer would be to move away from >300W graphics cards.

The above would, I'd think, require either step-up from a 12V rail on the PSU side (probably trivial, though there's someone here who'll definitely be able to tell me if I'm wrong) or a dedicated 24V rail, and then step-down at the GPU, since the PCIe slot, and therefore presumably the rest of the design, would remain 12V.
I've proposed replacing 12VDC with 24VDC before, not just for PCIe but for the entirety of the system. As you say, would be a relatively simple change on components that produce 12V to double it, while consumers would need to halve it. 600W @ 12V requires a whopping 50 amps (granted, not per wire) whereas 24V would be a much safer and saner 25A.
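The current-halving argument above is just Ohm's law applied at the system level; a quick sketch of the numbers for a hypothetical 24 V (or 48 V) rail:

```python
# Total current needed to deliver a given power at different bus
# voltages, illustrating the 12 V -> 24 V argument above.

def amps_for(power_w: float, voltage_v: float) -> float:
    """Total current (A) required to deliver power_w at voltage_v."""
    return power_w / voltage_v

for volts in (12, 24, 48):
    print(f"600 W at {volts:>2} V -> {amps_for(600, volts):.1f} A total")
# 600 W at 12 V -> 50.0 A total
# 600 W at 24 V -> 25.0 A total
# 600 W at 48 V -> 12.5 A total
```

Halving the current also quarters resistive losses in the cable for the same conductor (P = I²R), which is the usual rationale for higher distribution voltages.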
#119
Panther_Seraphin
80-watt Hamster24V PCI-E power would solve the problem beautifully. However, the solution I'd prefer would be to move away from >300W graphics cards.
I doubt there is going to be a movement away from higher loads. Especially in CPU and most likely in GPU

Reading around about what is going on in the datacenter and elsewhere, it looks like 400 watt CPUs are going to be the "norm", with up to 1 kW accelerators as well.
#120
80-watt Hamster
Panther_SeraphinI doubt there is going to be a movement away from higher loads. Especially in CPU and most likely in GPU

Reading around what is going on in the datacenter and other things it looks like 400 watt CPUs are going to be the "norm" and up to 1kw accelerators as well.
There won't be, but a body can dream. Data center is its own animal.
#121
Zforgetaboutit
I'm naive. What would be wrong with using the same technique as my bread toaster's 110V @ 1,600W power delivery? It has a simple 2-pin connector. Maybe 3 pins if grounded.
#122
jonnyGURU
ZforgetaboutitI'm naive. What would be wrong using the same technique as my bread toaster's 110V @ 1,600W power delivery? It has a simple 2-pin connector. Maybe 3 pins if grounded.
Because it's 110V vs 12V.
#123
Zforgetaboutit
jonnyGURUBecause it's 110V.
I wrote to propose using the same mechanical attributes, not voltage/current.

E.g. Use a bigger connector with 2-3 pins, PCB-suitable voltage & current, and thicker flexible cable like my toaster.
#124
jonnyGURU
ZforgetaboutitI wrote to propose using the same mechanical attributes, not voltage/current.

E.g. Use a bigger connector with 2-3 pins, PCB-suitable voltage & current, and thicker flexible cable like my toaster.
At 12V, the American plug would not support up to 600W.

It's the current that matters, not the voltage. At 1,600W, 110V is only ~14.5A. The 12VHPWR supports 50A.

Now, that said, if your wires and terminals were 10 gauge, your idea would work. But this would make for an incredibly unwieldy cable.
#125
Zforgetaboutit
I see what you mean. I just looked up a wire gauge explainer. Thanks.

How about, for practical use, doubling the cross-sectional area of the current GPU wire cores?

Current (A) = 600 W / 12 V = 50 A.
6 (incoming power) pins -> 50 A / 6 = ~9 A per pin.

Doubling that to ~18 A per pin, the cable could be 14 gauge (according to my understanding of the explainer). Better than 10 gauge, and it now needs only 3 (x 2) pins.

Still too stiff? I have no sense of this. The next time I go to Home Depot I hope to check it out.

I'm getting ahead of myself - the explainer shows different temperature ranges available per gauge. I don't know the status quo. That could require a thicker gauge than 14.
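The per-pin arithmetic in the posts above can be sketched in a few lines. The six-pin versus three-pin splits are the scenarios discussed; no claim is made here about which gauge is actually safe at these currents:

```python
# Per-pin current for the layouts discussed above: the stock six-pin
# 12VHPWR split versus a hypothetical three-pin layout with doubled
# conductor cross-section.

POWER_W = 600
VOLTAGE_V = 12

total_amps = POWER_W / VOLTAGE_V                        # 50 A total
print(f"{total_amps / 6:.1f} A per pin over 6 pins")    # 8.3 A
print(f"{total_amps / 3:.1f} A per pin over 3 pins")    # 16.7 A
```

Halving the pin count doubles the per-pin current, so the conductor area (and the connector's per-contact rating) has to roughly double to keep the same thermal margin.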