
GC-HPWR Power Connector Can Supply More Than 600 Watts to GPUs

This is a great opportunity for a transition to a higher voltage (or rather, a non-fixed voltage). It's now or never. But obviously it's going to be wasted.

Because you'll still need 12V for CPUs, fans, etc., so you're only introducing more standards that would need new power supplies, motherboards, and additional VRMs added to already expensive components.


This would make more sense in a new form factor altogether, one that uses blade connectors for the PSU as well, much like data center PCs. It eliminates the cable problem entirely.

I'm not just talking about GPUs, I'm talking about everything. So everything moves over to 24V. You could design "compatibility" with existing 12V supplies into the newer VRMs/voltage controllers, similar to how PSUs have active correction for input voltages between 100 and 250V.

Only have an older 12V supply? It draws double the current, up to a set amount. Have a newer supply? Then use things like the sideband connectors in 12VHPWR or sense capabilities in the regulators.

Hell, the current 24-pin ATX connector has an unused pin that could be a sense/communication pin for that exact purpose, as the -5V it was intended for was removed from the ATX spec in 2004. IMO, ATX12VO was a missed opportunity to move us into the 21st century in this respect.

In the industrial fan world, +24V fans are as common as +12V fans and more efficient: the datasheets indicate that a +24V fan draws less power than the +12V fan of the same model type. But this seems to be true only for really powerful industrial fans that you would never use in a PC unless you were deaf.

We are not building industrial PCs. We are building consumer PCs. Why not move to 120v equipment, since that is also used in industrial applications?


So, your suggestion for the cost issue is to design transformers into motherboards to step the voltage up from 12V to 24V? Do you know how expensive/space-consuming that would be?

Absolute genius! :slap:

Also, hilariously, you mention the "21st century" while ignoring that the 24-pin ATX connector is a 21st-century occurrence. 1990s machines had either the old 6+6-pin or the 20-pin connector.

I see what TheinsanegamerN is saying: it would have to be a whole new standard to use +24V power. Supposedly, switching VRMs are more efficient the higher the voltage differential between source and output, and power losses due to I²R would be less at +24V than at +12V, so maybe it would be better to settle on a newer, higher-voltage standard, maybe even higher than +24V. I believe Noctua, Nidec, Delta, Sunon, and San Ace already make +24V, PWM speed-controlled fans. At a higher voltage you might be able to get away with fewer leads too (since the current will be less), so maybe the 24-pin power connector could go back to being 20-pin and the 8-pin supplemental power connector could go back to being a 4-pin.
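A quick back-of-the-envelope sketch of the I²R point (a minimal illustration in Python; the 600W budget and the 10mΩ round-trip cable resistance are assumed numbers, not from any spec):

```python
# Back-of-the-envelope I^2*R cable-loss comparison at a fixed power budget.
POWER_W = 600                  # assumed power delivered to the card
CABLE_RESISTANCE_OHM = 0.010   # assumed round-trip conductor resistance

for rail_v in (12, 24):
    current_a = POWER_W / rail_v                    # I = P / V
    loss_w = current_a ** 2 * CABLE_RESISTANCE_OHM  # P_loss = I^2 * R
    print(f"{rail_v:>2} V rail: {current_a:5.1f} A, {loss_w:5.2f} W lost in the cable")

# Doubling the rail voltage halves the current, so resistive loss
# in the same cable drops to a quarter (25 W -> 6.25 W here).
```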

No. I'm saying we design the boards slightly differently and source slightly different power stages, etc.

You do realise a LOT of the stuff we have out there is actually fairly wide-ranging in terms of voltage input?

AOZ5311NQI-03 - currently a favoured power stage in 4xxx-series GPUs
AOZ5616BQI - a possible drop-in replacement able to utilise a 24V source

Same output ratings, same packaging. The PWM signal would have to be reworked from the current 5V to 3V, which may actually simplify circuitry, as there is currently no 5V source for graphics cards directly from the mobo or cable.

Guess what: both of the above power stages accept 12 volts, but only the bottom one would also support 24. So it's there; it just needs a little work from the PSU, mobo, and graphics makers, etc., to come to an agreement on a standard enabling a transition from today's 12V to 24V in the future. Maybe the use of sideband connectors on top of existing ones would be viable, similar to what Nvidia did in the power delivery of the 4xxx series: no sideband connectors means the board expects 12V, and sideband connectors can then be used to confirm that the voltage supplied by the PSU is 24V.
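A rough sketch of that handshake logic (hypothetical, not from any published spec; the function name and values are made up for illustration):

```python
# Hypothetical sideband handshake: the card defaults to 12 V and only
# reconfigures its input stage for 24 V when the PSU explicitly says so.

def select_input_voltage(sideband_present, psu_advertised_v=None):
    """Return the rail voltage the card should configure its VRM for."""
    if not sideband_present:
        return 12  # legacy PSU without sideband pins: assume 12 V
    if psu_advertised_v == 24:
        return 24  # PSU confirmed 24 V over the sideband
    return 12      # anything unconfirmed falls back to the safe default

print(select_input_voltage(False))     # 12 -> legacy supply
print(select_input_voltage(True, 24))  # 24 -> confirmed 24 V supply
```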

There's a way to avoid changing an ENTIRE industry to get +24V to a single high-wattage device. In fact, we just need to look through the ATX spec's history.

Make "-12V" required, not optional.
just invert an independent +12V Rail, already common in PSUs.
Plus, GPUs already have separate power planes; one from the slot, one from auxiliary 6/8/12-pin input(s)

[Image: 3-wire DC distribution diagram]

I may not be a big fan of Thomas Edison, but he (and the big brains with big pockets that standardized US mains power) already figured this problem out:
Edison 3-Wire Power Distribution - https://pubmed.ncbi.nlm.nih.gov/17757744/

[Image: 2-wire mid-point and 3-wire DC distribution, with the maths]
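To put illustrative numbers on it (a minimal sketch, assuming a 600W card; feeding the load across the +12V and -12V rails gives a 24V differential):

```python
# Edison 3-wire idea applied to a PC PSU, with illustrative numbers.
GPU_POWER_W = 600
print(GPU_POWER_W / 12)          # 50.0 A from a conventional +12 V feed
print(GPU_POWER_W / (12 - -12))  # 25.0 A across the +12 V / -12 V pair

# With a balanced load, the middle (ground) conductor carries only the
# imbalance current, which is the whole point of Edison's 3-wire scheme.
```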
 
@LabRat891 Baby-AT PSUs used to have a -12V line -- I think it was required for early RS-232C line drivers.
 
No? This will also really help with SFF setups. I would've appreciated such a solution when I built my PC.
Sorry, but what does SFF stand for?
 
Power hog GPUs. Ada are by far the most efficient GPUs on planet Earth. But they are power hogs, rofl.

Sure, Nvidia's Ada generation is super efficient. I am not talking about efficiency, I am talking about the race to looney-tunes levels of max power draw. ;) Just look at the linked review. The 4090 has power peaks close to 700W, and that was a GPU with a power limit of only 450W. A 4090 with a power limit of 600W will have even crazier power spikes. That's why they invented their 12VHPWR connector in the first place.

https://www.techpowerup.com/review/nvidia-rtx-4090-450w-vs-600w/3.html
IMO they should just work within the given limits, like in the past. If they want more performance, then they should just engineer more efficient cards. Period.

75W cards: PCI-E (75W)
225W cards: PCI-E (75W) + 1x 8-pin (150W each)
375W cards: PCI-E (75W) + 2x 8-pin (150W each)
525W cards: PCI-E (75W) + 3x 8-pin (150W each)
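Those tiers are just the 75W slot budget plus 150W per 8-pin connector; a minimal sketch of the arithmetic:

```python
# Board power limit = PCIe slot budget + 150 W per 8-pin connector.
SLOT_W = 75        # PCIe x16 slot budget
EIGHT_PIN_W = 150  # per 8-pin auxiliary connector

for n in range(4):
    print(f"{n}x 8-pin: {SLOT_W + n * EIGHT_PIN_W} W")
# 0x 8-pin: 75 W ... 3x 8-pin: 525 W
```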
 
We are not building industrial PCs. We are building consumer PCs. Why not move to 120v equipment, since that is also used in industrial applications?
120V is AC power, not DC like what's used in computers and other parts. It is also an extremely uncommon voltage in industrial applications; mostly just things from 40+ years ago use it. The most common voltage in industrial devices is actually 24V, followed by 12V. If you are going to make hyperbolic posts, you should learn more about the topic you are speaking on.




In computer gear, we actually have seen a move in the server space to both 24V and even the occasional 48V equipment. It is still rare, but there is a push for it, mostly for the reasons already posted in this thread: more efficient VRMs and less copper usage, and in some cases simply more power capability, though usually it is efficiency that drives servers to it.
 
Sure, Nvidia's Ada generation is super efficient. I am not talking about efficiency, I am talking about the race to looney-tunes levels of max power draw. ;) Just look at the linked review. The 4090 has power peaks close to 700W, and that was a GPU with a power limit of only 450W. A 4090 with a power limit of 600W will have even crazier power spikes. That's why they invented their 12VHPWR connector in the first place.

https://www.techpowerup.com/review/nvidia-rtx-4090-450w-vs-600w/3.html
IMO they should just work within the given limits, like in the past. If they want more performance, then they should just engineer more efficient cards. Period.

75W cards: PCI-E (75W)
225W cards: PCI-E (75W) + 1x 8-pin (150W each)
375W cards: PCI-E (75W) + 2x 8-pin (150W each)
525W cards: PCI-E (75W) + 3x 8-pin (150W each)
But what difference does that make to you? I can't fathom these types of arguments. If you agree that it's super efficient but you don't like the power draw, then guess what: you can lower it. Then you get an even more efficient product with lower power draw, which seems to be what you want. So where is the problem here? What am I missing? My 4090 is permanently limited to 70% = 320W. So what is the fuss all about?

Regarding the power spikes, maybe you should actually spend 3 minutes of your time and read the sources you yourself are linking. The card is a 500W model and it's spiking in FurMark, btw, because its VRMs are pretty bad.
 
@LabRat891 Baby-AT PSUs used to have a -12V line -- I think it was required for early RS-232C line drivers.
-12V is an optional power rail even in the recent ATX spec.

ATX Version 3.0 Multi Rail Desktop Platform Power Supply Design Guide
Revision 2.01 February 2023



[Image: ATX 3.0 design guide table listing the -12V rail as optional]


Problem (@TM): it's there for the reasons you mention, i.e. very low power, for fairly noise/transient-tolerant legacy/industrial signaling.

The spec would need revision to 'tighten' regulation and standardize on a much higher ampacity.
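To give a sense of the gap (the 0.3A figure is a typical modern -12V rating I'm assuming, not quoted from the design guide above):

```python
# Typical modern ATX -12 V rails are rated around 0.3 A (assumed figure).
print(0.3 * 12)  # ~3.6 W available from -12 V today

# Feeding a 600 W GPU across +12 V / -12 V (a 24 V differential) needs:
print(600 / 24)  # 25.0 A on each outer conductor
```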
 
Sure, Nvidia's Ada generation is super efficient. I am not talking about efficiency, I am talking about the race to looney-tunes levels of max power draw. ;) Just look at the linked review. The 4090 has power peaks close to 700W, and that was a GPU with a power limit of only 450W. A 4090 with a power limit of 600W will have even crazier power spikes. That's why they invented their 12VHPWR connector in the first place.

https://www.techpowerup.com/review/nvidia-rtx-4090-450w-vs-600w/3.html
IMO they should just work within the given limits, like in the past. If they want more performance, then they should just engineer more efficient cards. Period.
WRT your last comment, how do you know the engineers at Nvidia aren't trying to engineer more efficient and performant video cards? Do you think you could do better?
 
WRT your last comment, how do you know the engineers at Nvidia aren't trying to engineer more efficient and performant video cards? Do you think you could do better?
With respect, they are. Nvidia has brought very efficient cards for a few generations, that was never questioned!! They just clocked them to the moon to slide everything up a notch, so that what would have been the 4050 could be the new 4060, and similar, though even that isn't new.
It's the segmentation and pricing every time, really.
 
This proprietary rubbish for people obsessed with vanity again...
I would prefer to see graphics card manufacturers implement some sort of ubiquitous standard, like CCS Combo 1 connectors. 350kW should be enough for the next one or two, maybe even three GPU generations, the way things seem to be going.
 
With respect, they are. Nvidia has brought very efficient cards for a few generations, that was never questioned!! They just clocked them to the moon to slide everything up a notch, so that what would have been the 4050 could be the new 4060, and similar, though even that isn't new.
It's the segmentation and pricing every time, really.
As someone already pointed out, the 4090 has superior perf. to a 3090 Ti while using LESS power.
 
As someone already pointed out, the 4090 has superior perf. to a 3090 Ti while using LESS power.
The 4090 is the most efficient card on the planet, but apparently it's not efficient enough. Makes you wonder...
 
As someone already pointed out, the 4090 has superior perf. to a 3090 Ti while using LESS power.
And...?

So your reply is that a 4090 is better than a 3090?! For real?!

Re-read my post: efficiency is not the problem.

It's the cost per SKU.

IMHO, and others'.
 
And...?

So your reply is that a 4090 is better than a 3090?! For real?!

Re-read my post: efficiency is not the problem.

It's the cost per SKU.

IMHO, and others'.
It's the halo card for the Ada series, just like the Titan X Pascal was for the Pascal series, and it wasn't cheap either: $1,199, which, thanks to inflation, is $1,527 in today's dollars.
 
But what difference does that make to you? I can't fathom these types of arguments. If you agree that it's super efficient but you don't like the power draw, then guess what: you can lower it. Then you get an even more efficient product with lower power draw, which seems to be what you want. So where is the problem here? What am I missing? My 4090 is permanently limited to 70% = 320W. So what is the fuss all about?

Regarding the power spikes, maybe you should actually spend 3 minutes of your time and read the sources you yourself are linking. The card is a 500W model and it's spiking in FurMark, btw, because its VRMs are pretty bad.

The problem here isn't high power draw or high power spikes. ;) The problem is that they build cards that exceed the power limits of current connector standards, which is why they created a new connector, just to be "king of the hill" in benchmark scores for a few months. Which is just dumb.

Like you said, "you can lower it". Tell that to Nvidia. It's just irresponsible behaviour to release such a product in this day and age. With a smaller power limit it would be a much more rounded product. Plus, they could shrink the cooler (which is oversized anyway; it was designed for an even more extreme chip). This would also bring the price of the 4090 way down. Win/win for everyone.
 
Before commenting, I wanted to research this connector in depth. After a good look, I came to a conclusion.

To me, this design seems like an excellent step forward, as long as the motherboard is designed properly. Based on the examples in the photos, this seems like a MUCH better design than the janky PCIe connector Nvidia has used. The industry needs to switch over to this replacement design ASAP!
 
Wouldn't this motherboard design require a new case paradigm as well? The standoffs in my case won't clear those bottom-mounted power connectors on a bet.
 
Wouldn't this motherboard design require a new case paradigm as well? The standoffs in my case won't clear those bottom-mounted power connectors on a bet.
That one would. But they can make boards with connectors that don't stick out the back of the board.
 
There's a way to avoid changing an ENTIRE industry to get +24V to a single high-wattage device. In fact, we just need to look through the ATX spec's history.

Make "-12V" required, not optional.
just invert an independent +12V Rail, already common in PSUs.
Plus, GPUs already have separate power planes; one from the slot, one from auxiliary 6/8/12-pin input(s)

[Image: 3-wire DC distribution diagram]

I may not be a big fan of Thomas Edison, but he (and the big brains with big pockets that standardized US mains power) already figured this problem out:
Edison 3-Wire Power Distribution - https://pubmed.ncbi.nlm.nih.gov/17757744/

[Image: 2-wire mid-point and 3-wire DC distribution, with the maths]
Isn't the -12V on a PC PSU just a negative polarity? It's really just 12V going back to the PSU, isn't it? I thought the ground on a PSU was actual ground, not a floating ground at 12V with the PC's 12V power coming from the 24V that's above the 12V floating ground.
 
Isn't the -12V on a PC PSU just a negative polarity? It's really just 12V going back to the PSU, isn't it? I thought the ground on a PSU was actual ground, not a floating ground at 12V with the PC's 12V power coming from the 24V that's above the 12V floating ground.
I believe the original -12V was a separately generated rail that had a -12V potential from ground. It was primarily for RS-232 signaling and ISA cards back in the day, but it obviously got phased out, as nothing used it beyond some niche sound cards in later years.
 
I believe the original -12V was a separately generated rail that had a -12V potential from ground. It was primarily for RS-232 signaling and ISA cards back in the day, but it obviously got phased out, as nothing used it beyond some niche sound cards in later years.
Ah, I wondered why it had been pretty much abandoned. It'd be weird to have it on any 24-pin ATX supply, really.
 
Ah, I wondered why it had been pretty much abandoned. It'd be weird to have it on any 24-pin ATX supply, really.
In the '80s, simple and cheap voltage converters didn't exist, but various chips required multiple voltages for power (DRAM, processors, possibly ROM). The RS-232 transmitter needed +12V and -12V because those were the signal voltages. So the PSU's task was to provide +5, -5, +12, and -12 volts to the motherboard, which in turn carried all those voltages to the ISA slots.
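For illustration (a minimal sketch; the helper function is made up, but the levels are the classic ones): RS-232 drives a logical 1 ("mark") negative and a logical 0 ("space") positive, which is why the PSU had to provide both rails:

```python
# Classic RS-232 line levels from +/-12 V driver supplies.
def rs232_line_voltage(bit):
    return -12.0 if bit == 1 else +12.0  # mark is negative, space positive

for bit in (1, 0):
    print(bit, "->", rs232_line_voltage(bit), "V")
```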
 