
It's happening again, melting 12v high pwr connectors

Status
Not open for further replies.
I wish there was a way to make an EPS 8-pin connector revision (EPS PCIe v2/2025 or whatever), where someone puts four 12V AWG16 lines in one row and another four AWG16 lines on the bottom (GND), rates the whole thing at 200W, adds an F+ mark on the plastic (for compatibility reasons), and calls it a day by saving this as "standard requirements".
No pin/key changes, no drama over too-low safety margins... it could have been so easy :(

Note: No key changes, because using the "new" connector in an old port would cause a dead short to ground on 12V (same thing if an old connector gets used in a new port).
The PSU simply won't run, due to a simple short (if this kind of user error kills any PC component, the PSU's manufacturer should be responsible for paying that RMA LOL). Not counting dummies that force the PSU to stay on during such an event, though.
 
I have run a circuit simulation using ISIS Professional Release 7.10 by Labcenter Electronics. On the schematic, RPSU is the resistance internal to the PSU. This includes the current sense resistance as well as the trace resistance. The resistors are numbered 1 to 12 to correspond to the 12 power pins on the 12VHPWR connector. Pins 1 to 6 are connected to +12V and pins 7 to 12 are connected to DC common. I prefer this term over ground, as I associate ground (earth) with the power supply case. The resistance RPSUPIN (1 to 12) is the contact resistance between the two mated pins at the PSU connector end. Resistor RWIRE (1 to 12) is the resistance of the wire. Resistor RGPUPIN (1 to 12) is the contact resistance between the two mated pins at the GPU connector end. Resistor RGPU (1 to 12) is the resistance within the GPU card between the connector and the DC capacitors. All resistances are in milliohms.

Back in the day, when I checked contact resistance on Molex Mini-Fit HCS pins by putting 10 amps of current through them and measuring the voltage drop between the pins, the resistance was typically 3 to 5 milliohms. According to Amphenol, the maximum contact resistance for their 12VHPWR Connector System is 5 milliohms.

I have posted two pictures of the simulation. The first shows the maximum imbalance I can think of with contact resistance within specifications: four of the contacts on each end at the maximum of 5 milliohms and two at 1 milliohm. This is lower resistance than I recall ever seeing with the Molex HCS pins. The maximum difference would also occur with the minimum wire resistance; I selected 4 milliohms as this corresponds to 12 inches of 16 gauge wire. I have also posted a simulation with a wire resistance of 8 milliohms. Note the imbalance goes down with higher wire resistance.
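The split described above can be sketched in a few lines, under a simplified assumption: the six +12V paths are treated as purely parallel between the PSU rail and the GPU's capacitors, each path being PSU contact + wire + GPU contact resistance, with the shared RPSU/RGPU and the return side ignored. Currents then divide in proportion to each path's conductance.

```python
# Simplified model of the six parallel +12V paths in the simulation above.
# Each path = PSU-end contact + wire + GPU-end contact resistance (milliohms);
# shared PSU/GPU resistance and the return side are ignored (an assumption,
# not the full ISIS schematic).

def pin_currents(path_resistances_mohm, total_current_a):
    """Currents in parallel paths split in proportion to conductance (1/R)."""
    conductances = [1.0 / r for r in path_resistances_mohm]
    g_total = sum(conductances)
    return [total_current_a * g / g_total for g in conductances]

# Four pins with worst-case 5 mOhm contacts on both ends, two with 1 mOhm,
# plus 4 mOhm of wire (about 12 inches of 16 gauge), 50 A total (600 W at 12 V).
paths = [5 + 4 + 5] * 4 + [1 + 4 + 1] * 2
for r, i in zip(paths, pin_currents(paths, 50.0)):
    print(f"{r:2d} mOhm path: {i:.2f} A")
# The two low-resistance pins carry ~13.5 A each; rerunning with 8 mOhm of
# wire instead of 4 drops them to ~11.8 A, i.e. the imbalance shrinks.
```

This reproduces the trend in the two screenshots: more wire resistance swamps the contact-resistance spread and evens out the per-pin currents.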

12VHPWRWire4m.jpg

12VHPWRWire8m.jpg

Looking online, there are videos showing current imbalance higher than in the simulation. This raises the question of how the cables are being tested. Are cable manufacturers testing total cable resistance by, for example, putting 10 amps of current through each wire and measuring the voltage drop? When I investigated such test systems in the past, they seemed both expensive and overly complicated, and we ended up building our own.

Also, the +12-volt wires seem to be the most likely to melt. Is this because some of the current on the DC common side is returning through the edge connector? If you have a clamp-on Hall-effect probe, please measure each wire individually (all six +12-volt wires, all six DC-common wires, and all the wires of the cable together) and post the results.
 
I wish there was a way to make an EPS 8-pin connector revision (EPS PCIe v2/2025 or whatever), where someone puts four 12V AWG16 lines in one row and another four AWG16 lines on the bottom (GND), rates the whole thing at 200W, adds an F+ mark on the plastic (for compatibility reasons), and calls it a day by saving this as "standard requirements".
No pin/key changes, no drama over too-low safety margins... it could have been so easy :(

Note: No key changes, because using the "new" connector in an old port would cause a dead short to ground on 12V (same thing if an old connector gets used in a new port).
The PSU simply won't run, due to a simple short (if this kind of user error kills any PC component, the PSU's manufacturer should be responsible for paying that RMA LOL). Not counting dummies that force the PSU to stay on during such an event, though.
The EPS12V connector already does 300 watts according to https://support.exxactcorp.com/hc/e...PCIe-8-pin-vs-EPS-12V-8-pin-power-connections . Derating it to 200 watts would be counterproductive.
 
Maybe, but it's simply an effort to stay ahead of dummies that want to run "a bit above spec".
Also, having a safety margin should be considered good/welcome, not something to be sad about.
Man tons of ng's coming on here suddenly.

Better to have it than a fire hazard like the RTX 4000 and 5000 series are now...
 
I haven't seen this brought up anywhere yet in discussions, so I was thinking, what about a cable that has a thermistor installed? Wouldn't it be able to prevent meltdowns? Or am I missing something?

Right now, my checklist is:

1. 16 AWG
2. All pins wired and connected in the cable
3. Securely install it with lock and latch firmly in position
4. No acute bending

Basically, I can't get the cable any better than this? I wonder how widespread the issue is, since Nvidia doesn't seem to have issued a statement yet.
 
Maybe, but it's simply an effort to stay ahead of dummies that want to run "a bit above spec".
Also, having a safety margin should be considered good/welcome, not something to be sad about.
Actually, when I did the math, the EPS12V connector, which is already specified at 300 watts, has a higher safety factor than the 8-pin PCI Express Graphics connector, which is specified at 150 watts.

For the EPS12V connector, looking up https://edc.intel.com/content/www/u...pply-design-guide/2.1a/-12-v-power-connector/ shows that the connector per pin as seen in https://www.molex.com/en-us/products/part-detail/444761112 has a maximum of 11 amps per pin. So to calculate the safety factor, I multiplied 11 A x 12 V x 4 pins to get 528 watts across all 4 12V pins. Dividing that by the 300 watt maximum gives a safety factor of 1.76.

For 8-pin PCI Express Graphics, someone did the calculation and put it into the Wikipedia page at https://en.wikipedia.org/wiki/16-pin_12VHPWR_connector with 7 amps per 12V pin and a maximum specified power of 150 watts, giving a safety factor of 1.68.

EPS12V at 300 watts has a higher safety factor than 8-pin PCI Express Graphics, and the 8-pin PCI Express Graphics is already legendarily safe if properly used and can take a lot of abuse. So 2 x EPS12V connectors can take over for the 12VHPWR/12V-2x6 connector without taking too much board space. Downrating the spec to 200 watts would require 3 EPS12V connectors to keep 600 watts within the 200 watt per connector spec, which would take way too much board space.
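The safety-factor arithmetic in this post can be checked in a couple of lines (per-pin current limit x 12 V x number of 12V pins, divided by the connector's rated power):

```python
# Safety factor as used in the post above: total pin-rated power over the
# connector's specified power rating.

def safety_factor(amps_per_pin, n_12v_pins, rated_watts, volts=12.0):
    return amps_per_pin * volts * n_12v_pins / rated_watts

print(safety_factor(11, 4, 300))  # EPS12V, 11 A Molex pins: 1.76
print(safety_factor(7, 3, 150))   # 8-pin PCIe Graphics, 7 A pins: 1.68
```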
 
I haven't seen this brought up anywhere yet in discussions, so I was thinking, what about a cable that has a thermistor installed? Wouldn't it be able to prevent meltdowns? Or am I missing something?

Right now, my checklist is:

1. 16 AWG
2. All pins wired and connected in the cable
3. Securely install it with lock and latch firmly in position
4. No acute bending

Basically, I can't get the cable any better than this? I wonder how widespread the issue is, since Nvidia doesn't seem to have issued a statement yet.
Or maybe 8 to 10 AWG + gold plated...

Like the thermistor idea though!

Screenshot_20250215_212114_Chrome.jpg
 
Actually, when I did the math, the EPS12V connector, which is already specified at 300 watts, has a higher safety factor than the 8-pin PCI Express Graphics connector, which is specified at 150 watts.

For the EPS12V connector, looking up https://edc.intel.com/content/www/u...pply-design-guide/2.1a/-12-v-power-connector/ shows that the connector per pin as seen in https://www.molex.com/en-us/products/part-detail/444761112 has a maximum of 11 amps per pin. So to calculate the safety factor, I multiplied 11 A x 12 V x 4 pins to get 528 watts across all 4 12V pins. Dividing that by the 300 watt maximum gives a safety factor of 1.76.

For 8-pin PCI Express Graphics, someone did the calculation and put it into the Wikipedia page at https://en.wikipedia.org/wiki/16-pin_12VHPWR_connector with 7 amps per 12V pin and a maximum specified power of 150 watts, giving a safety factor of 1.68.

EPS12V at 300 watts has a higher safety factor than 8-pin PCI Express Graphics, and the 8-pin PCI Express Graphics is already legendarily safe if properly used and can take a lot of abuse. So 2 x EPS12V connectors can take over for the 12VHPWR/12V-2x6 connector without taking too much board space. Downrating the spec to 200 watts would require 3 EPS12V connectors to keep 600 watts within the 200 watt per connector spec, which would take way too much board space.

The plastic housings on 8-pin EPS and 8-pin PEG are keyed differently to prevent mixing them up, but other than that they're fundamentally the same Mini-Fit Jr connectors. Both of them can use either a standard crimp contact or a HCS crimp contact. In 2x4 (8-pin) connectors the standard contacts are rated at 7A and the HCS are rated at 10A.

You're SUPPOSED to use HCS contacts on 8-pin EPS, but I'm fairly sure not every manufacturer/seller actually does.

With a 7A contact, the 8-pin EPS has a safety factor of 1.12; with a 7A contact, the 8-pin PEG has a safety factor of 1.68.

With a 10A HCS contact, the 8-pin EPS is at a safety factor of 1.6 and the 8-pin PEG is at a safety factor of 2.4.

I would always recommend using HCS contacts if you're making your own custom cables. If you buy ready-made cables, you can't easily be sure what they used.
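All four figures in the post above come from the same per-pin rating math, which can be tabulated in a few lines:

```python
# The four safety factors quoted above:
# (amps per pin x 12 V x number of 12V pins) / rated connector power.
cases = {
    "8-pin EPS, 7A standard contact": (7, 4, 300),
    "8-pin PEG, 7A standard contact": (7, 3, 150),
    "8-pin EPS, 10A HCS contact": (10, 4, 300),
    "8-pin PEG, 10A HCS contact": (10, 3, 150),
}
factors = {name: amps * 12 * pins / watts
           for name, (amps, pins, watts) in cases.items()}
for name, f in factors.items():
    print(f"{name}: {f:.2f}")
# -> 1.12, 1.68, 1.60 and 2.40, matching the figures in the post.
```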
 
Let's take this EVGA 3090 Ti as an example. The PCB uses a 24+3 phase design; this specific card populates only 21+3 phases.

The 12V input is split into three circuits, monitored via the shunts on the upper right side.
From there the circuits go into the copper layers inside the PCB.

All three circuits are connected to a block of eight step-down MOSFET converters.
The three separate ones are powered from the PCIe bus and power the RAM.

The circuits in the bottom right are the shunt voltage monitors and the PWM controllers for the fans.

This circuit balances the 12V line into three separate VRMs controlled by the same IC. The control IC does the balancing based on shunt voltage drop and the input and output voltage on the MOSFETs.

View attachment 385067
Ahhh...EVGA...that alone is worth a thumbs up.:D
 
A good practice for choosing a cable for the graphics card:

You should not use a cable which you damaged yourself or significantly misused - for example, using a cable meant for building a computer in a fixed installation to test many dozens of GPUs.

If you use a cable once or just a couple of times, it is very unlikely that you make it unusable this way.

Do not use a second-hand cable you know nothing about.

Do not use a cable that seems worn or dirty.

Do not use a cable from the early years after this cable type was introduced to the market. The more recent the cable, the better.

For example, I tested and examined the new cable I just bought for my power supply and compared it to the cable supplied with the PSU, which was manufactured 1.5 years ago. The plug looks much better, and the connection to the PSU also uses a much more robust type of connector. I am really confident that this cable would be able to reliably pull 600W 24/7 for long years.

https://www.techpowerup.com/forums/threads/12-pin-gpu-connector-survey.332590/#post-5449918
 
I have posted two pictures of the simulation. The first shows the maximum imbalance I can think of with contact resistance within specifications: four of the contacts on each end at the maximum of 5 milliohms and two at 1 milliohm.
Why not 5 and 1, instead of 4 and 2? With five contacts at the maximum and only one at the minimum, the connector would still be within spec and the current through that one pin would be a lot higher. An oxidized (but still within spec) connector can scrape off said oxidation on insertion, and it's just luck how many of the contacts get a good connection. It may even be just one.

I also don't see how 1 mOhm would be a realistic lower bound for contact resistance, but I guess you have more experience there. Something like 0.2-0.3 mOhm is a value that I thought to be in the realm of possibility. https://sumitomoelectric.com/sites/default/files/2020-12/download_documents/71-06.pdf
 
Downrating the spec to 200 watts would require 3 EPS12V connectors to keep 600 watts within the 200 watt per connector spec, which would take way too much board space.
If you need three EPS12V connectors to power a GPU, you shouldn't have a problem finding PCB space for three connectors.
600W GPUs are meant to be BIG (NV simply squished one into 2 slots, but the overall dimensions are still large); I fail to understand why having 3x 8-pin is a board space issue in this case.
Make less power-hungry GPUs, and save board space on the power connectors required (if you really can't accommodate them). It's not hard to fix this "issue"; it just takes critical thinking and a change of mentality.

PS. I went with 200W to be in line with the PCIe 8-pin spec (which is 150W on 3 pairs of 12V/GND).
I would also like to prevent, or at least slow down, the power creep of GPUs by forcing manufacturers to think a bit before putting out 800W ones. 2x EPS 8-pin v2 = 475W on a PCIe GPU, 3x 8-pin = 675W on a PCIe GPU.
Both seem perfectly reasonable values to me. You can add a 6-pin to get an additional +75W over that :D
Oh, and having more than one EPS 8-pin v2 should be complemented with proper power balancing in the actual spec (and EPS 8-pin v2 should be required to use HCS contacts, since they are apparently rated at 10A).
 
Where did someone assume a 0 Ohm wire? I do not understand you. Maybe you've had a beer too many; that would explain the sassy attitude and the typos too.

Low enough for it not to matter in this case. I did round the other values too.

Serious? Quote me - although I pointed to the basics of physics and electronics. Facts do not change. You may complain to those books that they are wrong. That was the main point of why I mentioned the Siemens unit and not Ohm; of course you could use the proper prefixes for ohm to show those values, or use Siemens instead.

That is the proper unit for it anyway.

The only mistake he made was in assuming any connector can be truly 0 resistance... there's always going to be a few uOhms at least.

As said, there is a model for a wire. It is not only Ohms or Siemens. I doubt the basics from expensive books have changed in the past 25 years in regards to high frequency and electronics. Someone already mentioned the skin effect, which is something else, but it is also mentioned in one of those books. It's not mediocre Wikipedia knowledge.

When you imply it has 0 ohms, you have something which "maybe" does not exist. Not sure if it's achievable on the moon or somewhere else. Just because a cheap 500€ true-RMS multimeter shows you 0 Ohm does not mean the wire has 0 Ohm. It is about accuracy and knowing the basics of physics.

--

I just read a bit but not all https://www.igorslab.de/groundhog-d...nd-schieflasten-was-wir-wissen-und-was-nicht/

I just spent a while checking that spec sheet. You may read carefully point 3 on the right side of the Specification; pay attention to the units in use. Pay attention to point 1, the max current rating, also.

03-Astron.png


Units are important. 10 mOhm is also "near" zero - but it is a lot. To explain the units for those who do not understand: 10 mOhm = 0.010 Ohm.

--

Let's assume this is ASUS - not sure - I do not care for Nvidia

9.46 amps should be green - if Igor posted the correct specs for the connector, 9.46 A is smaller than 9.5 A.
Looking at this screenshot again, 3 wires are close to or over the max of 9.5 A. I wonder why that ASUS card does not shut down instantly. Pin 5 is definitely out of range. Pin 6 is up to definition. Pin 4 is questionable.

Comparing again with Igor's spec sheet: do you really want to use a connector with all 6 wires at the maximum of 9.5 A 100% of the time while playing games for several hours?

I doubt that ASUS card has any stated accuracy for its measurement method. Any true-RMS multimeter comes with a spec sheet including accuracy. How is that for the ASUS card? You need to factor the measurement inaccuracy into the ASUS card's readings as well. Just because you see a current or temperature reading does not mean it is accurate or correct.

old-cable-zoom-jpg.384891
 
I haven't seen this brought up anywhere yet in discussions, so I was thinking, what about a cable that has a thermistor installed? Wouldn't it be able to prevent meltdowns?
Ignoring other issues with inline thermistors, all the damage I've seen indicates the issue isn't too little resistance (cables melting from overamperage) but too much: contact pins mate poorly, and thus become victims of I²R heating. A 50 cm wire carrying 12 A through 20 mOhm of resistance dissipates about 3 watts along its entire length, but if that power is all focused on a portion of one pin contact, the surrounding plastic will melt.
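The I²R figure above is easy to verify, and the point is that the same resistance concentrated at one contact puts the same power into a far smaller volume:

```python
# Power dissipated in a resistance carrying a current: P = I^2 * R.

def dissipated_watts(current_a, resistance_mohm):
    return current_a ** 2 * resistance_mohm / 1000.0

print(dissipated_watts(12, 20))  # 12 A through 20 mOhm: ~2.9 W
# Spread over 50 cm of copper this is harmless; the same 20 mOhm
# concentrated in one poorly mated pin dumps those ~2.9 W into a few
# square millimetres of contact, which is what chars the housing.
```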

Serious? Quote me - although I pointed to the basics of physics and electronics. Facts do not change. You may complain to those books that they are wrong. That was the main point of why I mentioned the Siemens unit and not Ohm; of course you could use the proper prefixes for ohm to show those values, or use Siemens instead...(spec sheet deleted)
You made a few wildly inaccurate statements, ones that even a first-year EE major would disown. Your own spec sheet diagram demonstrates he was using the correct units. I suggest you give it a rest.
 
although I pointed to the basics in phyiscs and electronics.
You did?

of course you could use the proper prefixes for ohm to show those values
Great. Now where did I go wrong with the calculation? I assume you understood it?

ps.
still waiting for your calculation on the max amps for an in spec cable.
 
I asked myself: What is ASUS up to? What's going on in their minds introducing shunt resistors only on the Astral? Why did they implement them in the first place? This isn’t an easy question to answer, I fear.

Here is my analysis:

1. Shunt resistors - Why in the first place?
The shunt resistors (of course) don't solve any of the underlying problems; they don't stop anything from overheating (without further features). They simply provide a software readout on the pins - probably not even error-corrected, just factory-calibrated. So far, they serve no real value in protecting the hardware. Let's look further.

2. The readout and load handling
From the picture posted by Roman (showing 9.7 amps on one of the pins of the Astral), we see that the software did nothing to stop the GPU at that load. That's 0.2 amps over spec - not a big deal in practice. Let's assume a) error correction isn't needed because reading errors at this load level are negligible (I'm not an electrician, so I could be wrong); b) the additional heat from feeding 0.2-0.5 amps extra per pin should also be negligible for the connector, pin surroundings, cable materials, sleeve texture, etc.; c) most 'normal' readouts can show 9.6 or 9.7 A on a single pin from time to time. If this slight overload is the new normal, we're dealing with a very fragile standard, because it's over the spec. Is the spec on point, or is it defensive (like with most specs, there is an unofficial headroom over the official headroom - but this is unknown territory with many variables like ambient temperature, age of cables, number of seatings, etc.)?
But here’s the real question: at what point will the ASUS software flag an issue? Can we assume that 60 amps total (or 10 amps per pin fully balanced) will be acceptable to Asus? Being over the spec seems to be fine for them, but how far over spec are they willing to go? How far is not ok? We have no information about this.

3. Why no automatic shutdown?
From what I’ve read, Asus detects a load issue on one or more pins but doesn’t shut the GPU down. Why, in the name of all that is holy would you do that?
Asus! Would you really leave this in the hands of the user? What if they're away from the keyboard when the message pops up? Why won't the software shut the GPU down immediately when it detects a dangerous load (they could - the Astral has direct readouts on the pins), or at least immediately downclock it? What is the mission statement for the shunts?

4. An msgbox as a failsafe?
So, instead of a proper failsafe, Asus simply displays a message box to the user. That feels wildly inadequate. What if the message goes unseen until it's too late? Even if the user isn't AFK, does it overlay properly ingame? Will Asus provide more details on how this alert system actually works?

5. Why only the Astral?
I know shunt resistors add some cost, but we're talking peanuts in the grand scheme of things. So why only the fastest card?
- Do lower-tier cards not suffer from the same problem (assuming there is a problem that motivated implementing the shunts)?
- Are they too cheap to include this feature? Is the feature too expensive?
- Are the amp loads on the pins just an interesting KPI rather than a critical safety feature?
Did Asus think that adding this only to their flagship card would make it seem like just another high-end gimmick, avoiding any questions about the real reason for their inclusion? Because, let’s be honest, these resistors didn’t just appear out of nowhere.


My Take: Asus Knows (yeah, I know it sounds like a conspiracy theory:D)
Asus 100% saw the Nvidia connector issue at high loads during lab testing. That's why we have a readout on the Astral now. Let’s do the math:
  • At 50 amps, each of the six pins should handle 8.33A (50/6).
  • At 55 amps, it’s slightly over 9.1 per pin.
  • 9.5A is the absolute max when you take Nvidia seriously
  • Something over 9.5A is the absolute max, if you take Asus seriously
I bet Asus regularly saw single pins exceeding 9.5 amps, maybe hitting 10 amps on the edge pins (5 and 6) in some scenarios, and didn't know what to do. In theory the GPU is defective, if you take the spec sheet and look up what's allowed and what's not. Maybe they asked Nvidia. Maybe Nvidia told them to ignore it. Maybe they didn't ask at all. Who knows?
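The averages in the list above follow from splitting the total connector current evenly across the six 12V pins:

```python
# Per-pin average for a perfectly balanced 12VHPWR load across six 12V pins.
PIN_LIMIT_A = 9.5

def per_pin(total_amps, pins=6):
    return total_amps / pins

for total in (50, 55):
    print(f"{total} A total -> {per_pin(total):.2f} A per pin")
# A perfectly balanced load reaches the 9.5 A rating at 9.5 * 6 = 57 A,
# i.e. 684 W at 12 V -- not far above a 575 W card's draw, and any
# imbalance pushes individual pins over the limit much sooner.
```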

Instead, Asus quietly implemented a soft solution that won't be too in your face: Shunt resistors on the GPU (in a bridged circuit). A software dashboard. A message box alert. No shutdown. No hard limits. No transparency. Now, even Astral 5090 owners are asking Asus for more details on this in the Asus forum. I bet they won't get an answer so soon.

But anyway. WTF, Asus? Did you think you could slip this hint to us without upsetting Nvidia? Drop a hint about the problem, then pretend everything’s fine? Asus, blink twice if Nvidia is breathing down your neck.

Sorry if I am being too hard on Asus; it's just a means to an end. The real problem is of course Nvidia and their 12VHPWR standard.

The more I read, the more it’s clear to me - the issue lies with Nvidia and their fully-loaded (no headroom) 12VHPWR standard. Any small problem in the power chain PSU ⟷ Cable(s) ⟷ GPU like

  • foreign debris
  • slight cable defects
  • pin contact issues
  • issues from multiple cable reseatings or
  • simple material aging
and suddenly you're in high-risk territory! Thank you for that, Nvidia, just what my house needs.

IMHO we need a complete redesign of 12VHPWR - or, better(!), an entirely new standard. Nvidia is actually ghosting the community, as usual. Maybe they wake up next week. But you can bet they're fully aware of the problem by now. I hope they will at least come up with a statement. Personally, I don't want a mere update to the connector in the next gen, or more efficient GPUs with smaller power draws (albeit this is a good thing to focus on independently). I want something beefier than 12VHPWR: thicker wires, more headroom, good for 50-100 reseatings before it breaks. I don't care if that means buying a new PSU or pricier cables. This toy connector should have been a proprietary solution for SFF enthusiasts and mid-tier GPUs in the first place, not the de facto flagship connector for 575W/600W GPUs.

And before anyone brings up the Galax HOF 4090 again - No, I don’t want to connect 2 or 3 of those toyish connectors on my GPU and multiple 12V-2x6 cables in my rig. That’s a cheapo band-aid, not a fix.
 
I asked myself: What is ASUS up to? What's going on in their minds introducing shunt resistors only on the Astral? Why did they implement them in the first place? This isn’t an easy question to answer, I fear.

Here is my analysis:

1. Shunt resistors - Why in the first place?
The shunt resistors (of course) don't solve any of the underlying problems; they don't stop anything from overheating (without further features). They simply provide a software readout on the pins - probably not even error-corrected, just factory-calibrated. So far, they serve no real value in protecting the hardware. Let's look further.

2. The readout and load handling
From the picture posted by Roman (showing 9.7 amps on one of the pins of the Astral), we see that the software did nothing to stop the GPU at that load. That's 0.2 amps over spec - not a big deal in practice. Let's assume a) error correction isn't needed because reading errors at this load level are negligible (I'm not an electrician, so I could be wrong); b) the additional heat from feeding 0.2-0.5 amps extra per pin should also be negligible for the connector, pin surroundings, cable materials, sleeve texture, etc.; c) most 'normal' readouts can show 9.6 or 9.7 A on a single pin from time to time. If this slight overload is the new normal, we're dealing with a very fragile standard, because it's over the spec. Is the spec on point, or is it defensive (like with most specs, there is an unofficial headroom over the official headroom - but this is unknown territory with many variables like ambient temperature, age of cables, number of seatings, etc.)?
But here’s the real question: at what point will the ASUS software flag an issue? Can we assume that 60 amps total (or 10 amps per pin fully balanced) will be acceptable to Asus? Being over the spec seems to be fine for them, but how far over spec are they willing to go? How far is not ok? We have no information about this.

3. Why no automatic shutdown?
From what I’ve read, Asus detects a load issue on one or more pins but doesn’t shut the GPU down. Why, in the name of all that is holy would you do that?
Asus! Would you really leave this in the hands of the user? What if they're away from the keyboard when the message pops up? Why won't the software shut the GPU down immediately when it detects a dangerous load (they could - the Astral has direct readouts on the pins), or at least immediately downclock it? What is the mission statement for the shunts?

4. An msgbox as a failsafe?
So, instead of a proper failsafe, Asus simply displays a message box to the user. That feels wildly inadequate. What if the message goes unseen until it's too late? Even if the user isn't AFK, does it overlay properly ingame? Will Asus provide more details on how this alert system actually works?

5. Why only the Astral?
I know shunt resistors add some cost, but we're talking peanuts in the grand scheme of things. So why only the fastest card?
- Do lower-tier cards not suffer from the same problem (assuming there is a problem that motivated implementing the shunts)?
- Are they too cheap to include this feature? Is the feature too expensive?
- Are the amp loads on the pins just an interesting KPI rather than a critical safety feature?
Did Asus think that adding this only to their flagship card would make it seem like just another high-end gimmick, avoiding any questions about the real reason for their inclusion? Because, let’s be honest, these resistors didn’t just appear out of nowhere.


My Take: Asus Knows (yeah, I know it sounds like a conspiracy theory:D)
Asus 100% saw the Nvidia connector issue at high loads during lab testing. That's why we have a readout on the Astral now. Let’s do the math:
  • At 50 amps, each of the six pins should handle 8.33A (50/6).
  • At 55 amps, it’s slightly over 9.1 per pin.
  • 9.5A is the absolute max when you take Nvidia seriously
  • Something over 9.5A is the absolute max, if you take Asus seriously
I bet Asus regularly saw single pins exceeding 9.5 amps, maybe hitting 10 amps on the edge pins (5 and 6) in some scenarios, and didn't know what to do. In theory the GPU is defective, if you take the spec sheet and look up what's allowed and what's not. Maybe they asked Nvidia. Maybe Nvidia told them to ignore it. Maybe they didn't ask at all. Who knows?

Instead, Asus quietly implemented a soft solution that won't be too in your face: Shunt resistors on the GPU (in a bridged circuit). A software dashboard. A message box alert. No shutdown. No hard limits. No transparency. Now, even Astral 5090 owners are asking Asus for more details on this in the Asus forum. I bet they won't get an answer so soon.

But anyway. WTF, Asus? Did you think you could slip this hint to us without upsetting Nvidia? Drop a hint about the problem, then pretend everything’s fine? Asus, blink twice if Nvidia is breathing down your neck.

Sorry if I am being too hard on Asus; it's just a means to an end. The real problem is of course Nvidia and their 12VHPWR standard.

The more I read, the more it’s clear to me - the issue lies with Nvidia and their fully-loaded (no headroom) 12VHPWR standard. Any small problem in the power chain PSU ⟷ Cable(s) ⟷ GPU like foreign debris, slight cable defects, contact issues, issues from multiple reseatings or even simple material aging and suddenly, you're in a high risk territory. Thank you for that, just what my house needs.

IMHO we need a complete redesign of 12VHPWR - or an entirely new standard. Nvidia is actually ghosting the community, as usual. Maybe they wake up next week. But you can bet they're fully aware of the problem by now. I hope they will at least come up with a statement. Personally, I don't want a mere update to the connector in the next gen, or more efficient GPUs with smaller power draws (albeit this is a good thing to focus on independently). I want something beefier than 12VHPWR: thicker, with more headroom. I don't care if that means buying a new PSU or pricier cables. This toy connector should have been a proprietary solution for SFF enthusiasts and mid-tier GPUs in the first place, not the de facto flagship connector for 575W/600W GPUs.

And before anyone brings up the Galax HOF 4090 again - No, I don’t want to connect 2 or 3 of those toyish connectors on my GPU and multiple 12V-2x6 cables in my rig. That’s a cheapo band-aid, not a fix.
So why, pray tell is this only implemented on the Astral?
 
Ignoring other issues with inline thermistors, all the damage I've seen indicates the issue isn't too little resistance (cables melting from overamperage) but too much: contact pins mate poorly, and thus become victims of I²R heating. A 50 cm wire carrying 12 A through 20 mOhm of resistance dissipates about 3 watts along its entire length, but if that power is all focused on a portion of one pin contact, the surrounding plastic will melt.


You made a few wildly inaccurate statements, ones that even a first-year EE major would disown. Your own spec sheet diagram demonstrates he was using the correct units. I suggest you give it a rest.
I have to agree with this, since it's kind of basic thermodynamics, with the connector having uneven/unreliable contact on the pins from the looks of it.
 
According to my testing of two different new and like-new cables, probably manufactured 1 to 1.5 years apart (the plugs on the earlier one looking worse: soft plastic bodies, contact tubes moving a bit in the plug body), the current measured with a 400W load was between 5.6 and 5.8 A on all six wires (at 575W that would scale to 8.05-8.34 A). I found no problem.
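The 575W numbers quoted above are a linear extrapolation of the 400W measurements (current scales with power on a fixed 12 V rail):

```python
# Per-wire currents measured at 400 W, extrapolated linearly to 575 W.
measured_400w = (5.6, 5.8)  # amps, min and max across the six 12V wires
scale = 575 / 400
for amps in measured_400w:
    print(f"{amps} A at 400 W -> {amps * scale:.2f} A at 575 W")
# Both extrapolated values stay under the 9.5 A per-pin rating.
```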

What is your belief that the problem exists based on? Probably not your own measurement. Is it based on der8auer's measurement of his faulty cable - one that should not have been used at all? That was not a manufacturing error; he destroyed it by misusing it.

Every electrical appliance should be used ONLY with a cable in good condition. That is a common rule and every common man is expected to follow it in his daily life. Any damage caused by a faulty cable would be blamed on the user or installer. Right?

So how can anyone get the idea that he can make any judgements based on the premise that an installer makes the grave error of using a faulty cable?

Every cable supplied with a PSU of a reputable brand should be tested in manufacture and should be fault-free when new.

What if the problems caused by new cables supplied with PSUs are so rare that all other problems that may occur in GPUs (electronic component failures, etc.) are more prevalent? What would be the meaning of all this circus and scandal then?
 
I asked myself: What is ASUS up to? What's going on in their minds, introducing shunt resistors only on the Astral? Why?
Micro-Fit+ is rated 12.5 A and supports 16 AWG.
So 900 W, no problem - you just have to treat it gently and consciously. There is no chance in hell we're going back to Mini-Fit ever, let alone something bigger like Mega-Fit (which would be my first choice, at 26 A with 12 AWG), but expanding to 16-20 pins is not impossible. As for the shunts: if you're thinking of it, someone is probably already doing it, and it could be brought in at MSRP. Protection should be triggered in firmware, not in bloatware. I find it hard to believe that the connection could degrade if it was initially fine and the connector was brand new - only if left unchecked from the first boot.
 
I asked myself: What is ASUS up to? What's going on in their minds, introducing shunt resistors only on the Astral?
It is a gimmick that is not needed. If it was crucial, it would be on all models, and capable of shutting the card down, not just lighting up some indicators in optional software. Users of lesser models will just need to follow common sense and use good-quality new cables, and they will be fine.
 
Micro-Fit+ is rated 12.5 A and supports 16 AWG.
So 900 W, no problem - you just have to treat it gently and consciously. There is no chance in hell we're going back to Mini-Fit ever, let alone something bigger like Mega-Fit (which would be my first choice, at 26 A with 12 AWG), but expanding to 16-20 pins is not impossible. As for the shunts: if you're thinking of it, someone is probably already doing it, and it could be brought in at MSRP. Protection should be triggered in firmware, not in bloatware. I find it hard to believe that the connection could degrade if it was initially fine and the connector was brand new - only if left unchecked from the first boot.

For Molex Micro-Fit+ connectors, 12.5A is the rating with only 2 loaded circuits. For a 2x6 (12-circuit) connector, it's derated to 9A with tin-plated contacts or 8.5A with gold-plated contacts - less than the 12V-2x6 spec of 9.5A on all circuits.

The 12V-2x6 connector was originally produced by Amphenol, but Molex now produce their own version. (Link) The interesting thing is that Molex DO NOT rate it at 9.5A, only 9.2A; and even then, they have clear design notes about making sure the entire circuit can handle the load:

[Attachment: Micro-Fit+ PCIe.gif - Molex design notes]
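Plugging the derated per-pin figures quoted above into a 12 V power budget shows how much the derating matters (a rough illustration; real ratings also depend on wire gauge, ambient temperature, and allowed temperature rise):

```python
# Total deliverable power at 12 V for a 2x6 (12-circuit) connector:
# six pins carry +12 V, six carry the return. Per-pin amp figures are
# the ones quoted in the posts above.

def connector_watts(pins_per_rail: int, amps_per_pin: float,
                    volts: float = 12.0) -> float:
    """Deliverable power with pins_per_rail conductors per polarity."""
    return pins_per_rail * amps_per_pin * volts

ratings = [
    ("Micro-Fit+ tin, 12-circuit derated", 9.0),   # -> 648 W
    ("Micro-Fit+ gold, 12-circuit derated", 8.5),  # -> 612 W
    ("12V-2x6 spec", 9.5),                         # -> 684 W
    ("Molex 12V-2x6 rating", 9.2),                 # -> ~662 W
]
for label, amps in ratings:
    print(f"{label}: {connector_watts(6, amps):.0f} W")
```

On the derated Micro-Fit+ numbers, even a 12-circuit connector ends up below the 684 W the 12V-2x6 spec implies, which is the commenter's point.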
 
If you need three EPS12V connectors to power a GPU, you shouldn't have a problem finding PCB space for three connectors.
600W GPUs are meant to be BIG (NV simply squished one into 2 slots, but the overall dimensions are still large), so I fail to understand why 3x 8-pin is a board-space issue in this case.
Make less power-hungry GPUs and save board space on the power connectors required (if you really can't accommodate them). It's not hard to fix this "issue"; it just takes critical thinking and a change of mentality.

PS. I went with 200W to be in line with the PCIe 8-pin spec (which is 150W on 3 pairs of 12V/GND).
I would also like to prevent, or at least slow down, the power creep of GPUs by forcing manufacturers to think a bit before putting 800W ones out. 2x EPS 8-pin v2 = 475W on a PCIe GPU, 3x 8-pin = 675W on a PCIe GPU.
Both seem perfectly reasonable values to me. You can add a 6-pin to get an additional +75W over that :D
Oh, and having more than one EPS 8-pin v2 should be complemented with proper power balancing in the actual spec (and EPS 8-pin v2 should be required to use "HCS contacts", since they are apparently rated at 10A).
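The budgets in that proposal tally up as stated (a sketch of the poster's hypothetical "EPS 8-pin v2" numbers; this is not any real spec):

```python
# Hypothetical "EPS 8-pin v2" power budget from the post above:
# 200 W per connector (4 pairs scaled from the PCIe 8-pin's 150 W on
# 3 pairs), plus the 75 W a PCIe x16 slot provides. Not a standard.

EPS_V2_W = 200   # proposed rating per 8-pin connector (hypothetical)
SLOT_W = 75      # PCIe x16 slot power
SIX_PIN_W = 75   # existing PCIe 6-pin rating

def gpu_budget(n_connectors: int, extra_6pin: bool = False) -> int:
    """Total board power for n proposed connectors plus slot power."""
    return n_connectors * EPS_V2_W + SLOT_W + (SIX_PIN_W if extra_6pin else 0)

print(gpu_budget(2))                   # 475 W
print(gpu_budget(3))                   # 675 W
print(gpu_budget(3, extra_6pin=True))  # 750 W
```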
If I recall, it's not 300 watts max for the 8-pin EPS, it's 336 watts absolute max. 300 watts is the normal rating for a 24-hour server load at X temps, but the peak max is slightly higher.
The old 4-pin CPU plug was rated for 250 watts.
That puts it at 672 watts for two, which would still be higher than one of these crap connectors.
We already have motherboards on the Threadripper side with dual being normal and triple being protection. Those boards also have to have a 6-pin, or sometimes dual 8-pin, PCI Express plug just to give enough power to all 7 PCI Express slots at 75 watts each (525 watts).
 
If I recall, it's not 300 watts max for the 8-pin EPS, it's 336 watts absolute max. 300 watts is the normal rating for a 24-hour server load at X temps, but the peak max is slightly higher.
The old 4-pin CPU plug was rated for 250 watts.
That puts it at 672 watts for two, which would still be higher than one of these crap connectors.
We already have motherboards on the Threadripper side with dual being normal and triple being protection. Those boards also have to have a 6-pin, or sometimes dual 8-pin, PCI Express plug just to give enough power to all 7 PCI Express slots at 75 watts each (525 watts).
Always go by the extended load, not the spike.
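The sustained-versus-peak distinction works out like this (using the figures quoted in the exchange above; treat them as the poster's numbers, not a spec citation):

```python
# Sustained vs absolute-max budgets for multiple 8-pin EPS connectors,
# per the figures quoted above: 300 W continuous, 336 W absolute max
# per connector. Size the design for the sustained number.

SUSTAINED_W = 300  # claimed 24-hour continuous rating per connector
ABS_MAX_W = 336    # claimed absolute maximum per connector

for n in (1, 2, 3):
    print(f"{n}x EPS 8-pin: design for {n * SUSTAINED_W} W sustained "
          f"(short-peak headroom up to {n * ABS_MAX_W} W)")
```

The point being made: design around the extended-load figure (600 W for two connectors), and let the 672 W absolute maximum absorb transients.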
 