
The Curious Case of the 12-pin Power Connector: It's Real and Coming with NVIDIA Ampere GPUs

Nvidia requires more power on the top-end, max-tuned cards
Uh, why does NV suddenly require (much) more power for the top-end GPUs? Are we back to Fermi times?
 
I understand the desire to have a single connector for a graphics card instead of a bundle of them, especially for more power hungry hardware. What I don't get is why on earth Nvidia feels the need to draw up their own bloody connector.

There already exists a 12-circuit header housing (literally from the exact same connector series that the ATX 24-pin, EPS 8-pin and PCIe 6-pin come from, specifically this part here: https://www.molex.com/molex/products/part-detail/crimp_housings/0469921210). I don't even buy the argument about insufficient power handling capability. Standard Mini-Fit Jr crimp terminals and connector housings are rated up to 9A per circuit, so for the 12-circuit part you're looking at 9A * 12V * 6 = 648W of power, which is around the same as the rough '600W' figure quoted in the article. You don't need a new connector along with a new crimp terminal design and non-standard keying for this. THE EXISTING ATX 12-PIN CONNECTOR WILL DO THE JOB JUST FINE. Not to mention, you can get Mini-Fit Jr terminals that are rated for even higher amperage (13A for copper/tin crimp terminals when used with 18 AWG or thicker conductors, IIRC, in which case the standard ATX 12-pin cable will carry even more power). This is literally just Nvidia trying to reinvent the wheel for no apparent reason.
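For reference, here's that back-of-the-envelope maths as a quick Python sketch (my assumptions, not a datasheet: half the circuits carry +12V, half are ground returns, and every circuit is loaded to the terminal rating quoted above):

```python
# Rough power-capacity estimate for a 12-circuit Mini-Fit Jr style connector.
# Assumes 6 of the 12 circuits are +12V and 6 are ground returns, with every
# circuit loaded to its terminal rating.

def connector_power_w(circuits=12, amps_per_circuit=9.0, volts=12.0):
    """Deliverable power when half the circuits are +12V and half are returns."""
    power_pairs = circuits // 2
    return power_pairs * amps_per_circuit * volts

print(connector_power_w())                       # 9 A terminals  -> 648.0 W
print(connector_power_w(amps_per_circuit=13.0))  # 13 A terminals -> 936.0 W
```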

As someone who crimps their own PSU cables, this is a hecking pain in the butt, and I hope whoever at Nvidia came up with this idea gets 7 years of bad luck for it. Just use the industry-standard part that already exists, FFS.

/rant
 
[Image: 3dfx Voodoo5 6000]


The 3dfx actually came with its own power brick that plugged into the wall to feed the GPU.

Apart from that, it's a weird decision. 2x8 pins should be more than enough for most GPUs, unless this thing is supposed to feed GPUs in enterprise markets or machines.

The number of wires is overrated as well. A single yellow (12V) wire on a well-built PSU could easily push 10A, up to 13A.
That 3dfx board draws a fraction of the power of a modern GPU; that's why it used an external brick. There was no PCIe power connector back then, and no standard for internal board power connectors; the most they could do was a Molex connector, which isn't very reliable or high-power.
As I've said, an external power brick for a modern GPU would be a PC PSU in an external case (thus costing upwards of 100 USD), as it would need 500W of output. Plus you'd need a multitude of thick cables and a connector occupying an entire slot on the back (there's no room for the power connector plus the regular video outputs), and that's another fan making noise and getting clogged.

2x8 pins may be barely enough for Ampere, but not for the future, and perhaps not even for a 3080 Ti (or whatever name it comes out under); there are already 2080 Ti cards with 3x8 pins.

Yes, a wire can push 10+A, but you have to take into account the voltage drop and heating in the wire, the contact resistance of the connector, and how flexible the cable ends up being. It's not that simple (otherwise we'd use one big thick cable instead of 2x8 pins, for example).
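To put rough numbers on that (a sketch only, assuming ~21 mΩ per metre for 18 AWG copper and ignoring the connector's contact resistance, which only makes things worse):

```python
# Rough voltage drop and I^2*R heating for one 12V supply wire plus its return.
# Assumption: 18 AWG copper at roughly 0.021 ohm per metre; crimp and contact
# resistance at the connectors is ignored here, so real losses are higher.

AWG18_OHM_PER_M = 0.021

def wire_loss(current_a, length_m=0.6, ohm_per_m=AWG18_OHM_PER_M):
    r = 2 * length_m * ohm_per_m     # out on the +12V wire, back on the ground
    v_drop = current_a * r           # volts lost across the cable pair
    p_heat = current_a ** 2 * r      # watts dissipated as heat in the wires
    return v_drop, p_heat

for amps in (8, 10, 13):
    v, p = wire_loss(amps)
    print(f"{amps:>2} A: {v:.2f} V drop, {p:.1f} W of heat in the cable pair")
```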
 

That Molex is a Mini-Fit. The one in the drawing is a Micro-Fit. Look at the dimensions. And it's not "made up". It exists, and has for a long time. I can buy one from Digi-Key, Mouser, etc. from four or five different manufacturers.

Anyway, this isn't a news story. This is someone downloading a drawing from a connector supplier and posting it as "news". Real news would be seeing the connector on the card itself. Am I right?

Now look what's popping up in my Google ads!

[Image: Google ad for the Micro-Fit connector]


See... Not a "made up" connector.
 

Thanks for the reference. I didn't recognise the keying of the connector housing, and the wording of the article made me think they had designed a fancy new connector housing. This invalidates my previous point. What's the technical improvement you get from moving from Mini-Fit Jr to Micro-Fit, then? Most of the other connectors commonly used in ATX PCs are Mini-Fit Jr; what do you gain by going to Micro-Fit? Is it really that much better in terms of mechanical strength or electrical/current handling?
 

Smaller size. Nothing more. If this is real, then they're only doing it because they ran out of PCB space.
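For a rough idea of the space saved (a sketch using pin pitch only, ignoring housing walls, so real parts are a bit wider: Mini-Fit Jr sits on a 4.2 mm grid, Micro-Fit 3.0 on a 3.0 mm grid):

```python
# Approximate pin-field size of a 2-row, 12-circuit header, comparing
# Mini-Fit Jr (4.2 mm pitch) with Micro-Fit 3.0 (3.0 mm pitch).
# Housing walls are ignored, so actual connectors are slightly larger.

def pin_field_mm(circuits, pitch_mm, rows=2):
    cols = circuits // rows
    return cols * pitch_mm, rows * pitch_mm   # (length along the PCB edge, depth)

for name, pitch in (("Mini-Fit Jr", 4.2), ("Micro-Fit 3.0", 3.0)):
    length, depth = pin_field_mm(12, pitch)
    print(f"{name:13s} ~{length:.1f} x {depth:.1f} mm")
```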
 
Can somebody explain this stupid decision?
 
Hi,
Most PSUs have six 8-pin VGA ports on them, so I don't see this as an issue; needing a new PSU is just speculation when, at worst, you most likely just need a new adapter or cable.
A 30-series Titan RTX may need more, but those aren't exactly mainstream cards either.

Guess we'll find out sooner or later :-)
 
Pissing off TSMC and going with Samsung has backfired.
That's the shorter way of putting it. Basically, Nvidia is anticipating higher-TDP graphics cards in the near future.

This might be because the improvement from moving to Samsung 8nm is less than they expected, or simply because they intend to leave less performance on the table and place their cards further towards the right side of the voltage/frequency curve, sacrificing some efficiency for more raw power.

To be fair, we know nothing of the TDPs from team red, they might've gone up too, in spite of using TSMC's 7nm EUV.
 
The spec allows up to 600W, so an Ampere 3080 Ti might draw 400W+!! This seems like Nvidia dropping the ball by going with Samsung's 8nm fab process, or Ampere as an arch being a bit of a turd. Likely a bit of both.

RDNA2 on TSMC's 7nm seems to be quite formidable.

To be fair, we know nothing of the TDPs from team red, they might've gone up too, in spite of using TSMC's 7nm EUV.

We know a little. The PS5's RDNA2 GPU runs at 2.2GHz in a ~100W package, offering near-2080 performance.

A 350W RDNA2 GPU is going to give a 2080 Ti a run for its money.
 
We know a little. The PS5's RDNA2 GPU runs at 2.2GHz in a ~100W package, offering near-2080 performance.
That's only 36 CUs, and the TDP is not 100W; the reason the PS5 is so big is to allow better ventilation. So it indeed is more efficient than RDNA1, but it's unclear by exactly how much.
A 350W RDNA2 GPU is going to give a 2080 Ti a run for its money.
It had better do more than that; it will launch at the same time as Ampere, and the 2080 Ti is a generation old now. It has to give the 3080 a run for its money, be it at 350W or 500W.
 

I'm talking about the TDP of the PS5 GPU cores only, not the entire system. 100W was a guess.

And yes latest rumours from sources I trust state the top Big Navi will be around 40% faster than a 2080 Ti, so it will give that a run for its money as stated and challenge the 3080 Ti.
 
I'm talking about the TDP of the PS5 GPU cores only, not the entire system. 100W was a guess.
Yes, I reckoned you would be guesstimating, but I think you're over-optimistic. I imagine the smart power shifting in these APUs gives more oomph to the GPU part, so I think the TDP is at or above 200W. I'm guesstimating, too :)
And yes latest rumours from sources I trust state the top Big Navi will be around 40% faster than a 2080 Ti, so it will give that a run for its money as stated and challenge the 3080 Ti.
Here's a recap from Tom that points out that we will see high TDPs from Nvidia this fall:
 
Hi,
Most PSUs have six 8-pin VGA ports on them, so I don't see this as an issue; needing a new PSU is just speculation when, at worst, you most likely just need a new adapter or cable.
A 30-series Titan RTX may need more, but those aren't exactly mainstream cards either.

Guess we'll find out sooner or later :)
Most have two 6+2-pin connectors... some (typically a bit higher power, ~700W+) have four. A few, in the 1kW+ range, have six. Most do NOT have six, but two or four.

And yes latest rumours from sources I trust state the top Big Navi will be around 40% faster than a 2080 Ti, so it will give that a run for its money as stated and challenge the 3080 Ti.
So, essentially, what you are saying is that an RDNA2 card will be 'close enough' to a 300-400W Ampere card to give it a "run for its money"? What does that mean, exactly? Like 10% behind? I'm curious how you came up with that conclusion...

What I see is this: the 2080 Ti (FE) currently leads the 5700 XT (Nitro+) by 42% (1440p). With all the rumors about high power use, even with a significant node shrink (versus AMD, who is tweaking an existing node) and a new architecture, do you still think that holds?

I mean, I hope you're right, but from what we've seen so far, I don't understand how that kind of math works out. You're assuming that Ampere, with a die shrink and a new arch, will only be around as much faster as the 2080 Ti was over the 1080 Ti (~25-30%), while a new arch and a node tweak will gain AMD upwards of 70% in performance? What am I missing here?

If NV Ampere comes in at 300W+, I don't see RDNA2 coming close (it might split the difference between the 2080 Ti and the Ampere flagship).
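Just to spell out the arithmetic being argued about, here's a quick sketch using only the percentages quoted in this thread, normalised to a 5700 XT = 1.0 (the two Ampere figures are hypotheticals for illustration, not leaks):

```python
# Relative-performance arithmetic from the figures quoted in this thread:
# 2080 Ti ~= 1.42x a 5700 XT at 1440p, Big Navi rumoured at ~1.4x a 2080 Ti,
# and two hypothetical Ampere outcomes (+30% and +40% over the 2080 Ti).

r5700xt   = 1.00
r2080ti   = 1.42 * r5700xt
big_navi  = 1.40 * r2080ti      # rumoured "40% faster than a 2080 Ti"
ampere_30 = 1.30 * r2080ti      # if Ampere only repeats Turing's ~30% jump
ampere_40 = 1.40 * r2080ti      # a slightly bigger generational jump

print(f"Big Navi vs 5700 XT:    +{(big_navi / r5700xt - 1) * 100:.0f}%")  # ~+99%
print(f"Big Navi vs Ampere+30%: {big_navi / ampere_30:.2f}x")             # ~1.08x
print(f"Big Navi vs Ampere+40%: {big_navi / ampere_40:.2f}x")             # 1.00x
```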
 
Here's a recap from Tom that points out that we will see high TDPs from Nvidia this fall:
Tom dropped off the radar for me these last few months; I guess he fell out of my feed. Still, I believe he's credible and smart, with a reasonable track record, and he is really certain that first-gen Ampere will be an overheating, power-hungry dumpster fire on Samsung 8nm.

Even if he's only half-right, that doesn't bode well for the 3000-series.

It will also mean that Nvidia play dirty (using DLSS and black-box dev tools that hinder AMD/Intel performance to get the unfair advantage their inferior hardware needs). In the case of this generation, I suspect that means that proprietary RTX-specific extensions are heavily pushed, rather than DX12's DXR API. Gawd, I hope I'm wrong....
 

Lol inferior hardware. Inferior hardware that's better. Wut?
 
It will also mean that Nvidia play dirty (using DLSS and black-box dev tools that hinder AMD/Intel performance to get the unfair advantage their inferior hardware needs). In the case of this generation, I suspect that means that proprietary RTX-specific extensions are heavily pushed, rather than DX12's DXR API. Gawd, I hope I'm wrong....
I haven't seen the edit.

That's always the marketing battle to be fought, and AMD has been very bad at this in the past. And Nvidia are masters at this game; just ask 3dfx, S3 and Kyro...

But this time there are AMD APUs in all the consoles that matter, and the whole Radeon marketing team has been changed. So I reckon they might have something resembling a strategy.
 
Lol inferior hardware. Inferior hardware that's better. Wut?
It's based on the video. Tom's guess is that Big Navi will perform 50-60% better than a 2080 Ti, whilst his same evidence points towards Nvidia needing insane power draw and cooling just to hit 40% more than a 2080 Ti.

If he's right, it means AMD will have the superior hardware.

I hope you realise this is a speculation thread though and there's no hard evidence on Ampere or Big Navi yet.

Me? I'm sitting on the fence and waiting for real-world testing and independent reviews. I'm old enough to have seen this game between AMD/ATi and Nvidia play out over and over again for 20+ years. It would not be the first time that either company had intentionally leaked misleading performance numbers to throw off the other team.
 
When was the last time AMD had better hardware than Nvidia? From a performance standpoint.
 
The best way to find the answer to your own question is to go to this page:
and use the relative performance graph to find top-of-the-line cards launched in the same year. I found this for you:
 
When was the last time AMD had better hardware than Nvidia? From a performance standpoint.
If you're asking seriously, it depends what you mean:

In terms of performance per dollar or performance per transistor?

Currently they do. AMD's Navi 10 is a reasonable competitor for TU106, but it is faster and cheaper than either the 2060S or the 2070 for a lower transistor count. It can't do raytracing, but arguably TU106 is too weak to do it anyway. I've been wholly disappointed by my 2060S's raytracing performance, even with DLSS trying desperately to hide the fact it's only rendering at 720p. Heck, my 2060S can barely run Quake II or Minecraft with raytracing ;)

In terms of halo/flagships?
  • In 2008, the TeraScale architecture (HD 4000 series) ended a few years of rubbish from ATi/AMD and was better than the 9800 GTX in every way.
  • In 2010, Fermi (GTX 480) was a disaster that memes were born from.
  • In 2012, Kepler (GTX 680) had an edge over the first iteration of GCN (HD 7970) because DX11 was still dominant. As DX12 games appeared, Kepler fell apart badly.
  • In 2014, Kepler on steroids (GTX 780 Ti and Titan) tried to make up the difference, but AMD just made Hawaii (290X), which was like an HD 7970 on steroids, to match.
Nvidia has pretty much held the flagship position since Maxwell (900 series), and has generally offered better performance/watt and performance/transistor, even before you consider that they made consumer versions of their huge enterprise silicon (980 Ti, 1080 Ti, 2080 Ti). The Radeon VII was a poor attempt to do the same, and it wasn't a very good product even if you ignore the price - it was just Vega's failings clocked a bit higher, with more VRAM that games (even 4K games) didn't really need.

So yeah, if you don't remember that the performance crown has been traded back and forth a lot over the years, then you need to take off your special Jensen-Huang spectacles and revisit some nostalgic YouTube comparisons of real games frequently running better on AMD/ATi hardware. Edit: or just look at BoBoOOZ's links.

I don't take sides. If Nvidia puts out rubbish, I'll diss it.
If AMD puts out rubbish, I'll diss that too.
I just like great hardware, ideally at a reasonable price and power consumption.
 
When was the last time AMD had better hardware than Nvidia? From a performance standpoint.
The 7970 clawed performance back, and the original HD 5870 had its contemporary beat.
And later this year :p

Moore's Law Is Dead might as well have quoted me verbatim, though I can't remember where on here I called it.

And if the many rumours are as true as usual, i.e. bits are right but 50% is balls, then it still doesn't look rosy for Nvidia this generation regardless.
 