Wednesday, August 26th 2020

NVIDIA Shares Details About Ampere Founders Edition Cooling & Power Design - 12-pin Confirmed

NVIDIA today shared the design philosophy behind the cooling solution of its next-generation GeForce "Ampere" RTX 3080 / 3090 graphics cards, which we'll hopefully learn more about on September 1, when NVIDIA has scheduled a GeForce Special Event. Part of the new video presentation shows the evolution of NVIDIA's cooling solutions over the years. NVIDIA explains the four pillars behind the design, stressing that thermals are at the heart of its innovation, and that the company looks to explore new ways to use air-cooling more effectively to cool graphics cards. To this effect, the cooling solution of the upcoming GeForce Ampere Founders Edition graphics cards features an airflow-optimized design focused on taking in fresh air, transferring heat to it, and exhausting the warm air as effectively as possible.

The next pillar of NVIDIA's cooling technology innovation is mechanical structure, minimizing the structural components of the cooler without compromising on strength. The new Founders Edition cooler introduces a low-profile leaf spring that leaves more room for a back cover. Next up is reducing electrical clutter, with the introduction of a new 12-pin power connector that is more compact, consolidates cabling, and yet does not affect the card's power delivery capability. The last pillar is product design, which puts NVIDIA's innovations together in an airy new industrial design. The video presentation includes commentary from NVIDIA's product design engineers, who explain the art and science behind the next GeForce. NVIDIA is expected to tell us more about the next-generation GeForce Ampere at a Special Event on September 1.
Although the video does not reveal any picture of the finished product, the bits and pieces of the product's wire-frame model and the PCB wire-frame confirm the design of the Founders Edition, which has been extensively leaked over the past few months. NVIDIA mentioned that all its upcoming cards that come with the 12-pin connector include free adapters to convert standard 8-pin PCIe power connectors to 12-pin, which means there's no additional cost for you. We've heard from several PSU vendors who are working on adding native 12-pin cable support to their upcoming power supplies.

The promise of backwards compatibility has further implications: there is no technical improvement other than the more compact size. If the connector works through an adapter cable with two 8-pins on the other end, its maximum power capability must be 2x 150 W, at the same current rating as defined in the PCIe specification. The new power plug will certainly make graphics cards more expensive, because it is produced in smaller volume, driving up BOM cost, plus the cost of the adapter cable. Several board partners hinted to us that they will continue using traditional PCIe power inputs on their custom designs.
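For readers keeping score, the arithmetic is simple enough to sketch out. The snippet below is a back-of-envelope check, not anything NVIDIA has published; the per-connector ratings are the PCIe specification values and the two-into-one adapter layout is the leaked design.

```python
# Back-of-envelope check of what the 8-pin-to-12-pin adapter implies for board
# power. The per-connector ratings are the PCIe specification values; the
# adapter layout (two 8-pins feeding one 12-pin) is the leaked design, not an
# NVIDIA-confirmed figure.

PCIE_SLOT_W = 75    # power available through the PCIe slot itself
EIGHT_PIN_W = 150   # PCIe spec rating per 8-pin connector

def max_board_power(adapter_8pin_inputs: int = 2) -> int:
    """Upper bound on board power when the 12-pin is fed through the adapter."""
    return adapter_8pin_inputs * EIGHT_PIN_W + PCIE_SLOT_W

print(max_board_power())  # 2 x 150 W + 75 W = 375 W
```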
The NVIDIA presentation follows.


143 Comments on NVIDIA Shares Details About Ampere Founders Edition Cooling & Power Design - 12-pin Confirmed

#51
theoneandonlymrk
bug
What increase? This is still equivalent to using 2 8-pin connectors which we've had for years.
What, the same two eight-pin power connectors that got Vega ripped apart in forums the world over?
Is that OK now, then?

As for the OP: details my asssss, we know about a spring now, cheers.
Posted on Reply
#52
Chrispy_
chodaboy19
People are always afraid of change; the initial reaction always follows the Kübler-Ross model with the following order of emotions: denial, anger, bargaining, depression, and acceptance. :)
I'm not sure I'm ever going to accept a $1400 graphics card. I'll probably end up buying a dozen for work but that's not something I'd willingly spend from my own funds - I am definitely a "performance/Watt" sweet-spot seeker.
Posted on Reply
#53
phanbuey
I hope they release a stubby watercooled version
Posted on Reply
#54
Fluffmeister
phanbuey
I hope they release a stubby watercooled version
And give it a meanish but cute sounding name.... Furry X!
Posted on Reply
#55
ppn
The full-cover water block would look really nice on the fish-tail short PCB.
Posted on Reply
#56
mouacyk
ppn
The full-cover water block would look really nice on the fish-tail short PCB.
Don't worry, EK would have machined it to match. ;)
Posted on Reply
#57
FreedomEclipse
~Technological Technocrat~
Am I the only one who thought the video didn't explain anything of substance? I was waiting for them to talk more about the cooling solution, and they did, but only about how vapour chambers work, not how it would work in relation to the new GPU.

They used big, fancy, complicated words to make what they were doing sound majorly impressive, but they didn't exactly give a whole lot away. I don't know if they did this deliberately to keep people on the edge of their seats, or if they are talking down to me because they don't think I know how the science works.
Posted on Reply
#58
Chrispy_
iO
It's right there at ~2:25. Also no crazy airflow apocalypse in the case as some suggested. Back fan is in pull config. Edit: Or they revised the design and put the second fan on the front like a sane person would do...

OMG, I must have looked away for 5 seconds. He was talking about computational fluid dynamics with an unrelated single-fan blower from the 10-series and prior on screen, then the money shot, which I missed, and then another 35 seconds of unrelated 2080 Ti.

I guess it's not technically clickbait, just a video that has five seconds of content (arguably that one screenshot is all that matters) and 8 minutes, 14 seconds of "padding"....
Posted on Reply
#60
KarymidoN
chodaboy19
No, Nvidia hasn't released any power consumption data. These numbers are just what people are guessing.

But it's assumed the power consumption can reach: (150 W x 2) + 75 W = 375 W
My bad, I made a typo: 300 W from the connector + 75 W from the PCIe slot. I don't understand why the box with the adapter that Seasonic was shipping said "850 W PSU recommended"; that led me to believe these cards would be more power hungry. Most 650 W Gold-level PSUs will do just fine if you're not overclocking these cards, then. Why the 850 W recommendation from Seasonic?
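As a rough sanity check (every input here is an assumption - the speculated 350 W board power, a ~150 W gaming CPU, a generic headroom margin - not anything Seasonic or NVIDIA have stated), the 850 W figure looks like plain conservatism:

```python
# Rough system-power estimate to sanity-check the "850 W recommended" sticker.
# Every input is an assumption (speculated GPU board power, a typical gaming
# CPU, generic headroom for transients/ageing), not a vendor figure.

def recommended_psu_w(gpu_w: float, cpu_w: float, rest_w: float = 75,
                      headroom: float = 0.35) -> float:
    """System draw plus headroom for spikes, ageing and the efficiency sweet spot."""
    return (gpu_w + cpu_w + rest_w) * (1 + headroom)

# Speculated 350 W GPU + ~150 W CPU + drives/fans/RAM:
print(round(recommended_psu_w(gpu_w=350, cpu_w=150)))  # ~776 W -> 850 W is a conservative fit
```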
Posted on Reply
#61
steen
M2B
According to the leaked benchmarks (Time Spy Extreme), the 3090 is up to 56% faster than the 2080 Ti; even if it consumes 350 W, it should end up being 15% or so more efficient
Closer to 12% if 2080ti FE is 250W. If 3090 TBP is 320W, then we're looking at 22%. TSE is probably not a great metric here.
which is not bad if Samsung's 10nm is being used, which is miles behind TSMC's N7P process in performance and efficiency.
If their cooling solution can indeed handle the 350W TDP with acceptable thermals and noise levels then they've done an amazing job.
You think so? If the node shrink was from TSMC 16 (12N) to Samsung 10 (8LPP), they've bumped TBP 40% (350 W speculated) to get a 55% increase in TSE, with a commensurate increase in BOM, etc. The majority of the gain is from increased power draw. Naively, a die-shrunk TU102 would've yielded similar results. I suspect the 24x GDDR6X modules add disproportionately and don't benefit most games yet. It will be great to see TPU perf/Watt gaming results of the GA102/4 SKUs, especially AIB models.
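For what it's worth, here's the arithmetic behind those percentages as a small sketch, using the leaked TSE delta and the speculated board powers quoted above - illustrative only, not measured data:

```python
# The perf/W arithmetic behind the percentages above, using the leaked TSE
# delta (+56%) and the speculated board powers - illustrative only.

def efficiency_gain(perf_ratio: float, new_power_w: float, old_power_w: float) -> float:
    """Relative perf-per-watt improvement of the new card over the old one."""
    return perf_ratio / (new_power_w / old_power_w) - 1

BASE_W = 250  # 2080 Ti FE TDP
for tbp in (350, 320):
    print(f"{tbp} W TBP: {efficiency_gain(1.56, tbp, BASE_W):+.0%}")
# 350 W TBP: +11%   320 W TBP: +22%
```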
Posted on Reply
#62
nangu
FreedomEclipse
Am I the only one who thought the video didn't explain anything of substance? I was waiting for them to talk more about the cooling solution, and they did, but only about how vapour chambers work, not how it would work in relation to the new GPU.

They used big, fancy, complicated words to make what they were doing sound majorly impressive, but they didn't exactly give a whole lot away. I don't know if they did this deliberately to keep people on the edge of their seats, or if they are talking down to me because they don't think I know how the science works.
I thought the same.

So, Nvidia "invented" a new slim power connector, and at the same time designed a three-slot monstrosity. I don't know if they are afraid of RDNA2 or what else, but I think something is happening here, because in that video they focus a lot (with fancy words, à la Apple of course) on heat dissipation and power consumption versus performance.

I hope the mid-tier cards will offer a good performance bump at the same power as the current 2070/2080, at least.
Posted on Reply
#63
Fluffmeister
Let's hope it's RDNA2, but the next game reveals looked surprisingly mediocre; I appreciate that isn't AMD's fault, though.
Posted on Reply
#64
dragontamer5788
nangu
So, Nvidia "invented" a new slim power connector, and at the same time designed a three-slot monstrosity. I don't know if they are afraid of RDNA2 or what else, but I think something is happening here, because in that video they focus a lot (with fancy words, à la Apple of course) on heat dissipation and power consumption versus performance.
No. NVidia has just decided to call the "Titan" the 3090 instead. While still commanding "Titan" class prices.

Same thing with Intel inventing the "i9", when they've called their "i7" the top-end for nearly a decade. By changing the name, you screw with the psychology of humans, and make people think they can afford something higher. After all, the 3080 isn't the high end anymore. That's "just" a $1200 GPU, you're reasonable. The unreasonable people are the ones buying a $2000 3090.

Classic marketing trick. It's called decoy pricing, and NVidia (and really everyone: Intel / AMD / etc.) has been doing it for decades.
Posted on Reply
#65
theoneandonlymrk
nangu
I thought the same.

So, Nvidia "invented" a new slim power connector, and at the same time designed a three-slot monstrosity. I don't know if they are afraid of RDNA2 or what else, but I think something is happening here, because in that video they focus a lot (with fancy words, à la Apple of course) on heat dissipation and power consumption versus performance.

I hope the mid-tier cards will offer a good performance bump at the same power as the current 2070/2080, at least.
I like how one whole pillar is dedicated to the way stuff is assembled, and they only now tidied up power delivery with a new plug. Those adapters are not going to be tidier and are not very hideable.
Surely they can't patent a Molex.

Seasonic could be leaning on a properly built single-rail PSU designed to handle more current per wire = 850 W; most PSUs of any quality would be up to that.

All mine could/have.
Posted on Reply
#66
nangu
dragontamer5788
No. NVidia has just decided to call the "Titan" the 3090 instead. While still commanding "Titan" class prices.

Same thing with Intel inventing the "i9", when they've called their "i7" the top-end for nearly a decade. By changing the name, you screw with the psychology of humans, and make people think they can afford something higher. After all, the 3080 isn't the high end anymore. That's "just" a $1200 GPU, you're reasonable. The unreasonable people are the ones buying a $2000 3090.

Classic marketing trick. It's called decoy pricing, and NVidia (and really everyone: Intel / AMD / etc.) has been doing it for decades.
You're right, I didn't see the 3090 launch as a Titan, and it makes a lot of sense that way.
Posted on Reply
#67
theoneandonlymrk
nangu
You're right, I didn't see the 3090 launch as a Titan, and it makes a lot of sense that way.
That's been a YouTuber rumour for a while now; I could point you to them. Allegedly it's because it's not performant enough relative to what Nvidia expects Big Navi to be. Win-win, since their Titan can come later for more money, obviously.
Posted on Reply
#68
Chrispy_
Fluffmeister
Let's hope it's RDNA2, but the next game reveals looked surprisingly mediocre; I appreciate that isn't AMD's fault, though.
Raytracing is so expensive that, even with Ampere's supposedly 4x DXR performance increase, faking it is still a decent option.

I always like to refer back to this video, when BF5's raytracing was at its highest quality. DICE later improved performance by dialling the RTX quality back a bit, and the patched version was definitely worth the small fidelity loss for such a significant performance increase.

I mean, even when raytracing settings were set unrealistically high by DICE - so high that a 2080Ti was required to hit 60fps at 1080p - it was still only marginally better than faking it with shaders. Yes, if you stopped playing the game and actually just zoomed in on fine details, the DXR renderer was better looking. It's just that the cost was too high for such a subtle improvement.

You only have to play Quake II RTX and experiment with the temporal filtering, GI ray count, and de-noiser to get an idea of just how basic an approximation of raytracing current DXR implementations are. There's almost as much fakery and guesswork going on with DXR as the shader-based fakery we're already used to.
Posted on Reply
#69
steen
nangu
So, Nvidia "invented" a new slim power connector,
Heh, a 12-pin Molex Micro-Fit 3.0. They will have specced the pinout, though.
don't know if they are afraid of RDNA2 or what else
There may be something to this, but I think Nv built 3090 because they could (& price it accordingly). More product tiers. Renxun is keen on Ferrari analogies...
dragontamer5788
No. NVidia has just decided to call the "Titan" the 3090 instead. While still commanding "Titan" class prices
Instead...? Titan will likely release in the Super refresh cycle once 16Gbit GDDR6X modules are available. I wouldn't be surprised if Titan/Quadro are released with 48GB GDDR6 @ ~800GB/s initially.
Posted on Reply
#70
JustAnEngineer
Rumor puts that huge RTX 3090 cooler at 310 mm long, which is 5mm more than I have available in my new case.
Posted on Reply
#71
Jinxed
Chrispy_
Raytracing is so expensive that, even with Ampere's supposedly 4x DXR performance increase, faking it is still a decent option.

I always like to refer back to this video, when BF5's raytracing was at its highest quality. DICE later improved performance by dialling the RTX quality back a bit, and the patched version was definitely worth the small fidelity loss for such a significant performance increase.

I mean, even when raytracing settings were set unrealistically high by DICE - so high that a 2080Ti was required to hit 60fps at 1080p - it was still only marginally better than faking it with shaders. Yes, if you stopped playing the game and actually just zoomed in on fine details, the DXR renderer was better looking. It's just that the cost was too high for such a subtle improvement.

You only have to play Quake II RTX and experiment with the temporal filtering, GI ray count, and de-noiser to get an idea of just how basic an approximation of raytracing current DXR implementations are. There's almost as much fakery and guesswork going on with DXR as the shader-based fakery we're already used to.
And you are basing that on what? There's nothing fake about the current raytracing implementation. It is and always was about resolution. Just like old-gen graphics started at 320x240, went through 640x480 and all the way up to the 4K we have now, raytracing is going down that same path. It's about how many rays per pixel you can cast. Essentially you get a low-res, high-noise picture, which is the basis for GI, reflections or shadows. There's nothing fake about it; you're just dealing with a lack of data and noise, just like the low resolutions in the old days of gaming. Newer generations of cards will have more power and will be able to cast more rays per pixel, improving the "resolution", the actual quality of the raytraced output. Raytracing can produce photorealistic output if you don't need it in real time; that means you can cast hundreds of rays per pixel and wait for it to be computed. Metro Exodus was, if I remember correctly, 1 ray per pixel due to their checkerboarding approach. Denoising makes that into something useful. Even such a small sample rate is already noticeably better than traditional rasterization. Now imagine 4 rays per pixel. That's going to be a massive improvement.
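To put a number on the "rays per pixel as resolution" idea, here's a toy sketch of how Monte Carlo noise scales with sample count - a textbook 1/sqrt(N) relation, not any engine's actual pipeline:

```python
# Toy model of the "rays per pixel as resolution" point: Monte Carlo noise
# (standard error) falls as 1/sqrt(samples per pixel), so going from 1 to 4
# rays per pixel halves the noise the denoiser has to clean up. Not any
# engine's actual implementation.
import math

def relative_noise(rays_per_pixel: int) -> float:
    """Standard error of the per-pixel estimate, relative to 1 ray/pixel."""
    return 1.0 / math.sqrt(rays_per_pixel)

for rpp in (1, 2, 4, 16, 256):
    print(f"{rpp:>3} rays/pixel -> noise x{relative_noise(rpp):.2f}")
# 1 -> x1.00, 4 -> x0.50, 256 -> x0.06 (offline "photorealistic" territory)
```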
Posted on Reply
#72
medi01
Jinxed
Even such a small sample rate is already noticeably better than traditional rasterization.
Oh, is it?
Better than "traditional rasterization" (I guess that means non-DXR) in which game?

Posted on Reply
#73
theoneandonlymrk
Jinxed
And you are basing that on what? There's nothing fake about the current raytracing implementation. It is and always was about resolution. Just like old-gen graphics started at 320x240, went through 640x480 and all the way up to the 4K we have now, raytracing is going down that same path. It's about how many rays per pixel you can cast. Essentially you get a low-res, high-noise picture, which is the basis for GI, reflections or shadows. There's nothing fake about it; you're just dealing with a lack of data and noise, just like the low resolutions in the old days of gaming. Newer generations of cards will have more power and will be able to cast more rays per pixel, improving the "resolution", the actual quality of the raytraced output. Raytracing can produce photorealistic output if you don't need it in real time; that means you can cast hundreds of rays per pixel and wait for it to be computed. Metro Exodus was, if I remember correctly, 1 ray per pixel due to their checkerboarding approach. Denoising makes that into something useful. Even such a small sample rate is already noticeably better than traditional rasterization. Now imagine 4 rays per pixel. That's going to be a massive improvement.
Err, it's real now? No, it's alllllll fake; we're quite far out from real and will need way more than RTX DXR for that.

He probably based that on trying it because that's my opinion as an owner.

It's the software equivalent of 3D TV at the moment: initially, "oohhh, nice", then two weeks later, max, "meh, can't be bothered" and nothing to watch.
Posted on Reply
#74
M2B
medi01
Oh, is it?
Better than "traditional rasterization" (I guess it means non DXR) in which game?


That's not traditional rasterization; that demo in fact uses some form of tracing for its global illumination system.
Posted on Reply
#75
kiriakost
Jism
The PCI-E specifications are so super safe that you can push a lot more through than intended. A capable PSU, connectors, wires and video card can pull way more than the advertised 75/150 W. I mean, even my OC'ed 580 managed to pull 22 A from one single 8-pin connector; it got warm, yes, lol.

But if you think about why Nvidia introduced this "one" connector: it's designed for the enterprise market and simply pushed over to the gaming side. They no longer have to make two different models of cards for enterprise and consumer; the cards that don't pass enterprise quality checks are moved over to the gamer ones. Nothing really gets lost in these markets.
Electrically, there are two major hazards when the cable harness is working at its limits:
a) severe voltage fluctuation, which can cause the card to freeze while gaming;
b) Molex pins can overheat and even get burned.

PSU over-current protection does not cover Molex pins sparking, which is an instantaneous, extremely high-current event.
Anyway, I am not planning to be a joy killer; all I am saying is that extreme stress on electrical parts is a bad idea.
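To put the 22 A anecdote quoted above in perspective, here's a quick back-of-envelope calculation, assuming a 12 V rail and the usual three 12 V contacts on an 8-pin PCIe plug - generic figures, not a measured spec:

```python
# Back-of-envelope look at the 22 A anecdote quoted above: how far past the
# 8-pin's 150 W spec rating that draw is, and what it works out to per 12 V
# contact. Pin count and rail voltage are generic assumptions, not a spec.

RAIL_V = 12.0
EIGHT_PIN_SPEC_W = 150
POWER_CONTACTS = 3   # an 8-pin PCIe plug carries 12 V on three contacts

drawn_w = 22 * RAIL_V            # 264 W through a 150 W-rated connector
per_pin_a = 22 / POWER_CONTACTS  # roughly 7.3 A per contact

print(f"{drawn_w:.0f} W total, {per_pin_a:.1f} A per 12 V pin "
      f"({drawn_w / EIGHT_PIN_SPEC_W:.0%} of the spec rating)")
```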
Posted on Reply