
NVIDIA Shares Details About Ampere Founders Edition Cooling & Power Design - 12-pin Confirmed

So all the heat will be dumped onto the CPU? Wow, how innovative!
 
What increase? This is still equivalent to using 2 8-pin connectors which we've had for years.
What, the same two eight-pin power connectors that got Vega ripped apart in forums the world over?
So that's OK now, right?

As for the OP: details, my ass. We know about a spring now, cheers.
 
People are always afraid of change; the initial reaction always follows the Kübler-Ross model, with the emotions in the following order: denial, anger, bargaining, depression, and acceptance. :)
I'm not sure I'm ever going to accept a $1400 graphics card. I'll probably end up buying a dozen for work but that's not something I'd willingly spend from my own funds - I am definitely a "performance/Watt" sweet-spot seeker.
 
I hope they release a stubby watercooled version
 
The full cover water block would look really nice on the fish-tail short PCB.
 
The full cover water block would look really nice on the fish-tail short PCB.
Don't worry, EK would have machined it to match. ;)
 
Am I the only one who thought the video didn't explain anything of substance? I was waiting for them to talk more about the cooling solution, and they did, but only about how vapour chambers work, not how it would work in relation to the new GPU.

They used big, fancy, complicated words to make what they were doing sound majorly impressive, but they didn't exactly give a whole lot away. I don't know if they did this deliberately to keep people on the edge of their seat, or if they're talking down to me because they don't think I know how the science works.
 
It's right there at ~2:25. Also no crazy airflow apocalypse in the case as some suggested. Back fan is in pull config. Edit: Or they revised the design and put the second fan on the front like a sane person would do...
OMG, I must have looked away for five seconds. He was talking about computational fluid dynamics with an unrelated single-fan blower from the 10-series and prior on screen, then the money shot, which I missed, and then another 35 seconds of unrelated 2080 Ti.

I guess it's not technically clickbait, just a video that has five seconds of content (arguably that one screenshot is all that matters) and 8 minutes, 14 seconds of "padding"....
 
The 12-pin still looks ridiculously stupid and a huge waste given the freakishly large size. Like they couldn't find the real estate on there?
Come again?
 
No, Nvidia hasn't released any power consumption data. These numbers are just what people are guessing.

But it's assumed the power consumption can reach: (150W × 2) + 75W = 375W

My bad, I made a typo: 300W from the connector + 75W from PCIe. I don't understand why the box with the adapter that Seasonic was shipping said "850W PSU recommended"; that led me to believe these cards would be more power-hungry. Most 650W Gold-level PSUs will do just fine if you're not overclocking these cards, then. So why the 850W recommendation from Seasonic?
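For what it's worth, the arithmetic in the thread can be sanity-checked in one place. A minimal sketch, assuming the adapter simply combines two 8-pin feeds at 150W each plus the 75W PCIe slot, and using a purely hypothetical 150W CPU to guess at total system draw:

```python
# Rough power-budget sketch (assumptions: two 8-pin inputs at 150 W each
# feeding the 12-pin adapter, plus 75 W from the PCIe slot).
PIN8_LIMIT_W = 150   # spec limit per 8-pin PCIe power connector
SLOT_LIMIT_W = 75    # spec limit for the PCIe x16 slot

board_limit_w = 2 * PIN8_LIMIT_W + SLOT_LIMIT_W
print(f"Theoretical board limit: {board_limit_w} W")         # 375 W

# Hypothetical rest-of-system budget (150 W CPU + 75 W for drives, fans,
# RAM, etc.) -- one way an 850 W recommendation still leaves headroom.
system_estimate_w = board_limit_w + 150 + 75
print(f"Rough full-system estimate: {system_estimate_w} W")  # 600 W
```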
 
According to the leaked benchmarks (Time Spy Extreme), the 3090 is up to 56% faster than the 2080 Ti; even if it consumes 350W, it should end up being 15% or so more efficient.
Closer to 12% if the 2080 Ti FE is 250W. If the 3090's TBP is 320W, then we're looking at 22%. TSE is probably not a great metric here.

which is not bad if Samsung's 10nm is being used, a process that is miles behind TSMC's N7P in performance and efficiency.
If their cooling solution can indeed handle the 350W TDP with acceptable thermals and noise levels then they've done an amazing job.
You think so? If the node shrink was from TSMC 16(12N) to Samsung 10(8LPP), they've bumped TBP 40% (350W speculated) to get a 55% increase in TSE, with commensurate increase in BOM, etc. The majority of the gain is from increased power draw. Naively, a die shrunk TU102 would've yielded similar results. I suspect the 24x GDDR6X modules are adding disproportionately & don't benefit most games yet. Will be great to see TPU perf/watt gaming results of GA102/4 SKUs esp AIB models.
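For reference, the perf/W figures above follow from a simple ratio. A quick sketch using the rumoured numbers (leaked TSE scaling and speculated board power, nothing confirmed):

```python
# Perf-per-watt comparison from the rumoured figures discussed above
# (leaked Time Spy Extreme scaling and speculated board power -- not
# confirmed specs).
def perf_per_watt_gain(speedup, new_power_w, old_power_w):
    """Relative perf/W improvement of the new card over the old one."""
    return speedup / (new_power_w / old_power_w) - 1

print(f"{perf_per_watt_gain(1.56, 350, 250):.0%}")  # ~11-12% better at 350 W
print(f"{perf_per_watt_gain(1.56, 320, 250):.0%}")  # ~22% better at 320 W
```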
 
Am I the only one who thought the video didn't explain anything of substance? I was waiting for them to talk more about the cooling solution, and they did, but only about how vapour chambers work, not how it would work in relation to the new GPU.

They used big, fancy, complicated words to make what they were doing sound majorly impressive, but they didn't exactly give a whole lot away. I don't know if they did this deliberately to keep people on the edge of their seat, or if they're talking down to me because they don't think I know how the science works.

I thought the same.

So, Nvidia "invented" a new slim power connector, and at the same time designed a three slot monstruosity. I don't know if they are affraid of RDNA2 or what else, but I think something is happening here because they try to focus a lot (with fancy words ala Apple of course) on heat dissipation and power consumption for performance in that video.

I hope the mid-tier cards will offer a good performance bump at the same power as the current 2070/2080, at least.
 
Let's hope it's RDNA2, but the next game reveals looked surprisingly mediocre, though I appreciate that isn't AMD's fault.
 
So, Nvidia "invented" a new slim power connector, and at the same time designed a three slot monstruosity. I don't know if they are affraid of RDNA2 or what else, but I think something is happening here because they try to focus a lot (with fancy words ala Apple of course) on heat dissipation and power consumption for performance in that video.

No. NVidia has just decided to call the "Titan" the 3090 instead. While still commanding "Titan" class prices.

Same thing with Intel inventing the "i9", when they've called their "i7" the top-end for nearly a decade. By changing the name, you screw with the psychology of humans, and make people think they can afford something higher. After all, the 3080 isn't the high end anymore. That's "just" a $1200 GPU, you're reasonable. The unreasonable people are the ones buying a $2000 3090.

Classic marketing trick. It's called decoy pricing, and NVidia (and really everyone: Intel, AMD, etc.) have been doing it for decades.
 
I tought the same.

So, Nvidia "invented" a new slim power connector, and at the same time designed a three slot monstruosity. I don't know if they are affraid of RDNA2 or what else, but I think something is happening here because they try to focus a lot (with fancy words ala Apple of course) on heat dissipation and power consumption for performance in that video.

I hope the mid-tier cards will offer a good performance bump at the same power as the current 2070/2080, at least.
I like how one whole tier is dedicated to the way the thing is assembled, and they've only now tidied up power delivery with a new plug. Those adapters are not going to be tidier and are not very hideable.
Surely they can't patent a Molex connector.

Seasonic could be leaning on a properly built single-rail PSU designed to handle more current per wire, hence the 850W figure; most quality PSUs would be up to that.

All mine could/have.
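On the current-per-wire point, the load per pin is easy to sanity-check. A rough sketch, assuming six of the twelve pins carry +12V and that roughly 300W comes through the connector (the pin count and power split are my assumptions, not anything NVIDIA or Seasonic have stated):

```python
# Rough current-per-pin check for the rumoured 12-pin connector.
# Assumptions: 6 of the 12 pins are +12 V (the rest ground) and the
# connector delivers ~300 W, with the remaining 75 W coming from the slot.
CONNECTOR_POWER_W = 300
RAIL_VOLTAGE_V = 12.0
POWER_PINS = 6

total_current_a = CONNECTOR_POWER_W / RAIL_VOLTAGE_V   # 25 A total
per_pin_a = total_current_a / POWER_PINS               # ~4.2 A per 12 V pin
print(f"{total_current_a:.1f} A total, {per_pin_a:.1f} A per 12 V pin")
```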
 
No. NVidia has just decided to call the "Titan" the 3090 instead. While still commanding "Titan" class prices.

Same thing with Intel inventing the "i9", when they've called their "i7" the top-end for nearly a decade. By changing the name, you screw with the psychology of humans, and make people think they can afford something higher. After all, the 3080 isn't the high end anymore. That's "just" a $1200 GPU, you're reasonable. The unreasonable people are the ones buying a $2000 3090.

Classic marketing trick. It's called decoy pricing, and NVidia (and really everyone: Intel, AMD, etc.) have been doing it for decades.

You're right, I didn't see the 3090 launch as a Titan, and it makes a lot of sense that way.
 
You're right, I didn't see the 3090 launch as a Titan, and it makes a lot of sense that way.
That's been a YouTuber rumour for a while now; I could point you to them. Allegedly it's because it's not performant enough against what Nvidia expect Big Navi to be. Win-win, since their Titan can come later for more money, obviously.
 
Let's hope it's RDNA2, but the next game reveals looked surprisingly mediocre, though I appreciate that isn't AMD's fault.
Raytracing is so expensive that even with Ampere's supposed 4x DXR performance increase, we're still looking at faking it being a decent option.

I always like to refer back to this video, when BF5's raytracing was at its highest quality. DICE later improved performance by dialling the RTX quality back a bit, and the patched version was definitely worth the small fidelity loss for such a significant performance increase.

I mean, even when raytracing settings were set unrealistically high by DICE - so high that a 2080Ti was required to hit 60fps at 1080p - it was still only marginally better than faking it with shaders. Yes, if you stopped playing the game and actually just zoomed in on fine details, the DXR renderer was better looking. It's just that the cost was too high for such a subtle improvement.

You only have to play Quake II RTX and experiment with the temporal filtering, GI ray count, and de-noiser to get an idea of just how basic an approximation of raytracing current DXR implementations are. There's almost as much fakery and guesswork going on with DXR as there is with the shader-based fakery we're already used to.
 
So, Nvidia "invented" a new slim power connector,

Heh, 12-pin Molex Micro-Fit 3.0. They will have specced the pinout, though.

don't know if they are afraid of RDNA2 or what else
There may be something to this, but I think Nv built 3090 because they could (& price it accordingly). More product tiers. Renxun is keen on Ferrari analogies...

No. NVidia has just decided to call the "Titan" the 3090 instead. While still commanding "Titan" class prices
Instead...? Titan will likely release in the Super refresh cycle once 16Gbit GDDR6X modules are available. I wouldn't be surprised if Titan/Quadro are released with 48GB GDDR6 @ ~800GB/s initially.
 
Rumor puts that huge RTX 3090 cooler at 310 mm long, which is 5mm more than I have available in my new case.
 
Raytracing is so expensive that even with Ampere's supposed 4x DXR performance increase, we're still looking at faking it being a decent option.

I always like to refer back to this video, when BF5's raytracing was at its highest quality. DICE later improved performance by dialling the RTX quality back a bit, and the patched version was definitely worth the small fidelity loss for such a significant performance increase.

I mean, even when raytracing settings were set unrealistically high by DICE - so high that a 2080Ti was required to hit 60fps at 1080p - it was still only marginally better than faking it with shaders. Yes, if you stopped playing the game and actually just zoomed in on fine details, the DXR renderer was better looking. It's just that the cost was too high for such a subtle improvement.

You only have to play Quake II RTX and experiment with the temporal filtering, GI ray count, and de-noiser to get an idea of just how basic an approximation of raytracing current DXR implementations are. There's almost as much fakery and guesswork going on with DXR as there is with the shader-based fakery we're already used to.
And you are basing that on what? There's nothing fake about the current raytracing implementation. It is, and always was, about the resolution. Just like old-gen graphics started at 320x240, went through 640x480 and all the way up to the 4K we have now, raytracing is going down that same path. It's about how many rays per pixel you can cast: essentially you get a low-res, high-noise picture, which is the basis for GI, reflections or shadows. There's nothing fake about it; you're just dealing with the lack of data and the noise, just like the low resolutions in the old days of gaming.

Newer generations of cards will have more power and will be able to cast more rays per pixel, improving the "resolution", i.e. the actual quality of the raytraced output. Raytracing can produce photorealistic output if you don't need it in real time: you can cast hundreds of rays per pixel and wait for them to be computed. Metro Exodus was, if I remember correctly, 1 ray per pixel due to their checkerboarding approach, and denoising makes that into something useful. Even such a small sample rate is already noticeably better than traditional rasterization. Now imagine 4 rays per pixel. That's going to be a massive improvement.
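To illustrate the rays-per-pixel point with a toy example (not any game's actual renderer, just a minimal Monte Carlo sketch with a made-up pixel value), the noise in a per-pixel estimate shrinks roughly with the square root of the ray count:

```python
import random

# Toy illustration of rays-per-pixel vs noise: estimate a pixel's
# brightness by averaging random ray "hits" and watch the error shrink
# roughly as 1/sqrt(rays per pixel).
TRUE_BRIGHTNESS = 0.3   # hypothetical ground-truth value for the pixel

def estimate_pixel(rays_per_pixel):
    hits = sum(1 for _ in range(rays_per_pixel) if random.random() < TRUE_BRIGHTNESS)
    return hits / rays_per_pixel

random.seed(0)
for rpp in (1, 4, 16, 64, 256):
    samples = [estimate_pixel(rpp) for _ in range(1000)]
    mean = sum(samples) / len(samples)
    noise = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
    print(f"{rpp:>3} rays/pixel -> noise ~ {noise:.3f}")
```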
 
Even such a small sample rate is already noticeably better than traditional rasterization.
Oh, is it?
Better than "traditional rasterization" (I guess it means non DXR) in which game?

 
And you are basing that on what? There's nothing fake about the current raytracing implementation. It is, and always was, about the resolution. Just like old-gen graphics started at 320x240, went through 640x480 and all the way up to the 4K we have now, raytracing is going down that same path. It's about how many rays per pixel you can cast: essentially you get a low-res, high-noise picture, which is the basis for GI, reflections or shadows. There's nothing fake about it; you're just dealing with the lack of data and the noise, just like the low resolutions in the old days of gaming. Newer generations of cards will have more power and will be able to cast more rays per pixel, improving the "resolution", i.e. the actual quality of the raytraced output. Raytracing can produce photorealistic output if you don't need it in real time: you can cast hundreds of rays per pixel and wait for them to be computed. Metro Exodus was, if I remember correctly, 1 ray per pixel due to their checkerboarding approach, and denoising makes that into something useful. Even such a small sample rate is already noticeably better than traditional rasterization. Now imagine 4 rays per pixel. That's going to be a massive improvement.
Err, it's real now? No, it's alllllll fake. We're quite far from real, and we'll need way more than RTX DXR for that.

He probably based that on trying it, because that's my opinion as an owner.

It's the software equivalent of 3D TV at the moment: initially "oooh, nice", then two weeks later at most it's "meh, can't be bothered" and there's nothing to watch.
 
Oh, is it?
Better than "traditional rasterization" (I guess it means non DXR) in which game?


That's not traditional rasterization; that demo in fact uses some form of tracing for its global illumination system.
 