Wednesday, August 26th 2020

NVIDIA Shares Details About Ampere Founders Edition Cooling & Power Design - 12-pin Confirmed

NVIDIA today shared the design philosophy behind the cooling solution of its next-generation GeForce "Ampere" RTX 3080 / 3090 graphics cards, which we'll hopefully learn more about on September 1, when NVIDIA has scheduled a GeForce Special Event. Part of the new video presentation shows the evolution of NVIDIA's cooling solutions over the years. NVIDIA explains the four pillars behind the design, stressing that thermals are at the heart of its innovation, and that the company looks to explore new ways to use air-cooling more effectively on graphics cards. To this end, the cooling solution of the upcoming GeForce Ampere Founders Edition graphics cards features an airflow-optimized design focused on taking in fresh air, transferring heat to it, and exhausting the warm air in the most effective manner.

The next pillar of NVIDIA's cooling technology innovation is mechanical structure: minimizing the structural components of the cooler without compromising on strength. The new Founders Edition cooler introduces a low-profile leaf spring that leaves more room for a back cover. Next up is reducing electrical clutter, with the introduction of a new 12-pin power connector that is more compact and consolidates cabling, yet does not affect the card's power delivery capability. The last pillar is product design, which puts NVIDIA's innovations together in an airy new industrial design. The video presentation includes commentary from NVIDIA's product design engineers, who explain the art and science behind the next GeForce. NVIDIA is expected to tell us more about the next-generation GeForce Ampere at a Special Event on September 1.
Although the video does not reveal any picture of the finished product, the bits and pieces of the product's wire-frame model, and the PCB wire-frame confirm the design of the Founders Edition which has been extensively leaked over the past few months. NVIDIA mentioned that all its upcoming cards that come with 12-pin connector include free adapters to convert standard 8-pin PCIe power connectors to 12-pin, which means there's no additional cost for you. We've heard from several PSU vendors who are working on adding native 12-pin cable support to their upcoming power supplies.

The promise of backwards compatibility has further implications: there is no technical improvement other than the more compact size. If the connector works through an adapter cable with two 8-pins on the other end, its maximum power capability must be 2x 150 W, at the same current rating as defined in the PCIe specification. The new power plug will certainly make graphics cards more expensive, because it is produced in smaller volume, driving up BOM cost, plus the cost of the adapter cable. Several board partners hinted to us that they will continue using traditional PCIe power inputs on their custom designs.
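For reference, the arithmetic behind that ceiling (wattage limits as defined in the PCIe specification; the variable names are purely illustrative):

```python
# Back-of-the-envelope check of the 12-pin adapter's power ceiling,
# using the standard PCIe auxiliary power limits.
PCIE_8PIN_W = 150   # PCIe spec limit per 8-pin connector
PCIE_SLOT_W = 75    # power available through the PCIe slot itself

# The adapter merges two 8-pin inputs into one 12-pin output, so the
# connector cannot be rated higher than what feeds it:
adapter_max_w = 2 * PCIE_8PIN_W              # 300 W from the connector
total_board_w = adapter_max_w + PCIE_SLOT_W  # 375 W including slot power

print(adapter_max_w, total_board_w)  # 300 375
```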
The NVIDIA presentation follows.


143 Comments on NVIDIA Shares Details About Ampere Founders Edition Cooling & Power Design - 12-pin Confirmed

#76
londiste
Chrispy_
I always like to refer back to this video, when BF5's raytracing was at its highest quality. DICE later improved performance by dialling the RTX quality back a bit, and the patched version was definitely worth the small fidelity loss for such a significant performance increase
While you are right about fidelity not being worth the performance hit in a multiplayer title, "at its highest quality" is rather misleading. It would be very difficult to see the difference between then and now; the optimizations were primarily technical, not achieved by giving back image quality. By the way, many if not most of these scenes do show clear artifacting in screen-space reflections.
M2B
That's not traditional rasterization; that demo in fact uses some form of ray tracing for its global illumination system.
Epic has been intentionally vague about whether raytracing was used. Lumen definitely does support raytracing, and it is highly optimized in a way similar to CryEngine's Neon Noir demo - raytraced GI or AO that falls back to a voxel-based solution as soon as it can. There have been reports that the demo was not using hardware acceleration for RT, which is kind of strange considering the PS5 is supposed to have it.
That was not the point of the demo, though - enormous amounts of polygons, streamed in real time from fast storage, were the point and the show-off feature.
Posted on Reply
#77
theoneandonlymrk
kiriakost
Electrically, there are two major hazards when the cable harness is working at its limits:
a) severe voltage fluctuation, which can cause the card to freeze while gaming.
b) Molex pins can overheat and even burn.

PSU over-current protection does not cover Molex pins sparking, which is an instant, extremely-high-current event.
Anyway, I am not planning to be a joy killer; all I am saying is that this extreme stress on electrical parts is a bad idea.
It's typical Nvidia bullshit; they heard 12V PSUs were going to be a thing and decided to gazump everyone again by inventing it! Tout de suite.
Same as how they "invented" raytracing some time after the first guys did, and after Microsoft announced DXR.
Posted on Reply
#79
Krzych
KarymidoN
Basically they're saying the max power draw from a card with one of the new 12-pin connectors is 375W (2x 150W 8-pin + 75W from the PCIe slot), right?

Let's see what the real power draw is (after reviews); I hope they just left a lot more capacity on the connector to be used.
The two 8-pin connections of the 12-pin cable go into the PSU; this is different from a 150W 8-pin you plug into the GPU. These are the sockets that normally feed your two 8-pin cables, rated up to 300W each. So theoretically the 12-pin is good for up to 600W.

This doesn't necessarily hint at anything about Ampere's power draw, but it could mean that even the Founders Edition is going to be able to pull over 375W. Most likely not at reference TDP, but after raising the power target to the cap. Theoretically there would be no need for dual 8-pin if it were capped at 320W like the 2080 Ti. Using two sockets on the PSU instead of one is certainly some kind of compatibility concern; they wouldn't go for it if it wasn't needed. I wonder if there is going to be a single 8-pin version for lower-end cards like the 3070, assuming they get the 12-pin too.
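In code, the two readings side by side (the 300 W per PSU-side socket figure is this post's claim about modular PSUs, not an official spec rating):

```python
# Sketch of the two readings of the 12-pin adapter's headroom.
GPU_SIDE_8PIN_W = 150    # PCIe spec limit per 8-pin at the card end
PSU_SIDE_SOCKET_W = 300  # claimed rating per modular PSU-side socket

conservative_w = 2 * GPU_SIDE_8PIN_W    # spec-limited reading: 300 W
theoretical_w = 2 * PSU_SIDE_SOCKET_W   # PSU-side reading: 600 W

print(conservative_w, theoretical_w)  # 300 600
```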
Posted on Reply
#80
Jinxed
theoneandonlymrk
Err, it's real now; no, it's alllllll fake; we're quite far out from real and will need way more than RTX DXR for that.

He probably based that on trying it because that's my opinion as an owner.
Like I said, nothing fake about it. Raytracing is in fact quite simple. The same logic, light/luminance equations and PBR materials apply to professional renderers and real-time raytracing in games. It's no coincidence that you can accelerate raytracing in professional renderers using Turing GPUs. You can see what the noisy ground truth output looks like in this video:

It will still take a while to get photorealistic real-time output of course, as that may require an order of magnitude more samples (rays) per pixel. But there's nothing fake about it even now. That's just a lie from someone in denial of the technology.
Posted on Reply
#81
M2B
londiste
Epic has been intentionally vague about whether raytracing was used. Lumen definitely does support raytracing, and it is highly optimized in a way similar to CryEngine's Neon Noir demo - raytraced GI or AO that falls back to a voxel-based solution as soon as it can. There have been reports that the demo was not using hardware acceleration for RT, which is kind of strange considering the PS5 is supposed to have it.
That was not the point of the demo, though - enormous amounts of polygons, streamed in real time from fast storage, were the point and the show-off feature.
I'm honestly not even sure if it's possible for the GI system in that demo (in its current form) to utilize the RT units to accelerate the process.
Apparently it's different from the triangle-based RT solution Nvidia uses.
Posted on Reply
#82
Jinxed
M2B
I'm honestly not even sure if it's possible for the GI system in that demo (in its current form) to utilize the RT units to accelerate the process.
Apparently it's different from the triangle-based RT solution Nvidia uses.
Actually it's just an extension of light probes. You can see the typical artifacts of light probes (changes in the brightness of surfaces) when moving through the tunnel from the big cave. The only difference is that the shading on the triangles looks much more realistic. But that is due to the high-poly-count feature in Lumen, with the triangles being almost sub-pixel sized, currently supported only on the PS5 because it has such a ridiculously fast custom-made SSD. The light is still incorrect. It's the illusion that is significantly better.
Posted on Reply
#83
M2B
Jinxed
Like I said, nothing fake about it. Raytracing is in fact quite simple. The same logic, light/luminance equations and PBR materials apply for professional renderers and real-time raytracing in games. It's no coincidence that you can accelerate raytracing in professional renderers using Turing GPUs. You can see how the noisy ground truth output looks like in this video:

It will still take a while to get photorealistic real-time output of course, as that may require an order of magnitude more samples (rays) per pixel. But there's nothing fake about it even now. That's just a lie from someone in denial of the technology.
Ray-traced shadows, ambient occlusion and global illumination don't need that many samples to look good because of their softer look and nature; 1 or 2 samples per pixel should do the job with good enough denoising. When it comes to reflections, though, the story is a bit different, and more samples are needed for a convincing look.
Nvidia is apparently working on more efficient denoising methods which could potentially improve performance and even visuals.
Posted on Reply
#84
kiriakost
theoneandonlymrk
It's typically Nvidia bullshit, they heard 12V PSU were going to be a thing and decided to gerzump everyone's arses again by inventing it! Toot sweet.
Same as they invented raytracing after sometime after the first guy's did And after Microsoft announced DxR.
I will disagree: from the moment that NVIDIA supplies a dual 8-pin (6+2) adapter cable, the industry is not pushed to follow in their footsteps.
PSU development and manufacturing does not happen in just a few months.

As gossip or speculation, I will say that on NVIDIA's road-map, the next GPU after this one will use less power than that.
But this is material for a conversation no sooner than May 2021.
Posted on Reply
#85
Chrispy_
Jinxed
And you are basing that on what? There's nothing fake about the current raytracing implementation. It is and always was about the resolution. Just like old-gen graphics started at 320x240 and went through 640x480 all the way up to the 4K we have now, raytracing is going down that same path. It's about how many rays per pixel you can cast. Essentially you get a low-res, high-noise picture, which is the basis for GI, reflections or shadows. There's nothing fake about it; you're just dealing with the lack of data and noise, just like the low resolutions in the old times of gaming. Newer generations of cards will have more power and will be able to cast more rays per pixel, improving the "resolution", the actual quality of the raytraced output. Raytracing can produce photorealistic output if you don't need it in real time. That means you can cast hundreds of rays per pixel and wait for it to be computed. Metro Exodus was, if I remember correctly, 1 ray per pixel due to their checkerboarding approach. Denoising makes that into something useful. Even such a small sample rate is already noticeably better than traditional rasterization. Now imagine 4 rays per pixel. That's gonna be a massive improvement.
Basing that on the example I specifically singled out, because it lets you mess around with settings and turn off the fakery to see what's really going on under the hood.

Raytracing a scene fully on my 2060S at native resolution still takes 20 seconds to produce a single, decent-quality frame, so there are two main tricks used to generate a convincing frame in the fraction of a second a game actually has:
  1. Temporal denoiser + blur
    This is based on previous frame data, so with the textures turned off, the only image you're seeing is what's raytraced. The top image was taken within a few frames of me moving the camera; the bottom image was the desired final result that took 3-5 seconds to 'fade' in as the temporal denoiser had more previous frames to work from. Since you are usually moving when you're actually playing a game, the typical image quality of the entire experience is this 'dark smear': a laggy, splotchy mess that visibly runs at a fraction of your framerate. It's genuinely amazing how close to a useful image it's generating in under half a second, but we're still a couple of orders of magnitude too slow to replace baked shadowmaps for full GI.
  2. Resolution hacks and intelligent sampling zones to draw your eye to shiny things at the cost of detail accuracy (think of it as a crude VRS for DXR)
    Here's an image from the same room, zoomed a lot, and the part of the image I took it from for reference:
    A - rendered at 1/4 resolution
    B - transparency: this is a reflection on water, an old-school 1995 DirectX 3.0 dither hack rather than real transparency calculations
    C - the actual resolution of traced rays - each bright dot in region C is a ray that has been traced in just 4-bit chroma, and all the dark space is essentially guesswork/temporal patterns tiled and rotated based on the frequency of those ray hits. If you go and look at a poorly-lit corner of the room you can clearly see the repeated tiling of these 'best guess' dot patterns, and they have nothing to do with the noisier, more random bright specks that are the individual ray samples.

So, combine those two things. First, we have very low ray density that is used as a basis for region definitions, which can then be approximated per frame using a library of tile-based approximations that aren't real raytracing, just more fakery that's stamped out as a best guess based on the very low ray coverage for that geometry region. If I had to pick a rough ballpark figure, I'd probably say that 3% of the frame data in that last image is raytraced samples and 97% of it is faked interpolation between regions, potato-stamped to fill in the gaps with an approximation. This works fine as long as you just want an approximation, because the human brain does great work in filling in the gaps, especially when it's all in motion. Anyway, once it's tile-stamped a best-guess frame together out of those few ray samples, those barely-raytraced frames are blurred together in a buffer over the course of several hundred frames. There will be visual artifacts like in my first point anywhere you have new data on screen, because temporal filtering of on-screen data means that anything that has appeared from offscreen is a very low-resolution, mostly fake mess for the first few dozen frames.
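A toy sketch of that temporal accumulation, assuming a simple exponential moving average (the constants and noise model here are made up for illustration; real denoisers also reproject history via motion vectors and reject stale samples):

```python
import random

# One pixel, one noisy 1-sample estimate per frame, blended into history
# with an exponential moving average. A freshly revealed pixel starts far
# from the truth and "fades in" over many frames, as described above.
TRUE_RADIANCE = 0.6   # hypothetical converged value for this pixel
ALPHA = 0.05          # blend weight for the newest frame

random.seed(1)
history = 0.0         # no history yet: the pixel just appeared on screen
for frame in range(300):
    noisy_sample = TRUE_RADIANCE + random.uniform(-0.5, 0.5)  # ~1 ray/px
    history = (1 - ALPHA) * history + ALPHA * noisy_sample

# After a few hundred frames the average sits near the true value.
print(round(history, 2))
```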

Don't get me wrong, QuakeII RTX is a technological marvel - it's truly incredible how close to a realtime raytraced game we can get with so many hacks and fakery to spread that absolutely minimal, almost insignificant amount of true raytracing around. Focus on the bits that matter, do it at a fraction of the game resolution and only in areas that are visibly detailed. Blur the hell out of the rest using tens of previous frames and a library of pre-baked ray tiles to approximate a raytraced result until you have hundreds of frames of data to actually use for real result.

We're just not at a level where we can afford to do it at full resolution, for the whole screen at once, and for regions offscreen so that movement doesn't introduce weird visual artifacts. Ten times faster than a 2080 Ti might get us within those constraints, and another couple of orders of magnitude might allow us to bring the temporal filter down from a hundred frames for a useful image to single-digit numbers of frames. It's still not realtime, but if people can run games at 100fps, 25fps of raytraced data with temporal interpolation should be very hard to notice.

So yeah, real raytracing is going to need 1000x more power than a 2080 Ti, but even what we have right now is enough to get the ball rolling, if you don't look too closely and hide the raytracing among lots of shader-based lies too. Let's face it, shader-based lies get us 90% there for almost free, and if the limited raytracing can get us 95% of the way there without hurting performance, people are going to be happy that there's a noticeable improvement without really caring about how it happened - they'll just see DXR on/off side by side and go "yeah, DXR looks nicer".
Posted on Reply
#86
Jinxed
Chrispy_
Firstly we have very low ray density that is used as a basis for region definitions that can then be approximated per frame using a library of tile-based approximations that aren't real raytracing, just more fakery that's stamped out as a best guess based on the very low ray coverage for that geometry region.
That is a complete lie. A "library of tile-based approximation" is completely made up. There are denoisers at work, which you are obviously unable or unwilling to comprehend. The noisy ground truth output you posted is exactly the noisy ground truth that you can see in this video:

There is nothing fake about it. There is no tile-based whatever thing you made up used to process that. It uses denoisers. In fact, in most games those denoisers are driven by the Turing tensor cores. Also, what you're missing is the fact that the denoisers are temporal, taking advantage of data from multiple older frames to produce each new frame. And there is nothing fake or weird about VRS either. If you have a constrained budget, which rays per pixel is at the moment and will be for the foreseeable future, you spend it where it matters most. So of course the areas with more noticeable details get more rays per pixel. Why the hell not?

And worst of all for you, there is actually an introductory video by Nvidia themselves with a Bethesda dev going into detail about the Quake RTX:

The dev even said in the video: "No tricks, this is actually real. We're not faking it."

Nice try, but you failed.
Posted on Reply
#87
mouacyk
Nice to know that a $1200 2080 Ti renders RTX at 320x200 to reach 60fps at true 1080p raster resolutions. Intel already did this with Q2 in late 2000, but at around 20fps at 480p native res. Of course, they didn't have a hybrid rendering pipeline then, so no raster tricks to fill in the gaps. That's what DXR is for, and NVidia exploited it quite well.
Posted on Reply
#88
theoneandonlymrk
Jinxed
There is no tile-based whatever made up thing used to process that. It uses denoisers. In fact in most games those denoisers are driven by the Turing tensor cores. Also what you're missing is the fact the the denoisers are temporal, taking advantage of data from multiple older frames to produce each new frame
Tiles = older frames? Err.

All rasterization and all raytraced graphics are fake by remit.
Posted on Reply
#89
dragontamer5788
Jinxed
It uses denoisers.
For those in the graphic arts community, there's a term called Unbiased Rendering. Why? Because even raytracers are "fake" to some degree. Unbiased rendering is closest to a physical simulation, by my understanding. However, biased rendering (which includes many raytracing effects) is faster and, in many cases, converges faster as well. This leads to realistic-looking simulated drawings, but nothing like the reality of actually simulating 10,000+ unbiased rays per pixel.

Temporal denoising is solidly in the "biased rendering" camp, no matter how you look at it. There's no physical reality that says we should smear light particles backwards in time. Yes, the effect looks good on modern systems and it's efficient to do, but there's no physical principle behind temporal denoising. Light just doesn't "time travel" and average with future photons that hit the same area.

---------

Ambient Occlusion is another funny biased-rendering technique. It's completely fake. Corners do NOT absorb light into invisible black holes. But we use AO techniques because it makes shadows look deeper and with more contrast, which aids the video game player significantly.
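As a rough illustration of why unbiased rendering needs so many samples: the error of an N-sample Monte Carlo average shrinks only as 1/sqrt(N). The integrand below is a made-up stand-in for a pixel, not real light transport:

```python
import random

# Toy Monte Carlo "pixel": average N noisy samples around a true value
# of 0.5 and watch the error shrink as the sample count grows.
random.seed(0)

def pixel_estimate(n_samples):
    # average of n noisy, unbiased samples of the true value 0.5
    return sum(0.5 + random.uniform(-0.5, 0.5)
               for _ in range(n_samples)) / n_samples

for n in (1, 16, 256, 4096):
    err = abs(pixel_estimate(n) - 0.5)
    print(n, round(err, 4))
```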
Posted on Reply
#90
Jinxed
theoneandonlymrk
Tiles =older frames ? Err.

All rasterization and all raytraced graphics are fake by remit.
Yes, older frames. The rays are intentionally not cast at the same position within the pixel every frame. Have you ever heard about MSAA stochastic sampling? I guess not. If you ignore the still images, which Chrispy is trying to use in a fallacious way to convince people who don't know any better, and instead look at the noisy ground truth output in a video like the one I've been posting, you can see what's going on. While the pattern in one still frame looks like a checkerboard, in the next frame it will be offset a bit, in simplified terms, to sample the areas that were not sampled in the previous frame. You can send rays to different points within the area represented by a pixel to get better information about what the pixel looks like. That is the "samples/rays per pixel" we are talking about. But if you have the motion vectors for the scene, along with the raytracing samples from previous frames, you can use those as well. The downside is that the result may look a bit more blurry if you move the camera around very fast, since there may not be data for the temporal denoiser to work with. Luckily this is not such a problem because of how humans perceive motion; in fact many game engines have taken advantage of this for decades, rendering scenes or parts of scenes at lower resolution while you move the camera around.
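A toy sketch of that per-frame jittering, assuming purely random sub-pixel offsets (real renderers typically use low-discrepancy sequences for the offsets; the scene here is a made-up hard edge running through one pixel):

```python
import random

# Each frame, the ray goes through a different offset inside the pixel.
# Averaging the frames converges on the full-pixel result, even though
# any single frame only ever sees one point.
def shade(x):
    # hypothetical radiance across the pixel's width [0, 1):
    # a hard geometric edge splits the pixel in half
    return 1.0 if x > 0.5 else 0.0

random.seed(2)
accum = []
for frame in range(1000):
    jitter = random.random()   # new sub-pixel sample position each frame
    accum.append(shade(jitter))

print(round(sum(accum) / len(accum), 2))   # ~0.5: the edge gets resolved
```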
Posted on Reply
#91
M2B
What the hell does "Fake Ray Tracing" even mean, lol. Everybody knows that if you want to do real-time RT you have to sacrifice the ray count and rely on denoising to fill the damn scene. There is no such thing as "Fake Ray Tracing".
Hundreds or even thousands of rays/px would be needed to do real-time RT without denoising, which is practically impossible to achieve.
Posted on Reply
#92
Jinxed
dragontamer5788
For those in the graphic arts community, there's a term called Unbiased Rendering. Why? Because even Raytracers are "fake" to some degree. Unbiased rendering is closest to a physical simulation by my understanding. However, biased-rendering (which includes many raytracing effects), are faster, and in many cases, converge faster as well. This leads to realistic-looking simulated drawings, but nothing like the reality of actually simulating 10,000+ unbiased rays per pixel.

Temporal denoising is solidly in the "biased rendering" camp, no matter how you look at it. There's no physical reality that says we should smear light particles backwards in time. Yes, the effect looks good on modern systems and its efficient to do, but there's no physical principle to temporal denoising. Light just doesn't "time travel" and average with future light photons that hit the same area.

---------

Ambient Occlusion is another funny biased-rendering technique. Its completely fake. Corners do NOT absorb light into invisible black holes. But we use AO techniques because it makes shadows look deeper and with more contrast, which aids the video game player significantly.
Of course there is a physical basis for temporal denoising. But not where you are looking for it. It's on the observer side - the human eye. We are doing temporal denoising all the time. Lightbulbs are actually pulsing, depending on your electricity network frequency, which differs between countries; in Europe it is 50 Hz (60 Hz in North America). They blink so fast that the eye averages the blinks and perceives a constant light source. The same goes for computer screens - old CRTs and even new LCD/IPS/whatever screens. The individual pixels are either blinking or traversing from one color to another. That is the pixel response time everyone is talking about. Our eyes are averaging that as well. And it has many side effects.

Ambient Occlusion is not a ray tracing technique. That is a classic rasterization thing. Raytraced ambient occlusion, which is in fact the global illumination everyone talks about, replaces it with actual real results. You can see the difference in this video at 2:20:
Posted on Reply
#93
Caring1
chodaboy19
No, Nvidia hasn't released any power consumption data. These numbers are just what people are guessing.

But it's assumed the power consumption can reach: (150W x 2) + 75W = 375W
And in my opinion the power consumption will be closer to two 6-pin connectors plus the PCIe slot.
KarymidoN
My bad, I made a typo: 300W connector + 75W from the PCIe slot. I don't understand why the box with the adapter that Seasonic was shipping said "850W PSU recommended"; that led me to believe these cards would be more power hungry. Most 650W Gold-level PSUs will do just fine if you're not overclocking these cards, so why the 850W recommendation from Seasonic?
It's not the watts, it's the amps that require the bigger-capacity PSU.
Posted on Reply
#94
dragontamer5788
Jinxed
Ambient Occlusion is not a ray tracing technique. That is a classic rasterization thing. Raytraced ambient occlusion, which is in fact the global illumination everyone talks about, replaces it with actual real results. You can see the difference in this video at 2:20:
You clearly don't understand Raytraced AO.

Let's look at an actual picture of an actual corner of a room. (Particularly, this blogpost: www.nothings.org/gamedev/ssao/).



Literally, a picture of the upper corner of some dude's house. This is a real photograph.

Now let's look at AO at 2:20. Not the 2D screen-space AO image, but the NVidia "Raytraced AO" image:




AO is an approximation, something that works pretty well in a lot of cases, but kind of fails once you know how it fakes things. However, regardless of how "fake" AO is, it looks cinematic and "cool". People like seeing corners with higher levels of contrast.

AO exaggerates the shadows of corners. Sometimes it's correct: some corners in reality are very similar to AO corners. Take this corner from the photograph:



This corner is what AO is trying to replicate. However, corners don't always look like this in reality.

EDIT: Besides, this corner is cooler and more interesting to look at. So let's make all video game corners look like this, even if it's not entirely realistic. Making things look cool is almost the point of video games anyway.
Posted on Reply
#95
Jinxed
dragontamer5788


AO is an approximation, something that works pretty good in a lot of cases, but kind of fails if you know how its "fakery". However, regardless of how "fake" AO is, it looks cinematic and "cool". People like seeing corners with higher levels of contrast.
This actually shows that it is you who does not understand how global illumination works. The amount and location of light and shadow also depends on the materials. You cannot compare the reflection of a corner in some random dude's house with the one in the Nvidia demo, because you have no way of knowing whether the materials are even remotely similar, with similar luminance etc. Take it to the extreme and imagine a corner of a room made completely of mirrors. Would that look anything like the random dude's corner? No.

The images in that demo can only be compared with each other - the Screen Space Ambient Occlusion (SSAO, rasterization) against the raytraced ambient occlusion - because they are based on the same materials.

Also the fact that classic AO sometimes looks right is the same thing - materials. For some materials, it may actually be almost correct.
Posted on Reply
#96
theoneandonlymrk
Lol
Jinxed
Yes, older frames. The rays are intentionally not cast at the same position within the pixel every frame. Have you ever heard about MSAA stochastic sampling? I guess not. If you ignore the still images, which Chrispy is trying to use in a fallacious way to convince people who don't know any better, and instead look at the noisy ground truth output in a video like the one I've been posting, you can see what's going on. While the pattern in one still frame looks like a checkerboard, in the next frame it will be offset a bit, in simplified terms, to sample the areas that were not sampled in the previous frame. You can send rays to different points within the area represented by a pixel to get better information about what the pixel looks like. That is the "samples/rays per pixel" we are talking about. But if you have the motion vectors for the scene, along with the raytracing samples from previous frames, you can use those as well. The downside is that the result may look a bit more blurry if you move the camera around very fast, since there may not be data for the temporal denoiser to work with. Luckily this is not such a problem because of how humans perceive motion; in fact many game engines have taken advantage of this for decades, rendering scenes or parts of scenes at lower resolution while you move the camera around.
You realise that to gain that long-term badge I have happily haunted every bit of tech news here and everywhere else, plus some genuine hands-on (why the f##£ not), and even though I effin' hate Nvidia's marketing tactics and company-buying too, I still own an RTX card, gits.

I saw all of that already I assure you.

I had a gaming pc with six GPU in once, just cos(Batman eek).
Posted on Reply
#97
Jinxed
theoneandonlymrk
Lol

You realise that to gain that long-term badge I have happily haunted every bit of tech news here and everywhere else, plus some genuine hands-on (why the f##£ not), and even though I effin' hate Nvidia's marketing tactics and company-buying too, I still own an RTX card, gits.

I saw all of that already I assure you.

I had a gaming pc with six GPU in once, just cos.
It does not seem so from your posts.
Posted on Reply
#98
theoneandonlymrk
Jinxed
It does not seem so from your posts.
Straw's being clutched, meeeoow.

It's late , you're lucky.

So, in short, we're all getting on board with GPU developers deciding, via AI supersampling, RTX etc., what the game developers actually wanted to show you?

I'll try it but probably only like it online competitive.
Posted on Reply
#99
Jinxed
theoneandonlymrk
Straw's being clutched, meeeoow.

It's late , you're lucky.
So are you saying quantity > quality? Like the number of posts you make is actually more important than WHAT'S IN THOSE POSTS? Cute. FYI, I've been in tech for a very long time. But this is the internet; anyone can say anything, be it the truth or completely made up. Believe me at your own peril. For the same reason I do not believe you, as the quality of your posts does not support your claims.
Posted on Reply
#100
dragontamer5788
Jinxed
This actually shows that it is you who does not understand how global illumination works. The amount and location of light and shadow also depends on the materials. You cannot compare the reflection of a corner in some random dude's house with the one in the Nvidia demo, because you have no way of knowing whether the materials are even remotely similar, with similar luminance etc.
Look, I know you're getting egged on by some other users right now. So I'll try to cut you some slack here.

Let me just give you a few links on this issue:

* docs.blender.org/manual/en/2.79/render/blender_render/world/ambient_occlusion.html
Ambient Occlusion is a sophisticated ray-tracing calculation which simulates soft global illumination shadows by faking darkness perceived in corners and at mesh intersections, creases, and cracks, where ambient light is occluded, or blocked.

There is no such thing as AO in the real world; AO is a specific not-physically-accurate (but generally nice-looking) rendering trick. It basically samples a hemisphere around each point on the face, sees what proportion of that hemisphere is occluded by other geometry, and shades the pixel accordingly.
* rmanwiki.pixar.com/display/REN/PxrOcclusion
PxrOcclusion is a non-photorealistic integrator that can be used to render ambient occlusion, among other effects.
* docs.arnoldrenderer.com/display/A5AFMUG/Ambient+Occlusion
Ambient occlusion is an approximation of global illumination that emulates the complex interactions between the diffuse inter-reflections of objects. While not physically accurate (for that use full global illumination), this shader is fast and produces a realistic effect.
All three 3D programs above are raytracers implementing raytraced ambient occlusion, and all three state that the effect is "fake" to some degree. No one who knows what they're talking about would claim that ambient occlusion is physically accurate.
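The hemisphere-sampling described in the Blender docs above can be sketched like this (the scene is a made-up single occluder blocking half the hemisphere; a real implementation would trace rays against actual scene geometry):

```python
import math
import random

# Hemisphere-sampling ambient occlusion for one surface point: cast rays
# over the hemisphere above the point, count how many hit nearby
# geometry, and darken by the occluded fraction.
random.seed(3)

def occluded(direction):
    # hypothetical scene: a wall blocks every ray leaving on the -x side
    return direction[0] < 0

def ambient_occlusion(n_rays=1000):
    hits = 0
    for _ in range(n_rays):
        # uniform direction on the upper hemisphere (z >= 0):
        # for a sphere, cos(theta) is uniformly distributed
        z = random.random()
        phi = random.uniform(0, 2 * math.pi)
        r = math.sqrt(1 - z * z)
        d = (r * math.cos(phi), r * math.sin(phi), z)
        if occluded(d):
            hits += 1
    return 1.0 - hits / n_rays   # 1.0 = fully open, 0.0 = fully occluded

print(round(ambient_occlusion(), 2))   # ~0.5: half the hemisphere is blocked
```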
Posted on Reply