
NVIDIA Shares Details About Ampere Founders Edition Cooling & Power Design - 12-pin Confirmed

Joined
Dec 24, 2008
Messages
1,614 (0.37/day)
Location
Volos, Greece
System Name ATLAS
Processor Q6600 QUAD
Motherboard ASUS P5QC
Cooling ProlimaTech Armageddon
Memory HYPER-X KHX1600C8D3T1K2 /4GX PC3-12800 1600MHz
Video Card(s) Sapphire HD 5770 VAPOR-X
Storage WD Raptors 73Gb - Raid1 10.000rpm
Display(s) DELL U2311H
Case HEC Compucase CI-6919 Full tower (2003) moded .. hec-group.com.tw
Audio Device(s) X-Fi Music + mods Audigy front Panel (full working)
Power Supply HIPER 4M780 PE 980W Peak
Mouse MX510
Keyboard Microsoft Digital media 3000
Software Win 7 Pro x64 ( Retail )
The PCI-E specifications are so conservative that you can push a lot more through than intended. A capable PSU, connectors, wires and video card can pull way more than the advertised 75/150W. I mean, even my overclocked 580 managed to pull 22A from one single 8-pin connector - it got warm, yes, lol.

But if you think about it, the reason Nvidia introduced this one connector is that it was designed for the enterprise market and simply pushed over to the gaming side. They no longer have to make two different models of card for enterprise and consumer. The chips that do not pass enterprise quality binning are moved over to the gamer cards, so nothing really gets lost in either market.
Electrically, there are two major hazards when the cable harness is working at its limits:
a) severe voltage fluctuation, which can cause the card to freeze during gaming;
b) Molex pins, which can overheat and even burn.

PSU over-current protection does not cover Molex pins sparking; that is an instantaneous, extremely high-current event.
Anyway, I am not trying to be a killjoy; all I am saying is that putting extreme stress on electrical parts is a bad idea.
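As a back-of-envelope sketch of why an overloaded connector gets warm (the 3-live-pin layout and the 150W rating are common figures assumed here for illustration, not datasheet values):

```python
# An 8-pin PCIe power connector carries 12 V on (typically) 3 live pins
# and is rated for 150 W total. These figures are assumptions for
# illustration, not datasheet values.
RATED_WATTS = 150.0
VOLTS = 12.0
LIVE_PINS = 3

def amps_per_pin(total_watts, volts=VOLTS, pins=LIVE_PINS):
    """Current per pin if the load splits evenly across the live pins."""
    return total_watts / volts / pins

# At the official rating, each pin carries a modest current:
print(round(amps_per_pin(RATED_WATTS), 2))  # 4.17

# The overclocked GTX 580 anecdote above (22 A through one connector)
# puts each pin well above that figure - hence the warm pins:
print(round(22.0 / LIVE_PINS, 2))  # 7.33
```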
 
Joined
Feb 3, 2017
Messages
2,702 (1.94/day)
Processor i5-8400
Motherboard ASUS ROG STRIX Z370-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-3200 CL16
Video Card(s) Gainward GeForce RTX 2080 Phoenix
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Logitech G700
Keyboard Corsair K60
I always like to refer back to this video, from when BF5's raytracing was at its highest quality. DICE later improved performance by dialling the RTX quality back a bit, and the patched version was definitely worth the small fidelity loss for such a significant performance increase.
While you are right about the fidelity not being worth the performance hit in a multiplayer title, "at its highest quality" is rather misleading. It would be very difficult to see the difference between then and now; the optimizations were primarily technical, not a reduction in image quality. By the way, many if not most of these scenes show clear artifacting in the screen-space reflections.
That's not traditional rasterization; that demo in fact uses some form of ray tracing for its global illumination system.
Epic has been intentionally vague about whether raytracing was used. Lumen definitely does support raytracing, and it is highly optimized in a way similar to CryEngine's Neon Noir demo - raytraced GI or AO that falls back to a voxel-based solution as soon as it can. There have been reports that the demo was not using hardware acceleration for RT, which is kind of strange considering the PS5 is supposed to have it.
That was not the point of the demo - enormous polygon counts, streamed in real time from fast storage, were the point and the showcase feature.
 
Joined
Mar 10, 2010
Messages
8,409 (2.15/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R7 3800X@4.350/525/ Intel 8750H
Motherboard Crosshair hero7 @bios 2703/?
Cooling 360EK extreme rad+ 360$EK slim all push, cpu Monoblock Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in two sticks./16Gb
Video Card(s) Sapphire refference Rx vega 64 EK waterblocked/Rtx 2060
Storage Samsung Nvme Pg981, silicon power 1Tb samsung 840 basic as a primocache drive for, WD2Tbgrn +3Tbgrn,
Display(s) Samsung UAE28"850R 4k freesync, LG 49" 4K 60hz ,Oculus
Case Lianli p0-11 dynamic
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Iksu force fx
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
Electrically, there are two major hazards when the cable harness is working at its limits:
a) severe voltage fluctuation, which can cause the card to freeze during gaming;
b) Molex pins, which can overheat and even burn.

PSU over-current protection does not cover Molex pins sparking; that is an instantaneous, extremely high-current event.
Anyway, I am not trying to be a killjoy; all I am saying is that putting extreme stress on electrical parts is a bad idea.
It's typical Nvidia bullshit: they heard 12V-only PSUs were going to be a thing and decided to gazump everyone's arses again by "inventing" it! Tout de suite.
Same as they "invented" raytracing some time after the first guys did, and after Microsoft announced DXR.
 
Joined
Dec 29, 2010
Messages
1,548 (0.43/day)
Joined
Aug 31, 2016
Messages
73 (0.05/day)
Basically they're saying the max power draw from a card with one of the new 12-pin connectors is 375W (2x 150W 8-pin + 75W from the PCIe slot), right?

Let's see what the real power draw is (after reviews); I hope they simply left a lot more headroom on the connector to be used.
The two 8-pin connections of the 12-pin cable go into the PSU; this is different from the 150W 8-pin you plug into the GPU. These are the slots that normally each power a 2x 8-pin cable, rated up to 300W. So theoretically the 12-pin is good for up to 600W.

This doesn't necessarily hint at anything about Ampere's power draw, but it could mean that even the Founders Edition will be able to pull over 375W. Most likely not at the reference TDP, but after raising the power target to its cap. Theoretically there would be no need for dual 8-pin if it were capped at 320W like the 2080 Ti. Using two slots on the PSU instead of one is certainly some kind of compatibility concern; they wouldn't go for it if it weren't needed. I wonder if there will be a single 8-pin version for lower-end cards like the 3070, assuming they get the 12-pin too.
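For what it's worth, the arithmetic being discussed in the last few posts can be written down directly (the per-connector ratings are the commonly quoted ones, assumed here rather than taken from any NVIDIA document):

```python
# Commonly quoted power ratings (assumptions, not official Ampere figures):
PCIE_SLOT_W = 75        # power the PCIe slot itself can deliver
GPU_8PIN_W = 150        # rating of a GPU-side 8-pin connector
PSU_SIDE_8PIN_W = 300   # what one PSU-side slot typically feeds (a 2x 8-pin cable)

# Conservative reading: 12-pin counted like two GPU-side 8-pins, plus the slot.
conservative_budget = 2 * GPU_8PIN_W + PCIE_SLOT_W
print(conservative_budget)  # 375

# Theoretical ceiling: the adapter hangs off two PSU-side slots.
theoretical_ceiling = 2 * PSU_SIDE_8PIN_W
print(theoretical_ceiling)  # 600
```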
 
Last edited:
Joined
Jan 21, 2020
Messages
108 (0.35/day)
Err, it's real now; no, it's alllllll fake. We're quite far out from real, and we will need way more than RTX/DXR to get there.

He probably based that on trying it, because that's my opinion as an owner.
Like I said, nothing fake about it. Raytracing is in fact quite simple: the same logic, light/luminance equations and PBR materials apply to professional renderers and to real-time raytracing in games. It's no coincidence that you can accelerate raytracing in professional renderers using Turing GPUs. You can see what the noisy ground-truth output looks like in this video:

It will still take a while to get photorealistic real-time output, of course, as that may require an order of magnitude more samples (rays) per pixel. But there's nothing fake about it even now; that's just a lie from someone in denial of the technology.
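The "samples per pixel" point is easy to demonstrate: a pixel's value is a Monte Carlo estimate, and its noise shrinks roughly with the square root of the ray count. A toy sketch (the luminance function is invented purely for illustration):

```python
import random

def luminance(x):
    """Toy per-ray result; its true average over [0, 1) is 1/3."""
    return x * x

def render_pixel(rays, rng):
    """Estimate the pixel as the average of `rays` random samples."""
    return sum(luminance(rng.random()) for _ in range(rays)) / rays

rng = random.Random(0)
TRUE_VALUE = 1.0 / 3.0

# Average absolute error over many pixels, for growing ray budgets:
for rays in (1, 16, 1024):
    err = sum(abs(render_pixel(rays, rng) - TRUE_VALUE) for _ in range(2000)) / 2000
    print(rays, round(err, 3))  # error drops as the ray count grows
```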
 

M2B

Joined
Jun 2, 2017
Messages
284 (0.22/day)
Location
Iran
Processor Intel Core i5-8600K @4.9GHz
Motherboard MSI Z370 Gaming Pro Carbon
Cooling Cooler Master MasterLiquid ML240L RGB
Memory XPG 8GBx2 - 3200MHz CL16
Video Card(s) Asus Strix GTX 1080 OC Edition 8G 11Gbps
Storage 2x Samsung 850 EVO 1TB
Display(s) BenQ PD3200U
Case Thermaltake View 71 Tempered Glass RGB Edition
Power Supply EVGA 650 P2
Epic has been intentionally vague about whether raytracing was used. Lumen definitely does support raytracing, and it is highly optimized in a way similar to CryEngine's Neon Noir demo - raytraced GI or AO that falls back to a voxel-based solution as soon as it can. There have been reports that the demo was not using hardware acceleration for RT, which is kind of strange considering the PS5 is supposed to have it.
That was not the point of the demo - enormous polygon counts, streamed in real time from fast storage, were the point and the showcase feature.
I'm honestly not even sure it's possible for the GI system in that demo (in its current form) to utilize the RT units to accelerate the process.
Apparently it's different from the triangle-based RT solution from Nvidia.
 
Joined
Jan 21, 2020
Messages
108 (0.35/day)
I'm honestly not even sure it's possible for the GI system in that demo (in its current form) to utilize the RT units to accelerate the process.
Apparently it's different from the triangle-based RT solution from Nvidia.
Actually, it's just an extension of light probes. You can see the typical artifacts of light probes (changes in the brightness of surfaces) when moving through the tunnel from the big cave. The only difference is that the shading on the triangles looks much more realistic. But that is due to the high-poly-count feature in Lumen, with the triangles being almost sub-pixel sized - currently supported only on the PS5, because it has such a ridiculously fast custom-made SSD. The light is still incorrect; it's the illusion that is significantly better.
 

M2B

Joined
Jun 2, 2017
Messages
284 (0.22/day)
Location
Iran
Processor Intel Core i5-8600K @4.9GHz
Motherboard MSI Z370 Gaming Pro Carbon
Cooling Cooler Master MasterLiquid ML240L RGB
Memory XPG 8GBx2 - 3200MHz CL16
Video Card(s) Asus Strix GTX 1080 OC Edition 8G 11Gbps
Storage 2x Samsung 850 EVO 1TB
Display(s) BenQ PD3200U
Case Thermaltake View 71 Tempered Glass RGB Edition
Power Supply EVGA 650 P2
Like I said, nothing fake about it. Raytracing is in fact quite simple: the same logic, light/luminance equations and PBR materials apply to professional renderers and to real-time raytracing in games. It's no coincidence that you can accelerate raytracing in professional renderers using Turing GPUs. You can see what the noisy ground-truth output looks like in this video:

It will still take a while to get photorealistic real-time output, of course, as that may require an order of magnitude more samples (rays) per pixel. But there's nothing fake about it even now; that's just a lie from someone in denial of the technology.

Ray-traced shadows, ambient occlusion and global illumination don't need that many samples to look good, because of their softer look and nature; 1 or 2 samples per pixel should do the job with good enough denoising. When it comes to reflections, though, the story is a bit different, and more samples are needed for a convincing look.
Nvidia is apparently working on more efficient denoising methods, which could potentially improve performance and even visuals.
 
Last edited:
Joined
Dec 24, 2008
Messages
1,614 (0.37/day)
Location
Volos, Greece
System Name ATLAS
Processor Q6600 QUAD
Motherboard ASUS P5QC
Cooling ProlimaTech Armageddon
Memory HYPER-X KHX1600C8D3T1K2 /4GX PC3-12800 1600MHz
Video Card(s) Sapphire HD 5770 VAPOR-X
Storage WD Raptors 73Gb - Raid1 10.000rpm
Display(s) DELL U2311H
Case HEC Compucase CI-6919 Full tower (2003) moded .. hec-group.com.tw
Audio Device(s) X-Fi Music + mods Audigy front Panel (full working)
Power Supply HIPER 4M780 PE 980W Peak
Mouse MX510
Keyboard Microsoft Digital media 3000
Software Win 7 Pro x64 ( Retail )
It's typical Nvidia bullshit: they heard 12V-only PSUs were going to be a thing and decided to gazump everyone's arses again by "inventing" it! Tout de suite.
Same as they "invented" raytracing some time after the first guys did, and after Microsoft announced DXR.
I will disagree. From the moment NVIDIA suggests the use of a dual 8-pin (6+2) adapter cable, the industry is not being pushed to follow in their footsteps.
PSU development and manufacturing does not happen in just a few months.

As gossip or speculation, I will say that on NVIDIA's roadmap, the GPU after this one will use less power.
But that is material for a conversation no sooner than May 2021.
 
Joined
Feb 20, 2019
Messages
1,735 (2.68/day)
System Name PowerEdge R730 DRS Cluster
Processor 4x Xeon E5-2698 v3
Cooling Many heckin screamy bois
Memory 480GB ECC DDR4-2133
Video Card(s) Matrox G200eR2
Storage SD Card. Yep, really no other local storage.
Display(s) It's probably a couple of boring Dell Ultrasharps and a sacrificial laptop.
Case 39U 6-rack server room with HEVC and 44KVA UPS
Mouse Maybe
Keyboard Yes!
Software ESXi 6.5 U3
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
And you are basing that on what? There's nothing fake about the current raytracing implementation. It is, and always was, about the resolution. Just like old-gen graphics started at 320x240 and went through 640x480 all the way up to the 4K we have now, raytracing is going down that same path. It's about how many rays per pixel you can cast. Essentially you get a low-res, high-noise picture, which is the basis for GI, reflections or shadows. There's nothing fake about it; you're just dealing with a lack of data and with noise, just like the low resolutions in the old times of gaming. Newer generations of cards will have more power and will be able to cast more rays per pixel, improving the "resolution" - the actual quality of the raytraced output. Raytracing can produce photorealistic output if you don't need it in real time; that means you can cast hundreds of rays per pixel and wait for the result to be computed. Metro Exodus was, if I remember correctly, 1 ray per pixel due to its checkerboarding approach, and denoising makes that into something useful. Even such a small sample rate is already noticeably better than traditional rasterization. Now imagine 4 rays per pixel. That's going to be a massive improvement.
I'm basing it on the example I specifically singled out, because it lets you mess around with settings and turn off the fakery to see what's really going on under the hood.

Raytracing a scene fully on my 2060S at native resolution still takes 20 seconds to produce a single decent-quality frame, so there are two main tricks used to generate a convincing frame in the fraction of a second available:

  1. Temporal denoiser + blur
    This is based on previous-frame data, so with the textures turned off, the only image you're seeing is what's raytraced. The top image was taken within a few frames of me moving the camera; the bottom image is the desired final result, which took 3-5 seconds to 'fade in' as the temporal denoiser gained more previous frames to work from. Since you are usually moving when you're actually playing a game, the typical image quality of the entire experience is this dark-smeared, laggy, splotchy mess that visibly runs at a fraction of your framerate. It's genuinely amazing how close to a useful image it generates in under half a second, but we're still a couple of orders of magnitude too slow to replace baked shadowmaps for full GI.
    1598470147007.png


  2. Resolution hacks and intelligent sampling zones that draw your eye to shiny things at the cost of detail accuracy (think of it as a crude VRS for DXR)
    Here's an image from the same room, zoomed in a lot, and the part of the image I took it from for reference:
    A - rendered at 1/4 resolution
    B - transparency; this is a reflection on water, an old-school 1995 DirectX 3.0 dither hack rather than real transparency calculations
    C - the actual resolution of the traced rays; each bright dot in region C is a ray that has been traced in just 4-bit chroma, and all the dark space is essentially guesswork - temporal patterns tiled and rotated based on the frequency of those ray hits. If you look at a poorly-lit corner of the room you can clearly see the repeated tiling of these 'best guess' dot patterns; they have nothing to do with the noisier, more random bright specks that are the individual ray samples.
    1598470939384.png

    1598471157624.png
So, combine those two things. First, we have a very low ray density that is used as the basis for region definitions, which can then be approximated per frame using a library of tile-based approximations that aren't real raytracing - just more fakery, stamped out as a best guess based on the very low ray coverage for that geometry region. If I had to pick a rough ballpark figure, I'd say that 3% of the frame data in that last image is raytraced samples and 97% is faked interpolation between regions, potato-stamped to fill in the gaps with an approximation. This works fine as long as you just want an approximation, because the human brain does great work filling in the gaps, especially when it's all in motion. Anyway, once it has tile-stamped a best-guess frame together out of those few ray samples, each of those barely-raytraced frames is blurred into a buffer over the course of several hundred frames. There will be visual artifacts like in my first point anywhere you have new data on screen, because temporal filtering of on-screen data means anything that has appeared from offscreen is a very low-resolution, mostly fake mess for the first few dozen frames.

Don't get me wrong, Quake II RTX is a technological marvel - it's truly incredible how close to a realtime raytraced game we can get with so many hacks and so much fakery to spread that absolutely minimal amount of true raytracing around. Focus on the bits that matter, do it at a fraction of the game resolution and only in areas that are visibly detailed. Blur the hell out of the rest using tens of previous frames and a library of pre-baked ray tiles to approximate a raytraced result until you have hundreds of frames of data to use for a real one.

We're just not at a level where we can afford to do it at full resolution, for the whole screen at once, and for regions offscreen so that movement doesn't introduce weird visual artifacts. 10x faster than a 2080 Ti might get us those constraints, and another couple of orders of magnitude might bring the temporal filter down from a hundred frames for a useful image to single-digit numbers of frames. It's still not realtime, but if people can run games at 100fps, 25fps of raytraced data with temporal interpolation should be very hard to notice.

So yeah, real raytracing is going to need 1000x more power than a 2080 Ti, but even what we have right now is enough to get the ball rolling, if you don't look too closely and you hide the raytracing among plenty of shader-based lies too. Let's face it, shader-based lies get us 90% of the way there almost for free, and if limited raytracing can get us to 95% without hurting performance, people are going to be happy that there's a noticeable improvement without really caring about how it happened - they'll just see DXR on/off side by side and go "yeah, DXR looks nicer".
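The temporal buffer described above behaves like an exponential moving average: each new, noisy raytraced frame is blended into a history image. A minimal sketch of that idea (the blend factor and noise model are illustrative, not what any engine actually ships):

```python
import random

def accumulate(frames, alpha=0.1):
    """Blend each new noisy frame into a running history (per-pixel EMA)."""
    history = frames[0]
    for frame in frames[1:]:
        history = (1 - alpha) * history + alpha * frame
    return history

rng = random.Random(0)
TRUE_VALUE = 0.5  # the "correct" brightness of this pixel

# 200 frames of the same pixel, each a noisy low-ray-count sample:
frames = [TRUE_VALUE + rng.uniform(-0.4, 0.4) for _ in range(200)]

mean_single_frame_error = sum(abs(f - TRUE_VALUE) for f in frames) / len(frames)
accumulated_error = abs(accumulate(frames) - TRUE_VALUE)

# The accumulated history is far steadier than a typical individual frame,
# at the cost of lagging behind when the scene changes:
print(round(mean_single_frame_error, 3), round(accumulated_error, 3))
```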
 
Last edited:
Joined
Jan 21, 2020
Messages
108 (0.35/day)
First, we have a very low ray density that is used as the basis for region definitions, which can then be approximated per frame using a library of tile-based approximations that aren't real raytracing - just more fakery, stamped out as a best guess based on the very low ray coverage for that geometry region.
That is a complete lie. A "library of tile-based approximations" is completely made up. There are denoisers at work, which you are obviously unable or unwilling to comprehend. The noisy ground-truth output you posted is exactly the noisy ground truth you can see in this video:

There is nothing fake about it. There is no tile-based whatever thing you made up being used to process it. It uses denoisers. In fact, in most games those denoisers are driven by the Turing tensor cores. Also, what you're missing is the fact that the denoisers are temporal, taking advantage of data from multiple older frames to produce each new frame. And there is nothing fake or weird about VRS either: if you have a constrained budget - which rays per pixel is at the moment, and will be for the foreseeable future - you spend it where it matters most. So of course the areas with more noticeable detail get more rays per pixel. Why the hell not?

And worst of all for you, there is actually an introductory video by Nvidia themselves, with a Bethesda dev going into detail about Quake II RTX:

The dev even said in the video: "No tricks, this is actually real. We're not faking it."

Nice try, but you failed.
 
Joined
Feb 1, 2013
Messages
633 (0.22/day)
System Name Gentoo64 /w Cold Coffee
Processor 9900K 5.2GHz 1.288v
Motherboard EVGA Z370 Micro
Cooling Custom 480mm H2O, Raystorm Pro, Nemesis GTX, EK-XRES
Memory 2x8GB Trident Z 4133-C16-2T 1.45v
Video Card(s) MSI Seahawk EK X 1080Ti 2114/12420 1.081v
Storage Samsung 970 EVO 500GB, 860 QVO 2TB
Display(s) XB271HU 165Hz
Case FT03-T
Audio Device(s) SBz
Power Supply SS-850KM3
Mouse G502
Keyboard G710+
Software Gentoo 64-bit, Windows 7/10 64-bit
Benchmark Scores Always only ever very fast
Nice to know that a $1200 2080 Ti renders RTX at 320x200, at up to 60fps, for a true 1080p raster resolution. Intel already did this with Quake II in late 2000, but at around 20fps at a native 480p. Of course, they didn't have a hybrid rendering pipeline back then, so there were no raster tricks to fill in the gaps. That's what DXR is for, and Nvidia exploited it quite well.
 
Joined
Mar 10, 2010
Messages
8,409 (2.15/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R7 3800X@4.350/525/ Intel 8750H
Motherboard Crosshair hero7 @bios 2703/?
Cooling 360EK extreme rad+ 360$EK slim all push, cpu Monoblock Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in two sticks./16Gb
Video Card(s) Sapphire refference Rx vega 64 EK waterblocked/Rtx 2060
Storage Samsung Nvme Pg981, silicon power 1Tb samsung 840 basic as a primocache drive for, WD2Tbgrn +3Tbgrn,
Display(s) Samsung UAE28"850R 4k freesync, LG 49" 4K 60hz ,Oculus
Case Lianli p0-11 dynamic
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Iksu force fx
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
There is no tile-based whatever thing you made up being used to process it. It uses denoisers. In fact, in most games those denoisers are driven by the Turing tensor cores. Also, what you're missing is the fact that the denoisers are temporal, taking advantage of data from multiple older frames to produce each new frame.
Tiles = older frames? Err.

All rasterization and all raytraced graphics are fake by remit.
 
Joined
Apr 24, 2020
Messages
557 (2.56/day)
It uses denoisers.
For those in the graphic-arts community, there's a term called unbiased rendering. Why? Because even raytracers are "fake" to some degree. Unbiased rendering is the closest to a physical simulation, by my understanding. However, biased rendering (which includes many raytracing effects) is faster and in many cases converges faster as well. This leads to realistic-looking simulated drawings, but nothing like the reality of actually simulating 10,000+ unbiased rays per pixel.

Temporal denoising is solidly in the "biased rendering" camp, no matter how you look at it. There's no physical reality that says we should smear light particles backwards in time. Yes, the effect looks good on modern systems and it's efficient to do, but there's no physical principle behind temporal denoising. Light just doesn't "time travel" and average itself with future photons that hit the same area.

---------

Ambient occlusion is another funny biased-rendering technique. It's completely fake: corners do NOT absorb light into invisible black holes. But we use AO techniques because they make shadows look deeper and give them more contrast, which aids the video-game player significantly.
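The "fakery" is easy to see in how simple the underlying recipe is. A toy 2D sketch of ray-based AO (the geometry, ray length and counts are all invented for illustration):

```python
import math
import random

RAY_LENGTH = 10.0  # occlusion rays are deliberately short - part of the "fakery"

def ambient_occlusion(px, wall_x, rays=1000, seed=0):
    """Fraction of half-circle rays from a floor point that reach the wall at x=wall_x."""
    rng = random.Random(seed)
    blocked = 0
    for _ in range(rays):
        angle = rng.uniform(0.0, math.pi)  # random direction over the upper half-circle
        dx = math.cos(angle)
        # A short ray heading left far enough to reach the wall is blocked:
        if px + dx * RAY_LENGTH < wall_x:
            blocked += 1
    return blocked / rays  # 0.0 = fully open, 1.0 = fully occluded

open_floor = ambient_occlusion(px=50.0, wall_x=0.0)  # far from the wall
in_corner = ambient_occlusion(px=0.5, wall_x=0.0)    # right next to the wall
# AO darkens a point by its occlusion fraction, so the corner point is drawn darker:
print(in_corner > open_floor)  # True
```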
 
Joined
Jan 21, 2020
Messages
108 (0.35/day)
Tiles = older frames? Err.

All rasterization and all raytraced graphics are fake by remit.
Yes, older frames. Because the rays are intentionally not cast at the same position in the pixel every frame. Have you ever heard of stochastic sampling, as in MSAA? I guess not. If you ignore the still images, which Chrispy is trying to use in a fallacious way to convince people who don't know any better, and instead look at the noisy ground-truth output in a video like the one I've been posting, you can see what's going on. While the pattern in one still frame looks like a checkerboard, in the next frame it will be offset a bit (in simplified terms) to sample the areas that were not sampled in the previous frame. You can send rays to different points within the area represented by a pixel to get better information about what the pixel looks like; that is the "samples/rays per pixel" we are talking about. But if you have the motion vectors for the scene, along with the raytracing samples from previous frames, you can use those as well. The downside is that the result may look a bit more blurry if you move the camera around very fast, since there may not be enough data for the temporal denoiser to work with. Luckily this is not such a problem, because of how humans perceive motion. In fact, many game engines have taken advantage of this for decades, rendering scenes or parts of scenes at lower resolution while you move the camera around.
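The per-frame jitter described above is straightforward to sketch: each frame aims its ray at a different random point inside the pixel's footprint, and averaging the frames converges to the pixel's true area average (the radiance function below is invented for illustration):

```python
import random

def radiance(x, y):
    """Toy scene radiance inside one pixel; its true average over the unit square is 1.0."""
    return x + y

def jittered_average(frames, seed=0):
    """One ray per frame, each aimed at a fresh random point inside the pixel."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(frames):
        total += radiance(rng.random(), rng.random())  # new jitter every frame
    return total / frames

TRUE_AVERAGE = 1.0
one_frame_error = abs(jittered_average(1) - TRUE_AVERAGE)
many_frame_error = abs(jittered_average(500) - TRUE_AVERAGE)
print(round(one_frame_error, 3), round(many_frame_error, 3))  # the error shrinks with more frames
```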
 

M2B

Joined
Jun 2, 2017
Messages
284 (0.22/day)
Location
Iran
Processor Intel Core i5-8600K @4.9GHz
Motherboard MSI Z370 Gaming Pro Carbon
Cooling Cooler Master MasterLiquid ML240L RGB
Memory XPG 8GBx2 - 3200MHz CL16
Video Card(s) Asus Strix GTX 1080 OC Edition 8G 11Gbps
Storage 2x Samsung 850 EVO 1TB
Display(s) BenQ PD3200U
Case Thermaltake View 71 Tempered Glass RGB Edition
Power Supply EVGA 650 P2
What the hell does "fake ray tracing" even mean, lol. Everybody knows that if you want to do real-time RT you have to sacrifice ray count and rely on denoising to fill the damn scene. There is no such thing as "fake ray tracing".
Hundreds or even thousands of rays per pixel would be needed to do real-time RT without denoising, which is practically impossible to achieve.
 
Last edited:
Joined
Jan 21, 2020
Messages
108 (0.35/day)
For those in the graphic-arts community, there's a term called unbiased rendering. Why? Because even raytracers are "fake" to some degree. Unbiased rendering is the closest to a physical simulation, by my understanding. However, biased rendering (which includes many raytracing effects) is faster and in many cases converges faster as well. This leads to realistic-looking simulated drawings, but nothing like the reality of actually simulating 10,000+ unbiased rays per pixel.

Temporal denoising is solidly in the "biased rendering" camp, no matter how you look at it. There's no physical reality that says we should smear light particles backwards in time. Yes, the effect looks good on modern systems and it's efficient to do, but there's no physical principle behind temporal denoising. Light just doesn't "time travel" and average itself with future photons that hit the same area.

---------

Ambient occlusion is another funny biased-rendering technique. It's completely fake: corners do NOT absorb light into invisible black holes. But we use AO techniques because they make shadows look deeper and give them more contrast, which aids the video-game player significantly.
Of course there is a physical basis for temporal denoising - just not where you are looking for it. It's on the observer side: the human eye. We are doing temporal denoising all the time. Light bulbs are actually pulsing, at a rate that depends on your electricity network's frequency, which differs between countries; in Europe it is 50 Hz, in North America 60 Hz. The bulb blinks so fast that the eye averages the blinks and perceives a constant light source. The same goes for computer screens - old CRTs and even new LCD/IPS/whatever panels. The individual pixels are either blinking or traversing from one color to another; that is the pixel response time everyone talks about. Our eyes average that as well, and it has many side effects.
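The eye-side averaging is easy to model: a mains-driven lamp flickers at twice the mains frequency, but integrating over a longer window flattens it out (the 100 ms window below is an illustrative number, not a physiological constant):

```python
import math

MAINS_HZ = 50.0  # European mains; brightness pulses at 2 x 50 = 100 Hz

def brightness(t):
    """Instantaneous lamp brightness: follows the squared mains sine, 0..1."""
    return math.sin(2 * math.pi * MAINS_HZ * t) ** 2

def eye_average(t, window=0.1, steps=1000):
    """Crude integration over the preceding `window` seconds, like a slow observer."""
    dt = window / steps
    return sum(brightness(t - i * dt) for i in range(steps)) / steps

# The instantaneous brightness swings between 0 and 1...
print(round(brightness(0.000), 2), round(brightness(0.005), 2))  # 0.0 1.0
# ...but the integrated signal sits at the 0.5 mean wherever you sample it:
print(round(eye_average(0.123), 2), round(eye_average(0.456), 2))  # 0.5 0.5
```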

Ambient occlusion is not a raytracing technique; that is a classic rasterization thing. Raytraced ambient occlusion - which is in fact the global illumination everyone talks about - replaces it with actual, real results. You can see the difference in this video at 2:20:
 
Joined
Oct 22, 2014
Messages
10,138 (4.55/day)
Location
Sunshine Coast
System Name Black Box
Processor Intel i5-9600KF
Motherboard NZXT N7 Z370 Black
Cooling Cooler Master 240 RGB AIO / Stock
Memory Thermaltake Toughram 16GB 4400MHz DDR4 or Gigabyte 16GB 3600MHz DDR4 or Adata 8GB 2133Mhz DDR4
Video Card(s) Asus Dual 1060 6GB
Storage Kingston A2000 512Gb NVME
Display(s) AOC 24" Freesync 1m.s. 75Hz
Case Corsair 450D High Air Flow.
Audio Device(s) No need.
Power Supply FSP Aurum 650W
Mouse Yes
Keyboard Of course
Software W10 Pro 64 bit
No, Nvidia hasn't released any power consumption data; these numbers are just people's guesses.

But it's assumed the power consumption can reach: (150W x 2) + 75W = 375W.
And in my opinion the real consumption will be closer to two 6-pin connectors plus the PCIe slot.

My bad, I made a typo: 300W connector + 75W from the PCIe slot. I don't understand why the box for the adapter Seasonic was shipping said "850W PSU recommended"; that led me to believe these cards would be more power-hungry. Most 650W Gold-level PSUs will do just fine if you're not overclocking these cards, so why the 850W recommendation from Seasonic?
It's not the watts, it's the amps that call for the bigger-capacity PSU.
 
Joined
Apr 24, 2020
Messages
557 (2.56/day)
Ambient occlusion is not a raytracing technique; that is a classic rasterization thing. Raytraced ambient occlusion - which is in fact the global illumination everyone talks about - replaces it with actual, real results. You can see the difference in this video at 2:20:
You clearly don't understand Raytraced AO.

Let's look at an actual picture of an actual corner of a room (specifically, from this blog post: https://www.nothings.org/gamedev/ssao/).

1598476738383.png


It is literally a picture of the upper corner of some dude's house. This is a real photograph.

Now let's look at the AO at 2:20 - not the 2D screen-space AO image, but the Nvidia "raytraced AO" image:


1598476889165.png


AO is an approximation - something that works pretty well in a lot of cases, but kind of falls apart once you know its "fakery". However, regardless of how "fake" AO is, it looks cinematic and "cool". People like seeing corners with higher levels of contrast.

AO exaggerates the shadows in corners. Sometimes it's correct: some corners in reality are very similar to AO corners. Take this corner from the photograph:

1598477102916.png


This corner is what AO is trying to replicate. However, corners don't always look like this in reality.

EDIT: Besides, this corner is cooler and more interesting to look at. So let's make all video-game corners look like this, even if it's not entirely realistic. Making things look cool is almost the point of video games anyway.
 
Last edited:
Joined
Jan 21, 2020
Messages
108 (0.35/day)
View attachment 166793

AO is an approximation - something that works pretty well in a lot of cases, but kind of falls apart once you know its "fakery". However, regardless of how "fake" AO is, it looks cinematic and "cool". People like seeing corners with higher levels of contrast.
This actually shows that it is you who does not understand how global illumination works. The amount and placement of light and shadow also depend on the materials. You cannot compare the corner in some random dude's house with the one in the Nvidia demo, because you have no way of knowing whether the materials are even remotely similar, with similar luminance etc. Take it to the extreme and imagine a corner of a room made entirely of mirrors - would that look anything like the random dude's corner? No.

The images in that demo can only be compared with each other - the screen-space ambient occlusion (SSAO, rasterization) against the raytraced ambient occlusion - because they are based on the same materials.

Also, the fact that classic AO sometimes looks right comes down to the same thing - materials. For some materials, it may actually be almost correct.
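The material point can be shown with numbers. Here's a hypothetical two-patch radiosity toy (all constants made up for illustration): a dimly lit corner patch facing a brightly lit wall patch, with their interreflection solved in closed form. The same geometry gives a very different corner-to-wall brightness ratio depending on albedo - which is exactly what a material-blind AO darkening term can't capture.

```python
def corner_to_wall_ratio(albedo, emit_wall=1.0, emit_corner=0.1, form_factor=0.5):
    """Two-patch radiosity: each patch's brightness is its direct light plus
    light reflected from the other patch:
        B_wall   = E_wall   + albedo * F * B_corner
        B_corner = E_corner + albedo * F * B_wall
    Solved in closed form; returns corner brightness relative to the wall."""
    rf = albedo * form_factor
    b_wall = (emit_wall + rf * emit_corner) / (1.0 - rf * rf)
    b_corner = (emit_corner + rf * emit_wall) / (1.0 - rf * rf)
    return b_corner / b_wall

dark_paint = corner_to_wall_ratio(albedo=0.1)   # ~0.15: corner stays dark
white_paint = corner_to_wall_ratio(albedo=0.9)  # ~0.53: bounce light fills the corner in
```

With dark paint the corner sits at roughly 15% of the wall's brightness; with bright white paint, bounce light raises it to over 50% - same corner, same lights, very different result.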
 
Last edited:
Joined
Mar 10, 2010
Messages
8,409 (2.15/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R7 3800X@4.350/525/ Intel 8750H
Motherboard Crosshair hero7 @bios 2703/?
Cooling 360EK extreme rad+ 360$EK slim all push, cpu Monoblock Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in two sticks./16Gb
Video Card(s) Sapphire refference Rx vega 64 EK waterblocked/Rtx 2060
Storage Samsung Nvme Pg981, silicon power 1Tb samsung 840 basic as a primocache drive for, WD2Tbgrn +3Tbgrn,
Display(s) Samsung UAE28"850R 4k freesync, LG 49" 4K 60hz ,Oculus
Case Lianli p0-11 dynamic
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Iksu force fx
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
Lol
Yes, older frames. Because the rays are intentionally not cast at the same position within the pixel every frame. Have you ever heard about MSAA stochastic sampling? I guess not. If you ignore the still images, which Chrispy is trying to use in a fallacious way to convince people who don't know any better, and instead look at the noisy ground-truth output in a video like the one I've been posting, you can see what's going on. While the pattern in one still frame looks like a checkerboard, in the next frame it will be offset a bit, in simplified terms, to sample the areas that were not sampled in the previous frame. You can send rays to different points within the area represented by a pixel to get better information about what the pixel should look like. That is the "samples/rays per pixel" we are talking about. But if you have the motion vectors for the scene, along with the raytracing samples from previous frames, you can also use those. The downside is that the result may look a bit more blurry if you move the camera around very fast, since there may not be data for the temporal denoiser to work with. Luckily this is not such a problem because of how humans perceive motion. In fact, many game engines have taken advantage of this for decades - rendering scenes, or parts of scenes, at lower resolution while you move the camera around.
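The scheme described in the quote above - jittering the ray position across frames and letting a temporal accumulator blend in history - can be sketched roughly like this (a hypothetical minimal Python illustration, not any engine's actual denoiser; reprojection via motion vectors is assumed to be identity here):

```python
def sample_offset(frame_index):
    """Rotate through sub-pixel positions so successive frames sample the
    parts of the pixel that earlier frames missed (a tiny 4-point pattern)."""
    pattern = [(0.25, 0.25), (0.75, 0.25), (0.75, 0.75), (0.25, 0.75)]
    return pattern[frame_index % len(pattern)]

def accumulate(history, new_sample, alpha=0.1):
    """Exponential moving average: blend this frame's noisy 1-spp result
    into the (reprojected) history buffer."""
    return (1.0 - alpha) * history + alpha * new_sample

# A pixel starts with no history; feeding it frames whose true value is 1.0
# converges, even though each individual frame contributes only one sample.
value = 0.0
for frame in range(100):
    value = accumulate(value, 1.0)
```

When the camera moves fast, the reprojected history becomes invalid and must be discarded or down-weighted, which is exactly the transient blur/noise trade-off described above.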
You realise that to gain that long-term badge I have happily haunted every bit of tech news here and anywhere else, and done some genuine hands-on with tech - why the f##£ not. And even though I effin' hate Nvidia's marketing tactics and company buying too, I still own an RTX card, gits..

I saw all of that already I assure you.

I had a gaming PC with six GPUs in it once, just cos (Batman, eek).
 
Joined
Jan 21, 2020
Messages
108 (0.35/day)
Lol

You realise that to gain that long-term badge I have happily haunted every bit of tech news here and anywhere else, and done some genuine hands-on with tech - why the f##£ not. And even though I effin' hate Nvidia's marketing tactics and company buying too, I still own an RTX card, gits..

I saw all of that already I assure you.

I had a gaming PC with six GPUs in it once, just cos.
It does not seem so from your posts.
 
Joined
Mar 10, 2010
Messages
8,409 (2.15/day)
It does not seem so from your posts.
Straws being clutched, meeeoow.

It's late, you're lucky.

So, in short, we're all getting on board with GPU developers deciding - via AI supersampling, RTX etc. - what the game developers actually wanted to show you?

I'll try it, but probably won't like it for online competitive play.
 
Joined
Jan 21, 2020
Messages
108 (0.35/day)
Straws being clutched, meeeoow.

It's late, you're lucky.
So are you saying quantity > quality? Like the number of posts you make is actually more important than WHAT'S IN THOSE POSTS? Cute. FYI, I've been in tech for a very long time. But this is the internet. Anyone can say anything, be it the truth or completely made up. Believe me at your own peril. For the same reason I don't believe you, as the quality of your posts does not support your claims.
 