
NVIDIA Unveils Adaptive Temporal Anti-Aliasing with Ray-Tracing

btarunr

Editor & Senior Moderator
NVIDIA published the first documentation of Adaptive Temporal Anti-Aliasing (ATAA), an evolution of TAA that incorporates real-time ray-tracing, or at least the low ray-count method NVIDIA implemented with RTX. Its "adaptive" nature also lets it overcome many of the performance challenges users encounter with TAA in high-framerate and rapidly changing 3D scenes, such as in games. Non-gaming scenes, such as those used by real-estate developers, don't face these challenges.

To developers, ATAA promises image quality comparable to 8x supersampling at a cost of under 33 ms frame delay. These numbers were derived on a TITAN V ("Volta"), using Unreal Engine 4. It could take a while for ATAA to make it to games, as developers will need a few months to learn the technique before implementing it in their ongoing or future projects. NVIDIA will introduce ATAA support through driver updates.
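In rough terms, the "adaptive" part means the algorithm keeps ordinary TAA for pixels whose reprojected history is still trustworthy and spends the expensive ray-traced supersampling only where TAA visibly fails (disocclusions, ghosting). Below is a toy Python/NumPy sketch of that per-pixel segmentation idea; the 0.1 threshold, the 0.9/0.1 blend weights and the pass-through "expensive path" are illustrative placeholders, not NVIDIA's actual pipeline.

Code:
import numpy as np

def adaptive_taa_segment(curr, history, threshold=0.1):
    # Toy segmentation in the spirit of ATAA: pixels whose reprojected
    # history disagrees with the current frame (disocclusion, ghosting)
    # are flagged for the expensive path -- in ATAA that would be a few
    # ray-traced supersamples -- while everything else keeps the cheap
    # exponential TAA history blend.
    diff = np.abs(curr - history)
    needs_rays = diff > threshold            # "TAA failed here" mask

    taa_blend = 0.9 * history + 0.1 * curr   # cheap path
    # Expensive-path placeholder: pass the current sample through;
    # the real technique would shoot extra rays for these pixels only.
    out = np.where(needs_rays, curr, taa_blend)
    return out, needs_rays

# Tiny demo: a mostly static frame with one region that changed a lot.
hist = np.random.rand(64, 64).astype(np.float32)
curr = hist.copy()
curr[20:30, 20:30] = np.clip(curr[20:30, 20:30] + 0.5, 0.0, 1.0)
_, mask = adaptive_taa_segment(curr, hist)
print("pixels flagged for the expensive path:", int(mask.sum()))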



View at TechPowerUp Main Site
 
probably going to be an 11 series only feature
 
>adding a whole frame's worth of delay to the frame being shown, due to the temporal nature of this AA method

thanks, but no thanks.
 
SSAA 4x looks better than ATAA 8x IMO. Why keep reinventing the wheel?

>adding a whole frame's worth of delay to the frame being shown, due to the temporal nature of this AA method

thanks, but no thanks.
And that's on a Titan V. How bad is it on a GTX 1060 (you know, a card gamers actually use)? Even if it was intended for real-estate renders, why use the more expensive ATAA when you can get better results with SSAA? I think NVIDIA just built another bridge to nowhere to sell GameWorks.
 
ATAA promises image quality comparable to 8x supersampling at a cost of under 33 ms frame delay.
 
I think NVIDIA just built another bridge to nowhere to sell GameWorks.
:) They sell it, and there was me thinking they shoved it down your throat for free (bundled).
 
ATAA promises image quality comparable to 8x supersampling at a cost of under 33 ms frame delay.

Would be nice if they spoke English, coz this tells me exactly nothing. How much delay do 8x, 6x, 4x and 2x add? Then we can judge whether "at a cost of under 33 ms" is good or not.

Also, why even bother with this stuff? First it'll only be available from one vendor, probably supported by their latest flagship only, then we wait 2 years for game engine teams to catch up and 2 more for actual games to even get it. Pointless. Post-process AA is the future imo. Just look at SMAA. It's doing great at smoothing edges without blurring details and hardly has any performance hit. Why not build on top of that and add it to the NVIDIA Control Panel so you can just force it on any game and call it a day? I don't want to wait 5 years for features to appear in 3 games and that's it. Instead we still only have garbage FXAA, which works, but there are far better options and they refuse to add them. I guess that ancient-looking NV CP is too much of a hassle to work with...

Basically the same situation as with MSAA, which was awesome as they made better versions of it, until deferred rendering became the de facto standard. Then it basically became useless since it didn't work anywhere anymore. No one is investing in MSAA anymore, so why bother with these game-specific, proprietary edge-smoothing algorithms? Just take SMAA and make it even better. It's proven tech that users can actually use NOW.
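For what it's worth, post-process methods like SMAA and FXAA start from a cheap luma edge-detection pass over the finished frame, which is a big part of why they cost so little. A minimal Python/NumPy sketch of that kind of first pass follows; the Rec. 709 luma weights and the 0.1 threshold are generic illustrative choices, not SMAA's exact constants.

Code:
import numpy as np

def luma_edges(rgb, threshold=0.1):
    # Toy luma edge detection, the kind of first pass SMAA/FXAA run
    # before deciding which pixels are worth blending at all.
    # rgb: float32 array of shape (H, W, 3), values in [0, 1].
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722], dtype=np.float32)
    dx = np.abs(np.diff(luma, axis=1, prepend=luma[:, :1]))
    dy = np.abs(np.diff(luma, axis=0, prepend=luma[:1, :]))
    return (dx > threshold) | (dy > threshold)

# Demo: a hard diagonal edge; only the pixels along it get flagged,
# and only flagged pixels would get any blending work afterwards.
frame = np.zeros((64, 64, 3), dtype=np.float32)
frame[np.triu_indices(64)] = 1.0
print("edge pixels:", int(luma_edges(frame).sum()))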
 
Heh, their conclusion:

Nvidia Research ATAA paper said:
Our method’s performance is dominated by the ray trace. We cannot advocate it for immediate wide-spread deployment in games at current performance, but that is not concerning given that mainstream gaming GPUs have not yet appeared that support the DXR API. The real-time ray tracing ecosystem of drivers, GPUs, and algorithms must emerge together over the next few years.
 
SSAA 4x looks better than ATAA 8x IMO.
That's what my eyes tell me as well.
Why keep reinventing the wheel?
Because people want 8xSSAA quality (or better) with zero performance impact and nobody has managed to do that so far?

And that's on a Titan V. How bad is it on a 1060 GTX (you know, a card gamers actually use)? Even if it was intended for real estate renders, why use the more expensive ATAA when you can get a better results with SSAA? I think NVIDIA just built another bridge to no where to sell GameWorks.
It's done with RTX; it doesn't work on current-generation hardware.
 
Even TAA looks better.
 
I'm not seeing the progress here.

But nice try, selling a crippling AA that equals an SSAA level nobody ever really needs.

Ray tracing so far is a buzzword that brings us nothing substantial. It's not new. And it's still crippling performance. So where's the progress?
 
I'm not seeing the progress here.

But nice try, selling a crippling AA that equals an SSAA level nobody ever really needs.

Ray tracing so far is a buzzword that brings us nothing substantial. It's not new. And it's still crippling performance. So where's the progress?
If it's closer to real time, that makes ray tracing a more compelling proposition for content creators. They are used to leaving their rendering farms crunching on their own. If the result in the same amount of time can be better (by means of ray tracing), that's progress.

And I disagree about SSAA levels "nobody ever really needs". I could easily spot AA differences to the point 2xMSAA didn't make any sense to me. The quality difference is there. But it depends on what you're playing. While playing Witcher, there's a lot of stuff you can stop and admire. While playing Fortnite, not so much.
 
If it's closer to real time, that makes ray tracing a more compelling proposition for content creators. They are used to leaving their rendering farms crunching on their own. If the result in the same amount of time can be better (by means of ray tracing), that's progress.

And I disagree about SSAA levels "nobody ever really needs". I could easily spot AA differences to the point 2xMSAA didn't make any sense to me. The quality difference is there. But it depends on what you're playing. While playing Witcher, there's a lot of stuff you can stop and admire. While playing Fortnite, not so much.

Come on man, nobody plays with SSAA x8, ever. Even for 1080p it barely makes a difference.

And even the SSAA x4 comparison up there looks much better than any of the ATAA alternatives. In the end you're still tied to the physical pixels and that can only go two ways: blurry or sharp with aliasing.
 
Even TAA looks better.

Though it's hard to compare AA methods on still images. But yeah, if it's too expensive to use, it will fail no matter what.
Come on man, nobody plays with SSAA x8, ever. Even for 1080p it barely makes a difference.

And even the SSAA x4 comparison up there looks much better than any of the ATAA alternatives. In the end you're still tied to the physical pixels and that can only go two ways: blurry or sharp with aliasing.

Well, all that depends on how expensive it is to use; but no one uses 8xSSAA at 1080p because very few have a PC capable of rendering at 8K resolution. And again, you really should see them running live to make comparisons of different AA methods.
 
Nvidia, historically, has managed to shove in a lot of graphics technology that isn't particularly useful or advantageous compared to other alternatives. I'm afraid that will be the case here as well, to our discontent.
 
Come on man, nobody plays with SSAA x8, ever. Even for 1080p it barely makes a difference.

And even the SSAA x4 comparison up there looks much better than any of the ATAA alternatives. In the end you're still tied to the physical pixels and that can only go two ways: blurry or sharp with aliasing.
Well, I just told you that I did when I had the occasion. And I have acknowledged that this ATAA looks very poor to me.

AA doesn't have to be a choice between blurry and sharp. If done right, it can alter just the right pixels without hurting sharpness. But these methods are always more intensive; that's why everyone is scrambling for alternatives.
 
So a "feature" that no current existing gaming card support yet in hardware. Nice!

 
33 ms lag? No way, too high. I prefer pure 4K, no AA, 16x AF.
 
So a "feature" that no current existing gaming card support yet in hardware. Nice!

That will change once the 11 series is out. On the other hand, name one feature about 3D rendering that has debuted with pre-installed, widespread support.
 
I'm sure they'll abandon this soon enough. Why would anyone want to use something that looks so ugly and adds significant delay?

Edit: Is there a typo in the article or did they choose to say it compares to 8X SSAA and then not provide a comparison image?
 
I'm sure they'll abandon this soon enough. Why would anyone want to use something that looks so ugly and adds significant delay?

Edit: Is there a typo in the article or did they choose to say it compares to 8X SSAA and then not provide a comparison image?
If it can't do better than what that image shows, yes, it will probably die.
About the added delay: any form of post-processing adds delay; it's not like you can look at 2-8 million pixels in zero time, much less compute something about them. The 33 ms is a meaningless number as long as we don't know what delay other AA techniques introduce.
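(Purely as a back-of-the-envelope frame-budget reference: a frame that takes the full 33 ms works out to roughly 1000 / 33 ≈ 30 fps, while a 60 fps target leaves only about 16.7 ms for the entire frame, AA included. So whether "under 33 ms" is acceptable depends on how much of that is the AA itself versus the rest of the frame, and that breakdown isn't given.)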
 
We are going to get Volta for GeForce cards. Yes, not "gaming optimized" Turing (which is, in my opinion, a 7nm successor to Volta in HPC), but the AI Volta, with tensor cores, only in some nerfed config (like half of what the Tesla equivalent of a given GPU has). Instead of rearranging the chip, Nvidia would rather push Volta for games on some sort of trumped-up promises of AI and ray tracing in games.
I'm not saying that this new ATAA and RTX is bogus, it isn't. I'm sure they'll look friggin' amazing. It's just that early adopters will see very little benefit compared to the performance hit it will take.
 
We are going to get Volta for GeForce cards. Yes, not "gaming optimized" Turing (which is, in my opinion, a 7nm successor to Volta in HPC), but the AI Volta, with tensor cores, only in some nerfed config (like half of what the Tesla equivalent of a given GPU has). Instead of rearranging the chip, Nvidia would rather push Volta for games on some sort of trumped-up promises of AI and ray tracing in games.
I'm not saying that this new ATAA and RTX is bogus, it isn't. I'm sure they'll look friggin' amazing. It's just that early adopters will see very little benefit compared to the performance hit it will take.

That's different than... say... GameWorks... how, exactly?

They'll be "happy" to take the performance hit, so long as any AMD cards have a BIGGER performance hit.
 