
NVIDIA Unveils Adaptive Temporal Anti-Aliasing with Ray-Tracing

I'm not saying it's different; I'm even convinced that RTX and ATAA will be under the GameWorks brand. And why bring AMD into the discussion? No one mentioned AMD. I can name a few problems worse than GameWorks that they have, just off the top of my head.

BTW, you have old info on GameWorks. Like most red team supporters, you like to complain more than you like to absorb information.



 
I'm not saying it's different; I'm even convinced that RTX and ATAA will be under the GameWorks brand. And why bring AMD into the discussion? No one mentioned AMD. I can name a few problems worse than GameWorks that they have, just off the top of my head.

BTW, you have old info on GameWorks. Like most red team supporters, you like to complain more than you like to absorb information.




Because it's pertinent. If NVIDIA did this sort of thing to benefit us consumers, at the cost of some FPS, then I'd agree with you. But, unfortunately for us, they make stuff like this more to hurt the competition than to benefit us, and that's why I take exception.

Want some old info? Here's some:

https://www.extremetech.com/extreme...over-optimizations-to-the-witcher-3-wild-hunt

How about some more recent:

https://www.tomshardware.com/reviews/final-fantasy-xv-performance-analysis,5513-7.html

NVIDIA WILL hurt their OWN users if doing so hurts the competition's users more.
 
That's different than... say... GameWorks... how, exactly?

They'll be "happy" to take the performance hit, so long as any AMD cards have a BIGGER performance hit.

Yes, because you totally can't turn most of those 'features' off...

Just because a setting's there doesn't mean it needs to be used. This type of AA falls right into that category.

At the same time, GameWorks and the funding poured into it have also brought good stuff to games. As long as it's optional, I really don't see the issue.
 
Yes, because you totally can't turn most of those 'features' off...

Just because a setting's there doesn't mean it needs to be used. This type of AA falls right into that category.

At the same time, GameWorks and the funding poured into it have also brought good stuff to games. As long as it's optional, I really don't see the issue.
He wouldn't have an issue either, if AMD did it.
 
He wouldn't have an issue either, if AMD did it.

Wrong: I most certainly would. It's the practice that's wrong, dude!
 
If it can't do better than what that image shows, then yes, it will probably die.
About the added delay: any form of post-processing adds delay; it's not like you can look at 2-8 million pixels in zero time, much less compute something about them. The 33ms is a meaningless number as long as we don't know what delay other AA techniques introduce.
I presume they'd be all too happy to tout ms comparisons to other PP options if this is faster. Fair point nonetheless.
 
It's not wrong the way it is now, don't be silly; it's not hurting AMD's performance when it's off. People are always making this into a huge conspiracy thing, lol. It's extra features, that's it. MSAA or any other option can hurt one brand more and the other less in a given game; is that a conspiracy to hurt them? People are really friggin' silly.

However, if NVIDIA inflates the prices of new GPUs to push tensor cores to help with new, RTX-based GameWorks, then I guess they're not getting my money this time. Used 1080 Ti it is. Or this friggin' rad Jedi Titan I've been eyeballing:

https://allegro.pl/nvidia-titan-xp-star-wars-collectors-edition-jedi-i7473397682.html
 
temporal blurring with delay (because temporal) ... no thanks
 
I presume they'd be all too happy to tout ms comparisons to other PP options if this is faster. Fair point nonetheless.
At the same time, if 33ms were embarrassing, they wouldn't have mentioned it at all ;)
We can assume anything; without numbers we're either venting or just keeping ourselves busy.

Wrong: I most certainly would. It's the practice that's wrong, dude!
The practice of what? Of features not being perfectly equally taxing on all hardware?
 
TXAA is still completely broken (headache-inducingly blurry), so it's time to release some new fast-method garbage.
 
At the same time, if 33ms were embarrassing, they wouldn't have mentioned it at all ;)
We can assume anything; without numbers we're either venting or just keeping ourselves busy.


The practice of what? Of features not being perfectly equally taxing on all hardware?

This article explains the problem nicely.

In our investigation we found that, across a sample of tessellation-heavy games and benchmarks, the AMD Radeon R9 285 was either ahead of the NVIDIA GeForce GTX 960 or at most 13% behind. Seeing as how tessellation x32 is what's used for HairWorks, we were anticipating a similar 13% delta between the R9 285 and GTX 960.
But what we have found with HairWorks is that the performance cost on the R9 285 is double that of the NVIDIA GPUs, and it's nearly three times for the R9 290X. If AMD's tessellation performance were the real reason behind this huge penalty for running HairWorks, as NVIDIA claims, then we would've seen it reflected in our other tessellation tests, but that's clearly not what we're seeing.
A 13% delta between the R9 285 and GTX 960 turns into a whopping 206% delta, in both instances running tessellation x32. The main difference is that in the first instance both cards were running a standard piece of code that both companies can optimize, while in the second instance the R9 285 is running code developed by NVIDIA that AMD cannot even see.

This video also shows something interesting; start watching from around the 10-minute mark.
 
TXAA is still completely broken (headache-inducingly blurry), so it's time to release some new fast-method garbage.
No, it works fine; it's the best AA method atm, hands down, as far as image quality goes (excluding resolution scaling or pixel-density increases), just heavily taxing.

Boy, you have major anger issues.
 
33 ms is a whole frame at 30 fps. In other words, really bad latency (the frame you're seeing is always a full frame old). And again, this is on a Titan V, the fastest card on the planet, which costs about $3,000. No game developer is going to waste their time on this. The cost is too high and the reward is not competitive.


If you're just doing real estate virtual tours, that 33 ms delay is fine (probably why they felt like announcing that detail). Not for gaming.
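
For reference, the frame-budget arithmetic behind that complaint (a quick illustrative Python sketch, not tied to any particular hardware):

for fps in (30, 60, 144):
    budget_ms = 1000.0 / fps  # time available to render one frame
    print(f"{fps} fps -> {budget_ms:.1f} ms per frame")
# 30 fps -> 33.3 ms, 60 fps -> 16.7 ms, 144 fps -> 6.9 ms.
# A 33 ms AA pass alone consumes the entire 30 fps budget, and roughly
# two whole frames' worth of time at 60 fps.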
 
No game developer is going to waste their time on this. The cost is too high and the reward is not competitive.
And no game studio/developer would want to ruin their name and reputation with that bad a performance issue.
 
DSR up the resolution and apply FXAA.
Just use DSR and dick around with the Gaussian blur filter; FXAA makes things far too blurry for my liking. Personally, I want SMAA implemented at the driver level.
 
SSAA 4x looks better than ATAA 8x, IMO. Why keep reinventing the wheel?


And that's on a Titan V. How bad is it on a GTX 1060 (you know, a card gamers actually use)? Even if it was intended for real estate renders, why use the more expensive ATAA when you can get better results with SSAA? I think NVIDIA just built another bridge to nowhere to sell GameWorks.
Again, it is not a fix for 'spatial' aliasing, so still-image comparisons don't apply in this case.
 
This article explains the problem nicely.



This video also shows something interesting; start watching from around the 10-minute mark.
Well, if your best argument is wtftech, then I guess my job here is done.
 
SSAA 4x looks better than ATAA 8x, IMO. Why keep reinventing the wheel?


And that's on a Titan V. How bad is it on a GTX 1060 (you know, a card gamers actually use)? Even if it was intended for real estate renders, why use the more expensive ATAA when you can get better results with SSAA? I think NVIDIA just built another bridge to nowhere to sell GameWorks.
When a claim of this nature is purported, one asks whether it is even possible...
See, this whole chain of postprocessing filters is revolutionary because they contrive a better compromise than 2xMSAA and up. MSAA is still needed for Z testing, as postprocess filters just skip that part; however, once a 'boundary' is 'marked', it is useless to keep testing the coverage ratio of the approximated geometry. Postprocessing can do that better by looking at the morphology of where the boundary edges extend to, a.k.a. MLAA/SMAA.
Beyond what postprocessing filters do for the geometry, there is 'literally' zero benefit to them, because you are just writing over very expensive anisotropically filtered texture data. It is plain nonsense to think SSAA, which has an equal part of anisotropic filtering as well as antialiasing, can be superseded by a cheap-shot approximation.
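
For what it's worth, a minimal sketch of that boundary-marking step as MLAA/SMAA-style filters approach it, assuming a NumPy float RGB image in [0, 1] (function names are illustrative, not from any actual implementation):

import numpy as np

def luma(rgb):
    # Rec. 709 luma weights; postprocess AA compares luma, not raw RGB
    return rgb @ np.array([0.2126, 0.7152, 0.0722])

def edge_mask(image, threshold=0.1):
    # Mark a pixel as a 'boundary' when its luma differs from its left/top
    # neighbour by more than the threshold; the morphology pass then walks
    # along these marked edges instead of re-testing geometry coverage.
    l = luma(image)
    left = np.abs(l - np.roll(l, 1, axis=1)) > threshold
    top = np.abs(l - np.roll(l, 1, axis=0)) > threshold
    return left | top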
 
Well, if your best argument is wtftech, then I guess my job here is done.

Did you bother to read the article? It's a bit long, but worth it.

Just because this article came from a "click-bait friendly" site doesn't make it bad.
 
At the same time, if 33ms were embarrassing, they wouldn't have mentioned it at all ;)
We can assume anything; without numbers we're either venting or just keeping ourselves busy.

When you consider that most games run at 30 fps:

Nvidia Research ATAA said:
The algorithm removes blurring and ghosting artifacts associated with standard temporal antialiasing and achieves quality approaching 8× supersampling of geometry, shading, and materials while staying within the 33ms frame budget required of most games.

You throw it out there.
 
When you consider that most games run at 30 fps:

Nvidia Research ATAA said:
The algorithm removes blurring and ghosting artifacts associated with standard temporal antialiasing and achieves quality approaching 8× supersampling of geometry, shading, and materials while staying within the 33ms frame budget required of most games.

You throw it out there.
Again, bad marketing spin on the wrong concept: there is zero anisotropic component (textures), hence it cannot be supersampling.

What else? The algorithm is plain wrong. When comparing 4 subpixels, you 'can' use the normal template of a box-quad arrangement. When doing 8 subpixels, you are literally comparing against the neighbouring pixels, a.k.a. bilateral filtering (weighted sampling). I can't imagine why it would turn out blurry...
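
To illustrate the point, a toy bilateral-style weighted blend over the 8 neighbours of each pixel, again in NumPy on a luma channel (purely illustrative, not ATAA's actual algorithm); similarity-weighted wide kernels like this are precisely what softens edges:

import numpy as np

def bilateral_blend(l, sigma=0.1):
    # Blend each pixel with its 8 neighbours, weighted by luma similarity;
    # the wider the neighbourhood you fold in, the softer the edges get.
    out = np.zeros_like(l)
    wsum = np.zeros_like(l)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            n = np.roll(np.roll(l, dy, axis=0), dx, axis=1)
            w = np.exp(-((l - n) ** 2) / (2.0 * sigma ** 2))
            out += w * n
            wsum += w
    return out / wsum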
 
33ms of lag? No way, too high. I prefer pure 4K, no AA, 16x AF.
I haven't used AA in games in a long time. After 1080p there was no real need. Still isn't.

This looks promising. The example shows that it seems to do better than standard AA techniques. And for those who didn't read the entire article, NVIDIA will be including this in future driver updates, which implies that it will likely be supported in hardware through adjustments to the coding.
 
There you go, Nvidia, AMD fans have spoken... they love 4K.

So just keep releasing faster GPUs.

Done.
 
If it can't do better than what that image shows, then yes, it will probably die.
About the added delay: any form of post-processing adds delay; it's not like you can look at 2-8 million pixels in zero time, much less compute something about them. The 33ms is a meaningless number as long as we don't know what delay other AA techniques introduce.
Actually, THE BEST postprocessing AA method out there, SMAA, adds only 2.5ms of delay to a 60fps scene. So you can understand the 33ms horror...
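
Putting the poster's 2.5ms figure next to NVIDIA's 33ms (simple arithmetic, taking both numbers as quoted):

smaa_ms, ataa_ms = 2.5, 33.0
budget_60 = 1000.0 / 60.0          # 16.7 ms per frame at 60 fps
print(ataa_ms / smaa_ms)           # ~13x the cost of SMAA
print(smaa_ms / budget_60)         # SMAA: ~15% of a 60 fps frame budget
print(ataa_ms / budget_60)         # ATAA: ~2 full 60 fps frames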
 
Of course, there is no ray tracing involved.

Go home Nvidia, you're drunk. Stop investing and start counting all that cash.
 