Tuesday, September 21st 2021

NVIDIA Prepares to Deliver Deep Learning Anti-Aliasing Technology for Improved Visuals, Coming first to The Elder Scrolls Online

Some time ago, NVIDIA launched its Deep Learning Super Sampling (DLSS) technology to deliver AI-enhanced image upscaling to your favorite AAA titles. It uses proprietary algorithms developed by NVIDIA and relies on the computational power of the Tensor cores found in GeForce graphics cards. In the early days of DLSS, NVIDIA talked about an additional technology called DLSS 2X, which was supposed to be based on the same underlying techniques as DLSS but applied at native resolution for anti-aliasing rather than any upscaling. That technology got its official name today: Deep Learning Anti-Aliasing, or DLAA for short.

DLAA uses technology similar to DLSS, bringing NVIDIA's deep-learning image reconstruction to games purely as an anti-aliasing pass at native resolution. It uses the Tensor cores found in GeForce graphics cards and promises much better visual quality with little performance cost, since the work runs on dedicated cores rather than the shader units. The technology will reportedly be offered alongside DLSS and other anti-aliasing options in in-game settings. The first game to support it is The Elder Scrolls Online, which now has it on the public test server, with availability for the general public to follow.
Source: via Tom's Hardware

50 Comments on NVIDIA Prepares to Deliver Deep Learning Anti-Aliasing Technology for Improved Visuals, Coming first to The Elder Scrolls Online

#26
Krzych
Will be interesting to see how it compares to DSR+DLSS. DSR has a bit of its own softening and not all games work well with it, so maybe plain DLAA on top of native res is going to be better. DLSS should have had a resolution scale slider integrated from day one though; it is a bit silly that we need to use DSR to do this. Guess they decided that anything more than the typical Low-Mid-High is going to be too complex for the average user :P
#27
Ravenas
londiste: FSR is markedly inferior. It might get a 2.0, but improvements there are quite likely to follow what DLSS and XeSS are currently doing.
Wide support/adoption is a quality in itself, but visually, there is a difference.
Sure, it's extremely noticeable if I stop my gameplay, take a picture of the current scene, and then zoom in on plant petals so I can come on this forum and say, "Yup, just like I thought, markedly inferior to DLSS!"

As for adoption, assume 20-25% AMD deployment on desktops and laptops based on Steam stats, all modern NVIDIA cards, and last but not least all PS5 and Xbox Series S/X consoles.
#28
RH92
Crackong: Doesn't DLSS already have AA effects built in?
Why do the same thing twice?
Or is it a cut-down version of DLSS that focuses on edges only, so it is way easier to implement?
It doesn't do the same thing twice. As the name suggests, DLSS is about Super Sampling; DLAA, on the other hand, gets rid of the SS component and uses Deep Learning only for Anti-Aliasing.

This is not about difficulty of implementation; it's about having native resolution with the best anti-aliasing possible. Since the game is not upscaled, DLAA is going to be more taxing on the hardware than DLSS.
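For illustration, here is a conceptual sketch in Python; render_scene and dl_reconstruct are hypothetical stand-ins, not NVIDIA's actual API, and the resolutions are just example values. The point is that the two modes share one reconstruction pass and differ only in what resolution is fed into it.

```python
# Conceptual sketch: DLSS and DLAA share one deep-learning reconstruction pass;
# they differ only in the render resolution fed into it.
def render_scene(res):
    # Hypothetical stand-in for the game's renderer; shading cost scales with pixel count.
    return {"rendered_at": res}

def dl_reconstruct(frame, output):
    # Hypothetical stand-in for the shared reconstruction pass (runs on Tensor cores).
    return {"output": output, "shaded_at": frame["rendered_at"]}

def present_frame(target=(3840, 2160), mode="DLAA"):
    # DLSS shades fewer pixels and upscales to the target; DLAA shades at native
    # resolution, so the network only has aliasing to remove, which is also why
    # DLAA costs more total frame time than DLSS.
    render_res = (2560, 1440) if mode == "DLSS" else target
    return dl_reconstruct(render_scene(render_res), output=target)

print(present_frame(mode="DLSS"))  # shaded at 1440p, output at 2160p
print(present_frame(mode="DLAA"))  # shaded and output at 2160p
```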
#29
erek
With 4K+ resolutions (reducing the need for AA) and 144 Hz+ refresh rates (reducing the need for V-Sync), what's the point?

4K at 144 Hz+ is a very smooth experience. I just recently migrated from 60 Hz and can attest to the near-total elimination of screen tearing even without V-Sync enabled, etc.
#30
Krzych
Ravenas: Sure, it's extremely noticeable if I stop my gameplay, take a picture of the current scene, and then zoom in on plant petals so I can come on this forum and say, "Yup, just like I thought, markedly inferior to DLSS!"
It is actually quite the opposite. It is a still image that needs more careful inspection, especially since it is viewed in a web browser and without the same focus, attention and immersion that you have when actually playing. In an actual game, once you are focused and viewing real scenes in motion, it becomes abundantly clear how much less aliasing and shimmering DLSS has. Especially if you look at notoriously problematic things like hair, thin objects or distant landscape: all of that looks like twice the resolution with DLSS, despite the fact that it is reconstructed from a lower resolution, while FSR simply inherits all of TAA's issues and then makes them worse through the basic upscaling and sharpening it uses. The only place where FSR may be comparable is big close-up objects with low complexity, because these are very easy to reconstruct and to remove aliasing from. For everything else, DLSS is always going to be miles ahead, as it should be given the light-years of technological difference between the two solutions.

My intention is not to hate on FSR, but how are developers supposed to take players seriously if they cannot see the difference between a cheap upscaler/sharpener and proper reconstruction? What you are essentially telling them is "We cannot see a damn thing anyway, so why give us any actual technology?"
#31
Ravenas
Krzych: It is actually quite the opposite. It is a still image that needs more careful inspection, especially since it is viewed in a web browser and without the same focus, attention and immersion that you have when actually playing. In an actual game, once you are focused and viewing real scenes in motion, it becomes abundantly clear how much less aliasing and shimmering DLSS has. Especially if you look at notoriously problematic things like hair, thin objects or distant landscape: all of that looks like twice the resolution with DLSS, despite the fact that it is reconstructed from a lower resolution, while FSR simply inherits all of TAA's issues and then makes them worse through the basic upscaling and sharpening it uses. The only place where FSR may be comparable is big close-up objects with low complexity, because these are very easy to reconstruct and to remove aliasing from. For everything else, DLSS is always going to be miles ahead, as it should be given the light-years of technological difference between the two solutions.

My intention is not to hate on FSR, but how are developers supposed to take players seriously if they cannot see the difference between a cheap upscaler/sharpener and proper reconstruction? What you are essentially telling them is "We cannot see a damn thing anyway, so why give us any actual technology?"
www.techpowerup.com/review/the-medium-dlss-vs-fsr-comparison/

Differences are minimal at normal viewing distance, 4K FSR Ultra Quality versus 4K DLSS Quality, unless you zoom in very far. In gameplay the player will not know the difference.
#32
Krzych
Ravenas: www.techpowerup.com/review/the-medium-dlss-vs-fsr-comparison/

Differences are minimal at normal viewing distance, 4K FSR Ultra Quality versus 4K DLSS Quality, unless you zoom in very far. In gameplay the player will not know the difference.
You have picked just about the least favorable comparison for yourself out of all of them. That net is TAA's and FSR's worst nightmare, while it is exactly what DLSS excels at: reconstructing fine detail like that so well that it actually looks way better than native-res TAA, let alone FSR, which is just one big oversharpening artifact on top of the already numerous native-res TAA issues. I don't know if you are viewing it on a phone or a 15" laptop or what, but if you cannot see the difference then I don't know what to tell you. That scene is so tailor-made for DLSS, and the difference so massive, that you could almost accuse the author of cherry-picking the scene to favor DLSS, and yet you say there is no difference...
#33
Ravenas
Krzych: You have picked just about the least favorable comparison for yourself out of all of them. That net is TAA's and FSR's worst nightmare, while it is exactly what DLSS excels at: reconstructing fine detail like that so well that it actually looks way better than native-res TAA, let alone FSR, which is just one big oversharpening artifact on top of the already numerous native-res TAA issues. I don't know if you are viewing it on a phone or a 15" laptop or what, but if you cannot see the difference then I don't know what to tell you. That scene is so tailor-made for DLSS, and the difference so massive, that you could almost accuse the author of cherry-picking the scene to favor DLSS, and yet you say there is no difference...
My monitor?

LG UHD Monitor 27" 4K LED Nano IPS ... www.amazon.com/dp/B08BCRYS6B/ref=cm_sw_r_sms_awdb_imm_MTS3SH8VBDD8Q3E9G6G4?_encoding=UTF8&psc=1

The differences are negligible. Neither is a nightmare, and both look similar when rendering the game on my desktop, tested on both my 3090 and my 6900 XT.
#34
erek
Ravenas: My monitor?

LG UHD Monitor 27" 4K LED Nano IPS ... www.amazon.com/dp/B08BCRYS6B/ref=cm_sw_r_sms_awdb_imm_MTS3SH8VBDD8Q3E9G6G4?_encoding=UTF8&psc=1

The differences are negligible. Neither is a nightmare, and both look similar when rendering the game on my desktop, tested on both my 3090 and my 6900 XT.
I just got that same monitor a few days ago!

Going from 60 to 144 Hz made a huge difference for me...

4K gives reduced reasons for AA, and 144 Hz gives reduced reasons for V-Sync, imho
#35
Minus Infinity
Adobe has an AI-based super-resolution enhancement in Photoshop. It resizes the image by 2x, but if you then downsize that image back to the original size using bicubic interpolation, the resulting image is far more detailed in many cases. The difference can be huge, and I use it for many of my images. I have a Sony 42 MP camera and I'm amazed at how much more detail this can extract from an already very high quality image. It's similar to the pixel-shift technology used by many camera companies now to increase resolution.

Nvidia is not doing anything original here IMO, but it is welcome in games.
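For anyone who wants to try a crude version of that trick outside Photoshop, here is a minimal sketch using Pillow. The AI upscaler is stood in for by a plain Lanczos resize, so the detail gain will be far smaller than with a real super-resolution model, and the file names are placeholders.

```python
# Sketch of the 2x-upscale-then-bicubic-downsample trick described above.
# Note: Pillow's Lanczos resize is only a stand-in for the AI super-resolution
# step; a real workflow would use an actual super-resolution model for step 1.
from PIL import Image

def sharpen_via_supersample(path_in: str, path_out: str) -> None:
    img = Image.open(path_in)
    w, h = img.size
    # Step 1: upscale 2x (Adobe uses an AI model here)
    upscaled = img.resize((w * 2, h * 2), Image.LANCZOS)
    # Step 2: downsample back to the original size with bicubic interpolation
    upscaled.resize((w, h), Image.BICUBIC).save(path_out)

sharpen_via_supersample("photo.png", "photo_sharpened.png")
```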
#36
eidairaman1
The Exiled Airman
ZoneDymo: The problem with DLSS is... like so many projects from big N, it's just temporary; something will come along that works on everything, some global tech that will just replace it.
And then you will be looking back at those silly DLSS games from 10 years ago which no current GPU actually supports anymore.
Yup, PhysX and SLI come to mind. Shovelware.

So I presume the AI is from the internet?
#37
wolf
Performance Enthusiast
ZoneDymo: The problem with DLSS is... like so many projects from big N, it's just temporary; something will come along that works on everything, some global tech that will just replace it.
And then you will be looking back at those silly DLSS games from 10 years ago which no current GPU actually supports anymore.
Why would it need to last forever? It's meant for the here and now. Upscaling seems 100% here to stay and is only going to increase, and Nvidia clearly have a big hand in paving the way forward. In 10 years' time a current midrange GPU would obliterate any 2021 game, and DLSS wouldn't be needed anyway.
#38
londiste
Ravenas: Sure, it's extremely noticeable if I stop my gameplay, take a picture of the current scene, and then zoom in on plant petals so I can come on this forum and say, "Yup, just like I thought, markedly inferior to DLSS!"
Actually, it is mostly the other way around; in the few FSR titles I have played, FSR introduces quite noticeable shimmering. Plus it has clear sharpening artefacts; whether that is something you like or something that bothers you seems to be very subjective.
Minus Infinity: Adobe has an AI-based super-resolution enhancement in Photoshop. It resizes the image by 2x, but if you then downsize that image back to the original size using bicubic interpolation, the resulting image is far more detailed in many cases. The difference can be huge, and I use it for many of my images. I have a Sony 42 MP camera and I'm amazed at how much more detail this can extract from an already very high quality image. It's similar to the pixel-shift technology used by many camera companies now to increase resolution.

Nvidia is not doing anything original here IMO, but it is welcome in games.
The trick with DLSS, as well as XeSS and FSR, is to do this in a very limited timeframe and in a way that is temporally stable. Nvidia has said they target DLSS to do what it does at 2160p in 1.5 ms, less at lower resolutions. XeSS and FSR no doubt have similar targets.

Research into different upscaling methods has been going on for decades, and while the methods DLSS/XeSS/FSR use are from a while back (if we look at the landscape of general upscaling), the limited timeframe is the key factor in games.
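To put that 1.5 ms figure in perspective, a quick back-of-the-envelope calculation; the frame rates chosen here are just illustrative:

```python
# How much of a frame's time budget a 1.5 ms reconstruction pass consumes.
UPSCALE_MS = 1.5  # Nvidia's stated DLSS target at 2160p

for fps in (60, 120, 144):
    frame_budget_ms = 1000.0 / fps
    share = 100.0 * UPSCALE_MS / frame_budget_ms
    print(f"{fps} fps: {frame_budget_ms:.2f} ms per frame, "
          f"reconstruction takes {share:.1f}% of it")
```

At 60 fps that pass is about 9% of the frame budget; at 144 fps it is closer to 22%, which is why the time constraint dominates the design.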
#39
InVasMani
erek: I just got that same monitor a few days ago!

Going from 60 to 144 Hz made a huge difference for me...

4K gives reduced reasons for AA, and 144 Hz gives reduced reasons for V-Sync, imho
The need for AA actually depends more on PPI than on resolution alone.
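For reference, a quick sketch of the PPI arithmetic; the 27" 4K line matches the monitor mentioned above, and the other sizes are just common examples:

```python
import math

# Pixel density = diagonal resolution in pixels / diagonal size in inches.
def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    return math.hypot(width_px, height_px) / diagonal_in

print(f'27" 4K:    {ppi(3840, 2160, 27):.0f} PPI')  # ~163 PPI
print(f'27" 1440p: {ppi(2560, 1440, 27):.0f} PPI')  # ~109 PPI
print(f'24" 1080p: {ppi(1920, 1080, 24):.0f} PPI')  # ~92 PPI
```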
#40
Chrispy_
Bomby569: The devs can implement DLSS; they don't need Nvidia to do it. Also, 2.0 changed the way it works; there's no longer such a dependency on the AI supercomputers at Nvidia headquarters. That's what I understood, anyway.

You'll probably have launch games with DLSS on the titles that are Nvidia-branded and not on the ones that are AMD-branded. The bi-monthly thing you mention is for older titles.
If the devs implement DLSS and Nvidia don't feed it into their DL supercomputer for training, then it's not really DLSS, is it? It just uses the same generic upsampling and filtering techniques that FSR and prior title-specific post-processing filters use.

DLAA without DL is just regular AA; we already have about two decades of development covering a myriad of clever sampling/dynamic-resolution/temporal/sharpened/edge-detecting combinations. FXAA or TXAA seem to be good enough for most people at modern HD resolutions and 60 FPS framerates, so adding some proprietary garbage that only works properly once Nvidia have tuned the algorithm and bundled it into drivers is, IMO, not useful enough and too late to the party in almost every instance.

I think my beef with DLAA is the naming, not the actual technology. Using the otherwise-wasted Tensor cores to do something useful is good. Just call it TensorAA, ffs. It has NOTHING to do with deep learning if it's just making use of spare silicon to do a regular AA job.
#41
londiste
Chrispy_: If the devs implement DLSS and Nvidia don't feed it into their DL supercomputer for training, then it's not really DLSS, is it? It just uses the same generic upsampling and filtering techniques that FSR and prior title-specific post-processing filters use.
Based on what Nvidia has disclosed and what we know, DLSS uses an algorithm derived from machine learning. Nvidia has said DLSS 2.0 is no longer trained per game, which does not mean it is not machine learning; the resulting algorithm is simply more generic and applies well enough to games in general. The use of Tensor cores to run this also points to it being ML-derived.
#42
nguyen
Chrispy_: If the devs implement DLSS and Nvidia don't feed it into their DL supercomputer for training, then it's not really DLSS, is it? It just uses the same generic upsampling and filtering techniques that FSR and prior title-specific post-processing filters use.

DLAA without DL is just regular AA; we already have about two decades of development covering a myriad of clever sampling/dynamic-resolution/temporal/sharpened/edge-detecting combinations. FXAA or TXAA seem to be good enough for most people at modern HD resolutions and 60 FPS framerates, so adding some proprietary garbage that only works properly once Nvidia have tuned the algorithm and bundled it into drivers is, IMO, not useful enough and too late to the party in almost every instance.

I think my beef with DLAA is the naming, not the actual technology. Using the otherwise-wasted Tensor cores to do something useful is good. Just call it TensorAA, ffs. It has NOTHING to do with deep learning if it's just making use of spare silicon to do a regular AA job.
"FXAA and TXAA seem to be good enough" - said no gamer ever :roll:

Nvidia probably has trained its neural network well enough to create Skynet; now they are telling the Tensor cores to destroy gamers in games first before carrying out real-world domination.
#43
InVasMani
FXAA with ReShade is fine; it's closely comparable to SMAA imo, with better performance. I even prefer its image quality myself, but as I said, it's close. The text is clearer too, for what it's worth.

SMAA vs FXAA
#44
nguyen
InVasMani: FXAA with ReShade is fine; it's closely comparable to SMAA imo, with better performance. I even prefer its image quality myself, but as I said, it's close. The text is clearer too, for what it's worth.

SMAA vs FXAA
I tried FXAA once in GTA V, and I would rather lower some settings and use 2x MSAA.
#45
InVasMani
Are you using in-game FXAA or injection? I get what you're saying, though, about adjusting game render resolution and AA quality. It's always a delicate balance between performance and image quality.
#46
Chrispy_
nguyen"FXAA and TXAA seem to be good enough" - said no gamer ever :roll:

Nvidia probably has trained its neural network well enough to create Skynet, now they are telling the tensor cores to destroy gamers in games first before carrying out real world domination.
Good enough to compensate for higher resolutions? No.
Good enough compared to the best alternatives at any given resolution? Yes.
  • FXAA can't compensate for a lack of resolution, but it hides jaggies for free. It's better than no AA unless it's implemented so poorly that it also applies to the HUD/text.
  • TXAA does a much better job of maintaining detail but comes with a requirement for reasonably high framerates, and whilst cheap, it's not free. In 99% of cases it should be used if it's an option, because the image quality improvements are significant for very little cost.
In an ideal world, an AI would be smart enough in real time to know which parts of the image require VRS, MSAA, and TXAA. Nvidia's DLSS/DLAA isn't that AI, so until we get an actually intelligent frame-analysis AI that can apply enhancements on a per-frame basis, we're still living in the "generic postprocessing filter" era. Call it whatever name you want; it's not going to be a solution without compromise. The only good thing about DLAA is that Tensor cores are horrendously under-utilised on consumer RTX cards, so giving them anything to do is a bonus.
#47
TheoneandonlyMrK
lynx29: Does this mean DLSS 2.0 will make more sense for 1080p gamers wanting to render higher? Currently DLSS 2.0 only makes sense for 1440p and 4K gamers, from what I understand. So will this tech make it worthwhile for 1080p players wanting to scale up to higher resolutions?

@nguyen can you explain? It's getting confusing as **** LOL
Your question is confusing.

Are there many people wanting to scale up to 1080p anyway?!
#48
nguyen
Chrispy_: Good enough to compensate for higher resolutions? No.
Good enough compared to the best alternatives at any given resolution? Yes.
  • FXAA can't compensate for a lack of resolution, but it hides jaggies for free. It's better than no AA unless it's implemented so poorly that it also applies to the HUD/text.
  • TXAA does a much better job of maintaining detail but comes with a requirement for reasonably high framerates, and whilst cheap, it's not free. In 99% of cases it should be used if it's an option, because the image quality improvements are significant for very little cost.
In an ideal world, an AI would be smart enough in real time to know which parts of the image require VRS, MSAA, and TXAA. Nvidia's DLSS/DLAA isn't that AI, so until we get an actually intelligent frame-analysis AI that can apply enhancements on a per-frame basis, we're still living in the "generic postprocessing filter" era. Call it whatever name you want; it's not going to be a solution without compromise. The only good thing about DLAA is that Tensor cores are horrendously under-utilised on consumer RTX cards, so giving them anything to do is a bonus.
Yeah, if I liked The Shimmering then I would use FXAA.

What I like best about DLSS is that it reduces texture flickering, which is quite distracting; in Deathloop the flickering just made me not want to look at the game anymore.
#49
Chrispy_
nguyen: Yeah, if I liked The Shimmering then I would use FXAA.

What I like best about DLSS is that it reduces texture flickering, which is quite distracting; in Deathloop the flickering just made me not want to look at the game anymore.
FXAA doesn't reduce shimmering, period. It's limited by the source resolution.
TXAA (or any variant of TAA) is what you need to help with that. DLSS has a temporal component, so it's the TXAA part of DLSS that you like.