
AMD FSR 2.0 Quality & Performance

So wtf is DL in DLSS these days? Apparently AMD is doing it without any neural network shenanigans.
As per the white paper "A Survey of Temporal Antialiasing Techniques", section "8.3. Machine learning-based methods":

Salvi [Sal17] enhances TAA image quality by using stochastic gradient descent (SGD) to learn optimal convolutional weights for computing the color extents used with neighborhood clamping and clipping methods (see Section 4.2). Image quality can be further improved by abandoning engineered history rectification methods in favor of directly learning the rectification task. For instance, variance clipping can be replaced with a recurrent convolutional autoencoder which is jointly trained to hallucinate new samples and appropriately blend them with the history data [Sal17].

Thus, DLSS uses a convolutional autoencoder for better quality output. Tensor cores help with the Challenges (section 6, quoted below) by providing more processing power.

6. Challenges
Amortizing sampling and shading across multiple frames does sometimes lead to image quality defects. Many of these problems are either due to limited computation budget (e.g. imperfect resampling), or caused by the fundamental difficulty of lowering sampling rate on spatially complex, fast changing signals. In this section we review the common problems, their causes, and existing solutions.
Tensor cores increase the computational budget, and the convolutional autoencoder helps with the second part, the lowered sampling rate, by hallucinating new samples.
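
To make that concrete, here is a minimal sketch of the idea in PyTorch: a tiny recurrent convolutional autoencoder that takes the current upsampled frame plus the motion-reprojected history and predicts a learned per-pixel blend instead of hand-tuned clamping or clipping. The layer sizes, buffer names and shapes are illustrative assumptions, not NVIDIA's actual DLSS network.

```python
# Minimal sketch (assumptions: PyTorch, NCHW tensors; not NVIDIA's real network)
import torch
import torch.nn as nn

class RecurrentTAAUpscaler(nn.Module):
    """Toy recurrent convolutional autoencoder: learns to blend the reprojected
    history with the current upsampled frame instead of variance clipping."""
    def __init__(self, feat=32):
        super().__init__()
        # encoder: current frame (3 ch) + reprojected history (3 ch) -> features
        self.encoder = nn.Sequential(
            nn.Conv2d(6, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, stride=2, padding=1), nn.ReLU(),
        )
        # decoder: back to full resolution, predicting color + per-pixel blend weight
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat, feat, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat, 4, 3, padding=1),  # 3 color channels + 1 blend weight
        )

    def forward(self, current_up, history_reprojected):
        x = torch.cat([current_up, history_reprojected], dim=1)
        out = self.decoder(self.encoder(x))
        color, alpha = out[:, :3], torch.sigmoid(out[:, 3:4])
        # learned blend replaces hand-engineered neighborhood clamping/clipping
        return alpha * color + (1 - alpha) * history_reprojected

# usage: the output becomes the history for the next frame (the recurrent loop)
net = RecurrentTAAUpscaler()
cur = torch.rand(1, 3, 1080, 1920)   # current frame, upsampled to target resolution
hist = torch.rand(1, 3, 1080, 1920)  # history reprojected with motion vectors
new_frame = net(cur, hist)
```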

You can process ML tasks any way you like, but real time puts a limit on how long you can spend processing the image. This can reduce quality if there is not enough processing power. Tensor cores are faster at this task than normal shader cores; they can execute a whole matrix multiply-accumulate per clock cycle. You can fall back to normal processing, but as Intel states for the DP4a version of XeSS, both quality and performance are reduced compared to the XMX (Intel's tensor-like cores) version.
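
For a feel of why the real-time budget matters, here is a quick back-of-the-envelope calculation; the 1.5 ms pass cost is an assumed placeholder, not a measured figure for any particular GPU.

```python
# Rough frame-budget arithmetic (illustrative numbers, not vendor measurements):
# the upscaler's inference pass must fit inside the per-frame time budget.
for fps in (60, 120):
    frame_budget_ms = 1000.0 / fps
    upscale_ms = 1.5  # assumed cost of the ML upscale pass
    share = upscale_ms / frame_budget_ms
    print(f"{fps} fps: {frame_budget_ms:.2f} ms/frame, "
          f"a {upscale_ms} ms pass takes {share:.0%} of the budget")
```

At 60 fps that hypothetical pass is under 10% of the frame; at 120 fps it is nearly a fifth, which is why dedicated matrix hardware (or a faster fallback path) matters.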
 
It is proprietary, though..

Nvidia RTX - Wikipedia
DX12U and DXR are proprietary to Microsoft. They are the standard both AMD and NVIDIA must follow. RT, ML and DirectStorage are covered in this standard. There is no point to be made here.

And he has an AMD CPU and GPU. Starting to see a pattern....
 
DX12U and DXR are proprietary to Microsoft. They are the standard both AMD and NVIDIA must follow. RT, ML and DirectStorage are covered in this standard. There is no point to be made here.
The Tensor Cores are what RTX designates as proprietary.
 
Tensor cores also exist on cards without ray tracing cores. There were no mainstream GeForce graphics cards based on Volta, for example, yet it was NVIDIA's first chip to feature Tensor cores. Volta's Tensor cores are first generation, while Ampere has third-generation Tensor cores. RT cores are on Turing and its successors.

RTX runs on Nvidia Volta-, Turing- and Ampere-based GPUs, specifically utilizing the Tensor cores (and new RT cores on Turing and successors) on the architectures for ray-tracing acceleration.
 
AFAIK on Volta the Tensor cores are only used in ray-tracing context for DLSS processing. There is no evidence they are used for RT acceleration.
 
AFAIK on Volta the Tensor cores are only used in ray-tracing context for DLSS processing. There is no evidence they are used for RT acceleration.
I read some papers where denoising can use AI, and in the future you could reduce the number of rays needed in a scene. The AI works most of the rays out and you don't have to process as much. It's like a kind of DLSS, but for the rays in a scene.

This video is just easier to watch or you can go to the source.

I believe this won't be ready for NVIDIA's next-generation cards, but the generation afterwards will likely use AI to massively speed up ray tracing, as their Tensor cores will support this method. If NVIDIA's next-generation cards have this feature, AMD will be far behind in ray tracing. This is why AMD's lack of Tensor or XMX-like cores is a big deal for the future.
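
As a rough illustration of that idea, here is a toy learned denoiser sketch in PyTorch: it consumes a noisy low-sample-count render plus cheap auxiliary buffers (albedo, normals) and predicts a clean image, which is what would let you trace fewer rays per pixel. The buffers, network shape and names are assumptions for the sketch, not any shipping denoiser.

```python
# Toy AI denoiser sketch (assumptions: PyTorch; not OptiX, NRD or any real product)
import torch
import torch.nn as nn

class TinyRTDenoiser(nn.Module):
    """Predicts a clean image from a noisy 1-spp render plus auxiliary G-buffers,
    so far fewer rays need to be traced per pixel."""
    def __init__(self, feat=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(9, feat, 3, padding=1), nn.ReLU(),   # noisy(3) + albedo(3) + normals(3)
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, 3, 3, padding=1),
        )

    def forward(self, noisy, albedo, normals):
        return self.net(torch.cat([noisy, albedo, normals], dim=1))

# e.g. 1 sample per pixel instead of 16+: the network fills in the rest
denoiser = TinyRTDenoiser()
noisy_1spp = torch.rand(1, 3, 720, 1280)
albedo     = torch.rand(1, 3, 720, 1280)
normals    = torch.rand(1, 3, 720, 1280)
clean = denoiser(noisy_1spp, albedo, normals)
```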
 
I read some papers where denoising can use AI, and in the future you could reduce the number of rays needed in a scene.
Yes, the Tensor cores can be used for AI-based denoising. However, current games don't do that, and whether future ones will, we shall see.
 
Yes, the Tensor cores can be used for AI-based denoising. However, current games don't do that, and whether future ones will, we shall see.
I believe it's being researched right now and will appear in the future. The output of this method can be seen here. Once this technology hits the mainstream, ray tracing will look magical.

We propose the concept of neural control variates (NCV) for unbiased variance reduction in parametric Monte Carlo integration for solving integral equations. So far, the core challenge of applying the method of control variates has been finding a good approximation of the integrand that is cheap to integrate. We show that a set of neural networks can face that challenge: a normalizing flow that approximates the shape of the integrand and another neural network that infers the solution of the integral equation.

We also propose to leverage a neural importance sampler to estimate the difference between the original integrand and the learned control variate. To optimize the resulting parametric estimator, we derive a theoretically optimal, variance-minimizing loss function, and propose an alternative, composite loss for stable online training in practice.

When applied to light transport simulation, neural control variates are capable of matching the state-of-the-art performance of other unbiased approaches, while providing means to develop more performant, practical solutions. Specifically, we show that the learned light-field approximation is of sufficient quality for high-order bounces, allowing us to omit the error correction and thereby dramatically reduce the noise at the cost of insignificant visible bias.
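
For anyone unfamiliar with control variates, here is the plain, non-neural version of the trick the abstract builds on, as a NumPy toy: estimate the integral of e^x over [0, 1] using a cheap approximation g(x) = 1 + x whose integral is known exactly. The integrand and g are illustrative choices; the paper's contribution is learning g (and the sampler) with neural networks.

```python
# Plain control variates (NumPy toy, background for the quoted abstract)
import numpy as np

rng = np.random.default_rng(0)
N = 10_000
x = rng.random(N)                 # uniform samples on [0, 1]

f = np.exp(x)                     # "expensive" integrand, true integral = e - 1
g = 1.0 + x                       # control variate: crude approximation of f
G = 1.5                           # exact integral of g over [0, 1]

plain_mc = f.mean()               # ordinary Monte Carlo estimate
ctrl_var = G + (f - g).mean()     # control-variates estimate (still unbiased)
truth = np.e - 1.0

print(f"plain MC:        {plain_mc:.5f}  (error {abs(plain_mc - truth):.1e})")
print(f"control variate: {ctrl_var:.5f}  (error {abs(ctrl_var - truth):.1e})")
# The residual f - g has far lower variance than f, so the estimator converges faster.
print("var(f) =", f.var(), " var(f - g) =", (f - g).var())
```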

Ignoring RT and AI is not what the mainstream should be doing, and we should not allow companies like AMD to ignore it either. Screaming "FSR 2 does not need tensor cores" basically misses the point of what is happening in computer graphics.

The end result is going to be something like this:


Note how this AI network enhances GTA 5 so that it looks almost photorealistic in real time. Ignoring AI and downplaying it misses the whole direction graphics is currently heading.
 
I read some papers where denoising can use AI, and in the future you could reduce the number of rays needed in a scene. The AI works most of the rays out and you don't have to process as much. It's like a kind of DLSS, but for the rays in a scene.

This video is just easier to watch or you can go to the source.

I believe this won't be ready for NVIDIA's next-generation cards, but the generation afterwards will likely use AI to massively speed up ray tracing, as their Tensor cores will support this method. If NVIDIA's next-generation cards have this feature, AMD will be far behind in ray tracing. This is why AMD's lack of Tensor or XMX-like cores is a big deal for the future.
Denoising in RT is already being used right now without AI. Again, this is similar to this thread: just because you can do something with AI doesn't mean you must do it with AI.

One of the things to keep in mind is that there is very little time to run the inference (applying what the neural network has learned) in real time. You can save some time by running things asynchronously, but only up to a point (and NVIDIA is already doing that).

There are plenty of good, relatively cheap denoising algorithms you can use without having to resort to AI.

AI has more of a future in areas where it can save a huge amount of work, for example helping create 3D maps like the NVIDIA demo. For real-time rendering, I don't think we are there yet. There is not enough time for AI to really make a difference, and it would have to do way more work.

If you want AI to produce a result, it must deliver that result considerably faster than it would have taken to compute with a proper conventional algorithm.

Right now, I think it's NVIDIA trying to sell AI to gamers, when in reality they want (for a good reason) to get a foothold in the AI market. They have Tensor cores to sell to gamers, and they are trying to do things with them that do not need to be done with them.

On top of that, they can stamp the word "AI" on it and be trendy. People will think it's magic.
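
The break-even argument above can be written down in one line; every number here is a made-up placeholder, not a measurement:

```python
# AI denoising only pays off if its net cost beats the conventional filter
# at equal or better quality (all figures are illustrative placeholders).
classical_denoise_ms = 1.0   # e.g. an SVGF-style spatiotemporal filter
ai_inference_ms      = 2.5   # running the network within the frame
rays_saved_ms        = 1.0   # time saved by tracing fewer rays per pixel

ai_net_cost = ai_inference_ms - rays_saved_ms
print("AI wins" if ai_net_cost < classical_denoise_ms else "classical filter wins")
```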
 
Denoising in RT is already being used right now without AI. Again, this is similar to this thread: just because you can do something with AI doesn't mean you must do it with AI.

One of the things to keep in mind is that there is very little time to run the inference (applying what the neural network has learned) in real time. You can save some time by running things asynchronously, but only up to a point (and NVIDIA is already doing that).

There are plenty of good, relatively cheap denoising algorithms you can use without having to resort to AI.

AI has more of a future in areas where it can save a huge amount of work, for example helping create 3D maps like the NVIDIA demo. For real-time rendering, I don't think we are there yet. There is not enough time for AI to really make a difference, and it would have to do way more work.

If you want AI to produce a result, it must deliver that result considerably faster than it would have taken to compute with a proper conventional algorithm.

Right now, I think it's NVIDIA trying to sell AI to gamers, when in reality they want (for a good reason) to get a foothold in the AI market. They have Tensor cores to sell to gamers, and they are trying to do things with them that do not need to be done with them.

On top of that, they can stamp the word "AI" on it and be trendy. People will think it's magic.
They explain it all here.
 
Am I the only one here who is okay with their video games looking like, you know, video games? Photorealism is a bad target, IMO.
 
Am I the only one here who is okay with their video games looking like, you know, video games? Photorealism is a bad target, IMO.
It's cheaper to develop games with ray tracing. You don't have to have photorealism as the goal. You can use upscaling to better realize your artistic vision by adding more detail and then upscaling to maintain performance. Tensor cores can also be used for better in-game AI via DX12's support for machine learning.
 
FSR 2.0 is working on the Steam Deck. This tech is going to be more useful in the console / fixed-hardware space than on PCs.
 
Am I the only one here who is okay with their video games looking like, you know, video games? Photorealism is a bad target, IMO.
Photorealistic look and natural look are two different things.

I like realism when it comes to the behavior of lighting, shadows and reflections. But I do not like when games go for a natural, bland, washed out look. Art style is very important and I definitely prefer a unique design with an interesting color palette over something that tries to look like the real world. If I want to see the real world, I just go outside.

You get the same thing with movies and tv shows. While they all look inherently realistic, they usually do not look natural, thanks to the use of apertures and filters.
 
Why compare only still images? What about quality in motion?
That's the big issue with the temporal method; DLSS had all kinds of issues in motion. Both look good in YouTube videos, but so did FSR 1, and it was complete garbage.

 
DX12U and DXR are proprietary to Microsoft. They are the standard both AMD and NVIDIA must follow. RT, ML and DirectStorage are covered in this standard. There is no point to be made here.

And he has an AMD CPU and GPU. Starting to see a pattern....

DXR is the standard which everyone must follow. Not only AMD and NVIDIA, but also Intel; hopefully you didn't forget about it.

RTX is NOT supported by non-NVIDIA graphics cards...
 
DXR is the standard which everyone must follow. Not only AMD and NVIDIA, but also Intel; hopefully you didn't forget about it.

RTX is NOT supported by non-NVIDIA graphics cards...
That was my....

More FSR 2 videos for anyone that cares.

DLSS has more fine detail and FSR 2 looks sharper. - HWU
In motion, FSR 2 loses more detail and is less stable than native. DLSS is more stable and has more fine detail when compared to native.
 
Please ignore the troll ARF, who doesn't know anything about what they post.
 
In Deathloop, the modes available are "Quality," "Balanced," and "Performance." "Ultra Quality" from FSR 1.0 has been removed because it was just "Quality" with "Sharpening," which can now be adjusted separately.
Hi @W1zzard, can you elaborate?
I thought "Ultra Quality mode" had 1.3x scaling per dimension and "Quality mode" 1.5x.
How is "Ultra Quality mode" just the same as "Quality mode" + Sharpening, like you said?
Is AMD giving developers the option to use a lower-than-advertised resolution in "Ultra Quality mode" and at the same time allowing them to call it "Ultra Quality mode"?

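For reference, here is what those per-axis scale factors work out to at a 4K output. The factors are the ones AMD documents for FSR (and that the question above cites); 4K is just an example output resolution, and real implementations may round the render target differently.

```python
# FSR per-axis scale factors -> render resolution at a 3840x2160 output
out_w, out_h = 3840, 2160
modes = {"Ultra Quality": 1.3, "Quality": 1.5, "Balanced": 1.7, "Performance": 2.0}

for name, scale in modes.items():
    w, h = round(out_w / scale), round(out_h / scale)
    print(f"{name:13s} {scale}x per axis -> render at about {w}x{h}")
# Quality (1.5x) renders at 2560x1440, Ultra Quality (1.3x) at roughly 2954x1662,
# which is the resolution difference the question above is asking about.
```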
 
That was my....

More FSR 2 videos for anyone that cares.

DLSS has more fine detail and FSR 2 looks sharper. - HWU
In motion, FSR 2 loses more detail and is less stable than native. DLSS is more stable and has more fine detail when compared to native.

Some fine detail renders better on NVIDIA, some better on AMD; he brought up points like the wires for the balloons being aliased in motion on NVIDIA but not on AMD, while a metal fence on top of a building was better on NVIDIA.

The edge for fine detail goes, as you say, to NVIDIA, but in motion I didn't see or hear a clear winner.

I need to bring out kekler 780TI and 7970 and see how they run, or if they run at all :D
FSR 1.0 worked on the 7970 just fine, which I think is really the game changer for these technologies... they just run and work
 
That was my....

More FSR 2 videos for anyone that cares.

DLSS has more fine detail and FSR 2 looks sharper. - HWU
In motion, FSR 2 loses more detail and is less stable than native. DLSS is more stable and has more fine detail when compared to native.
pinnacle of technology knowledge.
 
Bear in mind that this is just one game, and so far multiple reviewers/sites/tech-tubers are only covering this one because it seems it's the only one they can cover.

We've been seeing the preview images for weeks now, so AMD has had considerable time to work with the developer to, at least in theory, make this a best-case showing for FSR 2.0. Especially when they talk about integration taking mere days when the game already has DLSS, they've been fine-tuning the heck out of this one.

I would love for this to be the result we can generally expect, mind you, but we all need to be appropriately cautious until the consistency of results starts to come together.
 