
NVIDIA DLSS 4 Transformer

Until yesterday DLSS 3.8 was impressive. Today we learn that DLSS 4.0 is impressive while DLSS 3.8 is not so impressive after all.

Waiting for the DLSS 5.0 vs. DLSS 4.x review in a couple of years, to read the same thing: that DLSS 4.x isn't as impressive as we'll be told for the next couple of years, but DLSS 5.0 is really impressive.
Now with slightly less smearing! How did we miss it all before and only notice now? Moving on: isn't this low-performance ray tracing great? At this point I think I'm going to ignore all DLSS/etc. talk, since the issues that bother me seem inherent to it.
 
The benefit is that you can run as low as Performance mode with the Transformer model and it will still look better than Quality mode with the CNN model.

TL;DR: everyone on a 3000-series card can step down a DLSS mode and still get the image quality improvements (see the sketch below for what that buys in pixels). Probably something I would add to the conclusion.
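Out of curiosity, here's what a "step down" means in raw pixels. A minimal sketch; the scale factors are the commonly cited DLSS defaults, and individual games can override them:

```python
# Internal render resolution per DLSS mode at 4K output. Scale factors are
# the commonly cited defaults (assumed here; titles can override them).
SCALE = {"DLAA": 1.0, "Quality": 0.667, "Balanced": 0.58,
         "Performance": 0.5, "Ultra Performance": 0.333}

def internal_resolution(out_w, out_h, mode):
    s = SCALE[mode]
    return round(out_w * s), round(out_h * s)

for mode in SCALE:
    w, h = internal_resolution(3840, 2160, mode)
    print(f"{mode:>17}: {w}x{h}")

# Performance mode shades (0.5/0.667)^2 ~ 56% of Quality mode's pixels, so
# "Performance Transformer beats Quality CNN" means better output from
# roughly 44% fewer shaded pixels.
```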
 
Sorry, but even on static images I can see ZERO difference between the two modes.
Some textures are washed out with DLSS 3.8. And there is some shimmering too, but you can only see that in the video.

Having just watched the video on a 32" 4K monitor from 2' away, I can honestly say I couldn't tell native from DLSS 4 in a blind test. The chances of me getting a 5070 or 5060 Ti, enabling all those "fake frames" and enjoying a few RT titles at 4K just went way up.
 
A famous quote from a TV series: "Welcome to the death of the Age of Reason."
How can, let's say, a 10-megapixel picture be downscaled to 4 megapixels, then upscaled back to 10 megapixels, and look better than the original?
Unless, and this sounds crazy, the original lacks contrast, sharpness and color, and is somewhat gimped so that the upscaled image can look better.
Shush! Soon they shall come and your fact-based statement shall be proved wrong by personal experience.
 
Judging by the 20 and 30 series GPUs taking a very large performance hit with the RR (Ray Reconstruction) Transformer model, my assumption is that it heavily uses FP8 instructions, which those GPUs' tensor cores do not natively support but the 40 series does. I was actually expecting Nvidia to make use of the 50 series' new FP4 support here, but that doesn't seem to be the case.
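Since the FP8 point above is speculation, here's an equally speculative illustration of what FP8 would mean: a rough Python sketch of E4M3 rounding (ignoring subnormals, NaN and saturation details), just to show how coarse the format is. Cards without native FP8 tensor instructions would have to run the same math at higher precision, giving up the speedup:

```python
import math

E4M3_MAX = 448.0          # largest finite E4M3 value
MANT_BITS = 3             # explicit mantissa bits (plus one implicit bit)

def quantize_e4m3(x: float) -> float:
    """Round x to the nearest E4M3-representable value (simplified)."""
    if x == 0.0:
        return 0.0
    x = max(-E4M3_MAX, min(E4M3_MAX, x))   # clamp into representable range
    m, e = math.frexp(x)                   # x = m * 2**e, 0.5 <= |m| < 1
    scale = 2 ** (MANT_BITS + 1)           # keep 4 significand bits total
    return math.ldexp(round(m * scale) / scale, e)

for v in [0.1, 1.0, 3.14159, 100.0]:
    q = quantize_e4m3(v)
    print(f"{v:>10} -> {q:<12} (rel. err {abs(q - v) / v:.1%})")
```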

In any case, you really need to see the games in motion to properly evaluate the quality differences. The new transformer models do a lot to clean up temporal stability (though there are regressions in some cases) and ghosting issues, particularly with RR. In some cases there is a significant increase in image detail and sharpness too that can allow players to reduce the DLSS quality level while both maintaining quality and increasing performance compared to the CNN model.
 
It also looks strange for TPU to reuse third-party videos. o_O
Aha... that's Maxus24's personal channel... also posted a few days before he even gave us access to the video. That's a breach of my personal trust; I'll terminate the relationship.

Any volunteers to take over this position?
 
A famous quote from a TV series: "Welcome to the death of the Age of Reason."
How can, let's say, a 10-megapixel picture be downscaled to 4 megapixels, then upscaled back to 10 megapixels, and look better than the original?
Unless, and this sounds crazy, the original lacks contrast, sharpness and color, and is somewhat gimped so that the upscaled image can look better.
You're missing the temporal aspect of DLSS and other upscaling solutions; that's where your missing data comes from (a minimal sketch of the idea follows at the end of this post).
Virtually all games ship with TAA as the default, which also uses temporal accumulation. Some have better algorithms, some worse, mostly worse; that's why an upscaled image can look better than a native one with bad TAA.

Whether someone likes TAA or not does not matter. The days of using MSAA, or better yet no AA at all, or the FXAA that was popular in UE3/3.5 times, are gone.
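The sketch mentioned above: a toy model of temporal accumulation, the core trick shared by TAA, DLSS and friends. The signal, resolution and blend factor are invented for illustration; real implementations also reproject and clamp the history:

```python
import numpy as np

# Each frame point-samples the scene on a coarse grid with a different
# sub-pixel jitter; an exponential moving average over the history then
# approaches the area-averaged (anti-aliased) result no single frame has.

def render_frame(jitter, samples=64):
    x = (np.arange(samples) + jitter) / samples
    return np.sin(2 * np.pi * 24 * x)    # 24 cycles over 64 pixels: aliases badly

rng = np.random.default_rng(0)
history = render_frame(rng.random())
ALPHA = 0.1                              # weight given to the newest frame

for _ in range(400):
    history = ALPHA * render_frame(rng.random()) + (1 - ALPHA) * history

# Reference: the true per-pixel average over all jitters (what a very high
# sample count would give). The accumulated history hovers close to it,
# while any single jittered frame is far off.
reference = np.mean([render_frame(j) for j in np.linspace(0, 1, 256, endpoint=False)],
                    axis=0)
print("mean |history - reference|:     ", np.abs(history - reference).mean())
print("mean |single frame - reference|:", np.abs(render_frame(0.5) - reference).mean())
```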
 
A famous quote from a TV series: "Welcome to the death of the Age of Reason."
How can, let's say, a 10-megapixel picture be downscaled to 4 megapixels, then upscaled back to 10 megapixels, and look better than the original?
Unless, and this sounds crazy, the original lacks contrast, sharpness and color, and is somewhat gimped so that the upscaled image can look better.
If you knew the math, or at least understood vector graphics, you could answer that yourself.
Watch the video: there's a scene where native rendering fails to draw some power lines, but DLSS 4 brings them back. Of course, DLSS 4 won't do that for you all the time. But it can.
 
"For everyone"? Did Nvidia switch to hardware agnostic all of a sudden?
Hardware agnostic and fighting against lock-in tech is a thing of the past.

We now applaud and worship things that remove options from us, the consumers.
I'm just waiting for the day when DLSS is added to the thumbs-down column, when it becomes a poor man's, unacceptable way to increase performance at the expense of image quality.
See above, but in my book it should already be there, since its main function is to keep you locked into Ngreedia's hardware.
 
Honestly, this looks great. There are a few minor issues, like the funky shadows and lighting on the palm trees and the missing grass, but the vegetation looks better, and the power lines in Stalker that are missing even in native reappear. All upscaling seems to have an issue with the rear-window heater.
 
The best example is to look at our screenshots and compare the CNN and Transformer models at 1440p "Performance" mode; the difference in detail is very obvious, especially on small, thin objects such as power lines.

I’m not seeing it :(
 
I’m not seeing it :(
(attached screenshot crop)


And that's from the 4K image, not the 1440p one.
 
A famous quote from a TV series: "Welcome to the death of the Age of Reason."
How can, let's say, a 10-megapixel picture be downscaled to 4 megapixels, then upscaled back to 10 megapixels, and look better than the original?
Unless, and this sounds crazy, the original lacks contrast, sharpness and color, and is somewhat gimped so that the upscaled image can look better.
When the information in those 10 megapixels is redundant enough, you can drop pixels or infer them without losing a significant portion of the context behind them.

Plenty of things, both natural and artificial, don't actually work at their raw values. For instance, Blu-ray video drops well over half the information needed to encode every frame at raw values. Your brain does a lot of inferring, too.
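To put a number on the Blu-ray example: a quick arithmetic sketch (1080p, 8-bit assumed) of 4:2:0 chroma subsampling, which throws away half the raw samples before the lossy codec even starts, and almost nobody notices:

```python
# Raw sample counts per frame; no codec involved yet.
W, H, BITS = 1920, 1080, 8

raw_444 = W * H * 3 * BITS                          # 3 full-res planes (4:4:4)
yuv_420 = (W * H + 2 * (W // 2) * (H // 2)) * BITS  # full-res luma, quarter-res chroma

print(f"4:4:4 frame: {raw_444 / 8 / 2**20:.2f} MiB")
print(f"4:2:0 frame: {yuv_420 / 8 / 2**20:.2f} MiB ({yuv_420 / raw_444:.0%} of raw)")
# The codec then discards far more on top of this 50% cut.
```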
 
You're missing the temporal aspect of DLSS and other upscaling solutions; that's where your missing data comes from.
Virtually all games ship with TAA as the default, which also uses temporal accumulation. Some have better algorithms, some worse, mostly worse; that's why an upscaled image can look better than a native one with bad TAA.

Whether someone likes TAA or not does not matter. The days of using MSAA, or better yet no AA at all, or the FXAA that was popular in UE3/3.5 times, are gone.
So you're basically saying that some games implement TAA better than others, and poor implementations can result in blurriness or ghosting, so DLSS can actually look better than the native-resolution image if the native image uses a poor TAA implementation.
Why would game developers, especially those sponsored by Nvidia, have a poor implementation of TAA? I don't know; maybe others do.
 
All I'm gonna say is: regardless of what you think of Nvidia, the fact that they brought DLSS 4 and even Frame Gen to cards as shitty as the 2060 is a very impressive bit of engineering. Nice job, Nvidia; hopefully they won't use this as an excuse to slack on raw performance for ages.
 
If you knew the math, or at least understood vector graphics, you could answer that yourself.
Watch the video: there's a scene where native rendering fails to draw some power lines, but DLSS 4 brings them back. Of course, DLSS 4 won't do that for you all the time. But it can.
So a game that screws up rendering due to a poor TAA implementation means that DLSS 4 improves image quality in all titles?

Nah, DLSS 4 improves image quality in games where the devs massively cock up. TAA is a scourge on GPU rendering and frankly shouldn't exist. In games that don't use it, DLSS 4 does not improve image quality.
All I'm gonna say is: regardless of what you think of Nvidia, the fact that they brought DLSS 4 and even Frame Gen to cards as shitty as the 2060 is a very impressive bit of engineering. Nice job, Nvidia; hopefully they won't use this as an excuse to slack on raw performance for ages.
It is indeed great news for Nvidia owners, especially midrangers. I look forward to the extra boost out of my 4060 mini PC.

With AMD locking FSR 4 to RDNA 4 GPUs, that's going to be yet another uphill battle for Team Red.
 
So you're basically saying that some games implement TAA better than others, and poor implementations can result in blurriness or ghosting, so DLSS can actually look better than the native-resolution image if the native image uses a poor TAA implementation.
Why would game developers, especially those sponsored by Nvidia, have a poor implementation of TAA? I don't know; maybe others do.
Because most games are made in UE4/5. UE4 has not-bad, not-great TAA, and UE5 has the better TSR, but neither compares to DLSS. You forget that you also have DLAA mode, which is basically native, and FSR 3 literally has a Native mode.
Studios won't waste time developing and fine-tuning new TAA techniques when they have DLSS, FSR and XeSS solutions that are more or less plug and play. And even when they do, as in Insomniac's case, it still looks worse.
The fact of the matter is that DLSS, XeSS and even FSR are state-of-the-art temporal upscaling / temporal AA solutions. You can run them at native resolution if you like.

If you don't like temporal solutions as a whole, then I don't know what to tell you. Turn them off where you can and enjoy the native, no-AA image.
So a game that screws up rendering due to a poor TAA implementation means that DLSS 4 improves image quality in all titles?

Nah, DLSS 4 improves image quality in games where the devs massively cock up. TAA is a scourge on GPU rendering and frankly shouldn't exist. In games that don't use it, DLSS 4 does not improve image quality.

It is indeed great news for Nvidia owners, especially midrangers. I look forward to the extra boost out of my 4060 mini PC.

With AMD locking FSR 4 to RDNA 4 GPUs, that's going to be yet another uphill battle for Team Red.
If not TAA, then what is the better solution? MSAA? FXAA?
 
All I'm gonna say is: regardless of what you think of Nvidia, the fact that they brought DLSS 4 and even Frame Gen to cards as shitty as the 2060 is a very impressive bit of engineering. Nice job, Nvidia; hopefully they won't use this as an excuse to slack on raw performance for ages.

You wouldn’t know it by the way AMD users are shitting all over this article.

I don’t know, maybe they are hurt that FSR 4 isn’t coming to their GPU, or something :shadedshu:.

Edit: As far as raw performance goes, rasterization is tapped out. Go watch Mark Cerny's PlayStation 5 Pro presentation. Everyone, including AMD, is focused on what comes after rasterization.
 
Because most games are made in UE4/5. UE4 has not-bad, not-great TAA, and UE5 has the better TSR, but neither compares to DLSS. You forget that you also have DLAA mode, which is basically native, and FSR 3 literally has a Native mode.
Studios won't waste time developing and fine-tuning new TAA techniques when they have DLSS, FSR and XeSS solutions that are more or less plug and play. And even when they do, as in Insomniac's case, it still looks worse.
The fact of the matter is that DLSS, XeSS and even FSR are state-of-the-art temporal upscaling / temporal AA solutions. You can run them at native resolution if you like.

If you don't like temporal solutions as a whole, then I don't know what to tell you. Turn them off where you can and enjoy the native, no-AA image.

If not TAA, then what is the better solution? MSAA? FXAA?
I would happily take the performance hit of MSAA over the visual sludge that is TAA.
 
With these comparison images, when I move the vertical line to the right edge, do I see the image described in the top-left corner or the top-right one?
It's not obvious.
 
I would happily take the performance hit of MSAA over the visual sludge that is TAA.

The problem with MSAA is that it only works on polygon edges, not on texture or shader aliasing. It also doesn't work with VRS, or with anything that isn't updated at the display refresh rate. Also, transparency just breaks it.
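A toy model of that first limitation, for anyone wondering why MSAA leaves texture/shader aliasing alone. Everything here (scene, shader, sample pattern) is invented for illustration; what matters is the structure: coverage is tested per sample, but shading runs once per pixel:

```python
import numpy as np

# 4x MSAA sample positions inside one pixel (a made-up pattern).
SAMPLES = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]

def covered_by_triangle(x, y):
    return x < 0.5               # stand-in for a geometric edge crossing the pixel

def fragment_shader(x, y):
    return np.sin(40 * x) ** 2   # high-frequency "texture": aliases no matter what

def msaa_pixel(px, py):
    color = fragment_shader(px + 0.5, py + 0.5)   # shaded ONCE, at the pixel center
    coverage = [covered_by_triangle(px + sx, py + sy) for sx, sy in SAMPLES]
    # Resolve: covered samples carry the single shaded color, the rest are
    # background (0). Geometry edges blend smoothly, but the shader's own
    # aliasing passes through untouched, since it was evaluated at one point.
    return color * sum(coverage) / len(SAMPLES)

print("pixel on the edge:", msaa_pixel(0.0, 0.0))  # 50% coverage -> softened edge
```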
 
I would happily take the performance hit of MSAA over the visual sludge that is TAA.
Well, then the whole current rendering pipeline would need to change back to the old ways.

Also, I would love to see how that MSAA would look on those Stalker 2 screenshots with all the alpha-tested vegetation. But tbh, some people would still claim it looks better.
 
With these comparison images, when I move the vertical line to the right edge, do I see the image described in the top-left corner or the top-right one?
It's not obvious.

Yes, and you can select the comparison image by clicking on the description. It’s quite well done.
 
Aha... that's Maxus24's personal channel... also posted a few days before he even gave us access to the video. That's a breach of my personal trust; I'll terminate the relationship.

Any volunteers to take over this position?
I believe there are additional flaws in this test beyond this particular issue. For instance, any run that experiences asset-loading failures should be discarded and excluded from the comparison.

The problem with MSAA is that it only works on polygon edges, not on texture or shader aliasing. It also doesn't work with VRS, or with anything that isn't updated at the display refresh rate. Also, transparency just breaks it.
A hybrid of MSAA + SMAA works well: MSAA for geometry edges, post-process AA for textures. (Forward+ rendering is an alternative to the blurry TAA in modern games.)
TAA reduces image quality, and relying on upscaling under these conditions is pointless, as it cannot restore lost detail, especially since upscaling relies on the temporal filter anyway.
 