
NVIDIA Announces DLSS 3.5 Ray Reconstruction Technology, Works on GeForce 20 and Newer

Thank you Nvidia, I was SO MAD that DLSS 3 did not support the 3000 series, and now we are happy that this feature works on our RTX 3000 series and 2000 series cards.
 
You still don't have DLSS 3, since Frame Generation is still an RTX 4000 thing. They are just making a big mess with the naming of their tech.
 
I know. It's OK, as long as we now get the newer DLSS, which is even better than 3. Soon all DLSS 3 games will be 3.5 with Nvidia driver updates.
 
It doesn't mean a lot if DLSS 3.5 is not DLSS 3 plus something new; it's just misleading.
It's actually more like a "DLSS 3 for old RTX" edition that you're going to get.
 
DLSS, DLSS 2, DLSS 3, and DLSS 3.5 are supported by all RTX cards.
DLSS 3 Frame Gen is only supported by the 4000 series.

The naming is completely rubbish.
Ray Reconstruction is useful for cards that have the oomph to run games with RT on. For lower-end RTX cards, it does not offer anything.
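To pin down the compatibility rules from the top of this post, here's a minimal sketch (a hypothetical lookup of my own, not an NVIDIA API) of which DLSS features run on which RTX generation:

```python
# Toy lookup of the DLSS support matrix described above (not an NVIDIA API).
# Key: feature, value: minimum GeForce RTX generation that supports it.
MIN_GENERATION = {
    "Super Resolution (DLSS 2)": 20,      # all RTX cards
    "Ray Reconstruction (DLSS 3.5)": 20,  # all RTX cards
    "Frame Generation (DLSS 3)": 40,      # RTX 40 series only
}

def supported_features(gpu_generation: int) -> list[str]:
    """Return the DLSS features available on a given RTX generation."""
    return [name for name, gen in MIN_GENERATION.items() if gpu_generation >= gen]

print(supported_features(30))  # RTX 30 series: everything except Frame Generation
print(supported_features(40))  # RTX 40 series: everything
```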
 

I'm excited to try it out in CP2077, but I'm also not sure I would notice unless I saw both on and off side by side... Alan Wake 2 will also probably benefit from it, going by the released trailer.
 
It depends on the game. In Hogwarts Legacy it would make a hell of a difference, since the reflection resolution is completely rubbish. The RT shadows and even the ambient occlusion are rendered at very low res.

In Cyberpunk, yes, the reflections will be improved. And it would be great to see it in Control, where the noise can clearly be seen on many surfaces.
 
There's a lot of handwavium on how it actually works, but from what I can tell, they've actually managed to make a less-shit denoiser now by using some primitive AI pattern recognition in realtime on the GPU. The result is that it no longer rejects so many rays in the denoising process, and more of the actual light bounces and detail get through to the end result...
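As a toy illustration of that "rejecting fewer rays" idea (my own sketch, not NVIDIA's actual denoiser), consider a single pixel whose ray samples include a few legitimate bright bounces: an aggressive rejection step throws them away along with the noise, while keeping them preserves the lighting detail.

```python
# Toy illustration (not NVIDIA's algorithm): aggressive sample rejection
# during denoising also discards legitimate bright light bounces.
import numpy as np

rng = np.random.default_rng(0)

# Most rays for this pixel return ~1.0 radiance, but a few genuinely hit a
# bright emissive surface and return 4.0.
samples = rng.normal(1.0, 0.4, size=64)
samples[::16] = 4.0  # rare but real bright bounces

# Hand-tuned-style rejection: drop anything far from the local mean,
# which also throws away the real bright contributions.
mean = samples.mean()
kept = samples[np.abs(samples - mean) < 1.0]
print(f"with aggressive rejection: {kept.mean():.2f}")

# Keeping (and weighting) every sample lets the bright bounces survive
# into the final pixel, i.e. more detail "gets through".
print(f"keeping all samples:       {samples.mean():.2f}")
```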

Another way of saying it is that with DLSS 3.5, Nvidia will finally no longer be lying to us; DLSS 1.0 launched for RTX cards 5 years ago with the promise that it used the AI tensor cores in RTX cards to do the denoising for raytracing effects. After a few months that turned out to be a lie, since other GPUs without tensor cores were eventually supported. To backtrack on their initial lie, Nvidia simply claimed that when they said "AI cores" they meant they were using deep learning AI back at Nvidia HQ to pre-bake tuned denoising algorithms into the game-ready driver profiles for each supported title. Now, they're claiming it was hand-tuned all along, not AI deep learning at all.

So, five years after launch, Nvidia is finally using AI as promised in its raytracing and DLSS. Better late than never - even if it is little more than a tech-demo for the first couple of years until game developers bother to implement it in significant numbers.
 
So what does this thing actually do?
From what I gathered from the nebulous and misleading marketing, it seems to just be an AI-enhanced denoiser.
Which is just fine, but I fail to see what it's actually going to make better. The intentionally f'd up images with half-finished denoising as the "before" example have convinced me that this is a mostly moot upgrade, since they had to make the current good-looking denoised images intentionally ugly. This marketing basically took a beautiful girl's face, smeared it with shit, then cleaned it up and added some makeup, put the smeared face and the made-up face side by side and went "look how beautiful she is with our makeup!". Yeah, but without the smearing she was beautiful already.

The current denoising already works; it already makes images look good with RT on.
So what does this AI denoising actually do? Enhance speed? Enhance some detail that you only notice when you zoom in 15x? Is there an actual proper A/B comparison that doesn't require smearing some shit on the image?

The one image in Cyberpunk that compared a small pathway under lights honestly didn't make me feel like I was looking at anything different. And no, having to zoom in on the image doesn't mean crap. I don't use a lens when I game, nor do I randomly zoom in on things. I just play. If I can't see anything different on the fullscreen, then I don't see the point.

(watch the Nvidia cultists come and scream that if I don't see the Glory of DLSS I'm just not worth their time and that everyone should just naturally admire the Glory without any questions)
 
We'll see when it's out, I guess. I bet half of the people are just as clueless as you are. Nvidia seems to be doing lots of deep tech talk to make sure no one understands, then they post two pictures with one showing +5 FPS more for the mobs to drool over. Interesting marketing strategy, to say the least.
 
Knowing their PR, they'll probably announce the date of the next announcement.
Watch it on Youtube at 11 and decide for yourself.
 
So what does this thing actually do?
From what I gathered from the nebulous and misleading marketing, it seems to just be an AI-enhanced denoiser.
Which is just fine, but I fail to see what it's actually going to make better. The intentionally f'd up images with half-finished denoising as the "before" example have convinced me that this is a mostly moot upgrade, since they had to make the current good-looking denoised images intentionally ugly. This marketing basically took a beautiful girl's face, smeared it with shit, then cleaned it up and added some makeup, put the smeared face and the made-up face side by side and went "look how beautiful she is with our makeup!". Yeah, but without the smearing she was beautiful already.

The current denoising already works; it already makes images look good with RT on.
So what does this AI denoising actually do? Enhance speed? Enhance some detail that you only notice when you zoom in 15x? Is there an actual proper A/B comparison that doesn't require smearing some shit on the image?

The one image in Cyberpunk that compared a small pathway under lights honestly didn't make me feel like I was looking at anything different. And no, having to zoom in on the image doesn't mean crap. I don't use a lens when I game, nor do I randomly zoom in on things. I just play. If I can't see anything different on the fullscreen, then I don't see the point.

(watch the Nvidia cultists come and scream that if I don't see the Glory of DLSS I'm just not worth their time and that everyone should just naturally admire the Glory without any questions)
If you can't see the difference from the videos Nvidia has already published then, well, you can't see the difference. Nobody can help you with that.
 
So DLSS 4 will do the denoiser magic on the whole frame, not just the RT part of it..

And with that we will finally have fake frames (DLSS 3) based on a partial image (DLSS 4 + 3.5), all mashed up from a low-res image (DLSS 2). Great.

Can't wait to get to DLSS 10, where you only need one click of the mouse and then the whole game renders and plays itself automatically based on your past gaming profile.
The more you play, the higher your FPS.
 
Fake frames in fake games. Welcome to the future. :toast:
 
We'll see when it's out, I guess. I bet half of the people are just as clueless as you are. Nvidia seems to be doing lots of deep tech talk to make sure no one understands, then they post two pictures with one showing +5 FPS more for the mobs to drool over. Interesting marketing strategy, to say the least.
Idiots admire complexity, geniuses admire simplicity - Terry Davis

Nvidia's audience is all idiots, so they make pointlessly complex crap to impress them, and then superbly vapid crap to make them buy.

Idiots admire Nvidia, idiots wait for AMD to deliver, geniuses just play Factorio - me

If you can't see the difference from the videos Nvidia has already published then, well, you can't see the difference. Nobody can help you with that.
"If you don't see the Glory of DLSS, that's your problem"
Ah good to see that the cult mentality is doing as good as ever.

Fake frames in fake games. Welcome to the future. :toast:
Well, everything in games is fake. A lot in life is fake.
We live in a world of convincing fakeness.
 
"If you don't see the Glory of DLSS, that's your problem"
Ah good to see that the cult mentality is doing as good as ever.
You can apply your argument to everything. If I claim I can't see the difference in graphics between an Atari 2600 game and a PS5 game, what are you supposed to do? How are you supposed to convince me? Even better, what if I called you a cultist for "pretending" to notice the difference? Grow up man, you can't see it, it's fine, don't buy it, nobody is forcing you.

This isn't even the most extreme example, but still, if you can't see the differences between these screenshots then it's totally on you, my man. Nvidia CANNOT fix blindness.

[Attached image: DLSS on/off comparison screenshots]
 
Grow up man, you can't see it, it's fine, don't buy it, nobody is forcing you.
"Grow up man, you aren't good enough to be part of the cult, nobody is forcing you"
The worst is, this kind of talk works on most idiots.
 
To be fair, it's not that easy to understand the difference in the above image, @fevgatos.
You need to already understand how RT works and why there is latency in the SR+FG image, while in the last one everything is lit correctly.
You should have uploaded an image where the reflection resolution is better with RR on.

Not everyone understands what RT does to the scene, let alone the negatives that come with it.
 
Really? Looks pretty obvious to me.

"Grow up man, you aren't good enough to be part of the cult, nobody is forcing you"
The worst is, this kind of talk works on most idiots.
Just noticed you have a 7900 XT. Yeah, AMD users for some reason can't see the difference. Cultists, what can you do.
 
To be fair, it's not that easy to understand the difference in the above image, @fevgatos.
You need to already understand how RT works and why there is latency in the SR+FG image, while in the last one everything is lit correctly.
You should have uploaded an image where the reflection resolution is better with RR on.

Not everyone understands what RT does to the scene, let alone the negatives that come with it.
If you have to "understand" what you see, it means the difference is minimal at best, doesn't it?

As for Nvidia's screenshot, the last picture seems to be missing a few white light sources around that purple neon thing, that's why the tone of the whole image is more purple. That's the only difference I see. Could that be the reason for the higher FPS?
 
It's 99% the same thing and in 99% of cases you will never notice any extra detail, any difference at all, unless you do a side by side comparison. And even then you need a lens.

But the cultists must believe what the Master says...
Eventually they'll release some explanation of what they're actually doing that's different. AI-ifying the denoising process can't hurt much, and should bring some speed upgrades/faster RT. That's the only advantage I can think of.
 
I know. It's OK, as long as we now get the newer DLSS, which is even better than 3. Soon all DLSS 3 games will be 3.5 with Nvidia driver updates.
It's actually DLSS 2.5

As for Nvidia's screenshot, the last picture seems to be missing a few white light sources around that purple neon thing, that's why the tone of the whole image is more purple. That's the only difference I see. Could that be the reason for the higher FPS?
How come DLSS off gets a third of the FPS of DLSS 2?
DLSS 2 shouldn't be generating that many frames. Uhm, it just renders faster?
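(Rough back-of-the-envelope on the "renders faster" part: the jump is mostly down to internal render resolution, not generated frames. Assuming 4K output and DLSS Performance mode, which renders at half resolution per axis, and that shading cost roughly scales with pixel count:)

```python
# Rough arithmetic: why upscaling alone can massively raise FPS at 4K.
# Assumes shading/ray-tracing cost scales with pixel count; real games vary.
native_4k = 3840 * 2160           # pixels shaded with DLSS off
dlss_performance = 1920 * 1080    # internal resolution at 4K DLSS Performance

print(native_4k / dlss_performance)  # 4.0x fewer pixels to trace and shade
```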

And with that we will finally have fake frames (DLSS 3) based on a partial image (DLSS 4 + 3.5), all mashed up from a low-res image (DLSS 2). Great.
Maybe transistors stopped shrinking 10 years ago.
Now they're shrinking our wallets by unlocking what they've been hiding for 10 years.
 
How come DLSS off gets a third of the FPS of DLSS 2?
DLSS 2 shouldn't be generating that many frames. Uhm, it just renders faster?
It's a low resolution image, so we can't really see how much detail is missing. And on the topic of missing detail...
I marked the light sources that I was talking about in my previous post, and their reflections on the floor. The last picture seems to be missing them. It's easy to produce more frames per second with less detail.
[Attached image: screenshot with the missing light sources marked]
 
Purple rays came in, and then the denoiser said "let there be light..." and saw those details as unnecessary noise :D

Those flares on top were like icing on the cake. Sad to see that the new DLSS punched them out.

Here's the same frame with DLSS 6. Nvidia finally managed to animate each frame. :rockout:

[Animated GIF: Christmas lights]
 