
NVIDIA Announces DLSS 3.5 Ray Reconstruction Technology, Works on GeForce 20 and Newer

It's a low-resolution image, so we can't really see how much detail is missing. And on the topic of missing detail...
I marked the light sources that I was talking about in my previous post, and their reflections on the floor. The last picture seems to be missing them. It's easy to produce more frames per second with less detail.
View attachment 310841

You must be joking now...

When you walk in the game, you will definitely notice that the light bounces on the surfaces have latency. It's ridiculously obvious.
In this image it's more than obvious that the pavement does not reflect the lights above, because it's a few milliseconds behind in time.
The same happens with the reflections: the resolution starts out low and only gets better a moment after the reflection has first been rendered.

The problem is that people don't realize how difficult it is for hardware to process this within milliseconds, and then they post bullshXt about RT.
A good CPU can render one, just one, frame in minutes, while a GPU has to render 60+ frames in ONE second (rough numbers in the sketch below).
How can you say that nVidia, or even AMD, are not good enough at delivering RT?
Their GPUs, both Radeon and GeForce, are years ahead of any CPU.

If it weren't for RT, we would get the same oily, shXtty, plasticky graphics as all the console ports we got this year.
Increasing the texture resolution is NOT better graphics.
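To put rough numbers on the CPU-versus-GPU point above, here is a back-of-envelope sketch in Python. All the figures in it (rays per pixel, samples per pixel, minutes per offline frame) are my own assumptions chosen to show the scale gap, not data from NVIDIA or anyone in this thread:

```python
# Back-of-envelope only; every number below is an assumption for illustration.
pixels_4k = 3840 * 2160      # ~8.3 million pixels per frame
rays_per_pixel = 2           # assume 1 primary + 1 bounce ray (very low for RT)
fps = 60

gpu_rays_per_second = pixels_4k * rays_per_pixel * fps
print(f"real-time budget: {gpu_rays_per_second / 1e9:.1f} billion rays/s")

# An offline CPU render can take minutes per frame but shoots far more rays.
offline_spp = 512            # assumed samples per pixel for an offline render
seconds_per_frame = 5 * 60   # assume 5 minutes per frame
cpu_rays_per_second = pixels_4k * offline_spp / seconds_per_frame
print(f"offline pace:     {cpu_rays_per_second / 1e9:.3f} billion rays/s")
```

With these assumptions the real-time case needs roughly a billion rays every second, while the offline render gets away with a pace nearly two orders of magnitude slower, and that is before denoising and shading are counted.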
 
You must be joking now...

When you walk in the game, you will definitely notice that the light bounces on the surfaces have latency. ...
I think you misunderstand me. I'm not talking about reflections, but light sources that seem to be missing from the picture with ray reconstruction enabled.

Purple rays came in, and then the denoiser said "let there be light..." and treated those details as unnecessary noise. :D

Those flares on top were like icing on the cake. Sad to see the new DLSS punch them out.

Here's the same frame with DLSS 6. Nvidia finally managed to animate each frame. :rockout:

Christmas Lights GIF by James Madison University
DLSS 56487 needs to add a noise filter for the de-noiser to be at its best.
 
How can you say that nVidia, or even AMD, are not good enough at delivering RT?
Their GPUs, both Radeon and GeForce, are years ahead of any CPU.

If it weren't for RT, we would get the same oily, shXtty, plasticky ...
 
I think you misunderstand me. I'm not talking about reflections, but light sources that seem to be missing from the picture with ray reconstruction enabled.

The missing light is not a DLSS issue. It's just half a second later, and the lighting has changed.
The whole point is the latency of the light bounces.
The image nVidia used is not good for showing what DLSS 3.5 does.
A reflection in a mirror, or a video, would be better for that.

Another example:
The light has just been switched off, but the information telling the light bounces to stop hitting the darn surface has not arrived yet.
In Cyberpunk it's way more noticeable because there are changing lights everywhere.

View attachment 310851
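A toy model of that lag (my own simplification, not how any particular engine does it): many ray-traced effects blend each new frame's noisy result into a history buffer, so a surface keeps part of its old lighting for several frames after the light that produced it has been switched off.

```python
# Toy temporal-accumulation lag: the light turns off at frame 0, but the
# accumulated value on the surface takes many frames to fade out.
history = 1.0    # accumulated brightness while the light was still on
current = 0.0    # what each new frame reports once the light is off
blend = 0.1      # assumed weight given to the newest frame

for frame in range(1, 11):
    history = (1 - blend) * history + blend * current
    print(f"frame {frame:2d}: surface still at {history:.2f} of its lit brightness")
# After 10 frames the surface is still at ~0.35; at 60 fps that is ~167 ms of lag.
```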


 
The missing light is not a DLSS issue. It's just half a second later, and the lighting has changed.
The whole point is the latency of the light bounces. ...

Ah, I see. We'll have to look at it in action from third-party sources, I guess.
 
It's 99% the same thing, and in 99% of cases you will never notice any extra detail, any difference at all, unless you do a side-by-side comparison. And even then you need a lens.

But the cultists must believe what the Master says...
Eventually they'll release some explanation of what they're actually doing that's different. AI-ifying the denoising process can't hurt much, and should bring some speed upgrades/faster RT. That's the only advantage I can think of.
I'm sorry to say, but you sound like the one in a cult here.

It's a low-resolution image, so we can't really see how much detail is missing. And on the topic of missing detail...
I marked the light sources that I was talking about in my previous post, and their reflections on the floor. The last picture seems to be missing them. It's easy to produce more frames per second with less detail.
View attachment 310841
It's not missing anything; the light source in the first three pictures shouldn't be there, but without RR it takes too long for the information to be processed. I guess a video would do a better job of demonstrating what it does.
 
The missing light is not a DLSS issue. It's just half a second later, and the lighting has changed.
The whole point is the latency of the light bounces. ...
Finally an actual answer!
So yeah, the AI denoiser accelerates the denoising; that's the only thing I thought it would do.
Well, it's a nice improvement.
 
The image nVidia used is not good for showing what DLSS 3.5 does.
That's not exactly correct. Previous denoisers use temporal information: they average out a bunch of frames over time, which smears out high-contrast areas. With DLSS 3.5, NVIDIA promises this is correctly taken into account. Their example is the front lights of the car in CP2077, which are just one large bright area with the old denoiser; with 3.5 you can clearly see the light cone.
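A toy illustration of that smearing (my own sketch, not NVIDIA's denoiser): take a one-pixel highlight that moves one pixel per frame, average the last few frames, and the sharp light becomes a wide dim band, which is roughly the "one large bright area" effect described above.

```python
# Naive temporal averaging of a moving one-pixel highlight.
width, frames = 12, 4
history = []                       # the last few frames, each a row of pixels

for f in range(frames):
    row = [0.0] * width
    row[f] = 1.0                   # the highlight moves one pixel per frame
    history.append(row)

averaged = [sum(col) / frames for col in zip(*history)]
print("latest frame :", history[-1])
print("temporal avg :", [round(v, 2) for v in averaged])
# The single bright pixel is spread over four pixels at 0.25 each; a denoiser
# that recognises the feature could keep it at full brightness and width 1.
```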
 
So if I'm reading this right, DLSS off shows the true power of an expensive card. DLSS on says "oooh, let's make it better" by using software to lower the resolution, then using AI to clean it up at high resolution.

I remember when video card companies improved their hardware and you did not need any software tweaking crap to play games. The 10 series was the last of the true pure-hardware Nvidia cards.

I mean, RTX is cool and looks great, but if the hardware cannot do it without software rendering, then it's not an improvement. If that were the case, onboard Intel graphics could be tweaked to do the same thing.
 
So if I'm reading this right, DLSS off shows the true power of an expensive card. ...
Either that, or DLSS/FSR is a push to get everybody to buy a 4K monitor that they couldn't otherwise game on (not to mention they wouldn't want one).
 
... I mean, RTX is cool and looks great, but if the hardware cannot do it without software rendering, then it's not an improvement. ...
It's definitely an improvement, because it enables you to experience RT at 4K at 60+ fps. Otherwise you couldn't do it.
It achieves this with a compromise in the form of DLSS, instead of the traditional tuning of graphics settings.

He who can't stand the notion of compromise will never accept DLSS (or FSR or XeSS) and will be just fine with RT off.

The vast majority who can compromise, each to their own degree, will be able to enjoy this "precious" eye candy. Nothing more, nothing less.
 
So if I'm reading this right, DLSS off shows the true power of an expensive card. ...
You are reading this wrong. DLSS / FSR allows you to improve image quality with no hit to framerate. 4K + DLSS Q looks better than 1440p native and has similar performance, so why would you play at 1440p native instead of 4K DLSS Q? Rhetorical question.
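For what it's worth, the pixel math behind that claim (assuming the commonly cited DLSS Quality scaling of 2/3 of the output resolution on each axis):

```python
# Pixel counts: 1440p native vs. 4K with DLSS Quality (assumed 2/3-per-axis internal res).
native_1440p = 2560 * 1440                               # ~3.7 Mpx actually shaded
dlss_q_internal = int(3840 * 2 / 3) * int(2160 * 2 / 3)  # also ~3.7 Mpx shaded
output_4k = 3840 * 2160                                  # ~8.3 Mpx shown on screen

print(f"1440p native shades     {native_1440p / 1e6:.1f} Mpx")
print(f"4K DLSS Quality shades  {dlss_q_internal / 1e6:.1f} Mpx")
print(f"4K output resolution is {output_4k / 1e6:.1f} Mpx")
```

The shading cost is about the same in both cases, which is why the performance is similar, but the DLSS path reconstructs to a full 4K output.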
 
....yes, if a normal raster game requires, let's say, 10 calculations per second from the GPU, a simple RT game with just reflections and shadows asks for 1,000 calc/s.
If you don't want to compromise by using DLSS/FSR, wait 15-20 years and maybe you'll get a GPU that can render RT games natively.

I'll write it again: people have no idea how RT works, or how difficult it is to calculate.
Are you happy seeing models with more triangles and better textures as the only improvement in games?
 
You are reading this wrong. DLSS / FSR allows you to improve image quality with no hit to framerate. 4K + DLSS Q looks better than 1440p native and has similar performance, so why would you play at 1440p native instead of 4K DLSS Q? Rhetorical question.
If you have a 1440p monitor, you'd ideally want 1440p native, not 1440p + DLSS. 4K is probably a different story; not many graphics cards can game at such a high resolution natively.
 
Is this a joke? DLSS 3 requires the 40-series, but somehow 3.5 runs on the 20-series and up?

FSR 3.0 has frame generation and will run on all GPUs that can do DX11+.

nVidia is just being their deceiving/scummy selves as usual.
 