
Editorial NVIDIA: Image Quality for DLSS in Metro Exodus to Be Improved in Further Updates, and the Nature of the Beast

Not sure I know what you are getting at...

Like, settings the same between the two runs outside of DLSS?

Yes, because it's almost the reverse of what the games are getting. I understand it's on rails and all that, but still. In-game set scenes would have the same visual increase if that were true.
 
They wouldn't and don't... it's too variable comparatively. Much, much smaller dataset to work with.
 
They wouldn't and don't... it's too variable comparatively. Much, much smaller dataset to work with.

If the scene is similarly on rails, say an in-game cutscene with no player control, or a transitional NPC dialogue scene with no player interaction, would it therefore not benefit?
 
I really can't answer with any certainty. But outside of cutscenes, games aren't on rails and are rendering millions of DIFFERENT frames in a session. Hence the difference. A benchmark gets to learn the EXACT same scene, and since it is never different, it can optimize.

I'd be interested to hear more details on the neural network and learning. Can't say I buy the opinion of the editorial (not enough horsepower to catch up and optimize games), but I can sure see why that is his conclusion.
 
So, DLSS works best on a "DEMO" or on-rails benchmarks, but epically fails in the real game compared to a simple downscaling method?
 
I really can't answer with any certainty. But outside of cutscenes, games aren't on rails and are rendering millions of DIFFERENT frames in a session. Hence the difference. A benchmark gets to learn the EXACT same scene, and since it is never different, it can optimize.

I'd be interested to hear more details on the neural network and learning. Can't say I buy the opinion of the editorial (not enough horsepower to catch up and optimize games), but I can sure see why that is his conclusion.

Well, Nvidia did say:
Nvidia said:
DLSS uses fewer input samples than traditional techniques such as TAA
If it increases those samples, it would need more resources.
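For anyone wondering what "input samples" means here, below is a back-of-the-envelope sketch of the difference: TAA keeps blending many jittered frames into a history buffer, while a DLSS-style pass starts from far fewer raw samples and leans on a learned model instead. The blend factor, frame count, and resolutions are made-up illustration values, not anything Nvidia has published.

```python
# Toy illustration only: TAA-style temporal accumulation consumes a long
# stream of jittered frames, while a DLSS-style pass starts from far fewer
# raw samples. Blend factor, frame count, and resolution are arbitrary.
import numpy as np

def taa_accumulate(frames, blend=0.1):
    """Exponential moving average over consecutive jittered frames."""
    history = frames[0].astype(np.float32)
    for frame in frames[1:]:
        history = (1.0 - blend) * history + blend * frame
    return history

rng = np.random.default_rng(0)
frames = [rng.random((270, 480), dtype=np.float32) for _ in range(8)]  # fake luma frames

taa_inputs = len(frames)   # TAA effectively uses every frame in its history
dlss_inputs = 1            # a learned upscaler works from the current frame
print(taa_inputs, dlss_inputs, taa_accumulate(frames).shape)
```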
 
They also said that with more time it learns better... so not sure how to take that.
 
I'm not even sure what to make of this. We now have an "AA" (upscaling) method that requires "training" on some cluster somewhere, on a per-game, per-resolution basis. It seems overly complicated... like flying a spaceship to work when it's 3 blocks away from your house.
 
They also said that with more time it learns better... so not sure how to take that.
So we should play the game maybe a year after release for optimal DLSS performance?
Is this why Metro chose to release on Steam in 2020?
 
So we should play the game maybe a year after release for optimal DLSS performance?
Is this why Metro chose to release on Steam in 2020?
I'll assume you are serious here...

No. Play it as is, and if the IQ losses are too great... don't use it. See an update that is supposed to improve it... check again. Don't make it complicated. ;)
 
I'm not even sure what to make of this. We now have an "AA" (upscaling) method that requires "training" on some cluster somewhere, on a per-game, per-resolution basis. It seems overly complicated... like flying a spaceship to work when it's 3 blocks away from your house.

Well, if you take Nvidia's word for it, it's being trained on SS images of the current content. The improved AI algo is then applied to the app/drivers, so the algo is just being accelerated on the tensor cores GPU-side.
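To make that description concrete, here is a minimal sketch of the kind of offline training Nvidia describes: pair lower-resolution rendered frames with supersampled "ground truth" captures, fit an upscaling network, and ship the resulting weights to users. The tiny CNN, loss, resolutions, and file name are all assumptions for illustration, not Nvidia's actual network or pipeline.

```python
# Minimal sketch of the "train on supersampled frames offline" idea.
# Assumption: a toy 2x CNN upscaler, nothing like Nvidia's real model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUpscaler(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * 4, 3, padding=1),  # 4 = 2x2 pixel shuffle
        )

    def forward(self, x):
        return F.pixel_shuffle(self.body(x), 2)  # 2x spatial upscale

model = TinyUpscaler()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-ins for real data: low-res rendered frames and their 2x
# supersampled "ground truth" counterparts captured offline.
lowres = torch.rand(4, 3, 135, 240)
supersampled = torch.rand(4, 3, 270, 480)

for step in range(100):              # the real training runs far longer on DGX clusters
    pred = model(lowres)
    loss = F.l1_loss(pred, supersampled)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The resulting weights are what would ship to users (e.g. via a driver update).
torch.save(model.state_dict(), "dlss_like_model.pt")
```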
 
So weird how negative people are. They jump on the hate bandwagon so quickly with everything, including new hardware designs. If you never support something new in hardware that might not be super useful now but could be down the road, you'll never get anything new from hardware makers that MIGHT bring good change for game designers. Honestly, the world has gone nuts with HATE, and it's sad to see that seep into an industry that I considered a hobby for so very long. This mentality will, over time, stifle and/or slow down hardware development, and all you'll be getting is faster cards with no attempt made at something new. The worst one-sided hate towards Nvidia I have seen yet is the latest from Hardware Unboxed. Now I'll most likely get no understanding for what I just typed, except more hate. Really sad to see this going on.
 
SS images

Wouldn't the images be black and white? I didn't think they had color back then.

If you never support something new in hardware that might not be super useful now

The idea is that those not super useful features are not the selling point but rather a bonus. Anyway, I think the amount of performance they managed to squeeze out of them is more impressive than the two features.
 
Alright guys...
DLSS is using pre-trained models and then running inference on the tensor cores.
This is a far lighter workload than training. Training is done on DGX clusters, and the driver gets the new model.
It's the same thing self-driving cars do: pattern recognition based on pre-trained models... with months of compute time to back it.

That said... it may get better, it may not. Neural nets are not one-size-fits-all... one model may be good for one area of the map and terrible for another.
I may have facepalmed when 3DMark and other completely fucking linear tests are used as examples of what DLSS can be.
It simply will never get there, just by the nature of the technology being used. Sure, it might get a skosh better... but not the unicorns-shitting-rainbows 3DMark lies.
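To put the "pre-trained model, inference only" point into a sketch: the game-side work is just pushing each frame through already-trained weights, with no backprop involved. The model, file name, and FP16 usage below are hypothetical carry-overs from the training sketch above, standing in for whatever the driver actually ships and the tensor cores actually run.

```python
# Sketch of the inference-only side: load shipped weights and run one frame
# per render pass. Names are hypothetical; FP16 stands in for the kind of
# reduced-precision math tensor cores accelerate.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUpscaler(nn.Module):          # same toy network as the training sketch
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 12, 3, padding=1),
        )

    def forward(self, x):
        return F.pixel_shuffle(self.body(x), 2)

model = TinyUpscaler().eval()
# model.load_state_dict(torch.load("dlss_like_model.pt"))  # weights shipped with the driver

frame = torch.rand(1, 3, 540, 960)      # one low-res frame from the renderer

with torch.no_grad():                   # inference only: no backprop, far cheaper than training
    if torch.cuda.is_available():
        model, frame = model.half().cuda(), frame.half().cuda()
    upscaled = model(frame)             # 2x output, roughly 1080p here

print(upscaled.shape)
```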
 
so weird how negative people are.

People paid for the promises that Jensen made and it clearly doesn't work.
People get mad.
Is that so weird?

The idea is that those not super useful features are not the selling point but rather a bonus.

You do know Jensen spent over 20 minutes on stage at the RTX launch event talking about this "not the selling point" feature, right?
 
DLSS IS Garbage.
Look at the video test/review and comparison done by Hardware Unboxed: you are better off with your own upscaling - crisper textures, more detail, and the same or similar framerates to DLSS.
www.youtube.com/watch?v=3DOGA2_GETQ
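For reference, "your own upscaling" in that comparison roughly means dropping the render scale and resizing back to native with a conventional filter plus sharpening. A minimal sketch of that idea follows; the 75% scale, Lanczos resampling, and unsharp-mask settings are assumptions, not Hardware Unboxed's exact methodology.

```python
# Rough sketch of the "do your own upscaling" alternative: render at reduced
# resolution, resize back to native with a conventional filter, and sharpen.
# Scale factor and filter settings are illustrative assumptions.
from PIL import Image, ImageFilter

NATIVE = (3840, 2160)          # 4K output target
SCALE = 0.75                   # ~75% render scale, similar performance cost idea

render_res = (int(NATIVE[0] * SCALE), int(NATIVE[1] * SCALE))

# Stand-in for a frame rendered at the reduced resolution.
frame = Image.new("RGB", render_res, (40, 90, 140))

upscaled = frame.resize(NATIVE, resample=Image.LANCZOS)
sharpened = upscaled.filter(ImageFilter.UnsharpMask(radius=2, percent=80, threshold=2))

sharpened.save("upscaled_sharpened.png")
```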
 
Has anyone here tested without turning on any level of anti-aliasing but leaving DLSS enabled? Maybe that's how it's supposed to work on its own? Since it's meant to replace anti-aliasing without a bigger performance hit while maintaining decent image quality?
 
Has anyone here tested without turning on any level of anti-aliasing but leaving DLSS enabled? Maybe that's how it's supposed to work on its own? Since it's meant to replace anti-aliasing without a bigger performance hit while maintaining decent image quality?

Jensen at the RTX launch event deliberately compared 4K TAA to 4K DLSS.
They are supposed to compete.
 
That demo can't be used as a benchmark since it's a demo... I'm talking about a real-world comparison.
 
Basically, the AI can't process each frame in real time and won't be able to in the near future either, as it needs a lot of horsepower and time for learning...

NV, you'd better leave AI out of gaming products, as some gamers need max fps at max details instantly...
 
Well, you can turn DLSS off once you have the DXR/RTX feature enabled...
 
That demo can't be used as a benchmark since it's a demo... I'm talking about a real-world comparison.
BFV DLSS is a real-world comparison.
And DLSS clearly lost to the good old downscaling method.

And btw, since "The Jensen" himself showed TAA vs DLSS on stage with a DEMO,
I think it is fair.
And now we know DLSS only works on DEMOs or railed benchmark sections, because they are fixed so the AI can train on them.

I assume the only type of "Real Game" that DLSS works best on is old on-rails light gun arcade shooters such as House of the Dead.
 
Seems it is... Still, at least we know the Tensor cores are getting "trained" with "patches". But it is still nowhere near how we expect them to perform unless there's some sort of SDK to leverage those cores that are seemingly doing nothing.
 
Seems it is... Still, at least we know the Tensor cores are getting "trained" with "patches". But it is still nowhere near how we expect them to perform unless there's some sort of SDK to leverage those cores that are seemingly doing nothing.

If DLSS is just another SLI, users have to wait for SLI profiles and "hopefully" it works. It is a waste of time and money.
Also, speaking about time,
nobody knows how long it would take for "Jensen's AI" to train for ONE game.
In this case, BFV: BFV released back in Nov 2018.
It's Feb 2019 right now and his AI cannot produce a better image than a simple downscaling method.
By the time the AI is capable of doing that, the game may be obsolete...
 
The worst thing... what if I do not play those few AAA games that have this tech? Waste of silicon? What if a game has the feature, but someone didn't pay enough to dedicate the almighty Nvidia server resources to tailor it for that exact game? And then the priorities: who decides which one is the priority now, BFV or Metro or whatever? If the game has mods, skins, ReShade... all of that goes down the toilet instantly, since the AI pre-training isn't aware of custom content.

I used an Nvidia 980 Ti and a 1080 Ti... I am not fond of investing in things I don't use... We do not buy Quadro cards for exactly the same reason. Ray tracing is nice, but this? No thanks...
 