
Editorial NVIDIA: Image Quality for DLSS in Metro Exodus to Be Improved in Further Updates, and the Nature of the Beast

Don't know why TPU keeps using this image from Port Royal bench, when all images from real games show DLSS to be WAY more blurry.

To make it look better.
Why they would want it to look better than it is, is another question: perhaps the stock price? Or that wonderful NDA? Or basic green-brain syndrome?

Or, perhaps, just lazy copypasta from a green-infected place.
 
Hi all, I am new here :-),

Metro Dev: Ray Tracing Is Doable via Compute Even on Next-Gen Consoles, RT Cores Aren’t the Only Way

It doesn’t really matter – be it dedicated hardware or just enough compute power to do it in shader units, I believe it would be viable. For the current generation – yes, multiple solutions is the way to go.
This is also a question of how long you support a parallel pipeline for legacy PC hardware. A GeForce GTX 1080 isn’t an out of date card as far as someone who bought one last year is concerned. So, these cards take a few years to phase out and for RT to become fully mainstream to the point where you can just assume it. And obviously on current generation consoles we need to have the voxel GI solution in the engine alongside the new ray tracing solution. Ray tracing is the future of gaming, so the main focus is now on RT either way.

In terms of the viability of ray tracing on next generation consoles, the hardware doesn’t have to be specifically RTX cores. Those cores aren’t the only thing that matters when it comes to ray tracing. They are fixed function hardware that speed up the calculations specifically relating to the BVH intersection tests. Those calculations can be done in standard compute if the compute cores are numerous and fast enough (which we believe they will be on the next gen consoles). In fact, any GPU that is running DX12 will be able to “run” DXR since DXR is just an extension of DX12.

Other things that really affect how quickly you can do ray tracing are a really fast BVH generation algorithm, which will be handled by the core APIs; and really fast memory. The nasty thing that ray tracing does, as opposed to something like, say, SSAO, is randomly access memory. SSAO will grab a load of texel data from a local area in texture space, and because of the way those textures are stored there is a reasonably good chance that those texels will be quite close (or adjacent) in memory. Also, the SSAO for the next pixel over will work with pretty much the same set of samples. So, you have to load far less from memory because you can cache an awful lot of data.

Working on data that is in cache speeds things up a ridiculous amount. Unfortunately, rays don’t really have this same level of coherence. They can randomly access just about any part of the set of geometry, and the ray for the next pixel could be grabbing data from an equally random location. So as much as specialised hardware to speed up the calculations of the ray intersections is important, fast compute cores and memory which lets you get at your bounding volume data quickly is also a viable path to doing real-time RT.
https://wccftech.com/metro-dev-ray-tracing-doable-compute/
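To make the cache point above concrete, here's a rough toy sketch (plain CPU code, made-up sizes, nothing from any real renderer): a coherent, SSAO-style pass that walks memory in order versus a ray-style pass that jumps to random locations.

```cpp
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <random>
#include <vector>

// Toy comparison of coherent (SSAO-like) vs. random (ray-like) memory access.
// The array stands in for texel / BVH node storage; sizes are arbitrary.
int main() {
    constexpr size_t kCount = 1 << 24;            // ~16M 4-byte elements (~64 MB)
    std::vector<uint32_t> data(kCount, 1);

    std::mt19937 rng(42);
    std::uniform_int_distribution<size_t> pick(0, kCount - 1);

    uint64_t sum = 0;

    // Coherent pass: neighbouring "pixels" read neighbouring elements,
    // so almost every access is served from cache lines already loaded.
    auto t0 = std::chrono::steady_clock::now();
    for (size_t i = 0; i < kCount; ++i) sum += data[i];
    auto t1 = std::chrono::steady_clock::now();

    // Incoherent pass: each "ray" jumps to an effectively random element,
    // so most accesses miss the cache and stall on memory.
    for (size_t i = 0; i < kCount; ++i) sum += data[pick(rng)];
    auto t2 = std::chrono::steady_clock::now();

    auto ms = [](auto a, auto b) {
        return std::chrono::duration<double, std::milli>(b - a).count();
    };
    std::printf("coherent:   %.1f ms\nincoherent: %.1f ms\n(sum=%llu)\n",
                ms(t0, t1), ms(t1, t2), (unsigned long long)sum);
    return 0;
}
```

On a typical machine the second loop runs several times slower even though it does the same number of adds (part of the gap is RNG overhead, but most of it is memory stalls), which is the same coherence problem the dev describes for rays versus SSAO samples.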

So IMO it looks like NV knew that next-gen consoles could support DXR, so they launched RTX cards this year to be first to market and gain sales before the consoles arrive with DXR and take high-volume RTX GPU sales away from NV?

P.S. - if fast memory with low latency is so important for DXR, maybe next-gen consoles will use HBM2?
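And on the "any GPU that is running DX12 will be able to run DXR" part of the quote: that really is just a capability query in D3D12. A minimal sketch (assuming you already have an ID3D12Device) of how an engine asks whether the driver exposes DXR at all:

```cpp
#include <windows.h>
#include <d3d12.h>
#include <cstdio>

// Ask an already-created D3D12 device which raytracing tier the driver
// exposes. Whether that tier is backed by RT cores or plain compute is
// invisible to the application at this level.
bool SupportsDXR(ID3D12Device* device) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5)))) {
        return false;  // Older runtime/driver: it can't even report the feature.
    }
    if (options5.RaytracingTier == D3D12_RAYTRACING_TIER_NOT_SUPPORTED) {
        std::printf("No DXR support reported by this device/driver.\n");
        return false;
    }
    std::printf("DXR supported, tier %d.\n",
                static_cast<int>(options5.RaytracingTier));
    return true;
}
```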
 
If it increases those samples, it would need more resources.
Technically NVIDIA could do it offline ("teach" the neural network), then just ship the weights as part of the driver.
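Roughly what that would look like in practice, as a made-up sketch (the file layout and names here are hypothetical, not anything NVIDIA actually ships): the expensive training happens offline on their cluster, and the driver only ever loads a flat blob of weights and runs cheap inference.

```cpp
#include <cstdint>
#include <fstream>
#include <vector>

// Hypothetical weight blob for one game profile; the real per-game DLSS
// data format is not public, this is just the general idea.
struct WeightBlob {
    uint32_t version = 1;
    std::vector<float> weights;
};

// Offline (on the training cluster): dump the learned weights to a file
// that gets bundled with the driver package.
void SaveWeights(const WeightBlob& blob, const char* path) {
    std::ofstream out(path, std::ios::binary);
    uint32_t count = static_cast<uint32_t>(blob.weights.size());
    out.write(reinterpret_cast<const char*>(&blob.version), sizeof(blob.version));
    out.write(reinterpret_cast<const char*>(&count), sizeof(count));
    out.write(reinterpret_cast<const char*>(blob.weights.data()),
              count * sizeof(float));
}

// At runtime (in the driver / game): no training, just load and infer.
WeightBlob LoadWeights(const char* path) {
    std::ifstream in(path, std::ios::binary);
    WeightBlob blob;
    uint32_t count = 0;
    in.read(reinterpret_cast<char*>(&blob.version), sizeof(blob.version));
    in.read(reinterpret_cast<char*>(&count), sizeof(count));
    blob.weights.resize(count);
    in.read(reinterpret_cast<char*>(blob.weights.data()), count * sizeof(float));
    return blob;
}
```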
 
Metro Dev: Ray Tracing Is Doable via Compute Even on Next-Gen Consoles, RT Cores Aren’t the Only Way
https://wccftech.com/metro-dev-ray-tracing-doable-compute/

Problem with that is it needs an awful lot of compute power. Heck, an overclocked Titan V loses to an RTX 2060 in Port Royal, and you can't currently get more compute power than that. Will future consoles get some form of RT? Maybe they will, but I doubt it will be built on DXR, which needs acceleration HW for BVH.
 
"Image Quality for DLSS in Metro Exodus to Be Improved in Further Updates"

I'm sure every buyer of a $1,100-1,200 2080 Ti was just waiting to hear about "further updates" five months later.

So DLSS works best in a demo or an on-rails benchmark, but epically fails in real games compared to a simple resolution-scaling method?

Yes, you are exactly right, sir. Nearly half a year in, every feature NV announced with the RTX series is a failure: two games available with RT, both with pathetic performance; one RT game (FFXV) cancelled; one still waiting for its patch (Tomb Raider), which produced 30-ish FPS at FHD during the NV launch event. DLSS makes graphics quality look worse compared to switching it off. Now add the minimal performance boost over the last generation and the astonishingly increased prices, and you get the result NV's financial reports indicated: a near 50% drop in gaming sales.
 
Then you get the result NV's financial reports indicated: a near 50% drop in gaming sales.
And they blamed AMD for it.
 
I'm not even sure what to make of this. We now have an "AA" (upscaling) method that requires "training" on some cluster somewhere, on a per-game, per-resolution basis. It seems overly complicated... like flying a spaceship to work, which is 3 blocks away from your house.

Add to this that even once it's trained, it will probably only look on par with an otherwise upscaled image (as per the HWUB BFV video, where the image upscaled to 4K was very close to native 4K yet DLSS was miles off).

Checker-boarding (Horizon Zero Dawn on the PS4 Pro, for example) and upscaling already do a good job, so it's a wonder what the point of DLSS actually is. The actual software that supports it only reaffirms this position.
 

[attached image: DLSS.jpeg]


how about DLBS: blursampling
 
Technically NVIDIA could do it offline ("teach" the neural network), then just ship the weights as part of the driver.


Why not then just include AI learning that can use one of the many CPU cores and reads a file with the information, kind of like they did when precooking PhysX interactions? But then even that "couldn't" run on competing hardware that had more accurate rendering (FP32/24 math was broken on older Nvidia cards as a way to get a performance boost: https://www.computerbase.de/2018-07/hdr-benchmarks-amd-radeon-nvidia-geforce/2/). It's a gimmick that doesn't work, and of course they have the ONLY real solution; PR to get people to buy their crap.

Manufacture a problem, engineer a solution, try to profit.
 
Nvidia may see that DLSS improving over time is a bonus, but I see it as proof that the feature is inconsistent and will never be good at launch of a game. At least SMAA, TAA, and MSAA are stable, meaning that they look good at the beginning and you don't have to worry that you're playing with "beta" anti-aliasing.

They should have held off on any DLSS release until it was strictly better. Now we're just seeing how the sausage is made, and it's gross. DLSS is a pipe dream.
 
What is the law’s definition of false advertising?
Generally, false advertising laws say that consumers have proved their case if they show: (a) that the advertising was false or misleading; (b) that the falsity was “material,” often meaning the company lied about something important; (c) the consumer saw the false advertisement; and (d) the consumer relied on the false advertising in purchasing the product or service. Consumers may show reliance by proving they wouldn’t have bought the product or service if not for the false advertising. They may also show they relied on a false advertisement if a false statement caused them to pay more for the company’s product or service than they otherwise would have.

A false advertisement may directly say something that is not true, or is misleading. But an advertisement may also be “false” based on what it doesn’t say. If important information is omitted from an advertisement and the consumer wouldn’t have bought the product or service had they known the truth, the consumer may be able to sue the company for this failure to disclose.
https://www.classlawgroup.com/consumer-protection/false-advertising/laws/

I feel like RTX owners could unite and sue Nvidia for false advertising, maybe?
The beautiful demos without the FPS hit with RTX ON at the RTX launch, no games to test when reviews went live, the unrealistic demo of DLSS with Port Royal, and some articles before launch saying "Just Buy It".

I think that if there was an AMD lawsuit over false Bulldozer chip marketing, then this is even more possible.
Of course I am not a lawyer, but these days everybody can sue over everything lol, who knows? :) Just posting my thoughts.
 
I think that if there was an AMD lawsuit over false Bulldozer chip marketing, then this is even more possible.

Don't worry,
Nvidia's CUDA "core" falls into the same category as the Bulldozer "core" and will get sued too if that one actually goes through xD.
 
They also said with more time, it learns better... so not sure how to take that.

Of course it does; all machine learning works like that by nature, because the longer it trains, the more variations have been tried and the more of the bad ones can be ruled out in future runs. But that doesn't mean that results can be copied over between games. MAYBE between engines, but even those are never identical between games.
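As a toy illustration of "the longer it trains, the better it gets" (a one-parameter model fitted by gradient descent, nothing to do with the actual DLSS network): the error keeps shrinking with more steps, but it levels off at a floor instead of ever becoming exact.

```cpp
#include <cstdio>

// Toy gradient descent: fit y = w * x to a few noisy points and watch the
// squared error fall as training runs longer. Illustrative only.
int main() {
    const double xs[] = {1, 2, 3, 4};
    const double ys[] = {2.1, 3.9, 6.2, 7.8};   // roughly y = 2x
    double w = 0.0;                              // start from a bad guess
    const double lr = 0.01;                      // learning rate

    for (int step = 1; step <= 500; ++step) {
        double grad = 0.0, loss = 0.0;
        for (int i = 0; i < 4; ++i) {
            double err = w * xs[i] - ys[i];
            grad += 2.0 * err * xs[i];
            loss += err * err;
        }
        w -= lr * grad / 4.0;                    // one small correction per step
        if (step % 100 == 0)
            std::printf("step %3d  w=%.4f  loss=%.4f\n", step, w, loss / 4.0);
    }
    return 0;
}
```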

But it's also a very weak excuse to buy time, keep consumers in the dark about the technology, and extract sales out of curiosity and the promise that it will improve. This sounds a whole lot like FineWine to me, and we know that it tastes sour.

Manufacture a problem, engineer a solution, try to profit.

This!!! is what Turing is by design. RTRT is the problem and DLSS is supposed to be the (band-aid) fix until performance is acceptable at native res, which it probably never will be, seeing as we are now only getting a very limited RT implementation and it already slashes FPS in half.

Let. It. Die.
 
This!!! is what Turing is by design. RTRT is the problem
By that statement, you clearly do not understand RTRT nor what it can do for gaming in the future.
and DLSS is supposed to be the (band-aid) fix until performance is acceptable at native res
Wrong. DLSS is supposed to be a replacement for Anti-aliasing, not a fix for anything else.
Let. It. Die.
Where is your head? In the sand? RTRT and DLSS are here to stay. No amount of pointless, meritless whining is going to change that. Let. It. Go.
 
By that statement, you clearly do not understand RTRT nor what it can do for gaming in the future.

Wrong. DLSS is supposed to be a replacement for Anti-aliasing, not a fix for anything else.

Where is your head? In the sand? RTRT and DLSS are here to stay. No amount of pointless, meritless whining is going to change that. Let. It. Go.

It's a case of agree to disagree, isn't it ;)
 
Why not then just include AI learning that can use one of the many CPU cores.
There must be a reason why nobody does machine learning on CPUs these days, don't ya think?

On the other hand, yeah, it looks like nothing but a PR stunt; a shame that TPU is helping them push the FUD by notoriously posting misleading crops.
 
You're ridiculous.

There is nothing misleading about it. Clearly it's a best-case scenario... but come on, stop with your incessant anti-Nvidia toxicity. It's a joke.

Another meatball that goes on ignore here.
 
Would have been nice if they at least included HDR, you know, something everyone can use that isn't a performance-tanking "shiny thing" not ready for prime time.
 
They also said with more time, it learns better... so not sure how to take that.
That's their way of saying that they needed to put the tensor cores to use to make the investment in an RTX card worth it, to be honest. All they did was use machine learning to "fill in the blanks" because that hardware is available for possible acceleration. This is really just nVidia trying to use all the extra cruft that they added to these GPUs, at least that's the vibe I'm getting.
Why not then just include AI learning that can use one of the many CPU cores and reads a file with the information, kind of like they did when precooking PhysX interactions? But then even that "couldn't" run on competing hardware that had more accurate rendering (FP32/24 math was broken on older Nvidia cards as a way to get a performance boost: https://www.computerbase.de/2018-07/hdr-benchmarks-amd-radeon-nvidia-geforce/2/). It's a gimmick that doesn't work, and of course they have the ONLY real solution; PR to get people to buy their crap.
One word: latency. We can already see the latency hit it takes when using the tensor cores (hence why the GPU needs to be under heavy load, otherwise the framerate will be too high to make it worth it). Offloading this stuff to the CPU would make the latency problem worse, at least that's my take on it.
 
That's their way of saying that they needed to put the tensor cores to use to make the investment in an RTX card worth it, to be honest. All they did was use machine learning to "fill in the blanks" because that hardware is available for possible acceleration. This is really just nVidia trying to use all the extra cruft that they added to these GPUs, at least that's the vibe I'm getting.
Right, yep. I get it. Makes sense to use the hardware they put on the card.

I was simply trying to convey that, over time, things should improve. 3DMark is clearly a best-case scenario due to its static content and limited FPS in the benchmark, but I don't feel it is misleading/FUD/a PR stunt. Time will tell how much improvement we will see.
 
I was simply trying to convey that, over time, things should improve. 3DMark is clearly a best-case scenario due to its static content and limited FPS in the benchmark, but I don't feel it is misleading. Time will tell how much improvement we will see.
The problem is that the nature of machine learning is that when it gets something wrong, the machine adjusts. It relies on being wrong some portion of the time; otherwise there would be no purpose to "learning", because a static algorithm could be applied ahead of time without trying to figure out whether it got it right or not. ML is good at filling in the blanks when there isn't a good way to definitively get the right answer, but the reality is that it's going to get some of those blanks wrong; that's the nature of ML.

Let me put it another way: ML is a lot more like lossy compression. You lose some level of accuracy by using it.
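To make the lossy-compression comparison concrete, here's a tiny made-up example (quantizing floats to 8 bits and reconstructing them; not how DLSS works internally, just the same trade of accuracy for cost):

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>

// Quantize a few values to 8 bits and reconstruct them: the round trip is
// cheap and compact, but some accuracy is permanently lost -- the same kind
// of trade-off as any lossy reconstruction.
int main() {
    const float values[] = {0.137f, 0.5f, 0.731f, 0.999f};
    for (float v : values) {
        uint8_t q = static_cast<uint8_t>(std::lround(v * 255.0f));  // compress
        float back = q / 255.0f;                                     // reconstruct
        std::printf("in=%.3f  out=%.3f  error=%+.4f\n", v, back, back - v);
    }
    return 0;
}
```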
 