
NVIDIA Extends DirectX Raytracing (DXR) Support to Many GeForce GTX GPUs

You must be young. Every graphics technology that came before this took years until developers learned how to use it properly; RTX/DXR is no different. Also, no game supports every effect in DX12, DX11 or DX10, so I don't see how supporting everything DXR can do is relevant.
And again, this is not a discussion to have in a thread about RTX being sort of supported on Pascal cards.

It is relevant.
This "RTX being sort of supported on Pascal cards" move has one purpose only: to convince people to "upgrade/side-grade" to RTX cards.
So what is the point of that "upgrade/side-grade" when there isn't even a handful of ray-tracing games out there?

RTX is based on DXR, which is just one part of the DX12 API; RTX itself is not comparable to the fully featured DX10 / DX11 / DX12.
Compare it to Nvidia's last hardware-accelerated marketing gimmick, a.k.a. PhysX, though.
It was the same thing again: hardware PhysX got 4 games back in 2008 and 7 in 2009, then hardware PhysX simply faded out and has now been open-sourced.
Now we have 3 ray-traced games in 7 months. Sound familiar?
 
I was expecting a much bigger gap, to be honest; specialised hardware with optimised routines usually has much more than a 2-3x performance improvement. This RT was optimised for RTX and it only has a 200-300% advantage, depending on the game.

All this data has told me is that if games are made with compute versions of RT optimised for it, then performance can potentially be quite close. I don't understand what Nvidia is doing here; it seems out of desperation they enabled support for the large Pascal userbase to try and entice developers. RT cards may be dead consumer tech within 1-2 generations.
 
I still think RT cores are basically lean Kepler (or similar architecture) cores that are reserved so they don't interfere with the graphics pipeline. If this is the case, GCN using async compute should be able to do very well at DXR without modification.

NVIDIA to date hasn't technically described what the RT cores are.

I guess we'll find out when AMD debuts DXR support.
 
It was the same thing again: hardware PhysX got 4 games back in 2008 and 7 in 2009, then hardware PhysX simply faded out and has now been open-sourced.
There were plenty of PhysX games, and PhysX added to the games' realism. The fact that you didn't get to observe it doesn't change the facts.
Fallout 4 (FleX added in 2017), the Witcher 3 family (2016) and COD Ghosts (2013) really benefited from it - at least I personally liked the effects. I couldn't stop using grenades :)
Was it doomed by being closed source? Maybe. But that doesn't mean it didn't work. Unreal Engine 4 still uses it.

I would like an identical approach for RTX: let me add another card and dedicate it to RTX. That might make me take the bait and upgrade sooner to a single-card solution.
 
I was expecting a much bigger gap, to be honest; specialised hardware with optimised routines usually has much more than a 2-3x performance improvement. This RT was optimised for RTX and it only has a 200-300% advantage, depending on the game.
All this data has told me is that if games are made with compute versions of RT optimised for it, then performance can potentially be quite close.
The current hybrid solution means only a small part of frame rendering uses DXR. Even then, only specific operations are done on the RT cores; data setup and management still happen on the shaders.
Compare results from BF5, which uses little RT, Metro/SoTR, which use a little more, and benchmarks like Port Royal or tech demos that use a lot. The more RT is used, the bigger the performance gap gets.
The other part is that Nvidia chose to put front and center results with DXR Low/Medium and modest resolutions. These paint Pascal in a better light than DXR High/Ultra results would.

For a visual representation of what I am trying to say, look at the Metro Exodus frame graphs from Nvidia's original announcement; the middle part represents the work the RT cores deal with:
https://www.techpowerup.com/253759/...10-and-16-series-gpus-in-april-drivers-update
https://www.techpowerup.com/img/Qr86CtLnbFWCRcfc.jpg
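To make the split concrete, here is a rough, hypothetical sketch of one hybrid frame in plain C++. The function names are made up for illustration (this is not the DXR API); the point is simply which single step the RT cores accelerate, while everything else stays on the shader cores:

```cpp
#include <cstdio>

// Placeholder types, only to make the sketch compile.
struct Scene {};
struct GBuffer {};
struct RayResults {};

// Hypothetical stages of a hybrid frame; names are invented for illustration.
GBuffer rasterize_gbuffer(const Scene&)          { return {}; }  // regular raster work on shader cores
RayResults trace_reflection_rays(const GBuffer&) { return {}; }  // the only step the RT cores accelerate
RayResults denoise(const RayResults& r)          { return r;  }  // filtering pass, back on shader cores
void shade_and_composite(const GBuffer&, const RayResults&) {}   // regular shading/compositing

void render_frame(const Scene& scene) {
    GBuffer gbuf      = rasterize_gbuffer(scene);     // most of the frame time lives here
    RayResults rays   = trace_reflection_rays(gbuf);  // BVH traversal + intersection tests
    RayResults smooth = denoise(rays);                // needed because so few rays are traced
    shade_and_composite(gbuf, smooth);                // final image
}

int main() {
    Scene scene;
    render_frame(scene);
    std::puts("frame done");
}
```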

NVIDIA to date hasn't technically described what the RT cores are.
They have not described the units very precisely. However, it is not quite correct to say we do not know what the RT cores do: they run a couple of raytracing operations implemented in hardware (BVH traversal and ray/triangle intersection tests). Anandtech's article has a pretty good overview:
https://www.anandtech.com/show/13282/nvidia-turing-architecture-deep-dive/5
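For a sense of what one of those hardwired operations looks like, here is a plain C++ version of a standard ray/triangle test (Möller-Trumbore). This is only an illustration of the kind of math the RT cores are said to run in fixed-function hardware, not NVIDIA's actual implementation:

```cpp
#include <cmath>
#include <cstdio>
#include <optional>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns the distance t along the ray if it hits the triangle, otherwise nothing.
std::optional<float> ray_triangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2) {
    const float kEps = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < kEps) return std::nullopt;   // ray is parallel to the triangle
    float inv = 1.0f / det;
    Vec3 s = sub(orig, v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return std::nullopt;    // outside the triangle
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return std::nullopt;
    float t = dot(e2, q) * inv;
    if (t > kEps) return t;                           // hit in front of the ray origin
    return std::nullopt;
}

int main() {
    // Ray straight down the Z axis against a triangle in the z = 5 plane.
    auto t = ray_triangle({0, 0, 0}, {0, 0, 1}, {-1, -1, 5}, {1, -1, 5}, {0, 1, 5});
    std::printf("hit distance: %f\n", t ? *t : -1.0f);  // prints 5.0
}
```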
 
They have not described the units very precisely. However, it is not quite correct to say we do not know what the RT cores do: they run a couple of raytracing operations implemented in hardware (BVH traversal and ray/triangle intersection tests). Anandtech's article has a pretty good overview:
https://www.anandtech.com/show/13282/nvidia-turing-architecture-deep-dive/5
Something like a Kepler core could be doing everything the "RT core" does.

Remember, RTX has an *extremely* limited capability to ray trace: it complements existing rendering techniques in games rather than replacing them.
 
If AMD came up with something, let's say "FreeRay", which runs on and is optimized for typical graphics cards instead of RTX cards, would that benefit RTX card sales?

Yes it would, because it's literally impossible to do this in a performant way on standard non-tensor GPUs.

As RTX cards are the only GPUs with tensor cores right now, it would run like shit, driving upgrades to RTX.

That is his point in a nutshell.

Nvidia does not own DXR.
If AMD could come up with their own solution to optimize DXR without dedicated "cores", RTX cards would become utterly pointless.

They would need to add tensor cores first.
 
Tensor cores are quite useless for DXR. NVIDIA is using tensor cores to upsample resolution to compensate for the framerate loss caused by DXR. If the RT cores weren't rubbish and/or the GPU could do async properly (like GCN can) so it could raytrace without impacting framerate, DLSS would be useless. A proper raytracing ASIC could be the solution... assuming DXR is a problem worth solving, which I don't believe it is. There would have to be a monumental jump in compute capability (as in, a ton of cheap performance to waste) to warrant pursuing DXR as a useful technology in games.
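As a crude illustration of that trade-off: render fewer pixels, then upscale. DLSS does the reconstruction with a trained network on the Tensor cores; the nearest-neighbour blow-up below is only a stand-in to show where the recovered frame time comes from (shading a quarter of the pixels), not how DLSS actually works:

```cpp
#include <cstdio>
#include <vector>

// Blow a low-resolution "frame" up to 2x the width and height by reusing
// each low-res sample four times. A real reconstruction is far smarter;
// the point is only that three quarters of the pixels were never shaded.
std::vector<float> upscale_2x_nearest(const std::vector<float>& src, int w, int h) {
    std::vector<float> dst(w * 2 * h * 2);
    for (int y = 0; y < h * 2; ++y)
        for (int x = 0; x < w * 2; ++x)
            dst[y * w * 2 + x] = src[(y / 2) * w + (x / 2)];
    return dst;
}

int main() {
    // A 2x2 "frame" rendered at low resolution, blown up to 4x4.
    std::vector<float> low = {0.0f, 1.0f,
                              2.0f, 3.0f};
    std::vector<float> high = upscale_2x_nearest(low, 2, 2);
    for (int y = 0; y < 4; ++y) {
        for (int x = 0; x < 4; ++x) std::printf("%.0f ", high[y * 4 + x]);
        std::printf("\n");
    }
}
```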
 
Yes it would, because it's literally impossible to do this in a performant way on standard non-tensor GPUs.

As RTX cards are the only GPUs with tensor cores right now, it would run like shit, driving upgrades to RTX.
That is his point in a nutshell.
They would need to add tensor cores first.

The Tensor cores in RTX cards don't do DXR.

Nvidia described Tensor cores as "specialized execution units designed specifically for performing the tensor/matrix operations that are the core compute function used in Deep Learning".

They have nothing to do with Ray Tracing.
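For reference, the matrix operation that description refers to is essentially a small fused multiply-accumulate, roughly D = A x B + C on 4x4 tiles (FP16 on the hardware, plain float in this sketch). This toy version is only meant to show why such a unit is aimed at deep learning rather than at ray/box or ray/triangle tests:

```cpp
#include <array>
#include <cstdio>

using Mat4 = std::array<std::array<float, 4>, 4>;

// One matrix multiply-accumulate: D = A * B + C.
Mat4 mma(const Mat4& a, const Mat4& b, const Mat4& c) {
    Mat4 d{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            float acc = c[i][j];                 // start from the accumulator tile
            for (int k = 0; k < 4; ++k)
                acc += a[i][k] * b[k][j];        // dot product of row and column
            d[i][j] = acc;
        }
    return d;
}

int main() {
    Mat4 identity{}, ones{}, zero{};
    for (int i = 0; i < 4; ++i) identity[i][i] = 1.0f;
    for (auto& row : ones) row.fill(1.0f);
    Mat4 d = mma(identity, ones, zero);
    std::printf("d[0][0] = %f\n", d[0][0]);  // 1.0, as expected for I * ones + 0
}
```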
 
If raytracing hadn't been demonstrated running on a Radeon card (and rather well) by a third party, none of this would be happening.
Full "Damage Control" mode.
 
They have nothing to do with Ray Tracing.

They do the "denoising" that enables raytracing to be possible on present hardware (we can't possibly push enough raw rays).

Thus, they have everything to do with it.
 
They do the "denoising" that enables raytracing to be possible on present hardware (we can't possibly push enough raw rays).

Thus, they have everything to do with it.

lol, you are mixing things up.
RTRT doesn't require a denoiser to work.
The denoiser is an after-effect applied to the final image.
 
lol, you are mixing things up.
RTRT doesn't require a denoiser to work.
The denoiser is an after-effect applied to the final image.
Sure, it's not necessary... if you are able to ray trace all the rays. But the current generation of RTX cards is not.
They trace only a couple of rays per pixel and mix some textures in, hence they have to apply denoising to make the result look good.
I guess you didn't read the excellent article linked above? Here's a quote:
Essentially, this style of ‘hybrid rendering’ is a lot less raytracing than one might imagine from the marketing material. Perhaps a blunt way to generalize might be: real time raytracing in Turing typically means only certain objects are being rendered with certain raytraced graphical effects, using a minimal amount of rays per pixel and/or only raytracing secondary rays, and using a lot of denoising filtering; anything more would affect performance too much.
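A toy illustration of why so few rays per pixel forces that denoising step: estimate one pixel's lighting with a handful of random samples and the per-pixel result jumps all over the place, while many samples converge. The "light visible half the time" setup and all the numbers below are invented purely for the demonstration:

```cpp
#include <cstdio>
#include <random>

// Monte Carlo estimate of one pixel's lighting. Each "ray" is a random sample
// of a light that is visible half of the time, so the true value is 0.5.
float estimate_pixel(int samples_per_pixel, std::mt19937& rng) {
    std::bernoulli_distribution visible(0.5);   // stand-in for a shadow-ray result
    float sum = 0.0f;
    for (int s = 0; s < samples_per_pixel; ++s)
        sum += visible(rng) ? 1.0f : 0.0f;
    return sum / samples_per_pixel;
}

int main() {
    std::mt19937 rng(42);
    const int spp_options[] = {1, 2, 16, 256};
    for (int spp : spp_options) {
        std::printf("%3d spp:", spp);
        for (int pixel = 0; pixel < 6; ++pixel)
            std::printf(" %.2f", estimate_pixel(spp, rng));
        std::printf("\n");   // low-spp rows vary wildly; high-spp rows hover near 0.5
    }
}
```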
 
Sure, it's not necessary... if you are able to ray trace all the rays. But the current generation of RTX cards is not.
They trace only a couple of rays per pixel and mix some textures in, hence they have to apply denoising to make the result look good.

Yes, that is exactly the point.

Nvidia offers an AI-based denoiser powered by Tensor cores.
It will denoise any image it is given, whether it is an in-game image or a photo.

If it is just the denoiser that matters, then it is the denoiser, NOT the Tensor cores, that matters.
If AMD could come up with an efficient denoising method without any dedicated hardware, Tensor cores would also become utterly pointless.
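That does not seem far-fetched: simple spatial denoisers run fine on ordinary shaders or even a CPU. Below is a minimal bilateral-style 1D filter, only a sketch (real games use far better filters, e.g. SVGF-style ones on compute shaders), to show that denoising is a filtering problem rather than something tied to Tensor cores:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Weighted average over a small window: samples similar to the centre get a
// large weight, samples across a strong edge get almost none, so noise is
// smoothed while edges survive.
std::vector<float> denoise_1d(const std::vector<float>& in, int radius, float sigma_range) {
    std::vector<float> out(in.size());
    for (int i = 0; i < (int)in.size(); ++i) {
        float sum = 0.0f, wsum = 0.0f;
        for (int k = -radius; k <= radius; ++k) {
            int j = i + k;
            if (j < 0 || j >= (int)in.size()) continue;
            float diff = in[j] - in[i];
            float w = std::exp(-(diff * diff) / (2.0f * sigma_range * sigma_range));
            sum += w * in[j];
            wsum += w;
        }
        out[i] = sum / wsum;   // wsum >= 1 because the centre sample always contributes
    }
    return out;
}

int main() {
    // Noisy signal with a hard edge in the middle (values near 0, then near 1).
    std::vector<float> noisy = {0.1f, 0.0f, 0.2f, 0.05f, 1.1f, 0.9f, 1.05f, 1.0f};
    std::vector<float> clean = denoise_1d(noisy, 2, 0.3f);
    for (float v : clean) std::printf("%.2f ", v);  // left half stays ~0, right half ~1
    std::printf("\n");
}
```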
 
Something like a Kepler core could be doing everything the "RT core" does.

Remember, RTX has an *extremely* limited capability to ray trace: it complements existing rendering techniques in games rather than replacing them.
Well, but what about "fully RT Quake 3"?
Of course, the models rendered there are simple, but still.

This boils down to "the RT cores do what the DXR API is about", namely intersection testing.
Uh, who would have thought.
 
Yes, that is exactly the point.

Nvidia offers an AI-based denoiser powered by Tensor cores.
It will denoise any image it is given, whether it is an in-game image or a photo.

If it is just the denoiser that matters, then it is the denoiser, NOT the Tensor cores, that matters.
If AMD could come up with an efficient denoising method without any dedicated hardware, Tensor cores would also become utterly pointless.
So what happens with Pascal? I'm guessing the lack of Tensor cores isn't the (biggest) reason why RT tanks performance on that thing :wtf:
 
So what happens with Pascal? I'm guessing the lack of Tensor cores isn't the (biggest) reason why RT tanks performance on that thing :wtf:
Only the leather jacket himself knows.
Without any comparison data from the red team, we have no idea whether the Pascal cards received optimization for RTRT or no optimization at all.

After all, Nvidia naturally wants to sell more Turing cards; optimizing old Pascal cards for Turing's selling feature is the exact opposite of that.
 
Well, but what about "fully RT Quake 3"?
Of course, the models rendered there are simple, but still.
Judging by AAA games, publishers aren't willing to sacrifice so much for raytracing. On that note, if raytracing were more accessible, indie developers would probably use it because a lot of them go for a minimalist graphics style anyway.
 
Judging by AAA games, publishers aren't willing to sacrifice so much for raytracing. On that note, if raytracing were more accessible, indie developers would probably use it because a lot of them go for a minimalist graphics style anyway.
Funny you should mention that. The PC game Abducted is adding raytracing, for those who have the hardware, in one of its soon-to-be-released early access patches. The game has been in Early Access for three years and is almost ready. The dev announced a couple of weeks ago that RT would be added before final release. This game, btw, is not using minimalist graphics.
 
Then I have to assume that NVIDIA probably gave the devs RTX cards on the condition that they add RTX support.

It's way too early for indie devs to be adding raytracing as a cost-saving measure.
 
Then I have to assume that NVIDIA probably gave the devs RTX cards on the condition that they add RTX support.

It's way too early for indie devs to be adding raytracing as a cost-saving measure.
I think you’re right. I don’t think it is a cost saving measure. I think they are just trying to offer it as a nice perk. They are very responsive and I think they really just want to make the best product they can.
 
Something like a Kepler core could be doing everything the "RT core" does.
Remember, RTX has an *extremely* limited capability to ray trace: it complements existing rendering techniques in games rather than replacing them.
It could, but it would be rather inefficient at it. A Turing SM is faster than a Pascal SM in practically every aspect; Pascal is faster than Maxwell, which is faster than Kepler. If Nvidia thought RT was best done on good old shaders, they would simply add more shader units and not bother with RT Cores.
Tensor cores are quite useless for DXR. NVIDIA is using tensor cores to upsample resolution to compensate for the framerate loss caused by DXR.
I am not too sure this is exactly the case here. True, they have no real purpose for the RT calculations themselves. However, Nvidia has claimed (at least initially) that their denoising algorithm runs on the Tensor cores. It is definitely not the only denoising algorithm and maybe/likely not the best.
Judging by AAA games, publishers aren't willing to sacrifice so much for raytracing. On that note, if raytracing were more accessible, indie developers would probably use it because a lot of them go for a minimalist graphics style anyway.
This is exactly what Nvidia has been going after. They say there are potential cost savings from reduced time in the workflow of creating a game: fewer workarounds, less artist/designer work. Some developers have supported that claim, so they might actually have a point. Making raytracing more accessible is exactly what DXR support for GTX cards is about.
 
It could, but it would be rather inefficient at it. A Turing SM is faster than a Pascal SM in practically every aspect; Pascal is faster than Maxwell, which is faster than Kepler. If Nvidia thought RT was best done on good old shaders, they would simply add more shader units and not bother with RT Cores.
All I know is that the organization of RT cores in Turing doesn't make sense.

This is exactly what Nvidia has been going after. They say there are potential cost savings from reduced time in the workflow of creating a game: fewer workarounds, less artist/designer work. Some developers have supported that claim, so they might actually have a point. Making raytracing more accessible is exactly what DXR support for GTX cards is about.
That is only true if the game is coded from the ground up to exclusively use raytracing. If any time is put into traditional lighting/rendering techniques, then raytracing is an added cost.
 