
Microsoft Releases DirectX Raytracing - NVIDIA Volta-based RTX Adds Real-Time Capability

Just to abandon it like PhysX/SLI...
 
Denoise? Sounds like they are trying to optimize what is essentially the upscaling or antialiasing equivalent for raytracing.
Nvidia surely does not mind; raytracing will eat up all the hardware you can throw at it, and they are more than happy to provide that for you.

AMD probably intends to leverage Vega's 16-bit compute capabilities for the same purpose as Nvidia does with Tensor cores.

Having an API, even more so in DirectX, is a very, VERY important milestone for raytracing, even if no significant game/application takes advantage of it this time around.
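A minimal sketch (mine, not from the thread) of what the API milestone means in practice: with DXR folded into D3D12, an engine can simply ask the device whether raytracing is exposed at all and keep its raster path as the fallback. This assumes an existing D3D12 device and a Windows SDK recent enough to carry the DXR feature query.

```cpp
#include <windows.h>
#include <d3d12.h>

// Sketch: query whether a D3D12 device exposes DXR support.
// D3D12_FEATURE_D3D12_OPTIONS5 carries the RaytracingTier capability.
bool SupportsDXR(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5))))
        return false; // older runtime: no DXR query, treat as unsupported

    // NOT_SUPPORTED means the driver offers no raytracing path at all.
    return options5.RaytracingTier != D3D12_RAYTRACING_TIER_NOT_SUPPORTED;
}
```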
 
There's another AI-ish difference between Pascal and Volta SMs: Volta can execute FP32 and INT32 at the same time, while Pascal can't. That reminded me of Nvidia's research paper on an AI denoising filter, so maybe their denoising filter really does need Tensor cores to run in real time:
We implemented the inference (i.e. runtime reconstruction) using fused CUDA kernels and cuDNN 5.11 convolution routines with Winograd optimization. We were able to achieve highly interactive performance on the latest GPUs. For a 720p image (1280×720 pixels), the reconstruction time was 54.9ms on NVIDIA (Pascal) Titan X. The execution time scales linearly with the number of pixels.
The performance of the comparison methods varies considerably. EAW (10.3ms) is fast, while SBF (74.2ms), AAF (211ms), and LBF (1550ms) are slower than our method (54.9ms). The NFOR method has a runtime of 107–121s on Intel i7-7700HQ CPU. Our comparisons are based on the image quality obtainable from a fixed number of input samples, disregarding the performance differences. That said, the performance of our OptiX-based path tracer varies from 70ms in SponzaGlossy to 260ms in SanMiguel for 1 sample/pixel. Thus in this context, until the path tracer becomes substantially faster, it would be more expensive to take another sample/pixel than it is to reconstruct the image using our method.
Furthermore, our method is a convolutional network, and there is strong evidence that the inference of such networks can be accelerated considerably by building custom reduced-precision hardware units for it, e.g., over 100× [Han et al. 2016]. In such a scenario, our method would move from highly interactive speeds to the realtime domain.
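Not from the paper, just my back-of-envelope extrapolation of the quoted numbers: if the 54.9 ms reconstruction at 720p really scales linearly with pixel count, the same Pascal Titan X would land roughly here at common gaming resolutions.

```cpp
#include <cstdio>

int main()
{
    const double ms720p = 54.9;              // quoted figure: Pascal Titan X, 1280x720
    const double px720p = 1280.0 * 720.0;
    const double res[][2] = { {1920, 1080}, {2560, 1440}, {3840, 2160} };

    for (const auto& r : res)
    {
        // Linear-in-pixel-count assumption taken straight from the quote above.
        double ms = ms720p * (r[0] * r[1]) / px720p;
        std::printf("%4.0f x %4.0f: ~%.0f ms just for denoising\n", r[0], r[1], ms);
    }
}
```

That works out to roughly 123 ms at 1080p and 494 ms at 4K before a single ray is traced, which is why the reduced-precision / Tensor-core angle matters for getting this into the real-time domain.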
 
My guess is that even if this works on Volta, it will still bring it to its knees. Historically, none of the newly introduced features ever worked well on the first generation of hardware that supported them. Tessellation, PS 3.0, PS 2.0, vertex shaders, T&L, even 8 bits per channel were too much for the hardware of their time.

So everybody just take it easy; for the time being this is aimed at developers so they can get their feet wet. We'll get this in a usable form in the next generation of GPUs. Or the one after that.
 
Sure, GDC is the Game Developers Conference after all. The rumor, though, is that there will be games with DXR features coming this year; obviously they will be optional, but I don't think we are that far away on the hardware side either.
 
Yeah, enable DXR and enjoy the game at 12 fps. However, this is still invaluable for developers, who can validate the rendering, the drivers and whatnot. For consumers, I think we're looking at a two-year wait at the minimum. Maybe less if you're willing to buy high-end and SLI/CrossFire.

However, ray tracing is worth any wait ;)
 
What's your basis for saying that? Why would NVIDIA create hype and game devs embrace this if it can't create playable framerates?
 
Every next-gen feature has only been nominally playable on the first generation of hardware that supported it.
Anyone old enough may remember T&L (Transform and Lighting) on the very first GeForce.
By the time game developers finally implemented T&L, the original GeForce was already too weak to run any game that used the feature.
And I am sure there are more recent examples along the same lines.
 
I just told you in my previous post: to lay the groundwork for what's to come.
And I also told you what my basis is: that's how all new technologies have been introduced for as long as I can remember.
 
Come on, don't look so surprised. Remember the frame-rate-crippling tessellation from the early days?
 
You mean TruForm? :)
But tessellation is actually a fairly good example of a long adoption period. The tech had been in hardware since 2001, but it only gradually gained traction in the mid-to-late 2000s. Getting tessellation into an API (primarily Direct3D) was a huge step in making that happen.
 
Which parts of it to enable, though? Why go the full monty if you don't have to? I.e., there are already other good methods for AO and shadows, so maybe use raytracing just for reflections if the performance penalty allows it (I wonder how Mirror's Edge would look with raytraced reflections :D)... It's PC after all; the set of settings for adding or removing graphical fidelity is already very large.
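Purely to illustrate the kind of à-la-carte setup being described here (the names are hypothetical, nothing in this sketch comes from DXR itself): a renderer could expose per-effect toggles and keep the cheaper raster techniques as fallbacks.

```cpp
// Hypothetical per-effect toggles; only the passes flagged here would
// dispatch raytracing work, everything else stays on the raster path.
struct RaytracingSettings
{
    bool reflections      = true;   // raytraced reflections (mirrors, glossy surfaces)
    bool ambientOcclusion = false;  // false = keep screen-space AO
    bool shadows          = false;  // false = keep shadow maps
};
```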
 
Yeah, I don't think ray tracing works like that. I don't know the specifics of this implementation, but when I learnt about it, ray tracing was pretty much a scene-wide affair.
 