
Microsoft DirectX Raytracing 1.2 and Neural Rendering Brings up to 10x Speedup for AMD, Intel, and NVIDIA GPUs

How did this "Microsoft DirectX Raytracing 1.2 and Neural Rendering Brings up to 10x Speedup for AMD, Intel, and NVIDIA GPUs" turn into this:

AMD users need not apply. Maybe in 2027 AMD will release compatible hardware.

I think I may go back to reddit.
 
How did this "Microsoft DirectX Raytracing 1.2 and Neural Rendering Brings up to 10x Speedup for AMD, Intel, and NVIDIA GPUs" turn into this:

AMD users need not apply. Maybe in 2027 AMD will release compatible hardware.

I think I may go back to reddit.
Whoa there big fella, thinking and Reddit don't mix. :laugh:
 
Whoa there big fella, thinking and Reddit don't mix. :laugh:
I just don't get how it is always the exact same users. At least on Reddit, anyone who continues gets overwhelmed.
 
Why not both?

Why? You mean making DXR, neural rendering and other bits a mandatory part and fashioning that into a new version?

Why? What makes DX12 unoptimized? What bugs and slowness do you mean?

Guys, DX12 is an API. The way it is, or needs to be, used is separate from the API itself. If you are talking about games, it is not the API that is buggy - in most cases; there have been some relatively minor bugs, obviously - but the game or application the developer made. DX12 is a comparatively low-level API, same as Vulkan, which means the API and the IHVs' driver implementations of it will not hold your hand the way older APIs like OpenGL or DX11 did. While there is more room for optimization, there is also more room to shoot yourself in the foot.
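To illustrate the hand-holding difference: in DX11 the driver tracks what state a resource is in for you, while in DX12 you issue state-transition barriers yourself and must know the current state. A toy sketch of that idea (plain Python with made-up names, not real D3D12/Vulkan code):

```python
# Conceptual sketch only: in a low-level API the application transitions
# resources between states explicitly. A missed or wrong barrier is a bug
# the driver no longer catches for you.

class Resource:
    def __init__(self, name, state="COMMON"):
        self.name = name
        self.state = state

def transition(resource, before, after):
    """Explicit barrier: the caller must know the resource's current state."""
    if resource.state != before:
        # On real hardware this is undefined behaviour (the foot-gun);
        # here we raise so the mistake is visible.
        raise RuntimeError(
            f"{resource.name}: expected {before}, was {resource.state}")
    resource.state = after

tex = Resource("scene_color")
transition(tex, "COMMON", "RENDER_TARGET")           # draw into it
transition(tex, "RENDER_TARGET", "SHADER_RESOURCE")  # then sample from it
print(tex.state)  # SHADER_RESOURCE
```

In DX11 the equivalent bookkeeping happened inside the driver on every bind, which is exactly the per-call overhead DX12 trades away for explicit control.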
Good post, thanks
 
How did this "Microsoft DirectX Raytracing 1.2 and Neural Rendering Brings up to 10x Speedup for AMD, Intel, and NVIDIA GPUs" turn into this:

AMD users need not apply. Maybe in 2027 AMD will release compatible hardware.

I think I may go back to reddit.
So the tech that increases performance is Opacity Micromaps (OMM) and Shader Execution Reordering (SER). NVIDIA supports both starting with the 40 series; OMM is supported by all RTX-series cards. AMD and Intel support neither, though AMD has a scheduler tool that might mimic SER; it's unconfirmed whether it can at the moment, according to Tom's Hardware. Hence why someone posted the need-not-apply remark.

Link to original post with article from tomshardware. https://www.techpowerup.com/forums/...el-and-nvidia-gpus.334455/page-2#post-5481187
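For anyone wondering what SER actually buys you: ray-traced shading diverges badly when neighbouring GPU threads hit different materials, and SER lets the hardware regroup threads by the shader they are about to run. A small simulation of the idea (not vendor code; names are made up):

```python
# Conceptual sketch of what Shader Execution Reordering does: sort hits by
# the material/shader they will invoke so adjacent threads run coherent work.

def reorder_hits(hits):
    """hits: list of (ray_id, material_id). Returns hits grouped by material."""
    return sorted(hits, key=lambda h: h[1])

def coherence(hits):
    """Count adjacent pairs sharing a material (higher = less divergence)."""
    return sum(1 for a, b in zip(hits, hits[1:]) if a[1] == b[1])

hits = [(0, "glass"), (1, "metal"), (2, "glass"), (3, "metal"), (4, "glass")]
print(coherence(hits))                # 0: every neighbour diverges
print(coherence(reorder_hits(hits))) # 3: materials now run back-to-back
```

The real feature does this in hardware between the traversal and shading stages; the win scales with how divergent the scene's materials are, which is why it is a "particular operations" speed-up rather than a flat multiplier.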
 
How long can they keep applying makeup to the dead pig that is DX12? There needs to be a cleanup and a move on to DX13, or 14 for the superstitious.
What would they be cleaning up exactly? The name itself isn't relevant.
 
How did this "Microsoft DirectX Raytracing 1.2 and Neural Rendering Brings up to 10x Speedup for AMD, Intel, and NVIDIA GPUs" turn into this:

AMD users need not apply. Maybe in 2027 AMD will release compatible hardware.

I think I may go back to reddit.

What I posted is 100% factual. You’ll just have to wait for AMD to get these features or buy Nvidia.
 
Whoever told you this was incorrect. Just Google it and you will find no mention of it.
All Nvidia GPUs dating back to Turing (GeForce RTX 20-series) support Opacity Micromaps (OMM), so these graphics cards can potentially experience a performance boost once game developers implement them into their titles. Intel said its next-generation Celestial (Xe3) GPUs will also support OMM.
If you had read the post and seen the link to Tom's Hardware, you would see THEY SAID it, so your claim that there is no mention is incorrect.
 
How did this "Microsoft DirectX Raytracing 1.2 and Neural Rendering Brings up to 10x Speedup for AMD, Intel, and NVIDIA GPUs" turn into this:

AMD users need not apply. Maybe in 2027 AMD will release compatible hardware.

I think I may go back to reddit.
Team-ball nonsense. "My choice of GPU vendor is better than your choice of GPU vendor, your product is inferior and you should feel bad about buying it". My ignore list keeps growing unfortunately :(

On to the topic, I don't know a lot about the underlying plumbing here but I saw a comment elsewhere that said the speed-up is for particular operations, not an overall 10x improvement? So things WILL get faster but it won't be a night-and-day difference. Is that the case or was I misinformed?
 
If you had read the post and seen the link to Tom's Hardware, you would see THEY SAID it, so your claim that there is no mention is incorrect.
I just edited, I had thought you said all RDNA GPUs support OMM.

apologies

"My choice of GPU vendor is better than your choice of GPU vendor, your product is inferior and you should feel bad about buying it"
Please point out where I wrote this.

Oh, I didn’t. Derp.
 
I just wish MS would get rid of the single-thread-only aspects of DX12 (as is also the case with DX11) so that multi-chip GPUs could be implemented with more linear increases in performance at reasonable cost**. Though I rarely buy the top-performance GPUs anymore, I think this is the only way to break out of low double-digit increases in raster performance. AMD made some good progress with RDNA3, after much research. Awesome graphics and high frame rates would be nice. Of course, this would probably break many current games (just a guess) - so we soldier on, laden down with golden handcuffs.

* Not on the CPU side, where multithreading is available (though often not implemented in games due to problems with thread locks and syncing).

** From what I read concerning problems with the development of the large multi-chip RDNA4 top-performance GPU. It's just a hard problem to solve.
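Worth noting that the CPU side of this is already parallel: DX12 lets you record command lists on many worker threads and then submit them in order on one thread; it's the single GPU-facing submission/present path that stays serial. A toy sketch of that recording pattern (hypothetical names, plain Python, not a real graphics API):

```python
# Conceptual sketch: DX12-style command lists can be *recorded* in parallel,
# one list per worker thread with no shared mutable state, then submitted to
# the single queue in a fixed, deterministic order.

from concurrent.futures import ThreadPoolExecutor

def record_command_list(thread_id, draws):
    # Each worker builds its own independent list; no locks needed.
    return [f"t{thread_id}:draw{d}" for d in range(draws)]

def build_frame(num_threads=4, draws_per_thread=3):
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        lists = list(pool.map(
            lambda i: record_command_list(i, draws_per_thread),
            range(num_threads)))
    # Submission order is deterministic regardless of thread finish order.
    return [cmd for cl in lists for cmd in cl]

frame = build_frame()
print(frame[0], frame[-1])  # t0:draw0 t3:draw2
```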
 
I just don't get how it is always the exact same users. At least on Reddit, anyone who continues gets overwhelmed.
Oh, that's easy. Reddit is heavily botted. You say anything outside the corporate orthodoxy there and the bots downvote you to hell.
 
I just wish MS would get rid of the single-thread-only aspects of DX12 (as is also the case with DX11) so that multi-chip GPUs could be implemented with more linear increases in performance at reasonable cost**. Though I rarely buy the top-performance GPUs anymore, I think this is the only way to break out of low double-digit increases in raster performance. AMD made some good progress with RDNA3, after much research. Awesome graphics and high frame rates would be nice. Of course, this would probably break many current games (just a guess) - so we soldier on, laden down with golden handcuffs.

* Not on the CPU side, where multithreading is available (though often not implemented in games due to problems with thread locks and syncing).

** From what I read concerning problems with the development of the large multi-chip RDNA4 top-performance GPU. It's just a hard problem to solve.
The issues with RDNA 3 weren’t Microsoft’s.
 
Oh, that's easy. Reddit is heavily botted. You say anything outside the corporate orthodoxy there and the bots downvote you to hell.
Yes, but a) you are way off-topic, and b) all social media sites are.
 
What I posted is 100% factual. You’ll just have to wait for AMD to get these features or buy Nvidia.

It looks like this slide from the RDNA4 presentation proves you 100% wrong.

https://www.techpowerup.com/review/...echnical-deep-dive/images/architecture-10.jpg

According to AMD's RDNA4 slides, the 9070 series does indeed support OMM and SER. Most reviews don't highlight the advances in the RT architecture, but it's definitely confirmed that RDNA4 complies with DXR 1.2. Even RDNA3 had some OMM tech, according to the slide.

Here's another article that shows how to use SER on RDNA4.
https://markaicode.com/amd-rdna4-ra...ization/#performance-monitoring-and-profiling

Don't know which current games use SER, but even if some do, they're likely using a code path specific to Nvidia RTX cards. Newer games that adhere to the DXR 1.2 spec may well improve RT performance on RDNA4.
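That's the right way to think about it: an engine would query feature support at startup and pick a code path per GPU rather than hard-coding a vendor. A hypothetical sketch of that selection logic (made-up capability names, not a real D3D12 query):

```python
# Conceptual sketch: choose a ray-tracing code path from a capability query
# instead of a vendor ID. "caps" stands in for whatever the driver reports.

def pick_rt_path(caps):
    """caps: dict of feature name -> bool from a (hypothetical) driver query."""
    if caps.get("OMM") and caps.get("SER"):
        return "dxr12_fast"       # micromaps + reordering, the full DXR 1.2 path
    if caps.get("OMM"):
        return "dxr12_omm_only"   # e.g. older RTX cards per the thread above
    return "dxr11_fallback"       # hardware without either feature

print(pick_rt_path({"OMM": True, "SER": True}))   # dxr12_fast
print(pick_rt_path({"OMM": True, "SER": False}))  # dxr12_omm_only
print(pick_rt_path({}))                           # dxr11_fallback
```

With capability-based dispatch like this, RDNA4 advertising OMM and SER support would land on the fast path automatically, no Nvidia-specific branch required.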
 
So the tech that increases performance is Opacity Micromaps (OMM) and Shader Execution Reordering (SER). NVIDIA supports both starting with the 40 series; OMM is supported by all RTX-series cards. AMD and Intel support neither, though AMD has a scheduler tool that might mimic SER; it's unconfirmed whether it can at the moment, according to Tom's Hardware. Hence why someone posted the need-not-apply remark.

Link to original post with article from tomshardware. https://www.techpowerup.com/forums/...el-and-nvidia-gpus.334455/page-2#post-5481187
"RDNA 4 introduces new shader reordering similar to NVIDIA's Shader Execution Reordering (SER) for the GeForce RTX 40 Series"

You're wrong.

NVIDIA supports OMM and SER via NVAPI; thus, NVIDIA is already leveraging its advantage, e.g. in Cyberpunk 2077.
 