Friday, December 13th 2019

Ray Tracing and Variable-Rate Shading Design Goals for AMD RDNA2

Hardware-accelerated ray tracing and variable-rate shading will be the design focal points for AMD's next-generation RDNA2 graphics architecture. Microsoft's reveal of its Xbox Series X console attributed both features to AMD's "next generation RDNA" architecture (which logically happens to be RDNA2). The Xbox Series X uses a semi-custom SoC that features CPU cores based on the "Zen 2" microarchitecture and a GPU based on RDNA2. The SoC is likely fabricated on TSMC's 7 nm EUV node, for which the RDNA2 graphics architecture is optimized. This would mean an optical shrink of "Zen 2" to 7 nm EUV. Besides the SoC that powers the Xbox Series X, AMD is expected to leverage 7 nm EUV in 2020 for its RDNA2 discrete GPUs and for CPU chiplets based on its "Zen 3" microarchitecture.

Variable-rate shading (VRS) is an API-level feature that lets GPUs conserve resources by shading certain areas of a scene at a lower rate than others, without a perceptible difference to the viewer. Microsoft developed two tiers of VRS for its DirectX 12 API: tier-1 is currently supported by NVIDIA's "Turing" and Intel's Gen11 architectures, while tier-2 is supported only by "Turing." The current RDNA architecture supports neither tier. Hardware-accelerated ray tracing is the cornerstone of NVIDIA's "Turing" RTX 20-series graphics cards, and AMD is catching up to it. Microsoft has already standardized it on the software side with the DXR (DirectX Raytracing) API. A combination of VRS and dynamic render resolution will be crucial for next-gen consoles to achieve playability at 4K, and to even boast of being 8K-capable.
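To make the tier distinction concrete, here is a minimal sketch of the tier-1 path as exposed by Direct3D 12, where a single coarser shading rate is set for subsequent draws; the `ShadeAtHalfRate` helper and its parameters are illustrative, not from any shipping renderer:

```cpp
#include <d3d12.h>

// Minimal tier-1 VRS sketch: query support, then shade one pixel per 2x2
// block for subsequent draws. 'device' and 'cmdList' are assumed to come
// from the application's existing renderer.
void ShadeAtHalfRate(ID3D12Device* device, ID3D12GraphicsCommandList5* cmdList)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 opts = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6,
                                              &opts, sizeof(opts))) &&
        opts.VariableShadingRateTier >= D3D12_VARIABLE_SHADING_RATE_TIER_1)
    {
        // Tier-1 only allows this coarse, per-command-list granularity.
        cmdList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, nullptr);
    }
}
```

Tier-2 builds on this with per-primitive rates, rate combiners, and a screen-space shading-rate image (set via RSSetShadingRateImage), which lets an engine lower the shading rate only where the viewer is unlikely to notice it, such as motion-blurred or peripheral regions.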

119 Comments on Ray Tracing and Variable-Rate Shading Design Goals for AMD RDNA2

#101
cucker tarlson
Pushing 4K cards would be way more profitable for Nvidia than RTX, dunno what you're talking about, Vayra
#102
Vayra86
londiste: @efikkan, @Vayra86, I think neither of you is wrong. 4K adoption is not going all that well, and with the generally expected rate of 30% more performance per generation, there are a couple of generations to go until 4K 60 FPS is easy. GPU vendors do need a new incentive of some sort to push the envelope.

Ray tracing has been kind of coming for a while. The theory is there, the research is there, but performance simply has not been there for anything real-time. Now Nvidia has pushed the issue to the brink of being usable. Someone has to push new things for them to become implemented and widespread enough, especially when it comes to hardware features.

@Vayra86, just look at how lighting and shadowing methods have evolved: shadow maps, dynamic stencil shadows, soft/hard shadows, and ever more complex methods for these. Similarly and closely related: GI methods. The latest wave of GI methods, SVOGI (which CryEngine uses as the fallback in Neon Noir, and from which their RT solution evolved) and Nvidia's VXGI, are both voxel-based and come with a very noticeable performance hit. In principle, both get closer and closer to ray tracing. Also keep in mind that rasterization will apply several different methods on top of each other, complicating things.
Thank you once again for your nuanced and informative posts :respect:
#103
INSTG8R
Vanguard Beta Tester
cucker tarlson: Pushing 4K cards would be way more profitable for Nvidia than RTX, dunno what you're talking about, Vayra
Yet ASUS just released a 27" 1080p 265 Hz IPS. I don't disagree that 4K is the more "reasonable" goal to shoot for, but adoption is still low. Eventually, though, we'll be at the point where 4K and RT are gonna have to be a thing for the "bleeding edge" crowd. I mean, I'm part of the HDR minority, another tech that's still hit and miss.
#104
cucker tarlson
INSTG8R: Yet ASUS just released a 27" 1080p 265 Hz IPS. I don't disagree that 4K is the more "reasonable" goal to shoot for, but adoption is still low. Eventually, though, we'll be at the point where 4K and RT are gonna have to be a thing for the "bleeding edge" crowd. I mean, I'm part of the HDR minority, another tech that's still hit and miss.
4K is waaaaaay more popular. You've got 50 4K options for every one at 240 Hz.
#105
INSTG8R
Vanguard Beta Tester
cucker tarlson: 4K is waaaaaay more popular. You've got 50 4K options for every one at 240 Hz.
I'm not saying there aren't options for 4K, but the entry barrier is still high when you factor in the GPU needed for the 4K 60 Hz experience. 1080p is still the "norm" when you consider the market overall, with the 1060 as the "median" of the GPU user base.
#106
efikkan
cucker tarlson: Pushing 4K cards would be way more profitable for Nvidia than RTX
Imagine if Nvidia had used all* the die space of the RT cores for just more SMs; it would obviously be more profitable for Nvidia, and people would of course get more excited, probably boosting demand higher than supply. Instead, Nvidia opted for a more "balanced" approach (in their opinion), adding some more performance in general while also adding RT acceleration to push the technology forward. In the short term that's not a financially "smart" move, but it certainly is in the long term.

*) They would have to scale it back a little to avoid excessive power consumption.
#107
londiste
efikkan: Imagine if Nvidia had used all* the die space of the RT cores for just more SMs; it would obviously be more profitable for Nvidia, and people would of course get more excited, probably boosting demand higher than supply. Instead, Nvidia opted for a more "balanced" approach (in their opinion), adding some more performance in general while also adding RT acceleration to push the technology forward. In the short term that's not a financially "smart" move, but it certainly is in the long term.
RT Cores and Tensor cores are ~10% of the die space; RT Cores are the smaller part of that, in the range of 3%. That would not be enough for a noticeable general performance boost (see the napkin math below). There is a case to be made about Tensor cores, but a large part of that area seems to be double-used for FP16 and possibly the concurrent FP+INT execution. Power concerns come on top of that.

AMD's RDNA2 seems to follow the same general idea as Nvidia: a small specialized ASIC added to the existing pipeline, reportedly next to or inside the TMUs.
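For scale, some napkin math on those percentages; the TU102 SM count is public, but the assumption that SMs occupy roughly half the die is an illustrative guess, not a measurement:

```cpp
#include <cstdio>

int main()
{
    // ~3% die share for RT Cores is the estimate quoted above; the 50%
    // die share for all SMs combined is a hypothetical guess.
    const double rtAreaPct = 3.0;
    const double smAreaPct = 50.0;
    const int totalSMs = 72; // full TU102 (the RTX 2080 Ti ships with 68)

    // Extra SMs that would fit in the RT Cores' area, all else being equal.
    double extraSMs = totalSMs * (rtAreaPct / smAreaPct);
    std::printf("Reclaiming RT-Core area buys ~%.1f extra SMs (~%.0f%% more shader hardware)\n",
                extraSMs, 100.0 * extraSMs / totalSMs);
    return 0;
}
```

Under those assumptions that works out to roughly four extra SMs, about 6% more shader hardware, which would indeed be hard to notice in games.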
#108
cucker tarlson
I hope they improve the RT/Tensor core counts. Either go big or go home, Nvidia.
Control was beautiful, but it still played like a demo with DLSS at 55-62 FPS. I want a steady 70 FPS at native resolution with a $600 price tag max.
#109
INSTG8R
Vanguard Beta Tester
cucker tarlson: I hope they improve the RT/Tensor core counts. Either go big or go home, Nvidia.
Control was beautiful, but it still played like a demo with DLSS at 55-62 FPS. I want a steady 70 FPS at native resolution with a $600 price tag max.
Of course it will get better and more viable as newer generations of cards move forward, like all new tech: SM 3.0, PhysX, the evolution of DX, tessellation. All of those needed a generation to become usable and another one to become features we now take for granted, standard settings whose performance impact we don't even think about anymore.
#110
cucker tarlson
INSTG8R: Of course it will get better and more viable as newer generations of cards move forward, like all new tech: SM 3.0, PhysX, the evolution of DX, tessellation. All of those needed a generation to become usable and another one to become features we now take for granted, standard settings whose performance impact we don't even think about anymore.
Yup.
Remember how expensive the 8800 GTX was?
It was a breakthrough card nevertheless.
Now we've got the RTX 2060 running circles around the 1080 Ti in ray tracing.
#111
efikkan
cucker tarlson: Yup.
Remember how expensive the 8800 GTX was?
It was a breakthrough card nevertheless.
Now we've got the RTX 2060 running circles around the 1080 Ti in ray tracing.
Ray tracing will improve a lot in a couple of generations.
The GTX 1080 Ti received a lot of harsh criticism when it launched; then Turing came along, and all of a sudden the GTX 1080 Ti was excellent and Turing bad.
Also, AMD criticized ray tracing when Turing launched, and now they may be launching something similar next year (if rumors are to be believed).
As always, the perspective changes to fit the ever-changing narrative.
#112
ratirt
efikkan: Ray tracing will improve a lot in a couple of generations.
The GTX 1080 Ti received a lot of harsh criticism when it launched; then Turing came along, and all of a sudden the GTX 1080 Ti was excellent and Turing bad.
Also, AMD criticized ray tracing when Turing launched, and now they may be launching something similar next year (if rumors are to be believed).
As always, the perspective changes to fit the ever-changing narrative.
I don't recall AMD as a company criticizing ray tracing. Maybe they criticized Turing for its RT performance, although I can't recall that either.
#113
londiste
efikkan: Also, AMD criticized ray tracing when Turing launched, and now they may be launching something similar next year (if rumors are to be believed).
AMD didn't really criticize RT. They were pretty quiet about it, and the prominent (or maybe only) real statement they made is that they will add RT capability in hardware when it makes sense across the entire range.

From rumors and leaks, RDNA2 places the RT ASIC into the TMUs. This could lead to an interesting situation where AMD has more RT units, assuming each unit has capabilities similar to Nvidia's RT Cores (which, based on AMD's patent, is very likely). The RX 5700 XT has 160 TMUs, while the RTX 2080 Super has 48 SMs/RT Cores (Turing has one RT Core per SM).
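Putting those counts side by side, with the heavy caveat that a per-TMU RT unit matching a Turing RT Core one-for-one is pure speculation stacked on rumors:

```cpp
#include <cstdio>

int main()
{
    // Unit counts from the post above. Navi 10 is RDNA1 and has no RT
    // hardware; its TMU count only hints at the scale an RDNA2 part could
    // reach if every TMU really carried an intersection unit.
    const int navi10Tmus = 160;   // RX 5700 XT
    const int tu104RtCores = 48;  // RTX 2080 Super: one RT Core per SM

    std::printf("Hypothetical per-TMU RT units: %d vs. %d Turing RT Cores (%.1fx)\n",
                navi10Tmus, tu104RtCores,
                static_cast<double>(navi10Tmus) / tu104RtCores);
    return 0;
}
```

Raw unit count says nothing about per-unit throughput or how BVH traversal is scheduled, which is exactly the caveat raised in the next post.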
#114
efikkan
londiste: From rumors and leaks, RDNA2 places the RT ASIC into the TMUs. This could lead to an interesting situation where AMD has more RT units, assuming each unit has capabilities similar to Nvidia's RT Cores (which, based on AMD's patent, is very likely). The RX 5700 XT has 160 TMUs, while the RTX 2080 Super has 48 SMs/RT Cores (Turing has one RT Core per SM).
That will be very hard to assess accurately with nothing but speculation to go on. AMD's solution here will be more "improvised"; it may end up successful, but it could easily go the other way as well. So I don't dare claim that it will be more capable than Turing at this point.

Also keep in mind that "RDNA2" will mostly compete with the next generation from Nvidia.
#115
delshay
In the meantime, another game, "AMID EVIL", has gone RTX beta. Follow the instructions on Steam for how to access and download it. You need to own the base game to access the RTX features.
#116
INSTG8R
Vanguard Beta Tester
delshay: In the meantime, another game, "AMID EVIL", has gone RTX beta. Follow the instructions on Steam for how to access and download it. You need to own the base game to access the RTX features.
Elaborate? :confused:
#119
delshay
INSTG8R: Thank you. I read it like a typo, "AMD EVIL" :roll:
When I first saw this game, that's how I read it too, so you're not alone. This game is awesome, and I have completed it.