Monday, July 1st 2019

AMD Patent Shines Raytraced Light on Post-Navi Plans

An AMD patent may have just shown the company's hand regarding its approach to implementing raytracing on graphics cards. The patent, titled "Texture Processor Based Ray Tracing Acceleration Method and System", describes a hybrid software-hardware approach to raytracing, which AMD says improves upon solely hardware-based solutions:
"The hybrid approach (doing fixed function acceleration for a single node of the bounded volume hierarchy (BVH) tree and using a shader unit to schedule the processing) addresses the issues with solely hardware based and/or solely software based solutions. Flexibility is preserved since the shader unit can still control the overall calculation and can bypass the fixed function hardware where needed and still get the performance advantage of the fixed function hardware. In addition, by utilizing the texture processor infrastructure, large buffers for ray storage and BVH caching are eliminated that are typically required in a hardware raytracing solution as the existing vector general purpose register (VGPRs) and texture cache can be used in its place, which substantially saves area and complexity of the hardware solution."
Essentially, AMD will be introducing what it calls a "fixed function ray intersection engine": specialized hardware that handles only BVH intersection. Processing BVH calculations in a stream processor purely in software isn't a pretty option, since execution divergence means that a number of error corrections are required, which makes the process time- and resource-intensive. This fixed function hardware (which is nothing like NVIDIA's RT cores and is much simpler) is added in parallel to the texture filter pipeline in the GPU's texture processor.
The idea is that the fixed-function raytracing hardware can use the texture system's already existing memory buffers instead of having to store raytracing-specific data locally, which would add to die area and chip complexity. Additionally, because a pure hardware solution has no software to allocate resources and schedule work for the fixed-function hardware, it requires an additional hardware scheduler just for RT-specific workloads. AMD claims its implementation bypasses this: the shader processor sends raytracing data down the texture processing path for the fixed-function hardware to process, saving even more die space compared to a "classical" hardware solution.
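The division of labor described above, where the programmable shader keeps control of traversal and the fixed-function unit only answers per-node intersection queries, can be sketched in a few lines. This is purely illustrative Python, not AMD's method: the node layout, function names, and the slab test used for boxes are all my assumptions.

```python
def ray_box_entry(bounds, origin, inv_dir):
    """Slab test for one axis-aligned box; stands in for the fixed-function
    ray intersection engine. Returns the entry distance t, or None on a miss."""
    lo, hi = bounds
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        t1 = (lo[axis] - origin[axis]) * inv_dir[axis]
        t2 = (hi[axis] - origin[axis]) * inv_dir[axis]
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near if t_near <= t_far else None

def traverse(root, origin, direction):
    """Shader-side loop: it owns the traversal stack and decides what to
    visit next, issuing one query per node to the 'engine' above."""
    inv_dir = tuple(1.0 / d for d in direction)  # toy version: assumes nonzero components
    stack, hit_leaves = [root], []
    while stack:
        node = stack.pop()
        t = ray_box_entry(node["bounds"], origin, inv_dir)
        if t is None:
            continue                        # ray misses this whole subtree
        if "children" in node:
            stack.extend(node["children"])  # the shader schedules further work itself
        else:
            hit_leaves.append((t, node["name"]))
    return sorted(hit_leaves)
```

Because the loop, stack, and scheduling all live in "shader" code, nothing in this sketch needs a dedicated hardware scheduler; only `ray_box_entry` would be cast into fixed-function logic.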

It's pretty well-known that both Sony's and Microsoft's next-gen consoles will support raytracing, and will be AMD Navi-based. It's likely these custom chips carry some more of AMD's RDNA architecture's special dust than the amount sprinkled on consumer, PC-level Navi, and these special components certainly pertain (even if not completely) to both consoles' raytracing capabilities. While the patent was submitted a year and a half ago, now is the time to reap the fruits of such a hybrid design. Some highlights of AMD's approach taken from the paper can be seen below, but if you fancy a read of the whole patent, follow the source link.
The system includes a shader, texture processor (TP) and cache, which are interconnected. The TP includes a texture address unit (TA), a texture cache processor (TCP), a filter pipeline unit and a ray intersection engine. The shader sends a texture instruction which contains ray data and a pointer to a bounded volume hierarchy (BVH) node to the TA. The TCP uses an address provided by the TA to fetch BVH node data from the cache. The ray intersection engine performs ray-BVH node type intersection testing using the ray data and the BVH node data. The intersection testing results and indications for BVH traversal are returned to the shader via a texture data return path. The shader reviews the intersection results and the indications to decide how to traverse to the next BVH node.

(...)

A texture processor based ray tracing acceleration method and system are described herein. A fixed function BVH intersection testing and traversal (a common and expensive operation in ray tracers) logic is implemented on texture processors. This enables the performance and power efficiency of the ray tracing to be substantially improved without expending high area and effort costs. High bandwidth paths within the texture processor and shader units that are used for texture processing are reused for BVH intersection testing and traversal. In general, a texture processor receives an instruction from the shader unit that includes ray data and BVH node pointer information. The texture processor fetches the BVH node data from memory using, for example, 16 double word (DW) block loads. The texture processor performs four ray-box intersections and children sorting for box nodes and 1 ray-triangle intersection for triangle nodes. The intersection results are returned to the shader unit.
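The box-node step quoted above (four ray-box tests plus children sorting) boils down to testing each child's bounds and handing the hits back in near-to-far order. Here is a loose Python sketch; the box layout and names are hypothetical, and real hardware would run the four tests in parallel ALUs rather than a loop:

```python
import math

def slab_entry(box, origin, direction):
    """Entry distance of a ray into an axis-aligned box, or math.inf on a miss."""
    lo, hi = box
    t_near, t_far = 0.0, math.inf
    for a in range(3):
        if direction[a] == 0.0:
            if not (lo[a] <= origin[a] <= hi[a]):
                return math.inf            # parallel to this slab and outside it
            continue
        t1 = (lo[a] - origin[a]) / direction[a]
        t2 = (hi[a] - origin[a]) / direction[a]
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near if t_near <= t_far else math.inf

def sort_box_node(children, origin, direction):
    """Test the (up to four) children of a box node and return the indices
    of the hit ones in near-to-far traversal order."""
    hits = [(slab_entry(box, origin, direction), i) for i, box in enumerate(children)]
    return [i for t, i in sorted(hits) if t < math.inf]
```

Flipping the ray direction reverses the order the same children come back in, which is exactly the ordering hint the shader needs for front-to-back traversal.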

In particular, a fixed function ray intersection engine is added in parallel to a texture filter pipeline in a texture processor. This enables the shader unit to issue a texture instruction which contains the ray data (ray origin and ray direction) and a pointer to the BVH node in the BVH tree. The texture processor can fetch the BVH node data from memory and supply both the data from the BVH node and the ray data to the fixed function ray intersection engine. The ray intersection engine looks at the data for the BVH node and determines whether it needs to do ray-box intersection or ray-triangle intersection testing. The ray intersection engine configures its ALUs or compute units accordingly and passes the ray data and BVH node data through the configured internal ALUs or compute units to calculate the intersection results. Based on the results of the intersection testing, a state machine determines how the shader unit should advance its internal stack (traversal stack) and traverse the BVH tree. The state machine can be fixed function or programmable. The intersection testing results and/or a list of node pointers which need to be traversed next (in the order they need to be traversed) are returned to the shader unit using the texture data return path. The shader unit reviews the results of the intersection and the indications received to decide how to traverse to the next node in the BVH tree.
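For triangle nodes, the intersection the reconfigured ALUs would compute is a standard ray-triangle test. The patent excerpt does not name a specific algorithm, so purely as an illustration, here is the widely used Möller-Trumbore test in plain Python:

```python
def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller-Trumbore ray-triangle intersection.
    Returns the hit distance t along the ray, or None on a miss."""
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def cross(a, b): return (a[1] * b[2] - a[2] * b[1],
                             a[2] * b[0] - a[0] * b[2],
                             a[0] * b[1] - a[1] * b[0])

    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:                 # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = sub(origin, v0)
    u = dot(s, p) * inv_det
    if u < 0.0 or u > 1.0:             # outside the triangle in barycentric u
        return None
    q = cross(s, e1)
    v = dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:         # outside in barycentric v
        return None
    t = dot(e2, q) * inv_det
    return t if t > eps else None      # ignore hits behind the origin
```

In the patent's flow, this result (or the box-sorting result) is what travels back to the shader over the texture data return path.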
Sources: AMD Patent Application, via DSO Gaming

55 Comments on AMD Patent Shines Raytraced Light on Post-Navi Plans

#2
londiste
This fixed function hardware (which is nothing like NVIDIA's RT cores and is much simpler)
As far as we know RT Cores in Nvidia hardware do BVH traversal and ray-intersection testing. AMD's patent describes their approach in more detail but both seem to be doing roughly the same thing. Care to highlight the differences in respective implementations?
#3
lexluthermiester
Vayra86 said:
There we go.
? What do you mean?
#4
Vayra86
londiste said:
As far as we know RT Cores in Nvidia hardware do BVH traversal and ray-intersection testing. AMD's patent describes their approach in more detail but both seem to be doing roughly the same thing. Care to highlight the differences in respective implementations?
Possibly the absence of a (dedicated) scheduler?

I'm also not reading much akin to denoising and/or firing thousands of rays.

#5
birdie
If I were an AMD fan I would skip the Radeon RX 5XXX generation altogether since AMD is again dedicating most of its resources to next-gen MS/Sony consoles with HW accelerated Ray Tracing while gamers will receive half-baked products which will be rendered obsolete less than a year from now.

I'm not saying NVIDIA RTX is worth buying - I'm saying if you can wait, do wait. In a year from now we'll have proper RDNA (2.0?) for PC and Turing Refresh/Ampere on 7nm.
#6
Windyson
Navi10 is the transitional product. Is it worth buying?
#7
lynx29
birdie said:
If I were an AMD fan I would skip the Radeon RX 5XXX generation altogether since AMD is again dedicating most of its resources on next-gen MS/Sony consoles with HW accelerated Ray Tracing while gamers will receive half-baked products which will be rendered obsolete less than a year from now.

I'm not saying NVIDIA RTX is worth buying - I'm saying if you can wait, do wait. In a year from now we'll have proper RDNA (2.0?) for PC and Turing Refresh/Ampere on 7nm.
yep I agree
#8
Xzibit
birdie said:
If I were an AMD fan I would skip the Radeon RX 5XXX generation altogether since AMD is again dedicating most of its resources on next-gen MS/Sony consoles with HW accelerated Ray Tracing while gamers will receive half-baked products which will be rendered obsolete less than a year from now.

I'm not saying NVIDIA RTX is worth buying - I'm saying if you can wait, do wait. In a year from now we'll have proper RDNA (2.0?) for PC and Turing Refresh/Ampere on 7nm.
Not worth it. Real-time RT requires way too much power with current methods: 1 spp, with 15% (low) to 40% (ultra) scaling at best with a ray cap, and having to use a denoiser to clean it up.

You'll be waiting a long time for GPU archs to get decent at real-time RT and have it implemented meaningfully. It'll probably be 10+ years before they even improve fidelity in the most minor way and go to 16 spp.
#9
Xuper
This ray tracing is FAKE! You want to feel the REAL thing? Here you go:
In 2001, Alias|Wavefront announced Maya 4. Along with Maya 4 there was an add-on, Mental Ray, which was later bought by NVIDIA. I was quite interested in Mental Ray rendering. I drew some geometry and rendered it in Mental Ray. I was like, wow, my god, it was damn beautiful. After 18 years, I saw the first ray tracing tech in BF/Metro Exodus, and it didn't feel at all like the ray tracing from 18 years ago. You want it? Alright, feel like this:

#10
ZoneDymo
birdie said:
If I were an AMD fan I would skip the Radeon RX 5XXX generation altogether since AMD is again dedicating most of its resources to next-gen MS/Sony consoles with HW accelerated Ray Tracing while gamers will receive half-baked products which will be rendered obsolete less than a year from now.

I'm not saying NVIDIA RTX is worth buying - I'm saying if you can wait, do wait. In a year from now we'll have proper RDNA (2.0?) for PC and Turing Refresh/Ampere on 7nm.
yeah, but imagine if you wait another year! You would get even better, more capable stuff!

Apart from that, I'll admit I don't really understand much of this, but would it not be better to, like, not file a patent on this at all?
If you want this to become the norm, you don't want the competition excluded but rather included, so everyone can jump on this solution and R&D the heck out of it instead of both pursuing their own ideas with game developers having to choose between the two (cough, PhysX, cough).

Or is this merely a case of developers implementing ray tracing and both companies just rendering it in their own way?


Xuper said:
This ray tracing is FAKE! you want to feel REAL ? here you hear :
In 2001 , Alias-Wavefront announced Maya 4.along with Maya 4 , There was add-on and It was Metal ray which later bought by NVIDIA.I was quite interested in Mental ray rendering.I did draw some geometry and rendered in Mental ray.I was like wow, my god.it was damn beautiful.after 18 years , I saw first ray tracing tech in BF/Metro Exodus , I didn't feel exactly like Ray tracing in 18 years ago.You want it ?
Ermm, yeah, it's cutting corners, hence the big fat denoiser.
It's pretty common knowledge that we have had ray tracing/global illumination forever now and that it has been used in 3D art; the problem is, and has always been, rendering it out in real time.
Heck, normal games have their high-res resources "baked" into much lower quality files purely to make it possible for GPUs to run the damn thing. It's all about cutting corners so the hardware can deal with it.
#11
_Flare
Nvidia does it near the shared/L1 and texture caches too. Maybe a bit more HW acceleration, but quite similar.

#12
Juankato1987
I just have one doubt, a pretty big question, if I may:
why do Sony/Microsoft keep using AMD GPUs when, from my POV, NVIDIA has the upper hand
in power efficiency and performance? I mean, Sony could have used GP106 to carve out the PS4 Pro and gotten
better power management and, at the same time, more performance. I've heard the PS4 Pro GPU
is at the level of an RX 470, which has the same TDP as a GTX 1060, and there is a huge difference.


P.S. All I can imagine keeping them on AMD is backwards compatibility.


P.S. P.S. Sorry if this is not the place to ask this kind of question.


#13
Mamya3084
Juankato1987 said:
I just have one Doubt, a pretty big question, if I'm able to do it,
Why Sony/Microsoft keep using AMD gpu's, when from my POV NVidia has the upper hand
in Power Efficiency and Performance, I mean Sony could use GP106 to carve PS4 Pro, and get
better power management, and at same time more perfomance. I've heard of PS4 PRO GPU
to be at level of RX 470, wichi has same TDP with GTX 1060, and there is a huge diference.


P.D. All I can imagine to keep on AMD is backwards compatibility.


P.D. P.D. Sorry if this is not the place to make this kind of questions.



I'd say cost. Nvidia probably charges too much for a custom GPU. That's just a guess.
#14
Fouquin
Juankato1987 said:
I just have one Doubt, a pretty big question, if I'm able to do it,
Why Sony/Microsoft keep using AMD gpu's, when from my POV NVidia has the upper hand
in Power Efficiency and Performance, I mean Sony could use GP106 to carve PS4 Pro, and get
better power management, and at same time more perfomance. I've heard of PS4 PRO GPU
to be at level of RX 470, wichi has same TDP with GTX 1060, and there is a huge diference.


P.D. All I can imagine to keep on AMD is backwards compatibility.


P.D. P.D. Sorry if this is not the place to make this kind of questions.



Because nVidia can't provide an x86 SoC to accompany the GPU, and they especially can't for the same prices.
#15
Fluffmeister
Yeah Intel and AMD want a huge slice of the GPU and CPU pie, but equally they would rather not share their x86 duopoly with anyone else.
#16
Juankato1987
Fouquin said:
Because nVidia can't provide an x86 SoC to accompany the GPU, and they especially can't for the same prices.
That point didn't come to my mind, and it's a very important one.
Consoles use SoCs, and NVIDIA's only SoC is Tegra, which is ARM-based.
Thanks for your answer.
#17
Mephis
Juankato1987 said:
Didn'nt come to my mind that point, and a very important one.
Because consoles uses SOC, and Nvidia only SOC is Tegra with ARM.
Thanks for your answer.
The SoC isn't the only reason they all go with AMD. One of the biggest reasons is both companies' (Sony and MS) past experiences with Nvidia in the console space. Sony used Nvidia in the PS3 and MS used them in the original Xbox. I believe in both instances Nvidia sold essentially off-the-shelf designs (some tweaking, but not much) to both companies. Nvidia also retained all the IP to the designs, meaning that Nvidia got to choose how and when to shrink the chips. This is in contrast to AMD, who sells or licenses the IP to the console maker, who can then work on shrinking the chips themselves.
#18
theoneandonlymrk
londiste said:
As far as we know RT Cores in Nvidia hardware do BVH traversal and ray-intersection testing. AMD's patent describes their approach in more detail but both seem to be doing roughly the same thing. Care to highlight the differences in respective implementations?
RT cores have their own scheduler and cache, like a separate coprocessor; AMD has tighter integration planned. Likely less powerful than Nvidia's brute-force technology, but we'll see. I mean, how many texture mapping units are typically used, 144?
Nowhere near Nvidia's RT core count.
#19
Midland Dog
nv does RTRT: the world: rt is stupid and a waste of die space. amd does rtrt: the world: such innovation, well done. lets not forget people that amscum have put their raster cards at NV's RTRT pricing, 10 bucks says amd will price their rtrt cards even higher
#20
Mamya3084
Midland Dog said:
nv does RTRT: the world: rt is stupid and a waste of die space. amd does rtrt: the world: such innovation, well done. lets not forget people that amscum have put there raster cards at NVs RTRT pricing, 10 bucks says amd will price there rtrt cards even higher
I'm waiting for a card with built in RnT ;)
#21
Totally
Juankato1987 said:
P.D. All I can imagine to keep on AMD is backwards compatibility.
Starting with this gen, backwards compatibility is achieved via software (a VM) instead of hardware.
#22
Darmok N Jalad
AMD will continue to get console design wins until the competition can produce something similar. Nvidia can produce a good GPU and Intel can make a good CPU, but neither can make a good APU. MS and Sony don’t want to contract two companies when one can do the job, especially when that company is perfectly willing to accommodate custom orders.
#23
Zubasa
Midland Dog said:
nv does RTRT: the world: rt is stupid and a waste of die space. amd does rtrt: the world: such innovation, well done. lets not forget people that amscum have put there raster cards at NVs RTRT pricing, 10 bucks says amd will price there rtrt cards even higher
Lol, so what about all those who suddenly care about RTRT on 2060-tier cards that are too slow to run games with RTRT properly anyway?
Last I heard, there are plenty of people upset about first-gen Navi not supporting RTRT, while having no idea what kind of processing power RTRT requires.
#24
sutyi
Midland Dog said:
nv does RTRT: the world: rt is stupid and a waste of die space. amd does rtrt: the world: such innovation, well done. lets not forget people that amscum have put there raster cards at NVs RTRT pricing, 10 bucks says amd will price there rtrt cards even higher
It's kind of like the chicken-and-egg problem. Until game developers and game engines catch up to hybrid-render RT, it is a waste of die space no matter whether it comes from NV or AMD. That "wasted" die space on RT logic can currently barely produce enjoyable framerates without the blurry mess of DLSS.

What we currently have from nVIDIA is first-generation dedicated hardware without enough horsepower to drive RT properly, and without any decent use case for RT in games. Because what we have seen so far is either just reflections that are not really much better than screen-space reflections done properly, or shadows, in which case it mostly looks like over-glorified ambient occlusion while also tanking performance.

I would also like to ask a favor of those who are commenting here. Please choose your words and be polite... don't call people scum and idiots; this is not the wccftech comment section and we would like to keep it that way.
#25
ratirt
If you ask me, AMD is thinking this through and isn't rushing like NV did just to release something new first. AMD may be second, but I think with a better result. Of course we will all see how it pans out in the long term, but AMD seems confident in what they are presenting.