
Ray Tracing and Variable-Rate Shading Design Goals for AMD RDNA2

VRS is great; it's one of the more exciting new GPU features, really. A simple, tangible performance boost from reducing quality in the parts of the scene you care less about in the grand scheme of things. Why anyone would view that as a bad trade-off is beyond me. It's about utilizing resources where they can be put to best use, plain and simple. Horsepower means very little when you have no traction, which is exactly why drag cars do burnouts before they race: to warm those f*ckers up a little so they grip the road when they goose it, aka 3, 2, 1, punch it.
VRS is a technology that I've wanted for 10 years, but not as a way to reduce details in parts of the scene, only to improve select parts. I think this technology has great potential, but like with many other advanced techniques, it needs to be utilized right, otherwise the end result is bad.

Let's say you have a scene with a nice landscape in the lower half of the screen, and a sky (just a skydome or skybox) in the upper half. You might think that rendering the upper half with far fewer samples is a good way to optimize away wasteful samples. But the truth is that low-detail areas like skies are very simple to render in the first place, so you will probably end up with a very blurry area and marginal performance savings.

To make matters worse, this will probably only increase the frame rate variance (if not applied very carefully). If you have a first-person game walking a landscape, looking straight up or down will result in very high frame rates, while looking straight ahead into an open landscape will give low performance. Even if you don't do any particularly fancy LoD algorithms, the GPU is already pretty good at culling off-screen geometry, and I know from experience that trying to optimize away any "unnecessary" detail can actually increase this frame rate variance even more.
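To make the per-draw mechanism being discussed concrete, here is a minimal hedged sketch of how the sky/landscape split above could be expressed with Direct3D 12's Tier 1 VRS. RecordSkyDraws and RecordTerrainDraws are hypothetical placeholders for an engine's own draw submission, not real API calls; only RSSetShadingRate and the shading-rate constants come from the D3D12 API.

```cpp
#include <d3d12.h>

// Hypothetical placeholders for the engine's own draw submission.
void RecordSkyDraws(ID3D12GraphicsCommandList5* cmd);
void RecordTerrainDraws(ID3D12GraphicsCommandList5* cmd);

// Shade the low-detail skybox at one sample per 2x2 pixel block, then restore
// full-rate shading for the geometry the player actually looks at.
void RecordSceneWithCoarseSky(ID3D12GraphicsCommandList5* cmd)
{
    cmd->RSSetShadingRate(D3D12_SHADING_RATE_2X2, nullptr); // coarse rate, default combiners
    RecordSkyDraws(cmd);

    cmd->RSSetShadingRate(D3D12_SHADING_RATE_1X1, nullptr); // back to full rate
    RecordTerrainDraws(cmd);
}
```

Whether coarse shading on a cheap skybox actually saves anything is exactly the question raised above; the sketch only shows how little code the mechanism itself requires.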
 
Well, it's not like Nvidia already does it in the Wolfenstein games.
And why would that be blurry? It's not an image reconstruction technique.
In the VRS video linked earlier it does get blurry when the resolution is dropped far enough. So your mention of image reconstruction techniques here is simply beside the point.
I'm more worried about the implementation, so that it won't end up like NV's DLSS. Nvidia does it? You mean Nvidia's using it.
 
No Radeon card has much value until they get this done.
At a time when less than 1% of games support it, and even those that do make a 2080 Ti sweat, yeah, I mean, a must-have, no-feature-no-buy feature.
Obviously.

Because once we get to full-throttle "non-RT cards are not supported" RT some time in 2025, today's cards will absolutely be adequate to run it.
Apparently.

But, but, real-time ray tracing is a gimmick!

— certain fanbois

Low-ray-count ray tracing, producing hellishly noisy images that get heavily "denoised" to produce a handful of effects in otherwise traditionally rasterized scenes... is not a gimmick?

Because, let me guess, it has "RT" and "real time" in it?

Clearly, only fanbois would disagree with it!

Exciting times!
 
At a time when less than 1% of games support it, and even those that do make a 2080 Ti sweat, yeah, I mean, a must-have, no-feature-no-buy feature.
Obviously.

Because once we get to full-throttle "non-RT cards are not supported" RT some time in 2025, today's cards will absolutely be adequate to run it.
Apparently.



Low-ray-count ray tracing, producing hellishly noisy images that get heavily "denoised" to produce a handful of effects in otherwise traditionally rasterized scenes... is not a gimmick?

Because, let me guess, it has "RT" and "real time" in it?

Clearly, only fanbois would disagree with it!

Exciting times!
Well, better go back to buying cards that can't do any of it.
Isn't it good to have a choice...
About that 1%... look how many triple-A games out now or announced for 2020 have RTX support.
Blurry and noisy? Depends.
This is RTX + DLSS in Control:

[Screenshot: Control, RTX + DLSS, 2019-09-20]
 
But, but, real-time ray tracing is a gimmick!

— certain fanbois

[Image: Gene Wilder as Willy Wonka meme]


Do tell, what has your user experience/utilisation of RTRT been over the course of owning your RTX card? Do you believe you will fully take advantage of its RTRT capabilities before replacing it with a next-gen Nvidia/AMD card that can actually handle it?

P.S. If it's a gimmick at present, there's nothing wrong with saying so.
 

Do tell, what has your user experience/utilisation of RTRT been over the course of owning your RTX card? Do you believe you will fully take advantage of its RTRT capabilities before replacing it with a next-gen Nvidia/AMD card that can actually handle it?

P.S. If it's a gimmick at present, there's nothing wrong with saying so.

So, err, dumb question. RT was originally developed by a software company, not Nvidia. Nvidia took it and developed a hardware method to (attempt to) make it have an "acceptable impact" and release it to market earlier than said software company's solution. I feel that was a brilliant marketing move, but a poor end-user solution. RTRT would be great if it had little-to-no real-world impact on framerates and widespread adoption in the game development world. If there is a software solution, and hardware that can "add this in" without dropping framerates below targets such as 60 FPS for 60 Hz gaming, 144 FPS for 144 Hz gaming, or 240 FPS for 240 Hz gaming, why would we care whether the card has RT cores or not? Now, are we going to get there in the current or coming generation? Almost certainly not. But since development is currently relying on those RT cores for its programming models, aren't we actually delaying broader adoption by insisting on getting it 'now', IF we ever do get there?

VRS, on the other hand, is a potential framerate improvement, leaving us in a "better" state, provided it doesn't significantly impact image quality. Why would anyone NOT want this, IF it works as advertised?
 
This is very interesting to me... software-based ray tracing. Pretty good results, IMO.
There is no good reason to be very keen on software-based raytracing (running on generic shaders, as in this case).

Until Crytek implements DXR there is no direct comparison. So far, Neon Noir running on a Vega 56/64 is about on par with it running on a GTX 1080. Anything DXR cards can do (mainly a considerably larger number of rays) is on top of that. If you want a comparison, check the differences between GTX and RTX cards in Battlefield V DXR - the RTX 2060 should be generally on par with a GTX 1080, so it is a direct enough comparison. It employs DXR for the same effect as Neon Noir employs its RT shaders.

In the VRS video linked earlier it does get blurry when the resolution is dropped far enough. So your mention of image reconstruction techniques here is simply beside the point.
I'm more worried about the implementation, so that it won't end up like NV's DLSS. Nvidia does it? You mean Nvidia's using it.
VRS is in DX12 and both Nvidia and Intel have this capability deployed. I believe Nvidia also has OpenGL and Vulkan extensions available for it, not sure about Intel.
VRS is not an image reconstruction technique. It does reduce image quality in parts of the image, but the option of using VRS is purely and entirely up to the developer. When used well - in parts of the screen that do not benefit from more detail, and with quality lowered to an acceptable degree - it provides a small but measurable performance boost for a minimal image quality penalty.
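For context, a minimal sketch (assuming the Windows 10 SDK's d3d12.h and an already-created ID3D12Device; QueryVrsTier is just an illustrative helper name) of how a renderer could check which VRS tier, if any, the hardware exposes before relying on it:

```cpp
#include <d3d12.h>

// Returns the VRS tier the device reports: NOT_SUPPORTED, TIER_1 (per-draw
// rates) or TIER_2 (adds per-primitive rates and a screen-space shading-rate image).
D3D12_VARIABLE_SHADING_RATE_TIER QueryVrsTier(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 opts6 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6,
                                           &opts6, sizeof(opts6))))
    {
        // Older runtimes don't know about OPTIONS6 at all; treat as unsupported.
        return D3D12_VARIABLE_SHADING_RATE_TIER_NOT_SUPPORTED;
    }
    return opts6.VariableShadingRateTier;
}
```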
 
So, err, dumb question. RT was originally developed by a software company, not Nvidia. Nvidia took it and developed a hardware method to (attempt to) make it have an "acceptable impact" and release it to market earlier than said software company's solution. I feel that was a brilliant marketing move, but a poor end-user solution. RTRT would be great if it had little-to-no real-world impact on framerates and widespread adoption in the game development world. If there is a software solution, and hardware that can "add this in" without dropping framerates below targets such as 60 FPS for 60 Hz gaming, 144 FPS for 144 Hz gaming, or 240 FPS for 240 Hz gaming, why would we care whether the card has RT cores or not? Now, are we going to get there in the current or coming generation? Almost certainly not. But since development is currently relying on those RT cores for its programming models, aren't we actually delaying broader adoption by insisting on getting it 'now', IF we ever do get there?

VRS, on the other hand, is a potential framerate improvement, leaving us in a "better" state, provided it doesn't significantly impact image quality. Why would anyone NOT want this, IF it works as advertised?

This goes back to the comment I quoted. I'm probably one of those certain people claimed to have said "RT is a gimmick", but that's out of context. How Nvidia is implementing it is a gimmick; no matter how you cut it, with the current batch of cards it's not practical or useful, and therefore a gimmick. RT itself was never thought or said to be superfluous by me personally. With that out of the way, it will probably get there eventually, but I feel that had Nvidia not snatched it up we would have seen acceptable RT sooner, because the collaborative element has been removed: from that point on, working on RT means either developing your own from scratch or going through Nvidia and using their hardware.
 
I would be surprised if they have a larger Navi ready, and if they did, the TDP would probably be in the 300-350W range.
Also don't forget that RX 5700 was renamed "last minute", even some official photos displayed "RX 690".
I'd take a 350W card that can pull 4K 60 FPS in Ubisoft's poorly optimized yearly AAAs. From either company.
Back in the day I owned the Asus Mars II, which was the Bitchin'fast!3D2000 personified. It was a 365W TDP, triple-slot, 3x 8-pin monstrosity, and yes, it had over 20000 BungholioMarks.

I had no problem with heat, and it lasted over 3 years. Good times.
 
I do love these pissing contests. Fermi famously got no love from the ATi/AMD crowd for being hot and power hungry, but it was clearly the faster, more forward-looking tech. And in the grand scheme of things, AMD cards over recent years have made Fermi look kind to the environment!

Turing packs all this tech already, and can only improve once Nvidia goes down to 7nm too. RTRT may realistically remain years off, but things like VRS coming to next-gen consoles can certainly offer some nice benefits as detail and resolution go up. AMD is playing catch-up, but there's no need to get too butthurt, people.
 
VRS is a technology that I've wanted for 10 years, but not as a way to reduce details in parts of the scene, only to improve select parts. I think this technology has great potential, but like with many other advanced techniques, it needs to be utilized right, otherwise the end result is bad.

Let's say you have a scene with a nice landscape in the lower half of the screen, and a sky (just a skydome or skybox) in the upper half. You might think that rendering the upper half with far fewer samples is a good way to optimize away wasteful samples. But the truth is that low-detail areas like skies are very simple to render in the first place, so you will probably end up with a very blurry area and marginal performance savings.

To make matters worse, this will probably only increase the frame rate variance (if not applied very carefully). If you have a first-person game walking a landscape, looking straight up or down will result in very high frame rates, while looking straight ahead into an open landscape will give low performance. Even if you don't do any particularly fancy LoD algorithms, the GPU is already pretty good at culling off-screen geometry, and I know from experience that trying to optimize away any "unnecessary" detail can actually increase this frame rate variance even more.
I think you used a poor example, because it's unlikely that scenario would be applied, or only sparingly. As far as frame time variance is concerned, the hardware remains the same in either case; it's simply prioritizing render tasks a bit differently within the GPU. If anything, it could be used to improve frame time variance in worst-case scenarios by using VRS to selectively switch a few things to a lower quality when frame rates dip below certain FPS trigger thresholds, until they normalize. Sure, it could get used poorly, but it could get used very well at the same time, and having it as an option doesn't hurt.

Here's an example: the GPU recognizes the frame rate is below 60 FPS, or say below 30 FPS, which is even worse since input lag gets really crappy really quickly below that point. AA stays at whatever high-quality setting you determine for 75% of the screen, and the other 25% gets set lower when the trigger point kicks in, until the frame rate normalizes. Frame rate variance is improved in exchange for a bit of temporary image quality reduction, but in the grand scheme perhaps a good trade-off given the scenario described. That could be applied to more than AA, like shading, lighting, and geometry as well as other stuff. It boils down to how it gets used and applied, but VRS has the promise of improving both quality and performance in variable ways. It just depends how it gets injected into the render pipeline.
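As a rough illustration of that trigger idea (purely a toy sketch with no real graphics API; the VrsTrigger name and the threshold values are made up for illustration, not anything AMD or Microsoft specify), the decision logic could look like this:

```cpp
#include <cstdio>

enum class ShadingQuality { Full, Reduced };

// Drop to a coarser shading rate for the less important part of the screen when
// the measured frame rate falls below a threshold; return to full quality once
// it recovers. The gap between the two thresholds (hysteresis) avoids flip-flopping.
class VrsTrigger {
public:
    explicit VrsTrigger(double lowFps = 30.0, double recoverFps = 36.0)
        : lowFps_(lowFps), recoverFps_(recoverFps) {}

    // Called once per frame with the measured frame time in milliseconds.
    ShadingQuality Update(double frameTimeMs) {
        const double fps = 1000.0 / frameTimeMs;
        if (quality_ == ShadingQuality::Full && fps < lowFps_)
            quality_ = ShadingQuality::Reduced;   // trigger point kicks in
        else if (quality_ == ShadingQuality::Reduced && fps > recoverFps_)
            quality_ = ShadingQuality::Full;      // frame rate has normalized
        return quality_;
    }

private:
    double lowFps_, recoverFps_;
    ShadingQuality quality_ = ShadingQuality::Full;
};

int main() {
    VrsTrigger trigger;
    const double frameTimesMs[] = {16.7, 25.0, 40.0, 38.0, 26.0, 16.7};
    for (double ft : frameTimesMs) {
        const bool reduced = trigger.Update(ft) == ShadingQuality::Reduced;
        std::printf("%5.1f ms -> %s\n", ft,
                    reduced ? "reduced rate on 25% of screen" : "full rate everywhere");
    }
}
```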

In the VRS video linked earlier it does get blurry when the resolution is dropped far enough. So your mention of image reconstruction techniques here is simply beside the point.
I'm more worried about the implementation, so that it won't end up like NV's DLSS. Nvidia does it? You mean Nvidia's using it.
Plenty of video streams do variable-rate adjustments similar to that based on download speed when there's traffic congestion. I honestly wouldn't mind a bit of selective DLSS smearing temporarily if my FPS dipped below a frame rate threshold I set; it beats choppy frame rates and sloppy input lag.

Well, better go back to buying cards that can't do any of it.
Isn't it good to have a choice...
About that 1%... look how many triple-A games out now or announced for 2020 have RTX support.
Blurry and noisy? Depends.
This is RTX + DLSS in Control:

That scene looks like it's been post-processed with a Charmin Ultrasoft soap-opera effect. I can't in good faith say I'm fond of the look. My vision isn't even 20/20 (I'm blind as a bat without glasses), but that would drive me nuts personally, so I'd hate to think what people with good vision make of that dull mess. It practically looks like an mClassic upscaling a 720p console game; in terms of texture detail it's horrible. Quite frankly the quality simply isn't there. Bottom line, you can't make a Blu-ray quality video out of a DVD, which is also true of all the sacrifices made to do ray tracing in real time: lowering the number of light passes and denoising just to run RTRT poorly at an unsteady frame rate.


Do tell, what has your user experience/utilisation of RTRT been over the course of owning your RTX card? Do you believe you will fully take advantage of its RTRT capabilities before replacing it with a next-gen Nvidia/AMD card that can actually handle it?

P.S. If it's a gimmick at present, there's nothing wrong with saying so.
Let's not kid ourselves here; a next-generation Nvidia/AMD card won't be tricking anyone's eyes into thinking RTRT Crysis is real life either.

Hell yeah, how dare I!?!?!?!
Oh wait:



I guess it is too much to expect users on a tech forum to have a basic understanding of the underlying technology...
Speak of the devil, or close enough. Actually, that demo was one of the better examples of RTRT-type effects, aside from that staged Star Wars demo Nvidia did that didn't materialize into actual playable games; go figure, who would've guessed it. Crytek did a pretty decent job, though definitely not perfect, simply because single-GPU hardware won't get us to GoPro Hero realism at this point in time; we've still got a ways to go before we reach that point.

This goes back to the comment I quoted. I'm probably one of those certain people claimed to have said "RT is a gimmick", but that's out of context. How Nvidia is implementing it is a gimmick; no matter how you cut it, with the current batch of cards it's not practical or useful, and therefore a gimmick. RT itself was never thought or said to be superfluous by me personally. With that out of the way, it will probably get there eventually, but I feel that had Nvidia not snatched it up we would have seen acceptable RT sooner, because the collaborative element has been removed: from that point on, working on RT means either developing your own from scratch or going through Nvidia and using their hardware.
You make a bit of a good point: RTRT could be viewed as a bit of a preemptive **** block attempt by Nvidia, tying ray tracing to its hardware with developers in a way that will ultimately slow the progression of ray tracing. No one wants another HairWorks or PhysX scenario down the road for ray tracing, but that could be right where things are headed. Luckily AMD is in the next-gen consoles, so we might avoid that scenario; a good chess-move follow-up by Lisa Su. I'm sure RTRT will improve in the coming years and heat up further, but at this stage it's safe to call it a bit of a gimmick given how it both looks and performs; neither is optimal, and both need tons more polish before people consider them high quality and highly desirable. I don't think too many people bought RTX cards for RTRT alone, but rather for both performance/efficiency and RTX features, which include RTRT among other tech like mesh shading and DLSS.

Turing packs all this tech already, and can only improve once Nvidia goes down to 7nm too. RTRT may realistically remain years off, but things like VRS coming to next-gen consoles can certainly offer some nice benefits as detail and resolution go up.
Pretty much agreed. Nvidia moving to 7nm will certainly bring further advancements, though so will AMD moving to 7nm EUV and increasing its GPU division's R&D budget over time as it continues to pay down debt from the ATI merger of years past. Intel's CPU stumbles will only benefit AMD, especially given its higher focus on the CPU side at present. AMD is definitely in a good position to shift gears and focus, or switch from 2WD to 4WD, at any point between CPU and GPU, so that's a good thing; its worst days appear to be behind it. AMD has its work cut out ahead, especially on the GPU side of things, but I think they'll inch their way forward and regain market share from Nvidia over the coming years. I do believe a stronger R&D budget and less debt will make a big difference in their overall competitiveness. Intel's stumbles should help too, and those security issues could hurt Intel a lot; they won't just be forgotten given their scale, which keeps getting deeper.
 
That scene looks like it's been post-processed with a Charmin Ultrasoft soap-opera effect. I can't in good faith say I'm fond of the look. My vision isn't even 20/20 (I'm blind as a bat without glasses), but that would drive me nuts personally, so I'd hate to think what people with good vision make of that dull mess. It practically looks like an mClassic upscaling a 720p console game; in terms of texture detail it's horrible. Quite frankly the quality simply isn't there. Bottom line, you can't make a Blu-ray quality video out of a DVD, which is also true of all the sacrifices made to do ray tracing in real time: lowering the number of light passes and denoising just to run RTRT poorly at an unsteady frame rate.
:roll:
Well, thanks for the elaborate description.
I don't know why it looks like that to you; maybe you do need glasses after all.


You make a bit of a good point: RTRT could be viewed as a bit of a preemptive **** block attempt by Nvidia, tying ray tracing to its hardware with developers in a way that will ultimately slow the progression of ray tracing.
Luckily AMD is in the next-gen consoles, so we might avoid that scenario; a good chess-move follow-up by Lisa Su.

:roll:
 
I'd take a 350W card that can pull 4K 60 FPS in Ubisoft's poorly optimized yearly AAAs. From either company.
Would you take a 350W card when you can get a 250W card with the same performance?
350W is pushing it in terms of cooling it without terrible noise levels.

I think you used a poor example, because it's unlikely that scenario would be applied, or only sparingly. As far as frame time variance is concerned, the hardware remains the same in either case; it's simply prioritizing render tasks a bit differently within the GPU.
I think you missed the point. The hardware is of course the same, the variance is in the workload.

If anything, it could be used to improve frame time variance in worst-case scenarios by using VRS to selectively switch a few things to a lower quality when frame rates dip below certain FPS trigger thresholds, until they normalize. Sure, it could get used poorly, but it could get used very well at the same time, and having it as an option doesn't hurt.
It is certainly possible to build an algorithm that uses performance metrics from previous frames and dynamically adjusts LoD on the fly; I have even looked into implementing something like that once. The issue is that you have to rely on the performance metrics of the last ~10 frames, so any adjustment will happen after the performance has changed, and will for this reason not reduce stutter. The best approach is to reduce the variance preemptively.

I stand by my claim that it can be used poorly, resulting in blurry scenes and in worst case flickering or artifacts.
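To illustrate the lag problem with reactive adjustment (a toy sketch only, not anyone's shipping heuristic; FrameTimeWindow is an illustrative name): a rolling average of the last ~10 frame times barely registers a sudden workload spike, so a controller keyed off that average reacts after the stutter has already been felt.

```cpp
#include <cstddef>
#include <cstdio>
#include <deque>
#include <numeric>

// Keeps the last N frame times and reports their average.
class FrameTimeWindow {
public:
    explicit FrameTimeWindow(std::size_t capacity = 10) : capacity_(capacity) {}

    void Push(double frameTimeMs) {
        samples_.push_back(frameTimeMs);
        if (samples_.size() > capacity_) samples_.pop_front();
    }

    double AverageMs() const {
        if (samples_.empty()) return 0.0;
        return std::accumulate(samples_.begin(), samples_.end(), 0.0) / samples_.size();
    }

private:
    std::size_t capacity_;
    std::deque<double> samples_;
};

int main() {
    FrameTimeWindow window;
    // Ten calm frames, then a sudden heavy frame (e.g. the camera swings to an open vista).
    for (int i = 0; i < 10; ++i) window.Push(16.7);
    window.Push(33.0);
    // The average moves only slightly, so a LoD/VRS adjustment keyed off it arrives
    // a frame (or several) after the hitch the player already noticed.
    std::printf("latest frame: 33.0 ms, rolling average: %.1f ms\n", window.AverageMs());
}
```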
 
I told you, you can pay the same and be happy to get less if you wish; you have a choice.
People who insist RTX "is a must" (I won't repeat "less than 1% of games, yadayada") talk about having a choice.
Ironic.

Let me elaborate: not only is RTX a gimmick at this point (a pathetic number of rays, capable of producing noisy shadow/reflection-like effects with heavy de-noising), it is absolutely not clear how this area will develop. Whatever it will be, as with FreeSync, it won't be NV alone deciding it; heck, AMD alone could, as, wait for it:
1) AMD commands 35% of the GPU market (and is poised to grab more), but also
2) 100% of the console market (Switch is in no way capable of RT-ing anyhow) which is expected to roll out next gen consoles with GPU at 2070-2080-ish levels

Last, but not least, the screenshot you have shared makes me smile. Looks like a generic adventure game to me.
Yeah, devs were able to gimmick reflections/shadows way before RT-ing; it's just about the effort to implement it (it must be much easier with RT).

Would you take a 350W card when you can get a 250W card with the same performance?
It will depend on the price.
 
People who insist RTX "is a must" (I won't repeat "less than 1% of games, yadayada") talk about having a choice.
Ironic.

Let me elaborate: not only is RTX a gimmick at this point (a pathetic number of rays, capable of producing noisy shadow/reflection-like effects with heavy de-noising), it is absolutely not clear how this area will develop. Whatever it will be, as with FreeSync, it won't be NV alone deciding it; heck, AMD alone could, as, wait for it:
1) AMD commands 35% of the GPU market (and is poised to grab more), but also
2) 100% of the console market (Switch is in no way capable of RT-ing anyhow) which is expected to roll out next gen consoles with GPU at 2070-2080-ish levels

Last, but not least, the screenshot you have shared makes me smile. Looks like a generic adventure game to me.
Yeah, devs were able to gimmick reflections/shadows way before RT-ing; it's just about the effort to implement it (it must be much easier with RT).


It will depend on the price.
Funny how you talk like that all the time while AMD reveals their goal for 2020 is to match Nvidia's 2018. :laugh:
 
I think you missed the point. The hardware is of course the same, the variance is in the workload.


It is certainly possible to build an algorithm that uses performance metrics from previous frames and dynamically adjusts LoD on the fly; I have even looked into implementing something like that once. The issue is that you have to rely on the performance metrics of the last ~10 frames, so any adjustment will happen after the performance has changed, and will for this reason not reduce stutter. The best approach is to reduce the variance preemptively.

I stand by my claim that it can be used poorly, resulting in blurry scenes and in worst case flickering or artifacts.
So long as VRS has gears it can shift through, hopefully ones that subdivide evenly with the standard 24 FPS animation frame rate, it should be a great option with little downside. It could be used poorly, but so can RTRT and other things, so that's nothing new.
 
Fixing their drivers is probably more important, idk.
 
Plenty of video streams do variable-rate adjustments similar to that based on download speed when there's traffic congestion. I honestly wouldn't mind a bit of selective DLSS smearing temporarily if my FPS dipped below a frame rate threshold I set; it beats choppy frame rates and sloppy input lag.
I'd rather skip RT and go with higher FPS than use DLSS (I can see the difference in image quality in games with this thing on) to speed things up, because RT is eating all the performance.
 
So, err, dumb question. RT was originally developed by a software company, not Nvidia. Nvidia took it and developed a hardware method to (attempt to) make it have an "acceptable impact" and release it to market earlier than said software company's solution. I feel that was a brilliant marketing move, but a poor end-user solution. RTRT would be great if it had little-to-no real-world impact on framerates and widespread adoption in the game development world. If there is a software solution, and hardware that can "add this in" without dropping framerates below targets such as 60 FPS for 60 Hz gaming, 144 FPS for 144 Hz gaming, or 240 FPS for 240 Hz gaming, why would we care whether the card has RT cores or not? Now, are we going to get there in the current or coming generation? Almost certainly not. But since development is currently relying on those RT cores for its programming models, aren't we actually delaying broader adoption by insisting on getting it 'now', IF we ever do get there?

VRS, on the other hand, is a potential framerate improvement, leaving us in a "better" state, provided it doesn't significantly impact image quality. Why would anyone NOT want this, IF it works as advertised?

really CBF digging around in the past

but https://www.awn.com/news/sgi-demos-integration-art-vps-ray-tracing-hardware

You will find SGI had it ages ago for CAD, not really for gaming.
Pretty sure Sun had real-time ray tracing as well.

First-gen hardware/software tends to suck until it gets momentum.

An open standard would be nice.

Since consoles will be using AMD, and consoles drive PC gaming, then unless some killer app has ray tracing it's just a gimmick at the moment.

Anyone else remember standalone physics cards? Wank factor 99%. There were some titles you could run that were cool, but other than that it was a waste of money.
 
Unlike many irrational fanboys here, I have been very critical of AMD in both the CPU and GPU products they released and the undeserved hype they got with the 7nm process. But I have to admit and admire the RX 5700. Best card for the money and, interestingly, power draw as well. You can fine-tune it to consume around 140W and get slightly better performance than stock, and it beats both the 2060 Super and the 2060, naturally. Probably their best product of the year, and after that comes the Ryzen 5 3600.
 