
NVIDIA to Enable DXR Ray Tracing on GTX (10- and 16-series) GPUs in April Drivers Update

I'm not saying RTRT is a gimmick. It definitely adds to the quality of the graphics, but it is a mixture of ray tracing and rasterization.
The whole point of GPUs is to utilize rasterization.
If our chips were fast enough to do full RT in games (like we do RT when rendering CGI movies), you wouldn't need a GPU.
CPUs are more suited to ray tracing in general; it's just that they're usually optimized for other tasks (low latency, etc.).
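To put a number on why full RT is so heavy: every pixel spawns at least one intersection test before any shadows or bounces come into play. A minimal toy tracer in plain Python (the pinhole camera and single-sphere scene are made up for illustration, nothing vendor-specific):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the nearest positive hit distance along the ray, or None."""
    # Solve |origin + t*direction - center|^2 = radius^2 for t
    # (direction is assumed normalized, so the quadratic's a = 1).
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# One primary ray per pixel: even a tiny 320x240 frame is ~77k
# intersection tests per sphere, per frame, before shadows or bounces.
width, height = 320, 240
hits = 0
for y in range(height):
    for x in range(width):
        # Map the pixel to a ray through a simple pinhole camera.
        dx = (x + 0.5) / width * 2 - 1
        dy = (y + 0.5) / height * 2 - 1
        norm = math.sqrt(dx * dx + dy * dy + 1)
        if ray_sphere_hit((0, 0, 0), (dx / norm, dy / norm, 1 / norm),
                          (0, 0, 5), 1.0) is not None:
            hits += 1
print(f"{hits} of {width * height} pixels hit the sphere")
```

Scale that loop to 4K, 60 fps, multiple bounces and millions of triangles and the cost of brute-force RT becomes obvious; rasterization sidesteps it by projecting triangles instead of searching along rays.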

I doubt it.

The time you've wasted writing that post could have been spent reading a few paragraphs of this:
https://en.wikipedia.org/wiki/Ray_tracing_(graphics)
 
I see.
So nVidia spent years upon years "doing something with RT" to come up with the glorious (stock-market-crashing :))))) RTX stuff that absolutely had to use RTX cards, because, for those too stupid to get it: it has that R in it! It stands for "ray tracing". Unlike GTX, where G stands for something else (gravity, perhaps).
Did I say "years"? Right, so years of development to bring you that RTX thing.
Nvidia, AMD and Intel have all spent years "doing something with RT". Nvidia has OptiX (wiki, Nvidia), AMD has Radeon Rays as the latest iteration of what they are doing, and Intel has, for example, OSPRay. These are just examples; the work, research and software in this area is extensive.

The current real-time push that Nvidia (somewhat confusingly) branded as RTX is not a thing in itself; it is an evolution of all the previous work. The RT cores and the operations they are able to perform were not chosen randomly; Nvidia has extensive experience in the field. While you keep lambasting Nvidia for RTX, the APIs for RTRT seem to be taking a pretty open route. DXR is in DX12, open for anyone to implement. Vulkan has Nvidia RTX-specific extensions, but there is a discussion about whether something more generic is needed.

Nobody has claimed RT or RTRT cannot be done on anything else. Any GPU or CPU can do it; it is simply a question of performance. So far, DXR and OptiX tests done on Pascal, Volta and Turing suggest RT cores do provide a considerable boost to RT performance.
 
The current real-time push that Nvidia (somewhat confusingly) branded as RTX is not a thing in itself; it is an evolution of all the previous work.
A pinnacle of the effort.
The best thing in RT ever produced.
Brought to us by the #leatherman.

Thank you so much for your insight!

Unfortunately, it is somewhat misplaced. The context of the argument was whether RTX for GTX cards was something nVidia had likely prepared for GDC, as opposed to something nVidia had to pull out of its back pocket to address the thunder stolen by the Crytek demo.

The RT cores and the operations they are able to perform were not chosen randomly; Nvidia has extensive experience in the field.
Development at NVDA is not done randomly, got it.
Not least because they are led by, you know, Lisa's uncle.

Vulkan has Nvidia RTX-specific extensions, but there is a discussion about whether something more generic is needed.
Is that because it is not obvious that a "more generic" (if that's what we call open standards these days) ray tracing standard is needed, or for some even more sophisticated reason?
 
Is that because it is not obvious that a "more generic" (if that's what we call open standards these days) ray tracing standard is needed, or for some even more sophisticated reason?
RT cores perform specific operations that are exposed via the current Nvidia extensions. If I remember the discussion in the threads correctly, the argument was that Vulkan does not need to provide standard API calls for RT because RT is compute at its core: when IHVs expose the necessary GPU compute capabilities via their own extensions, developers can leverage those extensions to implement RT. In essence, Vulkan is a low-level API, and perhaps anything towards more generic RT is too high-level for it to address by design. It is not a wrong argument, and in a way a question of principle.
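The "RT is compute at its core" point is easy to see in code: the workhorse operation of BVH traversal, the ray vs. axis-aligned-bounding-box "slab test", is just a handful of multiplies, min/max ops and comparisons that any compute unit can execute. A hedged sketch of the textbook algorithm (not how the hardware actually implements it):

```python
def ray_aabb_intersect(origin, inv_dir, box_min, box_max):
    """Slab test: does the ray hit the axis-aligned box?

    inv_dir holds 1/direction per axis, precomputed once per ray
    (a large finite value stands in for 1/0 on axis-parallel rays).
    """
    t_near, t_far = float("-inf"), float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        # Distances to the two slab planes on this axis.
        t1 = (lo - o) * inv
        t2 = (hi - o) * inv
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    # Hit if the entry point is in front of us and before the exit point.
    return t_far >= max(t_near, 0.0)

# A ray down +z hits a box straddling the axis...
print(ray_aabb_intersect((0, 0, 0), (1e9, 1e9, 1.0), (-1, -1, 4), (1, 1, 6)))
# ...and misses a box off to the side.
print(ray_aabb_intersect((0, 0, 0), (1e9, 1e9, 1.0), (3, 3, 4), (5, 5, 6)))
```

A BVH traversal runs millions of these tests per frame, which is exactly why baking the test into fixed-function RT cores pays off even though generic shaders can run it too.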
 
RT cores perform specific operations that are exposed via the current Nvidia extensions. If I remember the discussion in the threads correctly, the argument was that Vulkan does not need to provide standard API calls for RT because RT is compute at its core: when IHVs expose the necessary GPU compute capabilities via their own extensions, developers can leverage those extensions to implement RT. In essence, Vulkan is a low-level API, and perhaps anything towards more generic RT is too high-level for it to address by design. It is not a wrong argument, and in a way a question of principle.

Stop feeding him, might as well be talking to a wall. The best you'll get is your posts misconstrued and ripped up in quotes and a -1 spree on everything you do. You've been warned ;)

The bait is so obvious, the only reason he's not on my ignore list is for entertainment purposes.
 
Anyway, this is a great way to tease the technology and boost adoption rates. Overall what's coming out now is looking a whole lot more like actual industry effort, broad adoption and multiple ways to attack the performance problem. The one-sided RTX approach carried only by a near-monopolist wasn't healthy. This however, looks promising.

I'm not sure I follow you on that last line, just because the wording almost makes it sound like a G-Sync or PhysX walled-garden approach, which it isn't. DXR is an open standard, developed with both AMD and nVidia, available to anyone through DX12. RTX is just nVidia's implementation of an open API on their hardware. Only AMD is to blame for not showing up and offering support for a standard that they helped create. If/when AMD releases something, they will also have an equivalent name that is proprietary to their hardware (i.e. Radeon Rays or something similar). And let's be real for a second: if it wasn't for nVidia releasing RTX, there would be no Crytek RT demos, no RT support in Unreal Engine, and no one else working on solutions. So to give credit where it is due, RTX most definitely single-handedly got the ball rolling on a tech that myself (and seemingly now yourself) can both agree is gaining industry-wide attention and looks promising... when less than a year ago, the 'ball' didn't even exist.

However, I completely agree with the rest of that assessment.
 
I'm not sure I follow you on that last line, just because the wording almost makes it sound like a G-Sync or PhysX walled-garden approach, which it isn't. DXR is an open standard, developed with both AMD and nVidia, available to anyone through DX12. RTX is just nVidia's implementation of an open API on their hardware. Only AMD is to blame for not showing up and offering support for a standard that they helped create. If/when AMD releases something, they will also have an equivalent name that is proprietary to their hardware (i.e. Radeon Rays or something similar). And let's be real for a second: if it wasn't for nVidia releasing RTX, there would be no Crytek RT demos, no RT support in Unreal Engine, and no one else working on solutions. So to give credit where it is due, RTX most definitely single-handedly got the ball rolling on a tech that myself (and seemingly now yourself) can both agree is gaining industry-wide attention and looks promising... when less than a year ago, the 'ball' didn't even exist.

However, I completely agree with the rest of that assessment.

RTX is just like G-Sync except in how they sell it (and even there, the similarities exist). The walled garden is artificial, and Nvidia's RTX approach is too expensive to last in the marketplace; it will be eclipsed by cheaper, more easily marketed alternatives. G-Sync is likewise a separate bit of hardware that is 'required' to get it right, according to Nvidia, while the rest of the industry works towards solutions that can actually go mainstream: FreeSync as part of the VESA standards.
 
RTX is just like G-Sync except in how they sell it (and even there, the similarities exist). The walled garden is artificial, and Nvidia's RTX approach is too expensive to last in the marketplace; it will be eclipsed by cheaper, more easily marketed alternatives. G-Sync is likewise a separate bit of hardware that is 'required' to get it right, according to Nvidia, while the rest of the industry works towards solutions that can actually go mainstream: FreeSync as part of the VESA standards.
At least they try. That's the whole point of being successful in business: you try 5 things, one sticks.

And I don't agree: RTX is here to stay. We don't know for how long: 2, 3, 5 years? For a few years it'll cement Nvidia's supremacy. That's all they want; they've already won the race for market share and profits.
By 2025 GPUs will be twice as fast, and mainstream PCs will have PCIe 5.0 and 10-16 cores. At that point CPUs could take some of the RTRT workload (and they're much better suited to it).

RTX makes another two big wins for Nvidia probable. Imagine RTRT sticks: people will learn to like it. Imagine AMD doesn't make a competing ASIC.
Datacenter and console makers will think twice before choosing AMD as the GPU supplier for next-gen streaming/console gaming.
 
Doesn't all that apply to DX12 and Vulkan as well?
No, because DX12 and Vulkan can result in a 10%+ FPS uplift in all future games they make with the engine. That means lowering the requirements to run the game, which translates to more money now and into the future. Well... that wasn't the case until Microsoft announced D3D12 support for Windows 7... now it translates to more money. :roll:

Ehm, just by transitioning to D3D12 and abandoning D3D11, they can spend far less time optimizing for that 10%+ down the road. Time is money.

What does DXR offer that saves money? Nothing? Because it isn't fully real-time raytraced, which is what would actually save them money. And because so little hardware can even do it, shipping a DXR game without regular D3D11/D3D12/Vulkan support is out of the question.

Quite the opposite. DXR promises to shave thousands of development hours by not having to paint lightmaps.
In a decade or two, sure. Not until then. Gamers want 1440p, 4K, higher framerates, HDR, and finally ray tracing. Until an ASIC is developed and integrated into GPUs that can do ray tracing at 4K in real time with negligible framerate cost, it will not become mainstream, and not for a decade after those products start launching.
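A rough back-of-the-envelope on that "4K in real time" bar (all figures illustrative): even a single ray per pixel at 4K/60 is about half a billion rays per second, before any shadows or bounces.

```python
# Back-of-the-envelope ray budget for "4K realtime" (illustrative figures).
width, height = 3840, 2160
fps = 60
rays_per_pixel = 1          # primary rays only; no shadows or bounces yet

primary_rays = width * height * fps * rays_per_pixel
print(f"{primary_rays / 1e9:.2f} Grays/s for primaries alone")

# Each shadow-casting light and each bounce multiplies the budget.
lights, bounces = 2, 1
total = primary_rays * (1 + lights) * (1 + bounces)
print(f"{total / 1e9:.2f} Grays/s with {lights} lights and {bounces} bounce")
```

The multiplier on lights and bounces is why hybrid renderers trace only a few effects (reflections, shadows) and leave primary visibility to rasterization.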

Crytek has not implemented DXR; they have a custom algorithm for ray-traced reflections.
Using Radeon Rays/OpenCL? Makes sense, seeing how AMD doesn't technically support DXR yet.
 
In a decade or two, sure. Not until then. Gamers want 1440p, 4K, higher framerates, HDR, and finally ray tracing. Until an ASIC is developed and integrated into GPUs that can do ray tracing at 4K in real time with negligible framerate cost, it will not become mainstream, and not for a decade after those products start launching.

I'm not sold it's decades, and neither do I think a dedicated ASIC is needed. What is needed, however, is the ability to divide FP32 and INT32 units down to the smallest instruction possible, pack instructions into the one unit, and execute at the lowest cost. This would keep units on the die from sitting wasted for most of the clock cycle, by letting them do something else rather than stall to maintain coherence back into the main render pipeline.

IMO, that period after next gen consoles come out and after devs stop supporting current gen consoles is when we will see a shift over.
 
Vega cards beat Titan in productivity - NV allows more performance from drivers.
Crytek shows ray tracing running on Vega - NV allows ray tracing on Pascal cards.

So pathetic. :D
 
There is a reason why RTX GPUs are not selling well. Last I heard, Nvidia had over $1.6 billion in unsold inventory :nutkick: :D. That's what happens when you overprice GPUs that don't deserve such high prices.
 
RTX is just like G-Sync except in how they sell it (and even there, the similarities exist). The walled garden is artificial, and Nvidia's RTX approach is too expensive to last in the marketplace; it will be eclipsed by cheaper, more easily marketed alternatives. G-Sync is likewise a separate bit of hardware that is 'required' to get it right, according to Nvidia, while the rest of the industry works towards solutions that can actually go mainstream: FreeSync as part of the VESA standards.

I think the thing that will be eclipsed by easier alternatives will be DXR, not necessarily RTX. My reasoning is that nVidia chose to add extra hardware to accelerate ray tracing, but that hardware is not exclusively bound to DXR. Crytek has already hinted at optimizations in their new ray tracing solution that will also utilize RT cores, so they won't automatically be relegated to the 'useless bin'. I would think they will still more than likely offer a substantial boost over doing the entire computation on the shader cores.

Until this week, no one but Microsoft offered a method of easily incorporating ray tracing into games, so nVidia basically had only DXR to work with. Apparently they thought the hardware-accelerated route was correct/required for that particular method. Is a monolithic chip the most elegant and efficient solution? Nope, obviously not. If alternatives to DXR had existed, perhaps the architecture of Turing would be different (it might be a compute-heavy card), or maybe not and it would be just the same.

Everyone is quick to bag on RTX and blame it for RT's failure, but maybe it is in fact DXR that is the fat pig hogging all of the GPU power? That seems a likely possibility when a Vega 56 is doing what it's doing. There are so many talented individuals in the world who for decades have come up with countless ways of doing things better than Microsoft. I have faith in these people, and I think there will be many changes and shifts in the early days to figure out the sweet spot and what is/isn't required to get acceptable results. But someone had to step out and take a chance in order to get the rest of these great individuals, and the industry, involved.

My opinion/prediction/hope is that Crytek showed this tech to try to be more relevant in the console market. They can now offer a way for the next console to run ray tracing on AMD hardware (hint hint) and make it a huge selling point (the new 4K, if you will). Now Unreal Engine has also added ray tracing. RT was never going to take off on the PC until consoles could also adopt it. Several people have declared it dead because a console wouldn't be able to ray trace for another 10 years, but apparently all it needs is a Vega 56-equivalent GPU to get 4K/30 at what looks to be slightly less than current-gen console quality geometry and textures. That should definitely be obtainable in the next-gen release. And if this ends up being the case, then as a PC enthusiast I benefit.
 


Exaggerate much?
Nope U?
 
Baby Turing (1660 series) should do a lot better than Pascal of equal core count, also due to the fact that TU116 has dedicated 2×FP16 pathways alongside the dedicated INT32/FP32 ones. I think this MIGHT mean TU116 can use the 2×FP16 acceleration for BVH and take some load off the other CUDA cores. Pascal, OTOH, has to make do with a jack-of-all-trades CUDA core. Wouldn't be surprised if the GTX 1660 is faster than the GTX 1080 in RTX. Please correct me if I'm wrong.
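On the 2×FP16 point: the idea is that two half-precision values occupy the same 32 bits as one FP32 value, so one instruction can operate on both, doubling throughput wherever FP16 precision is enough. A toy illustration of the packing and its precision trade-off using Python's stdlib `struct` (this is just the storage concept, not how TU116 actually schedules work):

```python
import struct

# Two FP16 ("half", struct format 'e') values fit in the same 32 bits
# as one FP32 value -- that's the packing behind the "2xFP16" rate.
packed = struct.pack("<ee", 1.5, -2.25)
assert len(packed) == 4   # same storage as a single float32

a, b = struct.unpack("<ee", packed)
print(a * 2, b * 2)       # one "instruction" worth of work on both halves

# The trade-off is precision: FP16 keeps only ~11 significant bits,
# so large values snap to the nearest representable number.
lossy, = struct.unpack("<e", struct.pack("<e", 4097.0))
print(lossy)              # 4096.0 -- 4097 is not representable in FP16
```

That precision loss is why FP16 is plausible for coarse work like BVH box tests but final hit points and shading still tend to want FP32.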
 
Baby Turing (1660 series) should do a lot better than Pascal of equal core count, also due to the fact that TU116 has dedicated 2×FP16 pathways alongside the dedicated INT32/FP32 ones. I think this MIGHT mean TU116 can use the 2×FP16 acceleration for BVH and take some load off the other CUDA cores. Pascal, OTOH, has to make do with a jack-of-all-trades CUDA core. Wouldn't be surprised if the GTX 1660 is faster than the GTX 1080 in RTX. Please correct me if I'm wrong.
Ray tracing will eventually be as readily available as AA is, for example, as more and more games adopt it. But right now and in the foreseeable future, it's not worth the headache IMO.
 
Ray tracing will eventually be as readily available as AA is, for example, as more and more games adopt it. But right now and in the foreseeable future, it's not worth the headache IMO.
mmhmm, I was just stating a bit of trivia lol
 
IMO, that period after next gen consoles come out and after devs stop supporting current gen consoles is when we will see a shift over.
Console gamers want 4K. Navi can deliver that, but not with ray tracing. Console ray tracing ain't happening until the generation after next at the earliest... close to a decade out. Considering how fabrication improvements have been coming at a snail's pace, slowing more with each generation, I think it's likely consoles won't see ray tracing adoption for two to three generations past Navi. It is too expensive, with virtually no benefit, because we're still talking hybrid ray tracing here, which represents more work, not less. Ray-tracing-only rendering is four+ generations past Navi at best. Which is also probably moot, because by then there will be a push to 8K over ray tracing.

Ray tracing is better suited to professional software than games, and that's not going to change for a long time. If a change of substrates leads to a huge jump in performance (e.g. graphene and terahertz processors) at little cost, then we could see ray tracing flood in to take advantage of all that untapped potential... and that's at least a decade out too.
 
Sorry, but the RTX in this game is a complete and total waste of resources, since the quality difference is mediocre. Nothing to brag about.
The devs would be better off focusing on the gameplay, story and atmosphere of the game instead of wasting time on this nonsense.
 
Sorry, but the RTX in this game is a complete and total waste of resources, since the quality difference is mediocre. Nothing to brag about.
The devs would be better off focusing on the gameplay, story and atmosphere of the game instead of wasting time on this nonsense.
That's the motto of Nintendo right there.
 
@notb I can't really imagine AMD not doing what's needed (ASIC integration or otherwise), unless they decide to get out of GPUs entirely, which is probably unlikely.

AMD also has the option of integrating RTRT via a CPU-side ASIC, perhaps a chiplet dedicated to that. Maybe part of the reason for the "more cores" push is for this kind of thing, making the CPU + GPU more balanced in graphics workloads. A 7nm (and lower) chiplet would be far more economical than having to add the same die area to a monolithic GPU die. With e.g. IF, chiplets could also be put on graphics cards.

IMO this is the way things will start to go, the only down-side is we may end up with even more SKUs. ;)

Thinking further out, I wonder whether chiplets (on cards) could allow for a user-customisable layout, i.e. you buy a card with maybe 8 or 16 sockets on it, into which you can install modules depending on your workload. Do you want rasterisation, RT, GPGPU, a custom ASIC for some blockchain or AI, etc.? Put on the power you need for the task. It would also allow gradual upgrades and customisation on a per-game level (to a point; it would be a hassle to swap chips every time you change game), i.e. when a new engine comes out, you may need to "rebalance" your "chip-set" <G>.
 
There is a reason why RTX GPUs are not selling well. Last I heard, Nvidia had over $1.6 billion in unsold inventory :nutkick: :D. That's what happens when you overprice GPUs that don't deserve such high prices.
It would be nice to provide some data for sales. I don't have any, but from what I see on store websites and on forums like this one, RTX does seem to have already surpassed Vega in popularity among heavy gamers. On one hand, Nvidia is a lot larger, so this was expected. On the other, RTX is a premium lineup; it's as if Radeon VII outsold Vega 56. Unlikely.

The inventory stems from the crypto crash, and that's common knowledge you also possess. Stop manipulating. They already reported $1.4B in October 2018.
Nvidia has warehouses full of small Maxwell and Pascal chips they struggle to move. It'll most likely be written off, hence the drop in the stock lately.

And it seems you haven't looked into AMD's financial statements lately.
They have $0.8B in inventory (12% of FY revenue). So it's not exactly better than Nvidia's situation (14% of FY revenue).
And AMD's inventory is likely mostly GPUs, while revenue is CPU+GPU.
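For what it's worth, the ratio arithmetic in that comparison can be checked from the figures in the post alone (back-of-the-envelope, not audited financials):

```python
# Inventory-to-revenue ratios from the figures quoted above
# ($B inventory, stated percentage of FY revenue).
nvidia_inventory, nvidia_ratio = 1.4, 0.14   # $1.4B at 14%
amd_inventory, amd_ratio = 0.8, 0.12         # $0.8B at 12%

# Back out the implied FY revenue each ratio assumes.
nvidia_revenue = nvidia_inventory / nvidia_ratio
amd_revenue = amd_inventory / amd_ratio
print(f"Implied FY revenue -- Nvidia: ${nvidia_revenue:.1f}B, AMD: ${amd_revenue:.1f}B")
```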
 
Is AMD's inventory stated as GPU specifically? Just asking.
 