
Crytek's Hardware-Agnostic Raytracing Scene Neon Noir Performance Details Revealed

RTX support will be implemented, which should improve performance on NVIDIA graphics cards
Given how different the RT technique used by the demo is from what Turing's RT hardware is doing, uh, really?

Fairy dust works just as well though there seems to be a shortage
Because just looking at the visuals and deciding whether it looks cool or not is too much for human beings, we need to know which magical buzzwords were sprinkled on top, how many gigarays were gigarayed, and whether AI was used to aggressively denoise the utter crap that came out of said gigarays, and if yes, which version of AI it was, 1.0 or 2.0, as 2.0 is bigger and should, hence, be better.
 
@RH92 , "RTX-powered games" is a marketing term that should die and burn in a fire, both because it is technically incorrect and because it immediately ignites flame wars in forums/comments.

Ray-tracing is not a complex thing at its core and is a well-understood concept. The only thing RTX cards have to improve matters is a couple of hardware operations that are faster/more efficient than running the same operations on shaders. Battlefield V, Metro Exodus and Shadow of the Tomb Raider all use DXR, which is a feature of DX12. This is a standard. Things with Vulkan are a bit more dicey, as the operations are currently exposed through Nvidia-specific extensions; that is what Quake2 VKPT uses and what Quake2 RTX will use if/when Nvidia decides to release it.

If AMD so wishes, they can write an implementation of DXR and provide it to us in drivers. I am very sure they have one running in-house, but there is no logical reason for them to release it. Vega should be able to compete very favorably with Pascal in DXR, but that is pointless in the grand scheme of things and would only highlight the lack of RT hardware compared to RTX cards, thus weakening AMD's own position.
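To illustrate the point that DXR is a vendor-neutral D3D12 feature, here is a minimal sketch (assuming the standard D3D12 headers from a recent Windows SDK) of how an application asks the driver whether it provides a DXR implementation at all; nothing in the query cares whether the rays end up on dedicated hardware or on shaders:

```cpp
// Minimal sketch, assuming <d3d12.h> from Windows SDK 10.0.17763 or newer.
// The raytracing tier is reported by whichever DXR implementation the vendor's
// driver provides - dedicated RT hardware is not a requirement of the API itself.
#include <windows.h>
#include <d3d12.h>

bool SupportsDXR(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5))))
        return false;

    // TIER_1_0 or higher means the driver exposes DXR, whether the rays are
    // traced on dedicated units or emulated on compute/shader cores.
    return options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```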

Given how different the RT technique used by the demo is from what Turing's RT hardware is doing, uh, really?
Could you please elaborate on what exactly you mean by different?
The RT technique used in Neon Noir is practically identical to what Battlefield V implements, right down to the optimization choices both Crytek and DICE went for.
 
You are mixing up stuff...

The point here is that in order to achieve 1080p30 on a Vega 56, the Crytek team is already using lower graphical quality in their demo than what can be observed in most RTX-powered games. For instance: "All the objects in the Neon Noir Demo use low-poly versions of themselves for reflections," Frölich says. "As a few people have commented, it is noticeable on the bullets." That is not the case in RTX titles, or at least not when RTX Ultra is enabled, as far as I am aware. With this in mind, lowering the graphical quality of that demo even further in order to achieve higher framerates/resolution does not make much sense, because you are already offering less graphical quality than the reference (which is RTX-powered games) at 1080p30 (see the sketch at the end of this post).

Hence, comparing the performance of the Vega 56 in that demo with the performance of Nvidia cards in RTX-powered titles at 1080p makes little sense.

Regardless, the Crytek team is basically confirming that in order to enjoy proper raytracing you need hardware support!
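Purely as an illustration of the low-poly-reflections point quoted above, here is a rough sketch; the Mesh type and function name are made up for this example and are not Crytek's code:

```cpp
// Illustrative sketch: the reflection pass traces against a coarser LOD than
// the main rasterized view, which is why low-poly bullets become noticeable
// in mirror-like surfaces. All names here are hypothetical.
#include <vector>
#include <algorithm>
#include <cassert>
#include <cstddef>

struct Mesh { std::size_t triangleCount; };

// lodLevels[0] is the full-detail mesh the main view rasterizes; higher
// indices are progressively cheaper. lodBias pushes ray-traced reflections
// toward those cheaper versions.
const Mesh& SelectReflectionLod(const std::vector<Mesh>& lodLevels, std::size_t lodBias)
{
    assert(!lodLevels.empty());
    const std::size_t rasterLod     = 0;  // full detail for the main view
    const std::size_t reflectionLod = std::min(rasterLod + lodBias, lodLevels.size() - 1);
    return lodLevels[reflectionLod];
}
```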

That's LODs; you kept saying resolution. Nothing new about "optimization": BFV had to look into reducing its LODs further, if you recall.

Expect to see more granularity added to the DXR settings, perhaps with a focus on culling distance and LODs

However, there are discussions internally to change what each individual settings do; we could do more, like play with LODs and cull distances as well as perhaps some settings for the new hybrid ray tracer that is coming in the future.

We are also looking into reducing the LOD levels for alpha tested geometry like trees and vegetation and we are also looking at reducing memory utilisation by the alpha shaders like vertex attribute fetching (using our compute input assembler).

Not sure how one can even say "proper ray tracing" when talking about a hybrid method.
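To make the "more granularity" idea from those quotes concrete, here is a hedged sketch of what such options could look like; the struct and its fields are hypothetical, not DICE's actual settings:

```cpp
// Hedged sketch of the kind of granular DXR options the quoted comments hint at
// (culling distance, LOD bias, alpha-tested vegetation). Names and defaults are
// hypothetical illustrations, not Battlefield V's real settings.
struct DxrQualitySettings
{
    float rayCullDistance   = 150.0f; // distance beyond which geometry is skipped for rays
    int   reflectionLodBias = 1;      // extra LOD steps applied to ray-traced geometry
    int   vegetationLodBias = 2;      // even coarser LODs for alpha-tested trees/foliage
    bool  useHybridTracer   = false;  // placeholder for the "hybrid ray tracer" mentioned above
};
```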
 
Nvidia's RTX ray-tracing has de-noise pass which is pixel re-construction
Let me fix that for you.
To get a more correct sentence, you could probably replace the strikethrough part with "Real-time".

Also worth noting is that the linked article (could you please fix the actual link: http://cgicoffee.com/blog/2018/03/what-is-nvidia-rtx-directx-dxr ) is from March 2018. Turing and RT Cores were not a thing at that point. Plus, the guy has some things wrong: the bits in RTX technology that help accelerate real-time raytracing also help accelerate offline raytracing. He mentioned OptiX, which has exactly that as its purpose.
 
Let me fix that for you.
To get a more correct sentence, you could probably replace the strikethrough part with "Real-time".

Also worth noting is that the linked article (could you please fix the actual link: http://cgicoffee.com/blog/2018/03/what-is-nvidia-rtx-directx-dxr ) is from March 2018. Turing and RT Cores were not a thing at that point. Plus, the guy has some things wrong: the bits in RTX technology that help accelerate real-time raytracing also help accelerate offline raytracing. He mentioned OptiX, which has exactly that as its purpose.

The article is dated around the same time as GTC 2018. Nvidia announced RTX tech at that time and even held in-depth dev talks.
 
Let me fix that for you.
To get a more correct sentence, you could probably replace the strikethrough part with "Real-time".

Also worth noting is that the linked article (could you please fix the actual link: http://cgicoffee.com/blog/2018/03/what-is-nvidia-rtx-directx-dxr ) is from March 2018. Turing and RT Cores were not a thing at that point. Plus, the guy has some things wrong: the bits in RTX technology that help accelerate real-time raytracing also help accelerate offline raytracing. He mentioned OptiX, which has exactly that as its purpose.
If you actually read http://cgicoffee.com/blog/2018/03/what-is-nvidia-rtx-directx-dxr it does mention "real time", i.e. I quote:

These feature the recently announced real-time ray tracing tool-set of Microsoft DirectX 12 as well as the (claimed) performance benefits proposed by NVIDIA's proprietary RTX technology available in their Volta GPU lineup, which in theory should give the developers new tools for achieving never before seen realism in games and real-time visual applications.
 
If you actually read http://cgicoffee.com/blog/2018/03/what-is-nvidia-rtx-directx-dxr it does mention "real time", i.e. I quote:

These feature the recently announced real-time ray tracing tool-set of Microsoft DirectX 12 as well as the (claimed) performance benefits proposed by NVIDIA's proprietary RTX technology available in their Volta GPU lineup, which in theory should give the developers new tools for achieving never before seen realism in games and real-time visual applications.
Yes, Nvidia did harp on RTRT already in the initial announcement, but it was clearly aimed at the professional market and it was clear the real-time part of it would be limited. Half a year later, Turing brought a rather large improvement on the performance side of things.

At that point it remained somewhat unclear what RTX technology was or what it brought to the table. Turns out, RTX is mostly just a marketing term. Volta had no RT Cores. Tensor cores were there, but denoising on Tensor cores is only one option (one that we are not sure DXR games even currently use). The main purpose was and is to bring RTRT to the general public. The primary drivers are DXR and implementations/solutions both within GameWorks and outside of it.

RTX as a term is Nvidia's marketing failure. It started out meaning a set of RTRT-related technologies, then became the prefix of a graphics card series, with DLSS bundled under the same RTX moniker. The meaning of the term got more and more mixed, and together with the reaction to RTX cards this makes RTX quite meaningless as a term.
 
Yes, Nvidia did harp on RTRT already in the initial announcement, but it was clearly aimed at the professional market and it was clear the real-time part of it would be limited. Half a year later, Turing brought a rather large improvement on the performance side of things.

At that point it remained somewhat unclear what RTX technology was or what it brought to the table. Turns out, RTX is mostly just a marketing term. Volta had no RT Cores. Tensor cores were there, but denoising on Tensor cores is only one option (one that we are not sure DXR games even currently use). The main purpose was and is to bring RTRT to the general public. The primary drivers are DXR and implementations/solutions both within GameWorks and outside of it.

RTX as a term is Nvidia's marketing failure. It started out meaning a set of RTRT-related technologies, then became the prefix of a graphics card series, with DLSS bundled under the same RTX moniker. The meaning of the term got more and more mixed, and together with the reaction to RTX cards this makes RTX quite meaningless as a term.
Besides tensor and RT units, the RTX 2080 Ti still has non-RT upgrades, e.g. improved raster performance, rapid packed math (CUDA cores), a proper hardware async scheduler from Volta, discrete integer CUDA cores, double the L2 cache, variable rate shading, etc.

DirectX 12's DirectML extension exposes the GPU's rapid packed math features.
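As a hedged illustration of that last point (assuming the DirectML headers; the tensor dimensions are arbitrary), describing a tensor as FP16 is what lets an operator take advantage of rapid packed math on hardware that supports it:

```cpp
// Hedged sketch: describing a half-precision (FP16) buffer tensor for DirectML.
// FP16 is the data type that rapid packed math accelerates on supporting GPUs.
// The 1x1x64x64 shape is arbitrary, for illustration only.
#include <d3d12.h>
#include <DirectML.h>
#include <cstdint>

DML_TENSOR_DESC MakeFp16TensorDesc()
{
    static const UINT sizes[4] = { 1, 1, 64, 64 };   // N, C, H, W

    static DML_BUFFER_TENSOR_DESC buffer = {};
    buffer.DataType               = DML_TENSOR_DATA_TYPE_FLOAT16;
    buffer.Flags                  = DML_TENSOR_FLAG_NONE;
    buffer.DimensionCount         = 4;
    buffer.Sizes                  = sizes;
    buffer.Strides                = nullptr;          // packed layout
    buffer.TotalTensorSizeInBytes = 1u * 1u * 64u * 64u * sizeof(std::uint16_t);

    DML_TENSOR_DESC desc = { DML_TENSOR_TYPE_BUFFER, &buffer };
    return desc;
}
```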
 