Friday, March 15th 2019

Crytek Shows Off Neon Noir, A Real-Time Ray Tracing Demo For CRYENGINE

Crytek has released a new video demonstrating the results of a CRYENGINE research and development project. Neon Noir shows how real-time mesh ray-traced reflections and refractions can deliver highly realistic visuals for games. The demo was created with a new, advanced version of CRYENGINE's Total Illumination showcasing real-time ray tracing. The feature will be added to the CRYENGINE release roadmap in 2019, enabling developers around the world to build more immersive scenes, more easily, with a production-ready version of the feature.


Neon Noir follows the journey of a police drone investigating a crime scene. As the drone descends into the streets of a futuristic city illuminated by neon lights, we see its reflection accurately displayed in the windows it passes by, or scattered across the shards of a broken mirror, while it emits a red and blue lighting routine that bounces off the different surfaces via CRYENGINE's advanced Total Illumination feature. Demonstrating further how ray tracing can deliver a lifelike environment, neon lights are reflected in the puddles below them, street lights flicker on wet surfaces, and windows accurately reflect the scene opposite them.
Neon Noir was developed on a bespoke version of CRYENGINE 5.5, and the experimental ray tracing feature based on CRYENGINE's Total Illumination used to create the demo is both API and hardware agnostic, enabling ray tracing to run on most mainstream contemporary AMD and NVIDIA GPUs. However, the future integration of this new CRYENGINE technology will be optimized to benefit from performance enhancements delivered by the latest generation of graphics cards and supported APIs like Vulkan and DX12.

Ray tracing is a rendering technique that simulates complex lighting behavior. Realism is achieved by simulating the propagation of discrete fractions of light energy and their interaction with surfaces. With contemporary GPUs, ray tracing has become more widely adopted by real-time applications like video games, in combination with traditionally less resource-hungry rendering techniques such as cube maps, used where applicable.
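To make that idea concrete, here is a minimal sketch of the core loop in C++: cast a ray, find the nearest surface it hits, and attenuate the carried energy at the hit. The single-sphere scene and the 20% absorption figure are illustrative assumptions, not anything taken from CRYENGINE.

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(Vec3 b) const { return {x + b.x, y + b.y, z + b.z}; }
    Vec3 operator-(Vec3 b) const { return {x - b.x, y - b.y, z - b.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Distance along the ray to the first hit with a sphere, or -1 on a miss.
// The ray direction is assumed to be normalized, so the quadratic's a == 1.
float hitSphere(Vec3 origin, Vec3 dir, Vec3 center, float radius) {
    Vec3 oc = origin - center;
    float b = 2.0f * dot(oc, dir);
    float c = dot(oc, oc) - radius * radius;
    float disc = b * b - 4.0f * c;
    if (disc < 0.0f) return -1.0f;
    return (-b - std::sqrt(disc)) * 0.5f;
}

int main() {
    Vec3 origin{0, 0, 0}, dir{0, 0, 1};  // one primary ray from the camera
    float energy = 1.0f;                 // fraction of light energy carried
    float t = hitSphere(origin, dir, {0, 0, 5}, 1.0f);
    if (t > 0.0f) {
        Vec3 hit = origin + dir * t;
        energy *= 0.8f;                  // surface absorbs 20% per bounce (made-up value)
        std::printf("hit at t=%.2f (z=%.2f), remaining energy %.2f\n", t, hit.z, energy);
    }
    return 0;
}
```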
The experimental ray tracing feature simplifies and automates the rendering and content creation process to ensure that animated objects and changes in lighting are correctly reflected, with a high level of detail, in real time. This eliminates the known limitations of pre-baked cube maps and local screen space reflections when creating smooth surfaces like mirrors, and allows developers to create more realistic, consistent scenes. To showcase the benefits of real-time ray tracing, screen space reflections were not used in this demo.
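The hybrid approach the article describes can be sketched as a simple decision at each reflective surface point: trace a reflection ray against scene geometry first, and only fall back to a pre-baked cube map when the ray misses. Everything below (the stubbed traceScene, the toy sampleCubeMap gradient) is a hypothetical stand-in for illustration, not CRYENGINE's actual code.

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Mirror reflection of an incoming direction about the surface normal.
Vec3 reflect(Vec3 d, Vec3 n) { return sub(d, scale(n, 2.0f * dot(d, n))); }

// Hypothetical stand-in: a real engine would trace the scene's geometry here.
// This stub never hits anything, so the cube-map fallback is always taken.
bool traceScene(Vec3 /*origin*/, Vec3 /*dir*/, Vec3* /*hitColor*/) {
    return false;
}

// Hypothetical stand-in for a pre-baked cube map: a toy sky gradient.
Vec3 sampleCubeMap(Vec3 dir) {
    float t = 0.5f * (dir.y + 1.0f);
    return {1.0f - 0.5f * t, 1.0f - 0.3f * t, 1.0f};
}

// The hybrid policy: prefer the ray-traced reflection (it captures dynamic and
// off-screen objects, which cube maps and SSR miss), fall back on a ray miss.
Vec3 shadeReflection(Vec3 viewDir, Vec3 normal, Vec3 surfacePoint) {
    Vec3 r = reflect(viewDir, normal);
    Vec3 color{0, 0, 0};
    if (traceScene(surfacePoint, r, &color))
        return color;
    return sampleCubeMap(r);
}

int main() {
    Vec3 c = shadeReflection({0, 0, 1}, {0, 1, 0}, {0, 0, 0});
    std::printf("reflection color: %.2f %.2f %.2f\n", c.x, c.y, c.z);
    return 0;
}
```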

150 Comments on Crytek Shows Off Neon Noir, A Real-Time Ray Tracing Demo For CRYENGINE

#101
Stefem
NxodusI'm one of those idiots who bought an RTX card for RT. I don't care about the future, I wanted to enjoy RT now! And Metro Exodus, man, I gotta say, RTX blew my mind away. I can't even find words for the exceptional beauty of it... I mean, all those screenshots did no justice to RT at all. Turning RT off in Metro Exodus made it look like a 2009 game. The realism, the ambience of RTX justified every cent I paid for my 2060. I only play 6 or so games a year, I wanted the best visual experience, and it was well worth it.

I understand your points, but it's important to note there is a tiny minority of people like me who fell in love with the RTX line of cards, and my motto is: once you RTX, you never go back :)
I don't agree with him on anything, to be honest. I'm not yet an RTX owner, but I wouldn't worry: Crytek itself pointed out that once it's integrated into their engine, they will leverage the advantage of the newer hardware:

"However, the future integration of this new CRYENGINE technology will be optimized to benefit from performance enhancements delivered by the latest generation of graphics cards and supported APIs like Vulkan and DX12."

We know almost nothing about the Crytek implementation (they may be using an SDF volumetric representation in place of standard geometry, for example), but it's been known for almost a full year that some developers were working on raytracing for older hardware, even consoles; that's what Sebastian Aaltonen said last summer, for example.
AmioriKI just thought I'd copy something I posted in another thread, as it's relevant to this discussion regarding the CRYENGINE RT approach.



These are just my 2 cents on the ray tracing. Maybe I'm wrong and the big Turing implementation is innately inferior to a general GPU approach... Time will tell.

I didn't, and won't, invest in a 20-series GPU because I feel it isn't worth it yet. But I do think Nvidia will improve the dedicated RT cores.
FP16 can be used to compute BVH traversal (something even Vega, or smaller Turing parts without dedicated hardware, can benefit from), but RT cores are still much faster, and they also offload the shader cores, which can then work on other things. Even Crytek aims to take advantage of the newer hardware once they start to actually implement the tech in their engine.
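For readers unfamiliar with the term, a rough sketch of what "BVH traversal" involves is below: ray-vs-box tests that prune whole subtrees so most primitives are never tested. This is the loop RT cores run in fixed-function hardware, and the arithmetic in it could in principle run at reduced precision such as FP16 on shader cores. The hand-built three-node tree is purely illustrative.

```cpp
#include <cstdio>
#include <vector>
#include <algorithm>
#include <utility>

struct AABB { float min[3], max[3]; };

// Slab test: does the ray (origin + t*dir, t >= 0) intersect the box?
bool hitAABB(const float o[3], const float invDir[3], const AABB& b) {
    float tmin = 0.0f, tmax = 1e30f;
    for (int i = 0; i < 3; ++i) {
        float t0 = (b.min[i] - o[i]) * invDir[i];
        float t1 = (b.max[i] - o[i]) * invDir[i];
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
    }
    return tmin <= tmax;
}

struct Node {
    AABB box;
    int left, right;   // child indices; -1 means leaf
    int primitive;     // payload for leaves (-1 for inner nodes)
};

// Iterative traversal with an explicit stack (what shader code typically does).
int traverse(const std::vector<Node>& nodes, const float o[3], const float d[3]) {
    float invDir[3] = {1.0f / d[0], 1.0f / d[1], 1.0f / d[2]};
    std::vector<int> stack = {0};                 // start at the root
    while (!stack.empty()) {
        int idx = stack.back(); stack.pop_back();
        const Node& n = nodes[idx];
        if (!hitAABB(o, invDir, n.box)) continue; // prune this whole subtree
        if (n.left < 0) return n.primitive;       // leaf: report first hit found
        stack.push_back(n.left);
        stack.push_back(n.right);
    }
    return -1;                                    // missed everything
}

int main() {
    // Root box split into two children; only the right child lies on the ray.
    std::vector<Node> nodes = {
        {{{-2, -1, 4}, {2, 1, 6}}, 1, 2, -1},
        {{{-2, -1, 4}, {-1, 1, 6}}, -1, -1, 7},   // leaf holding primitive #7
        {{{-1, -1, 4}, {2, 1, 6}}, -1, -1, 9},    // leaf holding primitive #9
    };
    float o[3] = {0, 0, 0}, d[3] = {0, 0, 1};
    std::printf("hit primitive %d\n", traverse(nodes, o, d));
    return 0;
}
```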
We don't know their exact implementation, as they are refraining from giving any details and sidestepping any technical questions they are asked, but they are planning to slowly release details. I guess that is to draw attention and generate hype; sadly, Crytek has been struggling for years, and the publicity may help them. I loved how they pushed things forward with Crysis.
#102
rtwjunkie
PC Gaming Enthusiast
XuperOMG!!! What the hell are you doing ? This Crytek demo is not big GUN that you want to defend RTX! GET OVER IT! Two Pages wasted for nonsense posts ! MOD Please Remove All non-related Posts , Thanks
“RTX” is an Nvidia term for their implementation of ray tracing. So this thread is exactly about ray tracing being done by someone other than Nvidia. Keep in mind, this is not Nvidia exclusive technology.
#103
Xuper
rtwjunkie“RTX” is an Nvidia term for their implementation of ray tracing. So this thread is exactly about ray tracing being done by someone other than Nvidia. Keep in mind, this is not Nvidia exclusive technology.
Whatever, but the Radeon VII vs 2080 topic needs to be dropped.
#104
medi01
Soo, if generic CUs can do that, why waste silicon on dedicated?
RecusWhile rest will use industry standard DXR.
#105
Vayra86
XuperWhatever, but the Radeon VII vs 2080 topic needs to be dropped.
I think we're past that, so stop digging it up if you want it that way. There is a Report button; use it. Posting about it is just as off-topic as the things you complain about.
#106
Stefem
medi01Soo, if generic CUs can do that, why waste silicon on dedicated?
Hem... being much faster could be a reason, perhaps? That's why graphics cards were born, even though everything was possible in software. Why mine on ASICs if it can be done on a CPU, why use dedicated video encoding and decoding, why a tessellator, why texture mapping units, why ROPs... there are plenty of examples, and they all have the same answer.
#107
cucker tarlson
medi01Soo, if generic CUs can do that, why waste silicon on dedicated?
You mean, why didn't they just make a 6,000-CUDA-core card on that 750 mm²?
Maybe extra CUDA cores draw more power than tensor/RT cores, or they'd have more production issues with such a card. They'd need a complete die redesign too; with the 2080 Ti it's the same 88 ROP/11 GB configuration, with tensor and RT cores added to it.
That is a good question I'd like to know the answer to, too.
In the end I think they decided that, economically, they're better off with RT-specific hardware and software rather than brute force and more CUDA.
Maybe they just wanted to use a proprietary solution (DXR) because they're Nvidia.
#108
medi01
StefemHem... being much faster could be a reason, perhaps?
Well, perhaps, but do you see that "much faster" anywhere in the OP demo?
cucker tarlsonMaybe they just wanted to use a proprietary solution (DXR) because they're Nvidia.
Knowing Huang's habits, hell yeah, on the other hand DXR made it into DX12.
Makes sense only if he hoped competitors would not bother implementing it like that/it would take them long to catch up (years of research are behind it).
Notably, this very demo doesn't use it, does it?
#109
cucker tarlson
medi01Knowing Huang's habits, hell yeah, on the other hand DXR made it into DX12.
Makes sense only if he hoped competitors would not bother implementing it like that/it would take them long to catch up (years of research are behind it).
Notably, this very demo doesn't use it, does it?
Seems to me it doesn't, at least for now.


Look at this video: the Titan V only delivers 27 fps where the 2080 Ti delivers 42. It seems the more-CUDA-instead-of-dedicated-RT/tensor-cores approach would still be a lot less efficient.
#110
Stefem
medi01Well, perhaps, but do you see that "much faster" anywhere in the OP demo?



Knowing Huang's habits, hell yeah, on the other hand DXR made it into DX12.
Makes sense only if he hoped competitors would not bother implementing it like that/it would take them long to catch up (years of research are behind it).
Notably, this very demo doesn't use it, does it?
I don't see how the demo disproves what I said; they didn't compare performance, and (as I've already posted above) Crytek said the integration of this tech in their engine will benefit from the enhancements delivered by newer hardware using DX12 and Vulkan.
We know almost nothing about what they've done; they carefully avoided giving out any details and they are dodging technical questions, but they said they will gradually release some info. The only thing to do is wait for actual details.
As I've said in a previous post, there are other developers working to use RT on old hardware and consoles, and there are even games out now. Look at Claybook, for example: it uses raytracing for primary + AO + shadow rays and no one talks about it. I think Crytek is just way better at generating flame debates :laugh:.
#111
Vya Domus
Ray-traced elements have been used for years to approximate all sorts of effects, such as global illumination; there was never a need for a particular hardware/software framework for these things to work.

GPUs these days are no longer GPUs, they are compute accelerators with some dedicated graphics hardware strapped on. Learning GPGPU/OpenCL/CUDA made me realize how many hardware capabilities and features have been stuffed inside these things that have little or no relevance to graphics workloads. There is a reason Microsoft attached no particular hardware requirements to DXR; it may very well be the case that future GPUs will go down the path of doing RTRT in the form of generic compute workloads rather than strapping yet another dedicated ASIC onto these already clogged architectures.
#112
bigfurrymonster
Vayra86Yes. Been saying since day one.
I think the killer solution would be a multi-chip GPU design like what they use for Ryzen 3000 (I/O controller + Zen cores).

You could have a small "RT" coprocessor and a GPU on separate dies connected by Infinity Fabric.
The small RT coprocessor's cost would be negligible compared to Nvidia's monolithic approach.
#113
medi01
cucker tarlsonLook at this video: the Titan V only delivers 27 fps where the 2080 Ti delivers 42. It seems the more-CUDA-instead-of-dedicated-RT/tensor-cores approach would still be a lot less efficient.
In that particular way of doing RT rendering, yes.
Doesn't necessarily mean anything about what Crytek did.
StefemI don't see how the demo disproves what I said; they didn't compare performance, and (as I've already posted above) Crytek said the integration of this tech in their engine will benefit from the enhancements delivered by newer hardware using DX12 and Vulkan.
They didn't have to compare performance, they are selling the engine, not non-RTX GPUs.

Vulkan doesn't have anything like DXR (a very specific set of instructions to do certain things with rays).
StefemWe know almost nothing about what they've done
Well, why? They said they used SVOGI, or sparse voxel octree global illumination.
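For context on what SVOGI-style tracing does differently from triangle raytracing: instead of intersecting geometry, it marches cones through a prefiltered (mipmapped) voxel grid, sampling coarser levels as the cone widens. The toy CPU sketch below only gestures at that idea with a hand-made 8x8x8 occupancy grid; it is an assumption-laden illustration, not Crytek's implementation.

```cpp
#include <cstdio>
#include <cmath>
#include <vector>
#include <algorithm>

struct Grid {
    int size;                      // voxels per side at this mip level
    std::vector<float> density;    // 0 = empty, 1 = solid
    float at(int x, int y, int z) const {
        x = std::max(0, std::min(x, size - 1));
        y = std::max(0, std::min(y, size - 1));
        z = std::max(0, std::min(z, size - 1));
        return density[(z * size + y) * size + x];
    }
};

// Average 2x2x2 blocks to build the next coarser mip (the "prefilter" step).
Grid downsample(const Grid& g) {
    int h = g.size / 2;
    Grid out{h, std::vector<float>(h * h * h)};
    for (int z = 0; z < h; ++z)
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < h; ++x) {
                float s = 0;
                for (int k = 0; k < 8; ++k)
                    s += g.at(2 * x + (k & 1), 2 * y + ((k >> 1) & 1), 2 * z + (k >> 2));
                out.density[(z * h + y) * h + x] = s / 8.0f;
            }
    return out;
}

// March one cone; a wider footprint means a coarser mip. Returns occlusion 0..1.
float coneTrace(const std::vector<Grid>& mips, float px, float py, float pz,
                float dx, float dy, float dz, float halfAngleTan) {
    float occlusion = 0.0f, t = 1.0f;
    while (t < mips[0].size && occlusion < 1.0f) {
        float radius = std::max(1.0f, t * halfAngleTan);
        int level = std::min((int)mips.size() - 1, (int)std::log2(radius));
        float scale = 1.0f / (1 << level);
        const Grid& g = mips[level];
        float d = g.at((int)((px + dx * t) * scale), (int)((py + dy * t) * scale),
                       (int)((pz + dz * t) * scale));
        occlusion += (1.0f - occlusion) * d;  // front-to-back compositing
        t += radius;                          // step size grows with the cone
    }
    return std::min(occlusion, 1.0f);
}

int main() {
    Grid g{8, std::vector<float>(512, 0.0f)};
    for (int y = 0; y < 8; ++y)               // a solid wall at z = 6
        for (int x = 0; x < 8; ++x)
            g.density[(6 * 8 + y) * 8 + x] = 1.0f;
    std::vector<Grid> mips = {g, downsample(g), downsample(downsample(g))};
    std::printf("occlusion towards wall:   %.2f\n", coneTrace(mips, 4, 4, 0, 0, 0, 1, 0.3f));
    std::printf("occlusion away from wall: %.2f\n", coneTrace(mips, 4, 4, 0, 0, 0, -1, 0.3f));
    return 0;
}
```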
#114
PanicLake
I just realized: RTX is the new "G-Sync"...
#115
bug
medi01Soo, if generic CUs can do that, why waste silicon on dedicated?



Gee, I don't know. Why do we waste silicon on 3D in general if we can do 3D in software?
Just look at what the tensor cores do for some compute workloads; that's why specialized silicon is needed.
#116
INSTG8R
Vanguard Beta Tester
GinoLatinoI just realized: RTX is the new "G-Sync"...
PhysX 2.0
#117
londiste
RecusNo DXR or fallback layer. So this "ray tracing" method will be locked on Cryengine. While rest will use industry standard DXR.
www.cryengine.com/news/crytek-releases-neon-noir-a-real-time-ray-tracing-demonstration-for-cryengine
Total Illumination is their voxel AO solution; in principle it is halfway towards raytracing, and they have obviously expanded the feature by quite a bit.
They said they will use hardware acceleration if they can. Vulkan and DX12 imply using VK_RT extensions and DXR, which today means it does include RTX support. Or, technically, the other way around: RTX supports the APIs that Crytek uses.
However, the future integration of this new CRYENGINE technology will be optimized to benefit from performance enhancements delivered by the latest generation of graphics cards and supported APIs like Vulkan and DX12.
medi01Soo, if generic CUs can do that, why waste silicon on dedicated?
Efficiency. Dedicated hardware can do BVH traversal faster: fewer resources, less power.
medi01In that particular way of doing RT rendering, yes.
Doesn't necessarily mean anything about what Crytek did.
...
Vulkan doesn't have anything like DXR (a very specific set of instructions to do certain things with rays).
Vulkan has VK_NVX_raytracing extensions. Both DXR and these extensions provide access to the RT cores that do BVH traversal. This is a fairly central operation in most raytracing implementations.
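For what that looks like in practice, here is a small sketch of how an engine might probe a device for that Vulkan extension (under either its original NVX name or its later VK_NV_ray_tracing name) and otherwise fall back to a compute-based path. It only enumerates extensions; the fallback decision itself is an assumption, and error handling is trimmed.

```cpp
#include <vulkan/vulkan.h>
#include <cstdio>
#include <cstring>
#include <vector>

int main() {
    // Minimal instance, no layers or app info needed for a capability probe.
    VkInstanceCreateInfo ici{};
    ici.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    VkInstance instance;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t gpuCount = 0;
    vkEnumeratePhysicalDevices(instance, &gpuCount, nullptr);
    std::vector<VkPhysicalDevice> gpus(gpuCount);
    vkEnumeratePhysicalDevices(instance, &gpuCount, gpus.data());

    for (VkPhysicalDevice gpu : gpus) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(gpu, &props);

        // List every device extension and look for the raytracing ones.
        uint32_t extCount = 0;
        vkEnumerateDeviceExtensionProperties(gpu, nullptr, &extCount, nullptr);
        std::vector<VkExtensionProperties> exts(extCount);
        vkEnumerateDeviceExtensionProperties(gpu, nullptr, &extCount, exts.data());

        bool hasRT = false;
        for (const auto& e : exts)
            if (std::strcmp(e.extensionName, "VK_NV_ray_tracing") == 0 ||
                std::strcmp(e.extensionName, "VK_NVX_raytracing") == 0)
                hasRT = true;

        std::printf("%s: raytracing extension %s\n", props.deviceName,
                    hasRT ? "available" : "not available (use compute fallback)");
    }
    vkDestroyInstance(instance, nullptr);
    return 0;
}
```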
#118
Vayra86
GinoLatinoI just realized: RTX is the new "G-Sync"...
That may turn out to be very accurate indeed. It will do it 'a little better' at a tremendous cost.
#119
medi01
bugWhy do we waste silicon on 3D in general if we can do 3D in software?
Because it is vastly faster. Something that is apparently not the case here with ray tracing, is it?
NV went with "look, you need (my) specialized hardware for RT reflections/shadows!".
Crytek called BS.
Let's twist it somehow, shall we?
londisteVulkan has VK_NVX_raytracing extensions.
Good for whatever NVX stands for. Oh wait, isn't it the thing that killed OpenGL? Hmm...
Vayra86It will do it 'a little better'
Good that you put it in quotes. I hope you also meant it.
#120
bug
Vayra86That may turn out to be very accurate indeed. It will do it 'a little better' at a tremendous cost.
The thing is, the DXR part of RTX is in DX now. So there are at least a few parties behind that. Then again, who knows what DXR 2.0 or DXR 3.0 will look like?
medi01Because it is vastly faster. Something that is apparently not the case here with ray tracing, is it?
How do you figure? I have seen no numbers or technical details about what Crytek did, yet you're assuming performance is about the same?
#121
medi01
bugHow do you figure? I have seen no numbers or technical details
You see, I didn't need to check third-party tests to figure out that hardware rendering wiped the floor with software rendering back when GPUs became a thing.
#122
bug
medi01You see, I didn't need to check third-party tests to figure out that hardware rendering wiped the floor with software rendering back when GPUs became a thing.
Maybe so, but now you're assuming a solution without dedicated hardware performs about the same as a solution that has said hardware. Which is quite the opposite.
#123
londiste
Vayra86Yes. Been saying since day one. A hardware implementation that takes such a massive amount of die space is so grossly inefficient, simple economics will destroy it. If not with Turing, then later down the line. It's just not viable. Sales numbers currently only underline that sentiment. I'm not the only one frowning at this; already with the first gen and a meagre implementation, we're looking at a major price bump because the die is simply bigger. The market ain't paying it, and devs will not spend time on it as a result. Another aspect: I'm not looking to sell my soul to Nvidia's overpriced proprietary bullshit; I'm not paying for inefficiency. It's been the reason I've bought Nvidia the past few generations... they were more efficient. Their wizardry with VRAM, for example, and balancing out (most) GPUs in the stack so well, is quite something. Turing is like a 180-degree turn.

This, however... yes. Simply yes. Attacking the performance problem from the angle of a software-based implementation that can scale across the entire GPU instead of just a part of it, while the entire GPU is also available should you want the performance elsewhere. Even if this runs at 5 FPS today in realtime on a Vega 56, it's already more promising than dedicated hardware. This is the only way to avoid a PhysX situation. RT needs widespread adoption to get the content to go along with it. If I can see a poorly running glimpse of my RT future on a low-end GPU, this will catch on, and it will be an immense incentive for people to upgrade, and keep upgrading. That is viable in a marketplace.

Another striking difference, I feel, is the quality of this demo compared to what Nvidia has put out with RTX. This feels like a next step in graphics in every way; the fidelity, the atmosphere simply feel right. With every RTX demo thus far, even in Metro Exodus, I don't have that same feeling. It truly feels like some weird overlay that doesn't come out quite right. Which, in reality, it also is. The cinematically badly lit scenes of Metro only emphasize that when you put them side by side with non-RT scenes. The latter may not always be 'correct', but it sure is a whole lot more playable.



*DXR. In the end Nvidia is using a customized setup that works for them; it remains to be seen how well AMD can plug into DXR with their solution, or how Crytek does it now, and/or whether they even want or need to. The DX12 requirement sure doesn't help, and DXR will be bogged down by rasterization as well, as it sits within the same API. There is a chance the overall trend will move away from DXR altogether, leaving RTX in the dust or out to find a new point of entry.
Sorry for digging up an old post but I think you are off base with this.
- Die space cost for RT cores is 10-15%, probably less. I am not sure that is exactly massive.
- RTX is proprietary; DXR is not. The Vulkan extensions may or may not turn out to be proprietary depending on what route the other IHVs take.
- A software-based implementation (or in this case, an implementation running on general-purpose hardware) is simply not as efficient as dedicated hardware. So far everything points at this being the case here, whether you take Nvidia's inflated marketing numbers or actual tests by users. This shows even with production applications and Turing vs Titan V. RT cores simply do make a big difference in performance.
- Quality of the demo is a different topic; CryTek is selling the engine, so it needs to look beautiful. This one is probably best compared to the Star Wars demo. Metro is an artistic problem rather than a technical one.

CryTek said this is on the release roadmap for 2019, so all the performance aspects should be testable eventually. I would expect them to talk more about it during GDC as well.
#124
medi01
bug...now you're assuming a solution without dedicated hardware performs about the same...
I'm not assuming, I'm seeing it.
londisteThis one is probably best compared to the Star Wars demo.
Why would the mentioned demo, focusing on RTX, not be made to look good?
#125
Stefem
medi01They didn't have to compare performance, they are selling the engine, not non-RTX GPUs.
I've probably misunderstood your question; didn't you ask whether dedicated hardware being faster was missing from the Crytek demo?
medi01Vulkan doesn't have anything like DXR (a very specific set of instructions to do certain thing with rays).
Yes, it does. NVIDIA proposed several extensions for raytracing that have since been integrated into the API, with some contributions from both Intel and AMD.
www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#VK_NV_ray_tracing
medi01Well, why? They said they used SVOGI, or sparse voxel octree global illumination.
That's not raytracing, and it's nothing new for them or for others, as there have been games out for years that make use of voxels for GI. And BTW, it's not the solution used to render the reflections (the only thing they claim is raytraced).