I can only imagine how much work it would take to create a map in Metro Exodus. Is the ray tracing baked into light sources and/or reflective surfaces in Metro Exodus: Enhanced Edition? Or does the map designer have to add those manually?
I once designed some skirmish maps for Red Alert 2; those were a huge time sink (especially if using AI scripts), but I'll bet it's nothing in comparison to creating an FPS map.
Good questions. There are different applications. Like, I think it's possible to do partial illumination and factor in only certain sources, but generally it applies to all sources globally and factors in the whole 3D scene. You can probably tweak what is and isn't detected as a source, so baked lighting gets used instead of RTGI selectively, and I suspect something like that was done in Metro Enhanced at certain points. If I'm not mistaken, back when RT performance was harder to come by, the global illumination was often sort of a hybrid, with some key sources hitting on RT but other parts of the scene still being traditional (something like the sketch below).

If you play around with the performance settings for RTGI, the lower options tend to be less blatant and seem to hit on fewer things. That's why the difference wouldn't always be so obvious versus a game like Control's implementation, where it looks like pretty much every single surface and source reacts to the ray-traced features, and which utterly crippled all but the absolute fastest first-gen RTX cards. Compare that with Shadow of the Tomb Raider's 'don't bother' level of minuscule shadows at low settings. It seems like they all balance how much RT they use by default, for various reasons, with totally pure applications not really existing in the wild. Only the devs would know for sure.
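Just to illustrate what I mean by a hybrid (every name here is made up by me, nothing from 4A's actual engine), it could boil down to a per-source flag like this:

```cpp
#include <vector>

// Purely hypothetical sketch of a per-source flag in a hybrid renderer.
struct Color { float r = 0, g = 0, b = 0; };

Color operator+(Color a, Color b) { return {a.r + b.r, a.g + b.g, a.b + b.b}; }

enum class GIMode { Baked, RayTraced };  // authored per light source

struct Light {
    float  intensity = 1.0f;
    GIMode giMode    = GIMode::Baked;
};

// Stand-in for the engine's actual RTGI dispatch for one source.
Color traceIndirect(const Light& l) {
    float v = 0.1f * l.intensity;
    return {v, v, v};
}

// Hybrid shading: RT-flagged sources go through the traced path, while
// baked sources were already folded into the lightmap sample offline.
Color shadePoint(const std::vector<Light>& lights, Color lightmapSample) {
    Color out = lightmapSample;
    for (const Light& l : lights)
        if (l.giMode == GIMode::RayTraced)
            out = out + traceIndirect(l);
    return out;
}
```

The upshot: flip a handful of key lights to the RT path, leave the rest baked, and you get most of the visual win for a fraction of the cost.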
It's also not like RT lighting is a flat improvement. Sometimes things need tweaking and curating. Additionally, I think the effect is material-dependent. Like... maybe a flag or a special texture. At a minimum, the qualities of things like specular and normal bump maps will telegraph into how the ray-traced illumination ultimately looks, and how apparent it is on those surfaces. When it comes to reflections, I think it's triggered on a per-surface basis, and can in fact even be blended with traditional SSR, to enable using less accuracy/range in the RT reflections and save performance. The degree probably goes by some indexed material quality, a variable that dictates how reflective the surface is, provided the right boolean is flipped.
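As a rough sketch of what that per-surface gating and SSR blend could look like (again, all hypothetical names, not any engine's real API):

```cpp
// Idea: a per-surface boolean gates RT reflections, an indexed
// "reflectivity" value scales them, and the result fades to SSR near
// the ray cap so the traced rays can stay short and cheap.
struct Color { float r = 0, g = 0, b = 0; };

struct Material {
    float reflectivity  = 0.0f;  // indexed material quality, 0..1
    bool  rtReflections = false; // the per-surface boolean gate
};

Color lerp(Color a, Color b, float t) {
    return {a.r + (b.r - a.r) * t,
            a.g + (b.g - a.g) * t,
            a.b + (b.b - a.b) * t};
}

// Trivial stand-ins for the two reflection paths.
Color traceReflection()       { return {0.9f, 0.9f, 0.9f}; }  // accurate, expensive
Color screenSpaceReflection() { return {0.7f, 0.7f, 0.7f}; }  // cheap fallback

Color reflectionTerm(const Material& m, float hitDistance, float maxRayLength) {
    if (!m.rtReflections || m.reflectivity <= 0.0f)
        return screenSpaceReflection();       // traditional path only

    // Blend toward SSR as the reflected hit approaches the ray cap.
    float t = hitDistance / maxRayLength;     // 0 = fully RT, 1 = fully SSR
    if (t > 1.0f) t = 1.0f;
    Color c = lerp(traceReflection(), screenSpaceReflection(), t);

    // Scale by the material's reflectivity index.
    return {c.r * m.reflectivity, c.g * m.reflectivity, c.b * m.reflectivity};
}
```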
The main visual differentiators for lighting have to be things like ray length, filtering level/accuracy, internal resolution, depth-related parameters, min/max range, and the intensity/balance of illumination versus occlusion. And then you play around with materials to balance the different elements and plan your source placement carefully.
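If I had to guess at the shape of those knobs, it'd be something like this (the struct and the preset numbers are pure invention on my part):

```cpp
// A made-up knob list roughly matching the parameters above -- the sort
// of thing a Low/High RT quality toggle might map to under the hood.
struct RTGISettings {
    float maxRayLength   = 50.0f;  // how far rays travel before giving up
    int   raysPerPixel   = 1;      // accuracy vs. noise tradeoff
    int   denoiserPasses = 2;      // filtering level
    float internalScale  = 0.5f;   // RT buffer resolution relative to native
    float minRange       = 0.1f;   // depth-related contribution cutoffs
    float maxRange       = 200.0f;
    float giIntensity    = 1.0f;   // balance of added illumination...
    float aoIntensity    = 1.0f;   // ...versus occlusion darkening
};

// Lower presets trace shorter, sparser, lower-resolution rays -- which is
// exactly why the effect "hits on fewer things" and reads as subtler.
RTGISettings lowPreset()  { return {20.0f, 1, 1, 0.5f, 0.1f, 100.0f, 0.8f, 0.8f}; }
RTGISettings highPreset() { return {80.0f, 2, 3, 1.0f, 0.1f, 400.0f, 1.0f, 1.0f}; }
```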
Though honestly, RT shaders do require some pretty major changes to the pipeline. But you still NEED a 2D raster stage at the end, after the RT-injected 3D pipeline. So it should in theory be possible to blend the output between ray-traced and traditional as selectively as you could want, though some things may not wind up being as straightforward as they seem when it comes to compatibility. It's up to the engine devs to decide how to incorporate it into their pipelines, and that will play a major role in what you can and can't do.
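Schematically, I'd picture the frame looking something like this (every function is a placeholder I named myself, not a real engine or graphics API):

```cpp
// Schematic frame loop showing where the RT passes slot in.
void rasterizeGBuffer()        {}  // traditional 3D raster: geometry -> G-buffer
void traceGlobalIllumination() {}  // RT pass reading the full scene
void traceReflections()        {}  // RT pass, can fall back to SSR per surface
void compositeLighting()       {}  // 2D stage: blend RT and raster terms per pixel
void postProcessAndPresent()   {}  // tone mapping, UI, swap

void renderFrame() {
    rasterizeGBuffer();           // 1. rasterize as usual
    traceGlobalIllumination();    // 2. RT injected mid-frame...
    traceReflections();           //    ...against the same scene
    compositeLighting();          // 3. the mandatory 2D stage at the end,
    postProcessAndPresent();      //    where the selective blending happens
}
```

That composite step at the end is where the selective blending would live, assuming the engine exposes it at all.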