"Computers can't do full ray-traced lighting. They can only approximate."

What I meant is that the current GPUs that support ray tracing aren't powerful enough to use it in a way that matters. Who cares about some nice puddle reflection? I want fully ray-traced lighting and shadows.
Computers can't even store precise numbers, so what exactly do you expect?
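That part is true, to be fair: binary floating point can't even represent 0.1 exactly, so approximation is baked in from the start. A trivial host-side sketch (plain C++, nothing GPU-specific, just for illustration):

```cpp
// float_precision.cpp -- builds with any C++ compiler (or nvcc).
#include <cstdio>

int main() {
    // 0.1 has no exact binary representation, so the error accumulates.
    float sum = 0.0f;
    for (int i = 0; i < 1000000; ++i)
        sum += 0.1f;

    printf("expected 100000.0, got %f\n", sum);  // noticeably off, not 100000.0
    printf("0.1 + 0.2 == 0.3 ? %s\n", (0.1 + 0.2 == 0.3) ? "yes" : "no");  // prints "no"
    return 0;
}
```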
"Which are often badly ported console games."

My best guess is that some game and engine developers would rather optimize for the majority (possibly with incentives behind it). We've seen plenty of recent games where Radeon graphics cards perform better than their Nvidia counterparts.
If you don't care about optimization, a game will likely run better on more powerful hardware, which is usually AMD.
If you care, it may be easier to optimize for Nvidia. It's certainly easier to do general-purpose computing on Nvidia, largely thanks to CUDA.
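For anyone who hasn't touched it: general compute on an Nvidia card is basically a C++ kernel plus a launch. A minimal sketch (the classic SAXPY example, using unified memory to keep it short; build with nvcc):

```cpp
// saxpy.cu -- y = a*x + y on the GPU, the "hello world" of GPGPU.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per element
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    // Unified memory keeps the example short; explicit cudaMemcpy works too.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %.1f (expect 4.0)\n", y[0]);
    cudaFree(x); cudaFree(y);
    return 0;
}
```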
As for Tensor and RT cores: we're still talking about using them the way Nvidia intends on a gaming card, i.e. for DLSS and real-time ray tracing. But the same silicon shows up in compute accelerators doing far more general work. After all, it's just circuitry optimized for certain linear-algebra problems. It's only a matter of time until we learn how to put it to use in consumer PCs.
So while enabling real-time ray tracing will always cost some fps (like any other image-quality option), switching it off could free that hardware for other jobs and actually provide a performance boost.
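To make that less hand-wavy: the Tensor Cores aren't locked to DLSS; CUDA exposes them through the public WMMA API to anyone who wants a fast matrix multiply-accumulate. A rough sketch (assumes a Volta/Turing-or-newer card, compiled with nvcc -arch=sm_70 or higher):

```cpp
// tensor_core_mma.cu -- drive the Tensor Cores for a generic 16x16 matrix
// multiply-accumulate; nothing DLSS- or graphics-specific about it.
#include <cstdio>
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

// One warp cooperatively computes C = A * B for 16x16 half-precision tiles,
// accumulating in float -- exactly the kind of algebra the unit is built for.
__global__ void mma16x16(const half* a, const half* b, float* c) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);
    wmma::load_matrix_sync(a_frag, a, 16);
    wmma::load_matrix_sync(b_frag, b, 16);
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);   // executed on the Tensor Cores
    wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);
}

int main() {
    half *a, *b; float *c;
    cudaMallocManaged(&a, 16 * 16 * sizeof(half));
    cudaMallocManaged(&b, 16 * 16 * sizeof(half));
    cudaMallocManaged(&c, 16 * 16 * sizeof(float));
    for (int i = 0; i < 16 * 16; ++i) { a[i] = __float2half(1.0f); b[i] = __float2half(1.0f); }

    mma16x16<<<1, 32>>>(a, b, c);   // a single warp drives the unit
    cudaDeviceSynchronize();

    printf("c[0] = %.1f (expect 16.0)\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Whether game engines (or anything else on a consumer PC) start doing this kind of thing when ray tracing is switched off is exactly the open question.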
It's a bit too early to judge, but look what happened with the whole GPGPU phenomenon. And it all started in a similar way - GPU makers designed circuitry that was meant to improve lighting effects.