The point was, where are you getting that from? There are no comparisons to base that conclusion on yet. Once current benchmarks/games with known performance metrics are patched with RTRT, we can make comparisons and draw conclusions. We're not there yet, and there is only speculation as to real-world performance.
Everything we have seen so far is clearly low-FPS material; there is no denying that. So there is that, for measurable performance. On top of that, we see low-poly models left and right - this Lunar Landing video even has them and they are quite noticeable - and that is happening in *tech demos*. And on top of all that, we know many of these demos require multiple GPUs and a fixed, highly optimized run/scenario. Gaming is none of that: SLI is no longer interesting (NVLink won't change that; it's more of a niche than it ever was, since midrange no longer has it), and games are naturally not perfectly optimized and far more dynamic than a demo.
You say you see the writing on the wall - THAT is the writing on the wall. Performance is abysmal, and that is underlined every time we see or hear of RTRT. And when it isn't, the result is blurry junk that, while 'dynamic', is convincingly uglier than the tried-and-tested approach, while still performing worse than it.
The quality we need in RTRT is still far beyond reach for most games in any sort of playable setting.
Last but not least, and this is very important to keep in the back of your head: 'RT cores' and the current state of Turing are just a hand-me-down from Volta. The intention was never to go all-out on ray tracing; it's just something the Tensor hardware happens to be quite capable of. Almost nothing in the Turing GPU was specifically built for RTRT in games. It's still mostly meant for development work; this tech was meant for Quadro- and Tesla-class GPUs. That also explains why we get cut-down chips, it suggests that yields are not fantastic and that the cost/risk is rather high, and the performance confirms that none of this was purpose-built for real-time RT. It just works 'okay-ish' on Turing as well. That is a somewhat different, but far more realistic, perspective on the 'why' behind this release, and I daresay a tad more believable than Nvidia saying they've worked on it for 10 years.
So, bottom line, for me:
- This can easily turn out to be PhysX 2.0
- Industry 'support' says next to nothing. Remember DX12's MGPU support? If you leave it to devs to implement, it will be scarce.
- Nvidia is destroying all incentive to saturate the market with RT-capable cards (high-end only, no content at launch to trigger buyers, and killing SLI at midrange all contribute to this)
- Implementing it today means spending resources and time on a tiny niche of the gaming market
- Ray tracing isn't new, and trying to do it in real time isn't new either. If it's such a holy grail, why approach it with such a sad attempt, and for such a tiny subgroup of buyers? If Nvidia truly believed in it, the approach would be different.
Turing is a test vehicle for RTRT. Mark my words. If it doesn't stick, it'll remain an afterthought that will barely see implementation.