The irony of this story: if you put the same ground rules and assets in place for a raster scene, it behaves exactly the same way. Many engines run simulations, just as RT is a simulation. And a raster pipeline handles the code that's there much more efficiently, because it doesn't calculate all sorts of stuff it won't use (no probing; aggressive culling).
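To illustrate the kind of up-front culling work being referred to, here is a minimal, hypothetical sketch (not from any real engine) of sphere-vs-frustum rejection: a raster pipeline spends a little math per object to skip drawing things the camera can't see, rather than brute-forcing everything. The plane layout and values are made up for the example.

```python
from dataclasses import dataclass

@dataclass
class Plane:
    # Plane in the form a*x + b*y + c*z + d = 0, with the normal
    # (a, b, c) pointing toward the inside of the view frustum.
    a: float
    b: float
    c: float
    d: float

    def signed_distance(self, x, y, z):
        return self.a * x + self.b * y + self.c * z + self.d

def sphere_in_frustum(planes, center, radius):
    """Return False if the bounding sphere lies fully outside any plane."""
    cx, cy, cz = center
    for p in planes:
        if p.signed_distance(cx, cy, cz) < -radius:
            return False  # entirely outside this plane: skip drawing it
    return True

# Toy frustum: just a near plane (z >= 1) and a far plane (z <= 100).
frustum = [Plane(0, 0, 1, -1), Plane(0, 0, -1, 100)]

print(sphere_in_frustum(frustum, (0, 0, 50), 5))   # inside the frustum
print(sphere_in_frustum(frustum, (0, 0, -20), 5))  # behind the near plane
```

A real engine tests all six frustum planes (and usually adds occlusion culling on top), but the principle is the same: a cheap test per object saves far more work downstream.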
It's potato, potahto: both take work, and Nvidia has provided zero proof that RT workflows magically require fewer man-hours for similar results. Just guesstimates, induced by a healthy dose of marketing for the next big thing.
Nothing "just works". All those things RT does 'on its own' are useless as long as we lack the horsepower to push it anyway, so you end up spending an equal amount of time fixing all of that.
The only thing you need less of with RT is talented devs and designers. Raster takes more skill to get right, not more time. RT is just a lazy package that brute-forces it for you and passes the bill to end users.
I've seen it too often: new ways of working, new algorithms... and yet every half-serious dev squad still has a backlog to keep it going for years...