Monday, August 20th 2018

NVIDIA's Move From GTX to RTX Speaks to Belief in Revolutionary Change in Graphics

Editorial
NVIDIA, at its Gamescom presentation, finally took the lid off its long-awaited refresh to the GeForce lineup - and there's more than a thousand-point jump in model numbers (and a consonant change) to it. At the Palladium venue in Cologne, Germany (which was chock-full with press and NVIDIA-invited attendees), Jensen Huang took the stage to present a video on the advancement of graphics simulation: the animations behind Tron, the first Star Wars, the original Tomb Raider, multi-texturing on the RIVA TNT, and onward through Hollywood special effects... every incarnation of the pixels and triangles we've grown accustomed to.

We already know the juicy tidbits - the three models being released, when, and their pricing (with a hike to boot on the 2070 graphics card, which sees its price increased by $100 compared to last gen's 1070). We know the cooling solution official NVIDIA cards will sport, and how the company will be partnering with game developers to ensure the extra hardware it has invested time, money, and a name change into will bear fruit. But what's behind this change? What brought us to this point in time? What powered the company's impressive Sol demo?
It's been a long road for NVIDIA ever since Turner Whitted (now a contributor at NVIDIA Research) began his work on multi-bounce recursive ray tracing way back in 1978. Jensen Huang says that GPU development and improvement has been moving at ten times the pace Moore's Law demanded of CPUs - a thousandfold improvement every ten years. But ray tracing is - or was - expected to require petaflops of computing power: yet another step that would take some ten years to achieve.
NVIDIA, naturally, didn't want any of that. According to Jensen Huang, that meant the company had to achieve an improvement equivalent to 1,000 times more performance - ten years earlier. The answer to that performance conundrum is RTX - a simultaneous hardware, software, SDK, and library push, united in a single platform. RTX hybrid rendering unifies rasterization and ray tracing: a first, highly parallel rasterization pass is followed by a ray-tracing pass that acts only upon the rendered pixels, yet allows for effects, reflections, and light sources originating outside the visible scene - contributions that were virtually nonexistent with pre-ray-tracing rendering techniques. Now, RT cores can work in tandem with rasterization compute to achieve reasonable rendering times for ray-traced scenes that would, according to Jensen Huang, take ten times longer to render on Pascal-based hardware.
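To make the two-pass idea more concrete, here is a minimal, deliberately toy sketch of hybrid rendering's control flow: a cheap rasterization-style pass resolves what each pixel sees, and the ray-tracing pass then spawns rays only for the pixels that actually need them. The scene, material names, and shading values below are invented for illustration - this is not NVIDIA's RTX API.

```python
import math

WIDTH, HEIGHT = 8, 4  # tiny framebuffer so the output stays readable

def raster_pass():
    """Stand-in for the rasterizer: fill a G-buffer with per-pixel hit info.
    Here, the left half of the screen is a matte wall, the right half a mirror."""
    gbuffer = []
    for y in range(HEIGHT):
        row = []
        for x in range(WIDTH):
            material = "mirror" if x >= WIDTH // 2 else "matte"
            row.append({"material": material, "base_color": 0.5})
        gbuffer.append(row)
    return gbuffer

def trace_reflection(x, y):
    """Stand-in for an RT-core ray query: return some bounced radiance."""
    return 0.25 + 0.05 * math.sin(x + y)

def ray_pass(gbuffer):
    """Second pass: rays are cast only for pixels whose material needs them."""
    image = [[0.0] * WIDTH for _ in range(HEIGHT)]
    rays_cast = 0
    for y in range(HEIGHT):
        for x in range(WIDTH):
            pixel = gbuffer[y][x]
            color = pixel["base_color"]
            if pixel["material"] == "mirror":
                color += trace_reflection(x, y)
                rays_cast += 1
            image[y][x] = color
    return image, rays_cast

gbuffer = raster_pass()
image, rays_cast = ray_pass(gbuffer)
print(f"Cast {rays_cast} rays for {WIDTH * HEIGHT} pixels")
```

The point is simply that the expensive ray work is scoped to pixels the raster pass has already resolved, which is what lets the two techniques share a single frame budget.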
(NVIDIA CEO Jensen Huang quipped that for gamers to be able to achieve ray tracing before RT cores were added to the silicon and architecture design mix, they'd have to pay $68,000 for a DGX with four Tesla V100 graphics cards. He even offered to do so in 3,000 easy payments of $19.95.)
Turing has been ten years in the making, and Jensen Huang says this architecture and its RT cores are the greatest jump in graphics computing for the company - and he likely meant the industry as well - since CUDA. The combination of three new or revised processing engines inside each piece of Turing silicon brings about this jump: the Turing SM, which delivers 14 TFLOPS alongside 14 TIPS (tera integer operations per second) of concurrent FP and INT execution; the Tensor cores, with their 110 TFLOPS of FP16, 220 TOPS of INT8, and a doubling again to 440 TOPS of INT4 performance; and the RT core, with its 10 Giga Rays/sec (a figure Jensen Huang loves saying). For comparison, the 1080 Ti can achieve, under peak conditions, 1.21 Giga Rays per second - roughly an eighth of that throughput.
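For a sense of scale, here is a quick back-of-the-envelope check of the quoted ray-throughput figures; the 60 fps frame budget and 4K resolution below are our own assumptions, used purely for illustration.

```python
# Illustrative arithmetic only - the Giga Rays figures come from the
# presentation; the 60 fps budget and 4K resolution are assumptions.

turing_grays = 10.0   # Giga Rays/sec claimed for Turing's RT core
pascal_grays = 1.21   # Giga Rays/sec quoted for the GTX 1080 Ti

print(f"Ray throughput ratio: {turing_grays / pascal_grays:.1f}x")       # ~8.3x

frame_time = 1.0 / 60.0                          # seconds per frame at 60 fps
rays_per_frame = turing_grays * 1e9 * frame_time
pixels_4k = 3840 * 2160
print(f"Rays per 60 fps frame: {rays_per_frame / 1e6:.0f} million")      # ~167 million
print(f"Rays per 4K pixel per frame: {rays_per_frame / pixels_4k:.1f}")  # ~20
```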
And the overall effect on performance is nothing short of breathtaking, at least in the terms laid out by Jensen Huang: a single Turing chip replaces the four V100 GPUs found within the DGX, lowering the render time for a ray-traced scene to just 45 ms against the quad-V100 setup's 55 ms. Pascal, on the other hand, would take 308 ms to render the same scene - in its 1080 Ti rendition, no less.
A New Standard of Performance
Ray tracing is being done throughout a single Turing frame, concurrently with the FP32 shading work - without RT cores, the ray-tracing portion of the frame time (the green bar in NVIDIA's breakdown) would be ten times larger. Now it fits entirely alongside the FP32 shading, followed by INT shading. And there are resources enough left over to add some DNN (deep neural network) processing to boot: NVIDIA is looking to generate AI-inferred pixels with it. Essentially, the 110 TFLOPS delivered by the Tensor cores - some ten times the equivalent performance of a 1080 Ti - will be used to fill in pixels, true to life, as if they had actually been rendered. Super-resolution applications may well follow; this could be a way of increasing pixel density by inferring additional pixels for an image.
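As a rough illustration of that pixel-infill concept (and only the concept - the actual DLSS network and training pipeline are not public in detail), here is a toy upscaler that renders fewer pixels and infers the rest, with a simple neighborhood average standing in for the learned model.

```python
def upscale_2x(low_res):
    """Double a grayscale image's width and height, inferring the new pixels.
    A box-filter average of nearby rendered pixels stands in for the neural
    network that a DLSS-style approach would use to predict them."""
    h, w = len(low_res), len(low_res[0])
    high_res = [[0.0] * (w * 2) for _ in range(h * 2)]
    for y in range(h * 2):
        for x in range(w * 2):
            sy, sx = min(y // 2, h - 1), min(x // 2, w - 1)   # nearest rendered pixel
            ny, nx = min(sy + 1, h - 1), min(sx + 1, w - 1)   # its neighbor
            high_res[y][x] = (low_res[sy][sx] + low_res[ny][sx]
                              + low_res[sy][nx] + low_res[ny][nx]) / 4.0
    return high_res

rendered = [[0.1, 0.4],
            [0.7, 1.0]]            # a tiny 2x2 "rendered" frame
print(upscale_2x(rendered))        # a 4x4 frame, three quarters of its pixels inferred
```

The real thing would replace that crude average with a network trained against high-quality ground-truth frames - which is where the Tensor cores' throughput comes in.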
Perhaps one of the least "sexy" tidbits out of NVIDIA's new generation launch is one of the most telling. The change from GTX to RTX pays its respects to years of history while unapologetically leaving it behind in a full push towards ray tracing. It speaks of abandoning years upon years of pixel-rasterization refinement in pursuit of what was only theoretically possible not that long ago - real-time ray tracing of lighting across multiple, physically-based bodies.

The move from GTX to RTX means NVIDIA is putting its full weight behind its RTX platform for product iterations and the future of graphics computing. It manifests in a re-imagined pipeline for graphics production, where costly, intricate, but ultimately faked solutions give way to steady improvements in graphics quality. And it speaks of a dream where AIs can write software themselves (and maybe even write themselves), where the perfect, ground-truth image is learned on deep-learning networks far from your local computing power and its approximation is delivered to you via DLSS - true cloud-assisted rendering, of sorts. It's bold, and it's now emblazoned across NVIDIA's vision, professional and gamer alike. We'll be here to see where it leads - with actual ray-traced graphics, of course.