
NVIDIA Turing GeForce RTX Technology & Architecture

Introduction

NVIDIA Turing is the company's best-kept secret, if it has indeed been 15 years in the making. The architecture introduces a feature NVIDIA considers so significant that it could be the biggest innovation in real-time 3D graphics since programmable shaders in the early 2000s. Real-time ray tracing has long been regarded as the holy grail of 3D graphics because of the sheer amount of computation needed to make it work. The new GeForce RTX family of graphics cards promises to put a semblance of ray tracing in the hands of gamers. We are calling it a semblance because NVIDIA relies on some very clever tricks to make it work, yet the resulting 3D scenes do resemble renders that have undergone hours of offline ray tracing.

Around this time last year, when we first heard the codename "Turing," we dismissed it as a chip NVIDIA might develop to cash in on the blockchain boom of the time, since the architecture is named after the mathematician who saved millions of lives by cracking the Nazi "Enigma" cipher, helping bring World War II to a speedier end. Little did we know that NVIDIA's tribute to Alan Turing wouldn't merely honor his achievements in cryptography, but rather his overall reputation as the Father of Artificial Intelligence and Theoretical Computing.

Over the past five years, NVIDIA invested big in AI, making its CUDA technology and powerful GPUs the platform of choice for building and training deep-learning neural networks. Training those networks proved time-consuming even on the most powerful GPUs, since it boils down to enormous volumes of matrix math, which called for hardware that accelerates tensor operations. NVIDIA thus built its first fixed-function component for tensor ops, simply called the "Tensor Core": a large, specialized unit that performs 4×4 matrix multiply-accumulate operations in a single clock. Tensor Cores debuted with the "Volta" architecture, which we thought at the time would be the natural successor to "Pascal." However, NVIDIA decided the time was ripe to bring RTX technology out of the oven instead.
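To give a sense of how this hardware is programmed, here is a minimal sketch using CUDA's warp matrix (WMMA) API, which has shipped with CUDA since version 9 and targets Tensor Cores on Volta-class and later GPUs. The kernel and host scaffolding are our own illustration, not NVIDIA sample code; the API exposes the hardware's 4×4 units as warp-level 16×16×16 fragments.

```cpp
// Build with: nvcc -arch=sm_70 wmma_demo.cu
#include <cstdio>
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

// One warp computes D = A * B + C on 16x16 half-precision tiles.
// The hardware decomposes each mma_sync onto the Tensor Cores'
// 4x4 multiply-accumulate units.
__global__ void wmma16x16(const half *a, const half *b, float *c)
{
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> aFrag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> bFrag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> cFrag;

    wmma::fill_fragment(cFrag, 0.0f);              // start the accumulator at zero
    wmma::load_matrix_sync(aFrag, a, 16);          // leading dimension = 16
    wmma::load_matrix_sync(bFrag, b, 16);
    wmma::mma_sync(cFrag, aFrag, bFrag, cFrag);    // the Tensor Core operation
    wmma::store_matrix_sync(c, cFrag, 16, wmma::mem_row_major);
}

int main()
{
    half *a, *b;
    float *c;
    cudaMallocManaged(&a, 16 * 16 * sizeof(half));
    cudaMallocManaged(&b, 16 * 16 * sizeof(half));
    cudaMallocManaged(&c, 16 * 16 * sizeof(float));
    for (int i = 0; i < 16 * 16; i++) {
        a[i] = __float2half(1.0f);
        b[i] = __float2half(1.0f);
    }

    wmma16x16<<<1, 32>>>(a, b, c);                 // exactly one warp of 32 threads
    cudaDeviceSynchronize();
    printf("c[0] = %.1f\n", c[0]);                 // 16.0: dot of 16 ones with 16 ones
    return 0;
}
```

The point of the fixed-function unit is throughput: the entire 16×16×16 multiply-accumulate above retires in a handful of Tensor Core instructions, where CUDA cores would need thousands of scalar fused multiply-adds.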

The Turing graphics architecture introduces the third (and final) piece of the hardware puzzle that makes NVIDIA's ambitious consumer ray-tracing plans work: RT cores. An RT core is fixed-function hardware that does what RTX's spiritual ancestor, NVIDIA OptiX, did on CUDA cores. You feed it the mathematical representation of a ray, and it traverses the scene to calculate the ray's point of intersection with the triangles in it.
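To appreciate what an RT core offloads, below is a software sketch of the ray-triangle intersection test at the heart of that traversal, written as the classic Möller-Trumbore algorithm in CUDA. All names here are our own illustration; an actual RT core also performs the bounding volume hierarchy (BVH) traversal that decides which triangles are worth testing, entirely in fixed-function hardware.

```cpp
// Build with: nvcc ray_tri_demo.cu
#include <cstdio>

struct Vec3 { float x, y, z; };

__device__ Vec3  sub(Vec3 a, Vec3 b)  { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
__device__ float dot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }
__device__ Vec3  cross(Vec3 a, Vec3 b)
{
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}

// Moller-Trumbore: does the ray orig + t*dir hit triangle (v0, v1, v2)?
// Reports the hit distance through *t. An RT core runs this kind of test in
// fixed function for every candidate triangle its BVH traversal produces.
__device__ bool rayTriangle(Vec3 orig, Vec3 dir,
                            Vec3 v0, Vec3 v1, Vec3 v2, float *t)
{
    const float EPS = 1e-7f;
    Vec3 e1 = sub(v1, v0);
    Vec3 e2 = sub(v2, v0);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (fabsf(det) < EPS) return false;       // ray is parallel to the triangle
    float inv = 1.0f / det;
    Vec3 s = sub(orig, v0);
    float u = inv * dot(s, p);                // first barycentric coordinate
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = inv * dot(dir, q);              // second barycentric coordinate
    if (v < 0.0f || u + v > 1.0f) return false;
    *t = inv * dot(e2, q);                    // distance along the ray
    return *t > EPS;                          // hit must lie in front of the origin
}

__global__ void testRay(float *tOut)
{
    Vec3 orig = {0.0f, 0.0f, -1.0f}, dir = {0.0f, 0.0f, 1.0f};
    Vec3 v0 = {-1.0f, -1.0f, 0.0f}, v1 = {1.0f, -1.0f, 0.0f}, v2 = {0.0f, 1.0f, 0.0f};
    float t;
    tOut[0] = rayTriangle(orig, dir, v0, v1, v2, &t) ? t : -1.0f;
}

int main()
{
    float *tOut;
    cudaMallocManaged(&tOut, sizeof(float));
    testRay<<<1, 1>>>(tOut);
    cudaDeviceSynchronize();
    printf("hit distance: %.1f\n", tOut[0]);  // 1.0: the ray starts one unit away
    return 0;
}
```

Running a test like this per ray, per candidate triangle, across millions of rays every frame is exactly what made real-time ray tracing impractical on shader cores alone; dedicating silicon to it is what makes RTX feasible.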

NVIDIA RTX is an all-encompassing, highly flexible, real-time ray-tracing model for consumer graphics. It seeks to minimize the toolset and learning curve for today's 3D graphics programmers, and to deliver as tangible an impact on realism as anti-aliasing, programmable shaders, and tessellation (each of which triggered a leap in GPU compute power). On Turing, the latest-generation CUDA cores work in combination with the new RT cores and the Tensor Cores to make RTX work.

NVIDIA debuted RTX with the Quadro RTX line of professional graphics cards first, at SIGGRAPH 2018, not only because the event precedes Gamescom, but also because it gives content creators a head start with the technology. The GeForce RTX family is the first in a decade to lack "GTX" in its branding, which speaks to just how much weight rests on RTX to succeed.

In this article, we dive deep into the inner workings of the NVIDIA RTX technology and Turing GPU architecture, and how the two are put together in the first three GeForce RTX 20-series graphics cards you'll be able to purchase later this month.

Very soon, when the NVIDIA review embargo is lifted, we'll also publish our own reviews with Turing performance results in a wide selection of games.