NVIDIA GeForce RTX 2070 Founders Edition Review

Introduction


Back in September, NVIDIA launched its GeForce RTX 20-series graphics card family with two high-end SKUs: the RTX 2080 and RTX 2080 Ti. Close to a month later, the company is launching the third-fastest card in the family, the GeForce RTX 2070. This is an important product for NVIDIA because even at a relatively steep $500, it is the most affordable card offering real-time ray tracing in games, or at least a semblance of it. The RTX 2070 is aimed at the vast bulk of gamers who play at 1440p or lower resolutions.

NVIDIA has had a less than stellar track record of making sure its products are actually available at their announced MSRPs. $500 is the baseline price for the RTX 2070, while the NVIDIA GeForce RTX 2070 Founders Edition card we have with us for review today is priced at $599. With no reference-design card in the market at the baseline price, NVIDIA's board partners have often had a free hand to price even their cheapest custom-design offerings above it. With the RTX 2070, however, NVIDIA has reportedly cracked the whip on this practice: all partners are required to offer at least one RTX 2070 product priced at $500.



Arguably, NVIDIA needs RTX 2070 cards selling at $500 more than consumers need RTX at that price. Higher-end previous-generation GPUs are readily available around this price point, and every buyer who picks one of those over the RTX 2070 shrinks the captive audience for RTX-varnished content, which isn't available on consoles to begin with.

NVIDIA has also made some interesting design choices for the RTX 2070. Its predecessors, such as the GTX 1070 and GTX 970, have historically been based on the same chips as the SKUs directly above them, the GTX 1080 and GTX 980, respectively. NVIDIA is instead basing the RTX 2070 on its third-largest "Turing" chip, the TU106, rather than the TU104.

It's important to mention here, though, that the TU106 isn't exactly a successor to chips such as the GP106 or GM206. While those two have exactly half the muscle of the GP104 and GM204, respectively, the TU106 has half the muscle of the top-dog TU102 rather than half that of the TU104. The chip also gets the same 256-bit-wide GDDR6 memory interface as the TU104. The philosophy behind the TU106 may have been to design a lean chip that is cheaper to build simply because its die is smaller than the TU104's, minimizing waste while giving NVIDIA the ability to carve out lesser SKUs from it. NVIDIA can now better match production of the larger TU104 to demand for the $800 RTX 2080.

At $599, the value NVIDIA adds to the nominally $500 RTX 2070 with its Founders Edition card comes in the form of an elegantly designed dual-fan cooler and a factory overclock.

GeForce RTX 2070 Market Segment Analysis
| Card | Price | Shader Units | ROPs | Core Clock | Boost Clock | Memory Clock | GPU | Transistors | Memory |
|------|-------|--------------|------|------------|-------------|--------------|-----|-------------|--------|
| RX Vega 56 | $400 | 3584 | 64 | 1156 MHz | 1471 MHz | 800 MHz | Vega 10 | 12500M | 8 GB, HBM2, 2048-bit |
| GTX 1070 Ti | $400 | 2432 | 64 | 1607 MHz | 1683 MHz | 2000 MHz | GP104 | 7200M | 8 GB, GDDR5, 256-bit |
| GTX 1080 | $470 | 2560 | 64 | 1607 MHz | 1733 MHz | 1251 MHz | GP104 | 7200M | 8 GB, GDDR5X, 256-bit |
| RX Vega 64 | $570 | 4096 | 64 | 1247 MHz | 1546 MHz | 953 MHz | Vega 10 | 12500M | 8 GB, HBM2, 2048-bit |
| GTX 1080 Ti | $675 | 3584 | 88 | 1481 MHz | 1582 MHz | 1376 MHz | GP102 | 12000M | 11 GB, GDDR5X, 352-bit |
| RTX 2070 | $499 | 2304 | 64 | 1410 MHz | 1620 MHz | 1750 MHz | TU106 | 10800M | 8 GB, GDDR6, 256-bit |
| RTX 2070 FE | $599 | 2304 | 64 | 1410 MHz | 1710 MHz | 1750 MHz | TU106 | 10800M | 8 GB, GDDR6, 256-bit |
| RTX 2080 | $699 | 2944 | 64 | 1515 MHz | 1710 MHz | 1750 MHz | TU104 | 13600M | 8 GB, GDDR6, 256-bit |
| RTX 2080 FE | $799 | 2944 | 64 | 1515 MHz | 1800 MHz | 1750 MHz | TU104 | 13600M | 8 GB, GDDR6, 256-bit |
| RTX 2080 Ti | $999 | 4352 | 88 | 1350 MHz | 1545 MHz | 1750 MHz | TU102 | 18600M | 11 GB, GDDR6, 352-bit |
| RTX 2080 Ti FE | $1199 | 4352 | 88 | 1350 MHz | 1635 MHz | 1750 MHz | TU102 | 18600M | 11 GB, GDDR6, 352-bit |

Architecture

On the 14th of September, we published a comprehensive NVIDIA "Turing" architecture deep-dive article including coverage of its three new silicon implementations and the new RTX Technology. Be sure to catch that article for more technical details.


The "Turing" architecture caught many of us by surprise because it wasn't visible on GPU architecture roadmaps until a few quarters ago. NVIDIA took this roadmap detour over carving out client-segment variants of "Volta" as it realized it had achieved sufficient compute power to bring its ambitious RTX Technology to the client segment. NVIDIA RTX is an all-encompassing, real-time ray-tracing model for consumer graphics, which seeks to bring a semblance of real-time ray tracing to 3D games.


To enable RTX, NVIDIA has developed an all-new hardware component that sits next to the CUDA cores, called the RT core. An RT core is fixed-function hardware that does what the spiritual ancestor of RTX, NVIDIA OptiX, did over CUDA cores: you input the mathematical representation of a ray, and it traverses the scene to calculate the point of intersection with any triangle in it. This is a computationally heavy task that would otherwise bog down the CUDA cores.
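To give a sense of the work each such ray query involves, here is a minimal sketch of one classic formulation of the ray-triangle test (the Möller-Trumbore algorithm) in plain Python. It only illustrates the kind of math the fixed-function hardware accelerates; it is not NVIDIA's implementation, and RT cores additionally handle the bounding-volume-hierarchy traversal that narrows down candidate triangles.

```python
# Illustrative ray-triangle intersection (Moller-Trumbore); not NVIDIA's code.
def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-8):
    """Return distance t along the ray to the triangle, or None on a miss."""
    def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
    def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

    edge1, edge2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direction, edge2)
    det = dot(edge1, pvec)
    if abs(det) < eps:                  # ray is parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    tvec = sub(origin, v0)
    u = dot(tvec, pvec) * inv_det       # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, edge1)
    v = dot(direction, qvec) * inv_det  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(edge2, qvec) * inv_det      # distance along the ray
    return t if t > eps else None

# Example: a ray shot down the Z axis at a triangle lying in the Z=0 plane.
print(ray_triangle_intersect((0.2, 0.2, 1.0), (0.0, 0.0, -1.0),
                             (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))
# -> 1.0
```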

The other major introduction is the Tensor core, which made its debut with the "Volta" architecture. These, too, are specialized components, tasked with 4x4 matrix multiply-accumulate operations that speed up building and training deep-learning neural nets. Their relevance to gaming is limited at this time, but NVIDIA is introducing a few AI-accelerated image-quality enhancements that can leverage Tensor operations.
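Conceptually, the primitive a Tensor core executes is the fused operation D = A x B + C on small matrices. The plain-Python sketch below only illustrates that arithmetic; the hardware performs it on FP16 inputs with FP32 accumulation, many matrices per clock:

```python
# Illustrative fused multiply-accumulate, D = A x B + C, on 4x4 matrices.
def tensor_mma(A, B, C):
    n = len(A)
    return [[C[i][j] + sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

# Identity x identity + identity = 2 x identity.
I = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
print(tensor_mma(I, I, I)[0])  # -> [2.0, 0.0, 0.0, 0.0]
```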


The component hierarchy of a "Turing" GPU isn't much different from that of its predecessors, but the new-generation Streaming Multiprocessor is significantly different. It packs 64 CUDA cores, 8 Tensor cores, and a single RT core.

TU106 Graphics Processor


The TU106 is the third-largest chip based on the "Turing" architecture and, as we mentioned earlier, diverges from chips such as the GP106 in that it has half the number-crunching machinery of the largest TU102 chip, not half that of the TU104. This allows NVIDIA to give the RTX 2070 over three-fourths the CUDA cores of the RTX 2080 without wasting valuable TU104 dies by disabling CUDA cores that are sometimes perfectly functional.

At the topmost level, the GPU takes host connectivity from PCI-Express 3.0 x16 and connects to its memory across a 256-bit-wide GDDR6 bus, the exact same memory interface as on the RTX 2080 and the TU104 it is based on.

The GigaThread engine marshals load between three GPCs (graphics processing clusters). Each GPC has a dedicated raster engine and six TPCs (texture processing clusters). A TPC shares a PolyMorph engine between two SMs. Each SM packs 64 CUDA cores, 8 Tensor cores, and an RT core.

There are, hence, 768 CUDA cores, 96 Tensor cores, and 12 RT cores per GPC, and a grand total of 2,304 CUDA cores, 288 Tensor cores, and 36 RT cores across the TU106 silicon. The GeForce RTX 2070 maxes out this silicon with no disabled components. The GPU is endowed with 144 TMUs and 64 ROPs. You'll notice that the composition of the GPC is identical to that of the TU102, not that of the TU104.
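Those totals follow directly from the per-SM figures above; a quick tally:

```python
# Reproducing the arithmetic from the per-SM figures in the text.
gpcs = 3
sms_per_gpc = 6 * 2          # six TPCs per GPC, two SMs per TPC
cuda, tensor, rt = 64, 8, 1  # per SM

print(sms_per_gpc * cuda, sms_per_gpc * tensor, sms_per_gpc * rt)
# -> 768 96 12 (per GPC)
print(gpcs * sms_per_gpc * cuda, gpcs * sms_per_gpc * tensor,
      gpcs * sms_per_gpc * rt)
# -> 2304 288 36 (whole TU106)
```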


At its given memory clock of 14 Gbps, the RTX 2070 has the same memory bandwidth on tap as the RTX 2080, at 448 GB/s.
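That figure is simply the 14 Gbps effective data rate per pin multiplied by the 256-bit bus width, divided by eight bits per byte:

```python
# Memory bandwidth: effective data rate x bus width / bits per byte.
print(14 * 256 / 8, "GB/s")  # -> 448.0 GB/s
```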

Features

Again, we highly recommend you read our article from the 14th of September for intricate technical details about the "Turing" architecture feature set, which we are going to briefly summarize here.


NVIDIA RTX is a brave new feature that has triggered a leap in GPU compute power, just like earlier killer real-time consumer graphics features such as anti-aliasing, programmable shading, and tessellation. It provides a programming model for 3D scenes with ray-traced elements that improve realism. RTX introduces several turnkey effects that game developers can apply to specific sections of their 3D scenes rather than ray-tracing everything on screen (we're not quite there yet). A plethora of next-generation GameWorks effects could leverage RTX.


The architectural features perhaps more relevant to gamers come in the form of improvements to the GPU's shaders. In addition to concurrent INT and FP32 operations in the SM, "Turing" introduces Mesh Shading, Variable Rate Shading, Content-Adaptive Shading, Motion-Adaptive Shading, Texture-Space Shading, and Foveated Rendering.


Deep Learning Super Sampling (DLSS) is an ingenious new post-processing AA method that leverages deep neural networks built ad hoc for the purpose of guessing how an image should look upscaled. Inference runs on-chip, accelerated by the Tensor cores. Ground-truth data on how objects in the most common games should ideally look upscaled is fed via driver updates or GeForce Experience, and the DNN uses this data to reconstruct detail in 3D objects. NVIDIA claims DLSS 2X image quality comparable to 64x "classic" supersampling.
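The overall pipeline shape is easy to picture: render fewer pixels, then let a trained network reconstruct the full-resolution frame. In the toy sketch below, a nearest-neighbour resampler stands in for the driver-supplied network; the real model and its ground-truth training data live with NVIDIA, and none of these names correspond to NVIDIA APIs:

```python
# Toy sketch of the DLSS pipeline shape; "toy_upscale" is a placeholder
# for trained-DNN inference, not an NVIDIA API.
def toy_upscale(image, target_w, target_h):
    """Placeholder for DNN inference: nearest-neighbour resampling."""
    src_h, src_w = len(image), len(image[0])
    return [[image[y * src_h // target_h][x * src_w // target_w]
             for x in range(target_w)] for y in range(target_h)]

low_res = [[0, 1], [2, 3]]         # a 2x2 "frame", rendered cheaply
print(toy_upscale(low_res, 4, 4))  # reconstructed 4x4 frame
```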

Packaging and Contents

Package Front
Package Back




You will receive:
  • Graphics card
  • Documentation