
EVGA GeForce RTX 2080 Ti XC Ultra 11 GB Review


Introduction


NVIDIA launched the GeForce RTX 20-series with the introduction of the GeForce RTX 2080 and RTX 2080 Ti. The launch comes at a time when silicon fabrication technology isn't advancing at the rate it did four years ago, which has wrecked the architecture roadmaps of several semiconductor giants, including Intel, NVIDIA, AMD, and Qualcomm, forcing them to design innovative new architectures on existing foundry nodes. Brute transistor-count increases, as would have been the case with "Volta," are no longer a viable option, and NVIDIA needed a killer feature to sell new GPUs. That killer feature is RTX Technology. It is so big for NVIDIA that the company changed the nomenclature of its client-segment graphics cards with the introduction of the GeForce RTX 20-series.

NVIDIA RTX is a near-turnkey real-time ray-tracing model for game developers that lets them fuse real-time ray-traced objects into rasterized 3D scenes. Ray-tracing an entire scene in real time isn't quite possible yet, but the results achieved with RTX still look better than anything rasterization can produce. Even getting those few bits of ray tracing done right requires an enormous amount of compute power. NVIDIA has hence deployed purpose-built hardware components called RT cores, which sit alongside the general-purpose CUDA cores on its GPUs.



NVIDIA invested heavily to stay at the bleeding edge of the hardware that drives pioneering AI research and has, over the years, developed Tensor cores: specialized components tasked with matrix multiplication, which speeds up the building and training of deep-learning neural networks. Although "Turing" powers client-segment GPUs meant for gaming, NVIDIA feels GPU-accelerated AI could play an increasingly big role in the company's turnkey GameWorks effects and in a new image-quality enhancement called Deep Learning Super Sampling (DLSS). The chips are hence endowed with Tensor cores, just like the TITAN V. All they lack compared to last year's $3,000 graphics card is FP64 CUDA cores.

NVIDIA GeForce RTX 20-series graphics cards debut at unusually high prices compared to their predecessors, perhaps because NVIDIA doesn't count the GTX 10-series as a predecessor to begin with. These chips pack not just CUDA cores, but also RT cores and Tensor cores, adding to the transistor count, which, along with generational increases in performance, contributes to scorching 15%–70% increases in launch prices over the GTX 10-series. The GeForce RTX 2080 Ti is the fastest graphics card in the series and is priced at $999 for the base model.

EVGA's RTX 2080 Ti XC Ultra is based on the NVIDIA reference-design PCB, but comes with a much more capable triple-slot, dual-fan thermal solution. In terms of clocks, EVGA has bumped Boost to 1650 MHz, 15 MHz higher than the Founders Edition's 1635 MHz. Pricing is a bit higher than the Founders Edition, too: $1,249.

GeForce RTX 2080 Ti Market Segment Analysis

Card                        Price   Shader Units  ROPs  Core Clock  Boost Clock  Memory Clock  GPU      Transistors  Memory
GTX 1070                    $390    1920          64    1506 MHz    1683 MHz     2002 MHz      GP104    7200M        8 GB, GDDR5, 256-bit
RX Vega 56                  $400    3584          64    1156 MHz    1471 MHz     800 MHz       Vega 10  12500M       8 GB, HBM2, 2048-bit
GTX 1070 Ti                 $400    2432          64    1607 MHz    1683 MHz     2000 MHz      GP104    7200M        8 GB, GDDR5, 256-bit
GTX 1080                    $470    2560          64    1607 MHz    1733 MHz     1251 MHz      GP104    7200M        8 GB, GDDR5X, 256-bit
RX Vega 64                  $570    4096          64    1247 MHz    1546 MHz     953 MHz       Vega 10  12500M       8 GB, HBM2, 2048-bit
GTX 1080 Ti                 $675    3584          88    1481 MHz    1582 MHz     1376 MHz      GP102    12000M       11 GB, GDDR5X, 352-bit
RTX 2070                    $499    2304          64    1410 MHz    1620 MHz     1750 MHz      TU106    10800M       8 GB, GDDR6, 256-bit
RTX 2070 FE                 $599    2304          64    1410 MHz    1710 MHz     1750 MHz      TU106    10800M       8 GB, GDDR6, 256-bit
RTX 2080                    $699    2944          64    1515 MHz    1710 MHz     1750 MHz      TU104    13600M       8 GB, GDDR6, 256-bit
RTX 2080 FE                 $799    2944          64    1515 MHz    1800 MHz     1750 MHz      TU104    13600M       8 GB, GDDR6, 256-bit
RTX 2080 Ti                 $999    4352          88    1350 MHz    1545 MHz     1750 MHz      TU102    18600M       11 GB, GDDR6, 352-bit
RTX 2080 Ti FE              $1199   4352          88    1350 MHz    1635 MHz     1750 MHz      TU102    18600M       11 GB, GDDR6, 352-bit
EVGA RTX 2080 Ti XC Ultra   $1249   4352          88    1350 MHz    1650 MHz     1750 MHz      TU102    18600M       11 GB, GDDR6, 352-bit

Architecture

On September 14, we published a comprehensive NVIDIA "Turing" architecture deep-dive, including coverage of its three new silicon implementations and the new RTX Technology. Be sure to read that article for more technical details.


The "Turing" architecture caught many of us by surprise because it wasn't visible on GPU architecture roadmaps until a few quarters ago. NVIDIA took this roadmap detour over carving out client-segment variants of "Volta" as it realized it had achieved sufficient compute power to bring its ambitious RTX Technology to the client segment. NVIDIA RTX is an all-encompassing, real-time ray-tracing model for consumer graphics, which seeks to bring a semblance of real-time ray tracing to 3D games.


To enable RTX, NVIDIA has developed an all-new hardware component that sits next to the CUDA cores, called the RT core. An RT core is fixed-function hardware that does what RTX's spiritual ancestor, NVIDIA OptiX, did on CUDA cores: you feed it the mathematical representation of a ray, and it traverses the scene to calculate the point of intersection with any triangle in it. This is a computationally heavy task that would otherwise bog down the CUDA cores.
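
To illustrate the kind of work an RT core offloads, here is a minimal CPU-side sketch of a single ray/triangle intersection test using the well-known Möller–Trumbore algorithm. This is our choice for illustration only; NVIDIA doesn't disclose the exact fixed-function implementation. An RT core runs tests like this, along with the BVH traversal that finds candidate triangles, in dedicated hardware:

```cpp
#include <cmath>
#include <cstdio>
#include <optional>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Möller-Trumbore ray/triangle test: returns the distance t along the ray at
// which it hits triangle (v0, v1, v2), or nullopt on a miss.
std::optional<float> intersect(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2) {
    const float kEps = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < kEps) return std::nullopt;  // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 s = sub(orig, v0);
    float u = dot(s, p) * inv;                       // first barycentric coordinate
    if (u < 0.0f || u > 1.0f) return std::nullopt;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;                     // second barycentric coordinate
    if (v < 0.0f || u + v > 1.0f) return std::nullopt;
    float t = dot(e2, q) * inv;                      // distance along the ray
    return t > kEps ? std::optional<float>(t) : std::nullopt;
}

int main() {
    // Ray from (0, 0, -1) along +Z into a triangle lying in the z = 0 plane.
    auto hit = intersect({0, 0, -1}, {0, 0, 1}, {-1, -1, 0}, {1, -1, 0}, {0, 1, 0});
    if (hit) std::printf("hit at t = %.2f\n", *hit);  // prints: hit at t = 1.00
}
```

Multiply this by the millions of rays per frame that even a modest RTX effect requires, and it's clear why doing it on CUDA cores would leave little shader throughput for everything else.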

The other major introduction is the Tensor core, which made its debut with the "Volta" architecture. These too are specialized components, each tasked with fused multiply-accumulate operations on small 4x4 matrices, which speed up the building and training of deep-learning neural nets. Their relevance to gaming is limited at this time, but NVIDIA is introducing a few AI-accelerated image-quality enhancements that could leverage Tensor operations.
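
A minimal scalar sketch of that operation, D = A×B + C on 4x4 matrices, is below. On real hardware this is one Tensor core instruction (A and B in FP16, accumulating in FP16 or FP32, accessed through CUDA's WMMA API rather than scalar code); the plain-float loop here only shows the math being performed:

```cpp
#include <array>
#include <cstdio>

using Mat4 = std::array<std::array<float, 4>, 4>;

// D = A x B + C: the fused multiply-accumulate on 4x4 matrices that a single
// Tensor core performs per clock. 16 outputs x 4 products each = 64 FMA ops.
Mat4 tensor_op(const Mat4& A, const Mat4& B, const Mat4& C) {
    Mat4 D{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            float acc = C[i][j];
            for (int k = 0; k < 4; ++k)
                acc += A[i][k] * B[k][j];  // one FMA per iteration
            D[i][j] = acc;
        }
    return D;
}

int main() {
    Mat4 A{}, B{}, C{};
    for (int i = 0; i < 4; ++i) { A[i][i] = 2.0f; B[i][i] = 3.0f; C[i][i] = 1.0f; }
    Mat4 D = tensor_op(A, B, C);
    std::printf("D[0][0] = %.1f\n", D[0][0]);  // 2 x 3 + 1 = 7.0
}
```

Matrix multiply-accumulate is the dominant operation in neural-net training and inference, which is why baking it into silicon pays off so handsomely for AI workloads.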


The component hierarchy of a "Turing" GPU isn't much different from its predecessors, but the new-generation Streaming Multiprocessor is significantly different. It packs 64 CUDA cores, 8 Tensor Cores, and a single RT core.

TU102 Graphics Processor


The TU102 is the largest silicon based on the "Turing" architecture and powers the GeForce RTX 2080 Ti. It is also the biggest GPU die NVIDIA has ever built for the client segment, with over 18.6 billion transistors on a 754 mm² die fabricated on TSMC's 12-nanometer process. As we mentioned earlier, the essential component hierarchy of the "Turing" architecture hasn't changed. What has changed is the Streaming Multiprocessor (SM), the indivisible sub-unit of the GPU: it now packs CUDA cores, RT cores, and Tensor cores orchestrated by a new warp scheduler that supports concurrent INT and FP32 ops, which should improve the GPU's asynchronous compute performance.

At the topmost level, the GPU connects to the host over PCI-Express 3.0 x16, puts out an NVLink interface, and talks to GDDR6 memory across a 384-bit wide memory bus. On the RTX 2080 Ti, the memory interface is narrowed to 352-bit and wired to 11 GB of memory. The GigaThread engine marshals load between six GPCs (graphics processing clusters). Each GPC has a dedicated raster engine and six TPCs (texture processing clusters). Each TPC shares a PolyMorph engine between two SMs, and each SM packs 64 CUDA cores, 8 Tensor cores, and an RT core.

There are, hence, 768 CUDA cores, 96 Tensor cores, and 12 RT cores per GPC, and a grand total of 4,608 CUDA cores, 576 Tensor cores, and 72 RT cores across the TU102 silicon. The GeForce RTX 2080 Ti is carved out of the TU102 by disabling four SMs, resulting in 4,352 CUDA cores, 544 Tensor cores, and 68 RT cores. This configuration is endowed with 272 TMUs and 88 ROPs.
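
As a quick sanity check, those totals follow directly from the topology described above; a trivial sketch that derives them (per-SM figures and SM counts are from this section, the rest is arithmetic):

```cpp
#include <cstdio>

// Derives TU102 core counts: 6 GPCs x 6 TPCs x 2 SMs, with 64 CUDA cores,
// 8 Tensor cores, and 1 RT core per SM.
int main() {
    const int gpcs = 6, tpcs_per_gpc = 6, sms_per_tpc = 2;
    const int cuda_per_sm = 64, tensor_per_sm = 8, rt_per_sm = 1;

    int sms_full   = gpcs * tpcs_per_gpc * sms_per_tpc;  // 72 SMs on the full die
    int sms_2080ti = sms_full - 4;                       // 4 SMs disabled -> 68

    std::printf("Full TU102:  %d CUDA, %d Tensor, %d RT\n",
                sms_full * cuda_per_sm, sms_full * tensor_per_sm, sms_full * rt_per_sm);
    std::printf("RTX 2080 Ti: %d CUDA, %d Tensor, %d RT\n",
                sms_2080ti * cuda_per_sm, sms_2080ti * tensor_per_sm, sms_2080ti * rt_per_sm);
    // Full TU102:  4608 CUDA, 576 Tensor, 72 RT
    // RTX 2080 Ti: 4352 CUDA, 544 Tensor, 68 RT
}
```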


As we mentioned, the memory bus is narrowed down slightly to 352-bit and holds 11 GB of GDDR6 memory clocked at 14 Gbps, resulting in a memory bandwidth of 616 GB/s.
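
That bandwidth figure follows directly from the bus width and the per-pin data rate, as this one-line calculation shows:

```cpp
#include <cstdio>

int main() {
    const double bus_width_bits = 352.0;  // RTX 2080 Ti memory bus
    const double data_rate_gbps = 14.0;   // GDDR6 per-pin data rate
    // Bandwidth = (bus width in bytes) x (data rate per pin)
    std::printf("%.0f GB/s\n", bus_width_bits / 8.0 * data_rate_gbps);  // 616 GB/s
}
```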

Features

Again, we highly recommend reading our article from September 14 for intricate technical details about the "Turing" architecture feature set, which we briefly summarize here.


NVIDIA RTX is a brave new feature, triggering a leap in GPU compute power just like other killer real-time consumer graphics features before it, such as anti-aliasing, programmable shading, and tessellation. It provides a programming model for 3D scenes with ray-traced elements that improve realism. RTX introduces several turnkey effects that game developers can implement on specific sections of their 3D scenes rather than ray-tracing everything on screen (we're not quite there yet). A plethora of next-generation GameWorks effects could leverage RTX.


Architectural features perhaps more relevant to gamers come in the form of improvements to the GPU's shaders. In addition to concurrent INT and FP32 operations in the SM, "Turing" introduces Mesh Shading, Variable Rate Shading, Content-Adaptive Shading, Motion-Adaptive Shading, Texture-Space Shading, and Foveated Rendering.


Deep Learning Super Sampling (DLSS) is an ingenious new post-processing AA method that leverages deep neural networks trained specifically to infer how an image should look when upscaled. These networks run on-chip, accelerated by Tensor cores. Ground-truth data on how objects in the most common games should ideally look upscaled is fed via driver updates or GeForce Experience, and the DNN uses it to reconstruct detail in 3D objects. NVIDIA claims DLSS 2X image quality is comparable to 64x "classic" super-sampling.

Packaging and Contents

Package Front
Package Back

You will receive:
  • Graphics card
  • Documentation
  • EVGA case badge
  • DVI adapter