
NVIDIA GeForce RTX 2080 Ti Founders Edition 11 GB Review


Value and Conclusion

The GeForce RTX 2080 Ti Founders Edition is priced at $1,200, which is nearly double that of its predecessor at launch.

Pros:
  • Fastest graphics card, 4K 60 Hz is second nature, 4K 120 Hz possible with lower settings
  • RTX technology not gimmicky, does bring tangible IQ improvements
  • Deep-learning feature set
  • DLSS is an effective new AA method
  • Highly energy efficient
  • Overclocked out of the box
  • Quiet in gaming
  • Backplate included
  • HDMI 2.0b, DisplayPort 1.4, 8K support

Cons:
  • Terrible pricing
  • No Windows 7 support for RTX; requires Windows 10 Fall 2018 Update
  • Bogged down by power limits
  • No idle fan-off
  • High non-gaming power consumption (fixable, says NVIDIA)
Our exhaustive coverage of the NVIDIA GeForce RTX 20-series "Turing" debut also includes the following reviews:
NVIDIA GeForce RTX 2080 Founders Edition 8 GB | ASUS GeForce RTX 2080 Ti STRIX OC 11 GB | ASUS GeForce RTX 2080 STRIX OC 8 GB | Palit GeForce RTX 2080 Gaming Pro OC 8 GB | MSI GeForce RTX 2080 Gaming X Trio 8 GB | MSI GeForce RTX 2080 Ti Gaming X Trio 11 GB | MSI GeForce RTX 2080 Ti Duke 11 GB | NVIDIA RTX and Turing Architecture Deep-dive

For the last few weeks, the Internet has been abuzz with news of NVIDIA's new Turing architecture and the unique features it brings. NVIDIA is the first company to put graphics cards with hardware-accelerated ray tracing and artificial intelligence into the hands of gamers.

The GeForce RTX 2080 Ti is the company's flagship card, built around the large Turing TU102 graphics processor, which packs an incredible 4,352 CUDA cores and 18.6 billion transistors. This time, the NVIDIA Founders Edition comes overclocked out of the box, but at a higher price point, too. Our GeForce RTX 2080 Ti Founders Edition sample breezed through our large selection of benchmarks, with impressive results. The card is 38% faster than the GTX 1080 Ti on average at 4K resolution, which makes it the perfect choice for 4K 60 FPS gaming at the highest details. Compared to the RTX 2080, the performance uplift is 28%. AMD's fastest, the Vega 64, is far behind, reaching only about half the performance of the RTX 2080 Ti.

NVIDIA has made only small changes to their Boost 4.0 algorithm compared to what we saw with Pascal. For example, instead of dropping all the way to the base clock when the card reaches its temperature target, there is now a grace zone in which the clock drops gradually toward the base clock, which is reached once a second temperature cut-off point is hit. Temperatures of the RTX 2080 Ti Founders Edition are good: at 77°C under load, the card isn't even close to thermal throttling.
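To make the grace-zone behavior easier to picture, here is a toy model of it in Python. This is our reading of NVIDIA's description, not NVIDIA's actual algorithm, and the temperature points are made-up placeholders (only the 1350 MHz base clock matches the RTX 2080 Ti FE's spec):

```python
def boost_clock(temp_c, boost_mhz=1950, base_mhz=1350,
                temp_target=84, temp_cutoff=89):
    """Toy model of Boost 4.0's temperature grace zone.

    Below the temperature target, the card may run at its highest
    boost state. Between the target and a second cut-off point, the
    clock ramps down gradually instead of dropping straight to base
    clock (the Pascal behavior). Temperature points are placeholders.
    """
    if temp_c <= temp_target:
        return boost_mhz
    if temp_c >= temp_cutoff:
        return base_mhz
    # Linear ramp inside the grace zone between target and cut-off
    frac = (temp_c - temp_target) / (temp_cutoff - temp_target)
    return round(boost_mhz - frac * (boost_mhz - base_mhz))

for t in (80, 84, 86, 88, 90):
    print(f"{t} °C -> {boost_clock(t)} MHz")
```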

However, every single Turing card we tested today sits at its power limit the whole time during gaming. This means that the highest boost clocks are never reached during regular gameplay, in stark contrast to Pascal, where custom designs were almost always running at peak boost clocks. Just to clarify: the "rated" boost clock on vendor pages is a conservative value that's much lower than the highest reachable boost clock, and lower than what we measured during gaming as well. The rated boost clock for the RTX 2080 Ti Founders Edition is 1635 MHz. The peak boost clock we recorded (even if it was active for only a short moment) was 1950 MHz, with the average clock being 1824 MHz. With Turing, the bottleneck is simply no longer temperature, but power consumption, or rather, the BIOS-defined limit for it. Manually raising the power limit didn't eliminate power throttling, but it did provide additional performance, which makes this the easiest way to increase FPS, aside from manual overclocking.
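If you want to check this behavior on your own card, the driver exposes the active throttle reason through NVML. Here is a minimal monitoring sketch using the `pynvml` bindings (`pip install nvidia-ml-py`); it logs the graphics clock, board power, and whether the software power cap is currently limiting clocks:

```python
import time

import pynvml  # pip install nvidia-ml-py

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

try:
    while True:
        clock = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)
        power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # mW -> W
        reasons = pynvml.nvmlDeviceGetCurrentClocksThrottleReasons(handle)
        power_capped = bool(reasons & pynvml.nvmlClocksThrottleReasonSwPowerCap)
        print(f"{clock:4d} MHz  {power_w:6.1f} W  power-limited: {power_capped}")
        time.sleep(1.0)
except KeyboardInterrupt:
    pass
finally:
    pynvml.nvmlShutdown()
```

Run a game or benchmark alongside it; on the Turing cards described above, you should see "power-limited: True" almost continuously.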

NVIDIA has once more made significant improvements in power efficiency with their Turing architecture, which delivers roughly 10-15% better performance per watt than Pascal. Compared to AMD, NVIDIA is now almost twice as power efficient and twice as fast at the same time! The red team has some catching up to do: power draw generates heat, heat must be removed at the cost of fan noise, and power is now the number one limiting factor in graphics card design.
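Reading those two ratios together is instructive. A back-of-the-envelope check, using only the approximate relative numbers above:

```python
# Approximate ratios from the text: RTX 2080 Ti vs. Vega 64
perf_ratio = 2.0           # about twice the performance
perf_per_watt_ratio = 2.0  # about twice the efficiency

# performance-per-watt = performance / power, so:
power_ratio = perf_ratio / perf_per_watt_ratio
print(f"Relative board power: {power_ratio:.1f}x")  # ~1.0x
```

In other words, the two cards draw broadly similar power while one delivers twice the frame rate.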

Our power consumption readings for non-gaming states, like single-monitor and multi-monitor, showed terrible numbers. Multi-monitor power draw especially is a major issue at 57 W, five times that of the GTX 1080 Ti. When asked, NVIDIA told us that they are aware of the issue and that it will be fixed in a coming driver update. I specifically asked, "Are you just looking into it, or will it definitely be fixed?", and the answer was that it will definitely be fixed. This update will also reduce fan speed in idle, which will help bring down noise levels. I just wonder why NVIDIA doesn't add idle fan-stop to their cards; it's one of the most popular features these days.

Gaming noise levels of the RTX 2080 Ti Founders Edition are comparable to those of previous-generation Founders Edition cards, which means the cooler has received a long-overdue update: it handles higher power draw at lower temperatures with similar noise. Still, 37 dBA is not whisper quiet, even though it is highly acceptable considering the massive performance and the dual-slot design. This leaves plenty of opportunity for board partners to design quieter cards; we tested a few of them today with pretty impressive results.

Overclocking has become more complicated with this generation. Since the cards always run into the power limiter, you can no longer just dial in stable clocks for the highest boost state to find the maximum overclock; that state can't be reached reliably, so your testing is limited to whatever frequency your test load happens to run at. Nevertheless, we managed to pull through and achieved a decent overclock on our RTX 2080 Ti Founders Edition, which translates into 11% additional real-life performance. Overclocking potential seems quite similar on most cards, with the maximum boost clock being around 2100 MHz and the maximum GDDR6 clock ending up roughly between 1950 and 2050 MHz.
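The process itself still boils down to the same manual loop we've always used: raise the clock offset, stress test, back off at the first sign of instability. The sketch below captures that logic; `apply_core_offset` and `run_stress_test` are hypothetical stand-ins for whatever your tuning tool (MSI Afterburner, for example) exposes, since there is no standard cross-vendor API for this:

```python
def find_max_offset(apply_core_offset, run_stress_test,
                    low=0, high=200, step=15):
    """Search for the highest stable core-clock offset in MHz.

    Both callbacks are hypothetical: apply_core_offset(mhz) applies
    an offset via your tuning tool, and run_stress_test(minutes=...)
    returns False on a crash or visible artifacts.
    """
    best = low
    offset = low
    while offset <= high:
        apply_core_offset(offset)
        if not run_stress_test(minutes=10):
            break  # unstable; the previous offset is our best result
        best = offset
        offset += step
    apply_core_offset(best)  # settle on the last known-good offset
    return best

if __name__ == "__main__":
    # Simulated card that is stable up to a +90 MHz offset (made up)
    state = {"offset": 0}
    result = find_max_offset(
        apply_core_offset=lambda mhz: state.update(offset=mhz),
        run_stress_test=lambda minutes: state["offset"] <= 90,
    )
    print(f"Best stable offset: +{result} MHz")
```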

NVIDIA GeForce RTX doesn't just give you more performance in existing games. It introduces RT cores, which accelerate ray tracing, a rendering technique that can deliver realism impossible with today's rasterization rendering. Unlike in the past, NVIDIA's new technology is designed to work with various APIs from multiple vendors (Microsoft DXR, NVIDIA OptiX, Vulkan RT), which will make it much easier for developers to get behind ray tracing. At this time, not a single game has RTX support, but the number of titles that will support it is growing by the day. We had the chance to check out a few demos and were impressed by the promise of ray tracing in games.

I mentioned it before, but just to make sure: RTX will not turn games into fully ray-traced experiences. Rather, the existing rendering technologies will be used to generate most of the frame, with ray tracing adding specific effects, like lighting, reflections, or shadows for specific game objects that are tagged as "RTX" by the developer. It is up to the game developers which effects to choose and implement; they may go with one or several, as long as they stay within the available performance budget of the RTX engine. NVIDIA clarified to us that games will not just have RTX "on"/"off", but rather, you'll be able to choose between several presets; for example, RTX "low", "medium", and "high". Also, unlike GameWorks, developers have full control over what they implement and how. RTX "only" accelerates ray generation, traversal, and hit calculation, which are the fundamentals and the most complicated operations to develop; everything else is up to the developer, so I wouldn't be surprised if we see a large number of new rendering techniques developed over time as studios get more familiar with the technology.
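As a toy illustration of that effects-within-a-budget idea (entirely invented numbers, no real engine or API involved): each preset simply enables as many ray-traced effects as fit into its share of the frame time, while rasterization handles the rest of the frame.

```python
# Hypothetical per-frame costs (ms) of individual ray-traced effects
EFFECT_COST_MS = {
    "shadows": 1.5,
    "reflections": 2.5,
    "global_illumination": 4.0,
}

# Hypothetical ray-tracing budgets (ms) for each quality preset
PRESET_BUDGET_MS = {"low": 2.0, "medium": 4.5, "high": 8.5}

def plan_frame(preset):
    """Enable the cheapest effects first until the budget is spent."""
    budget = PRESET_BUDGET_MS[preset]
    enabled, spent = [], 0.0
    for effect, cost in sorted(EFFECT_COST_MS.items(), key=lambda kv: kv[1]):
        if spent + cost <= budget:
            enabled.append(effect)
            spent += cost
    return enabled

for preset in ("low", "medium", "high"):
    print(f"RTX {preset}: {plan_frame(preset)}")
```

In a real title, the developer makes these choices by hand per preset, of course; the point is only that each preset maps to a set of effects that fits the performance budget.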

The second big novelty of Turing is acceleration for artificial intelligence. While it was initially thought that this wouldn't do much for gamers, the company devised a clever new anti-aliasing algorithm called DLSS (Deep Learning Super-Sampling), which utilizes Turing's artificial intelligence engine. DLSS is designed to achieve quality similar to temporal anti-aliasing and to solve some of its shortcomings, while coming with a much smaller performance hit at the same time. We tested several tech demos for this feature and had difficulty telling the difference between TAA and DLSS in most scenes. The difference only became obvious in cases where TAA fails; for example, when it estimates motion vectors incorrectly. Under the hood, DLSS renders the scene at a lower resolution (roughly half the pixel count; for 4K, 2880x1620) and feeds the frame to the tensor cores, which use a predefined deep neural network to enhance that image. For each DLSS game, NVIDIA receives early builds from the game's developers and trains that neural network to recognize common forms and shapes of the models, textures, and terrain, building a "ground truth" database that is distributed through Game Ready driver updates. On the other hand, this means that gamers and developers are dependent on NVIDIA to train that network and ship the data for new games. Apparently, an auto-update mechanism exists that downloads new neural networks from NVIDIA without the need for a reboot or an update to the graphics card driver itself.
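To put the render-resolution numbers in perspective (our arithmetic, not an NVIDIA-published formula): 2880x1620 corresponds to a 75% scale on each axis, which works out to about 56% of the 4K pixel count.

```python
def dlss_input_resolution(out_w, out_h, axis_scale=0.75):
    """Internal render resolution before the tensor-core upscale.

    axis_scale=0.75 reproduces the 2880x1620 figure quoted above for
    4K output; the exact per-title scale is NVIDIA's choice, so treat
    this as an approximation.
    """
    return int(out_w * axis_scale), int(out_h * axis_scale)

w, h = dlss_input_resolution(3840, 2160)
fraction = (w * h) / (3840 * 2160)
print(f"{w}x{h} ({fraction:.0%} of the 4K pixel count)")  # 2880x1620 (56%)
```

That reduced shading workload is where DLSS's performance advantage over rendering natively with TAA comes from.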

While we understand that Turing GPUs are bigger, pack more components, bring more performance to the table, and are "more than a GPU from 2017," the current pricing is hard to justify, with the RTX 2080 starting at $700 and the RTX 2080 Ti at $1,000. The once-basic Founders Edition cards add another $100 and $200 on top of that, respectively, and custom board designs will be even more expensive. Similar leaps in technology in the past did not trigger such price hikes over a single generation. We hence feel that in its current form, the RTX 20-series is overpriced by at least 20% across the board, which could deter not just bleeding-edge enthusiasts, but also people upgrading from older generations, such as "Maxwell." For games that don't use RTX, the generational performance gains are significant, but not as big as those between "Maxwell" and "Pascal." On the other hand, I doubt many gamers will opt for Pascal when they choose to upgrade, especially with the promises of RTX and AI that NVIDIA is definitely going to market big. The key factor here will be game support, which looks to be gaining steam fast, going by recent announcements.
Editor's Choice
