Last month, NVIDIA released the GeForce GTX 1660 Ti and with it split its client-segment discrete graphics lineup into the GeForce GTX and GeForce RTX series. The RTX 20-series starts at the $350 mark with the RTX 2060, while models below it are relegated to the GTX brand. The best part? Both are based on NVIDIA's latest 12 nm "Turing" architecture. What sets the two apart is right in the name: RTX real-time raytracing technology.
NVIDIA probably figured that getting RTX to work even at 1080p requires a minimum amount of RT-core and CUDA-core horsepower, which cannot be scaled down beyond a certain point because enabling RTX features already exacts a roughly 30 percent performance tax. NVIDIA wouldn't want $200–$300 graphics cards that are unable to play RTX-enabled games at 1080p with acceptable frame rates. The RTX 2060 appears to be positioned right at that limit: in games without raytracing, it has enough muscle for 1440p, but with RTX enabled, playability falls somewhere between 1080p and 1440p.
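To put that performance tax in perspective, here is a rough back-of-the-envelope sketch (the 30 percent figure is the article's; the 90 FPS baseline is an arbitrary example, not a benchmark result):

```python
# Illustrative only: apply the ~30% RTX performance tax the article cites
# to a hypothetical baseline frame rate.
RTX_TAX = 0.30  # approximate cost of enabling RTX features


def fps_with_rtx(base_fps: float, tax: float = RTX_TAX) -> float:
    """Estimate the frame rate after the raytracing performance tax."""
    return base_fps * (1.0 - tax)


# A card pushing 90 FPS at 1080p would land near 63 FPS with RTX on,
# which is roughly where typical 1440p non-RTX results sit.
print(fps_with_rtx(90))
```

This is why a card has to start with a comfortable 1080p surplus before RTX effects become viable at all.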
The easiest way out of this problem for NVIDIA would be to not bother with RTX below the $350 mark and instead focus on making the GPU as cost-efficient as possible. With RTX out of the way, NVIDIA could physically remove the RT cores that add billions of transistors to the silicon, making the chips smaller. Interestingly, NVIDIA also decided to axe the tensor cores, specialized hardware that accelerates the building and training of deep-learning neural networks, shedding even more transistor load. The remaining CUDA cores are very much from the "Turing" architecture and benefit from the increased IPC and higher clock-speed headroom obtained from the switch to 12 nm. The largest such GTX Turing chip is the new "TU116". The second TU116-based card, the GTX 1660 (non-Ti), was announced very recently.
NVIDIA carved the GTX 1660 out of the "TU116" silicon by disabling 2 out of 24 streaming multiprocessors, resulting in 1,408 CUDA cores and 88 TMUs, which is still higher than what the "Pascal"-based GTX 1060 6 GB packs. With 48 ROPs and a 192-bit GDDR5 memory bus driving 6 GB of memory, the rendering and memory subsystem is practically carried over from the GTX 1060 6 GB.
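The core counts follow directly from the SM count. Assuming the Turing SM layout of 64 CUDA cores and 4 TMUs per streaming multiprocessor (figures not stated in the review itself), the arithmetic checks out:

```python
# Sanity-check the GTX 1660's shader counts from its SM configuration.
# Assumption: each Turing SM carries 64 CUDA cores and 4 TMUs.
CORES_PER_SM = 64
TMUS_PER_SM = 4


def cuda_cores(sms: int) -> int:
    """CUDA cores for a given number of enabled SMs."""
    return sms * CORES_PER_SM


def tmus(sms: int) -> int:
    """Texture units for a given number of enabled SMs."""
    return sms * TMUS_PER_SM


print(cuda_cores(24))  # full TU116 (GTX 1660 Ti): 1536 cores
print(cuda_cores(22))  # two SMs disabled (GTX 1660): 1408 cores
print(tmus(22))        # 88 TMUs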
Today, we're reviewing the ASUS GeForce GTX 1660 Ti STRIX OC, which is the company's premium offering based on this GPU, featuring a large triple-fan, triple-slot cooler, RGB support, dual BIOS, a metal backplate, and an overclock out of the box. The ASUS STRIX seems to tick all the feature checkboxes, but it isn't cheap at $330.
|Card||Price||Shader Units||ROPs||Core Clock||Boost Clock||Memory Clock||GPU||Transistors||Memory|
|RX 570||$150||2048||32||1168 MHz||1244 MHz||1750 MHz||Ellesmere||5700M||4 GB, GDDR5, 256-bit|
|RX 580||$185||2304||32||1257 MHz||1340 MHz||2000 MHz||Ellesmere||5700M||8 GB, GDDR5, 256-bit|
|GTX 1060 3 GB||$185||1152||48||1506 MHz||1708 MHz||2002 MHz||GP106||4400M||3 GB, GDDR5, 192-bit|
|GTX 1060||$200||1280||48||1506 MHz||1708 MHz||2002 MHz||GP106||4400M||6 GB, GDDR5, 192-bit|
|RX 590||$260||2304||32||1469 MHz||1545 MHz||2000 MHz||Polaris 30||5700M||8 GB, GDDR5, 256-bit|
|GTX 1070||$310||1920||64||1506 MHz||1683 MHz||2002 MHz||GP104||7200M||8 GB, GDDR5, 256-bit|
|RX Vega 56||$370||3584||64||1156 MHz||1471 MHz||800 MHz||Vega 10||12500M||8 GB, HBM2, 2048-bit|
|GTX 1660 Ti||$280||1536||48||1500 MHz||1770 MHz||1500 MHz||TU116||6600M||6 GB, GDDR6, 192-bit|
|ASUS GTX 1660 Ti||$330||1536||48||1500 MHz||1860 MHz||1500 MHz||TU116||6600M||6 GB, GDDR6, 192-bit|
|GTX 1070 Ti||$450||2432||64||1607 MHz||1683 MHz||2000 MHz||GP104||7200M||8 GB, GDDR5, 256-bit|
|RTX 2060 FE||$350||1920||48||1365 MHz||1680 MHz||1750 MHz||TU106||10800M||6 GB, GDDR6, 192-bit|
|GTX 1080||$500||2560||64||1607 MHz||1733 MHz||1251 MHz||GP104||7200M||8 GB, GDDR5X, 256-bit|
|RX Vega 64||$400||4096||64||1247 MHz||1546 MHz||953 MHz||Vega 10||12500M||8 GB, HBM2, 2048-bit|
|GTX 1080 Ti||$700||3584||88||1481 MHz||1582 MHz||1376 MHz||GP102||12000M||11 GB, GDDR5X, 352-bit|
|RTX 2070||$490||2304||64||1410 MHz||1620 MHz||1750 MHz||TU106||10800M||8 GB, GDDR6, 256-bit|