MSI GeForce GTX 1660 Gaming X 6 GB Review
Introduction

MSI Logo

NVIDIA launched the GeForce GTX 1660 graphics card, doubling down on its idea of Turing-based GeForce GTX cards that lack ray-tracing capabilities but offer performance uplifts in traditional raster graphics. At $280, last month's GTX 1660 Ti ended up leagues ahead of the similarly priced, recently launched Radeon RX 590 and could play any game at 1080p with details maxed out, and at 1440p with a little tweaking. At its price point, though, the GTX 1660 Ti wasn't really a successor to the current mainstream market leader in terms of sheer sales, the GTX 1060 6 GB. That distinction now goes to the GTX 1660.

At $220, the GTX 1660 is built to a cost, and NVIDIA has made sure it has ample headroom to cut costs further in the future if AMD comes out with competitive products in this segment, such as the fabled "Navi." It is carved out of the same 12 nm "TU116" silicon as the GTX 1660 Ti with fewer CUDA cores and slower 8 Gbps GDDR5 memory replacing 12 Gbps GDDR6. NVIDIA is hence looking to offer a product that's incrementally faster than the GTX 1060 6 GB and anything AMD has to offer in this segment, which can still deliver on Full HD gameplay with maximum quality.



As we detailed in our GTX 1660 Ti reviews, the "TU116" silicon is derived from the "Turing" architecture by removing RT cores and tensor cores, leaving just the CUDA cores, which retain the same IPC and clock-speed uplifts as any RTX 20-series card. The target audience for the GTX 1660 is that colossal mass of gamers into online multiplayer e-sports titles who just need a card that can keep them ticking at Full HD, perhaps even at high refresh rates.

NVIDIA carved the GTX 1660 out of the "TU116" silicon by disabling 2 out of 24 streaming multiprocessors, resulting in 1,408 CUDA cores and 88 TMUs, which is still more than what the "Pascal"-based GTX 1060 6 GB packs. With 48 ROPs and a 192-bit GDDR5 memory bus driving 6 GB of memory, the render and memory subsystems are practically carried over from the GTX 1060 6 GB.

Today, we have with us the MSI GeForce GTX 1660 Gaming X, the company's premium offering based on this GPU, featuring its Twin Frozr 7 cooling solution, a back-plate, idle fan-stop, and factory-overclocked speeds.

GeForce GTX 1660 Market Segment Analysis
| | Price | Shader Units | ROPs | Core Clock | Boost Clock | Memory Clock | GPU | Transistors | Memory |
|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|
| GTX 1050 | $140 | 640 | 32 | 1354 MHz | 1455 MHz | 1752 MHz | GP107 | 3,300M | 2 GB, GDDR5, 128-bit |
| GTX 1050 Ti | $180 | 768 | 32 | 1290 MHz | 1392 MHz | 1752 MHz | GP107 | 3,300M | 4 GB, GDDR5, 128-bit |
| RX 570 | $130 | 2048 | 32 | 1168 MHz | 1244 MHz | 1750 MHz | Ellesmere | 5,700M | 4 GB, GDDR5, 256-bit |
| RX 580 | $170 | 2304 | 32 | 1257 MHz | 1340 MHz | 2000 MHz | Ellesmere | 5,700M | 8 GB, GDDR5, 256-bit |
| GTX 1060 3 GB | $185 | 1152 | 48 | 1506 MHz | 1708 MHz | 2002 MHz | GP106 | 4,400M | 3 GB, GDDR5, 192-bit |
| GTX 1060 | $210 | 1280 | 48 | 1506 MHz | 1708 MHz | 2002 MHz | GP106 | 4,400M | 6 GB, GDDR5, 192-bit |
| RX 590 | $240 | 2304 | 32 | 1469 MHz | 1545 MHz | 2000 MHz | Polaris 30 | 5,700M | 8 GB, GDDR5, 256-bit |
| GTX 1660 | $220 | 1408 | 48 | 1530 MHz | 1785 MHz | 2000 MHz | TU116 | 6,600M | 6 GB, GDDR5, 192-bit |
| MSI GTX 1660 Gaming X | $250 | 1408 | 48 | 1530 MHz | 1860 MHz | 2000 MHz | TU116 | 6,600M | 6 GB, GDDR5, 192-bit |
| GTX 1070 | $310 | 1920 | 64 | 1506 MHz | 1683 MHz | 2002 MHz | GP104 | 7,200M | 8 GB, GDDR5, 256-bit |
| RX Vega 56 | $320 | 3584 | 64 | 1156 MHz | 1471 MHz | 800 MHz | Vega 10 | 12,500M | 8 GB, HBM2, 2048-bit |
| GTX 1660 Ti | $280 | 1536 | 48 | 1500 MHz | 1770 MHz | 1500 MHz | TU116 | 6,600M | 6 GB, GDDR6, 192-bit |
| GTX 1070 Ti | $450 | 2432 | 64 | 1607 MHz | 1683 MHz | 2000 MHz | GP104 | 7,200M | 8 GB, GDDR5, 256-bit |
| RTX 2060 FE | $350 | 1920 | 48 | 1365 MHz | 1680 MHz | 1750 MHz | TU106 | 10,800M | 6 GB, GDDR6, 192-bit |

Architecture


The GeForce GTX 1660 is based on the same 12 nm "TU116" silicon as the GTX 1660 Ti launched last month. NVIDIA has carved out the GTX 1660 by disabling two of the 24 streaming multiprocessors (SMs) and pairing the GPU with 8 Gbps GDDR5 memory instead of 12 Gbps GDDR6. The memory bus width itself is unchanged at 192 bits, but the switch to GDDR5 reduces memory bandwidth by a third (192 GB/s vs. 288 GB/s). GPU clock speeds are nearly unchanged at 1530 MHz base and 1785 MHz GPU Boost, compared to 1500/1770 MHz on the GTX 1660 Ti.

NVIDIA has significantly re-engineered the Graphics Processing Clusters (GPCs) of this silicon to do away with RT cores and tensor cores. The chip's hierarchy is otherwise similar to other "Turing" GPUs. The GigaThread Engine and L2 cache serve as the town square of the GPU, binding three GPCs with the chip's PCI-Express 3.0 x16 host and 192-bit GDDR5 memory interfaces. Each GPC holds four indivisible TPCs (Texture Processing Clusters), each of which shares a PolyMorph Engine between two streaming multiprocessors (SMs). Each Turing SM packs 64 CUDA cores, so we end up with 128 CUDA cores per TPC, 512 per GPC, and 1,536 across the silicon. On the GTX 1660, 22 of the 24 SMs (or 11 of the 12 TPCs) are enabled, which results in 1,408 CUDA cores. This is still more than the 1,280 of the GTX 1060 6 GB, and one also has to consider the increased IPC of the "Turing" architecture.
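The hierarchy lends itself to simple arithmetic. Here is a quick sketch (variable names are ours, not NVIDIA's) that reproduces the core counts quoted above:

```python
# TU116 shader hierarchy, as described in the text:
# 3 GPCs x 4 TPCs x 2 SMs x 64 CUDA cores = 1,536 cores on the full die.
CORES_PER_SM = 64
SMS_PER_TPC = 2
TPCS_PER_GPC = 4
GPCS = 3

full_die_sms = GPCS * TPCS_PER_GPC * SMS_PER_TPC    # 24 SMs
full_die_cores = full_die_sms * CORES_PER_SM        # 1,536 (GTX 1660 Ti)

# The GTX 1660 disables 2 SMs (one whole TPC), leaving 22 active.
gtx_1660_cores = (full_die_sms - 2) * CORES_PER_SM  # 1,408

print(full_die_cores, gtx_1660_cores)  # 1536 1408
```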


Much of NVIDIA's CUDA core-specific innovation for Turing centers on improving the architecture's concurrent-execution capabilities. This is not the same as asynchronous compute, but the two concepts aren't too far apart. Turing CUDA cores can execute integer and floating-point instructions in parallel within each clock cycle, while older architectures, such as Pascal, can only handle one kind of execution at a time. Asynchronous compute is more of a macro concept and concerns the GPU's ability to handle various graphics and compute workloads in tandem.


Cushioning the CUDA cores is an improved L1 cache subsystem. The L1 caches are enlarged three-fold, with a four-fold increase in load/store bandwidth. The caches are configurable on the fly as either two 32 KB partitions per SM or a unified 64 KB block per TPC. NVIDIA has also substituted tensor cores with dedicated FP16 cores per SM to execute FP16 operations. These are physically separate from the 64 FP32 and 64 INT32 cores per SM and execute FP16 at double the rate of the FP32 cores. On the RTX 2060, by contrast, there are no dedicated FP16 cores per SM; its tensor cores are configured to handle FP16 ops at an even higher rate.
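To put the doubled FP16 rate in perspective, here is a rough peak-throughput estimate for the GTX 1660 at reference clocks. This assumes the usual convention of one fused multiply-add (two FLOPs) per CUDA core per clock:

```python
# Rough peak-throughput arithmetic for the GTX 1660 at reference clocks.
# Assumes one FMA (2 FLOPs) per CUDA core per clock, and that the
# dedicated FP16 units double the FP32 rate, as described above.
cuda_cores = 1408
boost_clock_ghz = 1.785

fp32_tflops = cuda_cores * 2 * boost_clock_ghz / 1000  # ~5.03 TFLOPS
fp16_tflops = fp32_tflops * 2                          # ~10.05 TFLOPS

print(round(fp32_tflops, 2), round(fp16_tflops, 2))  # 5.03 10.05
```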

NVIDIA has deployed older-generation GDDR5 memory on the GTX 1660, clocked at an 8 Gbps data rate, compared to 12 Gbps GDDR6 on the GTX 1660 Ti. This amounts to a massive 33 percent decrease in memory bandwidth relative to the GTX 1660 Ti and is exactly the same bandwidth as on the GTX 1060 6 GB. The memory amount is unchanged at 6 GB.
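The bandwidth figures follow directly from data rate and bus width; a quick sketch:

```python
# Memory bandwidth = per-pin data rate (Gbps) x bus width (bits) / 8 bits-per-byte.
def bandwidth_gbs(data_rate_gbps, bus_width_bits):
    return data_rate_gbps * bus_width_bits / 8

gtx_1660    = bandwidth_gbs(8, 192)   # 192.0 GB/s (GDDR5), same as GTX 1060 6 GB
gtx_1660_ti = bandwidth_gbs(12, 192)  # 288.0 GB/s (GDDR6)

deficit = 1 - gtx_1660 / gtx_1660_ti  # ~0.33, the one-third drop noted above
print(gtx_1660, gtx_1660_ti, round(deficit, 2))  # 192.0 288.0 0.33
```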

Features


Let's talk about the two elephants in the room first. The GTX 1660 will not give you real-time ray tracing because it lacks RT cores, and it won't give you DLSS for want of tensor cores. What you will get is Variable Rate Shading: the Adaptive Shading feature introduced with Turing is carried over to the GTX 1660. Both of its key algorithms, content-adaptive shading (CAS) and motion-adaptive shading (MAS), are available. CAS senses color or spatial coherence in a scene to minimize repetitive shading of details, freeing up resources to increase detail where it matters. MAS senses high motion in a scene (e.g., racing simulators) and reduces shading of details in favor of performance.

Packaging and Contents

Package Front
Package Back




You will receive:
  • Graphics card
  • Documentation
  • Driver disc

The Card

Graphics Card Front
Graphics Card Back

MSI's card follows the design theme set by their previous GeForce RTX 20-series cards. This is the same cooler as on the MSI GTX 1660 Ti Gaming X, and a backplate is included, too. Dimensions of the card are 25.0 x 13.0 cm.

Graphics Card Height

Installation requires two slots in your system.

Monitor Outputs, Display Connectors

Display connectivity options include three standard DisplayPort 1.4a and one HDMI 2.0b.

NVIDIA has updated their display engine with the Turing microarchitecture, which now supports DisplayPort 1.4a, including VESA's visually lossless Display Stream Compression (DSC). This enables 8K at 30 Hz over a single cable, or 8K at 60 Hz with DSC enabled. For context, DisplayPort 1.4a is the latest version of the standard, published in April 2018.
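Those 8K figures hold up to a back-of-envelope check. The sketch below ignores blanking intervals and assumes 8 bpc RGB (24 bits per pixel) and roughly 25.92 Gbit/s of DisplayPort 1.4 payload (32.4 Gbit/s raw across four HBR3 lanes, minus 8b/10b encoding overhead):

```python
# Approximate uncompressed video bandwidth for a given mode, in Gbit/s.
# Ignores blanking intervals, so real requirements are somewhat higher.
def mode_gbps(width, height, refresh_hz, bits_per_pixel=24):
    return width * height * refresh_hz * bits_per_pixel / 1e9

DP14_PAYLOAD_GBPS = 25.92  # 4 lanes of HBR3 after 8b/10b encoding

eightk_30 = mode_gbps(7680, 4320, 30)  # ~23.9 -> fits uncompressed
eightk_60 = mode_gbps(7680, 4320, 60)  # ~47.8 -> needs DSC (~3:1) to fit

print(round(eightk_30, 1), round(eightk_60, 1))  # 23.9 47.8
```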

At CES 2019, NVIDIA announced that all their graphics cards will now support VESA Adaptive-Sync (aka FreeSync). While only a small number of FreeSync monitors have been fully qualified for G-SYNC, users can enable the feature in NVIDIA's control panel regardless of whether the monitor is certified.

Graphics Card Power Plugs

The board uses a single 8-pin power connector. This input configuration is specified for up to 225 W of power draw (150 W from the connector plus 75 W from the PCI-Express slot).
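The 225 W figure is simply the sum of the PCI-Express power limits; as a trivial sanity check:

```python
# Power budget implied by the connector layout, per PCI-Express limits:
# the slot delivers up to 75 W, a single 8-pin connector up to 150 W.
SLOT_W = 75
EIGHT_PIN_W = 150

max_board_power = SLOT_W + EIGHT_PIN_W
print(max_board_power)  # 225
```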

Multi-GPU Area

GeForce GTX 1660 does not support SLI.
