News Posts matching #TU116


MSI Launches Two CMP 30HX MINER Series Cards

MSI has announced two new CMP (Cryptocurrency Mining Processor) 30HX MINER Series cards: the MSI CMP 30HX MINER and the MSI CMP 30HX MINER XS. The new cards borrow their coolers from the ARMOR and VENTUS XS series, respectively. MSI is one of the first manufacturers to offer multiple versions of a CMP card, so it will be interesting to see how the two differ in performance and pricing given that they feature identical specifications. Both cards use the TU116-100 GPU with 1408 cores, a base clock of 1530 MHz, a boost clock of 1785 MHz, and 6 GB of GDDR6 memory. As with all CMP cards, they lack display outputs; pricing and availability information is not yet available.

First NVIDIA Palit CMP 30HX Mining GPU Available at a Tentative $723

NVIDIA's recently-announced CMP (Cryptocurrency Mining Processor) products already seem to be hitting the market - at least in some parts of the world. Microless, a retailer in Dubai, has listed the cryptocurrency-geared graphics card for $723. That price buys some 26 MH/s, as per NVIDIA, before any clock, voltage, or BIOS-level optimizations have been applied, as more serious miners will undoubtedly do.
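For context, miners typically compare such listings by cost per unit of hash rate. A minimal sketch of that arithmetic, using the figures above (the helper name is ours, purely illustrative):

```python
# Crude cost-efficiency metric for mining cards: dollars per MH/s.
def usd_per_mhs(price_usd: float, hashrate_mhs: float) -> float:
    """Return the price paid per MH/s of hash rate."""
    return price_usd / hashrate_mhs

# CMP 30HX at the Microless listing: $723 for ~26 MH/s at stock settings
print(round(usd_per_mhs(723, 26), 1))  # ~27.8 USD per MH/s
```

Serious buyers would also fold in power draw and electricity cost, which this simple ratio ignores.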

The CMP 30HX is a re-released TU116 chip (Turing, sans RT hardware), which powered the likes of the GeForce GTX 1660 Super in NVIDIA's previous generation of graphics cards. The card features a 1,530 MHz base clock, a 1,785 MHz boost clock, and 6 GB of GDDR6 memory clocked at 14 Gbps (which may soon stop being enough to hold the entire workload in memory). Leveraging a 192-bit memory interface, the graphics card supplies up to 336 GB/s of memory bandwidth. It's also a "headless" GPU, meaning it has no display outputs, which would only add cost to such a specifically-geared product. It's unclear how representative Microless' pricing is of NVIDIA's MSRP for the 30HX products, but considering current graphics card pricing worldwide, it seems in line with GeForce offerings capable of achieving the same hash rates. Whether the card can draw miner demand away from NVIDIA's mainstream GeForce offerings will therefore depend solely on the prices set by NVIDIA and practiced by retailers.
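The 336 GB/s figure follows directly from the quoted data rate and bus width; a quick sketch of the standard calculation (the function name is illustrative):

```python
# Peak GDDR bandwidth: per-pin data rate (Gbps) times bus width (bits), in bytes.
def gddr_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s."""
    return data_rate_gbps * bus_width_bits / 8

print(gddr_bandwidth_gb_s(14, 192))  # 336.0 GB/s for the CMP 30HX
print(gddr_bandwidth_gb_s(12, 192))  # 288.0 GB/s, e.g. the GTX 1660 Ti
```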

NVIDIA Crypto Mining Processor 30HX Card Pictured

The first NVIDIA CMP (Crypto Mining Processor) 30HX card, from Gigabyte, has been pictured, and it closely resembles Gigabyte's GTX 1660 SUPER OC 6G. The resemblance makes sense considering the 30HX uses the same TU116-100 GPU found in the GTX 1660 SUPER, paired with the same 6 GB of GDDR6 memory. The NVIDIA CMP 30HX features a TDP of 125 W and achieves a hash rate of 26 MH/s in Ethereum mining, similar to that of the RTX 3060 with its anti-mining limiter active. The card features no display outputs, which limits its usefulness once it's no longer profitable to operate. The card should run cool with its dual-fan cooling solution and the improved airflow afforded by the lack of outputs.

NVIDIA's New 30HX & 40HX Crypto Mining Cards Are Based on Turing Architecture

We have recently discovered that NVIDIA's newly announced 30HX and 40HX Crypto Mining Processors are based on the last-generation Turing architecture. This news will come as a pleasant surprise to gamers, as the release shouldn't affect the availability of Ampere RTX 30 Series GPUs. The decision to stick with Turing for these new devices is reportedly due to the architecture's more favorable power management, which is vital for profitable cryptocurrency mining operations. The NVIDIA CMP 40HX will feature a custom TU106 processor while the 30HX will include a custom TU116. This information was discovered in the latest GeForce 461.72 WHQL drivers, which added support for the two devices.

KFA2 Intros GeForce GTX 1650 GDDR6 EX PLUS Graphics Card

GALAX's European brand KFA2 launched the GeForce GTX 1650 GDDR6 EX PLUS graphics card. The card looks identical to the one pictured below, but with the 6-pin PCIe power input removed, relying entirely on the PCIe slot for power. Based on the 12 nm "TU116" silicon, the GPU features 896 "Turing" CUDA cores, and talks to 4 GB of GDDR6 memory across a 128-bit wide memory interface. With a memory data rate of 12 Gbps, the chip has 192 GB/s of memory bandwidth on tap. The GPU max boost frequency is set at 1605 MHz, with a software-based 1635 MHz "one click OC" mode. The cooling solution consists of an aluminium mono-block heatsink that's ventilated by a pair of 80 mm fans. Display outputs include one each of DisplayPort 1.4, HDMI 2.0b, and dual-link DVI-D. Available now in the EU, the KFA2 GeForce GTX 1650 GDDR6 EX PLUS is priced at 129€ (including taxes).

NVIDIA Seemingly Producing Yet Another GTX 1650 Variant Based on TU116

NVIDIA's GTX 1650 has already seen more action and revisions within its own generation than most GPUs ever have in the history of graphics cards: NVIDIA not only updated its memory (from 4 GB of GDDR5 at 128 GB/s to 4 GB of GDDR6 at 192 GB/s), but also carved up different silicon chips to bring the same part to market. The original GTX 1650 made use of NVIDIA's TU117 chip with 896 CUDA cores; the TU116-based GTX 1650 SUPER then considerably increased the GTX 1650's execution units (to 1280) and memory bandwidth (via 12 Gbps GDDR6). There was also a TU106-based GTX 1650, which was just bonkers - a chip originally used on the RTX 2060, repurposed and cut down.

Now, another TU116 variant is available, which NVIDIA carved down from its GTX 1650 SUPER chips. These go back to the original release's 896 CUDA cores and 128-bit bus, while keeping the GDDR6 memory ticking at 12 Gbps, with clocks set at 1410 MHz base and 1590 MHz boost. This card achieves feature parity with the TU106-based GTX 1650, but trades the oversized 445 mm² TU106 die for the much more svelte 284 mm² TU116 one. NVIDIA seems to be doing what it can to clean house of any and all leftover chips in preparation for its next-gen release - consumer confusion be damned.

NVIDIA Readies GeForce GTX 1650 SUPER with GDDR6 Memory for Late November

It turns out that the GeForce GTX 1660 Super will be joined by another "Super" SKU by NVIDIA, the GeForce GTX 1650 Super, according to a VideoCardz report. Slated for a November 22 launch, the GTX 1650 Super appears to be NVIDIA's response to the Radeon RX 5500, which is being extensively compared to the current GTX 1650 in AMD's marketing material. While the core-configuration of the GTX 1650 Super is unknown, NVIDIA is giving it 4 GB of GDDR6 memory across a 128-bit wide memory interface, with a data-rate of 12 Gbps, working out to 192 GB/s of memory bandwidth. In comparison, the GTX 1650 uses 8 Gbps GDDR5 and achieves 128 GB/s memory bandwidth.

It remains to be seen just how much the improved memory subsystem helps the GTX 1650 Super catch up to the RX 5500, given that a maxed out TU117 silicon only has 128 more CUDA cores on offer, and AMD is claiming a 37% performance lead over the current GTX 1650 for its RX 5500. One possible way it can create the GTX 1650 Super is by tapping into the larger "TU116" silicon with 1/3rd of its memory interface disabled, and fewer CUDA cores than the GTX 1660. We'll know more in the run up to November 22.

NVIDIA GeForce GTX 1660 SUPER Launching October 29th, $229 With GDDR6

NVIDIA's GeForce GTX 1660 SUPER, the company's first SUPER graphics card based on a Turing GPU without raytracing capability, is set to drop on October 29th. Contrary to other SUPER releases, though, the GTX 1660 SUPER won't feature a new GPU chip brought down from the performance tier above. This means it will make use of the same TU116-300 as the GTX 1660, with 1408 CUDA cores rather than the 1536 of the GTX 1660 Ti. Instead, NVIDIA has increased this SUPER model's performance by endowing it with GDDR6 memory.

The new GDDR6 memory ticks at 14 Gbps, giving the card a bandwidth advantage even over the more expensive GTX 1660 Ti. When all is said and done, the GTX 1660 SUPER will feature memory bandwidth in the range of 336 GB/s - significantly more than the GTX 1660 Ti's 288 GB/s, and a huge step up from the GTX 1660's 192 GB/s. Of course, having fewer CUDA cores than the GTX 1660 Ti means it should still deliver lower performance than that graphics card. This justifies its price tag of $229 - $20 higher than the GTX 1660, but $50 less than the GTX 1660 Ti.

NVIDIA GeForce GTX 1660 Super Releases on Oct 29th

Chinese website ITHome has new info on the release of NVIDIA's GeForce GTX 1660 Super graphics cards. According to the site, the release is expected for October 22nd, which seems credible considering NVIDIA always launches on a Tuesday. As expected, the card will be built around the Turing TU116 graphics processor, which also powers the GTX 1660 and GTX 1660 Ti. The shader count could be 1472, if NVIDIA wants to position the card between the GTX 1660 (1408 cores) and GTX 1660 Ti (1536 cores). The memory size will be either 4 GB or 6 GB. Memory specifications are somewhat vague; it is rumored that the GTX 1660 Super will use GDDR6 chips, just like the GTX 1660 Ti - the plain GTX 1660 uses GDDR5 memory. Another possibility is that the shader count matches the GTX 1660, and the only difference (other than clock speeds) is that the GTX 1660 Super uses GDDR6 VRAM.

The Chinese pricing is expected around 1100 Yuan, which converts to $150 — surprisingly low, considering GTX 1660 retails at $210 and GTX 1660 Ti is priced at $275. Maybe NVIDIA is adjusting their pricing to preempt AMD's upcoming Radeon RX 5500/5600 Series. Videocardz has separately confirmed this rumor with their sources at ASUS Taiwan, who are expected to launch at least three SKUs based on the new NVIDIA offering, among them DUAL EVO, Phoenix and TUF3 series.

Update Oct 24th: Seems the actual launch is October 29th, this post has more info: https://www.techpowerup.com/260391/nvidia-geforce-gtx-1660-super-launching-october-29th-usd-229-with-gddr6

ASUS Unveils GeForce GTX 1660 Ti EVO Series Graphics Cards

ASUS today unveiled the Dual GeForce GTX 1660 Ti EVO series graphics card, which comes in three variants, a base model, a moderately overclocked A6G model, and the fastest O6G overclocked variant. All three stick to a common board design, which involves a 24.2 cm-long and 13 cm-tall PCB, and a 3-slot thick custom-design cooler. This cooler features a DirectCU II heatsink ventilated by a pair of 80 mm Axial Tech fans. These fans feature barrier rings that run along the periphery of the impeller to prevent lateral airflow, guiding all of it axially (downwards onto the heatsink). The cooler features idle fan-stop.

The cooling solution uses a pair of nickel-plated copper heat pipes that make direct contact with the "TU116" ASIC, conveying heat to the edges of the aluminium fin-stack. The shroud features a tiny RGB LED diffuser. A metal back-plate is included. The card draws power from a single 8-pin PCIe power connector. Display outputs include two HDMI 2.0b, and one each of DisplayPort 1.4 and dual-link DVI-D. The base variant comes with NVIDIA-reference clock speeds of 1500 MHz core and 1770 MHz GPU Boost. The A6G variant has a negligibly increased GPU Boost frequency of 1785 MHz. The O6G variant leads this pack at 1845 MHz. The memory remains untouched at 12 Gbps on all three variants. The company didn't reveal pricing.

NVIDIA RTX Logic Increases TPC Area by 22% Compared to Non-RTX Turing

Public perception of NVIDIA's new RTX series of graphics cards has sometimes been marred by an impression of misallocated resources. The argument went that NVIDIA had greatly increased chip area by adding RTX functionality (in both its Tensor and RT cores) that could have been better spent on performance gains in shader-based, non-raytracing workloads. While the merits of ray tracing as it stands (in terms of uptake from developers) are certainly worthy of discussion, it seems NVIDIA didn't dedicate that much die area to RTX functionality - at least not to the tune suggested by public perception.

After examining full, high-resolution die shots of NVIDIA's TU106 and TU116 chips, reddit user @Qesa analyzed the TPC structure of NVIDIA's Turing chips and concluded that the difference between the RTX-capable TU106 and the RTX-stripped TU116 amounts to a mere 1.95 mm² of additional logic per TPC - a 22% area increase. Of this, 1.25 mm² is reserved for the Tensor logic (which accelerates both DLSS and de-noising in ray-traced workloads), while only 0.7 mm² goes to the RT cores.
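The quoted numbers are internally consistent; a back-of-the-envelope check using the figures above:

```python
# Implied non-RTX TPC size from the die-shot analysis figures.
extra_logic_mm2 = 1.95   # added per TPC by RTX hardware
tensor_mm2 = 1.25        # portion attributed to Tensor cores
rt_mm2 = 0.70            # portion attributed to RT cores
increase = 0.22          # quoted 22% area increase

# The Tensor and RT shares should sum to the total addition
assert abs((tensor_mm2 + rt_mm2) - extra_logic_mm2) < 1e-9

# 1.95 mm2 being a 22% increase implies a base TPC of roughly 8.9 mm2
base_tpc_mm2 = extra_logic_mm2 / increase
print(round(base_tpc_mm2, 1))  # ~8.9
```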

Palit Unveils its GeForce GTX 1660 Lineup

Palit Microsystems Ltd, the leading graphics card manufacturer, releases the new NVIDIA Turing architecture GeForce GTX 16 series in Palit GeForce product line-up, GeForce GTX 1660 Dual OC, Dual, StormX OC and StormX.

Palit GeForce GTX 1660 is built with the latest NVIDIA Turing architecture, which performs great at 120 FPS, so it's an ideal model for eSports gaming titles. It can also reach amazing performance and image quality while livestreaming to Twitch or YouTube. Like its big brother, the GeForce GTX 1660 utilizes the "TU116" Turing GPU that's been carefully architected to balance performance, power, and cost. TU116 includes all of the new Turing Shader innovations that improve performance and efficiency, including support for Concurrent Floating Point and Integer Operations, a Unified Cache Architecture with larger L1 cache, and Adaptive Shading.

MSI Reveals New GeForce GTX 1660 Series Graphics Cards

As the world's most popular GAMING graphics card vendor, MSI is proud to announce its new graphics card line-up based on the new GeForce GTX 1660 GPU, the latest addition to the NVIDIA Turing GTX family.

The GeForce GTX 1660 utilizes the "TU116" Turing GPU that's been carefully architected to balance performance, power, and cost. TU116 includes all of the new Turing Shader innovations that improve performance and efficiency, including support for Concurrent Floating Point and Integer Operations, a Unified Cache Architecture with larger L1 cache, and Adaptive Shading.

NVIDIA Launches the GeForce GTX 1660 6GB Graphics Card

NVIDIA today launched the GeForce GTX 1660 6 GB graphics card, its successor to the immensely popular GTX 1060 6 GB. With prices starting at $219.99, the GTX 1660 is based on the same 12 nm "TU116" silicon as the GTX 1660 Ti launched last month, with fewer CUDA cores and a slower memory interface. NVIDIA carved the GTX 1660 out by disabling 2 of the TU116's 24 "Turing" SMs, resulting in 1,408 CUDA cores, 88 TMUs, and 48 ROPs. The company is using 8 Gbps GDDR5 memory instead of 12 Gbps GDDR6, which makes the memory sub-system 33 percent slower. The GPU is clocked at 1530 MHz, with 1785 MHz boost - marginally higher than the GTX 1660 Ti's clocks. The GeForce GTX 1660 is a partner-driven launch, meaning there won't be any reference-design cards, although NVIDIA made sure every AIC partner has at least one product selling at the baseline price of $219.99.
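The core count follows from Turing's SM layout - each SM carries 64 CUDA cores, and TU116 has 24 of them. A quick check:

```python
# GTX 1660 configuration: 2 of TU116's 24 Turing SMs disabled, 64 CUDA cores per SM.
CORES_PER_SM = 64
TOTAL_SMS = 24
DISABLED_SMS = 2

active_cores = (TOTAL_SMS - DISABLED_SMS) * CORES_PER_SM
print(active_cores)  # 1408, matching the GTX 1660's spec
```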

Read TechPowerUp Reviews: Zotac GeForce GTX 1660 | EVGA GeForce GTX 1660 XC Ultra | Palit GeForce GTX 1660 StormX OC | MSI GTX 1660 Gaming X

Update: We have updated our GPU database with all GTX 1660 models announced today, so you can easily get an overview over what has been released.

EVGA and GIGABYTE GeForce GTX 1660 Graphics Cards Pictured

Here are some of the first pictures of EVGA's and GIGABYTE's upcoming GeForce GTX 1660 graphics cards reportedly slated for launch later this week. It should come as no surprise that these cards resemble the companies' GTX 1660 Ti offerings, since they're based on the same 12 nm "TU116" silicon, with fewer CUDA cores. The underlying PCBs could be slightly different as the GTX 1660 uses older generation 8 Gbps GDDR5 memory instead of 12 Gbps GDDR6. The "TU116" silicon is configured with 1,408 CUDA cores out of the 1,536 physically present; the memory amount is 6 GB, across a 192-bit wide memory bus. The GTX 1660 baseline price is reportedly USD $219, and the card replaces the GTX 1060 6 GB from NVIDIA's product stack.

EVGA is bringing two designs to the market, a short-length triple-slot card with a single fan; and a more conventional longer card with 2-slot, dual-fan design. The baseline "Black" card could be offered in the shorter design; while the top-tier XC Ultra could be exclusive to the longer design. GIGABYTE, on the other hand, has two designs, a shorter-length dual-fan; and a longer-length triple-fan. Both models are dual-slot. The baseline SKU will be restricted to the shorter board design, while premium Gaming OC SKUs could come in the longer board design.

Details on GeForce GTX 1660 Revealed Courtesy of MSI - 1408 CUDA Cores, GDDR 5 Memory

Details on NVIDIA's upcoming mainstream GTX 1660 graphics card have been revealed, which will help put its graphics-crunching prowess up to scrutiny. The new graphics card from NVIDIA slots in below the recently released GTX 1660 Ti (which provides roughly 5% better performance than NVIDIA's previous-generation GTX 1070) and above the yet-to-be-released GTX 1650.

The design's 1408 CUDA cores amount to a 9% reduction in computing cores compared to the GTX 1660 Ti, but most of the savings (and performance impact) likely come from the 6 GB of 8 Gbps GDDR5 memory this card is outfitted with, compared to the 1660 Ti's GDDR6 implementation. The amount of cut GPU resources from NVIDIA is so small that we imagine these chips won't come from harvesting defective dies as much as from actually fusing off CUDA cores present in the TU116 chip. GDDR5 is still cheaper than GDDR6 (for now), and using it also avoids straining the GDDR6 supply (if that was ever a concern for NVIDIA).

NVIDIA GeForce GTX 1660 and GTX 1650 Pricing and Availability Revealed

(Update 1: Andreas Schilling, at Hardware Luxx, seems to have obtained confirmation that NVIDIA's GTX 1650 graphics cards will pack 4 GB of GDDR5 memory, and that the GTX 1660 will be offering a 6 GB GDDR5 framebuffer.)

NVIDIA recently launched its GeForce GTX 1660 Ti graphics card at USD $279, which is the most affordable desktop discrete graphics card based on the "Turing" architecture thus far. NVIDIA's GeForce 16-series GPUs are based on 12 nm "Turing" chips, but lack RTX real-time ray-tracing and tensor cores that accelerate AI. The company is making two affordable additions to the GTX 16-series in March and April, according to Taiwan-based PC industry observer DigiTimes.

The GTX 1660 Ti launch will be followed by that of the GeForce GTX 1660 (non-Ti) on 15th March, 2019. This SKU is likely based on the same "TU116" silicon as the GTX 1660 Ti, but with fewer CUDA cores and possibly slower memory or lesser memory amount. NVIDIA is pricing the GTX 1660 at $229.99, a whole $50 cheaper than the GTX 1660 Ti. That's not all. We recently reported on the GeForce GTX 1650, which could quite possibly become NVIDIA's smallest "Turing" based desktop GPU. This product is real, and is bound for 30th April, at $179.99, $50 cheaper still than the GTX 1660. This SKU is expected to be based on the smaller "TU117" silicon. Much like the GTX 1660 Ti, these two launches could be entirely partner-driven, with the lack of reference-design cards.

NVIDIA Unveils the GeForce GTX 1660 Ti 6GB Graphics Card

NVIDIA today unveiled the GeForce GTX 1660 Ti graphics card, part of its new GeForce GTX 16-series product lineup based on the "Turing" architecture. These cards feature CUDA cores from the "Turing" generation, but lack RTX real-time raytracing features due to a physical lack of RT cores; they additionally lack tensor cores, losing out on DLSS. What you get instead with the GTX 1660 Ti is an upper-mainstream product that can play most eSports titles at resolutions of up to 1440p, and AAA titles at 1080p with details maxed out.

The GTX 1660 Ti is based on the new 12 nm "TU116" silicon, and packs 1,536 "Turing" CUDA cores, 96 TMUs, 48 ROPs, and a 192-bit wide memory interface holding 6 GB of GDDR6 memory. The memory is clocked at 12 Gbps, yielding 288 GB/s of memory bandwidth. The launch is exclusively partner-driven, and NVIDIA doesn't have a Founders Edition product based on this chip. You will find custom-design cards priced anywhere from USD $279 to $340.

We thoroughly reviewed four GTX 1660 Ti variants today: MSI GTX 1660 Ti Gaming X, EVGA GTX 1660 Ti XC Black, Zotac GTX 1660 Ti, MSI GTX 1660 Ti Ventus XS.

Tight Squeeze Below $350 as Price of GTX 1660 Ti Revealed

NVIDIA is reportedly pricing the GeForce GTX 1660 Ti at USD $279 (baseline pricing), which implies pricing of custom-designed and factory-overclocked cards scraping the $300-mark. The card is also spaced $70 apart from the RTX 2060, which offers not just 25% more CUDA cores, but also NVIDIA RTX and DLSS technologies. In media reporting of the card so far, it is being compared extensively to the GTX 1060 6 GB, which continues to go for under $230. Perhaps NVIDIA is planning a slower non-Ti version to replace the GTX 1060 6 GB under the $250-mark. That entry would place three SKUs within $50-70 of each other, a tight squeeze. Based on the 12 nm TU116 silicon, the GTX 1660 Ti is rumored to feature 1,536 CUDA cores, 96 TMUs, 48 ROPs, and a 192-bit wide GDDR6 memory interface, handling 6 GB of memory at 12 Gbps (288 GB/s). This GPU lacks RT cores.

NVIDIA TU116 GPU Pictured Up Close: Noticeably Smaller than TU106

Here is the first picture of NVIDIA's 12 nm "TU116" silicon, which powers the upcoming GeForce GTX 1660 Ti graphics card. While the size of the package itself is identical to that of the "TU106" on which the RTX 2060 and RTX 2070 are based; the die of the TU116 is visibly smaller. This is because the chip physically lacks RT cores, and only has two-thirds the number of CUDA cores as the TU106, with 1,536 against the latter's 2,304. The die area, too, is about 2/3rds that of the TU106. The ASIC version of TU116 powering the GTX 1660 Ti is "TU116-400-A1."

VideoCardz scored not just pictures of the ASIC, but also the PCB of an MSI GTX 1660 Ti Ventus graphics card, which reveals something very interesting. The PCB has traces for eight memory chips, across a 256-bit wide memory bus, although only six of them are populated with memory chips, making up 6 GB over a 192-bit bus. The GPU's package substrate, too, is of the same size. It's likely that NVIDIA is using a common substrate, with an identical pin-map between the TU106 and TU116, so AIC partners could reduce PCB development costs.

Palit and EVGA GeForce GTX 1660 Ti Cards Pictured

As we inch closer to the supposed 15th February launch of the GeForce GTX 1660 Ti, pictures of more AIC-partner-branded custom-design cards are surfacing. The first two sets come from Palit and EVGA. Palit is bringing two very compact cards to the table under its StormX banner. These cards appear to be under 18 cm in length, and use an aluminium fin-stack cooler that's ventilated by a single 100 mm fan. There are two grades based on factory overclock: the base model ticks at 1770 MHz boost, while the OC variant offers 1815 MHz boost.

EVGA's GTX 1660 Ti lineup includes two cards under its XC brand, with both cards being under 20 cm in length, but are 3 slots thick. Both cards appear to be using the same 3-slot single-fan cooling solution as the company's RTX 2060 XC. Once again, we see two variants based on clock-speeds, with the "Black" variant sticking to 1770 MHz boost, and the XC version slightly dialing up that frequency. Based on the 12 nm "TU116" silicon, the GTX 1660 Ti is rumored to feature 1,536 CUDA cores based on the "Turing" architecture, but lacking in RTX technology. The SKU succeeds the GTX 1060 6 GB.

More GeForce GTX 1660 Ti Specs Emerge

A Russian retailer has leaked more specifications of NVIDIA's upcoming GeForce GTX 1660 Ti graphics card. Based on the 12 nm "TU116" silicon, this card will be configured with 1,536 "Turing" CUDA cores, but have no RT cores, and hence no RTX features. The chip could end up with 96 TMUs and 48 ROPs. The GPU is clocked at 1500 MHz nominal, and the boost frequency is set at 1770 MHz, however, the latter could be a factory-overclock set by AIC partner Palit for their GTX 1660 Ti StormX graphics card.

The memory subsystem of the GTX 1660 Ti is interesting. While it's still 6 GB of GDDR6 memory across a 192-bit wide memory bus, the memory clock itself is lower than that of the RTX 2060. The memory ticks at 12 Gbps, resulting in 288 GB/s of memory bandwidth, compared to the 336 GB/s the RTX 2060 achieves thanks to its 14 Gbps memory. The card draws power from a single 8-pin PCIe power connector. Outputs include HDMI, DisplayPort, and DVI; we don't expect any cards to ship with VirtualLink.

MSI GeForce GTX 1660 Ti SKUs Listed on the Eurasian Economic Commission, Adds Fuel to 1660 Ti Fire

It seems only yesterday that we were discussing a Turing microarchitecture-based TU116 die that would power the yet-to-be-confirmed GeForce GTX 1660 Ti. With no RTX technology support, this was speculated to be NVIDIA's attempt to appease the mainstream gaming market, where GPUs do not have enough horsepower to satisfactorily drive real-time ray tracing in games while still maintaining an optimal balance of visual fidelity and performance. Reports indicated an announcement next month, followed by retail availability in March, and today we got word of more concrete evidence pointing towards all of this coming to fruition.

It appears that trade listings in various organizations are going to be a big source of leaks in the present and future, with MSI GeForce GTX 1660 Ti SKUs, including the Gaming Z, Armor, Ventus, and Gaming X, all listed on the Eurasian Economic Commission (EEC). The listing covers the associated trademarks, all awarded to MSI, and is one of the last steps towards setting up a retail channel for new and upcoming products. Does the notion of a Turing GTX GPU without real-time ray tracing interest you? Let us know in the comments section below.

NVIDIA GeForce GTX 1660 Ti Put Through AoTS, About 16% Faster Than GTX 1060

Thai PC enthusiast TUM Apisak posted a screenshot of an alleged GeForce GTX 1660 Ti Ashes of the Singularity (AoTS) benchmark. The GTX 1660 Ti, if you'll recall, is an upcoming graphics card based on the TU116 silicon, which is a derivative of the "Turing" architecture but with a lack of real-time raytracing capabilities. Tested on a machine powered by an Intel Core i9-9900K processor, the AoTS benchmark was set to run at 1080p and DirectX 11. At this resolution, the GTX 1660 Ti returned a score of 7,400 points, which roughly compares with the previous-generation GTX 1070, and is about 16-17 percent faster than the GTX 1060 6 GB. NVIDIA is expected to launch the GTX 1660 Ti some time in Spring-Summer, 2019, as a sub-$300 successor to the GTX 1060 series.

NVIDIA Readies GeForce GTX 1660 Ti Based on TU116, Sans RTX

It looks like RTX technology won't make it to sub-$250 market segments as the GPUs aren't fast enough to handle real-time raytracing, and it makes little economic sense for NVIDIA to add billions of additional transistors for RT cores. The company is hence carving out a sub-class of "Turing" GPUs under the TU11x ASIC series, which will power new GeForce GTX family SKUs, such as the GeForce GTX 1660 Ti, and other GTX 1000-series SKUs. These chips offer "Turing Shaders," which are basically CUDA cores that have the IPC and clock-speeds rivaling existing "Turing" GPUs, but no RTX capabilities. To sweeten the deal, NVIDIA will equip these cards with GDDR6 memory. These GPUs could still have tensor cores which are needed to accelerate DLSS, a feature highly relevant to this market segment.

The GeForce GTX 1660 Ti will no doubt be slower than the RTX 2060, and be based on a new ASIC codenamed TU116. According to a VideoCardz report, this 12 nm chip packs 1,536 CUDA cores based on the "Turing" architecture, and the same exact memory setup as the RTX 2060, with 6 GB of GDDR6 memory across a 192-bit wide memory interface. The lack of RT cores and a lower CUDA core count could make the TU116 a significantly smaller chip than the TU106, and something NVIDIA can afford to sell at sub-$300 price-points such as $250. The GTX 1060 6 GB is holding the fort for NVIDIA in this segment, besides other GTX 10-series SKUs such as the GTX 1070 occasionally dropping below the $300 mark at retailers' mercy. AMD recently improved its sub-$300 portfolio with the introduction of Radeon RX 590, which convincingly outperforms the GTX 1060 6 GB.