News Posts matching "GDDR5"


NVIDIA Launches the GeForce GTX 1660 6GB Graphics Card

NVIDIA today launched the GeForce GTX 1660 6 GB graphics card, its successor to the immensely popular GTX 1060 6 GB. With prices starting at $219.99, the GTX 1660 is based on the same 12 nm "TU116" silicon as the GTX 1660 Ti launched last month, but with fewer CUDA cores and a slower memory interface. NVIDIA carved the GTX 1660 out by disabling 2 of the 24 "Turing" SMs on the TU116, resulting in 1,408 CUDA cores, 88 TMUs, and 48 ROPs. The company is using 8 Gbps GDDR5 memory instead of 12 Gbps GDDR6, which makes the memory sub-system 33 percent slower. The GPU is clocked at 1530 MHz, with a 1785 MHz boost, both marginally higher than the clocks of the GTX 1660 Ti. The GeForce GTX 1660 is a partner-driven launch, meaning there won't be any reference-design cards, although NVIDIA made sure every AIC partner has at least one product selling at the baseline price of $219.99.
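That 33 percent figure follows directly from the per-pin data rates, since both cards pair the TU116 with the same 192-bit bus; a quick sanity check in Python (a sketch, using only the data rates quoted above):

```python
# Both the GTX 1660 and GTX 1660 Ti use a 192-bit memory bus, so the
# per-pin data rate alone sets the bandwidth gap between them.
gddr5_rate_gbps = 8    # GTX 1660, 8 Gbps GDDR5
gddr6_rate_gbps = 12   # GTX 1660 Ti, 12 Gbps GDDR6

slowdown = 1 - gddr5_rate_gbps / gddr6_rate_gbps
print(f"{slowdown:.0%}")  # 33% -- the "33 percent slower" memory sub-system
```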

Read TechPowerUp Reviews: Zotac GeForce GTX 1660 | EVGA GeForce GTX 1660 XC Ultra | Palit GeForce GTX 1660 StormX OC | MSI GTX 1660 Gaming X

Update: We have updated our GPU database with all GTX 1660 models announced today, so you can easily get an overview of what has been released.

ZOTAC Unveils its GeForce GTX 1660 Series

ZOTAC Technology, a global manufacturer of innovation, is pleased to expand the GeForce GTX 16 series with the ZOTAC GAMING GeForce GTX 1660 series featuring GDDR5 memory and the NVIDIA Turing Architecture.

Founded in 2017, ZOTAC GAMING is the pioneer movement that comes forth from the core of the ZOTAC brand, which aims to create the ultimate PC gaming hardware for those who live to game. It is the epitome of our engineering prowess and design expertise, representing over a decade of precision performance and making ZOTAC GAMING a born leading force with the goal of delivering the best PC gaming experience. The logo shows the piercing stare of the robotic eyes, behind which lies the strength and future technology that fills the ego of the undefeated and the battle-experienced.

EVGA and GIGABYTE GeForce GTX 1660 Graphics Cards Pictured

Here are some of the first pictures of EVGA's and GIGABYTE's upcoming GeForce GTX 1660 graphics cards reportedly slated for launch later this week. It should come as no surprise that these cards resemble the companies' GTX 1660 Ti offerings, since they're based on the same 12 nm "TU116" silicon, with fewer CUDA cores. The underlying PCBs could be slightly different as the GTX 1660 uses older generation 8 Gbps GDDR5 memory instead of 12 Gbps GDDR6. The "TU116" silicon is configured with 1,408 CUDA cores out of the 1,536 physically present; the memory amount is 6 GB, across a 192-bit wide memory bus. The GTX 1660 baseline price is reportedly USD $219, and the card replaces the GTX 1060 6 GB from NVIDIA's product stack.

EVGA is bringing two designs to the market: a short-length triple-slot card with a single fan, and a more conventional longer card with a 2-slot, dual-fan design. The baseline "Black" card could be offered in the shorter design, while the top-tier XC Ultra could be exclusive to the longer design. GIGABYTE, on the other hand, also has two designs: a shorter-length dual-fan card, and a longer-length triple-fan one. Both models are dual-slot. The baseline SKU will be restricted to the shorter board design, while premium Gaming OC SKUs could come in the longer board design.

Details on GeForce GTX 1660 Revealed Courtesy of MSI - 1408 CUDA Cores, GDDR5 Memory

Details on NVIDIA's upcoming mainstream GTX 1660 graphics card have been revealed, which will help put its graphics-crunching prowess under scrutiny. The new graphics card from NVIDIA slots in below the recently released GTX 1660 Ti (which provides roughly 5% better performance than NVIDIA's previous GTX 1070 graphics card) and above the yet-to-be-released GTX 1650.

The 1408 CUDA cores in the design amount to a 9% reduction in computing cores compared to the GTX 1660 Ti, but most of the savings (and performance impact) likely comes from the 6 GB of (8 Gbps) GDDR5 memory this card is outfitted with, compared to the GTX 1660 Ti's GDDR6 implementation. The amount of GPU resources NVIDIA has cut is so small that we imagine these chips won't come from harvesting defective dies as much as from actually fusing off CUDA cores present in the TU116 chip. Using GDDR5 is still cheaper than the GDDR6 alternative (for now), and this also avoids straining the GDDR6 supply (if that was ever a concern for NVIDIA).

NVIDIA GeForce GTX 1650 Memory Size Revealed

NVIDIA's upcoming entry-mainstream graphics card based on the "Turing" architecture, the GeForce GTX 1650, will feature 4 GB of GDDR5 memory, according to tech industry commentator Andreas Schilling. Schilling also shared NVIDIA's box-art for this SKU. The source does not mention memory bus width. In related news, Schilling also mentions NVIDIA going with 6 GB as the memory amount for the GTX 1660. NVIDIA is expected to launch the GTX 1660 mid-March, and the GTX 1650 late-April.

Intel Readies Crimson Canyon NUC with 10nm Core i3 and AMD Radeon

Intel is putting the final touches on a "Crimson Canyon" fully-assembled NUC desktop model, which combines the company's first 10 nm Core processor with AMD Radeon discrete graphics. The NUC8i3CYSM desktop from Intel packs a Core i3-8121U "Cannon Lake" SoC, 8 GB of dual-channel LPDDR4 memory, and a discrete AMD Radeon RX 540 mobile GPU with 2 GB of dedicated GDDR5 memory. A 1 TB 2.5-inch hard drive comes included, although you also get an M.2-2280 slot with both PCIe 3.0 x4 (NVMe) and SATA 6 Gbps wiring. The i3-8121U packs a 2-core/4-thread CPU clocked up to 3.20 GHz and 4 MB of L3 cache, while the RX 540 packs 512 stream processors based on the "Polaris" architecture.

The NUC8i3CYSM offers plenty of modern connectivity, including 802.11ac + Bluetooth 5.0 powered by an Intel Wireless-AC 9560 WLAN card, wired 1 GbE from an Intel i219-V controller, consumer IR receiver, an included beam-forming microphone, an SDXC card reader, and stereo HD audio. USB connectivity includes four USB 3.1 type-A ports including a high-current port. Display outputs are care of two HDMI 2.0b, each with 7.1-channel digital audio passthrough. The company didn't reveal pricing, although you can already read a performance review of this NUC from the source link below.

Sapphire Outs an RX 570 Graphics Card with 16GB Memory, But Why?

Sapphire has reportedly developed an odd-ball Radeon RX 570 graphics card, equipped with 16 GB of GDDR5 memory, double the maximum memory amount the SKU is specced for. The card is based on the company's NITRO+ board design common to RX 570 through RX 590 SKUs, and uses sixteen 8 Gbit GDDR5 memory chips that are piggybacked (i.e., chips on both sides of the PCB). When Chinese tech publication MyDrivers reached out to Sapphire for an explanation behind such a bizarre contraption, the Hong Kong-based AIB partner's response was fascinating.

Sapphire said in its response that it wanted to bolster the card's crypto-currency mining power: giving the "Polaris 20" GPU additional memory improves its performance relative to ASIC miners on the Cuckoo Cycle algorithm. The algorithm can load up video memory with anywhere between 5.5 GB and 11 GB, so giving the RX 570 16 GB of it was Sapphire's logical next step. Of course, Cuckoo Cycle is being defeated time and again by currency curators. This card will be a stopgap for miners until ASIC mining machines with expanded memory come out, or the proof-of-work systems are significantly changed.

Hands On with a Pack of RTX 2060 Cards

NVIDIA late Sunday announced the GeForce RTX 2060 graphics card at $349. With performance rivaling the GTX 1070 Ti and RX Vega 56 on paper, and in some cases even the GTX 1080 and RX Vega 64, the RTX 2060 in its top-spec trim with 6 GB of GDDR6 memory could go on to be NVIDIA's best-selling product from its "Turing" RTX 20-series. At NVIDIA's CES 2019 booth, we went hands-on with a few of these cards, beginning with NVIDIA's de-facto reference-design Founders Edition. This card indeed feels smaller and lighter than the RTX 2070 Founders Edition.

The Founders Edition still doesn't compromise on looks or build quality, and is bound to look slick in your case, provided you manage to find one in retail. The RTX 2060 launch will be dominated by NVIDIA's add-in card partners, who will dish out dozens of custom-design products. Although NVIDIA didn't announce them, there are still rumors of other RTX 2060 variants with smaller memory amounts and GDDR5 memory. You get the full complement of display connectivity, including VirtualLink.

GDDR6 Memory Costs 70 Percent More than GDDR5

The latest GDDR6 memory standard, currently implemented by NVIDIA in its GeForce RTX 20-series graphics cards, commands a hefty premium. According to a 3DCenter.org report citing list-prices sourced from electronics components wholesaler DigiKey, 14 Gbps GDDR6 memory chips from Micron Technology cost over 70 percent more than common 8 Gbps GDDR5 chips of the same density, from the same manufacturer. Besides obsolescence, oversupply could be impacting GDDR5 chip prices.

Although GDDR6 is available in marginally cheaper 13 Gbps and 12 Gbps trims, NVIDIA has only been sourcing 14 Gbps chips. Even the company's upcoming RTX 2060 performance-segment graphics card is rumored to implement 14 Gbps chips in variants that feature GDDR6. The sheer disparity in pricing between GDDR6 and GDDR5 could explain why NVIDIA is developing cheaper GDDR5 variants of the RTX 2060. Graphics card manufacturers can save around $22 per card by using six GDDR5 chips instead of GDDR6.
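Working backward from those numbers gives a rough per-chip price gap (a sketch using only the figures quoted above; actual DigiKey list prices may differ):

```python
savings_per_card = 22.0  # USD saved per card, per the report
chips_per_card = 6       # six chips on a 192-bit card (32 bits per chip)

# Price gap between one GDDR6 chip and one GDDR5 chip of the same density
gap_per_chip = savings_per_card / chips_per_card
print(round(gap_per_chip, 2))  # ~3.67 USD per chip

# If GDDR6 costs 70% more, that gap is 0.7x the GDDR5 chip price,
# which implies a GDDR5 chip in the ~5 USD ballpark
gddr5_price = gap_per_chip / 0.7
print(round(gddr5_price, 2))
```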

Sapphire Outs Radeon RX 590 Nitro+ OC Sans "Special Edition"

Sapphire debuted its Radeon RX 590 series last month with the RX 590 Nitro+ Special Edition, which at the time was advertised as a limited-edition SKU. The company over the Holiday weekend updated its product stack to introduce a new mass-production SKU, the RX 590 Nitro+ OC, minus the "Special Edition" branding. There are only cosmetic changes between the two SKUs: Sapphire's favorite shade of blue on the Special Edition's cooler shroud makes way for matte black, as do the blue accents on the back-plate. The fan impellers are opaque matte black instead of frosty and translucent.

Thankfully, Sapphire hasn't changed the spec that matters: the factory overclock. The card still ships with a 1560 MHz engine clock (boost) and 8.40 GHz (GDDR5-effective) memory, plus a "quiet" second BIOS that dials the clocks down to 1545 MHz boost and 8.00 GHz memory. The underlying PCB is unchanged, too, drawing power from a combination of 8-pin and 6-pin PCIe power connectors and conditioning it with a 6+1 phase VRM. Display outputs include two each of DisplayPort 1.4 and HDMI 2.0, and a dual-link DVI-D. The company didn't reveal pricing, although we expect it to be marginally lower than that of the Special Edition SKU.

NVIDIA GeForce RTX 2060 to Ship in Six Variants Based on Memory Size and Type

NVIDIA drew consumer ire for differentiating its GeForce GTX 1060 into two variants based on memory, the GTX 1060 3 GB and GTX 1060 6 GB, with the two also featuring different GPU core-configurations. The company plans to double down - or should we say, triple down - on its sub-branding shenanigans with the upcoming GeForce RTX 2060. According to VideoCardz, citing a GIGABYTE leak about regulatory filings, NVIDIA could be carving out not two, but six variants of the RTX 2060!

There are at least two parameters that differentiate the six variants (that we know of, anyway): memory size and memory type. There are three memory sizes: 3 GB, 4 GB, and 6 GB. Each of the three memory sizes comes in two memory types, the latest GDDR6 and the older GDDR5. Based on the six RTX 2060 variants, GIGABYTE could launch up to thirty-nine SKUs. When you add up similar SKU counts from NVIDIA's other AIC partners, there could be upward of 300 RTX 2060 graphics card models to choose from. It won't surprise us if, in addition to memory size and type, GPU core-configurations also vary between the six RTX 2060 variants, compounding consumer confusion. The 12 nm "TU106" silicon already has "A" and "non-A" ASIC classes, so there could be as many as twelve new device IDs in all! The GeForce RTX 2060 is expected to debut in January 2019.
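The variant and device-ID math above is a simple cross product, sketched below (the "A"/"non-A" ASIC split is as described in the post):

```python
# Rumored RTX 2060 differentiation: memory size x memory type
sizes = ["3 GB", "4 GB", "6 GB"]
mem_types = ["GDDR6", "GDDR5"]

variants = [(size, mt) for size in sizes for mt in mem_types]
print(len(variants))  # 6 variants

# Each variant could further split into "A" and "non-A" ASIC classes
asic_classes = ["A", "non-A"]
print(len(variants) * len(asic_classes))  # 12 possible device IDs
```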

AMD Radeon RX 590 Launch Price, Other Details Revealed

AMD is very close to launching its new Radeon RX 590 graphics card, targeting a middle-of-market segment that sells in high volumes, particularly with Holiday around the corner. The card is based on the new 12 nm "Polaris 30" silicon, which has the same exact specifications as the "Polaris 20" silicon and the original "Polaris 10," but comes with significantly higher clock-speed headroom thanks to the new silicon fabrication process, which AMD and its partners will use to dial engine clock speeds up by 10-15% over those of the RX 580. While the memory is still 8 Gbps 256-bit GDDR5, some partners will ship overclocked memory.

According to a slide deck seen by VideoCardz, AMD is setting the baseline price of the Radeon RX 590 at USD $279.99, which is about $50 higher than the RX 580 8 GB, and $40 higher than the price the RX 480 launched at. AMD will add value to that price by bundling three AAA games: "Tom Clancy's The Division 2," "Devil May Cry 5," and "Resident Evil 2." The latter two titles are unreleased, and the three games together represent a $120-150 value. AMD will also work with monitor manufacturers to come up with graphics card + AMD FreeSync monitor bundles.

HIS Radeon RX 590 IceQ X² Detailed

With a little JavaScript trickery, Redditor "BadReIigion" succeeded in making the company website of AMD partner HIS spit out details of its upcoming Radeon RX 590 IceQ X² graphics card (model number: HIS-590R8LCBR). Pictured below is the RX 580 IceQ X², but we expect the RX 590-based product to be mostly similar, with cosmetic changes such as a different cooler shroud or back-plate design. The website confirms some details, like the ASIC being "Polaris 30 XT," a rendition of the 2,304-SP "Polaris 20" die on the 12 nm FinFET node, and that the card features 8 GB of GDDR5 memory. Some of the other details, such as the engine clock being listed as "2000 MHz," are unlikely to be accurate.

The consensus emerging from RX 590 leaks so far puts custom-design, factory-overclocked cards at around 1500-1550 MHz boost, a 100-200 MHz improvement over the RX 580. Some board vendors such as Sapphire are even overclocking the memory by about 5%. "Polaris 30" is likely pin-compatible with "Polaris 20," because most board vendors are reusing their RX 580 PCBs, some of which are even carried over from the RX 480. For the HIS RX 590 IceQ X², this means drawing power from a single 8-pin PCIe power connector.

Sapphire Radeon RX 590 NITRO+ Special Edition Detailed

Sapphire is developing a premium variant of its upcoming Radeon RX 590 series, called the RX 590 NITRO+ Special Edition, much like the "limited edition" branding it gave its premium RX 580-based card. Komachi Ensaka accessed leaked brochures of this card, which will bear an internal SKU code 11289-01. The brochure also confirms that the RX 590 features an unchanged 2,304 stream processor count from the RX 580, and continues to feature 8 GB of GDDR5 memory across a 256-bit wide memory interface. All that's new is improved thermals from a transition to the new 12 nm FinFET silicon fabrication process.

The Sapphire RX 590 NITRO+ SE ships with two clock-speed profiles, which can probably be toggled on the hardware by switching between two BIOS ROMs. The first profile, called NITRO+ Boost, runs the GPU at 1560 MHz and the memory at 8400 MHz (GDDR5-effective). The second profile, called Silent Mode, reduces the engine boost clock to 1545 MHz and the memory to 8000 MHz. The fan settings are unchanged between the two profiles: the fans stay off until the GPU warms up to 54 °C, reach their nominal speed of up to 2,280 RPM at 75 °C, and cut off again once the GPU cools to 45 °C; the maximum fan speed is 3,200 RPM.

ASUS Prepares GPP-Ridden Radeon RX 590 ROG STRIX Graphics Card for Launch

VideoCardz, through their industry sources, say they've confirmed that ASUS is working on their own Radeon RX 590 ROG STRIX graphics card. The naming isn't a typo: the GPP-fueled AREZ moniker has apparently gone out the window for ASUS by now, and the RX 590 should be marketed under its (again) brand-agnostic ROG lineup. The product name (ASUS Radeon RX 590 ROG STRIX GAMING) and code (ROG-STRIX-RX590-8G-GAMING) indicate the use of 8 GB of graphics memory, just like the RX 580, and we all expect this to be of the GDDR5 kind with no further refinements. It's all in the die, as they (could) say.

Alleged AMD RX 590 3D Mark Time Spy Scores Surface

Benchmark scores for 3D Mark's Time Spy have surfaced, purported to represent the performance level of an unidentified "Generic VGA" - which is being identified as AMD's new 12 nm Polaris revision. The RX 590 product name makes almost as much sense as it doesn't, though; for one, there's no real reason to release an entire new RX 600 series, unless AMD is giving the 12 nm treatment to the entire lineup (which likely won't happen, due to the investment in fabrication-process redesign and node capacity required for such a move). As such, the RX 590 moniker makes sense if AMD is only looking to increase its competitiveness in the sub-$300 space as a stop-gap until it finally has a new graphics architecture up its shader sleeves.

Intel "Crimson Canyon" NUCs with Discrete GPUs Up for Pre-order

One of the first Intel NUC (next unit of computing) mini PCs to feature a fully discrete GPU (rather than an MCM combining CPU and GPU dies), the "Crimson Canyon" NUC8i3CYSM and NUC8i3CYSN are up for pre-order. The former is priced at USD $529, while the latter goes for $574. The two combine Intel's 10 nm Core i3-8121U "Cannon Lake" SoC with an AMD Radeon 540 discrete GPU. Unlike the "Hades Canyon" NUC, which features an MCM with a powerful AMD Radeon Vega M GPU die and a quad-core "Kaby Lake" CPU die, the "Crimson Canyon" features its processor and GPU on separate packages. The Radeon 540 packs 512 stream processors, 32 TMUs, and 16 ROPs, with 2 GB of GDDR5 memory.

All that differentiates the NUC8i3CYSM from the NUC8i3CYSN is memory: you get 4 GB of LPDDR4 memory with the former, and 8 GB of it with the latter. Both units come with a 2.5-inch 1 TB HDD pre-installed. You also get an M.2-2280 slot with PCIe 3.0 x4 wiring and support for Optane caching. An Intel Wireless-AC 9560 WLAN card handles wireless networking, while an i219-V controller handles wired. Connectivity includes four USB 3.0 type-A ports, one of which is high-current; an SDXC card reader, CIR, two HDMI 2.0 outputs, and 7.1-channel HD audio. The NUC has certainly grown in size over the years: this one measures 117 mm x 112 mm x 52 mm (WxDxH), and an external 90 W power-brick adds to the bulk.

NVIDIA GeForce GT 1030 Shipping with DDR4 Instead of GDDR5

Low-end graphics cards usually don't attract much attention from the enthusiast crowd. Nevertheless, not all computer users are avid gamers, and most average-joe users are perfectly happy with an entry-level graphics card, for example, a GeForce GT 1030. To refresh our memories a bit, NVIDIA launched the GeForce GT 1030 last year to compete against AMD's Radeon RX 550. It was recently discovered that several manufacturers have been shipping a lower-spec'd version of the GeForce GT 1030. According to NVIDIA's official specifications, the reference GeForce GT 1030 shipped with 2 GB of GDDR5 memory running at 6008 MHz (GDDR5-effective) across a 64-bit wide memory bus, which amounts to a memory bandwidth of 48 GB/s. However, some models from MSI, Gigabyte, and Palit come with DDR4 memory operating at 2100 MHz instead. If you do the math, that comes down to a memory bandwidth of 16.8 GB/s, which is certainly a huge downgrade, on paper at least. The good news amid the bad is that the DDR4-based variants consume 10 W less than the reference model.
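The bandwidth figures check out with the usual formula, bandwidth = effective data rate x bus width / 8; a minimal sketch in Python:

```python
def bandwidth_gb_s(effective_mt_s: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s, from the effective data rate
    (in MT/s) and the memory bus width (in bits)."""
    return effective_mt_s * bus_width_bits / 8 / 1000

print(bandwidth_gb_s(6008, 64))  # ~48.06 GB/s -- reference GDDR5 GT 1030
print(bandwidth_gb_s(2100, 64))  # 16.8 GB/s -- the DDR4 variants
```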

Will this memory swap affect real-world performance? Probably. However, we won't know to what extent without proper testing. Unlike the GeForce MX150 fiasco, manufacturers were kind enough to let consumers know the difference between the two models this time around. The lower-end DDR4 variant carries a "D4" denotation as part of the graphics card's model name, or consumers can find the denotation on the box. Beware, though, as not all manufacturers will give you the heads-up. For example, Palit doesn't.

ASUS Intros Radeon RX 570 Expedition Graphics Card

ASUS today introduced the Radeon RX 570 Expedition graphics card (model: EX-RX570-O8G). The card is part of the company's Expedition family of graphics cards and motherboards designed for the rigors of gaming i-cafes, and is built with slightly more durable electrical components, and IP5X-certified dust-proof fans. The card features an engine clock (GPU clock) of up to 1256 MHz out of the box (against 1240 MHz reference), while its memory clock is untouched at 7.00 GHz (GDDR5-effective). It features 8 GB of memory.

The card is cooled by a custom-design aluminium fin-stack heatsink, fed by a pair of 8 mm-thick nickel-plated copper heat pipes and ventilated by a pair of IP5X-certified 80 mm dual ball-bearing fans that are programmed to stay off when the GPU temperature is under 55 °C. The card is put through 144 hours of extreme stress-testing before being packaged. Power is drawn from a single 8-pin PCIe power connector. Display outputs include one each of DisplayPort 1.4, HDMI 2.0b, and dual-link DVI-D. The company didn't reveal pricing.

Gigabyte GeForce GTX 1060 5GB Windforce OC Already Spotted in the Wild

Yesterday we broke the news that NVIDIA was preparing to launch a fourth variant of their GeForce GTX 1060 graphics card. The Gigabyte GeForce GTX 1060 5GB Windforce OC is the first custom model to appear in the wild so far. The overall design is similar to its GTX 1060 6GB Windforce OC sibling. The dual-fan Windforce 2X cooling system is present once again, as is the full-cover backplate with the Gigabyte engraving. The similarities don't stop there, either: the technical specifications are identical as well. The GTX 1060 5GB Windforce OC runs with a base clock of 1556 MHz and a boost clock of 1771 MHz in "Gaming" mode, while in "OC" mode, the graphics card cranks the base clock up to 1582 MHz and the boost clock to 1797 MHz. In terms of memory, the GTX 1060 5GB Windforce OC ships with 5 GB of GDDR5 memory running at 8008 MHz (GDDR5-effective), which comes down to a bandwidth of 160 GB/s across a 160-bit bus. The video outputs consist of two DVI ports, one HDMI port, and a DisplayPort.

NVIDIA Prepares a GeForce GTX 1060 5GB for Internet Cafes

NVIDIA is expanding their GeForce GTX 1060 offerings with a new 5GB model. The GTX 1060 5GB will utilize the GP106-350-K3-A1 GPU and feature 1280 CUDA cores. It's equipped with 5 GB of GDDR5 memory connected by a 160-bit memory interface. Let's remember that the GTX 1060 already comes in three variants: 6GB (9 Gbps), 6GB, and 3GB. So, the question here is: why did NVIDIA suddenly decide to add a fourth member to the already big GTX 1060 family? Apparently, the main motivation behind the 5GB model's creation is to provide internet cafes with a cost-effective option to deliver a 60 FPS gaming experience at 1080p. According to Expreview, the GTX 1060 5GB is exclusive to the Chinese market, and it won't be available at retail. That means you won't find the GTX 1060 5GB on any shelves; if you really want to get your hands on one, e-commerce websites like Taobao or Alibaba are your only options.

Micron Analyses 2017, Looks at the Future of Memory Business

It was a banner year for graphics, both in terms of market strength and technology advancements. Gaming, virtual reality, crypto mining, and artificial intelligence fueled demand for GPUs in 2017. The market responded with a wide array of products: high-performance discrete PC graphics cards that let gamers run multiple 4K displays; game consoles and VR headsets; and workstation-class GPUs that can build the stunning effects we have all come to expect. And since these products are full of our GDDR5 or G5X memory, it was an exciting year for Micron's graphics team too. We had a record-breaking year in GDDR5 shipments and further solidified Micron's industry leadership in graphics memory with the launch of our 12 Gb/s G5X, the highest-performance mass production GDDR memory.

Inno3D Launches New P104-100 Crypto-Mining Accelerator

INNO3D, a leading manufacturer of awesome high-end multimedia components and various innovations enriching your life, introduces its new P104-100 Crypto-Mining Accelerator. The new range will be available in a TWIN X2 edition. The P104-100 has been designed with no less than 40% more mining power than its predecessor, allowing the miner to enhance ETH, ZEC, and other number crunching to levels that have never been seen before. The freshly forged radical comes packed with 4 GB of 11 Gbps GDDR5X memory for optimizing cryptocurrency calculations. By deploying the INNO3D P104-100, miners can now enjoy the ultimate power and best-in-class hash rates.

NVIDIA GeForce GTX 1070 Ti Overclocking to be Restricted

NVIDIA could severely limit the overclocking capabilities of its upcoming "almost GTX 1080" performance-segment graphics card, the GeForce GTX 1070 Ti. The company will tightly control the non-reference clock-speeds at which its add-in card (AIC) partners ship their custom-design graphics cards, and there could even be tighter limits on how far you can overclock these cards yourself. NVIDIA is probably doing this to ensure it doesn't completely cannibalize its GeForce GTX 1080 graphics card, which was recently refreshed with faster 11 Gbps GDDR5X memory.

The GTX 1070 Ti is based on the "GP104" Pascal silicon with a core-configuration that's a big step up from the current GTX 1070, and very close to that of the GTX 1080. It features 2,432 CUDA cores, just 128 fewer than the GTX 1080, and a core clock speed of 1608 MHz that's on par with the pricier card, too. The GPU Boost frequency is set to 1683 MHz, which is lower than the 1733 MHz of the GTX 1080. It also features slower GDDR5 memory. The GTX 1070 Ti is expected to launch by the 26th of October, priced at $429.

NVIDIA GeForce GTX 1070 Ti Could Feature 9 Gbps GDDR5 Memory

NVIDIA's upcoming GeForce GTX 1070 Ti performance-segment graphics card, which could launch toward the end of this month with market availability following in early November, could feature 9 Gbps GDDR5 memory, and not the previously-thought 8 Gbps GDDR5. This "almost-GTX 1080" answer from NVIDIA to AMD's RX Vega 56 features 2,432 CUDA cores, 152 TMUs, 64 ROPs, and a 256-bit wide GDDR5 memory interface holding 8 GB of memory. It will be available at a price-point competitive with AMD's RX Vega series, and could come in custom designs from NVIDIA's add-in card partners.

The GTX 1070 Ti will be NVIDIA's second SKU to max-out the GDDR5 clock band. The company had, in late-2016, refreshed the mid-range GeForce GTX 1060 6 GB to feature 9 Gbps memory in an effort to compensate for its narrower 192-bit wide memory interface, improving its competitiveness against the Radeon RX 480 8 GB. The company had also, at the time, refreshed the GTX 1080 with faster 11 Gbps GDDR5X memory, which means the GTX 1080 cards with the SKU's original 10 Gbps GDDR5X memory clock could be phased out of the market. NVIDIA will ride into the crucial Holiday 2017 season with its existing GeForce "Pascal" family, bolstered by the new GTX 1070 Ti.