News Posts matching #GDDR5


ASUS Debuts Latest VivoBook and ZenBook Series Lineup at CES 2020

Today, at CES 2020, ASUS debuted its latest VivoBook and ZenBook laptop refreshes. The first in the lineup is the VivoBook S series, which consists of the VivoBook S333, S433, and S533 models. At the heart of these new laptops are Intel's latest 10th generation processors, which give these ultra-portable notebooks the power to perform everyday tasks with ease. The S433 and S533 models offer a choice of Core i7 and Core i5 "Comet Lake" CPUs, while the S333 is powered by "Ice Lake." Comet Lake configurations come with either the Intel Core i7-10510U or Core i5-10210U processor, while Ice Lake-powered models offer a choice between the Intel Core i7-1065G7 and Core i5-1035G1. As far as graphics power goes, users can stick with the Intel iGPU, or add an NVIDIA MX-series GPU at extra cost.

Desktop AMD Radeon RX 5300 XT Rears its Head in HP Specs Sheet

A desktop variant of the AMD Radeon RX 5300 XT graphics card surfaced in the specs sheet of an upcoming HP Pavilion desktop model (TP01-0004ng). The listing describes the RX 5300 XT as featuring 4 GB of GDDR5 memory. This is the first sighting of an RX 5300 XT in the wild. An older report citing driver files pointed to there being only two RX 5300-series products: the RX 5300 (desktop) and the mobile RX 5300M. The "XT" brand extension could denote a higher CU count than the RX 5300, although the series is still differentiated from the RX 5500 series by its cheaper GDDR5 memory.

The RX 5300 series and RX 5500 series are based on a common silicon, the 7 nm "Navi 14," featuring up to 24 RDNA compute units (up to 1,536 stream processors), and a 128-bit wide memory interface that supports up to 8 GB of GDDR6 or GDDR5 memory. AMD hopes to phase out its 14 nm and 12 nm "Polaris 10" and "Polaris 30" chips with "Navi 14." It's possible that the RX 5300 XT is OEM-only, just like the other key component in this HP desktop, the Ryzen 5 3500 6-core/6-thread processor.

ASRock Launches the Phantom Gaming Radeon 550 2G Graphics Card

No, that's not a typo. ASRock has actually launched an AMD Radeon 550 graphics card this late into the game. There is some sense behind the business decision, though; AMD's Radeon 550 is the company's entry-level offering, which aims only to improve upon the performance of integrated graphics solutions, and no more. In that sense, the Radeon 550 certainly delivers, though in a way that will underwhelm tech enthusiasts. The Radeon 550 2G features 2 GB of GDDR5 memory clocked at 1,750 MHz (7,000 MHz effective) across a 64-bit bus, which feeds a die powered by 512 stream processors.

There are three operating modes on offer. Silent mode, meant for HTPC environments, runs the card at a 1,183 MHz boost clock and 7,000 MHz memory. Default mode runs the card at AMD's reference clocks (1,183 MHz boost, 7,000 MHz memory). OC mode raises the boost clock to 1,230 MHz and the memory to 7,038 MHz. I/O includes one dual-link DVI-D connector, one HDMI 2.0b port, and one DisplayPort 1.4. A 50 W TDP should make the card easy to cool in cramped spaces, and it doesn't require any power connectors. No pricing was available at the time of writing.
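For those curious how such a card's memory bandwidth works out, here is a quick back-of-the-envelope sketch in Python. The helper function is ours, not ASRock's; it simply applies standard GDDR5 quad-data-rate math to the clocks quoted above:

```python
# Rough GDDR5 bandwidth math for the Radeon 550 2G.
# gddr5_bandwidth_gbs is a hypothetical helper, not vendor code.

def gddr5_bandwidth_gbs(base_clock_mhz: float, bus_width_bits: int) -> float:
    # GDDR5 transfers 4 bits per pin per base clock (quad data rate),
    # so a 1,750 MHz base clock yields a 7 Gbps effective data rate.
    effective_gbps_per_pin = base_clock_mhz * 4 / 1000
    return effective_gbps_per_pin * bus_width_bits / 8  # GB/s

print(gddr5_bandwidth_gbs(1750, 64))  # -> 56.0 GB/s on the 64-bit bus
```

That works out to a modest 56 GB/s, in line with the card's positioning as a step up from integrated graphics.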

AMD Announces New Radeon Embedded E9000 Series GPU Models

The AMD Embedded business provides SoCs and discrete GPUs that enable casino gaming companies to create immersive and beautiful graphics for the latest in casino gaming platforms, which are adopting the same high-quality motion graphics and experiences seen in modern consumer gaming devices. AMD Embedded provides casino and gaming customers a breadth of solutions to drive virtually any gaming system. The AMD Ryzen Embedded V1000 SoC brings CPU and GPU technology together in one package, providing the capability to run up to four 4K displays from one system. The AMD Ryzen Embedded R1000 SoC is a power efficient option while providing up to 4X better CPU and graphics performance per dollar than the competition.

Beyond SoCs, AMD also offers embedded GPUs to enable stunning, immersive visual experiences while supporting efficient thermal design power (TDP) profiles. AMD delivers three discrete GPU classes to customers with the AMD Embedded Radeon ultra-high-performance embedded GPUs, the AMD Embedded Radeon high-performance embedded GPUs and the AMD Embedded Radeon power-efficient embedded GPUs. These three classes enable a wide range of performance and power consumption, but most importantly offer features that the embedded industry demands including planned longevity, enhanced support and support for embedded operating systems.

NVIDIA GeForce GTX 1660 Super Releases on Oct 29th

Chinese website ITHome has new info on the release of NVIDIA's GeForce GTX 1660 Super graphics cards. According to their website, the release is expected for October 22nd, which seems credible, considering NVIDIA always launches on a Tuesday. As expected, the card will be built around the Turing TU116 graphics processor, which also powers the GTX 1660 and GTX 1660 Ti. The shader count could be 1472, which would position the card between the GTX 1660 (1408 cores) and GTX 1660 Ti (1536 cores). The memory size will be either 4 GB or 6 GB. Specifications of the memory are somewhat vague; it is rumored that NVIDIA's GTX 1660 Super will use GDDR6 chips, just like the GTX 1660 Ti — the plain GTX 1660 uses GDDR5 memory. Another possibility is that the shader count matches the GTX 1660, and the only difference (other than clock speeds) is that the GTX 1660 Super uses GDDR6 VRAM.

The Chinese pricing is expected around 1100 Yuan, which converts to $150 — surprisingly low, considering GTX 1660 retails at $210 and GTX 1660 Ti is priced at $275. Maybe NVIDIA is adjusting their pricing to preempt AMD's upcoming Radeon RX 5500/5600 Series. Videocardz has separately confirmed this rumor with their sources at ASUS Taiwan, who are expected to launch at least three SKUs based on the new NVIDIA offering, among them DUAL EVO, Phoenix and TUF3 series.

Update Oct 24th: Seems the actual launch is October 29th, this post has more info: https://www.techpowerup.com/260391/nvidia-geforce-gtx-1660-super-launching-october-29th-usd-229-with-gddr6

MSI Releases a Low-profile GeForce GTX 1650 Graphics Card

MSI released one of the first low-profile (half-height) graphics cards based on the GeForce GTX 1650. The card uses a monolithic aluminium heatsink that's ventilated by two 60 mm fans. Although there's just one row of display outputs, the cooler is over one slot thick, and so you get dual-slot I/O shields for both full-height and half-height (low-profile) cases. The card relies on the PCI-Express 3.0 x16 slot for all its power, and sticks to NVIDIA-reference clock speeds of 1665 MHz boost and 8.00 GHz (GDDR5-effective) memory. Based on the 12 nm "TU117" silicon, the GeForce GTX 1650 features 896 "Turing" CUDA cores, 56 TMUs, 32 ROPs, and a 128-bit wide GDDR5 memory interface, holding 4 GB of memory. Display outputs on this MSI low-profile card surprisingly lack DisplayPort; you only get an HDMI 2.0b and a dual-link DVI-D (which lacks analog D-Sub pins).

AMD Memory Tweak Tool Lets You OC and Tweak AMD Radeon Memory Timings On-the-fly

Eliovp, who describes himself on GitHub as a Belgian [crypto] mining enthusiast, created what could go down as the best thing to happen to AMD Radeon users all decade. The AMD Memory Tweak Tool is a Windows- and Linux-based GUI utility that lets you not just overclock AMD Radeon graphics card memory on the fly, but also tweak its memory timings. Most timings apply live while your machine is running within the Windows/Linux GUI; some require memory retraining via a reboot, which means they can't be changed at this time, because rebooting reverts the timings to their defaults. The author is trying to figure out a way to run memory training at runtime, which would let you change those timings, too, in the future. While you're at it, the tool also lets you play with GPU core frequency and fan control.

The AMD Memory Tweak Tool supports both Windows and Linux (GUI), and works with all recent AMD Radeon GPUs with GDDR5 and HBM2 memory types. It requires Radeon Software Adrenalin 19.4.1 or later on Windows, or amdgpu-pro/ROCm actively handling the GPU on Linux. The Linux version further has some dependencies, such as pciutils-dev, libpci-dev, build-essential, and git. The source code for the utility is up on GitHub for you to inspect and test.

DOWNLOAD: AMD Memory Tweak Tool by Eliovp

ZOTAC Announces the ZBOX QX Series Mini PC Powered by Xeon and Quadro

ZOTAC Technology, a global manufacturer of innovation, today introduced the more capable ZBOX Q Series Mini Creator PC featuring the advanced NVIDIA Quadro GPU and powerful workstation-focused Intel Xeon processor. The new addition to the ZBOX Q Series leverages the ZBOX Mini PC's sleek and minimal design without compromising the powerful hardware components inside. From stunning industrial design and advanced special effects, to complex scientific visualization and sophisticated data modeling, to creating and editing images and videos, the ZBOX Q Series enables limitless creations.

The new ZBOX Q Series features industry-certified NVIDIA Quadro graphics with up to 16 GB of GDDR5 memory, tested and certified for compatibility with many major professional design applications. The new Q Series models come equipped with an Intel Xeon processor to deliver fast and responsive performance.

AMD Readies Radeon RX 640, an RX 550X Re-brand

One of our readers discovered an interesting entry in the INF file of AMD's Adrenalin 19.4.3 graphics drivers. It includes two instances of "Radeon RX 640," and has the same device ID as the Radeon RX 550X from the current generation. The branding flies in the face of reports suggesting that with its next-generation "Navi" GPUs, AMD could refresh its client-segment nomenclature to follow the "Radeon RX 3000" series, but it's possible that the RX 600 series was carved out to re-brand the existing "Polaris" based low-end chips one step-down (i.e. RX 550X re-branding as RX 640, RX 560 possibly as RX 650, etc.).

The move to create the RX 600 series could also be driven by AMD's need to contain all "Navi" based SKUs in the RX 3000 series, and re-branded "Polaris" based ones in the RX 600 series, so that, at least initially, consumers aren't led to believe they're buying a re-branded "Polaris" SKU when opting for an RX 3000-series graphics card. It's also possible that AMD may not create low-end chips based on "Navi" initially, and will focus on the performance segment with the highest sales volumes among serious gamers, the $200-400 price range. Based on the 14 nm "Lexa" silicon, the RX 550X is equipped with 640 stream processors, 32 TMUs, 16 ROPs, and 2 GB of GDDR5 memory across a 128-bit wide memory bus. Given the performance gains expected from Intel's Gen11 "Ice Lake" iGPU and AMD's own refreshed "Picasso" APU, the RX 640 could at best be a cheap iGPU replacement for systems that lack one.
Image Credit: Just Some Noise (TechPowerUp Forums)

Manli Introduces its GeForce GTX 1650 Graphics Card Lineup

Manli Technology Group Limited, the major manufacturer of graphics cards and other components, today announced the affordable new member of the 16-series family - the Manli GeForce GTX 1650. The Manli GeForce GTX 1650 is powered by the award-winning NVIDIA Turing architecture. It is equipped with 4 GB of GDDR5 memory on a 128-bit memory controller, and 896 CUDA cores with a core frequency of 1485 MHz that can dynamically boost up to 1665 MHz. Moreover, the Manli GeForce GTX 1650 consumes only 75 W, with no external power connector required.

NVIDIA GeForce GTX 1650 Released: TU117, 896 Cores, 4 GB GDDR5, $150

NVIDIA today rolled out the GeForce GTX 1650 graphics card at USD $149.99. Like its other GeForce GTX 16-series siblings, the GTX 1650 is derived from the "Turing" architecture, but without RTX real-time raytracing hardware, such as RT cores or tensor cores. The GTX 1650 is based on the 12 nm "TU117" silicon, which is the smallest implementation of "Turing." Measuring 200 mm² (die area), the TU117 crams 4.7 billion transistors. It is equipped with 896 CUDA cores, 56 TMUs, 32 ROPs, and a 128-bit wide GDDR5 memory interface, holding 4 GB of memory clocked at 8 Gbps (128 GB/s bandwidth). The GPU is clocked at 1485 MHz, and the GPU Boost at 1665 MHz.

The GeForce GTX 1650 at its given price is positioned competitively against the Radeon RX 570 4 GB from AMD. NVIDIA has been surprisingly low-key about this launch, leaving it up to the partners not just to drive the launch, but also to sample reviewers. There are no pre-launch reviewer drivers provided by NVIDIA, and hence we don't have a launch-day review for you yet. We do have GTX 1650 graphics cards, namely the Palit GTX 1650 StormX, MSI GTX 1650 Gaming X, and ASUS ROG GTX 1650 Strix OC.

Update: Catch our reviews of the ASUS ROG Strix GTX 1650 OC and MSI GTX 1650 Gaming X

Colorful Announces GeForce GTX 1650 4GB Ultra Graphics Card

Colorful Technology Company Limited, professional manufacturer of graphics cards, motherboards, and high-performance storage solutions, is proud to announce the launch of its latest graphics card for the entry-level gaming market. The COLORFUL iGame GeForce GTX 1650 Ultra 4G brings NVIDIA's Turing graphics architecture to the masses, and COLORFUL brings the best out of the GPU thanks to its years of experience working with gamers.

For new gamers, and for those upgrading from integrated graphics who want a taste of what's to come, the COLORFUL iGame GeForce GTX 1650 Ultra 4G is a prime choice to start with. It features the latest NVIDIA GPU technology: the 12 nm Turing architecture brings with it the best of GeForce, including GeForce Experience, NVIDIA Ansel, G-Sync and G-Sync Compatible monitor support, Game Ready Drivers, and much more. COLORFUL has given the iGame GTX 1650 Ultra 4G a performance boost via a one-key OC button, so you can get extra performance without tinkering.

NVIDIA to Flesh out Lower Graphics Card Segment with GeForce GTX 1650 Ti

It seems NVIDIA's partners are gearing up for yet another launch, sometime after the GTX 1650 finally becomes available. EEC listings have made it clear that partners are working on another TU117 variant, with improved performance, sitting between the GTX 1650 and the GTX 1660, which should bring the fight to AMD's Radeon RX 580. Of course, with the GTX 1660 sitting pretty at a $219 price, this leaves anywhere between the GTX 1650's $149 and the GTX 1660's $219 for the GTX 1650 Ti to fill. With the GTX 1660 being an average of 13% faster than the RX 580, it makes sense for NVIDIA to look for another SKU to cover that large pricing gap between the 1650 and the 1660.

It's speculated that the GeForce GTX 1650 Ti could feature 1024 CUDA cores, 32 ROPs, and 64 TMUs. These should be paired with the same 4 GB of GDDR5 VRAM running across a 128-bit bus at the same 8000 MHz effective clock speed as the GTX 1650, delivering a bandwidth of 128 GB/s. Should NVIDIA pull off the feat of keeping the same 75 W TDP between its Ti and non-Ti GTX 1650 (as it did with the GTX 1660), that could mean a 75 W graphics card contending with AMD's 185 W RX 580 - a mean, green feat in the power efficiency arena. A number of GTX 1650 Ti SKUs have been leaked on ASUS' side of the field, which you can find after the break.

GAINWARD, PALIT GeForce GTX 1650 Pictured, Lack DisplayPort Connectors

In the build-up to NVIDIA's GTX 1650 release, more and more cards are being revealed. GAINWARD's and PALIT's designs won't bring much in the way of interesting PCB differences to be perused, since the PCBs are exactly the same. The GAINWARD Pegasus and the PALIT StormX only differ in terms of shroud design, and both cards carry the same TU117 GPU paired with 4 GB of GDDR5 memory.

ZOTAC GeForce GTX 1650 Pictured: No Power Connector

Here are some of the first clear renders of an NVIDIA GeForce GTX 1650 graphics card, this particular one from ZOTAC. The GTX 1650, slated for April 22, will be the most affordable GPU based on the "Turing" architecture when launched. The box art confirms this card features 4 GB of GDDR5 memory. The ZOTAC card is compact and SFF-friendly, is no longer than the PCIe slot itself, and is two slots thick. Its cooler is a simple fan-heatsink with an 80 mm fan ventilating an aluminium heatsink with radially-projecting fins. The card makes do with the 75 W of power drawn from the PCIe slot, and has no additional power connectors. Display outputs include one each of DisplayPort 1.4, HDMI 2.0b, and dual-link DVI-D.

NVIDIA GeForce GTX 1650 Availability Revealed

NVIDIA is expected to launch its sub-$200 GeForce GTX 1650 graphics card on the 22nd of April, 2019. The card was earlier expected to launch towards the end of April. With it, NVIDIA will introduce the 12 nm "TU117," its smallest GPU based on the "Turing" architecture. The GTX 1650 could replace the current GTX 1060 3 GB, and may compete with AMD offerings in this segment, such as the Radeon RX 570 4 GB, in being Full HD-capable, even if it won't let you max out your game settings at that resolution. The card could ship with 4 GB of GDDR5 memory.

ZOTAC Unveils its GeForce GTX 1660 Series

ZOTAC Technology, a global manufacturer of innovation, is pleased to expand the GeForce GTX 16 series with the ZOTAC GAMING GeForce GTX 1660 series featuring GDDR5 memory and the NVIDIA Turing Architecture.

Founded in 2017, ZOTAC GAMING is the pioneer movement that comes forth from the core of the ZOTAC brand that aims to create the ultimate PC gaming hardware for those who live to game. It is the epitome of our engineering prowess and design expertise representing over a decade of precision performance, making ZOTAC GAMING a born leading force with the goal to deliver the best PC gaming experience. The logo shows the piercing stare of the robotic eyes, where behind it, lies the strength and future technology that fills the ego of the undefeated and battle experienced.

NVIDIA Launches the GeForce GTX 1660 6GB Graphics Card

NVIDIA today launched the GeForce GTX 1660 6 GB graphics card, its successor to the immensely popular GTX 1060 6 GB. With prices starting at $219.99, the GTX 1660 is based on the same 12 nm "TU116" silicon as the GTX 1660 Ti launched last month, with fewer CUDA cores and a slower memory interface. NVIDIA carved the GTX 1660 out by disabling 2 out of 24 "Turing" SMs on the TU116, resulting in 1,408 CUDA cores, 88 TMUs, and 48 ROPs. The company is using 8 Gbps GDDR5 memory instead of 12 Gbps GDDR6, which makes the memory sub-system 33 percent slower. The GPU is clocked at 1530 MHz, with 1785 MHz boost, marginally higher than the GTX 1660 Ti's clocks. The GeForce GTX 1660 is a partner-driven launch, meaning there won't be any reference-design cards, although NVIDIA made sure every AIC partner has at least one product selling at the baseline price of $219.99.
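The "33 percent slower" figure follows directly from the per-pin data rates, since both cards use a 192-bit bus. A minimal sketch, assuming the rated 8 Gbps (GDDR5) and 12 Gbps (GDDR6) speeds; the helper is illustrative, not NVIDIA tooling:

```python
# Comparing the GTX 1660 and GTX 1660 Ti memory sub-systems.

def bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8  # GB/s

gtx_1660 = bandwidth_gbs(8, 192)      # 8 Gbps GDDR5  -> 192 GB/s
gtx_1660_ti = bandwidth_gbs(12, 192)  # 12 Gbps GDDR6 -> 288 GB/s

print(1 - gtx_1660 / gtx_1660_ti)  # -> 0.333..., i.e. 33 percent slower
```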

Read TechPowerUp Reviews: Zotac GeForce GTX 1660 | EVGA GeForce GTX 1660 XC Ultra | Palit GeForce GTX 1660 StormX OC | MSI GTX 1660 Gaming X

Update: We have updated our GPU database with all GTX 1660 models announced today, so you can easily get an overview over what has been released.

EVGA and GIGABYTE GeForce GTX 1660 Graphics Cards Pictured

Here are some of the first pictures of EVGA's and GIGABYTE's upcoming GeForce GTX 1660 graphics cards reportedly slated for launch later this week. It should come as no surprise that these cards resemble the companies' GTX 1660 Ti offerings, since they're based on the same 12 nm "TU116" silicon, with fewer CUDA cores. The underlying PCBs could be slightly different as the GTX 1660 uses older generation 8 Gbps GDDR5 memory instead of 12 Gbps GDDR6. The "TU116" silicon is configured with 1,408 CUDA cores out of the 1,536 physically present; the memory amount is 6 GB, across a 192-bit wide memory bus. The GTX 1660 baseline price is reportedly USD $219, and the card replaces the GTX 1060 6 GB from NVIDIA's product stack.

EVGA is bringing two designs to market: a short triple-slot card with a single fan, and a more conventional longer card with a dual-slot, dual-fan design. The baseline "Black" card could be offered in the shorter design, while the top-tier XC Ultra could be exclusive to the longer one. GIGABYTE, on the other hand, has a shorter dual-fan design and a longer triple-fan design; both are dual-slot. The baseline SKU will be restricted to the shorter board design, while premium Gaming OC SKUs could come in the longer one.

Details on GeForce GTX 1660 Revealed Courtesy of MSI - 1408 CUDA Cores, GDDR5 Memory

Details on NVIDIA's upcoming mainstream GTX 1660 graphics card have been revealed, which will help put its graphics-crunching prowess up to scrutiny. The new graphics card from NVIDIA slots in below the recently released GTX 1660 Ti (which provides roughly 5% better performance than NVIDIA's previous GTX 1070 graphics card) and above the yet-to-be-released GTX 1650.

The 1408 CUDA cores in the design amount to a 9% reduction in computing cores compared to the GTX 1660 Ti, but most of the savings (and performance impact) likely come at the expense of the 6 GB of 8 Gbps GDDR5 memory this card is outfitted with, compared to the GTX 1660 Ti's GDDR6 implementation. The amount of cut GPU resources from NVIDIA is so low that we imagine these chips won't be coming from harvesting defective dies as much as from actually fusing off CUDA cores present in the TU116 chip. Using GDDR5 is still cheaper than the GDDR6 alternative (for now), and this also avoids straining the GDDR6 supply (if that was ever a concern for NVIDIA).

NVIDIA GeForce GTX 1650 Memory Size Revealed

NVIDIA's upcoming entry-mainstream graphics card based on the "Turing" architecture, the GeForce GTX 1650, will feature 4 GB of GDDR5 memory, according to tech industry commentator Andreas Schilling. Schilling also put out NVIDIA's box-art for this SKU. The source does not mention memory bus width. In related news, Schilling also mentions NVIDIA going with 6 GB as the memory amount for the GTX 1660. NVIDIA is expected to launch the GTX 1660 mid-March, and the GTX 1650 late-April.

Intel Readies Crimson Canyon NUC with 10nm Core i3 and AMD Radeon

Intel is putting the final touches on a "Crimson Canyon" fully-assembled NUC desktop model which combines the company's first 10 nm Core processor with AMD Radeon discrete graphics. The NUC8i3CYSM desktop from Intel packs a Core i3-8121U "Cannon Lake" SoC, 8 GB of dual-channel LPDDR4 memory, and a discrete AMD Radeon RX 540 mobile GPU with 2 GB of dedicated GDDR5 memory. A 1 TB 2.5-inch hard drive comes included, although you also get an M.2-2280 slot with both PCIe 3.0 x4 (NVMe) and SATA 6 Gbps wiring. The i3-8121U packs a 2-core/4-thread CPU clocked up to 3.20 GHz and 4 MB of L3 cache, while the RX 540 packs 512 stream processors based on the "Polaris" architecture.

The NUC8i3CYSM offers plenty of modern connectivity, including 802.11ac + Bluetooth 5.0 powered by an Intel Wireless-AC 9560 WLAN card, wired 1 GbE from an Intel i219-V controller, a consumer IR receiver, an included beam-forming microphone, an SDXC card reader, and stereo HD audio. USB connectivity includes four USB 3.1 type-A ports, including a high-current port. Display outputs come courtesy of two HDMI 2.0b ports, each with 7.1-channel digital audio passthrough. The company didn't reveal pricing, although you can already read a performance review of this NUC at the source link below.

Sapphire Outs an RX 570 Graphics Card with 16GB Memory, But Why?

Sapphire has reportedly developed an odd-ball Radeon RX 570 graphics card equipped with 16 GB of GDDR5 memory, double the maximum memory amount for the SKU. The card is based on the company's NITRO+ board design common to RX 570 through RX 590 SKUs, and uses sixteen 8 Gbit GDDR5 memory chips that are piggybacked (i.e., chips on both sides of the PCB). When Chinese tech publication MyDrivers reached out to Sapphire for an explanation behind such a bizarre contraption, the Hong Kong-based AIB partner's response was fascinating.

Sapphire in its response said that it wanted to bolster the card's crypto-currency mining power, as giving the "Polaris 20" GPU additional memory would improve its performance relative to ASIC miners on the Cuckoo Cycle algorithm. The algorithm can load up the video memory anywhere between 5.5 GB and 11 GB, and giving the RX 570 16 GB of it was Sapphire's logical next step. Of course, Cuckoo Cycle is being defeated time and again by currency curators. This card will be a stopgap for miners until ASIC mining machines with expanded memory come out, or the proof-of-work systems are significantly changed.

Hands On with a Pack of RTX 2060 Cards

NVIDIA late Sunday announced the GeForce RTX 2060 graphics card at $349. With performance rivaling the GTX 1070 Ti and RX Vega 56 on paper, and in some cases even the GTX 1080 and RX Vega 64, the RTX 2060 in its top-spec trim with 6 GB of GDDR6 memory could go on to be NVIDIA's best-selling product from its "Turing" RTX 20-series. At NVIDIA's CES 2019 booth, we went hands-on with a few of these cards, beginning with NVIDIA's de-facto reference design, the Founders Edition. This card indeed feels smaller and lighter than the RTX 2070 Founders Edition.

The Founders Edition still doesn't compromise on looks or build quality, and is bound to look slick in your case, provided you manage to find one in retail. The RTX 2060 launch will be dominated by NVIDIA's add-in card partners, who will dish out dozens of custom-design products. Although NVIDIA didn't announce them, there are still rumors of other RTX 2060 variants with smaller memory amounts and GDDR5 memory. You get the full complement of display connectivity, including VirtualLink.

GDDR6 Memory Costs 70 Percent More than GDDR5

The latest GDDR6 memory standard, currently implemented by NVIDIA in its GeForce RTX 20-series graphics cards, commands a hefty premium. According to a 3DCenter.org report citing list prices sourced from electronics components wholesaler DigiKey, 14 Gbps GDDR6 memory chips from Micron Technology cost over 70 percent more than common 8 Gbps GDDR5 chips of the same density, from the same manufacturer. Besides obsolescence, oversupply could be impacting GDDR5 chip prices.

Although GDDR6 is available in marginally cheaper 13 Gbps and 12 Gbps trims, NVIDIA has only been sourcing 14 Gbps chips. Even the company's upcoming RTX 2060 performance-segment graphics card is rumored to implement 14 Gbps chips in variants that feature GDDR6. The sheer disparity in pricing between GDDR6 and GDDR5 could explain why NVIDIA is developing cheaper GDDR5 variants of the RTX 2060. Graphics card manufacturers can save around $22 per card by using six GDDR5 chips instead of GDDR6.
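The two figures are consistent with each other. Here's a small sketch deriving rough per-chip prices, assuming the ~70 percent premium and the ~$22 savings both refer to the same six-chip, 8 Gbit configuration; the resulting prices are back-calculated estimates, not DigiKey list prices:

```python
# Deriving rough per-chip prices from the figures in this article.

PREMIUM = 0.70         # GDDR6 costs ~70% more than GDDR5 per chip
SAVINGS_PER_CARD = 22  # USD saved by using six GDDR5 chips instead
CHIPS_PER_CARD = 6

# savings = chips * gddr5_price * premium  ->  solve for gddr5_price
gddr5_price = SAVINGS_PER_CARD / (CHIPS_PER_CARD * PREMIUM)
gddr6_price = gddr5_price * (1 + PREMIUM)

print(round(gddr5_price, 2))  # ~ 5.24 USD per 8 Gbit GDDR5 chip
print(round(gddr6_price, 2))  # ~ 8.9 USD per equivalent GDDR6 chip
```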