News Posts matching "GDDR6"


Palit Announces GeForce RTX 2070 Series Graphics Cards

Palit Microsystems Ltd, the leading graphics card manufacturer, releases the GeForce RTX 2070 Dual, the newest addition to the Palit GeForce product line-up, based on the new NVIDIA Turing architecture GeForce RTX series. NVIDIA GeForce RTX delivers the ultimate PC gaming experience. Powered by the new NVIDIA Turing GPU architecture and the revolutionary RTX platform, RTX graphics cards bring together real-time ray tracing, artificial intelligence, and programmable shading. This is a whole new way to experience games.

GeForce RTX 2070 is based on the NVIDIA Turing architecture TU106 GPU, armed with 2304 CUDA cores and equipped with 8 GB of GDDR6 memory running at an effective data rate of 14 Gbps on a 256-bit bus. Palit GeForce RTX 2070 is designed to deliver 6x more performance than previous-generation video cards, and brings the power of real-time ray tracing and AI to games. The Palit GeForce RTX 2070 Dual targets an MSRP of $499 and is available in the market now.
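As a back-of-the-envelope check of those figures, peak GDDR bandwidth is simply the per-pin data rate times the bus width; the small helper below (the function name is our own, not from any vendor tool) reproduces the RTX 2070's 14 Gbps (14 GHz effective) on 256-bit configuration:

```python
def peak_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate (Gbps) x bus width (bits) / 8 bits-per-byte."""
    return data_rate_gbps * bus_width_bits / 8

# RTX 2070: 14 Gbps effective across a 256-bit bus
print(peak_bandwidth_gbs(14, 256))  # 448.0 GB/s
```

The same formula with a 352-bit bus gives the RTX 2080 Ti's 616 GB/s.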

TechPowerUp GPU-Z v2.12.0 Released

TechPowerUp GPU-Z v2.12.0 released today with useful new features and several stability updates. We worked extensively on the ability of GPU-Z to detect fake NVIDIA graphics cards (i.e., cards that don't really have the GPU advertised on the box). GPU-Z now prepends "[FAKE]" to the Graphics Card name field, which also lights up with a caution triangle. This capability is forward compatible for the supported GPUs (listed in the changelog), so, for example, it will be able to detect a fake RTX 2060 that in reality uses a GK106 GPU. The second big feature is the ability to extract and upload the graphics card BIOS of NVIDIA GeForce RTX 2000-series graphics cards. Graphics cards with multiple independent fans (each with its own speed control) are gaining popularity, and we've added the ability to read and log the speeds of individual fans on NVIDIA "Turing" graphics cards that support the feature, in addition to fan speed percentage monitoring.

Our feature-rich "Advanced" tab now also shows information on HDMI and DisplayPort connectors of your graphics cards. Power-draw on NVIDIA graphics cards is now reported both as a percentage of TDP and as an absolute value in Watts. Among the bugs fixed are a system hang due to Valve Anti-Cheat (VAC) kicking in when GPU-Z is running in the background; memory bandwidth reading on RTX 2080 & RTX 2080 Ti with GDDR6 memory, AMD Radeon RX 400-series GPU utilization monitoring, and improved texts for system memory usage sensors.
DOWNLOAD: TechPowerUp GPU-Z v2.12.0


NVIDIA Quadro RTX 6000 and RTX 5000 Up for Pre-Order, Full TU102 at $6,300

NVIDIA opened up its "Turing" based Quadro RTX 6000 and RTX 5000 graphics cards up for pre-order on its website. The RTX 6000 is priced at USD $6,300, and a quantity limitation of 5 per customer is in place. The RTX 5000, on the other hand, is priced at $2,300, and is out of stock at the time of this writing. The RTX 6000 maxes out the TU102 silicon with 4,608 CUDA cores, 576 Tensor cores, 72 RT cores, and is armed with 24 GB of GDDR6 memory, across the chip's full 384-bit memory bus width, making it the cheapest graphics card that maxes out the silicon, unless NVIDIA comes up with a "TITAN X Turing." The Quadro series comes with an enterprise feature-set and certifications for major content-creation applications not available on the GeForce series.

The Quadro RTX 5000, on the other hand, maxes out the TU104 silicon with 3,072 CUDA cores, 384 Tensor cores, 48 RT cores, and 16 GB of GDDR6 memory across the chip's 256-bit wide memory interface. The $10,000 RTX 8000, which isn't open to pre-orders yet, arms the TU102 with a whopping 48 GB of memory, and higher clocks than the RTX 6000. NVIDIA debuted the "Turing" graphics architecture with the Quadro RTX series a week before the new GeForce RTX 20-series.

TechPowerUp GPU-Z v2.11.0 Released

TechPowerUp today released the latest version of TechPowerUp GPU-Z, the popular graphics subsystem information and diagnostics utility. Version 2.11.0 introduces support for NVIDIA GeForce RTX 20-series "Turing" graphics cards, including the RTX 2080 Ti, RTX 2080, and RTX 2070. Support is also added for a few exotic OEM variants we discovered over the months, including GTX 750 Ti (GM107-A), GTX 1050 Ti Mobile 4 GB, Quadro P1000, Tesla P100 DGXS, and GeForce 9200. From the AMD stable, we add support for "Vega 20," the "Fenghuang" semi-custom SoC for Zhongshan Subor, Ryzen 5 Pro 2500U, 5 Pro 2400G, 3 Pro 2200G, 3 Pro 2300U, 3 2200GE, Athlon 200GE, and Embedded V1807B. Intel UHD 610, UHD P630 (Xeon), and Coffee Lake GT3e (i5-8259U) are now supported.

Among the new features are system RAM usage sensors, temperature monitoring offsets for AMD Ryzen Threadripper 2000 series processors, and the ability to identify USB-C display output, GDDR6 memory standard, and 16 Gbit density memory chips. Several under-the-hood improvements were made, including WDDM-based memory monitoring for AMD GPUs, replacing ADL sensors that tend to be buggy. GPU-Z also cleans up QueryExternal files from your Temp folder. Grab GPU-Z from the link below.
DOWNLOAD: TechPowerUp GPU-Z v2.11.0


Micron Announces Its Initial Launch Partner Status for NVIDIA RTX 20-Series GDDR6 Implementation

Memory subsystems are an important part of graphics workloads, and both AMD and NVIDIA have always looked to push the cutting edge in both GPU design and memory fabrication technologies. AMD has hitched itself to the HBM bandwagon with much more fervor than NVIDIA, albeit with somewhat lukewarm results - at least from a consumer, gaming-GPU perspective. NVIDIA has been more cautious: reserve HBM, with its higher costs and lower availability, for higher-margin products that can leverage the additional bandwidth, and leave GDDR to muscle its way through consumer products - a strategy that has likely helped keep BOM costs for its graphics cards relatively low.

As it stands, Micron was the only company with both the roadmap and production volume to be NVIDIA's partner in launching the RTX 20-series, with products above (and including) the RTX 2070 all carrying the new high-performance memory subsystem. Micron announced GDDR6 memory as a product back in 2017, with sampling by the beginning of 2018 and mass volume production by June - just enough time to spool up a nice inventory for the new, shiny graphics cards to come out in September. Of course, this ramp-up and initial leadership doesn't mean Micron will be the only supplier for NVIDIA - however, it's safe to say it'll be the most relevant one for at least a good while.

First Time Spy Benchmark of Upcoming NVIDIA RTX 2080 Graphics Card Leaks

A Time Spy benchmark score of one of NVIDIA's upcoming RTX 20-series graphics cards has come out swinging in a new leak. We say "one of NVIDIA's" because we can't say for sure which core configuration this graphics card packs: the only firm specs we have are the 8 GB of GDDR6 memory working at 14 Gbps, which points to either NVIDIA's RTX 2070 or RTX 2080 graphics cards. If we were of the betting type, we'd say these scores are likely from an NVIDIA RTX 2080, simply because the performance improvement over the last-generation GTX 1080 (which usually scores around the 7,300s) sits pretty at some 36% - more or less the generational uplift NVIDIA has historically delivered.

The 10,030 points scored in Time Spy by this NVIDIA RTX graphics card brings its performance up to GTX 1080 Ti levels, and within spitting distance of the behemoth Titan Xp. This should put to rest questions regarding improved performance in typical (read: non-raytracing) workloads on NVIDIA's upcoming RTX series. It remains to be seen, once die size is known, how much of this improvement stems from actual per-core rasterization performance gains, and how much comes simply from an increased number of execution units (NVIDIA says it doesn't, by the way).

NVIDIA "TU102" RT Core and Tensor Core Counts Revealed

The GeForce RTX 2080 Ti is indeed based on an ASIC codenamed "TU102." NVIDIA was referring to this 754 mm² chip when talking about the 18.6 billion-transistor count in its keynote. The company also provided a breakdown of its various "cores," and a block diagram. The GPU is still laid out like its predecessors, but each of the 72 streaming multiprocessors (SMs) packs RT cores and Tensor cores in addition to CUDA cores.

The TU102 features six GPCs (graphics processing clusters), which each pack 12 SMs. Each SM packs 64 CUDA cores, 8 Tensor cores, and 1 RT core. Each GPC packs six geometry units. The GPU also packs 288 TMUs and 96 ROPs. The TU102 supports a 384-bit wide GDDR6 memory bus, supporting 14 Gbps memory. There are also two NVLink channels, which NVIDIA plans to later launch as its next-generation multi-GPU technology.
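Those per-cluster figures multiply out exactly to the totals NVIDIA quoted for the full chip; a quick sketch (variable names are ours) makes the arithmetic explicit:

```python
# Full TU102 layout as described: 6 GPCs x 12 SMs, each SM with 64 CUDA, 8 Tensor, and 1 RT core.
GPCS = 6
SMS_PER_GPC = 12
sms = GPCS * SMS_PER_GPC

print(sms)       # 72 SMs
print(sms * 64)  # 4608 CUDA cores
print(sms * 8)   # 576 Tensor cores
print(sms * 1)   # 72 RT cores
```

The 4,608 / 576 / 72 totals match the maxed-out TU102 in the Quadro RTX 6000.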

COLORFUL Adds NVIDIA GeForce RTX 2080 Ti & RTX 2080 Into AD Series

Colorful Technology Company Limited, professional manufacturer of graphics cards, motherboards and high-performance storage solutions, adds two new cards to its AD series, named iGame GeForce RTX 2080 Ti Advanced OC and iGame GeForce RTX 2080 Advanced OC.

The new NVIDIA GeForce RTX GPUs have reinvented graphics and set a new bar for performance. Powered by the new NVIDIA Turing GPU architecture and the revolutionary NVIDIA RTX platform, the new graphics cards bring together real-time ray tracing, artificial intelligence, and programmable shading. This is not only a whole new way to experience games - this is the ultimate PC gaming experience.

The new GPUs were unveiled at a special NVIDIA two-day event called the "GeForce Gaming Celebration" which kicked off on August 20th at the Palladium in Cologne, Germany ahead of Gamescom 2018.

ASUS Announces GeForce RTX-20 Series Graphics Cards

ASUS today announced the Republic of Gamers (ROG) Strix, ASUS Dual, and ASUS Turbo graphics cards based on the new NVIDIA GeForce RTX 2080 Ti and RTX 2080 GPUs, which feature support for high-speed GDDR6 memory and VirtualLink for quick, single-cable connections for next-generation VR headsets.

The new NVIDIA GeForce RTX GPUs have reinvented graphics and set a new bar for performance. Powered by the new NVIDIA Turing GPU architecture and the revolutionary NVIDIA RTX platform, the new graphics cards bring together real-time ray tracing, artificial intelligence, and programmable shading. This is not only a whole new way to experience games-this is the ultimate PC gaming experience.

The new GPUs were unveiled at a special NVIDIA two-day event called the "GeForce Gaming Celebration" which kicked off tonight at the Palladium in Cologne, Germany ahead of Gamescom 2018.

Inno3D Announces New iChill GeForce RTX-20 Series

INNO3D, a leading manufacturer of awesome high-end graphics hardware components and various innovations enriching your life, introduces a new family of INNO3D graphics cards based on the NVIDIA GeForce RTX 2080 Ti, RTX 2080 and RTX 2070 GPUs. The new generation of gaming cards will consist of a range of TWIN X2 products, a series of JET editions, and the brutal iCHILL BLACK editions.

The new NVIDIA GeForce RTX GPUs have reinvented graphics and set a new bar for performance. Powered by the new NVIDIA Turing GPU architecture and the revolutionary NVIDIA RTX platform, the new graphics cards bring together real-time ray tracing, artificial intelligence, and programmable shading. This is not only a whole new way to experience games-this is the ultimate PC gaming experience.

NVIDIA Announces the GeForce RTX: 10 Years in the Making

NVIDIA today unveiled the GeForce RTX series, the first gaming GPUs based on the new NVIDIA Turing architecture and the NVIDIA RTX platform, which fuses next-generation shaders with real-time ray tracing and all-new AI capabilities.

This new hybrid graphics capability represents the biggest generational leap ever in gaming GPUs. Turing -- which delivers 6x more performance than its predecessor, Pascal -- redefines the PC as the ultimate gaming platform, with new features and technologies that deliver 4K HDR gaming at 60 frames per second on even the most advanced titles.

NVIDIA GeForce RTX 2080, 2070, and 2080 Ti Specifications Revealed

Update 1: NVIDIA at its Cologne event also revealed that these graphics cards were made for overclocking, with highly improved power regulation and management systems designed just for that purpose. Jensen Huang himself added that these new graphics cards, with their dual 13-blade radial-fan design, run at one fifth the noise levels of a previous-generation GeForce GTX 1080 Ti.

Product pages and pricing of the GeForce RTX 2080, RTX 2070, and RTX 2080 Ti went up ahead of the formal unveiling. The RTX 2080 features 2944 CUDA cores, 1515 MHz GPU clocks, 1710 MHz boost, and 14 Gbps memory; the RTX 2070 is equipped with 2304 CUDA cores, 1410 MHz GPU clocks, 1620 MHz boost, and the same 14 Gbps GDDR6 memory. The RTX 2080 Ti leads the pack with 4352 CUDA cores, 1350 MHz core, 1545 MHz boost, and 14 Gbps memory across a wider 352-bit memory interface.

GALAX Confirms Specs of RTX 2080 and RTX 2080 Ti

GALAX spilled the beans on the specifications of two of NVIDIA's upcoming high-end graphics cards, as it's becoming increasingly clear that the company could launch the GeForce RTX 2080 and the GeForce RTX 2080 Ti simultaneously, to convince GeForce "Pascal" users to upgrade. The company's strategy appears to be to establish 40-100% performance gains over the previous generation, along with a handful of killer features (such as RTX, VirtualLink, etc.) to trigger the upgrade itch.

Leaked slides from GALAX confirm that the RTX 2080 will be based on the TU104-400 ASIC, while the RTX 2080 Ti is based on the TU102-300. The RTX 2080 will be endowed with 2,944 CUDA cores, and a 256-bit wide GDDR6 memory interface, holding 8 GB of memory; while the RTX 2080 Ti packs 4,352 CUDA cores, and a 352-bit GDDR6 memory bus, with 11 GB of memory. The memory clock on both is constant, at 14 Gbps. The RTX 2080 has its TDP rated at 215W, and draws power from a combination of 6-pin and 8-pin PCIe power connectors; while the RTX 2080 Ti pulls 250W TDP, drawing power through a pair of 8-pin PCIe power connectors. You also get to spy GALAX' triple-fan non-reference cooling solution in the slides below.

NVIDIA GeForce RTX 2080 ASIC is "TU104," Smiles for the Camera

Here are some of the first pictures of the "TU104," the chip at the heart of the NVIDIA GeForce RTX 2080, in its top-spec TU104-400A-A1 variant. While the package itself isn't bigger than that of its predecessors, the GP104 and GM204, the die of the TU104 is huge! If it's built on existing silicon fabrication processes such as 14 nm or 12 nm, the TU104 could end up with a transistor count higher than even that of the GP102.

You'll recall that we posted bare-PCB pictures of this board recently. In its production (or close-to-production) variant, the board draws power from a combination of 6-pin and 8-pin PCIe connectors, conditioning it with an 8+2 phase VRM. The GPU is surrounded by eight GDDR6 memory chips. Display connectivity includes three DisplayPorts, an HDMI port, and a USB Type-C (VirtualLink) connector.

Samsung 16Gb GDDR6 Memory Powers Latest NVIDIA Quadro Professional Graphics Solution

Samsung Electronics Co., Ltd., a world leader in advanced semiconductor technology, today announced that its 16-gigabit (Gb) Graphics Double Data Rate 6 (GDDR6) memory is being used in NVIDIA's new Turing architecture-based Quadro RTX GPUs.

Thanks to Samsung's industry-leading 16Gb GDDR6 memory, end users can expect improved performance and energy efficiency in the widest array of graphics-intensive applications, including computer-aided design (CAD), digital content creation (DCC) and scientific visualization applications. Samsung's 16Gb GDDR6 can also be used in rapidly growing fields such as 8K Ultra HD video processing, virtual reality (VR), augmented reality (AR) and artificial intelligence (AI).

TechPowerUp GPU-Z v2.10.0 Released

TechPowerUp today released a major update to GPU-Z, our graphics sub-system information, monitoring, and diagnostic utility that no PC enthusiast can leave home without. Version 2.10.0 introduces a slew of changes. To begin with, we have added CPU temperature monitoring, supported on all Intel processors 2nd generation Core and newer, and on AMD Ryzen. The graph-drawing code for the "Sensors" tab has been improved for faster rendering and reduced CPU usage. The "Reset" button in this tab can be used to clear all graphs with a click. Various under-the-hood improvements work to avoid error messages with QueryExternal. GPU-Z now reports override limits on the ATI/AMD BIOS Advanced view page.

Support has been added for the GDDR6 memory type. Among the new GPUs supported are AMD Vega M GH, Vega M GL, WX 4130, WX 4150, WX 8100, Radeon Pro Vega 56, Pro SSG; AMD Vega Graphics in Ryzen 3 2200U, Ryzen 3 2300U, Ryzen 3 Pro 2200GE, Ryzen 5 2400GE, Ryzen 7 2400U; Intel UHD 605, UHD 610, UHD 630, P580; NVIDIA GeForce GTX 1060 6 GB, GTX 1050 3 GB, GTX 1050 2 GB (GP106), GTX 1050 Ti Max-Q, MX110, MX130, GTX 750 Ti (GM107B); Quadro GV100, P600 Mobile, P620, P2000, P3200, P4200, Grid M3-3020; and Tesla V100, V100-PCIE-32GB, M4, P106-090, P5200, P6, P40, P100 SXM2, P100 PCIe 12 GB.
DOWNLOAD: TechPowerUp GPU-Z 2.10.0


NVIDIA "GT104" Based GeForce GTX 1180 Surfaces on Vietnamese Stores

A Vietnamese online store put up the first listing of a GeForce GTX 1180 based ASUS ROG Strix graphics card. The store even put out some specifications of the card, beginning with it being based on the "GT104" silicon of the "Turing" series. With "Turing," NVIDIA appears to be forking its GPU architectures on the basis of chips that feature DPFP (double-precision floating point) cores and Tensor cores, and those that lack both (and only feature SPFP cores). "Turing" is probably a fork of "Volta" that lacks both DPFP CUDA cores and Tensor cores, and sticks to the cheaper GDDR6 memory architecture, while "Volta" based GPUs, such as the TITAN V, implement pricier HBM2 memory.

Among the specifications of the GeForce GTX 1180 are 3,584 CUDA cores, and 16 GB of GDDR6 memory across a 256-bit wide memory interface. The memory is clocked at 14 Gbps (GDDR6-effective), which works out to 448 GB/s of memory bandwidth. Pre-launch prices, just like most pre-launch specifications, tend to be bovine excrement, which in this case converts to a little over USD $1,500, and isn't really relevant. What is interesting, however, is the availability date of September 28.

NVIDIA GV102 Prototype Board With GDDR6 Spotted, Up to 525 W Power Delivery. GTX 1180 Ti?

Reddit user 'dustinbrooks' has posted a photo of a prototype graphics card design that is clearly made by NVIDIA and "tested by a buddy of his that works for a company that tests NVIDIA boards". Dustin asked the community what he was looking at, which of course got tech enthusiasts interested.

The card is clearly made by NVIDIA as indicated by the markings near the PCI-Express x16 slot connector. What's also visible is three PCI-Express 8-pin power inputs and a huge VRM setup with four fans. Unfortunately the GPU in the center of the board is missing, but it should be GV102, the successor to GP102, since GDDR6 support is needed. The twelve GDDR6 memory chips located around the GPU's solder balls are marked as D9WCW, which decodes to MT61K256M32JE-14:A. These chips are Micron-made 8 Gbit GDDR6, specified for 14 Gb/s data rate, operating at 1.35 V. With twelve chips, this board has a 384-bit memory bus and 12 GB VRAM. The memory bandwidth at 14 Gbps data rate is a staggering 672 GB/s, which conclusively beats the 484 GB/s that Vega 64 and GTX 1080 Ti offer.
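The board's memory figures follow directly from those chip markings; assuming the usual 32-bit interface per GDDR6 package, this short sketch reproduces the numbers above:

```python
# Twelve Micron D9WCW (MT61K256M32JE-14:A) chips: 8 Gbit density, 32-bit interface each, 14 Gbps.
chips = 12
bus_width_bits = chips * 32              # each GDDR6 chip contributes 32 bits of bus width
capacity_gb = chips * 8 / 8              # 8 Gbit per chip, 8 bits per byte
bandwidth_gbs = 14 * bus_width_bits / 8  # per-pin Gbps x pins / 8

print(bus_width_bits, capacity_gb, bandwidth_gbs)  # 384 12.0 672.0
```

The 672 GB/s result is what puts this prototype well clear of Vega 64 and the GTX 1080 Ti at 484 GB/s.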

Micron Begins Volume Production of GDDR6 High Performance Memory

Micron Technology, Inc. today announced volume production of its 8 Gb GDDR6 memory. Built on experience and execution across several generations of GDDR memory, GDDR6 - Micron's fastest and most powerful graphics memory, designed in Micron's Munich Development Center - is optimized for a variety of applications that require high-performance memory, including artificial intelligence (AI), networking, automotive and graphics processing units (GPUs). Additionally, Micron has worked with core ecosystem partners to ramp GDDR6 documentation and interoperability, enabling faster time to market for designs.

"Micron is a pioneer in developing advanced high bandwidth memory solutions and continues that leadership with GDDR6. Micron demonstrated this leadership by recently achieving throughput up to 20 GB/s on our GDDR6 solutions," said Andreas Schlapka, director, Compute Networking Business Unit, Micron. "In addition to performance increases, Micron has developed a deep partner ecosystem to enable rapid creation of GDDR6 designs, enabling faster time to market for customers looking to leverage this powerful new memory technology."

NVIDIA GeForce "Volta" Graphics Cards to Feature GDDR6 Memory According to SK Hynix Deal

NVIDIA's upcoming GeForce GTX graphics cards based on the "Volta" architecture could feature GDDR6 memory, according to a supply deal SK Hynix struck with NVIDIA, which sent the Korean memory manufacturer's stock price surging by 6 percent. It's not known if GDDR6 will be deployed on all SKUs, or if, like GDDR5X, it will be exclusive to a handful of high-end SKUs. The latest version of the SK Hynix memory catalogue points to an 8 Gb (1 GB) GDDR6 memory chip supporting speeds of up to 14 Gbps at 1.35V, and up to 12 Gbps at 1.25V.

Considering NVIDIA already got GDDR5X to run at 11 Gbps, it could choose the faster option. Memory density remains a cause for concern: if 8 Gb is the densest chip from SK Hynix, then the fabled "GV104" (GP104-successor), which will likely feature a 256-bit wide memory interface, can only carry up to 8 GB of memory, barring the unlikely (and costly) option of piggy-backing chips to achieve 16 GB.

Rambus Announces Their High-Performance GDDR6 PHY Solution

Rambus Inc. today announced the GDDR6 (Graphics Double Data Rate) Memory PHY IP Core targeted for high-performance applications including cryptocurrency mining, artificial intelligence (AI), ADAS (advanced driver assistance systems) and networking. Leveraging almost 30 years of high-speed interface design expertise and using advanced leading-edge FinFET process nodes, the Rambus GDDR6 PHY architecture will provide the industry's highest speed of up to 16 Gbps, while utilizing established packaging and testing techniques.

"The high bandwidth delivered by GDDR6 makes it uniquely qualified to perform data-intensive applications such as HPC (high performance computing), AI, autonomous vehicles, and high-speed networking," said Luc Seraphin, SVP and general manager of the Rambus Memory and Interfaces Division. "We are excited to be the first IP provider to offer a GDDR6 PHY solution with industry-leading performance designed with power efficiency and high signal margins for these applications."

Micron and Industry Partners to Deliver Comprehensive GDDR6 Solutions

Micron Technology, Inc., a leading memory and storage provider, today announced with Rambus Inc., Northwest Logic and Avery Design, their efforts to deliver a comprehensive solution for GDDR6, the world's fastest discrete memory. This first-of-its-kind solution would enable GDDR6 use in advanced applications such as high-performance networking, autonomous vehicles, artificial intelligence and 5G infrastructure. Prior generations of GDDR memories, enabled by GPU vendors, were focused exclusively on the graphics market. While this allowed graphics and game console designs to take advantage of the significant performance advantage offered by GDDR, other applications could not because the necessary building blocks were not available.

SK Hynix Marks its 8Gb GDDR6 Memory Chips "Available" in Latest Catalog Update

SK Hynix, the other Korean DRAM and NAND flash giant besides Samsung, updated its DRAM product catalog to reflect immediate availability of its 8-gigabit (1 GB) GDDR6 memory chips. The company is selling four SKUs: part "H56C8H24MJR-S2C" in 14 Gbps and 12 Gbps variants, and part "H56C8H24MJR-S0C" in 12 Gbps and 10 Gbps variants. The -S2C chips are more energy efficient, achieving 14 Gbps at 1.35V and 12 Gbps at 1.25V; the -S0C achieves 12 Gbps at 1.35V, and 10 Gbps at 1.25V. Unless NVIDIA decides that the GTX 1080-successor should feature 16 GB of memory, the company could be in the market for 8 Gb GDDR6 chips.

Meanwhile, rival Samsung announced that it began mass-production of 16 Gb (2 GB) GDDR6 memory chips, which should enable 16 GB of memory across a 256-bit memory interface, or 8 GB over a 128-bit interface. Samsung's latest chips not only have double the density of SK Hynix's, but also tick faster, at 18 Gbps, at a voltage of 1.35V. It is widely expected that GDDR6 will be the dominant memory standard for entry-level, mainstream, and even high-end graphics cards launching through 2018-19. NVIDIA is expected this year to launch its new "Volta" graphics architecture across various consumer-graphics market segments.
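The capacity math behind those figures is straightforward: each GDDR6 package has a 32-bit interface, so the bus width fixes the chip count, and the chip density fixes total memory. A small sketch (the function name is ours, for illustration only):

```python
def gddr6_capacity_gb(density_gbit: int, bus_width_bits: int) -> float:
    """Total memory from chip density (Gbit) and bus width, assuming one 32-bit chip per 32 bits of bus."""
    chips = bus_width_bits // 32
    return chips * density_gbit / 8

print(gddr6_capacity_gb(16, 256))  # 16.0 GB - Samsung's 16 Gb chips on a 256-bit bus
print(gddr6_capacity_gb(16, 128))  # 8.0 GB on a 128-bit bus
print(gddr6_capacity_gb(8, 256))   # 8.0 GB with 8 Gb chips
```

The same formula with 8 Gb chips on a 352-bit bus yields the RTX 2080 Ti's 11 GB.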

Samsung Starts Producing Industry's First 16-Gigabit GDDR6 Memory

Samsung Electronics Co., Ltd., the world leader in advanced memory technology, today announced that it has started mass production of the industry's first 16-gigabit (Gb) Graphics Double Data Rate 6 (GDDR6) memory for use in advanced graphics processing for gaming devices and graphics cards as well as automotive, network and artificial intelligence systems.

"Beginning with this early production of the industry's first 16 Gb GDDR6, we will offer a comprehensive graphics DRAM line-up, with the highest performance and densities, in a very timely manner," said Jinman Han, senior vice president, Memory Product Planning & Application Engineering at Samsung Electronics. "By introducing next-generation GDDR6 products, we will strengthen our presence in the gaming and graphics card markets and accommodate the growing need for advanced graphics memory in automotive and network systems."

Micron Analyses 2017, Looks at the Future of Memory Business

It was a banner year for graphics, both in terms of market strength and technology advancements. Gaming, virtual reality, crypto mining, and artificial intelligence fueled demand for GPUs in 2017. The market responded with a wide array of products: high-performance discrete PC graphics cards that let gamers run multiple 4K displays; game consoles and VR headsets; and workstation-class GPUs that can build the stunning effects we have all come to expect. And since these products are full of our GDDR5 or G5X memory, it was an exciting year for Micron's graphics team too. We had a record-breaking year in GDDR5 shipments and further solidified Micron's industry leadership in graphics memory with the launch of our 12 Gb/s G5X, the highest-performance mass production GDDR memory.