News Posts matching #HBM2


NVIDIA Tesla A100 "Ampere" AIC (add-in card) Form-Factor Board Pictured

Here's the first picture of a Tesla A100 "Ampere" AIC (add-in card) form-factor board, hot on the heels of this morning's big A100 reveal. The AIC card is a bare PCB, to which workstation builders will add compatible cooling solutions. The PCB features the gigantic GA100 processor with its six HBM2E stacks in the center, surrounded by VRM components and I/O on three sides. On the bottom edge, you will find a conventional PCI-Express 4.0 x16 host interface. Above it are NVLink fingers. The rear I/O has high-bandwidth network interfaces (likely 200 Gbps InfiniBand) from Mellanox. The tail end has hard points for 12 V power input. Find juicy details of the GA100 in our older article.

NVIDIA Tesla A100 GPU Pictured

Thanks to the sources of VideoCardz, we now have the first picture of the next-generation NVIDIA Tesla A100 graphics card. Designed for compute-oriented applications, the Tesla A100 is a socketed GPU built for NVIDIA's proprietary SXM socket. In a post a few days ago, we suspected that you might be able to fit the Tesla A100 into the socket of previous Volta V100 GPUs, as it uses a similar SXM socket. However, the mounting holes have been rearranged, so this one requires a new socket/motherboard. The Tesla A100 is based on the GA100 die, whose specifications we don't yet know. From the picture, we can only see that there is one very big die attached to six HBM modules, most likely HBM2E. Beyond that, everything else is unknown. More details are expected to be announced today at the GTC 2020 digital keynote.

AMD Announces Radeon Pro VII Graphics Card, Brings Back Multi-GPU Bridge

AMD today announced its Radeon Pro VII professional graphics card targeting 3D artists, engineering professionals, broadcast media professionals, and HPC researchers. The card is based on AMD's "Vega 20" multi-chip module that incorporates a 7 nm (TSMC N7) GPU die, along with a 4096-bit wide HBM2 memory interface, and four memory stacks adding up to 16 GB of video memory. The GPU die is configured with 3,840 stream processors across 60 compute units, 240 TMUs, and 64 ROPs. The card is built in a workstation-optimized add-on card form-factor (rear-facing power connectors and lateral-blower cooling solution).

What separates the Radeon Pro VII from last year's Radeon VII is full double-precision floating-point support at 1:2 the FP32 rate, whereas the Radeon VII is locked to 1:4. Specifically, the Radeon Pro VII offers 6.55 TFLOPS of double-precision floating-point performance (vs. 3.36 TFLOPS on the Radeon VII). Another major difference is the physical Infinity Fabric bridge interface, which lets you pair up to two of these cards in a multi-GPU setup to double the memory capacity to 32 GB. Each GPU has two Infinity Fabric links, running at 1333 MHz, with a per-direction bandwidth of 42 GB/s. This brings the total bidirectional bandwidth to a whopping 168 GB/s, more than twice the PCIe 4.0 x16 limit of 64 GB/s.
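
For reference, here is a minimal sketch of the arithmetic behind those bandwidth and throughput figures (the FP32 rates are implied by the stated FP64 numbers and ratios, not quoted directly):

```python
# Back-of-envelope check of the Radeon Pro VII figures quoted above.
links = 2                      # Infinity Fabric links per GPU
per_direction_gb_s = 42        # bandwidth per link, per direction
directions = 2                 # bidirectional

print(links * per_direction_gb_s * directions)   # 168 GB/s total, vs. 64 GB/s for PCIe 4.0 x16

print(6.55 * 2)   # ~13.1 TFLOPS FP32 implied by the Pro VII's 1:2 FP64 rate
print(3.36 * 4)   # ~13.4 TFLOPS FP32 implied by the Radeon VII's 1:4 FP64 rate
```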

Fujitsu Completes Delivery of Fugaku Supercomputer

Fujitsu has today officially completed the delivery of the Fugaku supercomputer to Japan's RIKEN scientific research institute. This is a notable accomplishment, as the current COVID-19 pandemic has delayed many projects across the industry; Fujitsu nonetheless managed to work around it and deliver the supercomputer on time. The last of the 400 racks needed for the Fugaku supercomputer was delivered today, May 13th, as originally planned. The supercomputer is expected to be fully operational in fiscal year 2021, with installation and setup to be completed before then.

As a reminder, Fugaku is an Arm-based supercomputer consisting of roughly 150 thousand A64FX CPUs. These are custom processors designed by Fujitsu around the Armv8.2-A ISA, featuring 48 cores built on TSMC's 7 nm node and running above 2 GHz. Packing 8.786 billion transistors, these monster chips use HBM2 memory instead of a regular DDR memory interface. Recently, a prototype of the Fugaku supercomputer was submitted to the Top500 list and came out on top as the most energy-efficient system of all, suggesting it will be as energy-efficient as it is fast. Speculation is that it will deliver around 400 PetaFLOPS of general double-precision compute power, while for specific artificial-intelligence applications it should reach the ExaFLOP performance target.
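
As a hedged back-of-envelope check on that ~400 PFLOPS estimate, here is a sketch under assumed A64FX parameters (the two 512-bit SVE FMA pipes per core are an assumption, not stated above):

```python
# Hypothetical peak-FP64 estimate for Fugaku under assumed A64FX parameters.
cores_per_cpu = 48
flops_per_core_per_cycle = 32        # assumed: 2 x 512-bit SVE FMA pipes = 2 * 8 * 2 FP64 FLOPs
clock_ghz = 2.0                      # "running above 2 GHz" per the article
cpus = 150_000                       # approximate CPU count from the article

per_cpu_tflops = cores_per_cpu * flops_per_core_per_cycle * clock_ghz / 1000
print(round(per_cpu_tflops, 2), "TFLOPS per CPU")           # ~3.07
print(round(per_cpu_tflops * cpus / 1000), "PFLOPS total")  # ~461, same ballpark as the ~400 PFLOPS estimate
```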

Micron to Launch HBM2 Memory This Year

Micron Technology, in its latest earnings report, announced that it will start shipping High-Bandwidth Memory 2 (HBM2) DRAM. Used in high-performance graphics cards, server processors, and other processors, HBM2 is an in-demand and relatively expensive solution; with Micron entering the market as a manufacturer, prices and the market should adjust to the new player. Previously, only SK Hynix and Samsung manufactured HBM2 DRAM; Micron will now join them, again forming a "big three" that dominates the memory market.

Up until now, Micron had pinned its hopes on its proprietary Hybrid Memory Cube (HMC) DRAM, which didn't gain much traction with customers and never really took off. Only a few rare products used it, such as the Fujitsu SPARC64 XIfx CPU in the Fujitsu PRIMEHPC FX100 supercomputer introduced in 2015. Micron announced it would suspend work on HMC in 2018 and decided to devote its efforts to GDDR6 and HBM development. As a result, the company will launch HBM2 DRAM products sometime this year.

Rambus Designs HBM2E Controller and PHY

Rambus, a maker of various interface IP solutions, today announced the latest addition to its high-speed memory interface IP portfolio in the form of a High Bandwidth Memory 2E (HBM2E) controller and physical layer (PHY) IP solution. The two IPs enable customers to fully integrate HBM2E memory into their products, as Rambus provides a complete solution for controlling and interfacing the memory. The design Rambus offers supports 12-high DRAM stacks of up to 24 Gb devices, making for up to 36 GB of memory per 3D stack. A single such stack is capable of 3.2 Gbps signaling over a 1024-bit wide interface, for 410 GB/s of bandwidth per stack.
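
A minimal sketch of the per-stack arithmetic, using the figures quoted above:

```python
# HBM2E per-stack capacity and bandwidth, from the Rambus figures quoted above.
stack_height = 12            # 12-high DRAM stack
die_density_gbit = 24        # up to 24 Gb per device
pin_rate_gbps = 3.2          # data rate per pin
bus_width_bits = 1024        # HBM interface width per stack

print(stack_height * die_density_gbit / 8)    # 36 GB of memory per stack
print(pin_rate_gbps * bus_width_bits / 8)     # 409.6 ~= 410 GB/s of bandwidth per stack
```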

The HBM2E controller core is DFI 3.1 compatible and supports logic interfaces such as AXI, OCP, or a custom one, so the customer can choose how to integrate the core into their design. With a purchase of its HBM2E IP, Rambus provides source code written in a Hardware Description Language (HDL) and a GDSII file containing the layout of the interface.

AMD Announces the CDNA and CDNA2 Compute GPU Architectures

AMD at its 2020 Financial Analyst Day event unveiled its upcoming CDNA GPU-based compute accelerator architecture. CDNA will complement the company's graphics-oriented RDNA architecture. While RDNA powers the company's Radeon Pro and Radeon RX client and enterprise graphics products, CDNA will power compute accelerators such as Radeon Instinct. AMD is forking its graphics IP into RDNA and CDNA due to what it described as market-based product differentiation.

Data centers and HPC installations using Radeon Instinct accelerators have no use for the GPU's actual graphics rendering capabilities. So, at the silicon level, AMD is removing the raster graphics hardware, the display and multimedia engines, and other associated components that otherwise take up significant die area. In their place, AMD is adding fixed-function tensor compute hardware, similar to the tensor cores on certain NVIDIA GPUs.

AMD Radeon Instinct MI100 "Arcturus" Hits the Radar, We Have its BIOS

AMD's upcoming large post-Navi graphics chip, codenamed "Arcturus," will debut as the "Radeon Instinct MI100," an AI/ML accelerator under the Radeon Instinct brand, which AMD calls "Server Accelerators." TechPowerUp has accessed its BIOS, which is now up on our VGA BIOS database. The card carries the device ID "0x1002 0x738C," which confirms "AMD" and "Arcturus." The BIOS also confirms a massive 32 GB of HBM2 memory, clocked at 1000 MHz real (possibly 1 TB/s of bandwidth, if the memory bus is 4096-bit wide).
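
The 1 TB/s figure is easy to reproduce under the stated assumptions (double-data-rate HBM2 signaling; the 4096-bit bus width is speculation, as noted above):

```python
# Hypothetical MI100 memory bandwidth under the assumptions above.
memory_clock_mhz = 1000                  # real clock from the BIOS string
data_rate_mt_s = memory_clock_mhz * 2    # HBM2 is double data rate
bus_width_bits = 4096                    # assumed, not confirmed by the BIOS

print(data_rate_mt_s * bus_width_bits / 8 / 1000)   # 1024 GB/s, i.e. roughly 1 TB/s
```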

Both Samsung (KHA884901X) and SK Hynix (H5VR64ESA8H) memory is supported, which is an important capability for AMD's supply chain. From the ID string "MI100 D34303 A1 XL 200W 32GB 1000m" we can derive that the TDP limit is set to a surprisingly low 200 W, especially considering this is a 128 CU / 8,192-shader design. For comparison, the Vega 64 and Radeon Instinct MI60 have roughly 300 W power budgets with 4,096 shaders, and the RX 5700 XT has 225 W with 2,560 shaders. So either AMD has achieved some monumental efficiency improvements with Arcturus, or the whole design is intentionally running constrained so that AMD doesn't reveal its hand to the partners doing early testing of the card.

Samsung Launches 3rd-Generation "Flashbolt" HBM2E Memory

Samsung Electronics, the world leader in advanced memory technology, today announced the market launch of 'Flashbolt', its third-generation High Bandwidth Memory 2E (HBM2E). The new 16-gigabyte (GB) HBM2E is uniquely suited to maximize high performance computing (HPC) systems and help system manufacturers to advance their supercomputers, AI-driven data analytics and state-of-the-art graphics systems in a timely manner.

"With the introduction of the highest performing DRAM available today, we are taking a critical step to enhance our role as the leading innovator in the fast-growing premium memory market," said Cheol Choi, executive vice president of Memory Sales & Marketing at Samsung Electronics. "Samsung will continue to deliver on its commitment to bring truly differentiated solutions as we reinforce our edge in the global memory marketplace."

NVIDIA Unveils Tesla V100s Compute Accelerator

NVIDIA updated its compute accelerator product stack with the new Tesla V100s. Available only in the PCIe add-in card (AIC) form-factor for now, the V100s is positioned above the V100 PCIe and is equipped with faster memory, besides a few silicon-level changes (possibly higher clock speeds), to deliver significant increases in throughput. To begin with, the V100s is equipped with 32 GB of HBM2 memory across a 4096-bit memory interface, with a higher 1106 MHz (2212 MT/s effective) memory clock, compared to the 876 MHz (1752 MT/s effective) memory clock of the V100. This yields a memory bandwidth of roughly 1,134 GB/s, compared to 900 GB/s on the V100 PCIe.
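
As a quick sanity check on those bandwidth figures (the effective data rates and the 4096-bit bus width are taken from the paragraph above; decimal GB assumed):

```python
# Bandwidth check for the V100s vs. V100 PCIe figures quoted above.
bus_width_bits = 4096

def hbm2_bandwidth_gb_s(effective_mt_s):
    # bits per second across the bus, converted to GB/s
    return effective_mt_s * bus_width_bits / 8 / 1000

print(hbm2_bandwidth_gb_s(2212))   # ~1133 GB/s, in line with the V100s' quoted ~1,134 GB/s
print(hbm2_bandwidth_gb_s(1752))   # ~897 GB/s, in line with the V100 PCIe's quoted ~900 GB/s
```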

NVIDIA did not detail changes to the GPU's core clock-speed, but mentioned the performance throughput numbers on offer: 8.2 TFLOP/s double-precision floating-point performance versus 7 TFLOP/s on the original V100 PCIe; 16.4 TFLOP/s single-precision compared to 14 TFLOP/s on the V100 PCIe; and 130 TFLOP/s deep-learning ops versus 112 TFLOP/s on the V100 PCIe. Company-rated power figures remain unchanged at 250 W typical board power. The company didn't reveal pricing.

Cray and Fujitsu Partner to Power Supercomputing in the Exascale Era

Global supercomputer leader Cray, a Hewlett Packard Enterprise company, and leading Japanese information and communication technology company Fujitsu today announced a partnership to offer high performance technologies for the Exascale Era. Under the alliance agreement, Cray is developing the first-ever commercial supercomputer powered by the Fujitsu A64FX Arm-based processor with high bandwidth memory (HBM), supported on the proven Cray CS500 supercomputer architecture and programming environment. Initial customers include Los Alamos National Laboratory, Oak Ridge National Laboratory, RIKEN Center for Computational Science, Stony Brook University, and University of Bristol. As part of this new partnership, Cray and Fujitsu will explore engineering collaboration, co-development, and joint go-to-market to meet customer demand in the supercomputing space.

"Our partnership with Fujitsu means customers now have a broader choice of processor technology to address their pressing computational needs," said Fred Kohout, senior vice president and CMO at Cray, a Hewlett Packard Enterprise company. "We are delivering the development-to-deployment experience customers have come to expect from Cray, including exploratory development to the Cray Programming Environment (CPE) for Arm processors to optimize performance and scalability with additional support for Scalable Vector Extensions and high bandwidth memory."

GLOBALFOUNDRIES and SiFive to Deliver Next Level of High Bandwidth Memory on 12LP

GLOBALFOUNDRIES (GF) and SiFive, Inc. announced today at the GLOBALFOUNDRIES Technology Conference (GTC) in Taiwan that they are working to extend high DRAM performance levels with High Bandwidth Memory (HBM2E) on GF's recently announced 12LP+ FinFET solution, with 2.5D packaging design services to enable fast time-to-market for Artificial Intelligence (AI) applications.

In order to achieve the capacity and bandwidth needed for data-intensive AI training applications, system designers are challenged with squeezing more bandwidth into a smaller area while maintaining a reasonable power profile. SiFive's customizable high bandwidth memory interface on GF's 12LP platform and 12LP+ solution will enable easy integration of high bandwidth memory into single System-on-Chip (SoC) solutions to deliver fast, power-efficient data processing for AI applications in the computing and wired infrastructure markets.

SK hynix Inc. Reports Third Quarter 2019 Results

SK hynix Inc. today announced financial results for its third quarter 2019, ended September 30, 2019. Consolidated third-quarter revenue was 6.84 trillion won, while operating profit amounted to 473 billion won and net income to 495 billion won. Operating margin and net margin for the quarter were both 7%.

Revenue in the third quarter increased by 6% quarter-over-quarter (QoQ) as demand began to pick up. However, operating profit fell by 26% QoQ, as DRAM unit cost reduction was not enough to offset the price drop. DRAM bit shipments increased by 23% QoQ as the Company actively responded to new products in the mobile market, while purchases from some data center customers also increased. DRAM prices remained weak during the quarter, leading to a 16% drop in the average selling price, though the decline was smaller than in the previous quarter.

Samsung Develops Industry's First 12-Layer 3D-TSV Chip Packaging Technology

Samsung Electronics Co., Ltd., a world leader in advanced semiconductor technology, today announced that it has developed the industry's first 12-layer 3D-TSV (Through Silicon Via) technology. Samsung's new innovation is considered one of the most challenging packaging technologies for mass production of high-performance chips, as it requires pinpoint accuracy to vertically interconnect 12 DRAM chips through a three-dimensional configuration of more than 60,000 TSV holes, each of which is one-twentieth the thickness of a single strand of human hair.

The thickness of the package (720 µm) remains the same as current 8-layer High Bandwidth Memory-2 (HBM2) products, which is a substantial advancement in component design. This will help customers release next-generation, high-capacity products with higher performance capacity without having to change their system configuration designs. In addition, the 3D packaging technology also features a shorter data transmission time between chips than the currently existing wire bonding technology, resulting in significantly faster speed and lower power consumption.

AMD "Navi 14" and "Navi 12" GPUs Detailed Some More

The third known implementation of AMD's "Navi" generation of RDNA-architecture GPUs is codenamed "Navi 14." This 7 nm chip is expected to be a cut-down, mainstream part designed to compete with a spectrum of NVIDIA GeForce GTX 16-series SKUs, according to a 3DCenter.org report. The same report sheds more light on the larger "Navi 12" GPU that could power faster SKUs competing with the likes of the GeForce RTX 2080 and RTX 2080 Super. The two follow "Navi 10," which debuted the architecture in July. There doesn't appear to be any guiding logic behind the numerical portion of the GPU codenames. Once all are launched, the pecking order of the three Navi GPUs will be "Navi 12," followed by "Navi 10," and then "Navi 14."

"Navi 14" is expected to be the smallest of the three, with an estimated 170 mm² die-area, about 24 RDNA compute units (1,536 stream processors), and expected to feature a 128-bit wide GDDR6 memory interface. It will be interesting to see how AMD carves out an SKU that can compete with the GTX 1660 Ti, which has 6 GB of 192-bit GDDR6 memory. The company would have to wait for 16 Gbit (2 GB) GDDR6 memory chips, or piggy-back eight 8 Gbit chips to achieve 8 GB, or risk falling short of recommended system requirements of several games at 1080p, if it packs just 4 GB of memory.

Intel Ships Stratix 10 DX FPGAs, VMWare Among Early Partners

Intel today announced shipments of new Intel Stratix 10 DX field programmable gate arrays (FPGA). The new FPGAs are designed to support Intel Ultra Path Interconnect (Intel UPI), PCI-Express (PCIe) Gen4 x16 and a new controller for Intel Optane technology to provide flexible, high-performance acceleration. VMware is one of many early access program participants.

"Intel Stratix 10 DX FPGAs are the first FPGAs designed to combine key features that dramatically boost acceleration of workloads in the cloud and enterprise when used with Intel's portfolio of data center solutions. No other FPGA currently offers this combination of features for server designs based on future select Intel Xeon Scalable processors," said David Moore, Intel vice president and general manager, FPGA and Power Products, Network and Custom Logic Group.

Control Can Use Up to 18.5 GB of Video Memory

"Control" by Remedy is the season's hottest AAA release, not just because it's an above-average story-driven action RPG, but also because it's an eye candy-shop. With the ability to use NVIDIA RTX real-time raytracing across a multitude of features, the game is particularly heavy on graphics hardware. Tweaktown tested the game's stability at extremely high display resolutions, including 8K, and found that the game can use up to 18.5 GB of video memory, when running in DirectX 12 with RTX enabled. There's only one client-segment graphics card capable of that much memory, the $2,499 NVIDIA TITAN RTX, which ships with 24 GB of GDDR6 memory. Its nearest client-segment neighbor is the AMD Radeon VII, but it only packs 16 GB of HBM2.

When a game needs more video memory than your graphics card has, Windows' memory management system spills some of that memory over to your system's main memory and, progressively, the swap file (at reduced performance, of course). Video memory usage drops like a rock between 8K and 4K UHD (which has one-fourth the pixels of 8K). With all RTX features enabled and other settings maxed out, "Control" uses only 8.1 GB of video memory at 4K. What this also means is that video cards with just 8 GB of memory are beginning to fall short of what it takes to game at 4K. The $699 GeForce RTX 2080 Super only has 8 GB. The RTX 2080 Ti, with its 11 GB of memory, has plenty of headroom and muscle. Find other interesting observations in the source link below.
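
The resolution comparison is straightforward to verify; a quick sketch of the pixel counts behind it:

```python
# Pixel counts behind the 8K-vs-4K memory usage observation above.
res_8k = 7680 * 4320
res_4k = 3840 * 2160
print(res_8k, res_4k, res_8k / res_4k)   # 33,177,600 vs 8,294,400 pixels: exactly 4x
```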

AMD Retires the Radeon VII Less Than Five Months Into Launch

AMD has reportedly discontinued production of its flagship Radeon VII graphics card. According to a Cowcotland report, AMD no longer finds it viable to produce and sell the Radeon VII at prices competitive with NVIDIA's RTX 2080, especially when its latest Radeon RX 5700 XT performs within 5-12 percent of the Radeon VII at less than half the price. AMD probably expects custom-design RX 5700 XT cards to narrow the gap even more. The RX 5700 XT has a much lower BOM (bill of materials) cost than the Radeon VII, owing to the simplicity of its ASIC, a conventional GDDR6 memory setup, and far lighter electrical requirements.

In stark contrast to the RX 5700 XT, the Radeon VII is based on a complex MCM (multi-chip module) that has not just a 7 nm GPU die, but also four 32 Gbit HBM2 stacks, and a silicon interposer. It also has much steeper VRM requirements. Making matters worse is the now-obsolete "Vega" architecture it's based on, which loses big time against "Navi" at performance/Watt. The future of AMD's high-end VGA lineup is uncertain. Looking at the way "Navi" comes close to performance/Watt parity with NVIDIA on the RX 5700, AMD may be tempted to design a larger GPU die based on "Navi," with a conventional GDDR6-based memory sub-system, to take another swing at NVIDIA's high-end.

AMD Announces the Radeon Pro Vega II and Pro Vega II Duo Graphics Cards

AMD today announced the Radeon Pro Vega II and Pro Vega II Duo graphics cards, making their debut with the new Apple Mac Pro workstation. Based on an enhanced 32 GB variant of the 7 nm "Vega 20" MCM, the Radeon Pro Vega II maxes out its GPU silicon, with 4,096 stream processors, a 1.70 GHz peak engine clock, 32 GB of 4096-bit HBM2 memory, and 1 TB/s of memory bandwidth. The card features both PCI-Express 3.0 x16 and Infinity Fabric interfaces. As its name suggests, the Pro Vega II is designed for professional workloads, and comes with certifications for nearly all professional content-creation applications.

The Radeon Pro Vega II Duo is the first dual-GPU graphics card from AMD in ages. Purpose-built for the Mac Pro (and available on the Apple workstation only), this card puts two fully unlocked "Vega 20" MCMs, with 32 GB of HBM2 memory each, on a single PCB. The card uses a bridge chip to connect the two GPUs to the system bus, but in addition has an 84.5 GB/s Infinity Fabric link running between the two GPUs for rapid memory access, GPU and memory virtualization, and interoperability between the two GPUs while bypassing the host system bus. In addition to certifications for every conceivable content-creation suite on the macOS platform, AMD dropped in heavy optimization for the Metal 3D graphics API. For now, the two graphics cards are only available as options for the Apple Mac Pro. The single-GPU Pro Vega II may see standalone availability later this year, but the Pro Vega II Duo will remain a Mac Pro exclusive.

AMD Memory Tweak Tool Lets You OC and Tweak AMD Radeon Memory Timings On-the-fly

Eliovp, who describes himself on GitHub as a Belgian [crypto] mining enthusiast, has created what could go down as the best thing to happen to AMD Radeon users all decade. The AMD Memory Tweak Tool is a Windows- and Linux-based GUI utility that lets you not only overclock AMD Radeon graphics card memory on the fly, but also tweak its memory timings. Most timings apply live, while your machine is running within the Windows/Linux GUI; some require memory retraining via a reboot, which means they can't be changed at this time, because rebooting reverts the timings to default. The author is trying to figure out a way to run memory training at runtime, which would let you change those timings too in the future. While you're at it, the tool also lets you play with GPU core frequency and fan control.

The AMD Memory Tweak Tool supports both Windows and Linux (GUI), and works with all recent AMD Radeon GPUs with GDDR5 and HBM2 memory. It requires Radeon Software Adrenalin 19.4.1 or later on Windows, or amdgpu-pro/ROCm actively handling the GPU on Linux. The Linux version further has some dependencies, such as pciutils-dev, libpci-dev, build-essential, and git. The source code for the utility is up on GitHub for you to inspect and test.

DOWNLOAD: AMD Memory Tweak Tool by Eliovp

EK Water Blocks Releases EK-Vector Series Water Blocks for AMD Radeon VII

EK Water Blocks, the leading premium computer liquid cooling gear manufacturer, is releasing EK-Vector Radeon VII water blocks that are compatible with reference-design AMD Radeon VII graphics cards. This kind of efficient cooling will allow your high-end graphics card to reach higher boost clocks, providing more overclocking headroom and more performance during gaming and other GPU-intensive tasks.

This water block directly cools the GPU, the 16 GB of HBM2 memory, and the VRM (voltage regulator module), as cooling liquid is channeled directly over these critical areas. These newly developed water blocks feature a redesigned cooling engine with a larger footprint compared to the previous generation of EK Full Cover water blocks. This results in a larger surface area for heat transfer, which increases the thermal performance of these water blocks.

Samsung Electronics Introduces New Flashbolt HBM2E High Bandwidth Memory

Samsung Electronics Co., Ltd., the world leader in advanced semiconductor technology, today announced its new High Bandwidth Memory (HBM2E) product at NVIDIA's GPU Technology Conference (GTC) to deliver the highest DRAM performance levels for use in next-generation supercomputers, graphics systems, and artificial intelligence (AI).

The new solution, Flashbolt, is the industry's first HBM2E to deliver a 3.2 gigabits-per-second (Gbps) data transfer speed per pin, which is 33 percent faster than the previous-generation HBM2. Flashbolt has a density of 16 Gb per die, double the capacity of the previous generation. With these improvements, a single Samsung HBM2E package will offer 410 gigabytes-per-second (GBps) of data bandwidth and 16 GB of memory.
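
A brief sketch of the arithmetic behind these package-level figures (the 2.4 Gbps prior-generation rate is inferred from the 33 percent claim, and the 8-high stack for the 16 GB package is an assumption, not stated above):

```python
# Samsung Flashbolt HBM2E package figures, from the paragraph above.
pin_rate_gbps = 3.2
prev_gen_gbps = 2.4                 # inferred from the "33 percent faster" claim
bus_width_bits = 1024
die_density_gbit = 16
dies_per_stack = 8                  # assumed 8-high stack for the 16 GB package

print(round((pin_rate_gbps / prev_gen_gbps - 1) * 100))    # ~33 percent faster per pin
print(pin_rate_gbps * bus_width_bits / 8)                  # 409.6 ~= 410 GB/s per package
print(dies_per_stack * die_density_gbit / 8)               # 16 GB per package
```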

GIGABYTE Announces its Radeon VII Graphics Card

GIGABYTE, the world's leading premium gaming hardware manufacturer, today announced the launch of the Radeon VII HBM2 16G, its latest Radeon VII graphics card built upon the world's first 7 nm gaming GPU. Based on the enhanced second-generation AMD "Vega" architecture, the Radeon VII is equipped with 3,840 stream processors and 16 GB of ultra-fast HBM2 memory (second-generation High-Bandwidth Memory). It is designed to deliver exceptional performance and amazing experiences for the latest AAA, e-sports and Virtual Reality (VR) titles, demanding 3D rendering and video editing applications, and next-generation compute workloads.

According to the AMD official website, the Radeon VII graphics card enables high-performance gaming and ultra-high quality visuals. Ground-breaking 1 TB/s memory bandwidth and a 4,096-bit memory interface paves the way for ultra-high resolution textures, hyper-realistic settings and life-like characters. With the high speeds of today's graphics cards, framerates often exceed the monitor refresh rate, causing stuttering and tearing.

PowerColor Also Unveils its Reference Radeon VII

Amidst breaking news about PowerColor designing what could be the first custom-design Radeon VII graphics card, the company also unveiled its reference-design Radeon VII card, the AXVII 16GBHBM2-3DH. This card sticks to AMD's reference clock speeds of up to 1750 MHz boost and up to 1800 MHz "peak," with the memory ticking at 1000 MHz. It implements the slick, solid-aluminium triple-fan stock cooling solution AMD designed for this card. Drawing power from a pair of 8-pin PCIe power connectors, the card puts out three DisplayPort 1.4 connectors and one HDMI 2.0b. Based on the 7 nm "Vega 20" silicon, the Radeon VII packs 3,840 stream processors, 240 TMUs, 64 ROPs, and a 4096-bit wide HBM2 memory interface holding 16 GB of memory. It's likely that PowerColor will sell this card close to AMD's MSRP of USD 699.

Sapphire Launches its Radeon VII, AMD Reserving 1.80 GHz Boost for Direct Sales?

Sapphire is among the first AMD add-in-board (AIB) partners to launch a Radeon VII graphics card. The card sticks to AMD's reference board design, which the company unveiled at its CES 2019 keynote. Interestingly, its GPU boost frequency is set at 1750 MHz, which is less than the 1800 MHz boost figure mentioned by the company earlier. Could it be that AMD is reserving 1800 MHz for cards sold directly on AMD.com? The memory frequency is unchanged at 1000 MHz, which works out to an HBM2 memory bandwidth of 1 TB/s. Sapphire's box for this card lists key specifications upfront, and also features the Vega II logo. It's likely that the card will be sold at the baseline price of $699, given that there are no other variants of this card, not even custom designs.