News Posts matching #HBM3


Samsung Talks DDR6-12800, GDDR7 Development, and HBM3 Volume Production

During Samsung's Tech Day 2021, the company presented some interesting insights about the future of system memory technologies and how it plans to execute their production. Starting with the latest DDR5 standard, Samsung intends to follow the JEDEC specification while also offering overclocking modules that exceed it. While the DDR5 standard specifies memory modules at 6,400 MT/s, Samsung will develop modules capable of overclocking up to 8,400 MT/s. These are not yet confirmed, as they are still in development, but we can expect to see them later in DDR5's life cycle.

The company also talked about the DDR6 standard, which is supposedly twice as fast as DDR5. DDR6 is still in early development, and all we know so far is that the number of memory channels per module doubles over DDR5, to four, while the number of memory banks increases to 64. In addition to DDR6 for desktop and server use cases, the company is also working on Low Power DDR6 (LPDDR6) for mobile applications. While Samsung's LPDDR5 memory enters volume production on the 1a nm process at the beginning of 2022, LPDDR6 is still in early development. The base speed for DDR6 modules will allegedly arrive at 12,800 MT/s, while overclocking modules will reach up to 17,000 MT/s. The mobile-oriented LPDDR6 is also supposed to come with speeds of up to 17,000 MT/s.

SK hynix Announces Development of HBM3 DRAM

SK hynix Inc. announced that it has become the first in the industry to successfully develop High Bandwidth Memory 3 (HBM3), the world's best-performing DRAM. HBM3, the fourth generation of HBM technology, vertically stacks and interconnects multiple DRAM chips, and is a high-value product that dramatically raises the data processing rate.

The latest development, which follows the start of mass production of HBM2E in July last year, is expected to help consolidate the company's leadership in the market; SK hynix was also the first in the industry to mass-produce HBM2E. According to the company, its HBM3 is not only the fastest DRAM in the world but also offers the largest capacity and a significantly improved level of quality.

Synopsys Accelerates Multi-Die Designs with Industry's First Complete HBM3 IP and Verification Solutions

Synopsys, Inc. today announced the industry's first complete HBM3 IP solution, including controller, PHY, and verification IP for 2.5D multi-die package systems. HBM3 technology helps designers meet essential high-bandwidth and low-power memory requirements for system-on-chip (SoC) designs targeting high-performance computing, AI and graphics applications. Synopsys' DesignWare HBM3 Controller and PHY IP, built on silicon-proven HBM2E IP, leverage Synopsys' interposer expertise to provide a low-risk solution that enables high memory bandwidth at up to 921 GB/s.

The Synopsys verification solution, including Verification IP with built-in coverage and verification plans, off-the-shelf HBM3 memory models for ZeBu emulation, and HAPS prototyping system, accelerates verification from HBM3 IP to SoCs. To accelerate development of HBM3 system designs, Synopsys' 3DIC Compiler multi-die design platform provides a fully integrated architectural exploration, implementation and system-level analysis solution.

Samsung Brings In-memory Processing Power to Wider Range of Applications

Samsung Electronics, the world leader in advanced memory technology, today showcased its latest advancements in processing-in-memory (PIM) technology at Hot Chips 33, a leading semiconductor conference where the most notable microprocessor and IC innovations are unveiled each year. Samsung's revelations include the first successful integration of its PIM-enabled High Bandwidth Memory (HBM-PIM) into a commercialized accelerator system, and broadened PIM applications embracing DRAM modules and mobile memory, accelerating the move toward the convergence of memory and logic.

In February, Samsung introduced the industry's first HBM-PIM (Aquabolt-XL), which incorporates the AI processing function into Samsung's HBM2 Aquabolt, to enhance high-speed data processing in supercomputers and AI applications. The HBM-PIM has since been tested in the Xilinx Virtex Ultrascale+ (Alveo) AI accelerator, where it delivered an almost 2.5X system performance gain as well as more than a 60% cut in energy consumption.

Rambus Innovates 8.4 Gbps HBM3-ready Memory Subsystem

Rambus Inc., a premier chip and silicon IP provider making data faster and safer, today announced the Rambus HBM3-ready memory interface subsystem consisting of a fully-integrated PHY and digital controller. Supporting breakthrough data rates of up to 8.4 Gbps, the solution can deliver over a terabyte per second of bandwidth, more than double that of high-end HBM2E memory subsystems. With a market-leading position in HBM2/2E memory interface deployments, Rambus is ideally suited to enable customers' implementations of accelerators using next-generation HBM3 memory.

"The memory bandwidth requirements of AI/ML training are insatiable with leading-edge training models now surpassing billions of parameters," said Soo Kyoum Kim, associate vice president, Memory Semiconductors at IDC. "The Rambus HBM3-ready memory subsystem raises the bar for performance enabling state-of-the-art AI/ML and HPC applications."

SK Hynix Details its Upcoming HBM3 Memory: 665 GB/s per Stack

SK Hynix is at the forefront of developing the next generation of stacked high-bandwidth memory, the HBM3 standard. Succeeding the current HBM2e standard, HBM3 will power next-generation HPC and AI processors in high-density multi-chip modules. A Tom's Hardware report citing information from SK Hynix reveals two key details about the new standard. For starters, it could offer per-pin data rates of 5.2 Gbps, a 44% increase over the 3.6 Gbps at which HBM2e caps out. This works out to a per-stack bandwidth of 665.6 GB/s, compared to 460.8 GB/s for HBM2e. A processor with four such stacks (over a 4096-bit wide bus) would hence enjoy 2.66 TB/s of memory bandwidth. It's likely that HBM3 stacks from SK Hynix could implement the DBI Ultra 2.5D/3D hybrid bonding interconnect technology licensed from Xperi Corp.
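The per-stack and aggregate figures in the report follow directly from the per-pin data rate. A minimal sanity check, assuming the standard 1024-bit interface per HBM stack (the stack width is not stated in the report itself):

```python
def hbm_bandwidth(pin_rate_gbps, stacks=1, stack_width_bits=1024):
    """Aggregate HBM bandwidth in GB/s: per-pin rate x bus width / 8, times stacks."""
    per_stack_gbs = pin_rate_gbps * stack_width_bits / 8
    return per_stack_gbs * stacks

# HBM3 at 5.2 Gbps: ~665.6 GB/s per stack, ~2.66 TB/s over four stacks (4096-bit bus)
print(hbm_bandwidth(5.2))
print(hbm_bandwidth(5.2, stacks=4) / 1000)

# HBM2e at 3.6 Gbps: ~460.8 GB/s per stack
print(hbm_bandwidth(3.6))
```

The same arithmetic reproduces the quoted 44% uplift: 5.2 / 3.6 ≈ 1.44.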

OpenFive Tapes Out SoC for Advanced HPC/AI Solutions on TSMC 5 nm Technology

OpenFive, a leading provider of customizable, silicon-focused solutions with differentiated IP, today announced the successful tape out of a high-performance SoC on TSMC's N5 process, with integrated IP solutions targeted for cutting edge High Performance Computing (HPC)/AI, networking, and storage solutions.

The SoC features an OpenFive High Bandwidth Memory (HBM3) IP subsystem and D2D I/Os, as well as a SiFive E76 32-bit CPU core. The HBM3 interface supports 7.2 Gbps speeds allowing high throughput memories to feed domain-specific accelerators in compute-intensive applications including HPC, AI, Networking, and Storage. OpenFive's low-power, low-latency, and highly scalable D2D interface technology allows for expanding compute performance by connecting multiple dice together using an organic substrate or a silicon interposer in a 2.5D package.

Intel Xe HPC Multi-Chip Module Pictured

Intel SVP for architecture, graphics, and software, Raja Koduri, tweeted the first picture of the Xe HPC scalar compute processor multi-chip module, with its large IHS off. It reveals two large main logic dies built on the 7 nm silicon fabrication process from a third-party foundry. The Xe HPC processor will be targeted at supercomputing and AI-ML applications, so the main logic dies are expected to be large arrays of execution units, spread across what appear to be eight clusters, surrounded by ancillary components such as memory controllers and interconnect PHYs.

There appear to be two kinds of on-package memory on the Xe HPC. The first kind is HBM stacks (from either the HBM2E or HBM3 generation), serving as the main high-speed memory; while the other is a mystery for now. This could either be another class of DRAM, serving a serial processing component on the main logic die; or a non-volatile memory, such as 3D XPoint or NAND flash (likely the former), providing fast persistent storage close to the main logic dies. There appear to be four HBM-class stacks per logic die (so 4096-bit per die and 8192-bit per package), and one die of this secondary memory per logic die.

Micron Also Announces Development of HBMnext

Continuing from the Micron tech brief we shared earlier, a new interesting prospect for the future of ultra-bandwidth solutions is being called simply HBMnext. It's very likely this is only a working title for a next generation HBM memory interface, whether it is a mere evolution of HBM2E or HBM3 proper. The jump in memory speed from HBM2E to HBMnext is still under wraps; however, we've seen HBM2E take significant strides compared to HBM2 already. The first HBM2E products arrived with a 0.4 Gbps improvement over HBM2 (2.4 Gbps vs 2.0 Gbps), but HBM2E has already been certified - and is announced by Micron - as hitting 3.2 Gbps as soon as the second half of this year. One can expect HBMnext to take somewhat comparable strides. Users shouldn't expect to see HBMnext on any products soon, though; it's only expected to launch come 2022.

Samsung Now Mass Producing Industry's First 2nd-Generation 10nm Class DRAM

Samsung Electronics Co., Ltd., the world leader in advanced memory technology, announced today that it has begun mass producing the industry's first 2nd-generation of 10-nanometer class (1y-nm), 8-gigabit (Gb) DDR4 DRAM. For use in a wide range of next-generation computing systems, the new 8 Gb DDR4 features the highest performance and energy efficiency for an 8 Gb DRAM chip, as well as the smallest dimensions.

"By developing innovative technologies in DRAM circuit design and process, we have broken through what has been a major barrier for DRAM scalability," said Gyoyoung Jin, president of Memory Business at Samsung Electronics. "Through a rapid ramp-up of the 2nd-generation 10 nm-class DRAM, we will expand our overall 10 nm-class DRAM production more aggressively, in order to accommodate strong market demand and continue to strengthen our business competitiveness."

AMD Navi Found Secretly Hiding in Linux Drivers

We know AMD has been doing a great job keeping the lid on its Navi architecture, with information being scarce at the moment. Aside from knowing that Navi is being fabricated on the 7 nm process, the microarchitecture will quite possibly support next-generation memory like GDDR6 or HBM3. In a Navi discussion on the Beyond3D forums, a user found an entry in a Linux driver dating back to July that apparently mentions AMD's upcoming architecture - not by its real name, of course. The code adds support for importing new ASIC definitions from a text file, as opposed to adding support in code. Tom St Denis, a software engineer at AMD, listed the output that would be generated by using this functionality. However, the entry that caught our attention reads: new_chip.gfx10.mmSUPER_SECRET.enable [0: 0]. If our memory serves us right, the codename for Vega was GFX9, so by that logic, Navi should carry the GFX10 codename. Obviously, the SUPER_SECRET part further backs up our theory - or maybe AMD's just trolling us. The red team has been hiring personnel for its GFX10 projects, so we can assume they're working diligently to release Navi some time next year.

Rambus Talks HBM3, DDR5 in Investor Meeting

Rambus, a company that has veered between being an innovative company and a patent troll, has shed some more light on what can be expected from HBM3 memory (when it's finally available). In an investor meeting, representatives from the company shared details regarding HBM3's improvements over HBM2. Details are still scarce, but at least we know Rambus' expectations for the technology: double the memory bandwidth per stack compared to HBM2 (a per-pin data rate of 4.0 Gbps, up from HBM2's 2.0 Gbps), and a more complex design that leaves behind the 2.5D approach due to the increased height of HBM3 memory stacks. An interesting thing to note is that Rambus is counting on HBM3 being produced on 7 nm technologies. Considering the overall semiconductor manufacturing calendar for the 7 nm process, this should place HBM3 production in 2019 at the earliest.

HBM3 is also expected to bring much lower power consumption than HBM2, besides increasing memory density and bandwidth. However, the "complex design architectures" in the Rambus slides should give readers pause. HBM2 production has had apparent trouble meeting demand, with lower-than-expected yields being the most likely culprit. Knowing the trouble AMD has had in successfully packaging HBM2 memory with the silicon interposer and its own GPUs, an even more complex implementation of HBM memory in HBM3 could signal more troubles in that area - maybe not just for AMD, but for any other adopters of the technology. Here's hoping AMD's woes were due only to one-off snags on its packaging partners' side and don't spell trouble for HBM's implementation itself.

Samsung Bets on GDDR6 for 2018 Rollout

Even as its fellow Korean DRAM maker SK Hynix is pushing for HBM3 to bring 2 TB/s memory bandwidths to graphics cards, Samsung is betting on relatively inexpensive standards that succeed existing ones. The company hopes to have GDDR6, the memory standard that succeeds GDDR5X, arrive by 2018.

GDDR6 will serve up per-pin speeds of up to 16 Gbps, up from the 10 Gbps currently offered by GDDR5X. This should enable memory bandwidths of 512 GB/s over a 256-bit wide memory interface, and 768 GB/s over 384-bit. The biggest innovation that sets GDDR6 apart from GDDR5X is LP4X, a method with which the memory controller can more responsively keep voltages proportionate to clocks, reducing power draw by up to 20% over the previous standard.
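The quoted bandwidths are simply per-pin speed times bus width. A quick check of the figures above (the GDDR5X number is included for comparison):

```python
def gddr_bandwidth(pin_rate_gbps, bus_width_bits):
    """Memory bandwidth in GB/s for a GDDR-style interface."""
    return pin_rate_gbps * bus_width_bits / 8

print(gddr_bandwidth(16, 256))  # GDDR6 over 256-bit: 512.0 GB/s
print(gddr_bandwidth(16, 384))  # GDDR6 over 384-bit: 768.0 GB/s
print(gddr_bandwidth(10, 256))  # GDDR5X over 256-bit: 320.0 GB/s
```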

Third-Generation HBM Could Enable Graphics Cards with 64GB Memory

One of the first drafts of the HBM3 specification reveals that the standard could enable graphics cards with up to 64 GB of video memory. HBM2, which is yet to make its consumer graphics debut, caps out at 32 GB, while first-generation HBM, which debuted with the AMD Radeon Fury series, tops out at just 4 GB.

What's more, HBM3 doubles bandwidth over HBM2, pushing up to 512 GB/s per stack. A 4096-bit HBM3 equipped GPU could have up to 2 TB/s (yes, terabytes per second) of memory bandwidth at its disposal. SK Hynix, one of the key proponents of the HBM standard, even claims that HBM3 will be both more energy-efficient and cost-effective than existing memory standards, for the performance on offer. Some of the first HBM3 implementations could come from the HPC industry, with consumer implementations including game consoles, graphics cards, TVs, etc., following later.
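Both headline numbers are consistent with a four-stack configuration. The stack composition below (eight dies per stack, 16 Gb per die) is an assumption that reproduces the 64 GB figure, not something stated in the spec draft:

```python
# Hypothetical HBM3 configuration consistent with the draft's figures
stacks = 4                 # 4096-bit bus = four 1024-bit stacks
dies_per_stack = 8         # assumed stack height
die_density_gbit = 16      # assumed per-die density in gigabits
per_stack_gbs = 512        # per-stack bandwidth quoted above, in GB/s

capacity_gb = stacks * dies_per_stack * die_density_gbit / 8  # 64.0 GB
bandwidth_tbs = stacks * per_stack_gbs / 1000                 # ~2 TB/s

print(capacity_gb)
print(bandwidth_tbs)
```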