News Posts matching #HBM


Jingjia Micro JM9 GPU Series Targeting GTX 1080 Performance Tapes Out

The Chinese electronics company Jingjia Micro has recently completed the tapeout of its JM9 GPU series, almost two years after first announcing the lineup. The JM9 series will consist of two GPUs: the entry-level JM9231 targeting GTX 1050 performance, and the higher-end JM9271 aiming for the GTX 1080. The JM9231 is stated to feature a clock speed above 1.5 GHz and 8 GB of GDDR5 memory, and to provide 2 TFLOPS of performance within a 150 W TDP through a PCIe Gen3 x16 interface. The JM9271 increases the clock speed to above 1.8 GHz and is paired with 16 GB of HBM memory, which should offer 8 TFLOPS of single-precision performance to rival the GTX 1080. The card manages this within a 200 W TDP and also includes PCIe Gen4 x16 support. Both cards support HDMI 2.0, in addition to DisplayPort 1.3 on the JM9231 and DisplayPort 1.4 on the JM9271.
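
As a sanity check on those throughput claims, peak FP32 compute is conventionally estimated as two FLOPs (one fused multiply-add) per shader per clock. The sketch below inverts that textbook formula to estimate the shader counts the two chips would need; Jingjia Micro has not disclosed shader counts, so the results are purely illustrative.

```python
# Invert the peak-FLOPS formula (FLOPS = 2 x shaders x clock) to estimate how
# many shaders each JM9 chip would need for its claimed throughput.
# Jingjia Micro has not disclosed shader counts; this is illustrative only.

def shaders_needed(tflops: float, clock_ghz: float, flops_per_clock: int = 2) -> float:
    return (tflops * 1e12) / (clock_ghz * 1e9 * flops_per_clock)

print(f"JM9231: ~{shaders_needed(2.0, 1.5):.0f} shaders (2 TFLOPS @ 1.5 GHz)")  # ~667
print(f"JM9271: ~{shaders_needed(8.0, 1.8):.0f} shaders (8 TFLOPS @ 1.8 GHz)")  # ~2222
```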

While the JM9271 may target GTX 1080 performance, it only features OpenGL and OpenCL API support, lacking DirectX or Vulkan compatibility, which greatly reduces its usefulness for gaming. The cards were originally expected to be available in 2020, but after various delays they are now ready to enter production. These products are highly unlikely to make their way outside of the Chinese mainland, and even if they did, we wouldn't expect them to have much impact on the global market.

First AMD Radeon Vega M Drivers for Intel NUC8i7HNK and NUC8i7HVK in Over 16 Months Released

Intel and AMD released the first Radeon Software drivers in over 16 months for the exotic Radeon Vega M graphics found in the Intel NUC8i7HNK and NUC8i7HVK. The Intel Download Center just added version 21.10.03.11 of Radeon Software for the graphics solution, dated 09/09/2021. The previous drivers dated all the way back to February 2020 and were based on Adrenalin 20.2.

The drivers are based on a release candidate of Adrenalin 21.10, namely 21.10RC1, but do not include the security updates AMD could be bundling with the upcoming 21.10 drivers (October 2021). "Radeon RX Vega M Graphics Driver version 21.10.03.11, which is based on 21.10RC1, does not include the latest functional and security updates. An update is targeted to be released in March of 2022 and will include additional functional and security updates. Customers should update to the latest version as it becomes available," says Intel in the release notes of the drivers. The drivers do include optimizations for the latest games, including "Metro Exodus" (DX12) and "Resident Evil Village," besides adding support for Microsoft PlayReady AV1 decode.

DOWNLOAD: AMD Radeon Vega M drivers for Intel NUC8i7HNK and NUC8i7HVK from Intel

Penetration Rate of Ice Lake CPUs in Server Market Expected to Surpass 30% by Year's End as x86 Architecture Remains Dominant, Says TrendForce

As the server industry transitions to the latest generation of processors based on the x86 platform, Intel Ice Lake and AMD Milan CPUs entered mass production earlier this year and shipped at low volume in 1Q21 to certain customers, such as North American CSPs and telecommunications companies, according to TrendForce's latest investigations. These processors are expected to begin seeing widespread adoption in the server market in 3Q21. TrendForce believes that Ice Lake represents a step up in computing performance from the previous generation due to its higher scalability and support for more memory channels. At the same time, the new normal that emerged in the post-pandemic era is expected to drive clients in the server sector to partially migrate to the Ice Lake platform, whose share in the server market is expected to surpass 30% in 4Q21.

Certain Intel Xeon "Sapphire Rapids" SKUs Come with On-Package HBM

Intel today, in its 2021 International Supercomputing Conference presentation, revealed that certain next-generation Xeon "Sapphire Rapids" SKUs will come with on-package high-bandwidth memory (HBM). Given the context of the presentation, these could be special SKUs designed for high-density HPC setups, in which the processor package includes a certain amount of "PMEM" (package memory) besides the processor's 8-channel DDR5 memory interface.

The size of the HBM PMEM and its position in the memory hierarchy were detailed, too. Given its high-density applications, PMEM need not serve merely as a victim cache for the processor; it is capable of serving as main memory, with none of the DDR5 DRAM channels populated with DIMMs. On machines with DIMMs, the PMEM will serve as a victim cache for the processor's on-die last-level cache, accelerating memory I/O. "The next-generation of Intel Xeon Scalable processors (code-named "Sapphire Rapids") will offer integrated High Bandwidth Memory (HBM), providing a dramatic boost in memory bandwidth and a significant performance improvement for HPC applications that operate memory bandwidth-sensitive workloads. Users can power through workloads using just High Bandwidth Memory or in combination with DDR5," says Intel.

New Intel XPU Innovations Target HPC and AI

At the 2021 International Supercomputing Conference (ISC), Intel is showcasing how the company is extending its lead in high performance computing (HPC) with a range of technology disclosures, partnerships and customer adoptions. Intel processors are the most widely deployed compute architecture in the world's supercomputers, enabling global medical discoveries and scientific breakthroughs. Intel is announcing advances in its Xeon processor for HPC and AI as well as innovations in memory, software, exascale-class storage, and networking technologies for a range of HPC use cases.

"To maximize HPC performance we must leverage all the computer resources and technology advancements available to us," said Trish Damkroger, vice president and general manager of High Performance Computing at Intel. "Intel is the driving force behind the industry's move toward exascale computing, and the advancements we're delivering with our CPUs, XPUs, oneAPI Toolkits, exascale-class DAOS storage, and high-speed networking are pushing us closer toward that realization."

NVIDIA Launches A100 PCIe-Based Accelerator with 80 GB HBM2E Memory

During this year's ISC 2021 event, as part of the company's exhibition portfolio, NVIDIA has decided to launch an updated version of the A100 accelerator. Back in November, NVIDIA launched an 80 GB HBM2E version of the A100 accelerator on its proprietary SXM4 form factor. Today, we are getting the same upgraded GPU in the more standard dual-slot PCIe type of card. Featuring a GA100 GPU built on TSMC's 7 nm process, this SKU has 6,912 CUDA cores present. To pair with that amount of compute, the GPU needs appropriate memory, and this time there is as much as 80 GB of HBM2E. The memory achieves a bandwidth of 2,039 GB/s, with the memory dies running at an effective 3.186 Gbps per pin. An important note is that the TDP of the GPU has been lowered to 250 W, compared to the 400 W SXM4 solution.
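
The quoted bandwidth checks out against the usual HBM arithmetic: per-pin data rate times total bus width, divided by eight bits per byte. A minimal sketch, assuming five active 1,024-bit HBM2E stacks (a 5,120-bit bus), which is how the 80 GB A100 is generally understood to be configured:

```python
# HBM2E bandwidth check: per-pin rate (Gbps) x bus width (bits) / 8 bits/byte.
# Assumes five active 1,024-bit stacks (5,120-bit bus) on the 80 GB A100.
stacks = 5
bus_width_bits = stacks * 1024
pin_rate_gbps = 3.186                   # effective per-pin data rate

bandwidth_gbs = pin_rate_gbps * bus_width_bits / 8
print(f"{bandwidth_gbs:.0f} GB/s")      # -> 2039 GB/s, matching the spec
```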

To pair with the new upgrade, NVIDIA made another announcement today: an enterprise counterpart to Microsoft's DirectStorage, called NVIDIA GPUDirect Storage. It gives applications a direct path between storage and the massive memory pool on the GPU, with its 80 GB of super-fast HBM2E memory.

AMD Confirms CDNA2 Instinct MI200 GPU Will Feature at Least Two Dies in MCM Design

Today we've got the first genuine piece of information that confirms AMD's MCM approach to CDNA2, the next-gen compute architecture meant for ML/HPC/Exascale computing. This comes courtesy of a Linux kernel update, where AMD engineers annotated the latest Linux kernel patch with some considerations specific for their upcoming Aldebaran, CDNA2-based compute cards. Namely, the engineers clarify the existence of a "Die0" and a "Die1", where power data fetching should be allocated to Die0 of the accelerator card - and that the power limit shouldn't be set on the secondary die.

This confirms that Aldebaran will be made of at least two CDNA2 compute dies, and as (almost) always in computing, one seems to be tasked with general administration of both. It is unclear as of yet whether the HBM2 memory controller will be allocated to the primary die, or if there will be an external I/O die (much like in Zen) that AMD can leverage for off-chip communication. AMD's approach to CDNA2 will eventually find its way, in an updated form, into AMD's consumer-geared next-generation graphics architecture, RDNA3.

OpenFive Tapes Out SoC for Advanced HPC/AI Solutions on TSMC 5 nm Technology

OpenFive, a leading provider of customizable, silicon-focused solutions with differentiated IP, today announced the successful tape out of a high-performance SoC on TSMC's N5 process, with integrated IP solutions targeted for cutting edge High Performance Computing (HPC)/AI, networking, and storage solutions.

The SoC features an OpenFive High Bandwidth Memory (HBM3) IP subsystem and D2D I/Os, as well as a SiFive E76 32-bit CPU core. The HBM3 interface supports 7.2 Gbps speeds, allowing high-throughput memories to feed domain-specific accelerators in compute-intensive applications including HPC, AI, networking, and storage. OpenFive's low-power, low-latency, and highly scalable D2D interface technology allows for expanding compute performance by connecting multiple dies together using an organic substrate or a silicon interposer in a 2.5D package.
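
Assuming the HBM3 subsystem keeps the 1,024-bit-per-stack interface of earlier HBM generations (the announcement does not spell this out), the quoted 7.2 Gbps pin speed works out to roughly 921.6 GB/s per stack:

```python
# Per-stack HBM3 bandwidth at the quoted 7.2 Gbps pin speed, assuming the
# conventional 1,024-bit interface per stack (not confirmed by OpenFive).
pin_rate_gbps = 7.2
bus_width_bits = 1024
print(f"{pin_rate_gbps * bus_width_bits / 8:.1f} GB/s per stack")   # 921.6 GB/s
```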

NVIDIA and Global Computer Makers Launch Industry-Standard Enterprise Server Platforms for AI

NVIDIA today introduced a new class of NVIDIA-Certified Systems, bringing AI within reach for organizations that run their applications on industry-standard enterprise data center infrastructure. These include high-volume enterprise servers from top manufacturers, announced in January and now certified to run the NVIDIA AI Enterprise software suite, which is exclusively certified for VMware vSphere 7, the world's most widely used compute virtualization platform.

Further expanding the NVIDIA-Certified servers ecosystem is a new wave of systems featuring the NVIDIA A30 GPU for mainstream AI and data analytics and the NVIDIA A10 GPU for AI-enabled graphics, virtual workstations and mixed compute and graphics workloads, also announced today.

NVIDIA Announces Grace CPU for Giant AI and High Performance Computing Workloads

NVIDIA today announced its first data center CPU, an Arm-based processor that will deliver 10x the performance of today's fastest servers on the most complex AI and high performance computing workloads.

The result of more than 10,000 engineering years of work, the NVIDIA Grace CPU is designed to address the computing requirements for the world's most advanced applications—including natural language processing, recommender systems and AI supercomputing—that analyze enormous datasets requiring both ultra-fast compute performance and massive memory. It combines energy-efficient Arm CPU cores with an innovative low-power memory subsystem to deliver high performance with great efficiency.

Intel's Upcoming Sapphire Rapids Server Processors to Feature up to 56 Cores with HBM Memory

Intel has just launched its Ice Lake-SP lineup of Xeon Scalable processors, featuring the new Sunny Cove CPU core design. Built on the 10 nm node, these processors represent Intel's first 10 nm shipping product designed for enterprise. However, another 10 nm product is set to be released for enterprise users. Intel is already preparing the Sapphire Rapids generation of Xeon processors, and today we get to see more details about it. Thanks to an anonymous tip that VideoCardz received, we now have a few more details, such as core count, memory configurations, and connectivity options, and Sapphire Rapids is shaping up to be a very competitive platform. Do note that the slide is a bit older; however, it contains useful information.

The lineup will top out at 56 cores with 112 threads, with this processor carrying a TDP of 350 W, notably higher than its predecessors. Perhaps the most interesting notes from the slide concern memory. The new platform will debut the DDR5 standard and bring higher capacities with higher speeds. Along with the new protocol, the chiplet design of Sapphire Rapids will bring HBM2E memory to CPUs, with up to 64 GB of it per socket/processor. The PCIe 5.0 standard will also be present with 80 lanes, accompanying four Intel UPI 2.0 links. Intel is also supposed to extend the x86_64 configuration here with AMX/TMUL extensions for better INT8 and BFloat16 processing.
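
The 64 GB per-socket figure divides neatly into standard HBM2E stack sizes. One hypothetical breakdown, assuming four 8-high stacks of 16 Gb dies (the slide does not specify the configuration):

```python
# Hypothetical breakdown of 64 GB of HBM2E per socket. Intel's slide does not
# specify the configuration; stack count, height, and die size are assumptions.
stacks = 4
dies_per_stack = 8           # 8-high stack
die_capacity_gbit = 16       # 16 Gb per DRAM die

gb_per_stack = dies_per_stack * die_capacity_gbit / 8    # Gb -> GB
total_gb = stacks * gb_per_stack
print(f"{stacks} stacks x {gb_per_stack:.0f} GB = {total_gb:.0f} GB")   # 64 GB
```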

SK Hynix Envisions the Future: 600-Layer 3D NAND and EUV-made DRAM

On March 22nd, the CEO of SK Hynix, Seok-Hee Lee, gave a keynote speech at the IEEE International Reliability Physics Symposium (IRPS) and shared with experts a part of the company's plan for the future of SK Hynix products. The CEO took the stage and presented some conceptual technologies the company is working on right now. At the center of the show, two distinct products stood out: 3D NAND and DRAM. Until now, the company believed that its 3D NAND scaling was quite limited and that it could push up to 500 layers sometime in the future before the limit was reached. However, according to the latest research, SK Hynix will be able to produce 600-layer 3D NAND technology in the more distant future.

So far, the company has managed to manufacture and sample 512 Gb 176-layer 3D NAND chips, so 600-layer solutions are still far away. Nonetheless, it is a possibility that we are looking at. Before that layer count is reached, various problems need to be solved for the technology to work. According to SK Hynix, "the company introduced the atomic layer deposition (ALD) technology to further improve the cell property of efficiently storing electric charges and exporting them when needed, while developing technology to maintain uniform electric charges over a certain amount through the innovation of dielectric materials. In addition to this, to solve film stress issues, the mechanical stress levels of films is controlled and the cell oxide-nitride (ON) material is being optimized. To deal with the interference phenomenon between cells and charge loss that occur when more cells are stacked at a limited height, SK Hynix developed the isolated-charge trap nitride (isolated-CTN) structure to enhance reliability."

Apple is Discontinuing Intel-based iMac Pro

According to the official company website, Apple will no longer manufacture its iMac Pro computers based on Intel processors. Instead, the company will carry these models in its store only while supplies last. Apple will be replacing them with next-generation iMac Pro devices that will be home to custom Apple Silicon processors, combining Arm CPU cores with a custom GPU design. With a starting price of 4,999 USD, the Apple iMac Pro could be configured up to around 15,000 USD. The most expensive component was the Intel Xeon processor inside it, alongside the AMD GPU with HBM. Configuration pricing was also driven by storage/RAM options. However, even the most expensive iMac Pro with its 2017 hardware had no chance against the regular 2020 iMac, so the product was set to be discontinued sooner or later.

When the stock of the iMac Pro runs out, Apple will replace the model with its Apple Silicon-equipped variant. According to the current rumor mill, Apple is set to hold a keynote on March 16th to announce new iMac Pro devices with custom processors. What happens next is up to Apple, so we have to wait and see.

AMD is Preparing RDNA-Based Cryptomining GPU SKUs

Back in February, NVIDIA announced GPU SKUs dedicated to cryptocurrency mining, without any graphics outputs present on the cards. Today, we are getting information that AMD is rumored to introduce its own lineup of graphics cards dedicated to cryptocurrency mining. In the latest patch for AMD Direct Rendering Manager (DRM), a subsystem of the Linux kernel responsible for interfacing with GPUs, we see the appearance of Navi 12. This GPU SKU was not used for anything except Apple's Mac devices, in the form of the Radeon Pro 5600M GPU. However, it seems like Navi 12 could join forces with the Navi 10 GPU SKU and become part of special "blockchain" GPUs.

Way back in November, popular hardware leaker KOMACHI noted that AMD is preparing three additional Radeon SKUs called Radeon RX 5700XTB, RX 5700B, and RX 5500XTB. The "B" added to the end of each name denotes the blockchain revision, made specifically for crypto-mining. When it comes to the specifications of the upcoming mining-specific AMD GPUs, we know that the two RX 5700-based cards use the first-generation RDNA architecture and have 2,560 stream processors (40 compute units). The memory configuration for these cards remains unknown, though AMD surely won't be fitting HBM2 stacks on mining cards as it did with the Navi 12 GPU. All that remains is to wait and see what AMD announces in the coming months.

SiPearl to Manufacture its 72-Core Rhea HPC SoC at TSMC Facilities

SiPearl has this week announced its collaboration with Open-Silicon Research, the India-based entity of OpenFive, to produce a next-generation SoC designed for HPC purposes. SiPearl is a part of the European Processor Initiative (EPI) team and is responsible for designing the SoC itself, which is supposed to be a base for the European exascale supercomputer. Through the partnership with Open-Silicon Research, SiPearl expects a service that will integrate all the IP blocks and help with the tape-out of the chip once it is done. The deadline is set for the year 2023; however, both companies expect the chip to ship by Q4 of 2022.

When it comes to details of the SoC, it is called Rhea, and it will be a 72-core Arm-ISA-based processor with Neoverse Zeus cores interconnected by a mesh. There are going to be 68 mesh-network L3 cache slices in between all of the cores. All of that will be manufactured using TSMC's 6 nm extreme ultraviolet lithography (EUV) technology. The Rhea SoC design will utilize 2.5D packaging with many IP blocks stitched together and HBM2E memory present on the package. It is unknown exactly what configuration of HBM2E is going to be present. The system will also see support for DDR5 memory, thus enabling two-level system memory by combining HBM and DDR. We are excited to see what the final product looks like, and now we wait for more updates on the project.

Intel Confirms HBM is Supported on Sapphire Rapids Xeons

Intel has just released its "Architecture Instruction Set Extensions and Future Features Programming Reference" manual, which provides developers with information about Intel's upcoming hardware additions so they can make use of them later on. Today, thanks to @InstLatX64 on Twitter, we have information that Intel is bringing an on-package High Bandwidth Memory (HBM) solution to its next-generation Sapphire Rapids Xeon processors. Specifically, two error codes are mentioned: 0220H (HBM command/address parity error) and 0221H (HBM data parity error). Both entries exist to flag data errors in HBM so the CPU operates on correct data.
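
For illustration, the two documented codes can be captured in a small lookup table; the table and the helper below are our own sketch for decoding such codes, not Intel code:

```python
# The two HBM-related error codes listed in Intel's reference manual, in a
# lookup table. This decoder is an illustrative sketch, not Intel code.
HBM_ERROR_CODES = {
    0x0220: "HBM command/address parity error",
    0x0221: "HBM data parity error",
}

def describe(code: int) -> str:
    return HBM_ERROR_CODES.get(code, f"unknown error code {code:#06x}")

print(describe(0x0220))   # HBM command/address parity error
print(describe(0x0221))   # HBM data parity error
```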

The addition of HBM is just one of many new technologies Sapphire Rapids brings; the platform will supposedly also feature an eight-channel DDR5 memory controller enriched with Intel's Data Streaming Accelerator (DSA). To connect to all of the external accelerators, the platform uses the PCIe 5.0 protocol paired with the CXL 1.1 standard to enable cache coherency in the system. And as a reminder, this would not be the first time we see a server CPU use HBM: Fujitsu developed the 48-core A64FX processor with HBM memory, and it is powering today's most powerful supercomputer, Fugaku. That shows how much a processor can be improved by adding faster memory on-board. We are waiting to see how Intel manages to play it out and what we end up seeing on the market when Sapphire Rapids is delivered.

BittWare Launches IA-840F with Intel Agilex FPGA and Support for oneAPI

BittWare, a Molex company, today unveiled the IA-840F, the company's first Intel Agilex-based FPGA card, designed to deliver significant performance-per-watt improvements for next-generation data center, networking and edge compute workloads. Agilex FPGAs deliver up to 40% higher performance or up to 40% lower power, depending on application requirements. BittWare maximized I/O features using the Agilex chip's unique tiling architecture with dual QSFP-DDs (4× 100G), PCIe Gen4 x16, and three MCIO expansion ports for diverse applications. BittWare also announced support for Intel oneAPI, which enables an abstracted development flow for dramatically simplified code re-use across multiple architectures.

"Modern data center workloads are incredibly diverse, requiring customers to implement a mix of scalar, vector, matrix and spatial architectures," said Craig Petrie, vice president of marketing for BittWare. "The IA-840F ensures that customers can quickly and easily exploit the advanced features of the Intel Agilex FPGA. For those customers who prefer to develop FPGA applications at an abstracted level, we are including support for oneAPI. This new unified software programming environment allows customers to program the Agilex FPGA from a single code base with native high-level language performance across architectures."

SK hynix Inc. Reports Third Quarter 2020 Results

SK hynix Inc. today announced financial results for its third quarter 2020 ended on September 30, 2020. The consolidated revenue of third quarter 2020 was 8.129 trillion won while the operating profit amounted to 1.3 trillion won, and the net income 1.078 trillion won. Operating margin for the quarter was 16% and net margin was 13%.

Although the Company saw a recovery of mobile DRAM demand in the quarter, revenue and operating profit decreased by 6% and 33% quarter-over-quarter (QoQ), respectively, as server DRAM and SSD demand weakened and the overall semiconductor memory price flow turned downwards in the quarter. For DRAM, SK hynix proactively responded to rising demand for mobile and graphics DRAM, as well as the expansion of consumer electronics DRAM demand. As a result, in spite of decreased server DRAM demand, the Company's DRAM bit shipments in the quarter still increased by 4% QoQ. However, due to the unfavorable pricing of server DRAM and certain other DRAM products, the average selling price decreased by 7% QoQ.

TSMC to Enter Mass Production of 6th Generation CoWoS Packaging in 2023, up to 12 HBM Stacks

TSMC, the world's leading semiconductor manufacturing company, is rumored to start mass production of its 6th-generation Chip-on-Wafer-on-Substrate (CoWoS) packaging technology in 2023. As silicon scaling gets ever more challenging, manufacturers have to come up with other ways to extract as much performance as possible. That is where TSMC's CoWoS and other chiplet technologies come in. They allow designers to integrate many integrated circuits on a single package, making for a cheaper overall product compared to one built on a single big die. So what is so special about the 6th-generation CoWoS technology from TSMC, you might wonder? The new generation is said to enable a massive 12 stacks of HBM memory on a package. You are reading that right. If each stack were an HBM2E variant with 16 GB capacity, that would put 192 GB of memory on the package. Of course, that would be a very expensive chip to manufacture; however, it is just a showcase of what the technology could achieve.
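
The arithmetic behind that 192 GB figure is a one-liner, taking the 16 GB HBM2E stack capacity the article uses as an example:

```python
# Maximum on-package capacity with 6th-gen CoWoS: 12 HBM stacks, using the
# 16 GB HBM2E stack capacity from the article as an example.
stacks, gb_per_stack = 12, 16
print(f"{stacks * gb_per_stack} GB on package")   # 192 GB
```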

Update 16:44 UTC: The English DigiTimes report indicates that this technology is expected to see mass production in 2023.

AMD Big Navi GPU Features Infinity Cache?

As we near the launch of AMD's highly hyped, next-generation RDNA 2 GPU codenamed "Big Navi", we are seeing more details emerge and crawl their way to us. We have already heard rumors suggesting that this card is supposedly going to be called the AMD Radeon RX 6900 and that it is going to be AMD's top offering. Using a 256-bit bus with 16 GB of GDDR6 memory, the GPU will not use any type of HBM memory, which has historically been rather pricey. Instead, it looks like AMD will compensate for the smaller bus with a new technology it has developed. Thanks to new findings by @momomo_us on the Justia Trademarks website, we have information about the alleged "Infinity Cache" technology the new GPU uses.

VideoCardz reports that the internal name for this technology is not Infinity Cache; however, it seems that AMD could have changed it recently. What does it do exactly, you might wonder? Well, it is a bit of a mystery for now. It could be a new cache technology that allows L1 cache sharing across the GPU cores, or some connection between the caches found across the whole GPU. This information should be taken with a grain of salt, as we are yet to see what the technology does and how it works when AMD announces its new GPU on October 28th.

Rambus Advances HBM2E Performance to 4.0 Gbps for AI/ML Training Applications

Rambus Inc. (NASDAQ: RMBS), a premier silicon IP and chip provider making data faster and safer, today announced it has achieved a record 4 Gbps performance with the Rambus HBM2E memory interface solution consisting of a fully-integrated PHY and controller. Paired with the industry's fastest HBM2E DRAM from SK hynix operating at 3.6 Gbps, the solution can deliver 460 GB/s of bandwidth from a single HBM2E device. This performance meets the terabyte-scale bandwidth needs of accelerators targeting the most demanding AI/ML training and high-performance computing (HPC) applications.
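
The 460 GB/s figure follows from the per-pin rate and the standard 1,024-bit HBM2E device interface; the same arithmetic also shows the headroom the 4 Gbps Rambus interface leaves for future DRAM:

```python
# Single-device HBM2E bandwidth: per-pin rate (Gbps) x 1,024-bit bus / 8.
print(f"{3.6 * 1024 / 8:.1f} GB/s")   # 460.8 GB/s at SK hynix's 3.6 Gbps DRAM
print(f"{4.0 * 1024 / 8:.1f} GB/s")   # 512.0 GB/s if DRAM reaches the 4 Gbps PHY
```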

"With this achievement by Rambus, designers of AI and HPC systems can now implement systems using the world's fastest HBM2E DRAM running at 3.6 Gbps from SK hynix," said Uksong Kang, vice president of product planning at SK hynix. "In July, we announced full-scale mass-production of HBM2E for state-of-the-art computing applications demanding the highest bandwidth available."

NVIDIA Tesla A100 GPU Pictured

Thanks to the sources of VideoCardz, we now have the first picture of the next-generation NVIDIA Tesla A100 graphics card. Designed for compute-oriented applications, the Tesla A100 is a socketed GPU designed for NVIDIA's proprietary SXM socket. In a post a few days ago, we suspected that you might be able to fit the Tesla A100 GPU in the socket of the previous Volta V100 GPUs, as it is a similar SXM socket. However, the mounting holes have been rearranged, and this one requires a new socket/motherboard. The Tesla A100 GPU is based on the GA100 die, whose specifications we don't yet know. From the picture, we can only see that there is one very big die attached to six HBM modules, most likely HBM2E. Beyond that, everything else is unknown. More details are expected to be announced today at the GTC 2020 digital keynote.

Fujitsu Completes Delivery of Fugaku Supercomputer

Fujitsu has today officially completed the delivery of the Fugaku supercomputer to Japan's Riken scientific research institute. This is a big accomplishment, as the current COVID-19 pandemic has delayed many happenings in the industry. However, Fujitsu managed to work around that and deliver the supercomputer on time. The last of the 400 racks needed for the Fugaku supercomputer was delivered today, May 13th, as originally planned. The supercomputer is supposed to be fully operational starting in fiscal year 2021, with installation and setup to be completed before then.

As a reminder, Fugaku is an Arm-based supercomputer consisting of around 150 thousand A64FX CPUs. These are custom processors made by Fujitsu based on the Arm v8.2 ISA, and they feature 48 cores built on TSMC's 7 nm node and running above 2 GHz. Packing 8.786 billion transistors, this monster chip uses HBM2 memory instead of a regular DDR memory interface. Recently, a prototype of the Fugaku supercomputer was submitted to the Top500 supercomputer list and came out on top as the most energy-efficient of all, meaning that it will be as energy-efficient as it is fast. Speculation is that it will offer around 400 PetaFLOPS of general compute power for double-precision workloads; for specific artificial intelligence applications, it should achieve the ExaFLOP performance target.
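
That ~400 PFLOPS figure is consistent with simple peak-FLOPS arithmetic. A sketch, assuming each A64FX core retires 32 double-precision FLOPs per cycle (two 512-bit SVE pipes, eight DP lanes each, with fused multiply-add), per Fujitsu's published A64FX details; the CPU count and clock are the round figures from the article:

```python
# Peak double-precision estimate for Fugaku. Assumes 32 DP FLOPs per core per
# cycle (2 x 512-bit SVE pipes x 8 DP lanes x 2 for FMA); CPU count and clock
# are the round numbers quoted above, so this is an estimate, not a spec.
cpus = 150_000
cores_per_cpu = 48
clock_hz = 2.0e9
dp_flops_per_core_cycle = 32

peak_pflops = cpus * cores_per_cpu * clock_hz * dp_flops_per_core_cycle / 1e15
print(f"~{peak_pflops:.0f} PFLOPS peak DP")   # ~461 PFLOPS, in line with ~400+
```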

Micron to Launch HBM2 Memory This Year

Micron Technology, in its latest earnings report, announced that it will start shipping High-Bandwidth Memory 2 (HBM2) DRAM. Used in high-performance graphics cards, server processors, and all kinds of compute devices, HBM2 memory is an in-demand and relatively expensive solution; when Micron enters the manufacturing market, prices and the market should adjust to the new player. Previously, only SK hynix and Samsung manufactured HBM2 DRAM, but Micron will join them, and they will again form a "big-three" trio that dominates the memory market.

Up until now, Micron had laid all its hopes on its proprietary Hybrid Memory Cube (HMC) DRAM type, which didn't gain much traction with customers and never really took off. Only a few rare products used it, such as the Fujitsu SPARC64 XIfx CPU found in the Fujitsu PRIMEHPC FX100 supercomputer introduced in 2015. Micron announced the suspension of HMC work in 2018 and decided to devote its efforts to GDDR6 and HBM development. As a result, we are seeing the company launch HBM2 DRAM products sometime this year.

Rambus Designs HBM2E Controller and PHY

Rambus, a maker of various interface IP solutions, today announced the latest addition to its high-speed memory interface IP product portfolio in the form of a High Bandwidth Memory 2E (HBM2E) controller and physical layer (PHY) IP solution. The two IPs enable customers to completely integrate HBM2E memory into their products, given that Rambus provides a complete solution for controlling and interfacing the memory. The design that Rambus offers can support 12-high DRAM stacks of up to 24 Gb devices, making for up to 36 GB of memory per 3D stack. A single 3D stack is capable of running at 3.2 Gbps over a 1,024-bit wide interface, delivering 410 GB/s of bandwidth per stack.
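
Both headline numbers in that paragraph fall out of the same stack arithmetic; a quick check:

```python
# Rambus HBM2E solution: per-stack capacity and bandwidth from the quoted specs.
dies, die_gbit = 12, 24                  # 12-high stack of 24 Gb devices
capacity_gb = dies * die_gbit / 8        # 288 Gb -> 36 GB
bandwidth_gbs = 3.2 * 1024 / 8           # 3.2 Gbps per pin x 1,024-bit bus

print(f"{capacity_gb:.0f} GB per stack")       # 36 GB
print(f"{bandwidth_gbs:.1f} GB/s per stack")   # 409.6 GB/s (~410 GB/s)
```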

The HBM2E controller core is DFI 3.1 compatible and supports logic interfaces like AXI, OCP, or a custom one, so the customer can choose how to integrate the core into their design. With a purchase of the HBM2E IP, Rambus provides source code written in a hardware description language (HDL) and a GDSII file containing the layout of the interface.