Thursday, June 10th 2021

SK Hynix Details its Upcoming HBM3 Memory: 665 GB/s per Stack

SK Hynix is at the forefront of developing the next generation of stacked high-bandwidth memory, the HBM3 standard. Succeeding the current HBM2e standard, HBM3 will power next-generation HPC and AI processors in high-density multi-chip modules. A Tom's Hardware report citing information from SK Hynix reveals two key details about the new standard. For starters, it could offer per-pin data rates of 5.2 Gbps, a 44% increase over the 3.6 Gbps that HBM2e caps out at. Over a 1024-bit stack interface, this results in a per-stack bandwidth of 665.6 GB/s, compared to 460.8 GB/s for HBM2e. A processor with four such stacks (over a 4096-bit wide bus) would hence enjoy 2.66 TB/s of memory bandwidth. It's likely that HBM3 stacks from SK Hynix could implement the DBI Ultra 2.5D/3D hybrid bonding interconnect technology licensed from Xperi Corp.
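The figures above follow from simple arithmetic: per-pin data rate times interface width, divided by eight bits per byte. A minimal sketch, assuming the standard 1024-bit interface per HBM stack (the function name and structure are illustrative, not from the report):

```python
def stack_bandwidth_gbs(pin_rate_gbps: float, bus_width_bits: int = 1024) -> float:
    """Per-stack bandwidth in GB/s: pin rate (Gb/s) x bus width, over 8 bits/byte."""
    return pin_rate_gbps * bus_width_bits / 8

hbm3 = stack_bandwidth_gbs(5.2)    # 665.6 GB/s per stack
hbm2e = stack_bandwidth_gbs(3.6)   # 460.8 GB/s per stack

print(f"HBM3 per stack:  {hbm3:.1f} GB/s")
print(f"HBM2e per stack: {hbm2e:.1f} GB/s")
print(f"Uplift: {hbm3 / hbm2e - 1:.0%}")                           # ~44%
print(f"Four HBM3 stacks (4096-bit): {4 * hbm3 / 1000:.2f} TB/s")  # ~2.66 TB/s
```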
Source: Tom's Hardware

8 Comments on SK Hynix Details its Upcoming HBM3 Memory: 665 GB/s per Stack

#1
R0H1T
Great, now just use a single stack for a Zen 4/5-based APU, or even CPUs, after which Intel will probably throw in the towel :nutkick:
#3
delshay
This is what I would love in a Nano GFX card. ...Let's see if GDDR6X can respond to those speeds.
#4
TumbleGeorge
delshayThis is what I would love in a Nano GFX card. ...Let's see if GDDR6X can respond to those speeds.
GDDR6X is not a person. It can't respond.
#5
AnarchoPrimitiv
R0H1TGreat, now just use a single stack for a Zen 4/5-based APU, or even CPUs, after which Intel will probably throw in the towel :nutkick:
You know, I've ALWAYS wanted a powerhouse APU, something akin to the CU count of the Xbox Series X SoC, but with 8 GB of HBM2e integrated into the SoC. I think these will eventually come, but the major hurdle is a consumer one. What I mean is that the vast majority of buyers want to be able to upgrade their CPU, GPU, and system memory independently, so an SoC that ties you to a locked combo probably wouldn't be widely popular on the DIY market. They would make epic SFF builds, though.

Personally, I wouldn't care if such APUs required a package the size of Threadripper, but I realize I don't represent the majority.
#6
Wirko
Those 240,000 wafers weren't defective after all, they were radioactivated and destined to become HBM3, but were then sent out to customers in confusion due to shortages and Covid.
#7
Manoa
Pointless. Until Radeon/GeForce build cards with this kind of speed, it will only be for compute cards at $5,000 and up. It's sad, because this speed is needed for ray tracing performance :x
#8
Xajel
R0H1TGreat, now just use a single stack for a Zen 4/5-based APU, or even CPUs, after which Intel will probably throw in the towel :nutkick:
The main idea of an APU is a low-cost CPU+GPU solution. HBM's main issue is that using it requires an expensive silicon interposer.

So we won't see an APU with HBM for consumers anytime soon. If the server/HPC market demanded such a product enough to justify its cost, they would build it, but there seems to be too little demand, because we still don't have such a product even for the HPC market.

Maybe with current advances in packaging technology this will become possible, for example if the HBM can be 3D-stacked over the I/O die or over the cache, the way AMD's 3D V-Cache works. The main issue with such an implementation is Z-height: HBM is already a 3D stack of DRAM dies, so it's "tall", if we can say that, and stacking it over the I/O die or cache would leave the main core die at a lower Z-height.
Intel faced this issue with its strange Intel+Radeon GPU package, which also used HBM (connected via EMIB). They did some engineering work and succeeded, but the concept was costly and never got popular enough, so they dropped it.
Copyright © 2004-2021 www.techpowerup.com. All rights reserved.
All trademarks used are properties of their respective owners.