Thursday, February 22nd 2024

SK Hynix VP Reveals HBM Production Volumes for 2024 are Sold Out

SK Hynix Vice President Kitae Kim presides over the company's HBM Sales & Marketing (S&M) department. An official leadership blog profile reveals that the executive played a key role in making the South Korean supplier's high bandwidth memory (HBM) product line "a superstar of the semiconductor memory industry in 2023." Growing demand for powerful AI processors has placed SK Hynix in a more comfortable position, following a recessionary spell that included a major sales downturn in 2022. NVIDIA is the market leader in AI processing chips, and many of its flagship enterprise designs are fitted with cutting-edge SK Hynix memory modules. Kim noted that his firm has many notable international clients: "HBM is a revolutionary product which has challenged the notion that semiconductor memory is only one part of an overall system...in particular, SK Hynix's HBM has outstanding competitiveness. Our advanced technology is highly sought after by global tech companies."

The VP outlined how artificial intelligence industries are fuelling innovations: "With the diversification and advancement of generative AI services, demand for HBM, an AI memory solution, has also exploded. HBM, with its high-performance and high-capacity characteristics, is a monumental product that shakes the conventional wisdom that memory semiconductors are only a part of the overall system. In particular, SK Hynix HBM's competitiveness is outstanding." Business is booming, so much so that nothing can be added to this year's HBM order books: "Proactively securing customer purchase volumes and negotiating more favorable conditions for our high-quality products are the basics of semiconductor sales operations. With excellent products in hand, it's a matter of speed. Our planned production volume of HBM this year has already sold out. Although 2024 has just begun, we've already started preparing for 2025 to stay ahead of the market."
Sources: SK Hynix News #1, Wccftech, SK Hynix News #2

17 Comments on SK Hynix VP Reveals HBM Production Volumes for 2024 are Sold Out

#1
Space Lynx
Astronaut
Their prices were not high enough then, so they failed in that sense. Could have prob still sold out even at higher prices. Noobs.
#2
Panther_Seraphin
It'll be AMD/NVIDIA buying it out for the AI boom that is ongoing in datacenters.
#3
P4-630
Damn, I just wanted to order me some....
#4
mechtech
Great news

Now price 4 TB Platinum P41 / P44 Pro NVMe drives at $300 CAD or less and watch them sell out too.
#5
LabRat 891
Let's take this in for a moment, shall we?

HBM has to be bonded to a die interposer. Meaning, there is no salvaging this highly performant memory.

Most of the "AI accelerators" are not in a PCIe add-in card form factor, and most are no longer even GPU/pin-compatible with anything else in PCIe.
Meaning, as soon as these AI accelerators are superseded by faster (and more profitable) designs, they become non-reusable e-waste.

Unlike older Radeon Instinct models and the PCIe-slottable NVIDIA Tesla cards, almost all of this HBM will soon become nothing more than glass waste.

:oops: save some HBM for us, gaiz
#6
Wirko
LabRat 891: HBM has to be bonded to a die interposer. Meaning, there is no salvaging this highly performant memory.
Suppose you're able to get salvaged HBM stacks. What kind of system would you reuse them in?
#7
Chaitanya
mechtech: Great news

Now price 4 TB Platinum P41 / P44 Pro NVMe drives at $300 CAD or less and watch them sell out too.
The Platinum P41 doesn't come in a 4 TB capacity, and the P44 Pro is a Solidigm drive, not an SK Hynix one.
#8
mechtech
Chaitanya: The Platinum P41 doesn't come in a 4 TB capacity, and the P44 Pro is a Solidigm drive, not an SK Hynix one.
www.businesswire.com/news/home/20211229005325/en/Introducing-Solidigm-%E2%80%93-A-Market-Leader-in-NAND-Flash-Technology

"The new company’s origins stem from an October 2020 agreement by SK hynixto acquire Intel's NAND and SSD business"

www.techpowerup.com/review/solidigm-p44-pro-2-tb-1-tb/
"The Solidigm P44 Pro is based on the Hynix ACNS075 controller, paired with Hynix 176-layer 3D TLC NAND flash. This means that the P44 Pro has the same hardware configuration as the Hynix Platinum P41 SSD—the fastest SSD we ever tested."

"Platinum P41 doesnt come in 4TB capacity"
I am aware - that's why I said 4TB - they should make some

Pretty sad that even newer drives like the WD SN580 don't have a 4 TB version...
#9
LabRat 891
Chaitanya: The Platinum P41 doesn't come in a 4 TB capacity, and the P44 Pro is a Solidigm drive, not an SK Hynix one.
As mechtech pointed out, Solidigm is SK Hynix; SK Hynix bought out Intel's remaining NAND business.
Wirko: Suppose you're able to get salvaged HBM stacks. What kind of system would you reuse them in?
Seeing what Chinese modders and companies have done with retired GPUs... (physically not possible, but) upgrading old HBM GPUs to higher densities comes to mind.

If there were an FPGA that could be used as an HBM controller, the assembly could be used as a plug-in fast cache.
I'd imagine there are smaller datacenter companies and high-performance SOHO users around the world that would find a use for low-latency, high-bandwidth caching devices.

A plug-in 'flash' card with Optane DIMMs and HBM stacks, connected with an FPGA, seems like it would have wide non-gaming professional/experimental use.
#10
Bwaze
LabRat 891: Let's take this in for a moment, shall we?

HBM has to be bonded to a die interposer. Meaning, there is no salvaging this highly performant memory.

Most of the "AI accelerators" are not in a PCIe add-in card form factor, and most are no longer even GPU/pin-compatible with anything else in PCIe.
Meaning, as soon as these AI accelerators are superseded by faster (and more profitable) designs, they become non-reusable e-waste.

Unlike older Radeon Instinct models and the PCIe-slottable NVIDIA Tesla cards, almost all of this HBM will soon become nothing more than glass waste.

:oops: save some HBM for us, gaiz
Cryptomining rigs were far more reusable, but many miners opted not to sell their rigs, even the ones mining on gaming cards - they planned to sell their RTX 30x0 cards at the next crypto high, when even the prices of the old cards would skyrocket again. Even if it takes 5 years...
#12
WhitetailAni
LabRat 891: A plug-in 'flash' card with Optane DIMMs and HBM stacks, connected with an FPGA, seems like it would have wide non-gaming professional/experimental use.
This sounds interesting. Kind of like the old Gigabyte i-RAM that used DDR1?
Maybe it could finally saturate a PCIe 5.0 x16 link.
#13
Flyordie
I'd hoped HBM would make a return in the enthusiast consumer GPU market. Seeing as we now have chiplets, surely it wouldn't be so hard to take a workstation GPU with more compute-oriented cores and swap them out for more graphics-oriented setups, using economies of scale to get the pricing reasonable by mass-producing the board and components, just like they did with the V64.

I'm still using a Vega 64 XTX... it works as well as the day I bought it.
#14
Gooigi's Ex
Would be amazing if AMD used HBM for their APUs, as HBM is more efficient than GDDR and I think the surface area is smaller as well.

Would be neat to have an 8C/16T CPU with an RX 7600 as the GPU on top of HBM.
#15
Wirko
LabRat 891: Seeing what Chinese modders and companies have done with retired GPUs... (physically not possible, but) upgrading old HBM GPUs to higher densities comes to mind.

If there were an FPGA that could be used as an HBM controller, the assembly could be used as a plug-in fast cache.
I'd imagine there are smaller datacenter companies and high-performance SOHO users around the world that would find a use for low-latency, high-bandwidth caching devices.

A plug-in 'flash' card with Optane DIMMs and HBM stacks, connected with an FPGA, seems like it would have wide non-gaming professional/experimental use.
The best use of HBM is if it stays in the original product, the accelerator module. The Chinese modders may as well design PCIe adapter cards that take entire SXM or OCP modules, which shouldn't require anything other than power and a PCIe bus to operate (they normally connect to Epycs and Xeons). That would be fine for those who need a single module; those who need more will be looking for used rack-mount server blades with 4 or 8 of those anyway.

Also, as you said, "HBM has to be bonded to a die interposer" - yes, but that's still soldering, using an alloy that melts at a temperature low enough not to destroy the transistors immediately (the grey substance here). The problem is, there are tens of thousands of contact points. Connecting them requires *far* greater precision than soldering packaged chips with solder bumps. The dies in the TSV stack are also connected to each other with solder, which should not melt during the surgery.
#16
stimpy88
Sold out in February? Yeah, I think somebody needs to attend Business 101, then take the add-on course, Supply & Demand 101.
#17
Panther_Seraphin
Gooigi's Ex: Would be amazing if AMD used HBM for their APUs, as HBM is more efficient than GDDR and I think the surface area is smaller as well.

Would be neat to have an 8C/16T CPU with an RX 7600 as the GPU on top of HBM.
HBM costs a LOT more than GDDR due to the requirement of an active interposer for HBM.

It's why it didn't stay in GPUs after AMD tested it out, unfortunately. What I would love to see is a next-gen AMD GPU going for the chiplet method as currently, but with an HBM module stacked over the MCD. That way it lowers/removes the active interposer cost.