Tuesday, April 16th 2024

Demand for NVIDIA's Blackwell Platform Expected to Boost TSMC's CoWoS Total Capacity by Over 150% in 2024

NVIDIA's next-generation Blackwell platform, which includes the B-series GPUs and, in models such as the GB200, integrates NVIDIA's own Arm-based Grace CPU, represents a significant step for the company's high-end lineup. TrendForce points out that the GB200 and its predecessor, the GH200, are both combined CPU+GPU solutions, pairing the NVIDIA Grace CPU with the H200 GPU. However, the GH200 accounted for only approximately 5% of NVIDIA's high-end GPU shipments. The supply chain has far higher expectations for the GB200: projections suggest its shipments could reach millions of units in 2025, potentially making up 40 to 50% of NVIDIA's high-end GPU market.

Although NVIDIA plans to launch products such as the GB200 and B100 in the second half of this year, upstream wafer-level packaging will need to adopt the more complex, high-precision CoWoS-L technology, making validation and testing time-consuming. Additional time will also be needed to optimize the B-series for AI server systems in areas such as network communication and cooling performance. As a result, the GB200 and B100 are not expected to see significant production volumes until 4Q24 or 1Q25.
The inclusion of the GB200, B100, and B200 in NVIDIA's B-series will boost demand for CoWoS capacity, leading TSMC to raise its estimate of the total CoWoS capacity required for 2024. Monthly capacity is expected to reach nearly 40,000 wafers by the end of the year, a staggering 150% year-over-year increase. By 2025, planned total capacity could nearly double again, with NVIDIA's demand expected to account for more than half of it.

Other suppliers, such as Amkor and Intel, currently focus on CoWoS-S technology and primarily target NVIDIA's H-series. With technological breakthroughs expected to be challenging in the short term, their expansion plans remain conservative; only securing additional orders beyond NVIDIA, such as self-developed ASICs from CSPs, would likely prompt a more aggressive expansion strategy.
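The capacity figures above can be cross-checked with simple arithmetic. Note that the end-2023 baseline below is inferred from the stated 150% growth rate rather than taken from the source:

```python
# Back-of-envelope check on the CoWoS capacity figures cited above.
# Stated: ~40K wafers/month by end of 2024, a ~150% YoY increase,
# with planned 2025 capacity nearly doubling again and NVIDIA
# taking more than half of it.
end_2024 = 40_000                                # wafers/month, stated
growth_yoy = 1.50                                # 150% increase
implied_end_2023 = end_2024 / (1 + growth_yoy)   # inferred baseline, ~16K
end_2025 = end_2024 * 2                          # "nearly double" -> ~80K
nvidia_floor_2025 = end_2025 * 0.5               # "more than half" -> >40K

print(round(implied_end_2023), end_2025, round(nvidia_floor_2025))
# -> 16000 80000 40000
```

In other words, the stated growth implies TSMC exits 2023 at roughly 16K wafers per month, and NVIDIA alone would consume more CoWoS capacity in 2025 than the entire 2024 build-out.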

NVIDIA and AMD's AI development set to propel HBM3e into mainstream market dominance by the second half of the year
TrendForce has identified three major HBM trends for NVIDIA and AMD's primary GPU products and their planned specifications beyond 2024. Firstly, a transition from HBM3 to HBM3e is anticipated. NVIDIA is expected to start scaling up shipments of the HBM3e-equipped H200 in the second half of 2024, replacing the H100 as the mainstream; other models such as the GB200 and B100 will follow with HBM3e. Meanwhile, AMD plans to launch the new MI350 by the end of the year and may introduce interim models such as the MI32x to compete with the H200, with both utilizing HBM3e.

Secondly, HBM capacity per GPU will continue to expand to boost the overall computational efficiency and system bandwidth of AI servers. The market mainstream today is the NVIDIA H100 with 80 GB of HBM; by the end of 2024, per-GPU capacities are expected to rise to between 192 GB and 288 GB. AMD's new GPUs, starting from the MI300A's 128 GB, will likewise increase, reaching up to 288 GB.

Thirdly, the lineup of GPUs equipped with HBM3e will evolve from 8Hi to 12Hi stack configurations. NVIDIA's B100 and GB200 currently feature 8Hi HBM3e with 192 GB of capacity, and by 2025 the B200 is planned to be equipped with 12Hi HBM3e, achieving 288 GB. AMD's upcoming MI350, to be launched by the end of this year, and the MI375 series, expected in 2025, are both anticipated to come with 12Hi HBM3e, also reaching 288 GB.
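The stack arithmetic behind these capacity points can be sketched as follows. The 8-stacks-per-GPU and 3 GB (24 Gb) per-die figures are assumptions consistent with published HBM3e specifications, not stated in the source:

```python
# Per-GPU HBM capacity = stacks per GPU x dies per stack x GB per die.
# Assumes 8 HBM3e stacks per GPU and 3 GB (24 Gb) DRAM dies.
GB_PER_DIE = 3
STACKS_PER_GPU = 8

def hbm_capacity_gb(stack_height: int) -> int:
    """Total HBM capacity for a GPU with the given stack height (8Hi, 12Hi)."""
    return STACKS_PER_GPU * stack_height * GB_PER_DIE

print(hbm_capacity_gb(8))   # 8Hi  -> 192 GB, matching the B100/GB200 figure
print(hbm_capacity_gb(12))  # 12Hi -> 288 GB, matching the B200/MI350 figure
```

This shows why the jump from 8Hi to 12Hi maps directly onto the 192 GB and 288 GB capacity points cited above: it adds four dies per stack without changing the die density or stack count.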
Source: TrendForce

6 Comments on Demand for NVIDIA's Blackwell Platform Expected to Boost TSMC's CoWoS Total Capacity by Over 150% in 2024

#1
Daven
MI350 vs B200…Round One…Fight!
Posted on Reply
#2
Wirko
I'm wondering why it's called Chip-on-Wafer-on-Substrate, do they really stack individual dies on an entire uncut wafer at some stage in the process?
Posted on Reply
#3
AnotherReader
I really think Intel missed a trick here by not courting others aggressively for EMIB. They could have taken advantage of the shortage of TSMC's COWOS and its technical limitations.
Posted on Reply
#4
TheLostSwede
News Editor
Wirko: "I'm wondering why it's called Chip-on-Wafer-on-Substrate, do they really stack individual dies on an entire uncut wafer at some stage in the process?"
It's supposedly done like this:

en.wikichip.org/wiki/tsmc/cowos
Posted on Reply
#5
mechtech

Looks a bit creepy. But there is some black leather ;)
Posted on Reply
#6
stimpy88
Oh my god nGreedia are going to make us pay for the consumer version. I detect lots of deliberate stock issues for the first 9 months to push the prices even higher.
Posted on Reply