
TSMC Reserves 70% of 2025 CoWoS-L Capacity for NVIDIA

Nomad76

News Editor
Staff member
Rumors previously suggested that NVIDIA might scale back its CoWoS orders from TSMC. However, according to a report from Economic Daily News, orders for TSMC's advanced packaging have instead seen a surge. NVIDIA's Blackwell architecture GPUs are in strong demand, leading the company to secure over 70% of TSMC's CoWoS-L advanced packaging capacity for 2025. Shipment volumes are projected to rise by more than 20% each quarter, with total annual shipments expected to surpass 2 million units.
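As a quick sanity check on those figures, here is a minimal Python sketch; the Q1 starting volume below is back-solved from the reported growth rate and annual total, not a number from the report:

```python
# Back-of-the-envelope check of the reported figures: quarterly shipments
# growing >20% quarter-over-quarter and an annual total above 2 million units.
# The Q1 baseline is an assumption solved from those two claims, not a
# reported number.

GROWTH = 1.20            # >20% quarter-over-quarter growth (lower bound)
ANNUAL_TARGET = 2_000_000

# Sum of a 4-quarter geometric series: q1 * (1 + g + g^2 + g^3)
series = sum(GROWTH ** q for q in range(4))
q1 = ANNUAL_TARGET / series   # implied Q1 volume, roughly 373k units

quarters = [q1 * GROWTH ** q for q in range(4)]
for i, units in enumerate(quarters, start=1):
    print(f"Q{i}: {units:,.0f} units")
print(f"Total: {sum(quarters):,.0f} units")
```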

Meanwhile, following the U.S. announcement of the Stargate project, which is anticipated to drive new AI server demand, NVIDIA is reportedly considering placing additional orders with TSMC. During TSMC's earnings call in January, Chairman C.C. Wei stated that the company is continuously expanding its advanced packaging capacity to keep pace with customer demand. According to reports, advanced packaging accounted for roughly 8% of TSMC's total revenue in 2024 and is projected to exceed 10% in 2025.



NVIDIA's Blackwell architecture chips (B200/B300) continue to be manufactured on TSMC's 4 nm process. The company has adopted CoWoS-L advanced packaging, which combines a redistribution layer (RDL) interposer with local silicon interconnect (LSI) bridges. CoWoS-L allows a larger package area than a single silicon interposer, making room for more high-bandwidth memory (HBM) stacks alongside the GPU dies. Compared to the previous CoWoS-S and CoWoS-R technologies, CoWoS-L offers superior performance, higher yield, and better cost efficiency.

Economic Daily News also reports that TSMC intends to expand its CoWoS manufacturing footprint by adding eight facilities in the near future. Two of these will be located at Chiayi Science Park Phase 1, while another two will come from the acquisition of the Innolux plant (AP8). However, two more facilities initially planned for Chiayi Science Park Phase 2 will have to wait at least until 2026, as the land will not be available. As a result, TSMC has shifted its focus to prioritizing the construction of two CoWoS sites at the Southern Taiwan Science Park (STSP) Phase III. The final two locations remain under evaluation.

View at TechPowerUp Main Site | Source
 
Hah, this news arrives just as Nvidia announces that datacenter and AI demand has been far lower than anticipated.

Maybe us poor consumer peasants can have some GPUs again next year?
 
Maybe us poor consumer peasants can have some GPUs again next year?
Well, as the classic Aerosmith song says:

"Dream on, dream on, dream on... Dream until your dreams come through...

Sing with me, sing for the year....Sing for the laughter, sing for the tear"
 
Maybe us poor consumer peasants can have some GPUs again next year?
Or maybe return the favor and don't give Ngreedia any of your money.
 
Maybe us poor consumer peasants can have some GPUs again next year?
Okay, okay, just gimme a sec here, I need to contact my internal sources, they might have an answer.
 
Or maybe return the favor and don't give Ngreedia any of your money.
They won't care; gamers are such a small percentage of their market that they can afford to focus solely on AI.
 
Hah, this news arrives just as Nvidia announces that datacenter and AI demand has been far lower than anticipated.

Maybe us poor consumer peasants can have some GPUs again next year?
Eeh... No. The trend, however, suggests that iGPUs will dominate the low-end and mid-range market within the next five years. This shift could drive more reasonable pricing, as AMD, Intel, Nvidia, or any other company selling a CPU-GPU combo in a single design won’t need to share profits with AIBs and third parties, reducing costs. The performance of DDR6/LPDDR6 will also be a crucial factor in this equation, and will determine the extent to which iGPUs encroach on the dGPU market.
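
To put rough numbers on the bandwidth side of that argument, here's a minimal Python sketch using the standard peak-bandwidth formula (data rate × bus width / 8); all of the data rates and bus widths below are illustrative assumptions, not announced LPDDR6 or product specs:

```python
# Rough bandwidth comparison illustrating why system-memory speed decides
# how far iGPUs can push into dGPU territory.
# All figures below are illustrative assumptions, not announced specs.

def bandwidth_gbs(data_rate_mtps: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s = data rate (MT/s) x bus width (bits) / 8 / 1000."""
    return data_rate_mtps * bus_width_bits / 8 / 1000

configs = {
    "LPDDR5X iGPU (128-bit, 8533 MT/s)":               (8533, 128),
    "Hypothetical LPDDR6 iGPU (192-bit, 10667 MT/s)":  (10667, 192),
    "Mid-range dGPU, GDDR6 (128-bit, 17000 MT/s)":     (17000, 128),
}

for name, (rate, width) in configs.items():
    print(f"{name}: {bandwidth_gbs(rate, width):.0f} GB/s")
```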


 
They won't care; gamers are such a small percentage of their market that they can afford to focus solely on AI.
Well, we need to start somewhere, right?
 
Nvidia needs a second fab source, one that can produce chips without defects across its entire line of graphics cards. I can't believe TSMC's process nodes are so crap that every version of Nvidia's graphics cards has defects bad enough to be randomly disabled for end users. Have they turned into InTeL?!?!?! /s
 
Or maybe return the favor and dont give Ngreedia any of your money.
When AMD and Intel support CUDA, I'll be sure to buy the alternatives! At home at least I always ensure I'm using both AMD and Nvidia simultaneously; the current balance is two active Nvidia GPUs and two active AMD GPUs.

Back in 2017, all 800+ workstation GPUs I managed were Radeons. Then software that needed CUDA became mainstream, and now some things support only CUDA. CFD, solar and wind simulations, pedestrian and road traffic simulations, and of course all the major local AI image generation software rely on CUDA unless you want your results processed at a tiny 1-2% fraction of the speed. Suddenly your "I'll have that ready in an hour" becomes "I'll reserve some time on the farm and let it run iterations until next week."

ROCm is a thing, but it's not compatible with CUDA (yet), which means Nvidia doesn't have any competition (yet).
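
For anyone curious what that lock-in looks like in practice, here's a minimal Python sketch of the device-selection pattern behind the "hour becomes a week" scenario, assuming PyTorch is installed; the matrix multiply is a hypothetical stand-in for the CFD and image-generation workloads mentioned above:

```python
# Minimal sketch of the GPU-gated code path many tools use: if no supported
# GPU runtime and device are found, fall back to the CPU, which is the
# slow-farm scenario described above. Assumes PyTorch is installed; the
# workload below is a hypothetical placeholder.
import torch

def pick_device() -> torch.device:
    # Reports whether a usable GPU runtime and device are present.
    if torch.cuda.is_available():
        return torch.device("cuda")
    # No GPU: run on the CPU at a small fraction of the speed.
    return torch.device("cpu")

def run_workload(size: int = 4096) -> float:
    device = pick_device()
    # Stand-in workload: a large matrix multiply on the chosen device.
    a = torch.randn(size, size, device=device)
    b = torch.randn(size, size, device=device)
    c = a @ b
    return float(c.sum())

if __name__ == "__main__":
    print(f"Running on: {pick_device()}")
    print(f"Checksum: {run_workload():.3e}")
```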
 
When AMD and Intel support CUDA, I'll be sure to buy the alternatives! At home at least I always ensure I'm using both AMD and Nvidia simultaneously; the current balance is two active Nvidia GPUs and two active AMD GPUs.

Back in 2017, all 800+ workstation GPUs I managed were Radeons. Then software that needed CUDA became mainstream, and now some things support only CUDA. CFD, solar and wind simulations, pedestrian and road traffic simulations, and of course all the major local AI image generation software rely on CUDA unless you want your results processed at a tiny 1-2% fraction of the speed. Suddenly your "I'll have that ready in an hour" becomes "I'll reserve some time on the farm and let it run iterations until next week."

ROCm is a thing, but it's not compatible with CUDA (yet), which means Nvidia doesn't have any competition (yet).
I'm glad that I'm not at their mercy like that.
 
Next year will be our year, boys. GPUs for all!

If your wallet will letcha :)
 
"NVIDIA's Blackwell architecture GPUs are in strong demand" - demand that's headed straight to a storage depot to make money off the upcoming tariffs, anyway.
 