News Posts matching #TSMC


ASML Unveils Plans for Next-Generation "Hyper-NA" Extreme Ultraviolet Lithography

ASML, the world's sole provider of extreme ultraviolet (EUV) lithography systems essential for manufacturing the most advanced chips, has revealed its roadmap for pushing semiconductor scaling even further. In a recent presentation, former ASML president Martin van den Brink announced the company's plans for a new "Hyper-NA" EUV technology that would succeed the High-NA EUV systems, which are just beginning to deploy. The Hyper-NA tools, still in early research stages, would increase the numerical aperture to 0.75 from High-NA's 0.55, enabling chips with transistor densities beyond the projected limits of High-NA in the early 2030s. This higher numerical aperture should reduce reliance on multi-patterning techniques that add complexity and cost.

Hyper-NA brings commercialization challenges of its own. Key obstacles include light polarization effects that degrade imaging contrast, requiring polarization filters that reduce light throughput. Resist materials may also need to become thinner to maintain resolution. While leading EUV chipmakers like TSMC can likely extend scaling for several more nodes using multi-patterning with existing 0.33 NA EUV tools, Intel has adopted 0.55 High-NA to avoid these complexities. But Hyper-NA will likely become essential across the industry later this decade as High-NA's physical limits are reached. Beyond Hyper-NA, few alternative patterning solutions exist besides expensive multi-beam electron lithography, which lacks the throughput of EUV photolithography. To continue classical scaling, the industry may need to eventually transition to new channel materials with superior electron mobility compared to silicon, requiring novel deposition and etch capabilities.
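
The resolution benefit of a larger numerical aperture follows from the Rayleigh criterion, CD = k1 * λ / NA. Here is a minimal sketch of the relative gain, assuming EUV's 13.5 nm wavelength and an illustrative k1 of 0.3 (the k1 value is an assumption for illustration, not a figure from the presentation):

# Relative resolution gain from higher NA via the Rayleigh criterion (illustrative).
wavelength_nm = 13.5   # EUV wavelength
k1 = 0.3               # assumed process factor, for illustration only

for na in (0.33, 0.55, 0.75):   # standard EUV, High-NA, Hyper-NA
    cd = k1 * wavelength_nm / na
    print(f"NA {na}: ~{cd:.1f} nm half-pitch")
# NA 0.33: ~12.3 nm, NA 0.55: ~7.4 nm, NA 0.75: ~5.4 nm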

Silicon Motion's SM2508 Set to Launch in Q4, Edging Out Phison as Top SSD Controller

Silicon Motion's SM2508 was first revealed in August last year at the Flash Memory Summit 2023, but things went pretty quiet after that. However, the company was demoing the SM2508 up and running at Computex this past week, and it's set to edge out Phison's E26 Max14um in the battle for the fastest NVMe SSD controller. We're not talking about any massive gains here, but the reference drive from Silicon Motion was shown running CrystalDiskMark 8.0.4 at the show, and a rough comparison to a Phison E26 Max14um puts the SM2508 ahead by about 800 MB/s in sequential read performance and 500 MB/s in sequential write performance.

This might not seem like a whole lot, but the SM2508 is built on TSMC's N6 node, which results in a 3.5 Watt peak power consumption, or 7 Watts for the entire SSD at load. A typical Phison E26-based SSD draws in excess of 11 Watts at full load, which is a big difference in a mobile device. This should also lead to lower thermals, and we should finally see PCIe 5.0 drives that don't need massive heatsinks or active cooling. In fact, 7 Watts of power draw is very similar to Phison's E18-based PCIe 4.0 SSDs. Silicon Motion is still fine-tuning the firmware for the SM2508, so performance might yet improve to reach the promised 14 GB/s write performance. Currently, random performance is also looking a bit weak compared to Phison. According to Tom's Hardware, the first drives with the Silicon Motion SM2508 should appear on the market sometime in Q4 this year.

TSMC Considering Raising Prices, NVIDIA's Jensen Fully Supports the Idea

NVIDIA CEO Jensen Huang said on June 5th that TSMC's stock price is too low, and that he agrees with new TSMC chairman C. C. Wei's view of TSMC's value. Jensen pledged to support TSMC charging more for its wafers and for its CoWoS advanced packaging. A TrendForce article says that NVIDIA and TSMC will negotiate next year's chip prices, which could improve TSMC's margins. Jensen also said he isn't too worried about geopolitical tensions, because Taiwan has a strong supply chain; TSMC is doing more than just making chips, it is also handling many supply chain issues.

Last year, many companies were left waiting for TSMC's output, with ever-increasing demand and production issues causing delays. Even though things improved somewhat this year, supply is still tight. TSMC says that even tripling its 3-nanometer output would not be enough, so it needs to expand capacity further. NVIDIA's margins are very high, much higher than those of companies like AMD and even TSMC. If TSMC raises prices for these advanced processes, it won't hurt NVIDIA's profits much, but it could lower margins for other customers such as Apple, AMD, and Qualcomm. It will also have an impact on end-users.

Taiwanese Chipmakers Expand Overseas to Capitalize on Geopolitical Shifts and De-Sinicization Benefits

On June 5th, Vanguard and NXP announced plans to jointly establish VisionPower Semiconductor Manufacturing Company (VSMC) in Singapore to build a 12-inch wafer plant. TrendForce posits that this move reflects the trend of global supply chains shifting "Out of China, Out of Taiwan" (OOC/OOT), with Taiwanese companies accelerating their overseas expansion to improve regional capacity flexibility and competitiveness.

TrendForce noted that the semiconductor supply chain has been diversifying over the past two years to mitigate geopolitical and pandemic-related risks, forming two major segments: China's domestic supply chain and a non-China supply chain. Recent US tariff increases have accelerated this shift, leading to increased orders from American customers.

AMD Wants to Tap Samsung Foundry for 3 nm GAAFET Process

According to a report by KED Global, Korean chipmaking giant Samsung is ramping up its efforts to compete with global giants like TSMC and Intel. The latest partnership on the horizon is AMD's collaboration with Samsung. AMD is planning to utilize Samsung's cutting-edge 3 nm technology for its future chips. More specifically, AMD wants to utilize Samsung's gate-all-around FETs (GAAFETs). During ITF World 2024, AMD CEO Lisa Su noted that the company intends to use 3 nm GAA transistors for its future products. The only company offering GAAFETs on a 3 nm process is Samsung. Hence, this report from KED gains more credibility.

While we don't have any official information, AMD's use of a second foundry as a manufacturing partner would be its first such move in years. This strategic move signifies a shift towards dual-sourcing, aiming to diversify its supply chain and reduce dependency on a single manufacturer, previously TSMC. We still don't know which specific AMD products will use GAAFETs. AMD could use them for CPUs, GPUs, DPUs, FPGAs, and even data center accelerators like the Instinct MI series.

Blackwell Shipments Imminent, Total CoWoS Capacity Expected to Surge by Over 70% in 2025

TrendForce reports that NVIDIA's Hopper H100 began to see a reduction in shortages in 1Q24. The new H200 from the same platform is expected to gradually ramp in Q2, with the Blackwell platform entering the market in Q3 and expanding to data center customers in Q4. However, this year will still primarily focus on the Hopper platform, which includes the H100 and H200 product lines. The Blackwell platform—based on how far supply chain integration has progressed—is expected to start ramping up in Q4, accounting for less than 10% of the total high-end GPU market.

The die size of Blackwell platform chips like the B100 is twice that of the H100. As Blackwell becomes mainstream in 2025, the total capacity of TSMC's CoWoS is projected to grow by 150% in 2024 and by over 70% in 2025, with NVIDIA's demand occupying nearly half of this capacity. For HBM, the NVIDIA GPU platform's evolution sees the H100 primarily using 80 GB of HBM3, while the 2025 B200 will feature 288 GB of HBM3e—a 3-4 fold increase in capacity per chip. The three major manufacturers' expansion plans indicate that HBM production volume will likely double by 2025.
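
As a rough sanity check on those figures, the capacity multiple and the compounded CoWoS growth work out as follows (a minimal sketch; the 2023 baseline of 1.0 is an arbitrary reference point for illustration, not a reported number):

# Back-of-the-envelope check of the figures above.
h100_hbm_gb = 80      # H100: 80 GB of HBM3
b200_hbm_gb = 288     # B200 (2025): 288 GB of HBM3e
print(b200_hbm_gb / h100_hbm_gb)       # 3.6 -> the quoted "3-4 fold" increase per chip

cowos_2023 = 1.0                       # arbitrary baseline, illustration only
cowos_2024 = cowos_2023 * (1 + 1.50)   # +150% growth in 2024
cowos_2025 = cowos_2024 * (1 + 0.70)   # +70% growth in 2025
print(cowos_2024, cowos_2025)          # 2.5, 4.25 -> roughly 4.3x the 2023 level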

China Launches Massive $47.5 Billion "Big Fund" to Boost Domestic Chip Industry

Beijing has doubled down on its push for semiconductor self-sufficiency with the establishment of a new $47.5 billion investment fund to accelerate growth in the domestic chip sector. The fund, officially registered on May 24th under the name "China Integrated Circuit Industry Investment Fund Phase III", represents the largest of three state-backed vehicles aimed at cultivating China's semiconductor capabilities. The announcement comes as tensions over advanced chip technology continue to escalate between the U.S. and China. Over the past couple of years, Washington has steadily ratcheted up controls on semiconductor exports to China over national security concerns about potential military applications. These measures have lent new urgency to China's quest for self-sufficiency in chip design and manufacturing.

With a war chest of 344 billion yuan ($47.5 billion), the "Big Fund" dwarfs the combined capital of the first two semiconductor investment vehicles launched in 2014 and 2019. Officials have outlined a multipronged strategy targeting key bottlenecks, focusing on equipment for chip fabrication plants. The fund has bankrolled major projects such as flash memory maker Yangtze Memory Technologies and leading foundries like SMIC and Huahong. China's homegrown chip industry still needs to catch up to global leaders like Intel, Samsung, and TSMC. However, the immense scale of state-directed capital illustrates Beijing's unwavering commitment to developing a self-reliant supply chain for semiconductors—a technology viewed as indispensable for economic and military competitiveness. News of the "Big Fund" sent Chinese chip stocks surging over 3% on hopes of fresh financing tailwinds.

NVIDIA's Arm-based AI PC Processor Could Leverage Arm Cortex X5 CPU Cores and Blackwell Graphics

Last week, we got confirmation from the highest levels of Dell and NVIDIA that the latter is making a client PC processor for the Windows on Arm (WoA) AI PC ecosystem, which currently has only one player in it, Qualcomm. Michael Dell hinted that this NVIDIA AI PC processor would be ready in 2025. Since then, speculation has been rife about the various IP blocks NVIDIA could use in the development of this chip; the two key areas of debate have been the CPU cores and the process node.

Given that NVIDIA is gunning for a 2025 launch of its AI PC processor, the company could implement reference Arm IP CPU cores, such as the Arm Cortex X5 "Blackhawk," and not venture into developing its own CPU cores on the Arm machine architecture, unlike Apple. Depending on how the market receives its chips, NVIDIA could eventually develop its own cores. Next up, the company could use the most advanced 3 nm-class foundry node available in 2025 for its chip, such as TSMC N3P. Given that even Apple and Qualcomm will build their contemporary notebook chips on this node, it would be a logical choice of node for NVIDIA. Then there's graphics and AI acceleration hardware.

Apple COO Meets with TSMC CEO to Reserve First Batch of 2 nm Allocation

Apple is locked in a fierce competition to stay ahead in the client AI applications race, and needs access to the latest foundry process at TSMC to build its future-generation SoCs on. The company's COO, Jeff Williams, reportedly paid a visit to TSMC CEO C.C. Wei to discuss Apple's allocation of the Taiwanese foundry's 2 nm-class silicon fabrication process, for its next-generation M-series and A-series SoCs powering its future generations of iPhones, iPads, and Macs. Taiwan-based industry observer Economic Daily, which broke this story, says that it isn't just an edge in performance and efficiency that Apple is after, but also leadership in generative AI and client AI applications. The company has reportedly invested over $100 billion in generative AI research and development over the past 5 years.

Apple's latest silicon, the M4 SoC, which debuted with the iPad Pro earlier this month, is built on TSMC's N3E (3 nm-class) node, and it's widely expected that the rest of the M4 line of SoCs for Macs, and the "A18," could be built on the same process, which would cover Apple for the rest of 2024, going into the first half of 2025. TSMC is expected to commence mass-production of chips on its 2 nm node in 2025, which is why Apple is in the TSMC boss's office to seek the first foundry allocation.

AMD to Present "Zen 5" Microarchitecture Deep-dive at Hot Chips 2024

AMD is slated to deliver a "Zen 5" microarchitecture deep-dive at the Hot Chips 2024 conference, on August 25. The company is widely expected to either unveil or announce its next-generation processors based on the architecture, in its 2024 Computex keynote on June 3, so it remains to be seen if the deep-dive follows a product launch, or predates it. Either way, Hot Chips talks tend to be significantly more detailed than the product launch pre-briefs that we get; and so we hope to learn a lot more about the architecture.

A lot rides on "Zen 5" delivering a double-digit percentage IPC increase over its predecessor, while also introducing new microarchitecture-level features and leveraging new foundry processes at TSMC, to deliver processors competitive with Intel's. Unlike Intel, which has implemented hybrid CPU cores across its product stack, AMD continues to make traditional multicore processors, and declines to label even the chips that contain both regular and high-density versions of its "Zen 4" cores as "hybrid."

TSMC Unveils Next-Generation HBM4 Base Dies, Built on 12 nm and 5 nm Nodes

During the European Technology Symposium 2024, TSMC announced its readiness to manufacture next-generation HBM4 base dies using both 12 nm and 5 nm nodes. This significant development is expected to substantially improve the performance, power consumption, and logic density of HBM4 memory, catering to the demands of high-performance computing (HPC) and artificial intelligence (AI) applications. The shift from a traditional 1024-bit interface to an ultra-wide 2048-bit interface is a key aspect of the new HBM4 standard. This change will enable the integration of more logic and higher performance while reducing power consumption. TSMC's N12FFC+ and N5 processes will be used to produce these base dies, with the N12FFC+ process offering a cost-effective solution for achieving HBM4 performance and the N5 process providing even more logic and lower power consumption at HBM4 speeds.

The company is collaborating with major HBM memory partners, including Micron, Samsung, and SK Hynix, to integrate advanced nodes for HBM4 full-stack integration. TSMC's base die, fabricated using the N12FFC+ process, will be used to install HBM4 memory stacks on a silicon interposer alongside system-on-chips (SoCs). This setup will enable the creation of 12-Hi (48 GB) and 16-Hi (64 GB) stacks with per-stack bandwidth exceeding 2 TB/s. TSMC's collaboration with EDA partners like Cadence, Synopsys, and Ansys ensures the integrity of HBM4 channel signals, thermal accuracy, and electromagnetic interference (EMI) in the new HBM4 base dies. TSMC is also optimizing CoWoS-L and CoWoS-R for HBM4 integration, as massive high-performance chips already utilize this packaging technology and are getting ready for volume manufacturing.
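
For context, the stack capacities and bandwidth quoted above imply roughly the following per-die capacity and per-pin signaling rate. This is a rough, illustrative calculation; the per-die and per-pin figures are inferred from the quoted numbers, not stated by TSMC:

# Implied per-die capacity and per-pin data rate (illustrative only).
stack_12hi_gb, stack_16hi_gb = 48, 64
print(stack_12hi_gb / 12, stack_16hi_gb / 16)  # 4.0 GB per DRAM die in both cases

interface_bits = 2048                 # HBM4 interface width
stack_bw_tbps = 2.0                   # ">2 TB/s" per stack, taken at 2 TB/s
per_pin_gbps = stack_bw_tbps * 1e12 * 8 / interface_bits / 1e9
print(round(per_pin_gbps, 1))         # ~7.8 Gbps per pin needed to reach 2 TB/s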

NVIDIA "Blackwell" Successor Codenamed "Rubin," Coming in Late-2025

NVIDIA has barely started shipping its "Blackwell" line of AI GPUs, and its next-generation architecture is already on the horizon. Codenamed "Rubin," after Vera Rubin, the new architecture will power NVIDIA's future AI GPUs with generational jumps in performance, but more importantly, a design focus on lowering the power draw. This will become especially important as NVIDIA's current architectures already approach the kilowatt range, and cannot scale boundlessly. TF International Securities analyst Ming-Chi Kuo says that NVIDIA's first AI GPU based on "Rubin," the R100 (not to be confused with an ATI GPU from many moons ago), is expected to enter mass-production in Q4-2025, which means it could be unveiled and demonstrated sooner than that; and select customers could have access to the silicon sooner, for evaluations.

The R100, according to Ming-Chi Kuo, is expected to leverage TSMC's 3 nm EUV FinFET process, specifically the TSMC-N3 node. In comparison, the new "Blackwell" B100 uses TSMC-N4P. This will be a chiplet GPU, using a 4x reticle design compared to Blackwell's 3.3x reticle design, and TSMC's CoWoS-L packaging, just like the B100. The silicon is expected to be among the first users of HBM4 stacked memory, and feature eight stacks of an as-yet-unknown height. The Grace Rubin GR200 CPU+GPU combo could feature a refreshed "Grace" CPU built on the 3 nm node, likely an optical shrink meant to reduce power. A Q4-2025 mass-production target would mean that customers start receiving the chips by early 2026.

SK hynix Strengthens AI Memory Leadership & Partnership With Host at the TSMC 2024 Tech Symposium

SK hynix showcased its next-generation technologies and strengthened key partnerships at the TSMC 2024 Technology Symposium held in Santa Clara, California on April 24. At the event, the company displayed its industry-leading HBM AI memory solutions and highlighted its collaboration with TSMC involving the host's CoWoS advanced packaging technology.

TSMC, a global semiconductor foundry, invites its major partners to this annual conference in the first half of each year so they can share their new products and technologies. Attending the event under the slogan "Memory, the Power of AI," SK hynix received significant attention for presenting the industry's most powerful AI memory solution, HBM3E. The product has recently demonstrated industry-leading performance, achieving input/output (I/O) transfer speed of up to 10 gigabits per second (Gbps) in an AI system during a performance validation evaluation.

TSMC Celebrates 30th North America Technology Symposium with Innovations Powering AI with Silicon Leadership

TSMC today unveiled its newest semiconductor process, advanced packaging, and 3D IC technologies for powering the next generation of AI innovations with silicon leadership at the company's 2024 North America Technology Symposium. TSMC debuted the TSMC A16 technology, featuring leading nanosheet transistors with an innovative backside power rail solution, slated for production in 2026 and bringing greatly improved logic density and performance. TSMC also introduced its System-on-Wafer (TSMC-SoW) technology, an innovative solution to bring revolutionary performance to the wafer level in addressing future AI requirements for hyperscaler datacenters.

This year marks the 30th anniversary of TSMC's North America Technology Symposium, and more than 2,000 people attended the event, up from fewer than 100 attendees 30 years ago. The North America Technology Symposium in Santa Clara, California kicks off TSMC's Technology Symposiums around the world in the coming months. The symposium also features an "Innovation Zone," designed to highlight the technology achievements of TSMC's emerging start-up customers.

TSMC to Introduce Location Premium for Overseas Chip Production

As part of its Q1 earnings call, TSMC, one of the largest semiconductor manufacturers, unveiled a strategic move to charge a premium for chips manufactured at its newly established overseas fabrication plants. TSMC's CEO, C.C. Wei, announced that the company will impose higher pricing for chips produced outside Taiwan to offset the higher operational costs associated with these international locations. This move aims to maintain TSMC's target gross margin of 53% amidst rising expenses such as inflation and elevated electricity costs. This decision comes as TSMC expands its global footprint with new facilities in the United States, Germany, and Japan (JASM) to meet the increasing demand for semiconductor chips worldwide. The company's new US-based Arizona facility, known as Fab 21, has faced delays due to equipment installation issues and labor negotiations.

Chips produced at this site, utilizing TSMC's advanced N5 and N4 nodes, could cost between 20% and 30% more than those manufactured in Taiwan. TSMC's strategy to manage the cost disparities across different geographic locations involves strategic pricing, securing government support, and leveraging its manufacturing technology leadership. This approach reflects the company's commitment to maintaining its competitive edge while navigating the complexities of global semiconductor manufacturing in today's fragmented market. Introducing a location premium is expected to impact American semiconductor designers, who may need to pass these costs on to specific market segments, particularly those with lower price sensitivity, such as government-related projects. Despite these challenges, TSMC's overseas expansion underscores its adaptive strategies in the face of global economic pressures and industry demands, ensuring its continued position as a leading player in the semiconductor industry.

SK hynix Collaborates with TSMC on HBM4 Chip Packaging

SK hynix Inc. announced today that it has recently signed a memorandum of understanding with TSMC to collaborate on producing next-generation HBM and enhancing logic-HBM integration through advanced packaging technology. Through this initiative, the company plans to proceed with the development of HBM4, the sixth generation of the HBM family, slated for mass production from 2026.

SK hynix said the collaboration between the global leader in the AI memory space and TSMC, a top global logic foundry, will lead to more innovations in HBM technology. The collaboration is also expected to enable breakthroughs in memory performance through trilateral collaboration between product design, foundry, and memory provider. The two companies will first focus on improving the performance of the base die that is mounted at the very bottom of the HBM package. HBM is made by stacking a core DRAM die on top of a base die that features TSV technology, and vertically connecting a fixed number of layers in the DRAM stack to the core die with TSV into an HBM package. The base die located at the bottom is connected to the GPU, which controls the HBM.

Demand for NVIDIA's Blackwell Platform Expected to Boost TSMC's CoWoS Total Capacity by Over 150% in 2024

NVIDIA's next-gen Blackwell platform, which includes B-series GPUs and integrates NVIDIA's own Grace Arm CPU in models such as the GB200, represents a significant development. TrendForce points out that the GB200 and its predecessor, the GH200, both feature a combined CPU+GPU solution, primarily equipped with the NVIDIA Grace CPU and H200 GPU. However, the GH200 accounted for only approximately 5% of NVIDIA's high-end GPU shipments. The supply chain has high expectations for the GB200, with projections suggesting that its shipments could reach millions of units by 2025, potentially making up nearly 40 to 50% of NVIDIA's high-end GPU market.

Although NVIDIA plans to launch products such as the GB200 and B100 in the second half of this year, upstream wafer packaging will need to adopt more complex and high-precision CoWoS-L technology, making the validation and testing process time-consuming. Additionally, more time will be required to optimize the B-series for AI server systems in aspects such as network communication and cooling performance. It is anticipated that the GB200 and B100 products will not see significant production volumes until 4Q24 or 1Q25.

Meta Announces New MTIA AI Accelerator with Improved Performance to Ease NVIDIA's Grip

Meta has announced the next generation of its Meta Training and Inference Accelerator (MTIA) chip, which is designed to train and infer AI models at scale. The newest MTIA chip is a second-generation design of Meta's custom silicon for AI, and it is being built on TSMC's 5 nm technology. Running at a frequency of 1.35 GHz, the new chip gets a boost to 90 Watts of TDP per package, compared to just 25 Watts for the first-generation design. Basic Linear Algebra Subprograms (BLAS) processing is where the chip shines, including matrix multiplication and vector/SIMD processing. At GEMM matrix processing, each chip can deliver 708 TeraFLOPS at INT8 (presumably FP8 in the spec) with sparsity, 354 TeraFLOPS without, 354 TeraFLOPS at FP16/BF16 with sparsity, and 177 TeraFLOPS without.

Classical vector processing is a bit slower at 11.06 TeraFLOPS at INT8 (FP8), 5.53 TeraFLOPS at FP16/BF16, and 2.76 TeraFLOPS at single-precision FP32. The MTIA chip is specifically designed to run AI training and inference on Meta's PyTorch AI framework, with an open-source Triton backend that produces compiler code for optimal performance. Meta uses this for all its Llama models, and with Llama 3 just around the corner, it could be trained on these chips. To package it into a system, Meta puts two of these chips onto a board and pairs them with 128 GB of LPDDR5 memory. The board is connected via PCIe Gen 5 to a system where 12 boards are stacked densely. This is repeated six times in a single rack, for 72 boards and 144 chips per rack, totaling 101.95 PetaFLOPS, assuming linear scaling at INT8 (FP8) precision. Of course, linear scaling is not quite possible in scale-out systems, which could bring it down to under 100 PetaFLOPS per rack.
Below, you can see images of the chip floorplan, specifications compared to the prior version, as well as the system.
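
To make the rack-level math explicit, here is a minimal sketch of how those figures compose, assuming the per-chip sparse INT8/FP8 number and perfectly linear scaling (which, as noted above, is optimistic):

# Rack-level throughput from the per-chip figure (illustrative, assumes linear scaling).
tflops_per_chip = 708          # INT8/FP8 GEMM with sparsity, per MTIA chip
chips_per_board = 2
boards_per_system = 12
systems_per_rack = 6

chips_per_rack = chips_per_board * boards_per_system * systems_per_rack  # 144
rack_tflops = chips_per_rack * tflops_per_chip                           # 101,952
print(chips_per_rack, rack_tflops / 1000)  # 144 chips, ~101.95 PetaFLOPS per rack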

Intel Launches Gaudi 3 AI Accelerator: 70% Faster Training, 50% Faster Inference Compared to NVIDIA H100, Promises Better Efficiency Too

During the Vision 2024 event, Intel announced its latest Gaudi 3 AI accelerator, promising significant improvements over its predecessor. Intel claims the Gaudi 3 offers up to 70% better training performance, 50% better inference, and 40% better efficiency than NVIDIA's H100 processors. The new AI accelerator is presented as a PCIe Gen 5 dual-slot add-in card with a 600 W TDP or an OAM module with 900 W. The PCIe card has the same peak 1,835 TeraFLOPS of FP8 performance as the OAM module despite a 300 W lower TDP. The PCIe version works as a group of four per system, while the OAM HL-325L modules can be run in an eight-accelerator configuration per server. The lower TDP will likely result in lower sustained performance, but it confirms that the same silicon is used, just fine-tuned to a lower frequency. Built on TSMC's N5 5 nm node, the AI accelerator features 64 Tensor Cores, delivering double the FP8 and quadruple the FP16 performance of the previous-generation Gaudi 2.

The Gaudi 3 AI chip comes with 128 GB of HBM2E offering 3.7 TB/s of bandwidth and 24 × 200 Gbps Ethernet NICs, with dual 400 Gbps NICs used for scale-out. All of that is laid out on 10 tiles that make up the Gaudi 3 accelerator, which you can see pictured below. There is 96 MB of SRAM split between two compute tiles, which acts as a low-level cache that bridges data communication between Tensor Cores and HBM memory. Intel also announced support for the new performance-boosting standardized MXFP4 data format and is developing an AI NIC ASIC for Ultra Ethernet Consortium-compliant networking. The Gaudi 3 supports clusters of up to 8,192 cards, built from 1,024 nodes of eight accelerators each. It is on track for volume production in Q3, offering a cost-effective alternative to NVIDIA accelerators with the additional promise of a more open ecosystem. More information and a deeper dive can be found in the Gaudi 3 Whitepaper.
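
As a quick illustration of that cluster sizing and its theoretical ceiling, here is a back-of-the-envelope sketch; the aggregate figure is derived from the per-card peak quoted above, not an Intel claim, and real scale-out efficiency would land well below it:

# Gaudi 3 cluster sizing and theoretical peak (illustrative only).
accelerators_per_node = 8
nodes = 1024
cards = accelerators_per_node * nodes            # 8,192 cards
peak_fp8_tflops_per_card = 1835                  # per-card peak FP8
cluster_peak_exaflops = cards * peak_fp8_tflops_per_card / 1e6
print(cards, round(cluster_peak_exaflops, 1))    # 8192 cards, ~15.0 EFLOPS peak FP8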

US Backs TSMC's $65B Arizona Investment with $11.6B Support Package

According to the latest report from Bloomberg, the US government under Joe Biden's administration has announced plans to provide Taiwan Semiconductor Manufacturing Company (TSMC) with a substantial financial support package worth $11.6 billion. The package is composed of $6.6 billion in grants and up to $5 billion in loans. This represents the most significant financial assistance approved under the CHIPS and Science Act, a key initiative to resurrect the US chip industry. The funding will aid TSMC in establishing three cutting-edge semiconductor production facilities in Arizona, with the company's total investment in the state expected to exceed an impressive $65 billion. TSMC's multi-phase Arizona project will commence with the construction of a fab module near its existing Fab 21 facility. Production using 4 nm and 5 nm process nodes is slated to begin by early 2025. The second phase, scheduled for 2028, will focus on even more advanced 2 nm and 3 nm technologies.

TSMC has kept details about the third facility's production timeline and process node under wraps. The company's massive investment in Arizona is expected to profoundly impact the local economy, creating 6,000 high-tech manufacturing jobs and over 20,000 construction positions. Moreover, $50 million has been earmarked for training local workers, which aligns with President Joe Biden's goal of bolstering domestic manufacturing and technological independence. However, TSMC's Arizona projects have encountered obstacles, including labor disputes and uncertainties regarding government support, resulting in delays for the second facility's production timeline. Additionally, reports suggest that at least one TSMC supplier has abandoned plans to set up operations in Arizona due to workforce-related challenges.

Magnitude 7.4 Earthquake in Taiwan Halts Production at TSMC and Other Foundries

At 07:58 local time, Taiwan was rocked by a magnitude 7.4 earthquake on the east coast, which was felt nationwide and as far away as the southeastern parts of China and southern Japan. It caused some major damage in the east coast city of Hualien, where the epicentre of the quake was located, as well as in surrounding areas. The earthquake reportedly left nine people dead and over 900 injured islandwide. TSMC, UMC, PSMC and Innolux all halted some of their production lines in the Hsinchu Science Park on the west coast of the island, although this is said to have been a preventive step rather than a response to actual damage from the earthquake.

All the above-mentioned companies also evacuated staff from their factories due to the intensity of the quake, which reached around four or five on the intensity scale almost islandwide. The semiconductor manufacturers are all inspecting their fabs to make sure none of the equipment was damaged by the earthquake. Innolux also has a factory in the southern city of Kaohsiung and has reported that it has suspended production in Hsinchu, but that production in Kaohsiung wasn't affected. Local media in Taiwan haven't made any mention of the likes of Micron or other chip manufacturers, but it's likely that the situation is similar, since all of these companies operate in the same areas on the island. Aftershocks have continued throughout the day and there's a risk of further big earthquakes in the coming days.
Images courtesy of the Taiwan Central Weather Administration (CWA).

Update 15:11 UTC: Updated with an official statement from Micron below.

Apple M3 Ultra Chip Could be a Monolithic Design Without UltraFusion Interconnect

As we witness Apple's generational updates to its M series of chips, the yet-to-be-announced top-of-the-line M3 Ultra, the highly anticipated SKU of the third-generation M series, is drawing speculation from industry insiders. The latest round of reports suggests that the M3 Ultra might step away from its predecessor's design, potentially adopting a monolithic architecture without the UltraFusion interconnect technology. In the past, Apple has relied on a dual-chip design for its Ultra variants, using the UltraFusion interconnect to combine two M series Max chips. For example, the second-generation M Ultra chip, the M2 Ultra, boasts 134 billion transistors across two 510 mm² chips. However, die-shots of the M3 Max have sparked discussions about the absence of dedicated chip space for the UltraFusion interconnect.

While the absence of visible interconnect space on early die-shots is not conclusive evidence (the M1 Max showed no visible UltraFusion interconnect either, yet still formed part of the UltraFusion-linked M1 Ultra), industry observers have speculated that the M3 Ultra may indeed feature a monolithic design. Considering that the M3 Max has 92 billion transistors and is estimated to have a die size between 600 and 700 mm², going Ultra with these chips may be pushing the manufacturing limit. Considering the maximum die size limit of 848 mm² for the TSMC N3B process used by Apple, there may not be sufficient space for a dual-chip M3 Ultra design. The potential shift to a monolithic design for the M3 Ultra raises questions about how Apple will scale the chip's performance without the UltraFusion interconnect. Competing solutions, such as NVIDIA's Blackwell GPU, use a high-bandwidth C2C interface to connect two 104-billion-transistor chips, achieving a bandwidth of 10 TB/s. In comparison, the M2 Ultra's UltraFusion interconnect provided a bandwidth of 2.5 TB/s.
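
A quick size check puts that speculation in perspective. This is a rough illustration using the estimates above; the exact M3 Max die size is not confirmed, and a "2x M3 Max on one die" design is purely hypothetical:

# Rough die-size check for a hypothetical monolithic "2x M3 Max" design (illustrative).
m3_max_mm2_low, m3_max_mm2_high = 600, 700   # estimated M3 Max die size range
reticle_limit_mm2 = 848                      # maximum die size for TSMC N3B

doubled = (2 * m3_max_mm2_low, 2 * m3_max_mm2_high)
print(doubled, reticle_limit_mm2)            # (1200, 1400) vs 848 -> far over the limit

# Interconnect bandwidth comparison mentioned above:
print(10 / 2.5)                              # Blackwell C2C is ~4x the M2 Ultra's UltraFusion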

Intel Lunar Lake Chiplet Arrangement Sees Fewer Tiles—Compute and SoC

Intel Core Ultra "Lunar Lake-MX" will be the company's bulwark against Apple's M-series Pro and Max chips, designed to power the next crop of performance ultraportables. The MX codename extension denotes MoP (memory-on-package), which sees stacked LPDDR5X memory chips share the package's fiberglass substrate with the chip, conserving PCB footprint and giving Intel greater control over the memory speed, timings, and power-management features best suited to its microarchitecture. This is essentially what Apple does with the M-series SoCs powering its MacBooks and iPad Pros. Igor's Lab scored the motherlode of details on the way Intel has restructured the various components across its chiplets, and the various I/O wired to the package.

When compared to "Meteor Lake," the "Lunar Lake" microarchitecture sees a small amount of "re-aggregation" of the various logic-heavy components of the processor. On "Meteor Lake," the CPU cores and the iGPU sat on separate tiles—Compute tile and Graphics tile, respectively, with a large SoC tile sitting between them, and a smaller I/O tile that serves as an extension of the SoC tile. All four tiles sat on top of a Foveros base tile, which is essentially an interposer—a silicon die that facilitates high-density microscopic wiring between the various tiles that are placed on top of it. With "Lunar Lake," there are only two tiles—the Compute tile, and the SoC tile.

TSMC 3 nm Node to Make Up 20% of Company's Revenues in 2024

The 3 nm EUV node, which will be TSMC's final semiconductor fabrication node to implement FinFET transistors, will account for a staggering 20% of TSMC's revenues in 2024, a report by ICSmart says. 20% is big for a new foundry node, considering that TSMC is simultaneously running 4 nm and 5 nm EUV nodes, 6 nm and 7 nm DUV nodes, and several older mature nodes. Apple is expected to be the foundry's biggest customer for 3 nm, as it could power the company's current A17 and M3 and upcoming A18 and M4 lines of chips for its next-generation iPhones and MacBooks; followed by NVIDIA, AMD, and possibly even Intel. AMD is expected to build some versions of its upcoming "Zen 5" processors on 3 nm, while Intel is expected to use 3 nm for some of the tiles of its upcoming "Lunar Lake" processor. The same report goes on to suggest that 3 nm will make up 30% of TSMC's revenues in 2025.

NVIDIA CEO Reiterates Solid Partnership with TSMC

One key takeaway from the ongoing GTC is that NVIDIA's AI empire has taken shape with strong partnerships with TSMC and other Taiwanese makers, such as the major server ODMs.

According to a news report from the technology-focused media outlet DIGITIMES Asia, during his keynote at GTC on March 18, Huang underscored his company's partnerships with TSMC, as well as the supply chain in Taiwan. Speaking to the press later, Huang said NVIDIA will have very strong demand for CoWoS, the advanced packaging service TSMC offers.