News Posts matching #datacenter


TSMC Celebrates 30th North America Technology Symposium with Innovations Powering AI with Silicon Leadership

TSMC today unveiled its newest semiconductor process, advanced packaging, and 3D IC technologies for powering the next generation of AI innovations with silicon leadership at the Company's 2024 North America Technology Symposium. TSMC debuted the TSMC A16 technology, featuring leading nanosheet transistors with an innovative backside power rail solution slated for production in 2026, bringing greatly improved logic density and performance. TSMC also introduced its System-on-Wafer (TSMC-SoW) technology, an innovative solution that brings revolutionary performance to the wafer level, addressing future AI requirements for hyperscaler datacenters.

This year marks the 30th anniversary of TSMC's North America Technology Symposium; more than 2,000 people attended the event, up from fewer than 100 attendees 30 years ago. The North America Technology Symposium in Santa Clara, California kicks off TSMC's Technology Symposiums around the world in the coming months. The symposium also features an "Innovation Zone," designed to highlight the technology achievements of TSMC's emerging start-up customers.

Lenovo Anticipates Great Demand for AMD Instinct MI300X Accelerator Products

Ryan McCurdy, President of Lenovo North America, revealed an ambitious, forward-thinking product roadmap during an interview with CRN magazine. A hybrid strategic approach is expected to create an AI fast lane on future hardware—McCurdy, a former Intel veteran, stated: "there will be a steady stream of product development to add (AI PC) hardware capabilities in a chicken-and-egg scenario for the OS and for the (independent software vendor) community to develop their latest AI capabilities on top of that hardware...So we are really paving the AI autobahn from a hardware perspective so that we can get the AI software cars to go faster on them." Lenovo—as expected—is jumping on the AI-on-device train, but it will be diversifying its range of AI server systems with new AMD and Intel-powered options. The company has reacted to recent Team Green AI GPU supply issues—alternative units are now in the picture: "with NVIDIA, I think there's obviously lead times associated with it, and there's some end customer identification, to make sure that the products are going to certain identified end customers. As we showcased at Tech World with NVIDIA on stage, AMD on stage, Intel on stage and Microsoft on stage, those industry partnerships are critical to not only how we operate on a tactical supply chain question but also on a strategic what's our value proposition."

McCurdy did not go into detail about upcoming Intel-based server equipment, but seemed excited about AMD's Instinct MI300X accelerator—Lenovo was (previously) announced as one of the early OEM takers of Team Red's latest CDNA 3.0 tech. CRN asked about the firm's outlook for upcoming MI300X-based inventory—McCurdy responded with: "I won't comment on an unreleased product, but the partnership I think illustrates the larger point, which is the industry is looking for a broad array of options. Obviously, when you have any sort of lead times, especially six-month, nine-month and 12-month lead times, there is interest in this incredible technology to be more broadly available. I think you could say in a very generic sense, demand is as high as we've ever seen for the product. And then it comes down to getting the infrastructure launched, getting testing done, and getting workloads validated, and all that work is underway. So I think there is a very hungry end customer-partner user base when it comes to alternatives and a more broad, diverse set of solutions."

EdgeCortix to Showcase Flagship SAKURA-I Chip at Singapore Airshow 2024

EdgeCortix, the Japan-based fabless semiconductor company focused on energy-efficient AI processing, announced today that the Acquisitions, Technology and Logistics Agency (ATLA), Japan Ministry of Defense, will include the groundbreaking edge AI startup alongside an elite group of leading Japanese companies to represent Japan's air and defense innovation landscape at ATLA's booth at the Singapore Airshow to be held February 20 - 25. The Singapore Airshow is one of the largest and most influential shows of its kind in the world, and the largest in Asia, seeing as many as 50,000 attendees per biennial show. Over 1,000 companies from 50 countries are expected to participate in the 2024 show.

EdgeCortix's flagship product, the SAKURA-I chip, will be featured among a small handful of influential Japanese innovations at the booth. SAKURA-I is a dedicated co-processor that delivers high compute efficiency and low latency for artificial intelligence (AI) workloads that are carried out "at the edge," where data is collected and mission-critical decisions need to be made—far away from a datacenter. SAKURA-I delivers orders of magnitude better energy efficiency and processing speed than conventional semiconductors (e.g., GPUs and CPUs), while drastically reducing operating costs for end users.

AMD Instinct MI300X Released at Opportune Moment. NVIDIA AI GPUs in Short Supply

LaminiAI appeared to be one of the first customers to receive an initial shipment of AMD's Instinct MI300X accelerators, as disclosed by its CEO, who posted about functioning hardware on social media late last week. A recent Taiwan Economic Daily article states that the "MI300X is rumored to have begun supply"—we are not sure why the publication has adopted a semi-secretive tone, but a couple of anonymous sources are cited. A person familiar with supply chains in Taiwan divulged that: "(they have) been receiving AMD MI300X chips one after another...due to the huge shortage of NVIDIA AI chips, the arrival of new AMD products is really a timely rainfall." Favorable industry analysis (from earlier this month) has placed Team Red in a position of strength, due to growing interest in its very performant flagship AI accelerator.

The secrecy seems to lie in Team Red's negotiation strategies in Taiwan—the news piece alleges that big manufacturers in the region have been courted. AMD has been aggressive in a push to: "cooperate and seize AI business opportunities, with GIGABYTE taking the lead and attracting the most attention. Not only was GIGABYTE the first to obtain a partnership with AMD's MI300A chip, which had previously been mass-produced, but GIGABYTE was also one of the few Taiwanese manufacturers included in AMD's first batch of MI300X partners." GIGABYTE is expected to release two new "G593" product lines of server hardware later this year, based on combinations of AMD's Instinct MI300X accelerator and EPYC 9004 series processors.

OpenAI Reportedly Talking to TSMC About Custom Chip Venture

OpenAI is reported to be initiating R&D on a proprietary AI processing solution—the research organization's CEO, Sam Altman, has commented on the inefficient operation of datacenters running NVIDIA H100 and A100 GPUs. He foresees a future scenario where his company becomes less reliant on Team Green's off-the-shelf AI-crunchers, thanks to a deployment of bespoke AI processors. A short Reuters interview also underlined Altman's desire to find alternative sources of power: "It motivates us to go invest more in (nuclear) fusion." The growth of artificial intelligence industries has put an unprecedented strain on energy providers, so tech firms could be semi-forced into seeking out frugal enterprise hardware.

The Financial Times has followed up on last week's Bloomberg report of OpenAI courting investment partners in the Middle East. FT's news piece alleges that Altman is in talks with billionaire businessman Sheikh Tahnoon bin Zayed al-Nahyan, a very well connected member of the United Arab Emirates Royal Family. OpenAI's leadership is reportedly negotiating with TSMC—The Financial Times alleges that Taiwan's top chip foundry is an ideal manufacturing partner. This revelation contradicts Bloomberg's recent reports of a potential custom OpenAI AI chip venture involving purpose-built manufacturing facilities. The whole project is said to be at an early stage of development, so Altman and his colleagues are most likely exploring a variety of options.

HBM Industry Revenue Could Double by 2025 - Growth Driven by Next-gen AI GPUs

Samsung, SK hynix, and Micron are considered to be the top manufacturing sources of High Bandwidth Memory (HBM)—the HBM3 and HBM3E standards are increasingly in demand, due to a widespread deployment of GPUs and accelerators by generative AI companies. Taiwan's Commercial Times proposes that there is an ongoing shortage of HBM components—but this presents a growth opportunity for smaller manufacturers in the region. Naturally, the big-name producers are expected to dive in head first with the development of next-generation models. The aforementioned financial news article cites research conducted by Gartner—the firm predicts that the HBM market will hit an all-time high of $4.976 billion (USD) by 2025.

This estimate is almost double the projected revenue (just over $2 billion) generated by the HBM market in 2023—the explosive growth of generative AI applications has "boosted" demand for the most performant memory standards. The Commercial Times report states that SK hynix is the current HBM3E leader, with Micron and Samsung trailing behind—industry experts believe that the stragglers will need to "expand HBM production capacity" in order to stay competitive. SK hynix has teamed up with NVIDIA—the GH200 Grace Hopper platform was unveiled last summer, outfitted with the South Korean firm's HBM3e parts. In a similar timeframe, Samsung was named as AMD's preferred supplier of HBM3 packages—as featured within the recently launched Instinct MI300X accelerator. NVIDIA's HBM3E deal with SK hynix is believed to extend to the internal makeup of Blackwell GB100 datacenter GPUs. The HBM4 memory standard is expected to be the next major battleground for the industry's hardest hitters.
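The doubling claim is easy to sanity-check. A back-of-envelope sketch (the function name is our own; the dollar figures are the Gartner estimates quoted above):

```python
def implied_cagr(start_revenue: float, end_revenue: float, years: int) -> float:
    """Compound annual growth rate implied by two revenue points."""
    return (end_revenue / start_revenue) ** (1 / years) - 1

# Gartner's figures: roughly $2.0 billion in 2023 rising to $4.976 billion
# in 2025 implies growth of about 58% per year over the two-year span.
hbm_growth = implied_cagr(2.0, 4.976, 2)
```

Sustaining a near-60% annual growth rate for two consecutive years is what makes the "almost double" headline figure plausible rather than hyperbolic.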

Alphawave Semi Partners with Keysight to Deliver a Complete PCIe 6.0 Subsystem Solution

Alphawave Semi (LSE: AWE), a global leader in high-speed connectivity for the world's technology infrastructure, today announced successful collaboration with Keysight Technologies, a market-leading design, emulation, and test solutions provider, demonstrating interoperability between Alphawave Semi's PCIe 6.0 64 GT/s Subsystem (PHY and Controller) Device and Keysight PCIe 6.0 64 GT/s Protocol Exerciser, negotiating a link to the maximum PCIe 6.0 data rate. Alphawave Semi, already on the PCI-SIG 5.0 Integrators list, is accelerating next-generation PCIe 6.0 Compliance Testing through this collaboration.

Alphawave Semi's leading-edge silicon implementation of the new PCIe 6.0 64 GT/s Flow Control Unit (FLIT)-based protocol enables higher data rates for hyperscale and data infrastructure applications. Keysight and Alphawave Semi achieved another milestone by successfully establishing a CXL 2.0 link, setting the stage for future cache coherency in the datacenter.
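For context, the headline 64 GT/s figure translates directly into link bandwidth. A rough sketch (our own illustration, not Alphawave Semi or Keysight code; FLIT framing and FEC overhead are ignored, so real-world throughput is somewhat lower):

```python
def pcie_raw_bandwidth_gbs(transfer_rate_gts: float, lanes: int) -> float:
    """Raw unidirectional link bandwidth in GB/s: one bit per transfer per
    lane, divided by 8 to convert bits to bytes."""
    return transfer_rate_gts * lanes / 8

# PCIe 6.0 at 64 GT/s works out to 8 GB/s per lane per direction,
# or 128 GB/s per direction on a full x16 link.
x16_bandwidth = pcie_raw_bandwidth_gbs(64, 16)
```

That doubling over PCIe 5.0's 32 GT/s, achieved via PAM4 signaling, is why hitting the maximum data rate in interoperability testing is a meaningful milestone.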

TYAN Upgrades HPC, AI and Data Center Solutions with the Power of 5th Gen Intel Xeon Scalable Processors

TYAN, a leading server platform design manufacturer and a MiTAC Computing Technology Corporation subsidiary, today introduced upgraded server platforms and motherboards based on the brand-new 5th Gen Intel Xeon Scalable Processors, formerly codenamed Emerald Rapids.

The 5th Gen Intel Xeon processor has increased to up to 64 cores, featuring a larger shared cache, higher UPI and DDR5 memory speeds, as well as PCIe 5.0 with 80 lanes. Growing and excelling with workload-optimized performance, the 5th Gen Intel Xeon delivers more compute power and faster memory within the same power envelope as the previous generation. "5th Gen Intel Xeon is the second processor offering inside the 2023 Intel Xeon Scalable platform, offering improved performance and power efficiency to accelerate TCO and operational efficiency," said Eric Kuo, Vice President of Server Infrastructure Business Unit, MiTAC Computing Technology Corporation. "By harnessing the capabilities of Intel's new Xeon CPUs, TYAN's 5th Gen Intel Xeon-supported solutions are designed to handle the intense demands of HPC, data centers, and AI workloads."
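Those 80 lanes of PCIe 5.0 add up to a sizeable amount of I/O. A quick sketch of the aggregate figure (our own illustration, not TYAN or Intel code; it assumes the standard 32 GT/s PCIe 5.0 rate with 128b/130b encoding):

```python
def pcie5_usable_bandwidth_gbs(lanes: int) -> float:
    """Approximate usable unidirectional bandwidth for PCIe 5.0:
    32 GT/s per lane, scaled by 128b/130b encoding (~98.5% efficient),
    divided by 8 to convert bits to bytes."""
    return 32 * (128 / 130) / 8 * lanes

# Roughly 3.94 GB/s per lane; all 80 lanes together give about
# 315 GB/s in each direction for NICs, NVMe storage, and accelerators.
total_io = pcie5_usable_bandwidth_gbs(80)
```

That headroom is what lets a single-socket platform feed multiple accelerators and high-speed NICs without oversubscription.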

AMD Instinct MI300X Could Become Company's Fastest Product to Rake $1 Billion in Sales

AMD, in its post-Q3 2023 financial results call, stated that it expects the Instinct MI300X accelerator to be the fastest product in AMD history to rake in $1 billion in sales—that is, the shortest time taken by any AMD product to register $1 billion in sales over its lifecycle. With the MI300 series, the company hopes to finally break into the AI-driven HPC accelerator market that is dominated by NVIDIA, and at scale. This growth is attributable to two distinct factors: first, NVIDIA is supply-bottlenecked, and customers looking for alternatives have finally found a suitable one in the MI300 series; second, with the MI300 series, AMD has finally ironed out the software ecosystem backing hardware that looks incredible on paper.

It's also worth noting that AMD is rumored to be sacrificing its market presence in the enthusiast-class gaming GPU segment in its next generation, with the goal of maximizing its foundry allocation for HPC accelerators such as the MI300X. HPC accelerators are a significantly higher-margin class of products than gaming GPUs such as the Radeon RX 7900 XTX. The RX 7900 XTX and its refresh, the RX 7950 series, are not expected to have a successor in the RDNA4 generation. "We now expect datacenter GPU revenue to be approximately $400 million in the fourth quarter and exceed $2 billion in 2024 as revenue ramps throughout the year," said Dr. Lisa Su, CEO of AMD, at the company's earnings call with analysts and investors. "This growth would make MI300 the fastest product to ramp to $1 billion in sales in AMD history."

Lenovo Opens New Global Innovation Centre in Budapest

Lenovo today announced that its Europe-based Innovation Centre, specializing in HPC and AI, will now operate with an enhanced customer experience from the company's in-house manufacturing facility in Budapest.

Running the new Innovation Centre's operations from the Budapest factory, with onsite inventory stock, allows Lenovo customers to access the most advanced power and cooling infrastructure solutions and the latest-generation technology within the supply chain. This access ensures that workloads are tested on accurate representations of the purchased end solutions, so customers know with certainty that the installed Lenovo solution will perform successfully for the intended workload.

ASUS Showcases Cutting-Edge Cloud Solutions at OCP Global Summit 2023

ASUS, a global infrastructure solution provider, is excited to announce its participation in the 2023 OCP Global Summit, which is taking place from October 17-19, 2023, at the San Jose McEnery Convention Center. The prestigious annual event brings together industry leaders, innovators and decision-makers from around the world to explore and discuss the latest advancements in open infrastructure and cloud technologies, providing a perfect stage for ASUS to unveil its latest cutting-edge products.

The ASUS theme for the OCP Global Summit is Solutions beyond limits—ASUS empowers AI, cloud, telco and more. We will showcase an array of products:

Supermicro Introduces a Number of Density and Power Optimized Edge Platforms for Telco Providers, Based on the New AMD EPYC 8004 Series Processor

Supermicro, Inc., a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, is announcing the AMD based Supermicro H13 generation of WIO Servers, optimized to deliver strong performance and energy efficiency for edge and telco datacenters powered by the new AMD EPYC 8004 Series processors. The new Supermicro H13 WIO and short-depth front I/O systems deliver energy-efficient single socket servers that lower operating costs for enterprise, telco, and edge applications. These systems are designed with a dense form factor and flexible I/O options for storage and networking, making the new servers ideal for deploying in edge networks.

"We are excited to expand our AMD EPYC-based server offerings optimized to deliver excellent TCO and energy efficiency for data center networking and edge computing," said Charles Liang, president and CEO of Supermicro. "Adding to our already industry leading edge-to-cloud rack scale IT solutions, the new Supermicro H13 WIO systems with PCIe 5.0 and DDR5-4800 MHz memory show tremendous performance for edge applications."

MiTAC to Showcase Cloud and Datacenter Solutions, Empowering AI at Intel Innovation 2023

Intel Innovation 2023 - September 13, 2023 - MiTAC Computing Technology, a professional IT solution provider and a subsidiary of MiTAC Holdings Corporation, will showcase its DSG (Datacenter Solutions Group) product lineup powered by 4th Gen Intel Xeon Scalable processors for enterprise, cloud and AI workloads at Intel Innovation 2023, booth #H216 in the San Jose McEnery Convention Center, USA, from September 19-20.

"MiTAC has seamlessly and successfully managed the Intel DSG business since July. The datacenter solution product lineup enhances MiTAC's product portfolio and service offerings. Our customers can now enjoy a comprehensive one-stop service, ranging from motherboards and barebones servers to Intel Data Center blocks and complete rack integration for their datacenter infrastructure needs," said Eric Kuo, Vice President of the Server Infrastructure Business Unit at MiTAC Computing Technology.

Intel Demos 6th Gen Xeon Scalable CPUs, Core Counts Leaked

Intel's advanced packaging prowess demonstration took place this week—attendees were able to get an early-ish look at Team Blue's sixth-generation Xeon Scalable "Granite Rapids" and "Sierra Forest" processors. This multi-tile datacenter-oriented CPU family is projected to hit the market within the first half of 2024, but reports suggest that key enterprise clients have recently received evaluation samples. Coincidentally, renowned hardware leaker Yuuki_AnS has managed to source more information from industry insiders. This follows their complete blowout of more mainstream Raptor Lake Refresh desktop SKUs.

The leaked slide presents a bunch of evaluation sample "Granite Rapids-SP" XCC and "Sierra Forest" HCC SKUs. Intel has not officially published core counts for these upcoming "Avenue City" platform product lines. According to their official marketing blurb: "Intel Xeon processors with P-cores (Granite Rapids) are optimized to deliver the lowest total cost of ownership (TCO) for high-core performance-sensitive workloads and general-purpose compute workloads. Today, Xeon enables better AI performance than any other CPU, and Granite Rapids will further enhance AI performance. Built-in accelerators give an additional boost to targeted workloads for even greater performance and efficiency."

Andes Announces General Availability of the New AndesCore RISC-V Multicore Vector Processor AX45MPV

Andes Technology, a leading supplier of high-efficiency, low-power 32/64-bit RISC-V processor cores and a Founding Premier member of RISC-V International, today proudly announces general availability of the high-performance AndesCore AX45MPV multicore vector processor IP. The AX45MPV is the third generation of the award-winning AndesCore vector processor series. Equipped with powerful RISC-V vector processing and parallel execution capability, it targets applications with large volumes of data, such as ADAS, AI inference and training, AR/VR, multimedia, robotics, and signal processing.

Andes and Meta began collaborating on datacenter AI with RISC-V vector cores in early 2019. At the end of 2019, Andes unveiled the AndesCore NX27V, marking a significant milestone as the industry's first commercial RISC-V vector processor core capable of generating up to four 512-bit vector (VLEN) results per cycle. It immediately attracted the attention of SoC design teams worldwide working on AI accelerators, and has landed over a dozen datacenter AI projects. Since then, RISC-V vector processor cores have become the choice of many ML and AI chip vendors.
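The quoted rate of four 512-bit vector results per cycle can be turned into a peak element throughput. A sketch (the function and the 1.2 GHz clock are our own assumptions for illustration, not Andes specifications):

```python
def peak_elements_per_second(results_per_cycle: int, vlen_bits: int,
                             element_bits: int, clock_ghz: float) -> float:
    """Peak vector elements retired per second, assuming full pipeline
    utilization (an idealized upper bound, not a measured figure)."""
    elements_per_cycle = results_per_cycle * vlen_bits // element_bits
    return elements_per_cycle * clock_ghz * 1e9

# Four 512-bit results per cycle hold 128 FP16 elements; at an assumed
# 1.2 GHz clock that is roughly 153.6 billion elements per second.
peak_fp16 = peak_elements_per_second(4, 512, 16, 1.2)
```

Halving the element width doubles the element rate, which is why narrow data types such as FP16 and INT8 dominate AI inference workloads on vector hardware.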

Bulk Order of GPUs Points to Twitter Tapping Big Time into AI Potential

According to Business Insider, Twitter has made a substantial investment into hardware upgrades at its North American datacenter operation. The company has purchased somewhere in the region of 10,000 GPUs - destined for the social media giant's two remaining datacenter locations. Insider sources claim that Elon Musk has committed to a large language model (LLM) project, in an effort to rival OpenAI's ChatGPT system. The GPUs will not provide much computational value in the current/normal day-to-day tasks at Twitter - the source reckons that the extra processing power will be utilized for deep learning purposes.

Twitter has not revealed any concrete plans for its relatively new in-house artificial intelligence project, but something was afoot when, earlier this year, Musk recruited several researchers from Alphabet's DeepMind division. It was theorized at the time that he was incubating a resident AI research lab, following his personal criticisms of former colleagues at OpenAI and their very popular, widely adopted chatbot.
Apr 29th, 2024 14:23 EDT
