News Posts matching #SoC


AMD Releases Chipset Software 6.05.28.016

AMD late Tuesday released the latest version of its Chipset Software. This is an important piece of software that, besides providing drivers for the various onboard SoC interfaces of the processor and chipset, gives your Windows operating system software-side processor power-management (PPM) awareness. Version 6.05.28.016 supports AMD 300-series, 400-series, 500-series, and 600-series chipsets, and adds support for the new Windows 11 24H2 Update. Support is also added for a new program (possibly the latest version of Ryzen Master), and the release includes a few unspecified bug fixes.

DOWNLOAD: AMD Chipset Software 6.05.28.016

The changelog follows.

Intel's New SoC Solution Accelerates Electric Vehicle Innovation, Slashing Costs

The high purchase price of an electric vehicle (EV) remains one of the biggest barriers for potential buyers on a global scale. EVs are currently more expensive to build than traditional gasoline-powered cars, primarily because of the high costs associated with advanced battery and e-motor technology. The near-term solution is to enhance the efficiency of the existing battery technology through energy savings at the vehicle level, including improved integration with EV station infrastructure. This is exactly the challenge that Silicon Mobility, an Intel Company, has now solved with today's launch of the new OLEA U310 system-on-chip (SoC). This next-gen technology promises to significantly improve the overall performance of electric vehicles (EVs), streamline design and production processes, and expand SoC services to ensure seamless operation across various EV station platforms.

Representing a first for the industry, the new SoC is the only complete solution that combines hardware and software in one, and is engineered to match the need for powertrain domain control in electrical architectures with distributed software. Built with a unique hybrid and heterogeneous architecture, a single OLEA 310 FPCU can replace as many as six standard microcontrollers in a system combination in which it controls an inverter, a motor, a gearbox, a DC-DC converter, and an on-board charger. Using the 310 FPCU, original equipment manufacturers (OEMs) and Tier 1 suppliers can control multiple and diverse power and energy functions simultaneously in real time.

Nightmare Fuel for Intel: Arm CEO Predicts Arm will Take Over 50% Windows PC Market-share by 2029

Arm CEO Rene Haas predicts that SoCs based on the Arm CPU machine architecture will beat x86 in the Windows PC space in the next 5 years (by 2029). Haas is bullish about the current crop of Arm SoCs striking the right balance of performance and power efficiency, along with just the right blend of on-chip acceleration for AI and graphics, to make serious gains in this market, which has traditionally been dominated by the x86 machine architecture, with chips from just two manufacturers—Intel and AMD. Arm, on the other hand, has a vibrant ecosystem of SoC vendors. "Arm's market share in Windows - I think, truly, in the next five years, it could be better than 50%," Haas said in an interview with Reuters.

Currently, Microsoft has an exclusive deal with Qualcomm to power Windows-on-Arm (WoA) Copilot+ AI PCs. Qualcomm's chip lineup spans the Snapdragon X Elite and Snapdragon X Plus. This exclusivity, however, could change, with a recent interview of Michael Dell and Jensen Huang hinting at NVIDIA working on a chip for the AI PC market. The writing is on the wall for Intel and AMD—they need to compete with Arm on its terms: to make leaner PC processors with the kinds of performance/Watt and chip costs that Arm SoCs offer to PC OEMs. Intel has taken a big step in this direction with its "Lunar Lake" processor; you can read all about the architecture here.

MediaTek Joins Arm Total Design to Shape the Future of AI Computing

MediaTek announced today at COMPUTEX 2024 that the company has joined Arm Total Design, a fast-growing ecosystem that aims to accelerate and simplify the development of products based on Arm Neoverse Compute Subsystems (CSS). Arm Neoverse CSS is designed to meet the performance and efficiency needs of AI applications in the data center, infrastructure systems, telecommunications, and beyond.

"Together with Arm, we're enabling our customers' designs to meet the most challenging workloads for AI applications, maximizing performance per watt," said Vince Hu, Corporate Vice President at MediaTek. "We will be working closely with Arm as we expand our footprint into data centers, utilizing our expertise in hybrid computing, AI, SerDes and chiplets, and advance packaging technologies to accelerate AI innovation from the edge to the cloud."

Silicon Motion Unveils New USB Display Interface SoC Enabling Cost-Effective Universal Docking Stations

Silicon Motion Technology Corporation, a global leader in designing and marketing controller solutions for display interface and NAND storage, today announced the launch of the SM770 USB display interface SoC for USB docking stations, simplifying connectivity for multiple 4K Ultra-High Definition displays with low latency and low power consumption.

The new SM770 is a high-performance USB display interface SoC that supports up to three concurrent 4K UHD (3840x2160@60p) displays. It leverages Silicon Motion's innovative CAT (Content Adaptive Technology), which employs multiple image and video processing algorithms to compress display data and minimize bandwidth usage, delivering ultra-low latency from the computer to the monitors. By offloading most of the compression work to a hardware accelerator, CAT operates more efficiently and reduces the CPU load.
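To put those claims in perspective, here is a rough back-of-the-envelope sketch in Python (not part of the announcement; it assumes 24-bit RGB at 60 Hz with no blanking overhead and a nominal 10 Gb/s USB 3.2 Gen 2 link) showing why compression is essential when a dock drives three 4K displays over a single USB cable:

# Hypothetical estimate of uncompressed display bandwidth vs. a USB link.
# Assumptions (not from the article): 24-bit RGB, no blanking overhead,
# and a nominal 10 Gb/s USB 3.2 Gen 2 connection to the dock.
WIDTH, HEIGHT, REFRESH_HZ, BITS_PER_PIXEL = 3840, 2160, 60, 24
DISPLAYS = 3
USB_LINK_GBPS = 10

per_display_gbps = WIDTH * HEIGHT * REFRESH_HZ * BITS_PER_PIXEL / 1e9
total_gbps = per_display_gbps * DISPLAYS

print(f"Uncompressed per display: {per_display_gbps:.1f} Gb/s")          # ~11.9 Gb/s
print(f"Uncompressed for {DISPLAYS} displays: {total_gbps:.1f} Gb/s")    # ~35.8 Gb/s
print(f"Minimum compression ratio needed: ~{total_gbps / USB_LINK_GBPS:.1f}x")  # ~3.6x

Even a single uncompressed 4K60 stream exceeds the 10 Gb/s link on its own, which is why a hardware-accelerated compression scheme such as CAT is central to the SM770's design.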

MediaTek to Unveil Products for Chromebooks, Smart TVs and Displays at Computex 2024

At COMPUTEX 2024, MediaTek will showcase new products and technology demonstrations, featuring highlights in AI, Smart TVs, Chromebooks, IoT and more, in addition to a keynote by Vice Chairman and CEO Dr. Rick Tsai about how MediaTek can enable ubiquitous AI.

At this year's show, MediaTek will debut two new chipsets with powerful performance and support for the latest AI enhancements across multiple verticals: the Kompanio 838 SoC for premium Chromebooks, and the Pentonic 800 SoC for 4K premium smart TVs and displays.

Arm Announces the Cortex-X925 and Cortex-A725 Armv9 CPU Cores

Arm has announced a pair of new Armv9 CPU cores today, alongside a refresh of a third. The new additions are the Cortex-X925—which is a huge model-number jump from the previous Cortex-X4—and the Cortex-A725, which should be an upgraded Cortex-A720. Finally, the Cortex-A520 has been refreshed to bring a 15 percent power-efficiency improvement as well as support for 3 nm production nodes. Arm claims that the Cortex-X925 delivers its highest performance improvement ever over a previous generation, with a single-core uplift of up to 36 percent and an AI performance improvement of up to 46 percent compared to the Cortex-X4. The Cortex-X925 will support up to 3 MB of private L2 cache and is tape-out ready for 3 nm production nodes.

The Cortex-A725 is said to offer a 35 percent performance efficiency improvement over the Cortex-A720, and it has been given performance boosts in both AI and gaming workloads. It's said to be up to 25 percent more power efficient than the Cortex-A720, and L3 cache traffic has been improved by up to 20 percent. Again, the Cortex-A725 is ready for production on a 3 nm node. Finally, Arm has also updated its DynamIQ Shared Unit to the DSU-120, where Arm has managed to reduce the typical workload power by up to 50 percent and the cache-miss power by up to 60 percent. The DSU-120 scales up to 14 Arm cores, suggesting that we might get to see some interesting new SoC implementations in the coming years from Arm's partners, although Arm's reference platform is a 2-4-2 configuration of the new cores.

Qualcomm's Success with Windows AI PC Drawing NVIDIA Back to the Client SoC Business

NVIDIA is eyeing a comeback to the client processor business, reveals a Bloomberg interview with the CEOs of NVIDIA and Dell. For NVIDIA, all it takes is a simple driver update that exposes every GeForce GPU with tensor cores as an NPU to Windows 11, with translation layers to get popular client AI apps to work with TensorRT. But that would need you to have a discrete NVIDIA GPU. What about the vast market of Windows AI PCs powered by the likes of Qualcomm, Intel, and AMD, who each sell 15 W-class processors with integrated NPUs capable of 50 AI TOPS, which is all that Copilot+ needs? NVIDIA has held an Arm license for decades and makes Arm-based CPUs to this day with NVIDIA Grace; however, that is a large server processor meant for its AI GPU servers.

NVIDIA already made client processors under the Tegra brand targeting smartphones, a business it wound down last decade. It has since been making Drive PX processors for its automotive self-driving hardware division; and of course there's Grace. NVIDIA hinted that it might have a client CPU for the AI PC market in 2025. In the interview, Bloomberg asked NVIDIA CEO Jensen Huang a pointed question on whether NVIDIA has a place in the AI PC market. Dell CEO Michael Dell, who was also in the interview, interjected "come back next year," to which Jensen affirmed "exactly." Dell would be in a front-and-center position to know if NVIDIA is working on a new PC processor for launch in 2025, and Jensen's nod almost confirms this.

Apple COO Meets with TSMC CEO to Reserve First Batch of 2 nm Allocation

Apple is locked in a fierce competition to stay ahead in the client AI applications race, and needs access to the latest foundry process at TSMC to build its future-generation SoCs on. The company's COO, Jeff Williams, reportedly paid a visit to TSMC CEO CC Wei to discuss Apple's allocation of the Taiwanese foundry's 2 nm-class silicon fabrication process for its next-generation M-series and A-series SoCs powering its future generations of iPhone, iPad, and Mac. Taiwan-based industry observer Economic Daily, which broke this story, says that it isn't just an edge in performance and efficiency that Apple is after, but also leadership in generative AI and client AI applications. The company has reportedly invested over $100 billion in generative AI research and development over the past 5 years.

Apple's latest silicon, the M4 SoC, which debuted with the iPad Pro earlier this month, is built on TSMC's N3E (3 nm-class) node, and it's widely expected that the rest of the M4 line of SoCs for Macs, and the "A18," could be built on the same process, which would cover Apple for the rest of 2024, going into the first half of 2025. TSMC is expected to commence mass-production of chips on its 2 nm node in 2025, which is why Apple is in the TSMC boss's office to seek the first foundry allocation.

Microsoft Introduces Copilot+ PCs

Today, at a special event on our new Microsoft campus, we introduced the world to a new category of Windows PCs designed for AI, Copilot+ PCs. Copilot+ PCs are the fastest, most intelligent Windows PCs ever built. With powerful new silicon capable of an incredible 40+ TOPS (trillion operations per second), all-day battery life and access to the most advanced AI models, Copilot+ PCs will enable you to do things you can't on any other PC. Easily find and remember what you have seen in your PC with Recall, generate and refine AI images in near real-time directly on the device using Cocreator, and bridge language barriers with Live Captions, translating audio from 40+ languages into English.

These experiences come to life on a set of thin, light and beautiful devices from Microsoft Surface and our OEM partners Acer, ASUS, Dell, HP, Lenovo and Samsung, with pre-orders beginning today and availability starting on June 18. Starting at $999, Copilot+ PCs offer incredible value. This first wave of Copilot+ PCs is just the beginning. Over the past year, we have seen an incredible pace of innovation of AI in the cloud with Copilot, allowing us to do things that we never dreamed possible. Now, we begin a new chapter with AI innovation on the device. We have completely reimagined the entirety of the PC - from silicon to the operating system, the application layer to the cloud - with AI at the center, marking the most significant change to the Windows platform in decades.

TSMC Unveils Next-Generation HBM4 Base Dies, Built on 12 nm and 5 nm Nodes

During the European Technology Symposium 2024, TSMC announced its readiness to manufacture next-generation HBM4 base dies using both 12 nm and 5 nm nodes. This significant development is expected to substantially improve the performance, power consumption, and logic density of HBM4 memory, catering to the demands of high-performance computing (HPC) and artificial intelligence (AI) applications. The shift from a traditional 1024-bit interface to an ultra-wide 2048-bit interface is a key aspect of the new HBM4 standard. This change will enable the integration of more logic and higher performance while reducing power consumption. TSMC's N12FFC+ and N5 processes will be used to produce these base dies, with the N12FFC+ process offering a cost-effective solution for achieving HBM4 performance and the N5 process providing even more logic and lower power consumption at HBM4 speeds.

The company is collaborating with major HBM memory partners, including Micron, Samsung, and SK Hynix, to integrate advanced nodes for HBM4 full-stack integration. TSMC's base die, fabricated using the N12FFC+ process, will be used to install HBM4 memory stacks on a silicon interposer alongside system-on-chips (SoCs). This setup will enable the creation of 12-Hi (48 GB) and 16-Hi (64 GB) stacks with per-stack bandwidth exceeding 2 TB/s. TSMC's collaboration with EDA partners like Cadence, Synopsys, and Ansys ensures the integrity of HBM4 channel signals, thermal accuracy, and electromagnetic interference (EMI) in the new HBM4 base dies. TSMC is also optimizing CoWoS-L and CoWoS-R for HBM4 integration, meaning that massive high-performance chips are already utilizing this technology and getting ready for volume manufacturing.
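As a rough sanity check of those figures, the per-stack bandwidth and capacity can be reconstructed from the interface width and die count. The per-pin data rate and per-die density in the Python sketch below are assumptions, since the article only quotes the 2048-bit interface, the stack heights, and the >2 TB/s target:

# Hypothetical HBM4 per-stack arithmetic; the per-pin rate and die density
# are assumed values, not figures from TSMC's announcement.
INTERFACE_BITS = 2048            # HBM4 interface width per stack
ASSUMED_PIN_RATE_GBPS = 8        # assumed per-pin data rate (Gb/s)
ASSUMED_DIE_CAPACITY_GB = 4      # assumed 32 Gbit (4 GB) DRAM dies

stack_bandwidth_tbs = INTERFACE_BITS * ASSUMED_PIN_RATE_GBPS / 8 / 1000
print(f"Per-stack bandwidth: {stack_bandwidth_tbs:.2f} TB/s")   # ~2.05 TB/s, just over the 2 TB/s target

for stack_height in (12, 16):
    print(f"{stack_height}-Hi stack: {stack_height * ASSUMED_DIE_CAPACITY_GB} GB")  # 48 GB and 64 GB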

ASUS Leaks its own Snapdragon X Elite Notebook

Courtesy of ASUS Vietnam (via @rquandt on X/Twitter), we now have an idea of what ASUS' first Qualcomm Snapdragon X Elite notebook will look like, as well as its main specifications. It will share the Vivobook S 15 OLED branding with other notebooks from ASUS, although the leaked model carries the model number S5507QA-MA089WS. At its core is a Qualcomm Snapdragon X Elite X1E-78-100 SoC, which is the base model from Qualcomm. The SoC consists of 12 Oryon cores, of which eight are performance cores and four are energy-efficient cores. A peak multi-threaded clock speed of 3.4 GHz, 42 MB of cache, and a 75 TOPS AI engine round off the SoC specs. The SoC is also home to a Qualcomm Adreno GPU, but so far Qualcomm hasn't released any useful specs about the GPU in the Snapdragon X Elite series of chips.

ASUS has paired the SoC with 32 GB of LPDDR5X memory of an unknown clock speed, although Qualcomm officially supports speeds of up to 8,448 MT/s in a configuration unusual for PCs: eight channels, each 16 bits wide, for a bandwidth of up to 135 GB/s. For comparison, Intel's latest Core Ultra processors max out at LPDDR5X-7467 and up to 120 GB/s of memory bandwidth. Other features include a 1 TB PCIe 4.0 NVMe SSD, a glossy 15.6-inch, 2,880 x 1,620, 120 Hz OLED display with 600 nits peak brightness, and a 70 Wh battery. It's unclear what connectivity options will be on offer, but judging by the screenshot below, we can at least expect an HDMI output as well as a pair of USB Type-C ports, a microSD card slot, and a headphone jack. As far as pricing goes, Roland Quandt is suggesting a €1,500 base price on X/Twitter, but we'll have to wait for the official launch to find out what these Arm-based laptops will retail for. ASUS Vietnam has already removed the page from its website.
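The memory-bandwidth comparison above follows directly from the quoted transfer rates and bus widths; the short Python sketch below reproduces both figures (Intel's 128-bit total bus width is an assumption made here to match the ~120 GB/s number):

# Peak DRAM bandwidth (GB/s) = transfer rate (MT/s) x bus width (bits) / 8 / 1000.
def peak_bandwidth_gbs(transfer_rate_mts: float, bus_width_bits: int) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    return transfer_rate_mts * bus_width_bits / 8 / 1000

# Snapdragon X Elite: eight 16-bit LPDDR5X channels (128 bits total) at 8,448 MT/s.
print(f"Snapdragon X Elite: {peak_bandwidth_gbs(8448, 8 * 16):.1f} GB/s")             # ~135.2 GB/s

# Intel Core Ultra: LPDDR5X-7467; a 128-bit bus is assumed to match the cited figure.
print(f"Core Ultra (assumed 128-bit bus): {peak_bandwidth_gbs(7467, 128):.1f} GB/s")  # ~119.5 GB/s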

Apple Introduces the M4 Chip

Apple today announced M4, the latest chip delivering phenomenal performance to the all-new iPad Pro. Built using second-generation 3-nanometer technology, M4 is a system on a chip (SoC) that advances the industry-leading power efficiency of Apple silicon and enables the incredibly thin design of iPad Pro. It also features an entirely new display engine to drive the stunning precision, color, and brightness of the breakthrough Ultra Retina XDR display on iPad Pro. A new CPU has up to 10 cores, while the new 10-core GPU builds on the next-generation GPU architecture introduced in M3, and brings Dynamic Caching, hardware-accelerated ray tracing, and hardware-accelerated mesh shading to iPad for the first time. M4 has Apple's fastest Neural Engine ever, capable of up to 38 trillion operations per second, which is faster than the neural processing unit of any AI PC today. Combined with faster memory bandwidth, along with next-generation machine learning (ML) accelerators in the CPU, and a high-performance GPU, M4 makes the new iPad Pro an outrageously powerful device for artificial intelligence.

"The new iPad Pro with M4 is a great example of how building best-in-class custom silicon enables breakthrough products," said Johny Srouji, Apple's senior vice president of Hardware Technologies. "The power-efficient performance of M4, along with its new display engine, makes the thin design and game-changing display of iPad Pro possible, while fundamental improvements to the CPU, GPU, Neural Engine, and memory system make M4 extremely well suited for the latest applications leveraging AI. Altogether, this new chip makes iPad Pro the most powerful device of its kind."

Radxa Launches NAS Friendly ROCK 5 ITX Motherboard with Arm SoC

Radxa is a Chinese manufacturer of various Arm-based devices and something of a minor competitor to the Raspberry Pi Foundation. The company has just launched its latest product, called the ROCK 5 ITX. As the name implies, it's a Mini-ITX form factor motherboard, which in itself is rather unusual for Arm-based hardware to start with. However, Radxa has designed the ROCK 5 ITX to be a NAS motherboard, and this is the first time we've come across such a product, as most Arm-based boards are either intended for hobby projects, software development, or routers. This makes the ROCK 5 ITX quite unique, at least based on its form factor, as it'll be compatible with standard Mini-ITX chassis.

The SoC on the board is a Rockchip RK3588, which sports four Cortex-A76 cores at up to 2.4 GHz and four Cortex-A55 cores at 1.8 GHz. This is not exactly cutting edge, but should be plenty fast enough for a SATA drive-based NAS. The board offers four SATA 6 Gbps connectors via an ASMedia ASM1164 controller, each with an individual power connector next to it. However, Radxa seems to have chosen fan-header-type power connectors, which means it'll be hard to get replacement power cables. The board also has a PCIe 3.0 x2 M.2 slot for an NVMe drive. The OS boots from eMMC, and Radxa supports its own Roobi OS, which is Debian Linux-based.

PC Market Returns to Growth in Q1 2024 with AI PCs to Drive Further 2024 Expansion

Global PC shipments grew around 3% YoY in Q1 2024 after eight consecutive quarters of declines due to demand slowdown and inventory correction, according to the latest data from Counterpoint Research. The shipment growth in Q1 2024 came on a relatively low base in Q1 2023. The coming quarters of 2024 will see sequential shipment growth, resulting in 3% YoY growth for the full year, largely driven by AI PC momentum, shipment recovery across different sectors, and a fresh replacement cycle.

Lenovo's PC shipments were up 8% in Q1 2024 off an easy comparison from last year. The brand managed to reclaim its 24% share in the market, compared to 23% in Q1 2023. HP and Dell, with market shares of 21% and 16% respectively, remained flattish, waiting for North America to drive shipment growth in the coming quarters. Apple's shipment performance was also resilient, with the 2% growth mainly supported by M3 base models.

AMD Extends Leadership Adaptive SoC Portfolio with New Versal Series Gen 2 Devices Delivering End-to-End Acceleration for AI-Driven Embedded Systems

AMD today announced the expansion of the AMD Versal adaptive system on chip (SoC) portfolio with the new Versal AI Edge Series Gen 2 and Versal Prime Series Gen 2 adaptive SoCs, which bring preprocessing, AI inference, and postprocessing together in a single device for end-to-end acceleration of AI-driven embedded systems.

These initial devices in the Versal Series Gen 2 portfolio build on the first generation with powerful new AI Engines expected to deliver up to 3x higher TOPS-per-watt than first-generation Versal AI Edge Series devices, while new high-performance integrated Arm CPUs are expected to offer up to 10x more scalar compute than first-gen Versal AI Edge and Prime series devices.

SiFive Unveils the HiFive Premier P550 Out-of-Order RISC-V Development Board

Today at Embedded World, SiFive, Inc., the pioneer and leader of RISC-V computing, unveiled its new state-of-the-art RISC-V development board, the HiFive Premier P550. The board will be available for large-scale deployment through Arrow Electronics so developers around the world can test and develop new RISC-V applications like machine vision, video analysis, AI PC and others, allowing them to use AI and other cutting-edge technologies across many different market segments.

With a quad-core SiFive Performance P550 processor, the HiFive Premier P550 is the highest performance RISC-V development board in the industry, and the latest in the popular HiFive family. Designed to meet the computing needs of modern workloads, the out-of-order P550 core delivers superior compute density and performance in an energy-efficient area footprint. Furthermore, the modular design of the HiFive Premier P550, which includes a replaceable system-on-module (SOM) board, gives developers the flexibility they need to tailor their designs.

Imagination's new Catapult CPU is Driving RISC-V Device Adoption

Imagination Technologies today unveils the next product in the Catapult CPU IP range, the Imagination APXM-6200 CPU: a RISC-V application processor with compelling performance density, seamless security, and the artificial intelligence capabilities needed to support the compute and intuitive user-experience requirements of next-generation consumer and industrial devices.

"The number of RISC-V based devices is skyrocketing with over 16Bn units forecast by 2030, and the consumer market is behind much of this growth" says Rich Wawrzyniak, Principal Analyst at SHD Group. "One fifth of all consumer devices will have a RISC-V based CPU by the end of this decade. Imagination is set to be a force in RISC-V with a strategy that prioritises quality and ease of adoption. Products like APXM-6200 are exactly what will help RISC-V achieve the promised success."

Arm China Develops NPU Accelerator for AI, Targeting Domestic CPUs

Arm China is making strides in the AI accelerator market with its new neural processing unit (NPU) called Zhouyi. The company aims to integrate the NPU into low-cost domestic CPUs, potentially giving it an edge over competitors like AMD and Intel. Initially a part of Arm Holdings, which licensed IP in China, Arm China took on a new strategy of developing its own IP specifically for Chinese customers a few years ago. While the company does not develop high-performance general-purpose cores, its Zhouyi NPU could become a fundamental building block for affordable processors. A significant step forward is the upcoming addition of an open-source driver for Zhouyi to the Linux kernel. This will make the IP easy to program for software developers, increasing its appeal to chip designers.

The open-source driver's integration into the Linux kernel gives developers assurance that the Zhouyi NPU could be the first of many generations from Arm China. While Zhouyi may not directly compete with offerings from AMD or Intel, its potential for widespread adoption in millions of devices could help Arm China acquire local customers for its IP. The project, which began three years ago with a kernel-only driver, has since evolved into a full driver stack. There is even a development kit board called the EAIDK310, powered by a Rockchip SoC and the Zhouyi NPU, which is available on AliExpress and Amazon. The integration of AI accelerator technology into the Linux ecosystem is a significant development, though there is still work to be done. Nonetheless, Arm China's Zhouyi NPU and open-source driver are essential to making AI capabilities more accessible and widely available in the domestic Chinese market.

Google Launches Arm-Optimized Chrome for Windows, in Time for Qualcomm Snapdragon X Elite Processors

Google has just released an Arm-optimized version of its popular Chrome browser for Windows PCs. This new version is designed to take full advantage of Arm-based devices' hardware and operating system, promising users a faster and smoother browsing experience. The Arm-optimized Chrome for Windows has been developed in close collaboration with Qualcomm, ensuring that Chrome users get the best possible experience on current Arm-compatible PCs. Hiroshi Lockheimer, Senior Vice President at Google, stated, "We've designed Chrome browser to be fast, secure, and easy to use across desktops and mobile devices, and we're always looking for ways to bring this experience to more people." Early testers of the Arm-optimized Chrome have reported significant performance improvements compared to the x86-emulated version. The new browser is rolling out starting today and will be available on existing Arm devices, including PCs powered by Snapdragon 8cx, 8c, and 7c processors.

Soon, Chrome will receive a further performance boost with Qualcomm's upcoming Snapdragon X Elite SoC launch. Cristiano Amon, President and CEO of Qualcomm, expressed his excitement about the collaboration, saying, "As we enter the era of the AI PC, we can't wait to see Chrome shine by taking advantage of the powerful Snapdragon X Elite system." Qualcomm's Snapdragon X Elite devices are expected to hit the market in mid-2024 with "dramatic performance improvement in the Speedometer 2.0 benchmark" on reference hardware. With Chrome being one of the most essential applications, getting a native build running on Windows-on-Arm is a significant step for the platform, promising more investment from software makers.

MediaTek Licenses NVIDIA GPU IP for AI-Enhanced Vehicle Processors

NVIDIA has been offering its GPU IP for licensing for more than a decade, ever since the introduction of the Kepler microarchitecture, but the IP has seen relatively low traction in third-party SoCs. That trend seems to be reaching an inflection point, as NVIDIA has given MediaTek a license to use its GPU IP to produce the next generation of processors for the auto industry. The newest MediaTek Dimensity Auto Cockpit family consists of the CX-1, CY-1, CM-1, and CV-1, where the CX-1 targets premium vehicles, the CM-1 targets the mid-range, and the CV-1 targets lower-end vehicles, probably divided by their compute capabilities. The Dimensity Auto Cockpit family is brimming with the latest technology, as the processor core of choice is an Armv9-based design paired with "next-generation" NVIDIA GPU IP, possibly referring to Blackwell, capable of ray tracing and DLSS 3, powered by RTX and DLA.

The SoC is supposed to integrate a lot of technology to lower BOM costs of auto manufacturing, and it includes silicon for controlling displays, cameras (advanced HDR ISP), audio streams (multiple audio DSPs), and connectivity (WiFi networking). Interestingly, the SKUs can play movies with AI-enhanced video and support AAA gaming. MediaTek touts the Dimensity Auto Cockpit family with fully local AI processing capabilities, without requiring assistance from outside servers via WiFi, and 3D spatial sensing with driver and occupant monitoring, gaze-aware UI, and natural controls. All of that fits into an SoC fabricated at TSMC's fab on a 3 nm process and runs on the industry-established NVIDIA DRIVE OS.

Alibaba Unveils Plans for Server-Grade RISC-V Processor and RISC-V Laptop

Chinese e-commerce and cloud giant Alibaba announced its plans to launch a server-grade RISC-V processor later this year, and it showcased a RISC-V-powered laptop running an open-source operating system. The announcements were made by Alibaba's research division, the Damo Academy, at the recent Xuantie RISC-V Ecological Conference in Shenzhen. The upcoming server-class processor, called the Xuantie C930, is expected to be launched by the end of 2024. While specific details about the chip have not been disclosed, it is anticipated to cater to AI and server workloads. This development is part of Alibaba's ongoing efforts to expand its RISC-V portfolio and reduce reliance on foreign chip technologies amidst US export restrictions. To complement the C930, Alibaba is also preparing a Xuantie 907 matrix processing unit for AI, which could be an IP block inside an SoC like the C930 or an SoC of its own.

In addition to the C930, Alibaba showcased the RuyiBOOK, a laptop powered by the company's existing T-Head C910 processor. The C910, previously designed for edge servers, AI, and telecommunications applications, has been adapted for use in laptops. Strangely, the RuyiBOOK laptop runs on the openEuler operating system, an open-source version of Huawei's EulerOS, which is based on Red Hat Linux. The laptop also features Alibaba's collaboration suite, DingTalk, and the open-source office software LibreOffice, demonstrating its potential to cater to the needs of Chinese knowledge workers and consumers without relying on foreign software. Zhang Jianfeng, president of the Damo Academy, emphasized the increasing demand for new computing power and the potential for RISC-V to enter a period of "application explosion." Alibaba plans to continue investing in RISC-V research and development and fostering collaboration within the industry to promote innovation and growth in the RISC-V ecosystem, lessening reliance on US-sourced technology.

Sony Semiconductor Solutions Selects Cutting-Edge AMD Adaptive Computing Tech

Yesterday, AMD announced that its cutting-edge adaptive computing technology was selected by Sony Semiconductor Solutions (SSS) for its newest automotive LiDAR reference design. SSS, a global leader in image sensor technology, and AMD joined forces to deliver a powerful and efficient LiDAR solution for use in autonomous vehicles. Using adaptive computing technology from AMD significantly extends the SSS LiDAR system capabilities, offering extraordinary accuracy, fast data processing, and high reliability for next-generation autonomous driving solutions.

In the rapidly evolving landscape of autonomous driving, the demand for precise and reliable sensor technology has never been greater. LiDAR (Light Detection and Ranging) technology plays a pivotal role in enabling depth perception and environmental mapping for various industries. LiDAR delivers image classification, segmentation, and object detection data that is essential for 3D vision perception enhanced by AI, which cannot be provided by cameras alone, especially in low-light or inclement weather. The dedicated LiDAR reference design addresses the complexities of autonomous vehicle development with a standardized platform to enhance safety in navigating diverse driving scenarios.

Samsung Prepares Mach-1 Chip to Rival NVIDIA in AI Inference

During its 55th annual shareholders' meeting, Samsung Electronics announced its entry into the AI processor market with the upcoming launch of its Mach-1 AI accelerator chips in early 2025. The South Korean tech giant revealed its plans to compete with established players like NVIDIA in the rapidly growing AI hardware sector. The Mach-1 generation of chips is an application-specific integrated circuit (ASIC) design equipped with LPDDR memory that is envisioned to excel in edge computing applications. While Samsung does not aim to directly rival NVIDIA's ultra-high-end AI solutions like the H100, B100, or B200, the company's strategy focuses on carving out a niche in the market by offering unique features and performance enhancements at the edge, where low power and efficient computing matter most.

According to SeDaily, the Mach-1 chips boast a groundbreaking feature that significantly reduces memory bandwidth requirements for inference to approximately 0.125x compared to existing designs, which is an 87.5% reduction. This innovation could give Samsung a competitive edge in terms of efficiency and cost-effectiveness. As the demand for AI-powered devices and services continues to soar, Samsung's foray into the AI chip market is expected to intensify competition and drive innovation in the industry. While NVIDIA currently holds a dominant position, Samsung's cutting-edge technology and access to advanced semiconductor manufacturing nodes could make it a formidable contender. The Mach-1 has been field-verified on an FPGA, while the final design is currently going through a physical design for SoC, which includes placement, routing, and other layout optimizations.

NVIDIA "Blackwell" GeForce RTX to Feature Same 5nm-based TSMC 4N Foundry Node as GB100 AI GPU

Following Monday's blockbuster announcements of the "Blackwell" architecture and NVIDIA's B100, B200, and GB200 AI GPUs, all eyes are now on its client graphics derivatives, or the GeForce RTX GPUs that implement "Blackwell" as a graphics architecture. Leading the effort will be the new GB202 ASIC, a successor to the AD102 powering the current RTX 4090. This will be NVIDIA's biggest GPU with raster graphics and ray tracing capabilities. The GB202 is rumored to be followed by the GB203 in the premium segment, the GB205 a notch lower, and the GB206 further down the stack. Kopite7kimi, a reliable source with NVIDIA leaks, says that the GB202 silicon will be built on the same TSMC 4N foundry node as the GB100.

TSMC 4N is a derivative of the company's mainline N4P node; the "N" in 4N stands for NVIDIA. This is a nodelet that TSMC designed with optimizations for NVIDIA SoCs. TSMC still considers 4N a derivative of the 5 nm EUV node. There is very little public information on the power and transistor-density improvements of TSMC 4N over TSMC N5. For reference, the N4P, which TSMC regards as a 5 nm derivative, offers a 6% transistor-density improvement and a 22% power-efficiency improvement. In related news, Kopite7kimi says that with "Blackwell," NVIDIA is focusing on enlarging the L1 caches of the streaming multiprocessors (SM), which suggests a design focus on increasing performance at the SM level.