News Posts matching #Arm


GEEKOM QS1 Pro Mini PC Specs Leak Reveals 12-core Snapdragon X Elite SoC, up to 64GB of Memory

Just a few days ago, we reported on a leaked teaser for GEEKOM's upcoming QS1 Pro mini PC. The system is set to mark GEEKOM's foray into the world of Arm-based PCs, likely in a bid to take on Apple's Mac mini. However, if a recent leak is to be believed, the QS1 Pro may have a tough time pulling that off.

The leaked specifications, courtesy of a Spanish publication, reveal that the QS1 Pro will feature the Snapdragon X1E-80-100 SoC - the second-fastest member of the X Elite family, slotting in below the 84-100 SKU. The X1E-80-100 boasts 12 Oryon cores, along with a 3.8 TFLOPs Adreno GPU. Interestingly, the leaked specs claim GPU performance of up to 4.6 TFLOPs, which is either a typo, or an indication that an X1E-84-100 variant will be available.
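As a sanity check, peak FP32 throughput for a GPU is simply ALU count × 2 FMA ops per cycle × clock. A minimal sketch, assuming the commonly reported 1,536 FP32 ALUs for the Adreno X1-85 and its reported clock speeds (neither figure is confirmed by this leak):

```python
# Peak FP32 throughput = ALUs x 2 ops/cycle (FMA) x clock.
# The ALU count (1536) and clocks (1.25 / 1.5 GHz) are reported figures,
# not confirmed by the leak itself.
def fp32_tflops(alus: int, clock_ghz: float) -> float:
    return alus * 2 * clock_ghz / 1000

print(round(fp32_tflops(1536, 1.25), 2))  # ~3.84, matching the 3.8 TFLOPs spec
print(round(fp32_tflops(1536, 1.50), 2))  # ~4.61, matching the 4.6 TFLOPs figure
```

Under these assumptions, the 4.6 TFLOPs figure lines up neatly with a higher GPU clock, i.e. an X1E-84-100 variant, rather than a typo.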

Broadcom Delivers Industry's First 3.5D F2F Technology for AI XPUs

Broadcom Inc. today announced the availability of its 3.5D eXtreme Dimension System in Package (XDSiP) platform technology, enabling consumer AI customers to develop next-generation custom accelerators (XPUs). The 3.5D XDSiP integrates more than 6000 mm² of silicon and up to 12 high bandwidth memory (HBM) stacks in one packaged device to enable high-efficiency, low-power computing for AI at scale. Broadcom has achieved a significant milestone by developing and launching the industry's first Face-to-Face (F2F) 3.5D XPU.

The immense computational power required for training generative AI models relies on massive clusters of 100,000 XPUs today, growing toward 1 million. These XPUs demand increasingly sophisticated integration of compute, memory, and I/O capabilities to achieve the necessary performance while minimizing power consumption and cost. Traditional approaches built on Moore's Law process scaling are struggling to keep up with these demands, so advanced system-in-package (SiP) integration is becoming crucial for next-generation XPUs. Over the past decade, 2.5D integration, which places multiple chiplets totaling up to 2,500 mm² of silicon and up to eight HBM stacks on an interposer, has proven valuable for XPU development. However, as new and increasingly complex LLMs are introduced, their training necessitates 3D silicon stacking for better size, power, and cost. Consequently, 3.5D integration, which combines 3D silicon stacking with 2.5D packaging, is poised to become the technology of choice for next-generation XPUs in the coming decade.

GEEKOM Teases World's First Snapdragon X Elite Desktop Mini PC

This was bound to happen sooner rather than later—desktop mini-PC designer GEEKOM, which specializes in mini-PCs powered by mobile processors, teased its first product powered by a Qualcomm Snapdragon X Elite processor. This marks one of the first consumer desktops to run Windows 11 on Arm. The company hasn't released specs for the desktop, but it should go up against the base model of the Apple Mac Mini M4 in its use-case, as a slick and efficient everyday desktop for Internet and office productivity. The GEEKOM desktop has a very Mac Mini-like product design. The front features a sensibly placed power button, next to a 4-pole headset jack and a couple of Type-A USB 3.x ports. The side appears to have a multi-format card reader. There are no pictures of the rear I/O.

Raspberry Pi Compute Module 5 Officially Launches With Broadcom BCM2712 Quad-Core SoC

Today we're happy to announce the much-anticipated launch of Raspberry Pi Compute Module 5, the modular version of our flagship Raspberry Pi 5 single-board computer, priced from just $45.

An unexpected journey
We founded the Raspberry Pi Foundation back in 2008 with a mission to give today's young people access to the sort of approachable, programmable, affordable computing experience that I benefitted from back in the 1980s. The Raspberry Pi computer was, in our minds, a spiritual successor to the BBC Micro, itself the product of the BBC's Computer Literacy Project. But just as the initially education-focused BBC Micro quickly found a place in the wider commercial computing marketplace, so Raspberry Pi became a platform around which countless companies, from startups to multi-billion-dollar corporations, chose to innovate. Today, between seventy and eighty percent of Raspberry Pi units go into industrial and embedded applications.

TOP500: El Capitan Achieves Top Spot, Frontier and Aurora Follow Behind

The 64th edition of the TOP500 reveals that El Capitan has achieved the top spot and is officially the third system to reach exascale computing after Frontier and Aurora. Both systems have since moved down to No. 2 and No. 3 spots, respectively. Additionally, new systems have found their way onto the Top 10.

The new El Capitan system at the Lawrence Livermore National Laboratory in California, U.S.A., has debuted as the most powerful system on the list with an HPL score of 1.742 EFlop/s. It has 11,039,616 combined CPU and GPU cores and is based on AMD 4th generation EPYC processors with 24 cores at 1.8 GHz and AMD Instinct MI300A accelerators. El Capitan relies on a Cray Slingshot 11 network for data transfer and achieves an energy efficiency of 58.89 GigaFLOPS/watt. This power efficiency rating helped El Capitan achieve No. 18 on the GREEN500 list as well.
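Those two figures imply the system's power draw during the HPL run; a quick back-of-the-envelope check using only the numbers quoted above (no official power figure is assumed):

```python
# Implied power draw = HPL score / energy efficiency.
hpl_flops = 1.742e18     # 1.742 EFlop/s
gflops_per_watt = 58.89  # GREEN500 energy-efficiency rating
power_megawatts = hpl_flops / (gflops_per_watt * 1e9) / 1e6
print(round(power_megawatts, 1))  # ~29.6 MW during the benchmark run
```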

SC24: Supercomputer Fugaku Retains First Place Worldwide in HPCG and Graph500 Rankings

The supercomputer Fugaku, jointly developed by RIKEN and Fujitsu, has successfully retained the top spot for 10 consecutive terms in two major high-performance computer rankings, HPCG and Graph500 BFS (Breadth-First Search), and has also taken sixth place for the TOP500 and fourth place for the HPL-MxP rankings. The HPCG is a performance ranking for computing methods often used for real-world applications, and the Graph500 ranks systems based on graph analytic performance, an important element in data-intensive workloads. The results of the rankings were announced on November 19 at SC24, which is currently being held at Georgia World Congress Center in Atlanta, Georgia, USA.

The top ranking on Graph500 was won by a collaboration involving RIKEN, Institute of Science Tokyo, Fixstars Corporation, Nippon Telegraph and Telephone Corporation, and Fujitsu. It earned a score of 204.068 TeraTEPS with Fugaku's 152,064 nodes, an improvement of 38.038 TeraTEPS in performance from the previous measurement. This is the first time that a score of over 200 TeraTEPS has been recorded on the Graph500 benchmark.
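The quoted numbers are easy to cross-check; a small sketch deriving the previous score and the per-node throughput (plain arithmetic on the figures above, nothing assumed beyond them):

```python
score_tteps = 204.068  # TeraTEPS (tera traversed edges per second)
improvement = 38.038   # gain over the previous measurement
nodes = 152_064

prev_score = score_tteps - improvement
per_node_gteps = score_tteps * 1e12 / nodes / 1e9

print(round(prev_score, 3))      # 166.03 TeraTEPS in the previous measurement
print(round(per_node_gteps, 2))  # ~1.34 GTEPS contributed per Fugaku node
```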

Microsoft Releases Official ISO for Windows 11 on Arm

Microsoft's Windows-on-Arm (WoA) project has been going through an expansion phase, with the recent range of Snapdragon X SoCs powering many laptops. With a wave of WoA devices expected in 2025, Microsoft has prepared an official ISO image of the Windows 11 operating system, available for download on its official website. The image is about 5 GB in size and, as expected, requires an Arm-based system. The need for an official Windows 11 ISO for WoA stems from the growing number of Arm-based desktop builds shipping to developers worldwide, including workstations offered by ODMs with an Ampere Altra or Altra Max processor inside.

This is also good news for enthusiasts waiting for the NVIDIA-MediaTek collaboration to deliver its first products next year, and we expect to see some interesting solutions arise. Microsoft investing its developer resources into producing Windows 11 Arm builds signals that consumer interest in Arm-based devices is set to grow considerably.

ECS CubeSat On-Board Computer Ready for 2025 Space Mission

Elitegroup Computer Systems, with its long-standing expertise in computer motherboard design, has successfully developed the CubeSat On-Board Computer (OBC). This groundbreaking product will carry a payload and is set to launch aboard the Lilium3 CubeSat from National Cheng Kung University, expected to lift off in Q4 2025, initiating space experiments.

In parallel, ECS has developed the high-performance OBCC6M7R motherboard specifically designed for CubeSats, which will officially begin accepting orders for sale starting in November this year. The introduction of this product will accelerate ECS's commercialization of space industry technologies, injecting strong momentum into the company's future growth.

AMD and Fujitsu to Begin Strategic Partnership to Create Computing Platforms for AI and High-Performance Computing (HPC)

AMD and Fujitsu Limited today announced that they have signed a memorandum of understanding (MOU) to form a strategic partnership to create computing platforms for AI and high-performance computing (HPC). The partnership, encompassing aspects from technology development to commercialization, will seek to facilitate the creation of open source and energy efficient platforms comprised of advanced processors with superior power performance and highly flexible AI/HPC software and aims to accelerate open-source AI and/or HPC initiatives.

Due to the rapid spread of AI, including generative AI, cloud service providers and end-users are seeking optimized architectures at various price and power-per-performance configurations. AMD supports an open ecosystem end-to-end and strongly believes in giving customers choice. Fujitsu has worked to develop FUJITSU-MONAKA, a next-generation Arm-based processor that aims to achieve both high performance and low power consumption. With FUJITSU-MONAKA together with AMD Instinct accelerators, customers have an additional choice for large-scale AI workload processing while attempting to reduce data center total cost of ownership.

New Arm CPUs from NVIDIA Coming in 2025

According to DigiTimes, NVIDIA is reportedly targeting the high-end segment for its first consumer CPU attempt. Slated to arrive in 2025, NVIDIA is partnering with MediaTek to break into the AI PC market, currently being popularized by Qualcomm, Intel, and AMD. With Microsoft and Qualcomm laying the foundation for Windows-on-Arm (WoA) development, NVIDIA plans to join and leverage its massive ecosystem of partners to design and deliver regular applications and games for its Arm-based processors. At the same time, NVIDIA is also scheduled to launch "Blackwell" GPUs for consumers, which could end up in these AI PCs with an Arm CPU at their core.

NVIDIA's partner, MediaTek, has recently launched a big-core mobile SoC called the Dimensity 9400. NVIDIA could use something like it as a base for its SoC and add its Blackwell IP to the mix. This would be similar to what Apple is doing with Apple Silicon and the recent M4 Max chip, which is apparently the fastest CPU in single-threaded and multithreaded workloads, as per recent Geekbench listings. NVIDIA already has a team of CPU designers that delivered its Grace CPU to enterprise and server customers. Built on off-the-shelf Arm Neoverse IP, Grace-based systems are selling as fast as they are produced. This raises hopes for NVIDIA's upcoming AI PC chip, which could offer a selling point no other WoA device currently provides: a tried-and-tested gaming-grade GPU with AI accelerators.

Google's Upcoming Tensor G5 and G6 Specs Might Have Been Revealed Early

Details of what is claimed to be Google's upcoming Tensor G5 and G6 SoCs have popped up over on Notebookcheck.net and the site claims to have found the specs on a public platform, without going into any further details. Those that were betting on the Tensor G5—codenamed Laguna—delivering vastly improved performance over the Tensor G4, are likely to be disappointed, at least on the CPU side of things. As previous rumours have suggested, the chip is expected to be manufactured by TSMC, using its N3E process node, but the Tensor G5 will retain the single Arm Cortex-X4 core, although it will see a slight upgrade to five Cortex-A725 cores vs. the three Cortex-A720 cores of the Tensor G4. The G5 loses two Cortex-A520 cores in favour of the extra Cortex-A725 cores. The Cortex-X4 will also remain clocked at the same peak 3.1 GHz as that of the Tensor G4.

Interestingly, it looks like Google will drop the Arm Mali GPU in favour of an Imagination Technologies DXT GPU, although the specs listed by Notebookcheck don't add up with any of the configurations listed by Imagination Technologies. The G5 will continue to support 4x 16-bit LPDDR5 or LPDDR5X memory chips, but Google has added support for UFS 4.0 storage, the lack of which has been a point of complaint with the Tensor G4. Other new additions are support for 10 Gbps USB 3.2 Gen 2 and PCI Express 4.0. Some improvements to the camera logic have also been made, with support for up to 200-Megapixel sensors, or 108 Megapixels with zero shutter lag, but whether Google will use such a camera is anyone's guess at this point in time.

Arm Plans to Cancel Qualcomm's License, Issues 60-Day Notice

According to Bloomberg, Arm Holdings plc, the company behind the Arm instruction set and Arm chip designs, has issued a 60-day cancellation notice to Qualcomm, its long-time partner. The UK-based ISA provider has notified Qualcomm that it will cancel the Arm architectural license agreement once the contract-mandated 60-day notice period expires. The issues between the two arose in 2022, just a year after Qualcomm acquired Nuvia and its IP. Arm filed a lawsuit claiming that "Qualcomm attempted to transfer Nuvia licenses without Arm's consent, which is a standard restriction under Arm's license agreements." To transfer Nuvia core licensing, Qualcomm would have needed to ask Arm first and negotiate a new licensing deal.

The cancellation notice comes just as Qualcomm is in the middle of its biggest expansion. The new Snapdragon 8 Elite is being used in the mobile sector, the Snapdragon X Elite/Plus is being used in Copilot+ PCs, and the automotive sector is also getting the new Snapdragon Cockpit/Ride Elite chipsets. Most of that lineup is centered around the Nuvia Oryon core IP, a high-performance, low-power design. Arm's representatives declined to comment on the move for Bloomberg, while a Qualcomm spokesman noted that the British company was trying to "strong-arm a longtime partner."

Arm and Partners Develop AI CPU: Neoverse V3 CSS Made on 2 nm Samsung GAA FET

Yesterday, Arm announced significant progress in its Total Design initiative. The program, launched a year ago, aims to accelerate the development of custom silicon for data centers by fostering collaboration among industry partners. The ecosystem has now grown to include nearly 30 participating companies, with recent additions such as Alcor Micro, Egis, PUF Security, and SEMIFIVE. A notable development is a partnership between Arm, Samsung Foundry, ADTechnology, and Rebellions to create an AI CPU chiplet platform. This collaboration aims to deliver a solution for cloud, HPC, and AI/ML workloads, combining Rebellions' AI accelerator with ADTechnology's compute chiplet, implemented using Samsung Foundry's 2 nm Gate-All-Around (GAA) FET technology. The platform is expected to offer significant efficiency gains for generative AI workloads, with estimates suggesting a 2-3x improvement over the standard CPU design for LLMs like Llama3.1 with 405 billion parameters.

Arm's approach emphasizes the importance of CPU compute in supporting the complete AI stack, including data pre-processing, orchestration, and advanced techniques like Retrieval-augmented Generation (RAG). The company's Compute Subsystems (CSS) are designed to address these requirements, providing a foundation for partners to build diverse chiplet solutions. Several companies, including Alcor Micro and Alphawave, have already announced plans to develop CSS-powered chiplets for various AI and high-performance computing applications. The initiative also focuses on software readiness, ensuring that major frameworks and operating systems are compatible with Arm-based systems. Recent efforts include the introduction of Arm Kleidi technology, which optimizes CPU-based inference for open-source projects like PyTorch and Llama.cpp. Notably, as Google claims, most AI workloads are being inferenced on CPUs, so creating the most efficient and most performant CPUs for AI makes a lot of sense.

What the Intel-AMD x86 Ecosystem Advisory Group is, and What it's Not

AVX-512 was proposed by Intel more than a decade ago—in 2013, to be precise. A decade later, the implementation of this instruction set on CPU cores remains wildly spotty. Intel implemented it first on an HPC accelerator, then on its Xeon server processors, and then on its client processors, before concluding that its client silicon couldn't execute AVX-512 instructions in an energy-efficient manner and deprecating the extension on the client. AMD implemented it just a couple of years ago with Zen 4, using a dual-pumped 256-bit FPU on 5 nm, before finally implementing a true 512-bit FPU on 4 nm. AVX-512 is a microcosm of what's wrong with the x86 ecosystem.

There are only two x86 CPU core vendors: the IP owner Intel, and its only surviving licensee capable of contemporary CPU cores, AMD. Any new addition to the ISA introduced by either of the two has to go through the grind of their duopolistic competition before software vendors can assume that there's a uniform install base to target. x86 is a net loser here, and Arm is a net winner. Arm Holdings makes no hardware of its own, but continuously develops the Arm machine architecture and a first-party set of reference-design CPU cores that any licensee can implement. Arm's great march began with tiny embedded devices, before its explosion into client computing with smartphone SoCs. There are now Arm-based server processors, and the architecture is making inroads into the last market that x86 holds sway over—the PC. Apple's M-series processors compete with all segments of PC processors, right from the 7 W class to the HEDT/workstation class. Qualcomm entered this space with its Snapdragon X Elite family, and now Dell believes NVIDIA will take a swing at client processors in 2025. Then there's RISC-V. Intel finally did something it should have done two decades ago—set up a multi-brand Ecosystem Advisory Group. Here's what it is, and more importantly, what it's not.

Latest Asahi Linux Brings AAA Windows Games to Apple M1 MacBooks With Intricate Graphics Driver and Translation Stack

While Apple laptops have never really been the first stop for PC gaming, Linux is slowly shaping up to be an excellent gaming platform, largely thanks to open-source development efforts as well as work from the likes of AMD and NVIDIA, who have both put significant work into their respective Linux drivers in recent years. This makes efforts like the Asahi Linux Project all the more intriguing. Asahi Linux is a project that aims to bring Linux to Apple Silicon Macs—a task that has proven rather difficult, thanks to the intricacies of developing a bespoke driver for Apple's custom GPUs. In a recent blog post, the graphics developer behind the Asahi Linux Project showed off a number of AAA games, albeit older titles, running on an Apple M1 processor on the latest Asahi Linux build.

To run the games on Apple Silicon, Asahi Linux uses a "game playing toolkit" that relies on a number of custom graphics drivers and emulators, including tools from Valve's Proton translation layer, which ironically was also the foundation for Apple's Game Porting Toolkit. Asahi uses FEX to emulate x86 on ARM, Wine as a translation layer for Windows apps, and DXVK and vkd3d-proton for DirectX-to-Vulkan translation. In the blog post, the Asahi developer claims that the alpha is capable of running games like Control, The Witcher 3, and Cyberpunk 2077 at playable frame rates. While 60 FPS is not yet attainable in the majority of new high-fidelity games, a number of indie titles run quite well on Asahi Linux, including Hollow Knight, Ghostrunner, and Portal 2.

MediaTek Announces Dimensity 9400 Flagship SoC with All Big Core Design

MediaTek today launched the Dimensity 9400, the company's new flagship smartphone chipset optimized for edge-AI applications, immersive gaming, incredible photography, and more. The Dimensity 9400, the fourth and latest in MediaTek's flagship mobile SoC lineup, offers a massive boost in performance with its second-generation All Big Core design built on Arm's v9.2 CPU architecture, combined with the most advanced GPU and NPU for extreme performance in a super power-efficient design.

The Dimensity 9400 adopts MediaTek's second-gen All Big Core design, integrating one Arm Cortex-X925 core operating at over 3.62 GHz, combined with 3x Cortex-X4 and 4x Cortex-A720 cores. This design offers 35% faster single-core performance and 28% faster multi-core performance compared to MediaTek's previous generation flagship chipset, the Dimensity 9300. Built on TSMC's second-generation 3 nm process, the Dimensity 9400 is up to 40% more power-efficient than its predecessor, allowing users to enjoy longer battery life.

Fujitsu and Supermicro Collaborate to Develop Green Arm-Based AI Computing Technology and Liquid-cooled Datacenter Solutions

Fujitsu Limited and Supermicro, Inc. (NASDAQ: SMCI), today announced they will collaborate to establish a long-term strategic engagement in technology and business, to develop and market a platform with Fujitsu's future Arm-based "FUJITSU-MONAKA" processor that is designed for high-performance and energy efficiency and targeted for release in 2027. In addition, the two companies will also collaborate on developing liquid-cooled systems for HPC, Gen AI, and next-generation green data centers.

"Supermicro is excited to collaborate with Fujitsu to deliver state-of-the-art servers and solutions that are high performance, power efficient, and cost-optimized," said Charles Liang, president and CEO of Supermicro. "These systems will be optimized to support a broad range of workloads in AI, HPC, cloud and edge environments. The two companies will focus on green IT designs with energy-saving architectures, such as liquid cooling rack scale PnP, to minimize technology's environmental impact."

GIGABYTE Announces Availability for Its New Servers Using AmpereOne Family of Processors

Giga Computing, a subsidiary of GIGABYTE and an industry leader in servers for x86 and ARM platforms as well as advanced cooling technologies, today announced its initial wave of GIGABYTE servers that support the full stack of AmpereOne family of processors. Last year, AmpereOne processors were announced and there were GIGABYTE servers in support of the platform available for select customers. Now, GIGABYTE servers have general availability with single and dual socket servers already in production, and more coming in late Q4. GIGABYTE servers for Ampere Altra and AmpereOne processors will be showcased at the GIGABYTE booth and Ampere pavilion at Yotta 2024 in Las Vegas on Oct. 7-9.

The AmpereOne family of processors, designed for cloud-native computing, features up to 192 custom-designed Ampere cores, DDR5 memory, and 128 lanes of PCIe Gen 5 per socket. Overall, this line of processors targets cloud instances with incredible VM density, all while excelling at performance per watt. Delivering more cores, more I/O, more memory, more performance, and more cloud features, this full stack of CPUs has additional applications in AI inference, data analytics, and more.

NVIDIA Cancels Dual-Rack NVL36x2 in Favor of Single-Rack NVL72 Compute Monster

NVIDIA has reportedly discontinued its dual-rack GB200 NVL36x2 GPU model, opting to focus on the single-rack GB200 NVL72 and NVL36 models. This shift, revealed by industry analyst Ming-Chi Kuo, aims to simplify NVIDIA's offerings in the AI and HPC markets. The decision was influenced by major clients like Microsoft, who prefer the NVL72's improved space efficiency and potential for enhanced inference performance. While both models perform similarly in AI large language model (LLM) training, the NVL72 is expected to excel in non-parallelizable inference tasks. As a reminder, the NVL72 features 36 Grace CPUs, delivering 2,592 Arm Neoverse V2 cores with 17 TB of LPDDR5X memory at 18.4 TB/s of aggregate bandwidth. Additionally, it includes 72 Blackwell GB200 SXM GPUs with a massive 13.5 TB of HBM3e combined, running at 576 TB/s of aggregate bandwidth.
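The aggregate figures follow from the per-unit specs; a minimal sketch (the 72-cores-per-Grace count and the even per-GPU HBM split are derived from the totals quoted above, not separately confirmed here):

```python
grace_cpus = 36
cores_per_grace = 72   # Neoverse V2 cores per Grace CPU (implied by the total)
gpus = 72
hbm_total_tb = 13.5

total_cores = grace_cpus * cores_per_grace
hbm_per_gpu_gb = hbm_total_tb * 1000 / gpus

print(total_cores)     # 2592, matching the quoted core count
print(hbm_per_gpu_gb)  # 187.5 GB of HBM3e per GB200 GPU
```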

However, this shift presents significant challenges. The NVL72's power consumption of around 120 kW far exceeds typical data center capabilities, potentially limiting its immediate widespread adoption. The discontinuation of the NVL36x2 has also sparked concerns about NVIDIA's execution capabilities and may disrupt the supply chain for assembly and cooling solutions. Despite these hurdles, industry experts view this as a pragmatic approach to product planning in the dynamic AI landscape. While some customers may be disappointed by the dual-rack model's cancellation, NVIDIA's long-term outlook in the AI technology market remains strong. The company continues to work with clients and listen to their needs as it positions itself as a leader in high-performance computing solutions.

AVerMedia Expands Its "GO Series" with VERSATI go Microphone and FLEXI go Mic Arm

Following the successful release of the X'TRA GO (GC515), AVerMedia is proud to announce its additions to the GO series: the VERSATI go (AM310G2), a USB cardioid microphone, and the FLEXI go (BA311L), a multi-angle mic arm. Besides embodying the GO series's mobility and versatility design concept, the VERSATI go and FLEXI go further excel in their ease of use.

VERSATI go: Exceptional Clarity, Versatile Application
The VERSATI go (AM310G2) microphone is designed for creators seeking exceptional sound quality. With an enhanced 96 kHz/24-bit sample rate, it captures richer, more natural audio, ensuring every word is delivered with clarity for recording. The cardioid pickup pattern focuses on capturing users' voices while minimizing background noise, so users can be heard clearly in any environment. The USB Type-C connection provides convenience, allowing users to seamlessly integrate it into their setups.
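For context, the raw uncompressed data rate implied by a 96 kHz/24-bit stream is modest; a quick calculation (per channel, before any USB framing overhead):

```python
sample_rate_hz = 96_000  # samples per second
bit_depth = 24           # bits per sample

bytes_per_second = sample_rate_hz * bit_depth // 8
print(bytes_per_second / 1000)  # 288.0 KB/s per channel of raw audio
```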

Nintendo Switch 2 Allegedly Not Powered by AMD APU Due to Poor Battery Life

Nintendo's next-generation Switch 2 handheld gaming console is nearing its release. As leaks about its specifications intensify, we are getting information about its planning stages. According to a Moore's Law is Dead YouTube video, Nintendo didn't choose an AMD APU to power the Switch 2 due to poor battery life. In a bid to secure the best chip at a mere five watts of power, the Japanese company had two choices: NVIDIA Tegra or AMD APU. In preliminary testing and evaluation, the AMD APU reportedly wasn't power-efficient at a 5 W TDP, while the NVIDIA Tegra chip maintained sufficient battery life and performance at the target specifications.

The AMD APU was allegedly a good fit for a 15 W design, but Nintendo didn't want to fit a bigger battery, so that the device remains lighter and cheaper. The final design will likely carry a battery with a 20 Wh capacity as the main power source for the NVIDIA Tegra T239 SoC. As a reminder, the Tegra T239 SoC features an eight-core Arm Cortex-A78C cluster combined with a modified NVIDIA Ampere GPU supporting DLSS, and it borrows some of the latest encoding/decoding blocks from Ada Lovelace, like AV1. There are likely 1536 CUDA cores paired with 128-bit LPDDR5 memory running at 102 GB/s of bandwidth. For final specifications, we have to wait for the official launch, but with rumors intensifying, we can expect to see it relatively soon.
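The quoted 102 GB/s is consistent with a 128-bit bus running LPDDR5 at 6400 MT/s; a minimal sketch (the 6400 MT/s data rate is an inference from the bandwidth figure, not part of the leak):

```python
bus_width_bits = 128
data_rate_mts = 6400  # mega-transfers per second, assumed LPDDR5 speed grade

bandwidth_gbs = bus_width_bits / 8 * data_rate_mts / 1000
print(bandwidth_gbs)  # 102.4 GB/s, in line with the quoted ~102 GB/s
```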

Valve Testing ARM64 Support for Steam Gaming Platform, Android Expansion Possible Too

Gaming giant Valve appears to be venturing into uncharted territory. Recent findings on SteamDB have revealed that the company may be working on integrating ARM architecture and Android app support into its ecosystem. A mysterious application, codenamed "ValveTestApp3043620," has been spotted with an update that includes interesting changes. The update features new tags for several popular games, including Left 4 Dead 2, Garry's Mod, and Kerbal Space Program. These tags, such as "proton-arm64" and "proton-arm64e," indicate that Valve is testing a version of Proton specifically designed for ARM64-based systems. Proton, Valve's brainchild developed in partnership with CodeWeavers, is the magic behind running Windows games on Linux systems. By extending this technology to ARM64, Valve could be paving the way for PC gaming on portable Arm-based devices, potentially signaling a new hardware strategy.

The plot thickens with mentions of "proton_experimental" and "proton-arm64ec-vanguard" in the changelog, hinting at ongoing tests and experimental builds. Additionally, references to Waydroid, a tool enabling Android apps on Linux, suggest that Valve's ambitions may extend beyond gaming, possibly aiming to broaden its platform's software accessibility. This development has sparked speculation about the possibility of Steam games running on Android smartphones and tablets, or even an Arm-powered version of the Steam Deck. However, a more likely scenario could be testing for Windows support on Arm-based chips. The timing of these experiments aligns with recent announcements of laptops featuring Qualcomm's ARM64-based Snapdragon X CPUs. These devices can run Windows-based games through Microsoft and Qualcomm's emulation layer, Prism. If Valve's Proton can provide superior performance or more stable emulation for Windows games on Arm devices, it could become a strong competitor to Prism.

AAEON Unveils RICO-MX8P pITX Motherboard for Kiosks

Award-winning embedded solutions provider AAEON (stock code: 6579) has announced the launch of the RICO-MX8P, an NXP i.MX 8M Plus-powered fanless single-board computer built on the Pico-ITX Plus form factor. Utilizing the i.MX 8M Plus, the RICO-MX8P leverages a platform comprised of a quad-core Arm Cortex-A53 processor, a secondary Arm Cortex-M7 core, and an optional Neural Processing Unit (NPU) offering up to 2.3 TOPS of inference performance.

Equipped with an integrated Vivante GC7000 UltraLite 3D GPU, a dedicated VPU, MIPI DSI interface, and an HDMI 2.0 port, it is clear AAEON is positioning the RICO-MX8P as a candidate for multimedia applications, with digital signage, smart kiosk, and interactive Digital Out-of-Home (DOOH) advertising earmarked as potential uses. Further benefits to such use are evident from the board's support for APIs like OpenGL ES 3.1, Vulkan, and OpenCL 1.2, alongside its multiformat encoding and decoding capabilities.

Arm to Dip its Fingers into Discrete GPU Game, Plans on Competing with Intel, AMD, and NVIDIA

According to a recent report from Globes, Arm, the chip design giant and maker of the Arm ISA, is reportedly developing a new discrete GPU at its Ra'anana development center in Israel. This development signals Arm's intention to compete directly with industry leaders like Intel, AMD, and NVIDIA in the massive discrete GPU market. Sources close to the matter reveal that Arm has assembled a team of approximately 100 skilled chip and software development engineers at its Israeli facility. The team is focused on creating GPUs primarily aimed at the video game market. However, industry insiders speculate that this technology could potentially be adapted for AI processing in the future, mirroring the trajectory of NVIDIA, which slowly integrated AI hardware accelerators into its lineup.

The Israeli development center is playing a crucial role in this initiative. The hardware teams are overseeing the development of key components for these GPUs, including the flagship Immortalis and Mali GPUs. Meanwhile, the software teams are creating interfaces for external graphics engine developers, working with both established game developers and startups. Arm is already entering the PC market through partners like Qualcomm with its Snapdragon X chips. However, those chips use an integrated GPU; Arm wants to provide discrete GPUs and compete in that space as well. While details are still scarce, Arm could make GPUs to accompany Arm-based Copilot+ PCs and some desktop builds. The execution plan has yet to emerge, and it remains unclear what stage Arm's discrete GPU project is in.

TSMC Reportedly to Manufacture SoftBank's AI Chips, Replacing Intel

SoftBank has reportedly decided against using Intel's foundry for its ambitious AI venture, Project Izanagi, and is opting for TSMC instead. The conglomerate aims to challenge NVIDIA in the AI accelerator market by developing its own AI processors. This decision marks another setback for Intel, which has faced several challenges recently. In February 2024, reports emerged that SoftBank's CEO, Masayoshi Son, planned to invest up to $100 billion to create a company similar to NVIDIA, focused on selling AI accelerators. Although SoftBank initially worked with Intel, it recently switched to TSMC, citing concerns about Intel's ability to meet demands for "volume and speed."

The decision, reported by the Financial Times, raises questions about Intel's future involvement and how SoftBank's ownership of Arm Holdings will factor into the project. While TSMC is now SoftBank's choice, the foundry is already operating at full capacity, making it uncertain how it will accommodate this new venture. Neither SoftBank, Intel nor TSMC has commented on the situation, but given the complexities involved, it will likely take time for this plan to materialize. SoftBank will need to replicate NVIDIA's entire ecosystem, from chip design to data centers and a software stack rivaling CUDA, a bold and ambitious goal.
Dec 7th, 2024 20:28 CST