News Posts matching #AI

AMD, Broadcom, Cisco, Google, HPE, Intel, Meta and Microsoft Form Ultra Accelerator Link (UALink) Promoter Group to Combat NVIDIA NVLink

AMD, Broadcom, Cisco, Google, Hewlett Packard Enterprise (HPE), Intel, Meta and Microsoft today announced they have aligned to develop a new industry standard dedicated to advancing high-speed, low-latency communication between scale-up AI systems in data centers.

Called the Ultra Accelerator Link (UALink), this initial group will define and establish an open industry standard that will enable AI accelerators to communicate more effectively. By creating an interconnect based upon open standards, UALink will enable system OEMs, IT professionals and system integrators to create a pathway for easier integration, greater flexibility and scalability of their AI-connected data centers.

Arm Also Announces Three New GPUs for Consumer Devices

In addition to its two new CPU cores, Arm has announced three new GPU cores: the Immortalis-G925, Mali-G725 and Mali-G625. Starting from the top, the Immortalis-G925 is said to deliver up to 37 percent better performance at 30 percent lower power than last year's Immortalis-G720, albeit with two additional GPU cores in the test configuration. It's also said to offer up to 52 percent better ray tracing performance and up to 36 percent faster inference in AI/ML workloads. The ray tracing hardware has been given a major overhaul, as the core is aimed at gaming phones, and Arm claims it can deliver either up to 52 percent more ray tracing performance by reducing accuracy in scenes with intricate objects, or 27 percent more performance with accuracy maintained.

The Immortalis-G925 supports 50 percent more shader cores, scaling to configurations of up to 24 cores compared to 16 cores for the Immortalis-G720. The Mali-G725 will be available with between six and nine cores, whereas the Mali-G625 will sport between one and five cores. The Mali-G625 is intended for smartwatches and entry-level mobile devices where a more complex GPU might not be suitable due to power draw. The Mali-G725, on the other hand, targets upper mid-range devices, while the Immortalis-G925 is aimed at flagship devices and gaming phones, as mentioned above. In related news, Arm said it's working with Epic Games to get its Unreal Engine 5 desktop renderer up and running on Android, which could lead to more complex games on mobile devices.

Micron First to Achieve Qualification Sample Milestone to Accelerate Ecosystem Adoption of CXL 2.0 Memory

Micron Technology, a leader in innovative data center solutions, today announced it has achieved its qualification sample milestone for the Micron CZ120 memory expansion modules using Compute Express Link (CXL). Micron is the first in the industry to achieve this milestone, which accelerates the adoption of CXL solutions within the data center to tackle the growing memory challenges stemming from existing data-intensive workloads and emerging artificial intelligence (AI) and machine learning (ML) workloads.

Using a new and emerging CXL standard, the CZ120 required substantial hardware testing for reliability, quality and performance across CPU providers and OEMs, along with comprehensive software testing for compatibility and compliance with OS and hypervisor vendors. This achievement reflects the collaboration and commitment across the data center ecosystem to validate the advantages of CXL memory. By testing the combined products for interoperability and compatibility across hardware and software, the Micron CZ120 memory expansion modules satisfy the rigorous standards for reliability, quality and performance required by customers' data centers.

Arm Announces the Cortex-X925 and Cortex-A725 Armv9 CPU Cores

Arm has announced a pair of new Armv9 CPU cores today, alongside a refresh of a third. The new additions are the Cortex-X925—which is a huge model number jump from the previous Cortex-X4—and the Cortex-A725, which should be an upgraded Cortex-A720. Finally, the Cortex-A520 has been refreshed to bring a 15 percent power efficiency improvement as well as support for 3 nm production nodes. Arm claims that the Cortex-X925 delivers its highest performance improvement ever over a previous generation, with a single-core uplift of up to 36 percent and an AI performance improvement of up to 46 percent compared to the Cortex-X4. The Cortex-X925 will support up to 3 MB of private L2 cache and is tape-out ready for 3 nm production nodes.

The Cortex-A725 is said to offer a 35 percent performance efficiency improvement over the Cortex-A720, with performance boosts in both AI and gaming workloads. It's said to be up to 25 percent more power efficient than the Cortex-A720, and L3 cache traffic has been improved by up to 20 percent. Again, the Cortex-A725 is ready for production on a 3 nm node. Finally, Arm has also updated its DynamIQ Shared Unit to the DSU-120, reducing typical workload power by up to 50 percent and cache miss power by up to 60 percent. The DSU-120 scales up to 14 Arm cores, suggesting we might see some interesting new SoC implementations from Arm's partners in the coming years, although Arm's reference platform is a 2-4-2 configuration of the new cores.

Micron DRAM Production Plant in Japan Faces Two-Year Delay to 2027

Last year, Micron unveiled plans to construct a cutting-edge DRAM factory in Hiroshima, Japan. However, the project has faced a significant two-year delay, pushing back the initial timeline for mass production of the company's most advanced memory products. Originally slated to begin mass production by the end of 2025, Micron now aims to have the new facility operational by 2027. The complexity of integrating extreme ultraviolet lithography (EUV) equipment, which enables the production of highly advanced chips, has contributed to the delay. The Hiroshima plant will produce next-generation 1-gamma DRAM and high-bandwidth memory (HBM) designed for generative AI applications. Micron expects the HBM market, currently dominated by rivals SK Hynix and Samsung, to experience rapid growth, with the company targeting a 25% market share by 2025.

The project is expected to cost between 600 and 800 billion Japanese yen ($3.8 to $5.1 billion), with Japan's government covering one-third of the cost. Micron has received a subsidy of up to 192 billion yen ($1.2 billion) for construction and equipment, as well as a subsidy covering half of the funding needed to produce HBM at the plant, amounting to 25 billion yen ($159 million). Despite the delay, the increased investment in the factory reflects Micron's commitment to advancing its memory technology and capitalizing on the growing demand for HBM. One indication of that demand: customers have already pre-ordered 100% of Micron's HBM capacity for 2024, leaving none of it unallocated.

ZOTAC to Debut Limit-Pushing Handheld Gaming PC and Showcase AI-Centric Computing Solutions at Computex 2024

ZOTAC Technology, a global manufacturer focused on innovative and high-performance hardware solutions, will return to COMPUTEX 2024 to showcase its biggest push yet into brand-new product categories. At this year's exhibition, ZOTAC will unveil its first attempt at creating a unique Handheld Gaming PC with advanced controls and features, allowing gamers to enjoy their favorite games on the go like never before with maximum competitive advantage.

Also in ZOTAC's extensive lineup is a full-fledged selection of AI-focused computational hardware, including a new workstation-grade External GPU Box series for hassle-free GPU compute and AI acceleration, ZBOX mini PCs powered by Intel Core Ultra CPUs equipped with integrated neural processing units (NPU), as well as other enterprise-grade solutions, such as GPU Servers and Arm-based NVIDIA Jetson systems, offering users a broad selection of AI accelerators in applications big and small.

COLORFUL Teases New Prebuilt Desktop Gaming PCs and Notebooks for Computex

Colorful Technology Company Limited, a leading brand in gaming PC components, gaming laptops, and Hi-Fi audio products, is excited to announce its participation in COMPUTEX 2024 from June 4 to June 7, 2024. COLORFUL Technology will exhibit its latest and upcoming products at booth L0417a, located on the 4th floor of Hall 1 of the Nangang Exhibition Center. This year, COLORFUL Technology will expand its popular Vulcan, Neptune and Mini Series beyond graphics cards to include the other PC components needed to complete a full gaming PC. COLORFUL will also introduce new liquid cooling for the COLORFUL iGame GeForce RTX 4090 Vulcan LAB.

On display will be the themed gaming PCs launched and launching this year, including the COLORFIRE MEOW and iGame ULTRA gaming PC sets, as well as upcoming Vulcan and MINI Series gaming PCs. The Vulcan Series consists of new concept products that will include a motherboard, DDR5 memory, a chassis, peripherals, a PSU, and a liquid cooler featuring Asetek pump technology. The iGame Z790D5 Vulcan is designed as a high-end motherboard, and COLORFUL is already preparing a new Vulcan motherboard for the next-generation Intel Core processors.

Opera Partners with Google Cloud to Power its Browser AI with Gemini Models

Opera, the browser innovator, today announced a collaboration with Google Cloud to integrate Gemini models into its Aria browser AI. Aria is powered by Opera's multi-LLM Composer AI engine, which allows the Norwegian company to curate the best experiences for its users based on their needs.

Opera's Aria browser AI is unique as it doesn't just utilize one provider or LLM. Opera's Composer AI engine processes the user's intent and can decide which model to use for which task. Google's Gemini model is a modern, powerful, and user-friendly LLM that is the company's most capable model yet. Thanks to this integration, Opera will now be able to provide its users with the most current information, at high performance.
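
The description above is essentially a model-routing architecture: classify the user's intent, then dispatch the prompt to whichever LLM suits the task. Below is a deliberately simplified, hypothetical Python sketch of that idea; the intent rules, model names, and handler functions are invented for illustration and do not reflect Opera's proprietary Composer engine.

# Hypothetical sketch of intent-based multi-LLM routing. All names and rules
# here are invented for illustration; Opera's Composer engine is proprietary.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Model:
    name: str
    handler: Callable[[str], str]

def summarize_handler(prompt: str) -> str:
    return f"[summarizer-llm] condensed: {prompt}"

def general_handler(prompt: str) -> str:
    return f"[general-llm] answer: {prompt}"

MODELS = {
    "summarize": Model("summarizer-llm", summarize_handler),
    "general": Model("general-llm", general_handler),
}

def classify_intent(prompt: str) -> str:
    # Trivial keyword heuristic standing in for a real intent classifier.
    return "summarize" if "summarize" in prompt.lower() else "general"

def compose(prompt: str) -> str:
    # Route the prompt to the model chosen for its detected intent.
    return MODELS[classify_intent(prompt)].handler(prompt)

print(compose("Summarize this article about browser AI"))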

The Race is Heating Up, Elon Musk's AI Startup xAI Raises $6 Billion

Elon Musk's AI company xAI just scored big (according to Reuters), raising a massive $6 billion in new funding. This sky-high investment values xAI at a whopping $24 billion as investors go all-in on challengers to top AI players like OpenAI. Big-name funders like Andreessen Horowitz and Sequoia backed the funding round, according to xAI's blog post on Sunday. Before this, xAI was valued at $18 billion, Musk said on social app X.

The huge cash influx will help xAI launch its first products, build advanced tech, and turbocharge their research, the company stated. "More news coming soon," Musk teased cryptically after the funding announcement. It's an AI investment frenzy as tech giants like Microsoft and Google parent Alphabet pour fortunes into leading the red-hot generative AI race. With its new war chest, xAI is gearing up to make some serious waves.

NVIDIA's Arm-based AI PC Processor Could Leverage Arm Cortex X5 CPU Cores and Blackwell Graphics

Last week, we got confirmation from the highest levels of Dell and NVIDIA that the latter is making a client PC processor for the Windows on Arm (WoA) AI PC ecosystem, which currently has only one player in it, Qualcomm. Michael Dell hinted that this NVIDIA AI PC processor would be ready in 2025. Since then, speculation has been rife about the various IP blocks NVIDIA could use in the development of this chip; the two key areas of debate have been the CPU cores and the process node.

Given that NVIDIA is gunning for a 2025 launch of its AI PC processor, the company could implement reference Arm IP CPU cores, such as the Arm Cortex X5 "Blackhawk," rather than developing its own CPU cores on the Arm architecture the way Apple has. Depending on how the market receives its chips, NVIDIA could eventually develop its own cores. Next up, the company could use the most advanced 3 nm-class foundry node available in 2025 for its chip, such as TSMC N3P. Given that even Apple and Qualcomm will build their contemporary notebook chips on this node, it would be a logical choice for NVIDIA. Then there's the graphics and AI acceleration hardware.

AMD Adds RDNA 4 Generation Navi 44 and MI300X1 GPUs to ROCm Software

AMD has quietly added some interesting codenames to its ROCm hardware support list. The biggest surprise is the appearance of the "RDNA 4" and "Navi 44" codenames, hinting at a potential successor to the current RDNA 3 GPU architecture powering AMD's Radeon RX 7000 series graphics cards. The upcoming Radeon RX 8000 series could see a Navi 44 SKU with the codename "gfx1200". While details are scarce, the inclusion of RDNA 4 and Navi 44 in the ROCm list suggests AMD is working on a new GPU microarchitecture that could bring significant performance and efficiency gains. While RDNA 4 may be destined for future Radeon gaming GPUs, in the data center GPU compute market AMD is preparing a CDNA 4-based successor to the MI300 series. However, it appears we haven't seen all the MI300 variants yet. Equally intriguing is the "MI300X1" codename, which appears to reference an upcoming AI-focused accelerator from AMD.

While we wait for more information, we can't tell whether the Navi 44 GPU SKU is destined for the high-end or low-end segment. If previous generations are any reference, the Navi 44 SKU would target the low end of the GPU performance spectrum: the previous-generation RDNA 3 had Navi 33 as its entry-level model, whereas RDNA 2 had Navi 24 for entry-level GPUs. We have previously reported on RDNA 4 being merely a "bug correction" generation intended to fix the perf/Watt curve and offer better overall efficiency. What finally happens, we will have to wait and see. AMD could announce more details in its upcoming Computex keynote.
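
For readers curious what these "gfx" targets look like in practice, the short sketch below (a minimal example with stated assumptions: a Linux box with the ROCm toolkit installed and the rocminfo utility on the PATH) lists the architecture identifiers the local GPUs report. A Navi 31 card shows up as gfx1100, and a Navi 44 part recognized by a future ROCm release would presumably appear as gfx1200 in the same way.

# Minimal sketch: list the "gfx" architecture targets reported by rocminfo.
# Assumes a Linux system with the ROCm toolkit installed; the output depends
# entirely on the installed ROCm version and the GPUs present.
import re
import subprocess

output = subprocess.run(["rocminfo"], capture_output=True, text=True, check=True).stdout
targets = sorted(set(re.findall(r"\bgfx[0-9a-f]+\b", output)))
print("ROCm gfx targets found:", ", ".join(targets) if targets else "none")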

Lenovo Releases Fiscal Year 2023/24 Earnings Report

Lenovo Group today announced Q4 and full-year results for fiscal year 2023/24. After resuming growth in Q3, the Group reported year-on-year revenue growth across all business groups in Q4, with Group revenue increasing nearly 10% year-on-year to US$13.8 billion, net income doubling year-on-year to US$248 million, and non-PC revenue mix reaching a historic high of 45%. The Group's Q4 and overall second-half performance demonstrates how Lenovo has navigated the past year's industry downturn, captured the tremendous growth opportunities presented by AI, and accelerated momentum across the business. Revenue for the full fiscal year was US$56.9 billion, and net income was US$1 billion. In the second half of the fiscal year, Lenovo achieved year-on-year revenue growth of 6%, and net margin recovered from a year-on-year decline in the first half to flat in the second half.

The Group is leading in an era of unprecedented AI opportunities with its pocket-to-cloud portfolio, strong ecosystem and partnerships, and full-stack AI capabilities. Since announcing its AI strategy in October 2023 at its annual Tech World event, Lenovo has launched its first wave of AI PCs as well as AI capabilities covering other smart devices, smart infrastructure, and smart solutions and services. The Group expects the AI PC - which is defined as equipped with a personal AI agent based on natural interactions, heterogeneous computing, personal knowledge base, connected to an open AI application ecosystem, and with privacy and security protection - to grow from its current premium position to mainstream over the next three years, driving a new refresh cycle for the industry. Hybrid AI is also driving greater demand for AI infrastructure and customers are increasingly asking for customized AI solutions and services, particularly consulting, design, deployment and maintenance of AI.

European Union's AI Act Adopted, Full Implementation to Follow by 2026

The European Union has approved the AI Act, a new regulation intended to ensure the safe and responsible development of artificial intelligence (AI) within the EU. The legislation is a first in global efforts to regulate AI, setting a new standard for transparency, accountability, and ethical considerations in AI development and deployment. The AI Act (known in Germany as the KI-Gesetz) is designed to address concerns about the potential risks and negative impacts of AI on society. Key aspects of the regulation include the requirement for AI systems to be transparent and explainable, non-discriminatory, and environmentally friendly. Furthermore, the AI Act emphasizes the importance of human oversight in AI development and deployment, ensuring that AI systems are not controlled solely by other automated technologies.

The regulation also includes specific restrictions on certain AI applications, such as mass surveillance using biometric data or the evaluation of social behavior similar to China's social scoring system. Additionally, emotion recognition, both in the workplace and in educational settings, is prohibited under the new law. While the AI Act has been widely praised for its forward-thinking approach, there are also concerns about the potential limitations and challenges it may pose. The digital industry association Bitkom has criticized the law for leaving certain questions unanswered and warned that the regulations may become outdated within two years. The AI Act is set to come into effect in the coming weeks, with full implementation expected by 2026. The regulation will apply to all entities developing, offering, or using AI systems within the EU, both public and private. This comprehensive approach aims to ensure that AI is developed and used in a way that benefits society while minimizing potential risks.

Qualcomm's Success with Windows AI PC Drawing NVIDIA Back to the Client SoC Business

NVIDIA is eyeing a comeback to the client processor business, reveals a Bloomberg interview with the CEOs of NVIDIA and Dell. For NVIDIA, all it takes is a simple driver update that exposes every GeForce GPU with tensor cores as an NPU to Windows 11, with translation layers to get popular client AI apps to work with TensorRT. But that would require you to have a discrete NVIDIA GPU. What about the vast market of Windows AI PCs powered by the likes of Qualcomm, Intel, and AMD, who each sell 15 W-class processors with integrated NPUs capable of the 50 AI TOPS that Copilot+ requires? NVIDIA has held an Arm license for decades and makes Arm-based CPUs to this day with NVIDIA Grace; however, that is a large server processor meant for its AI GPU servers.

NVIDIA already made client processors under the Tegra brand targeting smartphones, a business it wound down last decade. It has since been making Drive PX processors for its automotive self-driving hardware division; and of course there's Grace. NVIDIA hinted that it might have a client CPU for the AI PC market in 2025. In the interview, Bloomberg asked NVIDIA CEO Jensen Huang a pointed question on whether NVIDIA has a place in the AI PC market. Dell CEO Michael Dell, who was also in the interview, interjected "come back next year," to which Jensen affirmed "exactly." Dell would be in a front-and-center position to know if NVIDIA is working on a new PC processor for launch in 2025, and Jensen's nod all but confirms it.

Simply NUC Unveils BMC-Enabled extremeEDGE Servers, Spearheading Edge Computing Revolution

Simply NUC, a leading global provider of customized PC solutions, proudly announced its most ambitious product launch to date: BMC-enabled extremeEDGE Servers. Representing a massive leap in edge computing technology, the new family of servers is positioned to reshape how businesses manage and process data in an ever-evolving digital landscape.

Traditional servers are bulky, expensive, and inefficient for edge computing applications. However, extremeEDGE Servers revolutionize this paradigm. Equipped with NANO-BMC technology, the product line offers server-like manageability in ultra-efficient compute nodes, resulting in enhanced energy efficiency by "right-sizing" hardware for specific workloads. Integrating AI inference at the edge enables machine learning directly at the source, reducing costs, machine power, network latency, and bandwidth usage compared to cloud-based solutions.

Windows Auto Super Resolution Limited to Copilot+ PCs with Snapdragon X Elite SoCs Only—Not x86

Microsoft Auto Super Resolution (ASR), the standardized super-resolution-based game performance enhancement, is initially only being offered on Copilot+ AI PCs powered by Qualcomm Snapdragon X Elite processors, says the Microsoft FAQ for Copilot+ AI PCs. "At initial launch, this feature will be exclusive to Copilot+ PCs equipped with a Qualcomm Snapdragon X Elite processor and a curated set of games that can be found here, a third-party open-source site that Microsoft has contributed compatibility data to," the FAQ answer reads, in response to the question "What is automatic super resolution?"

The way we understand this, Microsoft ASR will initially launch only on Windows on Arm devices, specifically those powered by the Snapdragon X Elite SoC. The Snapdragon X Plus is excluded; and more importantly, all x86-64 platforms (Intel or AMD) are excluded from the initial rollout. This doesn't mean that ASR won't make it to x86; it just will at a later date. Copilot+ devices based on the Snapdragon X Elite tend to involve a high degree of design collaboration between the OEM and Microsoft, and Redmond wants to use them as tech demonstrators, mostly since these platforms lack the usual super-resolution tech such as AMD FSR, Intel XeSS, or NVIDIA DLSS. It's important to note that ASR is a super-resolution technology and shouldn't be confused with DirectSR, the API it's based on. Development of DirectSR for Windows PCs on x86-64 platforms continues.

Intel AI Platforms Accelerate Microsoft Phi-3 GenAI Models

Intel has validated and optimized its AI product portfolio across client, edge and data center for several of Microsoft's Phi-3 family of open models. The Phi-3 family of small, open models can run on lower-compute hardware, be more easily fine-tuned to meet specific requirements, and enable developers to build applications that run locally. Intel's supported products include Intel Gaudi AI accelerators and Intel Xeon processors for data center applications, and Intel Core Ultra processors and Intel Arc graphics for client devices.
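
As a rough illustration of what "running locally" looks like, here is a minimal sketch that loads a Phi-3 model with Hugging Face Transformers and generates a short reply on whatever hardware PyTorch finds. The model ID and generation settings are assumptions for illustration only; Intel-specific acceleration paths (OpenVINO, Intel Extension for PyTorch, Gaudi software) are separate choices not shown here.

# Minimal sketch: run a Phi-3 family model locally with Hugging Face
# Transformers. The model ID and settings are assumptions for illustration;
# requires the transformers and torch packages plus a one-time model download.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"  # assumed Hub identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Older transformers releases may additionally need trust_remote_code=True.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

prompt = "In one sentence, what is a small language model?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))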

"We provide customers and developers with powerful AI solutions that utilize the industry's latest AI models and software. Our active collaboration with fellow leaders in the AI software ecosystem, like Microsoft, is key to bringing AI everywhere. We're proud to work closely with Microsoft to ensure Intel hardware - spanning data center, edge and client - actively supports several new Phi-3 models," said Pallavi Mahajan, Intel corporate vice president and general manager, Data Center and AI Software.

New Performance Optimizations Supercharge NVIDIA RTX AI PCs for Gamers, Creators and Developers

NVIDIA today announced at Microsoft Build new AI performance optimizations and integrations for Windows that help deliver maximum performance on NVIDIA GeForce RTX AI PCs and NVIDIA RTX workstations. Large language models (LLMs) power some of the most exciting new use cases in generative AI and now run up to 3x faster with ONNX Runtime (ORT) and DirectML using the new NVIDIA R555 Game Ready Driver. ORT and DirectML are high-performance tools used to run AI models locally on Windows PCs.

WebNN, an application programming interface for web developers to deploy AI models, is now accelerated with RTX via DirectML, enabling web apps to incorporate fast, AI-powered capabilities. And PyTorch will support DirectML execution backends, enabling Windows developers to train and infer complex AI models on Windows natively. NVIDIA and Microsoft are collaborating to scale performance on RTX GPUs. These advancements build on NVIDIA's world-leading AI platform, which accelerates more than 500 applications and games on over 100 million RTX AI PCs and workstations worldwide.
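
For developers curious what the ONNX Runtime with DirectML path looks like in code, the short sketch below shows the general pattern: create an inference session and request the DirectML execution provider so the model runs on the GPU. The model path, input name, and tensor shape are placeholders; it assumes Windows with the onnxruntime-directml package and an already-exported ONNX model.

# Minimal sketch: run an ONNX model through ONNX Runtime's DirectML
# execution provider on Windows (requires the onnxruntime-directml package).
# "model.onnx" and the input shape are placeholders for illustration.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],  # GPU first, CPU fallback
)
input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder shape
outputs = session.run(None, {input_name: dummy_input})
print("output shapes:", [o.shape for o in outputs])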

AMD Instinct MI300X Accelerators Power Microsoft Azure OpenAI Service Workloads and New Azure ND MI300X V5 VMs

Today at Microsoft Build, AMD (NASDAQ: AMD) showcased its latest end-to-end compute and software capabilities for Microsoft customers and developers. By using AMD solutions such as AMD Instinct MI300X accelerators, ROCm open software, Ryzen AI processors and software, and Alveo MA35D media accelerators, Microsoft is able to provide a powerful suite of tools for AI-based deployments across numerous markets. The new Microsoft Azure ND MI300X virtual machines (VMs) are now generally available, giving customers such as Hugging Face access to impressive performance and efficiency for their most demanding AI workloads.

"The AMD Instinct MI300X and ROCm software stack is powering the Azure OpenAI Chat GPT 3.5 and 4 services, which are some of the world's most demanding AI workloads," said Victor Peng, president, AMD. "With the general availability of the new VMs from Azure, AI customers have broader access to MI300X to deliver high-performance and efficient solutions for AI applications."

AMD Said to be Planning Taiwan R&D Center

According to the Taiwan Central News Agency, AMD is considering opening an R&D center in Taiwan. AMD is said to have applied with the Ministry of Economic Affairs (MOEA) as part of Taiwan's "A+ global R&D and innovation partnership program" to set up a new R&D facility. The government partnership program covers three fields, namely AI, new-generation semiconductors including high-power and high-frequency ICs, and new 5G network structures, and it aims to get both local and international businesses to set up new R&D centers. AMD has yet to announce any plans for the potential R&D center, and the MOEA has declined to share any details with local media in Taiwan.

However, an unnamed source with inside knowledge of the matter has revealed that, based on the application, AMD is looking at investing around NT$5 billion (~US$155 million). The same source also mentioned that the MOEA has stipulated conditions AMD must meet, which among other things involve working with local IC design companies to help further develop Taiwan's IC design industry, working with local companies to produce servers with AI chips, and working with local universities to cultivate talent. Furthermore, the MOEA is said to have asked AMD to recruit at least 20 percent of its R&D center workforce from outside of Taiwan, to avoid competing with local companies for staff. AMD could make an announcement about the R&D center at Computex, but it's worth keeping in mind that these things take time. Back in 2021, NVIDIA announced that it would set up an R&D center in Taiwan with a much bigger budget of NT$24.3 billion plus a government subsidy of a further NT$6.7 billion, yet NVIDIA has still to announce the opening of its Taiwan R&D center.

Microsoft Has Lost its Mind—$450 for a Keyboard, AI Button Included

During yesterday's Microsoft Event, in addition to the new AI features and Surface devices, the company also announced its new Surface Pro Flex wireless keyboard, compatible with the Surface Pro 8, 9, 10, and 11 models. Officially called "Surface Pro Flex Keyboard with Slim Pen," the wireless keyboard is a $449.98 add-on that doesn't actually include Microsoft's Surface Slim Pen 2, only an indentation at the top of the keyboard to charge it. So, without a pen, what does the keyboard offer? According to Microsoft, up to 41 hours of continuous typing while detached, a dedicated Copilot key, and a touchpad with a vibration function for haptic feedback. The touchpad supports varying pressure levels, which applications can use to build different functionality.

Its thin and light design measures 289x221x5.25 mm and weighs 340 g. The dedicated Copilot key summons the AI assistant and all the functionality Microsoft plans for its Windows 11 integration, which has been completely revamped for AI. The material of choice is an Alcantara polyester blend, which can be difficult to maintain, and the keyboard is sold in either black or bright sapphire, becoming available on June 18. Given the name "Surface Pro Flex Keyboard with Slim Pen" and a price tag of $449.98, it is curious that Microsoft doesn't bundle the Surface Slim Pen 2 with the keyboard, even though it is designed with a dedicated wireless charging indentation for the pen.

RISC-V Adoption to Grow 50% Yearly Due to AI Processor Demand

The open-source RISC-V instruction set architecture is shaping up for explosive growth over the next several years, primarily fueled by increasing demand for artificial intelligence (AI) across industries. A new forecast from tech research firm Omdia predicts that shipments of RISC-V-based chips will grow at an astonishing 50% annual rate between 2024 and 2030, reaching a staggering 17 billion RISC-V units in 2030. The automotive sector is expected to see the most significant growth in RISC-V adoption, with a forecasted annual increase of 66%. This growth is largely attributed to the unique benefits RISC-V offers in this industry, including its flexibility and customizability.

The rise of AI in the automotive sector, particularly in applications such as autonomous driving and advanced driver assistance systems (ADAS), is also expected to contribute to RISC-V's success. Industrial applications will continue to be the largest domain for RISC-V, accounting for approximately 46% of sales, but growth in the automotive sector is expected to outpace other industries, driven by increasing demand for AI-enabled technologies. The Omdia forecast is based on current trends and the growing adoption of RISC-V by major players in the tech industry, including Google and Meta, which are investing in RISC-V to power their custom solutions. Additionally, chip producers like Qualcomm are creating their own RISC-V chips for consumer use, further solidifying the technology's future position in the market.
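
As a quick sanity check of the forecast's arithmetic, the snippet below works backward from the 2030 figure (assuming a constant 50% compound annual growth rate over the six years from 2024 to 2030) to see what baseline shipment volume it implies; the roughly 1.5 billion-unit result is our own back-of-the-envelope estimate, not an Omdia figure.

# Back-of-the-envelope check: what 2024 base does a constant 50% CAGR
# and 17 billion units in 2030 imply? (Our estimate, not an Omdia number.)
units_2030 = 17e9
cagr = 0.50
years = 2030 - 2024  # six years of compounding

implied_2024 = units_2030 / (1 + cagr) ** years
print(f"Implied 2024 shipments: {implied_2024 / 1e9:.2f} billion units")
# Prints roughly 1.49 billion units, i.e. about an 11.4x increase by 2030.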

Apple COO Meets with TSMC CEO to Reserve First Batch of 2 nm Allocation

Apple is locked in fierce competition to stay ahead in the client AI applications race, and needs access to the latest foundry process at TSMC to build its future-generation SoCs on. The company's COO, Jeff Williams, reportedly paid a visit to TSMC CEO C.C. Wei to discuss Apple's allocation of the Taiwanese foundry's 2 nm-class silicon fabrication process for its next-generation M-series and A-series SoCs powering future generations of iPhone, iPad, and Mac. Taiwan-based industry publication Economic Daily, which broke the story, says that it isn't just an edge in performance and efficiency that Apple is after, but also leadership in generative AI and client AI applications. The company has reportedly invested over $100 billion in generative AI research and development over the past five years.

Apple's latest silicon, the M4 SoC, which debuted with the iPad Pro earlier this month, is built on TSMC's N3E (3 nm-class) node, and it's widely expected that the rest of the M4 line of SoCs for Macs, and the "A18," could be built on the same process, which would cover Apple for the rest of 2024, going into the first half of 2025. TSMC is expected to commence mass-production of chips on its 2 nm node in 2025, which is why Apple is in the TSMC boss's office to seek the first foundry allocation.

AAEON Releases de Next, the World's Smallest Edge PC with an Embedded Intel Core CPU

Following the success of the de next form factor, which included the world's smallest single-board computers to host Intel Core and AMD Ryzen V2000 Series processors, AAEON has announced the de next brand's first edge system, the de next-TGU8-EZBOX. Mirroring its record-breaking SBC counterparts, the de next-TGU8-EZBOX is the world's smallest edge PC with an embedded Intel Core processor, measuring just 95.5 mm x 69.5 mm x 42.5 mm with a heatsink, and 95.5 mm x 69.5 mm x 45.4 mm with an active cooler. The system is suitable for edge computing and IoT gateway solutions, especially in scenarios where space is limited. Furthermore, the benefits that come with fanless heat dissipation, such as low noise and protection against dust, will appeal to customers in both the digital signage and industrial markets.

The system retains many of the I/O features of its board counterparts, including multiple high-speed interfaces for industrial-grade performance. Two LAN ports, operating at both 2.5GbE and GbE speeds, are conveniently grouped together with two USB 3.2 Gen 2 ports and one HDMI 1.4b port for easy interface access during operation.