News Posts matching #AI


RISC-V Adoption to Grow 50% Yearly Due to AI Processor Demand

The open-source RISC-V instruction set architecture is shaping up for explosive growth over the next several years, primarily fueled by the increasing demand for artificial intelligence (AI) across industries. A new forecast from tech research firm Omdia predicts that shipments of RISC-V-based chips will skyrocket at a 50% annual growth rate between 2024 and 2030, reaching a staggering 17 billion units in 2030. The automotive sector is expected to see the most significant growth in RISC-V adoption, with a forecast annual increase of 66%. This growth is largely attributed to the unique benefits RISC-V offers in this industry, including its flexibility and customizability.
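For readers who want to see how the headline numbers fit together, a minimal back-of-the-envelope sketch follows; the 2024 baseline is inferred from the 2030 figure and the stated growth rate, and is not a number Omdia quotes directly.

```python
# Back-of-the-envelope check of the Omdia forecast: a 50% compound annual
# growth rate (CAGR) ending at 17 billion units in 2030 implies a 2024
# baseline of roughly 1.5 billion RISC-V chips (an inference, not a quoted figure).
units_2030 = 17e9      # forecast shipments in 2030
cagr = 0.50            # 50% annual growth
years = 2030 - 2024    # six growth periods

implied_2024 = units_2030 / (1 + cagr) ** years
print(f"Implied 2024 shipments: {implied_2024 / 1e9:.2f} billion units")

# Year-by-year trajectory under the same assumptions
for year in range(2025, 2031):
    units = implied_2024 * (1 + cagr) ** (year - 2024)
    print(year, f"~{units / 1e9:.1f} billion units")
```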

The rise of AI in the automotive sector, particularly in applications such as autonomous driving and advanced driver assistance systems (ADAS), is also expected to contribute to RISC-V's success. Industrial applications will remain the largest domain for RISC-V, accounting for approximately 46% of sales. However, growth in the automotive sector is expected to outpace other industries, driven by the increasing demand for AI-enabled technologies. The forecast from Omdia is based on current trends and the growing adoption of RISC-V by major players in the tech industry, including Google and Meta, which are investing in RISC-V to power their custom solutions. Additionally, chip producers like Qualcomm are creating their own RISC-V chips for consumer use, further solidifying the technology's future position in the market.

Apple COO Meets with TSMC CEO to Reserve First Batch of 2 nm Allocation

Apple is locked in a fierce competition to stay ahead in the client AI applications race, and needs access to TSMC's latest foundry process to build its future-generation SoCs on. The company's COO, Jeff Williams, reportedly paid a visit to TSMC CEO CC Wei to discuss Apple's allocation of the Taiwanese foundry's 2 nm-class silicon fabrication process for its next-generation M-series and A-series SoCs powering future generations of the iPhone, iPad, and Mac. Taiwan-based industry observer Economic Daily, which broke this story, says that it isn't just an edge in performance and efficiency that Apple is after, but also leadership in generative AI and client AI applications. The company has reportedly invested over $100 billion in generative AI research and development over the past 5 years.

Apple's latest silicon, the M4 SoC, which debuted with the iPad Pro earlier this month, is built on TSMC's N3E (3 nm-class) node, and it's widely expected that the rest of the M4 line of SoCs for Macs, and the "A18," could be built on the same process, which would cover Apple for the rest of 2024, going into the first half of 2025. TSMC is expected to commence mass-production of chips on its 2 nm node in 2025, which is why Apple is in the TSMC boss's office to seek the first foundry allocation.

AAEON Releases de Next, the World's Smallest Edge PC with an Embedded Intel Core CPU

Following the success of the de next form factor, which included the world's smallest single-board computers to host Intel Core and AMD Ryzen V2000 Series processors, AAEON has announced the de next brand's first edge system, the de next-TGU8-EZBOX. Mirroring its record-breaking SBC counterparts, the de next-TGU8-EZBOX is the world's smallest edge PC with an embedded Intel Core processor, measuring just 95.5 mm x 69.5 mm x 42.5 mm with a heatsink, and 95.5 mm x 69.5 mm x 45.4 mm with an active cooler. The system is suitable for edge computing and IoT gateway solutions, especially in scenarios where space is limited. Furthermore, the benefits that come with fanless heat dissipation, such as low noise and protection against dust, will appeal to customers in both the digital signage and industrial markets.

The system retains many of the I/O features of its board counterparts, including multiple high-speed interfaces for industrial-grade performance. Two LAN ports, operating at both 2.5GbE and GbE speeds, are conveniently grouped together with two USB 3.2 Gen 2 ports and one HDMI 1.4b port for easy interface access during operation.

AMD Promises Next-Generation Product Announcements in its Computex Keynote

AMD on Monday said that its 2024 Computex Keynote address slated for June 3, will see a slew of next-generation product announcements. "Join us as Dr. Lisa Su delivers the Computex 2024 opening keynote and shares the latest on how AMD and our partners are pushing the envelope with our next generation of high-performance PC, data center and AI solutions," the brief release said.

AMD is widely expected to unveil its next-generation Ryzen 9000 "Strix Point" mobile processors for AI PCs capable of powering the recently announced Microsoft Copilot+, its next-generation Ryzen 9000 "Granite Ridge" desktop processors, its 5th Generation EPYC "Turin" server processors, and possibly even its next-generation Radeon RX graphics cards based on the RDNA 4 architecture. At the heart of all its processor announcements is the new "Zen 5" CPU microarchitecture, which is expected to introduce an IPC improvement of over 10% along with significant gains in AVX-512 performance over "Zen 4," which should benefit certain kinds of AI workloads.

Intel's Lunar Lake Processors Arriving Q3 2024

Starting Q3 2024 in time for the holiday season, Intel's upcoming client processors (code-named Lunar Lake) will power more than 80 new laptop designs across more than 20 original equipment manufacturers, delivering AI performance at a global scale for Copilot+ PCs. Lunar Lake will get the Copilot+ experiences, like Recall, via an update when available. Building on the success of Intel Core Ultra processors and with the addition of Lunar Lake, Intel will ship more than 40 million AI PC processors this year.

"With breakthrough power efficiency, the trusted compatibility of x86 architecture and the industry's deepest catalog of software enablement across the CPU, GPU and NPU, we will deliver the most competitive joint client hardware and software offering in our history with Lunar Lake and Copilot+," said Michelle Johnston Holthaus, Intel executive vice president and general manager of the Client Computing Group.

Microsoft Introduces Copilot+ PCs

Today, at a special event on our new Microsoft campus, we introduced the world to a new category of Windows PCs designed for AI, Copilot+ PCs. Copilot+ PCs are the fastest, most intelligent Windows PCs ever built. With powerful new silicon capable of an incredible 40+ TOPS (trillion operations per second), all-day battery life and access to the most advanced AI models, Copilot+ PCs will enable you to do things you can't on any other PC. Easily find and remember what you have seen in your PC with Recall, generate and refine AI images in near real-time directly on the device using Cocreator, and bridge language barriers with Live Captions, translating audio from 40+ languages into English.

These experiences come to life on a set of thin, light and beautiful devices from Microsoft Surface and our OEM partners Acer, ASUS, Dell, HP, Lenovo and Samsung, with pre-orders beginning today and availability starting on June 18. Starting at $999, Copilot+ PCs offer incredible value. This first wave of Copilot+ PCs is just the beginning. Over the past year, we have seen an incredible pace of AI innovation in the cloud with Copilot, allowing us to do things that we never dreamed possible. Now, we begin a new chapter with AI innovation on the device. We have completely reimagined the entirety of the PC - from silicon to the operating system, the application layer to the cloud - with AI at the center, marking the most significant change to the Windows platform in decades.

HP Unlocks New Era of AI Experiences Powering Work and Creation

Today at Microsoft's AI Vision Event, HP Inc. unveiled a new class of next-generation AI PCs designed for the AI era, built to empower people to work and create more meaningfully. The way we live and work is no longer confined to a desk; we are creating at home, from the office, and everywhere in between. At the same time, AI is rapidly changing everything, including consumer and professional expectations for productivity and creativity. More than half of early tech adopters believe AI will save them time, and 72% of business leaders believe AI will help employees focus on more productive work. Additionally, developers are releasing a steady stream of AI-enabled software designed for new experiences and capabilities that can only be unlocked with next-gen AI PCs.

To capitalize on both flexible work and AI, HP has created a new category of devices crafted for work and creation. Today, HP unveiled the HP OmniBook X AI PC and HP EliteBook Ultra AI PC, HP's first next-gen AI PCs built from the ground up with the latest ARM architecture to harness the most powerful AI technologies.

Qualcomm Snapdragon X Series is the Exclusive Platform to Power Windows PCs with Copilot+

During the Copilot+ debut, Microsoft and global OEMs announced PCs powered by Snapdragon X Elite and Snapdragon X Plus - the only devices that are capable of bringing Copilot+ experiences to life today. The leading AI technology and performance efficiency of these platforms will power this groundbreaking new category, as Copilot+ revolutionizes how users interact with their PCs. Together, Qualcomm Technologies, Inc. and Microsoft are taking intelligent computing to the next level and transforming the PC experience.

Qualcomm Technologies is restoring performance leadership to the Windows PC ecosystem with our leading NPU powered by Snapdragon X Elite, which delivers the highest NPU performance per watt for laptops, up to 2.6X vs. M3 and up to 5.4X vs. Core Ultra 7. With the integrated Qualcomm Hexagon NPU architecture, it can deliver up to 24 TOPS/watt peak performance in use cases like Super Resolution, and with our leading Qualcomm Oryon CPU, Snapdragon X Elite leads in performance per watt, matching competitors' peak PC CPU performance at 60% less power.

Dell Introduces Comprehensive Portfolio of Copilot+ AI PCs

There's no doubt the AI PC revolution is here. We're as excited now about the future of the PC as we were when Michael founded the company 40 years ago. New transformative AI experiences will bring enhanced productivity and efficiency to the hands of consumers and professionals alike. Those with an AI PC will never look back, and everyone will want to be part of the action. We're here to lead the way.

As part of the Dell AI Factory's growing AI devices and infrastructure offerings, Dell expands our broad portfolio of AI PCs and workstations with the introduction of the most Copilot+ PCs powered by Snapdragon X Elite and Snapdragon X Plus processors. This brand-new class of devices transforms your AI PC experience with extraordinary performance and battery life, upleveled productivity and powerful security. Our five new laptops, XPS 13, Inspiron 14 Plus, Inspiron 14, Latitude 7455 and Latitude 5455, offer a range of consumer and commercial options that deliver exceptional speed and AI performance to elevate computing and simplify tasks. So, whether you're a business professional looking to spend more time on strategic work and less time on tedious to-dos, or you're a student looking to prioritize your studies while being untethered to a desk, we have you covered.

Lenovo Supercharges Copilot+ PCs with Latest Yoga Slim 7x and ThinkPad T14s Gen 6

Today, Lenovo launched the Lenovo Yoga Slim 7x and Lenovo ThinkPad T14s Gen 6, its first next-generation Copilot+ PCs powered by Snapdragon X Elite. As the PC industry enters a new phase of the artificial intelligence era, Lenovo is poised to offer new levels of personalization in personal computing across its PC portfolio. Combining intelligent, software-powered local processing of tasks with increased productivity, creativity, and security, these Copilot+ PCs deliver a whole new experience in PC interaction. Lenovo is expanding its already comprehensive portfolio of AI-ready devices, software, and optimized services with two new laptops for consumers and business users: the Lenovo Yoga Slim 7x and the Lenovo ThinkPad T14s Gen 6.

Powered by Qualcomm Technologies' new Snapdragon X Elite processor featuring the 12-core Qualcomm Oryon CPU, Qualcomm Adreno GPU and a dedicated Qualcomm Hexagon NPU (neural processing unit), the new laptops deliver leading PC performance per watt with the fastest AI NPU processing to date, at up to 45 trillion operations per second (TOPS). With the latest enhancements from Microsoft and Copilot+, users can now access Large Language Model (LLM) capabilities even when offline, offering seamless productivity and creativity. The latest Lenovo laptops allow users to tap into the extensive Copilot+ knowledge base, empowering them to explore endless creative possibilities. By leveraging generative AI and machine learning, Copilot+ assists in composing compelling text, crafting engaging visuals, and streamlining common productivity tasks. With the ability to work offline with the same fluidity as online, the Yoga Slim 7x and the ThinkPad T14s Gen 6 set new standards in AI PC innovation, promising a futuristic and streamlined user experience for end users.

Acer Announces Its First Copilot+ PC, the Swift 14 AI Laptop

Acer today launched its first Copilot+ PC with the Swift 14 AI laptop, in collaboration with Microsoft and Qualcomm Technologies, Inc., ushering in a new AI era with brand new user experiences and AI capabilities on Windows 11. The Swift 14 AI comes in multiple models, powered by the Snapdragon X Elite and Snapdragon X Plus platforms, both of which feature one of the world's fastest NPUs for laptops to enable on-device AI processing. Users can streamline everyday tasks with smarter PC functions and tackle complex workloads more effectively.

"Engineered for AI from the inside out, the Swift 14 AI is the first among many Acer Copilot+ PCs to come," said Jerry Kao, COO, Acer Inc. "These next-generation AI PCs see significant leaps in AI processing power, unlocking brand new experiences that we know users will love."

Galaxy Book4 Edge: Samsung's Next-Gen AI PC Expands the Galaxy AI Ecosystem

Samsung Electronics today unveiled the Galaxy Book4 Edge, the next generation of AI PCs. With 45 TOPS of NPU AI computing performance and cutting-edge hybrid AI integrations, the Galaxy Book4 Edge amplifies the PC experience and offers the most hyperconnected mobile AI ecosystem yet. Together, Samsung and leading industry partners are propelling AI innovation and unleashing a next-generation AI PC with the Galaxy Book4 Edge.

"The Galaxy Book4 Edge expands our vision of AI connectivity—bridging the gap between mobile and PC, while democratizing AI experiences that change the way we operate each and every day," said TM Roh, President and Head of Mobile eXperience Business at Samsung Electronics. "This next-generation AI PC unlocks new levels of computing performance, made possible by our proven Samsung Galaxy interface and open collaboration with industry leaders to bring our users world-class hardware and software solutions."

Intel's Next-Gen Falcon Shores GPU to Consume 1500 W, No Air-Cooled Variant Planned

Intel's upcoming Falcon Shores GPU is shaping up to be a powerhouse for AI and high-performance computing (HPC) workloads, but it will also be an extreme power hog. The processor, combining Gaudi and Ponte Vecchio successors into a single GPU, is expected to consume an astonishing 1500 W of power - more than even Nvidia's beefy B200 accelerator, which draws 1000 W. This immense power consumption will require advanced cooling solutions to ensure the Falcon Shores GPU operates efficiently and safely. Intel's partners may turn to liquid cooling or even full immersion liquid cooling, a technology Intel has been promoting for power-hungry data center hardware. The high power draw is the cost of the Falcon Shores GPU's formidable performance promises. Intel claims it will deliver 5x higher performance per watt and 5x more memory capacity and bandwidth compared to its Ponte Vecchio products.

Intel may need to develop proprietary hardware modules or a new Open Accelerator Module (OAM) spec to support such extreme power levels, as the current OAM 2.0 tops out around 1000 W. Slated for release in 2025, the Falcon Shores GPU will be Intel's GPU IP based on its next-gen Xe graphics architecture. It aims to be a major player in the AI accelerator market, backed by Intel's robust oneAPI software development ecosystem. While the 1500 W power consumption is sure to raise eyebrows, Intel is betting that the Falcon Shores GPU's supposedly impressive performance will make it an enticing option for AI and HPC customers willing to invest in robust cooling infrastructure. The ultra-high-end accelerator market is heating up, and the HPC accelerator market needs a Ponte Vecchio successor.

TSMC Unveils Next-Generation HBM4 Base Dies, Built on 12 nm and 5 nm Nodes

During the European Technology Symposium 2024, TSMC announced its readiness to manufacture next-generation HBM4 base dies using both 12 nm and 5 nm nodes. This significant development is expected to substantially improve the performance, power consumption, and logic density of HBM4 memory, catering to the demands of high-performance computing (HPC) and artificial intelligence (AI) applications. The shift from a traditional 1024-bit interface to an ultra-wide 2048-bit interface is a key aspect of the new HBM4 standard. This change will enable the integration of more logic and higher performance while reducing power consumption. TSMC's N12FFC+ and N5 processes will be used to produce these base dies, with the N12FFC+ process offering a cost-effective solution for achieving HBM4 performance and the N5 process providing even more logic and lower power consumption at HBM4 speeds.

The company is collaborating with major HBM memory partners, including Micron, Samsung, and SK Hynix, to integrate advanced nodes for HBM4 full-stack integration. TSMC's base die, fabricated using the N12FFC+ process, will be used to install HBM4 memory stacks on a silicon interposer alongside system-on-chips (SoCs). This setup will enable the creation of 12-Hi (48 GB) and 16-Hi (64 GB) stacks with per-stack bandwidth exceeding 2 TB/s. TSMC's collaboration with EDA partners like Cadence, Synopsys, and Ansys ensures the integrity of HBM4 channel signals, thermal accuracy, and electromagnetic interference (EMI) in the new HBM4 base dies. TSMC is also optimizing CoWoS-L and CoWoS-R for HBM4 integration, meaning that massive high-performance chips are already utilizing this technology and getting ready for volume manufacturing.
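A rough sanity check of those figures, assuming the nominal 2048-bit interface and the quoted stack capacities, is sketched below; this is illustrative arithmetic, not TSMC or JEDEC data.

```python
# Implications of the HBM4 numbers quoted above: >2 TB/s per stack over a
# 2048-bit interface, and 12-Hi / 16-Hi stacks of 48 GB / 64 GB.
interface_bits = 2048
stack_bandwidth_tbps = 2.0  # TB/s per stack (lower bound from the article)

# Per-pin data rate implied by the per-stack bandwidth
per_pin_gbps = stack_bandwidth_tbps * 1e12 * 8 / interface_bits / 1e9
print(f"Implied per-pin data rate: ~{per_pin_gbps:.1f} Gb/s")  # ~7.8 Gb/s

# Per-die capacity implied by the stack heights and capacities
for stack_height, capacity_gb in ((12, 48), (16, 64)):
    per_die_gb = capacity_gb / stack_height
    print(f"{stack_height}-Hi stack: {per_die_gb:.0f} GB ({per_die_gb * 8:.0f} Gb) per DRAM die")
```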

ASUS Leaks its own Snapdragon X Elite Notebook

Courtesy of ASUS Vietnam (via @rquandt on X/Twitter), we now have an idea not only of what ASUS' first Qualcomm Snapdragon X Elite notebook will look like, but also of its main specifications. It will share the Vivobook S 15 OLED branding with other notebooks from ASUS, although the leaked model carries the model number S5507QA-MA089WS. At its core is a Qualcomm Snapdragon X Elite X1E-78-100 SoC, which is the base model from Qualcomm. The SoC consists of 12 Oryon cores, of which eight are performance cores and four are energy-efficient cores. A peak multi-threaded clock speed of 3.4 GHz and 42 MB of cache, as well as a 75 TOPS AI engine, round off the SoC specs. The SoC is also home to a Qualcomm Adreno GPU, but so far Qualcomm hasn't released any useful specs about the GPU in the Snapdragon X Elite series of chips.

ASUS has paired the SoC with 32 GB of LPDDR5X memory of an unknown clock speed, although Qualcomm officially supports speeds of up to 8,448 MT/s in a configuration that is unusual for PC users: eight 16-bit channels, for a bandwidth of up to 135 GB/s. For comparison, Intel's latest Core Ultra processors max out at LPDDR5X-7467 and up to 120 GB/s of memory bandwidth. Other features include a 1 TB PCIe 4.0 NVMe SSD, a glossy 15.6-inch, 2,880 x 1,620 resolution, 120 Hz OLED display with 600 nits peak brightness, and a 70 Wh battery. It's unclear what connectivity options will be on offer, but judging by the screenshot below, we can at least expect an HDMI output as well as a pair of USB Type-C ports, a microSD card slot, and a headphone jack. As far as pricing goes, Roland Quandt suggests a €1,500 base price on X/Twitter, but we'll have to wait for the official launch to find out what these Arm-based laptops will retail for. ASUS Vietnam has already removed the page from its website.
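The 135 GB/s figure follows directly from the interface numbers quoted above; the short sketch below shows the arithmetic, derived from the article's figures rather than from an ASUS or Qualcomm spec sheet.

```python
# Peak memory bandwidth implied by eight 16-bit LPDDR5X channels at 8,448 MT/s.
channels = 8                # LPDDR5X channels on Snapdragon X Elite
channel_width_bits = 16     # each channel is 16 bits wide
transfer_rate_mtps = 8448   # mega-transfers per second

bandwidth_gbs = channels * channel_width_bits * transfer_rate_mtps * 1e6 / 8 / 1e9
print(f"Snapdragon X Elite peak memory bandwidth: ~{bandwidth_gbs:.0f} GB/s")  # ~135 GB/s

# The Core Ultra figure quoted above (120 GB/s at 7,467 MT/s) corresponds to the
# same 128-bit total width: 128 * 7467e6 / 8 / 1e9 ≈ 119.5 GB/s.
```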

Phison Announces Pascari Brand of Enterprise SSDs, Debuts X200 Series Across Key Form-factors

Phison is arguably the most popular brand for SSD controllers in the client segment, but it is turning more of its attention to the vast enterprise segment. The company had been making first-party enterprise SSDs under its main marque, but decided that the lineup needed its own brand that enterprise customers could better discern from the controller ASIC main brand. We hence have Pascari and Imagin. Pascari is an entire product family of fully built enterprise SSDs from Phison. The company's existing first-party drives under the main brand will probably migrate to the Pascari catalog. Imagin, on the other hand, is a design service for large cloud and data-center customers, so they can develop bespoke tiered storage solutions at scale.

The Pascari line of enterprise SSDs is designed completely in-house by Phison and features the company's latest controllers, firmware, PCB, and PMIC, with on-device power-failure protection on select products. The third-party components are the NAND flash and DRAM chips, both of which have been thoroughly evaluated by Phison for the best performance, endurance, and reliability at its enterprise SSD design facility in Broomfield, Colorado. Phison already has a constellation of industry partners and suppliers, and the company's drives even power space missions; the Pascari brand better differentiates the fully built SSD lineup from the controller ASIC business. Pascari makes its debut with the X200 series of high-performance SSDs for high-access-heat (hot) data. The drives leverage Phison's latest PCIe Gen 5 controller technology and the most optimized memory components, and are available in all contemporary server storage form factors.

Lenovo Announces its New AI PC ThinkPad P14s Gen 5 Mobile Workstation Powered by AMD Ryzen PRO Processors

Today, Lenovo launched the Lenovo ThinkPad P14s Gen 5, designed for professionals who need top-notch performance in a portable 14-inch chassis. Featuring a stunning 16:10 display, this mobile workstation is powered by AMD Ryzen PRO 8040 HS-Series processors. These processors are ultra-advanced and energy-efficient, making them perfect for use in thin and light mobile workstations. The AMD Ryzen PRO HS-Series processors also come with built-in Artificial Intelligence (AI) capabilities, including an integrated Neural Processing Unit (NPU) for optimized performance in AI workflows.

The Lenovo ThinkPad P14s Gen 5 comes with independent software vendor (ISV) certifications and integrated AMD Radeon graphics, making it ideal for running applications like AutoCAD, Revit, and SOLIDWORKS with seamless performance. Built for mobile power users, this workstation offers advanced ThinkShield security features and passes comprehensive MIL-SPEC testing for ultimate durability.

Dell XPS Roadmap Leak Spills Beans on Several Upcoming Intel, AMD, and Qualcomm Processors

A product roadmap leak at leading PC OEM Dell disclosed the tentative launch dates of several future generations of processors from Intel, AMD, and Qualcomm. The slide details hardware platforms for future revisions of the company's premium XPS notebooks. Given that Dell remains one of the largest PC OEMs, the dates revealed in the leaked slides are highly plausible.

In chronological order, Dell expects Intel's Core Ultra 200V series "Lunar Lake-MX" processor in September 2024, which should mean product unveilings at Computex. It's interesting to note that Intel is only designing "Lunar Lake" for the -MX memory-on-package segment. This chip squares off against Apple's M3, M4, and possibly even the M3 Pro. Intel also has its ambitious "Arrow Lake" architecture planned for the second half of 2024, hence the lack of product overlap—there won't be an "Arrow Lake-MX."

TOP500: Frontier Keeps Top Spot, Aurora Officially Becomes the Second Exascale Machine

The 63rd edition of the TOP500 reveals that Frontier has once again claimed the top spot, despite no longer being the only exascale machine on the list. Additionally, a new system has found its way into the Top 10.

The Frontier system at Oak Ridge National Laboratory in Tennessee, USA remains the most powerful system on the list with an HPL score of 1.206 EFlop/s. The system has a total of 8,699,904 combined CPU and GPU cores, an HPE Cray EX architecture that combines 3rd Gen AMD EPYC CPUs optimized for HPC and AI with AMD Instinct MI250X accelerators, and it relies on Cray's Slingshot 11 network for data transfer. On top of that, this machine has an impressive power efficiency rating of 52.93 GFlops/Watt - putting Frontier at the No. 13 spot on the GREEN500.
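Those two figures also let you estimate the machine's power draw during the benchmark run; the sketch below derives it from the quoted HPL score and GREEN500 efficiency and is an inferred value, not one published in the list itself.

```python
# Frontier's implied power draw during the HPL run, derived from the quoted
# 1.206 EFlop/s score and 52.93 GFlops/Watt efficiency.
hpl_eflops = 1.206               # EFlop/s
efficiency_gflops_per_w = 52.93  # GFlops/Watt (GREEN500)

power_mw = hpl_eflops * 1e18 / (efficiency_gflops_per_w * 1e9) / 1e6
print(f"Implied HPL power draw: ~{power_mw:.1f} MW")  # roughly 22.8 MW
```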

Intel-powered Aurora Supercomputer Ranks Fastest for AI

At ISC High Performance 2024, Intel announced in collaboration with Argonne National Laboratory and Hewlett Packard Enterprise (HPE) that the Aurora supercomputer has broken the exascale barrier at 1.012 exaflops and is the fastest AI system in the world dedicated to AI for open science, achieving 10.6 AI exaflops. Intel will also detail the crucial role of open ecosystems in driving AI-accelerated high-performance computing (HPC). "The Aurora supercomputer surpassing exascale will allow it to pave the road to tomorrow's discoveries. From understanding climate patterns to unraveling the mysteries of the universe, supercomputers serve as a compass guiding us toward solving truly difficult scientific challenges that may improve humanity," said Ogi Brkic, Intel vice president and general manager of Data Center AI Solutions.

Designed as an AI-centric system from its inception, Aurora will allow researchers to harness generative AI models to accelerate scientific discovery. Significant progress has been made in Argonne's early AI-driven research. Success stories include mapping the human brain's 80 billion neurons, high-energy particle physics enhanced by deep learning, and drug design and discovery accelerated by machine learning, among others. The Aurora supercomputer is an expansive system with 166 racks, 10,624 compute blades, 21,248 Intel Xeon CPU Max Series processors, and 63,744 Intel Data Center GPU Max Series units, making it one of the world's largest GPU clusters.

NVIDIA Blackwell Platform Pushes the Boundaries of Scientific Computing

Quantum computing. Drug discovery. Fusion energy. Scientific computing and physics-based simulations are poised to make giant steps across domains that benefit humanity as advances in accelerated computing and AI drive the world's next big breakthroughs. NVIDIA unveiled at GTC in March the NVIDIA Blackwell platform, which promises generative AI on trillion-parameter large language models (LLMs) at up to 25x less cost and energy consumption than the NVIDIA Hopper architecture.

Blackwell has powerful implications for AI workloads, and its technology capabilities can also help to deliver discoveries across all types of scientific computing applications, including traditional numerical simulation. By reducing energy costs, accelerated computing and AI drive sustainable computing. Many scientific computing applications already benefit: weather can be simulated at 200x lower cost and with 300x less energy, while digital twin simulations have 65x lower cost and 58x less energy consumption versus traditional CPU-based systems.

NVIDIA Grace Hopper Ignites New Era of AI Supercomputing

Driving a fundamental shift in the high-performance computing industry toward AI-powered systems, NVIDIA today announced nine new supercomputers worldwide are using NVIDIA Grace Hopper Superchips to speed scientific research and discovery. Combined, the systems deliver 200 exaflops, or 200 quintillion calculations per second, of energy-efficient AI processing power.

New Grace Hopper-based supercomputers coming online include EXA1-HE, in France, from CEA and Eviden; Helios at Academic Computer Centre Cyfronet, in Poland, from Hewlett Packard Enterprise (HPE); Alps at the Swiss National Supercomputing Centre, from HPE; JUPITER at the Jülich Supercomputing Centre, in Germany; DeltaAI at the National Center for Supercomputing Applications at the University of Illinois Urbana-Champaign; and Miyabi at Japan's Joint Center for Advanced High Performance Computing - established between the Center for Computational Sciences at the University of Tsukuba and the Information Technology Center at the University of Tokyo.

Apple Inches Closer to a Deal with OpenAI to Bring ChatGPT Technology to iPhone

To bring cutting-edge artificial intelligence capabilities to its flagship product, Apple is said to be finalizing a deal with OpenAI to integrate the ChatGPT technology into the upcoming iOS 18 for iPhones. According to Bloomberg, multiple sources report that after months of negotiations, the two tech giants are putting the finishing touches on a partnership that would be an important moment for consumer AI. However, OpenAI may not be Apple's only AI ally. The company has also reportedly been in talks with Google over licensing the Gemini chatbot, though no known agreement has been reached yet. The rare team-up between the fiercely competitive firms underscores the intense focus on AI integration across the industry.

Apple's strategic moves are a clear indication of its recognition of the transformative potential of advanced AI capabilities for the iPhone experience. The integration of OpenAI's language model could empower Siri to understand and respond to complex voice queries with deep contextual awareness. This could revolutionize the way Apple's customers interact with devices, offering hope for a more intuitive and advanced iPhone experience. Potential Gemini integration opens up another realm of possibilities around Google's image and multimodal AI capabilities. Future iPhones may be able to analyze and describe visual scenes, annotate images, generate custom imagery from natural language prompts, and even synthesize audio using AI vocals - all within a conversational interface. As the AI arms race intensifies, Apple wants to position itself at the forefront through these partnerships.

ASRock Announces Intel Arc GPU Version AI QuickSet Software Tool With OpenVINO Support

Leading global motherboard manufacturer ASRock has been releasing its AI QuickSet software tool for Microsoft Windows 10/11 and Canonical Ubuntu Linux platforms since the end of last year, helping users quickly download, install, and set up many artificial intelligence (AI) applications. The tool can also be accelerated by ASRock's custom AMD graphics cards, and it has received a positive response from the market. Today, ASRock continues this effort and launches an AI QuickSet software tool that supports its custom Intel Arc A-Series graphics cards, including the Intel Arc A770, A750, A580, A380, and A310 models, allowing users to enjoy generative artificial intelligence (AI) applications at their fingertips.

ASRock AI QuickSet software tool v1.0.3i supports the Microsoft Windows 10/11 64-bit operating system, allowing users to easily install the Stable Diffusion web UI with OpenVINO without delving into complex configuration settings. This Stable Diffusion web UI image generation tool, optimized through the Intel OpenVINO toolkit, harnesses the powerful computing capabilities of ASRock's Intel Arc A-Series graphics cards (including the Intel Arc A770, A750, A580, A380, and A310 models) to deliver excellent performance. This once again demonstrates ASRock's strong software and hardware R&D capabilities and its consideration for users, making the AI QuickSet software tool a premium choice for consumers who want to quickly experience the fun of generative artificial intelligence (AI) applications.

NVIDIA "Blackwell" Successor Codenamed "Rubin," Coming in Late-2025

NVIDIA has barely started shipping its "Blackwell" line of AI GPUs, and its next-generation architecture is already on the horizon. Codenamed "Rubin," after astronomer Vera Rubin, the new architecture will power NVIDIA's future AI GPUs with generational jumps in performance but, more importantly, a design focus on lowering power draw. This will become especially important as NVIDIA's current architectures already approach the kilowatt range and cannot scale boundlessly. TF International Securities analyst Ming-Chi Kuo says that NVIDIA's first AI GPU based on "Rubin," the R100 (not to be confused with an ATI GPU from many moons ago), is expected to enter mass production in Q4 2025, which means it could be unveiled and demonstrated sooner than that, and select customers could have access to the silicon earlier for evaluations.

The R100, according to Ming-Chi Kuo, is expected to leverage TSMC's 3 nm EUV FinFET process, specifically the TSMC-N3 node; in comparison, the new "Blackwell" B100 uses TSMC-N4P. The R100 will be a chiplet GPU with a 4x-reticle design, compared to Blackwell's 3.3x-reticle design, and will use TSMC's CoWoS-L packaging, just like the B100. The silicon is expected to be among the first users of HBM4 stacked memory, featuring 8 stacks of a yet unknown stack height. The Grace Rubin GR200 CPU+GPU combo could feature a refreshed "Grace" CPU built on the 3 nm node, likely an optical shrink meant to reduce power. A Q4 2025 mass-production target would mean that customers start receiving the chips by early 2026.
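If the R100 does ship with eight HBM4 stacks, the per-stack bandwidth TSMC cites in the HBM4 story above gives a rough floor on aggregate memory bandwidth; the sketch below is purely speculative and leaves capacity out, since the stack height is undisclosed.

```python
# Speculative aggregate memory bandwidth for a GPU with eight HBM4 stacks,
# using the >2 TB/s per-stack figure from the TSMC HBM4 item above.
stacks = 8
per_stack_tbps = 2.0  # TB/s, lower bound

aggregate_tbps = stacks * per_stack_tbps
print(f"Aggregate memory bandwidth: >{aggregate_tbps:.0f} TB/s")
```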