News Posts matching #AI


Raja Koduri to Present at Samsung Foundry Forum amid Intel's Outsourcing Efforts

Intel's chief architect and senior vice president of the discrete graphics division, Mr. Raja Koduri, is scheduled to present at the Samsung Foundry SAFE Forum, a global virtual conference open to everyone, with a talk titled "1000X More Compute for AI by 2025". So you might be wondering what Mr. Koduri is doing there. Unless you have been living under a rock, you know about Intel's struggles with node manufacturing, specifically the 10 nm delays that highlighted the company's difficulty in delivering a node on time. The same is happening with the 7 nm node, which has also experienced significant delays.

Intel has a contract to develop an exascale supercomputer, called Aurora, at Argonne National Laboratory. That supercomputer uses Intel's CPUs and the company's upcoming Xe GPUs. Since the company has manufacturing problems but is contractually bound to deliver these products to its customers, it has decided to look at external manufacturers, specifically for Xe graphics. Given that Mr. Koduri tweeted an image of himself visiting Samsung's Giheung fab in Korea, and is now presenting at the Samsung Foundry event, it is possible that Intel will tap Samsung's semiconductor manufacturing process for its Xe GPU efforts, with Samsung acting as the contract manufacturer.

Dialog Semiconductor Licenses its Non-Volatile ReRAM Technology to GLOBALFOUNDRIES for 22FDX Platform

Dialog Semiconductor, a leading provider of battery and power management, Wi-Fi, Bluetooth Low Energy (BLE), and industrial edge computing solutions, and GLOBALFOUNDRIES (GF), the world's leading specialty foundry, today announced that they have entered into an agreement under which Dialog licenses its Conductive Bridging RAM (CBRAM) technology to GLOBALFOUNDRIES. The resistive RAM (ReRAM)-based technology was pioneered by Adesto Technologies, which Dialog Semiconductor acquired in 2020. GLOBALFOUNDRIES will first offer Dialog's CBRAM as an embedded, non-volatile memory (NVM) option on its 22FDX platform, with plans to extend it to other platforms.

Dialog's proprietary and production proven CBRAM technology is a low power NVM solution designed to enable a range of applications from IoT and 5G connectivity to artificial intelligence (AI). Low power consumption, high read/write speeds, reduced manufacturing costs and tolerance for harsh environments make CBRAM particularly suitable for consumer, medical, and select industrial and automotive applications. Furthermore, CBRAM technology enables cost-effective embedded NVM for advanced technology nodes required for products in these markets.

NVIDIA and Atos Team Up to Build World's Fastest AI Supercomputer

NVIDIA today announced that the Italian inter-university consortium CINECA—one of the world's most important supercomputing centers—will use the company's accelerated computing platform to build the world's fastest AI supercomputer.

The new "Leonardo" system, built with Atos, is expected to deliver 10 exaflops of FP16 AI performance to enable advanced AI and HPC converged application use cases. Featuring nearly 14,000 NVIDIA Ampere architecture-based GPUs and NVIDIA Mellanox HDR 200 Gb/s InfiniBand networking, Leonardo will position Italy as a global leader in AI and high-performance computing research and innovation.
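For scale, a rough back-of-envelope division (our own arithmetic, using only the two figures quoted in the announcement) shows what each GPU would need to contribute on average to reach the headline number:

```python
# Average FP16 throughput each GPU must sustain for Leonardo to hit
# its quoted aggregate AI performance. Both inputs are from the announcement;
# the per-GPU result is derived here, not an NVIDIA figure.
total_fp16_flops = 10e18   # 10 exaflops of FP16 AI performance
num_gpus = 14_000          # "nearly 14,000" Ampere-based GPUs

per_gpu_tflops = total_fp16_flops / num_gpus / 1e12
print(f"~{per_gpu_tflops:.0f} TFLOPS FP16 per GPU on average")
```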

Arm Highlights its Next Two Generations of CPUs, codenamed Matterhorn and Makalu, with up to a 30% Performance Uplift

Editor's Note: This is written by Arm vice president and general manager Paul Williamson.

Over the last year, I have been inspired by the innovators who are dreaming up solutions to improve and enrich our daily lives. Tomorrow's mobile applications will be even more imaginative, immersive, and intelligent. To that point, the industry has come such a long way in making this happen. Take app stores for instance - we had the choice of roughly 500 apps when smartphones first began shipping in volume in 2007 and today there are 8.9 million apps available to choose from.

Mobile has transformed from a simple utility to the most powerful, pervasive device we engage with daily, much like Arm-based chips have progressed to more powerful but still energy-efficient SoCs. Although the chip-level innovation has already evolved significantly, more is still required as use cases become more complex, with more AI and ML workloads being processed locally on our devices.

NVIDIA Building UK's Most Powerful Supercomputer, Dedicated to AI Research in Healthcare

NVIDIA today announced that it is building the United Kingdom's most powerful supercomputer, which it will make available to U.K. healthcare researchers using AI to solve pressing medical challenges, including those presented by COVID-19.

Expected to come online by year end, the "Cambridge-1" supercomputer will be an NVIDIA DGX SuperPOD system capable of delivering more than 400 petaflops of AI performance and 8 petaflops of Linpack performance, which would rank it No. 29 on the latest TOP500 list of the world's most powerful supercomputers. It will also rank among the world's top 3 most energy-efficient supercomputers on the current Green500 list.

Seagate Launches SkyHawk AI 18TB Hard Drive

Seagate Technology plc, a world leader in data storage and management solutions, today announced it is shipping 18 TB SkyHawk Artificial Intelligence drives in volume. SkyHawk AI is the world's first purpose-built hard drive for artificial intelligence (AI)-enabled surveillance solutions, enabling quicker and smarter decisions. The new drive supports deep learning and machine learning workload streams for edge applications with ImagePerfectAI.

The capacity to retain more data over time is required for deep learning systems to become smarter and more accurate in their predictive analysis, and behavior analysis requires significantly more data than traditional video capture. SkyHawk AI simultaneously sustains 32 AI streams alongside 64 video streams and supports multi-bay NVR and AI-enabled NVR. SkyHawk AI offers a 550 TB/year workload rate, more than 3× that of standard surveillance hard drives, to manage the data deluge in complex video security environments without sacrificing performance. The drive intelligently adapts between traditional video workloads and video+AI workloads.
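To put the 550 TB/year workload rating in perspective, it can be converted into an average sustained data rate (assuming round-the-clock operation, which is our assumption rather than Seagate's):

```python
# Convert the 550 TB/year workload rating into an average data rate,
# assuming the drive handles data 24/7 for a full year.
tb_per_year = 550
seconds_per_year = 365 * 24 * 3600

avg_mb_per_s = tb_per_year * 1e12 / seconds_per_year / 1e6
print(f"~{avg_mb_per_s:.1f} MB/s sustained average")
```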

DFI's Miniaturized IPCs Empower Edge AI Applications

In the era of the Artificial Intelligence of Things (AIoT), an industrial PC (IPC) is expected to be more than just a computer for general data processing. Faced with increasing workloads at the edge, end devices are required to be smart, automated, and interconnected, which translates into demand for AI computing and machine-to-machine (M2M) communication in small-sized PCs.

The demand for AI computing has emerged on account of the decentralization trend of recent years: to reduce cloud computing workloads and costs, and to reinforce AI performance at the edge, high-end embedded solutions are a must. But downsizing them while supporting the conditions of edge environments, such as tight spaces and abrupt temperature changes, is a real challenge for IPC manufacturers.

Intel Enters Strategic Collaboration with Lightbits Labs

Intel Corp. and Lightbits Labs today announced an agreement to propel the development of disaggregated storage solutions, addressing the challenges of today's data center operators, who seek improved total cost of ownership (TCO) in the face of stranded disk capacity and performance. This strategic partnership includes technical co-engineering, go-to-market collaboration, and an Intel Capital investment in Lightbits Labs. Lightbits' LightOS product delivers high-performance shared storage across servers while providing high availability and read-and-write management designed to maximize the value of flash-based storage. Fully optimized for Intel hardware, LightOS provides customers with vastly improved storage efficiency and reduced underutilization, while maintaining compatibility with existing infrastructure and without compromising performance or simplicity.

Lightbits Labs will enhance its composable disaggregated software-defined storage solution, LightOS, for Intel technologies, creating an optimized software and hardware solution. The system will utilize Intel Optane persistent memory and Intel 3D NAND SSDs based on Intel QLC Technology, Intel Xeon Scalable processors with unique built-in artificial intelligence (AI) acceleration capabilities and Intel Ethernet 800 Series Network Adapters with Application Device Queues (ADQ) technology. Intel's leadership FPGAs for next-generation performance, flexibility and programmability will complement the solution.

Lenovo Announces the Lightest ThinkPad Ever - ThinkPad X1 Nano

Lenovo is very excited to unveil the latest addition to our premium X1 portfolio, the ThinkPad X1 Nano. The lightest ThinkPad ever at just 1.99 pounds (907 g), it breaks new ground for performance and functionality in an incredibly featherweight package. Lenovo's first ThinkPad based on the Intel Evo platform and powered by 11th Gen Intel Core processors, the X1 Nano delivers supreme speed and intelligence while maintaining outstanding battery life. Stunning visuals are delivered through a narrow-bezel 13-inch 2K display with a 16:10 aspect ratio, while four speakers and four 360-degree microphones enhance the audio-visual capabilities. For a truly immersive user experience, the X1 Nano supports Dolby Vision and Dolby Atmos. State-of-the-art connectivity is provided by Wi-Fi 6, and optional 5G will deliver higher bandwidth and drive new levels of always-on, always-connected efficiency and collaboration in a new hybrid working world.

Lenovo today is also delighted to announce that the world's first foldable PC, ThinkPad X1 Fold, is available to order and will ship in a few weeks. A pinnacle of engineering innovation, the X1 Fold offers a revolutionary mix of portability and versatility, defining a new computing category enabled by Intel Core processors with Intel Hybrid Technology and made possible by Intel's Project Athena innovation program. It blends familiar functionality from smartphones, tablets, and laptops into a single foldable PC device that will forever reshape the way you work, play, create, and connect. With optional 5G, you can trust that your connection is more secure and optimized where available, and that you are better protected with ThinkShield security features.

AAEON Introduces the BOXER-8222AI AI Edge Computing System

AAEON, an industry leader in AI Edge Computing, announces the BOXER-8222AI compact system powered by NVIDIA Jetson Nano. Featuring a PoE PD LAN port, the BOXER-8222AI can be powered by a PSE Gateway allowing for deployment without requiring a dedicated power supply.

The BOXER-8222AI is the latest platform in AAEON's family of compact rugged AI systems powered by the innovative and energy-efficient NVIDIA Jetson Nano SoC. Specialized for AI edge computing, the Jetson Nano combines a quad-core Arm Cortex-A57 MPCore processor with an NVIDIA Maxwell-architecture GPU featuring 128 NVIDIA CUDA cores. This design delivers processing speeds of up to 472 GFLOPs and is capable of operating multiple neural networks or processing several high-resolution images at the same time. The NVIDIA Jetson Nano also features 4 GB of onboard RAM and a MicroSD card slot supporting up to 32 GB of storage.

Intel Introduces IoT-Enhanced Processors to Increase Performance, AI, Security

Today at the Intel Industrial Summit 2020, Intel announced new enhanced internet of things (IoT) capabilities. The 11th Gen Intel Core processors, Intel Atom x6000E series, and Intel Pentium and Celeron N and J series bring new artificial intelligence (AI), security, functional safety, and real-time capabilities to edge customers. With a robust hardware and software portfolio, an unparalleled ecosystem, and 15,000 customer deployments globally, Intel is providing solutions for an edge silicon market expected to be a $65 billion opportunity by 2024.

"By 2023, up to 70% of all enterprises will process data at the edge. 11th Gen Intel Core processors, Intel Atom x6000E series, and Intel Pentium and Celeron N and J series processors represent our most significant step forward yet in enhancements for IoT, bringing features that address our customers' current needs, while setting the foundation for capabilities with advancements in AI and 5G," said John Healy, Intel vice president of the Internet of Things Group and general manager of Platform Management and Customer Engineering.

Qualcomm Adds New 5G Mobile Platform to Snapdragon 7-Series

Qualcomm Technologies, Inc. announced a new 5G mobile platform in the 7-series, the Qualcomm Snapdragon 750G 5G Mobile Platform, which enables truly global 5G with brilliant HDR gaming and amazing on-device AI. To date, there have been more than 275 designs announced or in development based on Snapdragon 7-series mobile platforms, including 140 5G designs.

"We continue to see great traction with our high-tier Snapdragon 7-series 5G mobile platforms," said Kedar Kondap, vice president of product management, Qualcomm Technologies, Inc. "As we continue to build out this relatively new tier of our mobile roadmap, we're always looking for ways to support the growing needs of our OEM customers. Snapdragon 750G delivers a selection of premium mobile features to an even wider audience."

Qualcomm Snapdragon XR2 Platform Commercially Debuts in Oculus Quest 2

Qualcomm Technologies, Inc., a subsidiary of Qualcomm Incorporated, is powering better-than-ever virtual reality (VR) gameplay and experiences with the launch of Oculus Quest 2, the first-to-launch VR device powered by the Qualcomm Snapdragon XR2 Platform. Oculus Quest 2 is the culmination of years of collaboration between Qualcomm Technologies and Facebook to create the most advanced and immersive gaming VR experiences for consumers to date.

Purpose-built for extended reality (XR), the Snapdragon XR2 Platform unlocks staggering improvements, including twice the CPU and GPU performance compared to its predecessor, which powers the original Oculus Quest headset. The Snapdragon XR2 Platform delivers significant performance enhancements in Oculus Quest 2 including:

Qualcomm Announces First Shipments of Qualcomm Cloud AI 100 Accelerator and Edge Development Kit

Qualcomm Technologies, Inc., a subsidiary of Qualcomm Incorporated, announced that the Qualcomm Cloud AI 100, a high-performance AI inference accelerator, is shipping to select worldwide customers. Qualcomm Cloud AI 100 uses advanced signal processing and cutting-edge power efficiency to support AI solutions for multiple environments including the datacenter, cloud edge, edge appliance, and 5G infrastructure. The newly announced Qualcomm Cloud AI 100 Edge Development Kit is engineered to accelerate adoption of edge applications by offering a complete system solution for AI processing of up to 24 simultaneous 1080p video streams, along with 5G connectivity.

"Qualcomm Technologies is well positioned to support complete edge-to-cloud high performance AI solutions that lead the industry in performance per watt," said Keith Kressin, senior vice president and general manager, computing and edge cloud, Qualcomm Technologies. "Qualcomm Cloud AI 100 is now shipping to select worldwide customers and we look forward to seeing commercial products launch in the first half of 2021."

NVIDIA to Acquire Arm for $40 Billion, Creating World's Premier Computing Company for the Age of AI

NVIDIA and SoftBank Group Corp. (SBG) today announced a definitive agreement under which NVIDIA will acquire Arm Limited from SBG and the SoftBank Vision Fund (together, "SoftBank") in a transaction valued at $40 billion. The transaction is expected to be immediately accretive to NVIDIA's non-GAAP gross margin and non-GAAP earnings per share.

The combination brings together NVIDIA's leading AI computing platform with Arm's vast ecosystem to create the premier computing company for the age of artificial intelligence, accelerating innovation while expanding into large, high-growth markets. SoftBank will remain committed to Arm's long-term success through its ownership stake in NVIDIA, expected to be under 10 percent.

Rambus Advances HBM2E Performance to 4.0 Gbps for AI/ML Training Applications

Rambus Inc. (NASDAQ: RMBS), a premier silicon IP and chip provider making data faster and safer, today announced it has achieved a record 4 Gbps performance with the Rambus HBM2E memory interface solution consisting of a fully-integrated PHY and controller. Paired with the industry's fastest HBM2E DRAM from SK hynix operating at 3.6 Gbps, the solution can deliver 460 GB/s of bandwidth from a single HBM2E device. This performance meets the terabyte-scale bandwidth needs of accelerators targeting the most demanding AI/ML training and high-performance computing (HPC) applications.
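The 460 GB/s figure follows directly from the per-pin data rate and the 1,024-bit interface of an HBM2E stack (the interface width comes from the HBM2E standard, not from the announcement itself):

```python
# Per-device HBM2E bandwidth: per-pin data rate times the standard
# 1024-bit stack interface, converted from bits to bytes.
data_rate_gbps = 3.6         # per pin, as quoted for SK hynix HBM2E
interface_width_bits = 1024  # standard HBM2E stack interface width

bandwidth_gb_s = data_rate_gbps * interface_width_bits / 8
print(f"{bandwidth_gb_s:.1f} GB/s per HBM2E device")  # 460.8 GB/s
```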

"With this achievement by Rambus, designers of AI and HPC systems can now implement systems using the world's fastest HBM2E DRAM running at 3.6 Gbps from SK hynix," said Uksong Kang, vice president of product planning at SK hynix. "In July, we announced full-scale mass-production of HBM2E for state-of-the-art computing applications demanding the highest bandwidth available."

Qualcomm Announces Next-Generation Snapdragon 8cx Gen 2 5G Compute Platform

Qualcomm Technologies, Inc. today announced at Internationale Funkausstellung (IFA) Berlin the Qualcomm Snapdragon 8cx Gen 2 5G compute platform, the company's most advanced and efficient compute platform. Users will benefit from superior performance and multi-day battery life, 5G connectivity, enterprise-grade security, and AI acceleration, as well as advanced camera and audio technology. These features will enable and support the digital transformation and mobility needs of remote productivity and learning experiences. The Snapdragon 8cx Gen 2 is designed to enable industry-leading 5G PC experiences and builds on the innovative first-generation Snapdragon 8cx 5G compute platform, which powered the world's first 5G PC.

"Working remotely is the new reality, and more businesses are looking to quickly and securely connect their employee base, accelerating the need for always on, always connected PCs that are thin, light and truly mobile, equipped with blazing fast 5G cellular and Wi-Fi 6 connectivity, and coupled with multi-day battery-life to boost productivity," said Miguel Nunes, Senior Director, Product Management, Qualcomm Technologies, Inc. "We are excited to expand our customer reach with Acer, to continue bringing the best of the Smartphone to the PC. Together, we will deliver cutting-edge devices and experiences to consumers and businesses, enabling them to connect, create and collaborate anytime from virtually anywhere."

PALIT Announces GeForce RTX 3090, 3080, 3070 GamingPro and GameRock Series

Palit Microsystems Ltd, the leading graphics card manufacturer, today launched the GeForce RTX 3090, RTX 3080, RTX 3070 GameRock and GamingPro Series powered by the NVIDIA Ampere architecture.

The new NVIDIA GeForce RTX 30 Series GPUs, the second generation of RTX, feature new RT Cores, Tensor Cores, and streaming multiprocessors, bringing stunning visuals, amazingly fast frame rates, and AI acceleration to games and creative applications. Powered by the NVIDIA Ampere architecture, which delivers increases of up to 1.9X in performance-per-watt over the previous generation, the RTX 30 Series effortlessly powers graphics experiences at all resolutions, even up to 8K at the top end. The GeForce RTX 3090, 3080, and 3070 represent the greatest GPU generational leap in the history of NVIDIA.

Tachyum Prodigy Native AI Supports TensorFlow and PyTorch

Tachyum Inc. today announced that it has further expanded the capabilities of its Prodigy Universal Processor through support for TensorFlow and PyTorch environments, enabling a faster, less expensive and more dynamic solution for the most challenging artificial intelligence/machine learning workloads.

Analysts predict that AI revenue will surpass $300 billion by 2024, with a compound annual growth rate (CAGR) of up to 42 percent through 2027. Technology giants are investing heavily in AI to make the technology more accessible for enterprise use cases, which range from self-driving vehicles to more sophisticated and control-intensive disciplines like spiking neural nets, explainable AI, symbolic AI, and bio AI. When deployed into AI environments, Prodigy is able to simplify software processes, accelerate performance, save energy, and better incorporate rich data sets to allow for faster innovation.

Elon Musk to Show Working Neuralink Device This Friday

Elon Musk, via his Neuralink company, is set to reveal a working device this Friday. Neuralink Corporation was started back in 2016 with the mission to develop a BMI (Brain-Machine Interface), ultimately allowing for integration of a computer with the human mind. Work has gone on in relative secrecy until now, but the announcement from Elon Musk shows that the company has been diligently working behind closed doors, as one would expect for such a fundamental technology. The first step is for Neuralink to serve as a "treatment" of sorts for brain diseases and assorted conditions. The device works by implanting threads into the brain; Neuralink is developing a "sewing machine-like" device that can manipulate and insert threads 4 to 6 μm in width throughout a recipient's brain (note that "patient" wasn't the word used there).

The basis behind Neuralink's foundation, and its ultimate goal, is the belief in a need for human augmentation (sometimes referred to as transhumanism). This aims to keep up with the increasingly entrenched Dataist interpretation of humankind and the advent of increasingly complex algorithms, and even AI, throughout every sphere of our lives. Apart from showing off a working Neuralink prototype, which will supposedly demonstrate the ability to "fire neurons in real time", the company is unveiling a second-generation robot for sewing the threads into the brain. The objective is to develop flexible threads that circumvent the rigid threads currently employed in BMI interfaces, which always run the risk of damaging the brain. Eventually, the surgery is meant to become non-invasive, comparable in routine to LASIK eye surgery. Being a Musk-backed project, lofty claims and unrealistic deadlines abound; the company initially expected to start human trials by the end of this year. For now, no more information on that milestone has been shared.

Lightmatter Introduces Optical Processor to Speed Compute for Next-Gen AI

Lightmatter, a leader in silicon photonics processors, today announces its artificial intelligence (AI) photonic processor, a general-purpose AI inference accelerator that uses light to compute and transport data. Using light to calculate and communicate within the chip reduces heat—leading to orders of magnitude reduction in energy consumption per chip and dramatic improvements in processor speed. Since 2010, the amount of compute power needed to train a state-of-the-art AI algorithm has grown at five times the rate of Moore's Law scaling—doubling approximately every three and a half months. Lightmatter's processor solves the growing need for computation to support next-generation AI algorithms.

"The Department of Energy estimates that by 2030, computing and communications technology will consume more than 8 percent of the world's power. Transistors, the workhorse of traditional processors, aren't improving; they're simply too hot. Building larger and larger datacenters is a dead end path along the road of computational progress," said Nicholas Harris, PhD, founder and CEO at Lightmatter. "We need a new computing paradigm. Lightmatter's optical processors are dramatically faster and more energy efficient than traditional processors. We're simultaneously enabling the growth of computing and reducing its impact on our planet."

Raja Koduri Previews "PetaFLOPs Scale" 4-Tile Intel Xe HP GPU

Raja Koduri, Intel's chief architect and senior vice president of its discrete graphics division, today held a talk at Hot Chips 32, this year's edition of the conference held online, showing off the latest architectural advancements in the semiconductor industry. Intel prepared two talks: one about Ice Lake-SP server CPUs and one about the company's upcoming graphics card launch. So what has Intel been working on this whole time? Mr. Koduri benchmarked the upcoming GPU and showed how much raw power it possesses, possibly measured in PetaFLOPs.

When Mr. Koduri took the stage, he pulled the 4-tile Xe HP GPU out of his pocket and showed for the first time what the chip looks like. And it is one big chip. Featuring four tiles, the GPU represents Intel's fastest and biggest variant of the Xe HP lineup. The benchmark Intel ran was designed to show off scaling on the Xe architecture: how increasing the number of tiles yields a near-linear increase in performance. Running on a single tile, the GPU delivered 10,588 GFLOPs, or around 10.588 TeraFLOPs. With two tiles, performance scales almost perfectly to 21,161 GFLOPs (21.161 TeraFLOPs), a 1.999X improvement. With four tiles, the GPU achieves 3.993X scaling and scores 41,908 GFLOPs, or 41.908 TeraFLOPs, all measured in single-precision FP32.
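As a quick sanity check, the two-tile scaling factor can be reproduced directly from the raw GFLOPs figures quoted in the demo:

```python
# Two-tile scaling from the FP32 figures quoted in the Xe HP demo.
single_tile = 10588  # GFLOPs, one tile
dual_tile = 21161    # GFLOPs, two tiles

speedup = dual_tile / single_tile
print(f"2-tile speedup: {speedup:.3f}x (ideal: 2x)")  # 1.999x
```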
Intel Xe HP GPU Demo

IBM Reveals Next-Generation IBM POWER10 Processor

IBM today revealed the next generation of its IBM POWER central processing unit (CPU) family: IBM POWER10. Designed to offer a platform to meet the unique needs of enterprise hybrid cloud computing, the IBM POWER10 processor uses a design focused on energy efficiency and performance, built with 7 nm technology, with an expected improvement of up to 3x in processor energy efficiency, workload capacity, and container density compared with the IBM POWER9 processor.

Designed over five years with hundreds of new and pending patents, the IBM POWER10 processor is an important evolution in IBM's roadmap for POWER. Systems taking advantage of IBM POWER10 are expected to be available in the second half of 2021. Some of the new processor innovations include:
IBM POWER10 Processor

Blaize Delivers Breakthrough for AI Edge Computing

Blaize today announced the company's first AI computing hardware and software products built to overcome today's unmet requirements for compute and productization of AI applications at the edge. With multiple feature advancements vs. legacy GPU/CPU solutions, the Blaize Pathfinder and Xplorer platforms coupled with the Blaize AI Software Suite enable developers to usher in a new era of more practical and commercially viable edge AI products across a wide range of edge use cases and industries.

"Today's edge solutions are either too small to compute the load or too costly and too hard to productize," says Dinakar Munagala, Co-founder and CEO, Blaize. "Blaize AI edge computing products overcome these limitations of power, complexity and cost to unleash the adoption of AI at the edge, facilitating the migration of AI computing out of the data center to the edge."

SiFive Secures $61 Million in Series E Funding Led by SK Hynix

SiFive, Inc., the leading provider of commercial RISC-V processor IP and silicon solutions, today announced it raised $61 million in a Series E round led by SK hynix, joined by new investor Prosperity7 Ventures, with additional funding from existing investors, Sutter Hill Ventures, Western Digital Capital, Qualcomm Ventures, Intel Capital, Osage University Partners, and Spark Capital.

"Global demand for storage and memory in the data center is increasing as AI-powered business intelligence and data processing growth continues", said Youjong Kang, VP of Growth Strategy, SK hynix. "SiFive is well-positioned to grow with opportunities created from data center, enterprise, storage and networking requirements for workload-focused processor IP."