News Posts matching #AI


Intel Announces New Xeon W-3300 Processors

Intel today launched its newest generation Intel Xeon W-3300 processors, available today from its system integrator partners. Built for advanced workstation professionals, Intel Xeon W-3300 processors offer uncompromised performance, expanded platform capabilities, and enterprise-grade security and reliability in a single-socket solution.

Intel Xeon W-3300 processors are intelligently engineered to push the boundaries of performance, with a new processor core architecture that transforms what expert workstation users can accomplish on a workstation.

The Intel Xeon W-3300 processors are designed for next-gen professional applications with heavily threaded, input/output-intensive workloads. Use cases stretch across artificial intelligence (AI), architecture, engineering, construction (AEC), and media and entertainment (M&E). With a new processor core architecture to transform efficiency and advanced technologies to support data integrity, Intel Xeon W-3300 processors are equipped to deliver uncompromising workstation performance.

AAEON Introduces BOXER-8230AI Edge Computer Powered by NVIDIA Jetson TX2 NX

AAEON, a leader in AI edge solutions, announces the release of the BOXER-8230AI edge AI box PC powered by the NVIDIA Jetson TX2 NX System on Module (SOM). The BOXER-8230AI delivers powerful computing performance without breaking budgets, along with a rugged design and a diverse I/O layout including five Gigabit Ethernet ports, making it a perfect fit for intelligent applications such as surveillance, smart factory, and smart retail.
The BOXER-8230AI platform offers the flexibility to meet customer needs with its industrial design and storage options. Available in two configurations, the BOXER-8230AI-A3 and BOXER-8230AI-A4 systems offer a flexible I/O loadout with five Gigabit Ethernet LAN ports to connect with IP cameras and other devices, as well as four USB 3.2 Gen 1 ports and two COM ports. Storage flexibility is provided by 16 GB of onboard eMMC storage, a microSD slot, and a 2.5" SATA III bay (A4 model).

The BOXER-8230AI platform is powered by the NVIDIA Jetson TX2 NX SOM, delivering powerful AI edge computing without compromising costs. The Jetson TX2 NX delivers more than twice the performance of the NVIDIA Jetson Nano thanks to its six-core ARM processor and NVIDIA Pascal GPU with 256 CUDA cores. This allows the Jetson TX2 NX to achieve speeds up to 1.33 TFLOPS, and enables the system to power a wide range of AI Edge applications.

AMD CDNA2 "Aldebaran" MI200 HPC Accelerator with 256 CU (16,384 cores) Imagined

AMD Instinct MI200 will be an important product for the company in the HPC and AI supercomputing market. It debuts the CDNA2 compute architecture, and is based on a multi-chip module (MCM) codenamed "Aldebaran." PC enthusiast Locuza, who produces highly detailed architecture diagrams based on public information, imagined what "Aldebaran" could look like. The MCM contains two logic dies and eight HBM2E stacks. Each of the two dies has a 4096-bit HBM2E interface, which talks to 64 GB of memory (128 GB per package). A silicon interposer provides microscopic wiring among the ten dies.

Each of the two logic dies, or chiplets, has eight shader engines with 16 compute units (CU) each. The CDNA2 compute unit is capable of full-rate FP64, packed FP32 math, and Matrix Engines V2 (fixed-function hardware for matrix multiplication, accelerating DNN building, training, and AI inference). With 128 CUs per chiplet, assuming the CDNA2 CU has 64 stream processors, one arrives at 8,192 SPs. Two such dies add up to a whopping 16,384, more than three times that of the "Navi 21" RDNA2 silicon. Each die further features its own independent PCIe interface, and XGMI (AMD's answer to NVIDIA's NVLink), an interconnect designed for high-density HPC scenarios. A rudimentary VCN (Video CoreNext) component is also present. It's important to note here that neither the CDNA2 CU nor the "Aldebaran" MCM itself has a dual use as a GPU, since it lacks much of the hardware needed for graphics processing. The MI200 is expected to launch later this year.
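The core-count arithmetic above reduces to a few multiplications. The sketch below follows Locuza's speculative figures; none of these numbers are AMD-confirmed:

```python
# Speculative "Aldebaran" shader math following Locuza's diagram;
# none of these figures are confirmed by AMD.
SHADER_ENGINES_PER_DIE = 8   # shader engines per chiplet
CUS_PER_ENGINE = 16          # compute units per shader engine
SPS_PER_CU = 64              # assumed CDNA2 stream processors per CU
NAVI21_SPS = 5120            # "Navi 21" RDNA2, for comparison

cus_per_die = SHADER_ENGINES_PER_DIE * CUS_PER_ENGINE  # 128 CUs
sps_per_package = 2 * cus_per_die * SPS_PER_CU         # two chiplets
print(sps_per_package)               # 16384
print(sps_per_package / NAVI21_SPS)  # 3.2x Navi 21
```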

MediaTek Launches Dimensity 5G Open Resource Architecture

MediaTek today announced the Dimensity 5G Open Resource Architecture that provides brands with more flexibility to customize key 5G mobile device features to address different market segments. The open resource architecture gives smartphone brands closer-to-metal access to customize features for cameras, displays, graphics, artificial intelligence (AI) processing units (APUs), sensors and connectivity sub-systems within the Dimensity 1200 chipset.

"MediaTek is collaborating with the world's largest smartphone brands to unlock customized consumer experiences that differentiate flagship 5G smartphones," said Dr. Yenchi Lee, Deputy General Manager of MediaTek's Wireless Communications Business Unit. "Whether it's novel multimedia features, unmatched performance, brilliant imaging or more synergy between smartphones and services, with our architecture device makers can tailor their devices to complement a variety of consumer lifestyles."

NVIDIA Releases Canvas App Beta

NVIDIA has released a public beta of its Canvas AI/deep-learning program, which can turn simple brushstrokes into realistic landscape images. The tool has been in development at NVIDIA for several years and allows users to paint a simple scene using 15 different materials such as grass, rock, water, fog, snow, and trees. The program uses this sketch to generate a photorealistic background in nine different styles. The program can also export the results as an Adobe Photoshop PSD file for further enhancement and refinement. The application requires an NVIDIA RTX, Quadro RTX, or TITAN RTX graphics card with driver version 460.89 or later and is only available for Windows 10. You can now download the NVIDIA Canvas beta from the link below.

AI-Designed Microchips Now Outperform Human-Designed Ones

A recent Google study led by Mirhoseini et al. and published in Nature details how AI can be leveraged to improve upon semiconductor design practices currently employed - and which are the result of more than 60 years of engineering and physics studies. The paper describes a trained machine-learning 'agent' that can successfully place macro blocks, one by one, into a chip layout. This agent has a brain-inspired architecture known as a deep neural network, and is trained using a paradigm called reinforcement learning - where positive changes to a design are committed to memory as possible solutions, while negative changes are discarded, effectively allowing the neural network to build a decision-tree of sorts that's optimized every step of the way.

The AI isn't applied to every stage of microchip design yet, but that will surely change in years to come. For now, the AI is only employed in the chip floorplanning stage of microchip production, which is actually one of the more painstaking ones. Essentially, microchip designers have to place macro blocks on their semiconductor designs - pre-made arrangements of transistors whose placement relative to one another and to the rest of the chip's components is of critical importance for performance and efficiency targets. Remember that electric signals have to traverse different chip components for a working semiconductor, and the way these are arranged in the floorplanning stage can have a tremendous impact on the performance characteristics of a given chip. Image A, below, showcases the tidy design a human engineer would favor - while image B showcases the apparently chaotic nature of the AI's planning.
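The commit-positive-changes, discard-negative-changes loop described above can be illustrated with a toy stand-in. The sketch below is a simple greedy placement loop over a hypothetical five-block netlist, not Google's agent (which is a deep neural network trained with reinforcement learning); it only mirrors the accept/discard intuition:

```python
import random

# Toy illustration of the placement idea: propose moves for macro
# blocks on a small grid, commit changes that shorten the total wire
# length, and discard the rest. The netlist below is hypothetical.
random.seed(0)
GRID = 8
NETS = [(0, 1), (1, 2), (2, 3), (0, 3), (3, 4)]  # block pairs to wire

cells = [(x, y) for x in range(GRID) for y in range(GRID)]
blocks = random.sample(cells, 5)  # random initial, distinct cells

def wirelength(placement):
    # Manhattan distance summed over all nets.
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1]) for a, b in NETS)

best = wirelength(blocks)
for _ in range(2000):
    i = random.randrange(len(blocks))
    proposal = (random.randrange(GRID), random.randrange(GRID))
    if proposal in blocks:  # keep macro blocks on distinct cells
        continue
    old, blocks[i] = blocks[i], proposal
    cost = wirelength(blocks)
    if cost <= best:        # positive change: commit it to memory
        best = cost
    else:                   # negative change: discard it
        blocks[i] = old

print(best)  # total wirelength after optimization
```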

COLORFUL Launches GeForce RTX 3080 Ti and RTX 3070 Ti Graphics Cards

Colorful Technology Company Limited, a professional manufacturer of graphics cards, motherboards, all-in-one gaming and multimedia solutions, and high-performance storage, proudly introduces the COLORFUL iGame GeForce RTX 3080 Ti and RTX 3070 Ti graphics cards. The line-up consists of the Vulcan, Advanced OC, and NB models. The all-new GeForce RTX 3070 Ti NB sports a new look with improved cooling and a mightier design. The COLORFUL iGame GeForce RTX 3080 Ti and RTX 3070 Ti come packed with premium features, including One-Key Overclock, customizable RGB lighting, and more, to cater to different types of power users, gamers, and PC enthusiasts.

Powered by the NVIDIA Ampere architecture, the GeForce RTX 3080 Ti delivers an incredible leap in performance and fidelity with acclaimed features such as raytracing, NVIDIA DLSS performance-boosting AI, NVIDIA Reflex latency-reduction, NVIDIA Broadcast streaming features and additional memory that allows it to speed through the most popular creator applications as well.

Intel CEO Predicts Chip Shortages Across the Ecosystem to Run Another Couple of Years

Intel CEO Pat Gelsinger, speaking at the company's 2021 Computex Opening Keynote address, stated that the explosive demand for chips caused by recent inflections of technology, accelerated by the COVID-19 pandemic, has resulted in demand outstripping supply to such an extent that it could "still take a couple of years for the ecosystem to address the shortages."

Gelsinger detailed how the world of information technology is at its biggest crossroads ever, with the emergence of Cloud, 5G, AI, and smarter edge computing changing the way people work, learn, and interact. This has caused huge growth in the demand for semiconductors, straining technology supply chains around the world. Gelsinger stated that his company is working with partners across the technology ecosystem to increase output to meet demand. He detailed how Intel has nearly doubled its own chip wafer manufacturing capacity over the past four years. "But while the industry has taken steps to address near-term constraints, it could still take a couple of years for the ecosystem to address chip shortages of foundry capacity, substrate, and components."

Raytheon Technologies and GLOBALFOUNDRIES Partner to Accelerate 5G Wireless Connectivity Using Gallium Nitride on Silicon (GaN-on-Si)

Raytheon Technologies (NYSE: RTX), a leading aerospace and defense technology company, and GLOBALFOUNDRIES (GF), the global leader in feature-rich semiconductor manufacturing, will collaborate to develop and commercialize a new gallium nitride on silicon (GaN-on-Si) semiconductor that will enable game-changing radio frequency performance for 5G and 6G mobile and wireless infrastructure applications.

Under the agreement, Raytheon Technologies will license its proprietary gallium nitride on silicon technology and technical expertise to GF, which will develop the new semiconductor at its Fab 9 facility in Burlington, Vermont. Gallium nitride is a unique material used to build high-performance semiconductors that can handle significant heat and power levels. This makes it ideal to handle 5G and 6G wireless signals, which require higher performance levels than legacy wireless systems.

Redesigned Apple MacBook Pro Coming This Summer with up to 64 GB of RAM and 10-Core Processor

According to Bloomberg, which first predicted the arrival of Apple's custom processors in MacBooks, we have another piece of information regarding Apple's upcoming MacBook Pro lineup, set to arrive this summer. As you may know, the MacBook Pro currently comes in two variants: a smaller 13-inch design powered by Apple's M1 chip, and a 16-inch design powered by an Intel Core processor. However, that will no longer be the case when the next-generation lineup arrives. Starting this summer, all MacBook Pro models will be powered by Apple's custom silicon, bringing Intel's presence in the lineup to an end.

And the successor to the now-famous M1 chip looks to be very good. As per the report, Apple is upgrading both the architecture and the total core count. There are two different chips, codenamed Jade C-Chop and Jade C-Die. Both are 10-core designs, equipped with two small and eight big cores. The difference between the two is the number of graphics cores enabled: the smaller version will have 16 graphics cores, while the bigger one will have 32. The SoC will also carry an updated Neural Engine for better AI processing. These new processors will come with up to 64 GB of RAM in selected configurations as well. The report also notes the arrival of an HDMI port, an SD card slot, and MagSafe charging.

Flexxon Announces X-Fly AI-embedded NVMe SSD

Singapore cybersecurity firm Flexxon on Monday launched the world's first solid-state drive (SSD) embedded with artificial intelligence (AI) data security. As the "last line of defence" protecting data at the hardware level, the SSD can guard against both remote and physical attacks, boasting a range of features including temperature sensors that detect unusual movement. In the event of such an incursion, the device - also known as X-PHY - will alert the user via email and lock itself to prevent any physical tampering. Users may then unlock the device via a dynamic authentication process. For clients with high security needs, such as the military, data may be automatically wiped should the device fall into the wrong hands.

Minister for Foreign Affairs and Minister-in-charge of Singapore's Smart Nation Initiative Vivian Balakrishnan said that the innovation was a "breakthrough" and a significant step in increasing security and safety for end users, having put AI into the firmware layer of an SSD. Congratulating Flexxon on its new product, Dr Balakrishnan - who was the guest-of-honour at the launch - also called on similar companies to include research institutes, universities and the private sector to enhance local research and development efforts in AI.

IBM Announces World's First 2nm Chip Technology

IBM today unveiled a breakthrough in semiconductor design and process with the development of the world's first chip announced with 2 nanometer (nm) nanosheet technology. Semiconductors play critical roles in everything from computing, to appliances, to communication devices, transportation systems, and critical infrastructure.

Demand for increased chip performance and energy efficiency continues to rise, especially in the era of hybrid cloud, AI, and the Internet of Things. IBM's new 2 nm chip technology helps advance the state-of-the-art in the semiconductor industry, addressing this growing demand. It is projected to achieve 45 percent higher performance, or 75 percent lower energy use, than today's most advanced 7 nm node chips.

Intel Ponte Vecchio GPU Scores Another Win in Leibniz Supercomputing Centre

Today, Lenovo in partnership with Intel announced that Leibniz Supercomputing Centre (LRZ) is building a supercomputer powered by Intel's next-generation technologies. Specifically, the supercomputer will use Intel's Sapphire Rapids CPUs in combination with the much-teased Ponte Vecchio GPUs to power the applications running at LRZ. Along with the various processors, LRZ will also deploy Intel Optane persistent memory to process the huge amount of data LRZ has produced and continues to produce. The integration of HPC and AI processing will be enabled by an expansion of LRZ's current supercomputer, SuperMUC-NG, which will receive an upgrade in 2022 featuring both Sapphire Rapids and Ponte Vecchio.

Mr. Raja Koduri, Intel graphics guru, has teased on Twitter that this supercomputer installation will represent a combination of Sapphire Rapids, Ponte Vecchio, Optane, and oneAPI all in one machine. The system will use over one petabyte of Distributed Asynchronous Object Storage (DAOS) based on Optane technologies. Mr. Koduri also teased some Ponte Vecchio eye candy, a GIF of tiles combining to form a GPU, which you can check out here. You can also see some pictures of Ponte Vecchio below.

IBASE Launches 3.5" SBC IB953 Powered by 11th Gen Intel Tiger Lake Processors

IBASE Technology Inc. (TPEx: 8050), a leading provider of industrial motherboards and embedded systems, launches its latest 3.5" SBC, the IB953, powered by 11th Gen Intel Core processors (codenamed Tiger Lake). With a compact footprint measuring 146 x 102 mm, the single board computer offers impressive I/O and computing performance, making it an ideal platform for a broad range of demanding applications in factory automation, machine vision, healthcare, and retail environments.

The IB953 features a 5G-compatible M.2 3052 socket and the latest 11th Gen Intel Core and Celeron processors built on the 10 nm SuperFin process, delivering up to 25% faster performance and greater AI acceleration. With the Intel Core i7-1185G7 (IBASE model IB953AF-I7) comes the new Iris Xe (Xe-LP) graphics architecture supporting 96 EUs (Execution Units), equal to 768 shader cores, running at 1.35 GHz, an increase of 250 MHz over the previous Gen11 graphics, to provide up to twice the performance and improved power efficiency. The graphics engine's hardware acceleration can drive up to four simultaneous 4K HDR displays (via 2x DisplayPort, eDP, and LVDS).
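The shader-count and clock figures quoted above are simple arithmetic; the sketch below assumes the usual 8 ALUs per Intel execution unit, which the announcement itself does not spell out:

```python
EUS = 96
ALUS_PER_EU = 8          # assumed ALUs per Intel execution unit
CLOCK_GHZ = 1.35         # quoted Iris Xe graphics clock
CLOCK_GAIN_GHZ = 0.25    # stated gain over the previous Gen11 graphics

shader_cores = EUS * ALUS_PER_EU           # 96 x 8 = 768
gen11_clock = CLOCK_GHZ - CLOCK_GAIN_GHZ   # implied Gen11 clock, 1.10 GHz
print(shader_cores, round(gen11_clock, 2))
```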

AAEON Announces Official Support for NVIDIA Ubuntu, Jetpack 4.5 and Secureboot on BOXER-8200 Systems

AAEON, an industry leader in embedded AI Edge systems, announces new software support for the BOXER-8200 series of embedded PCs featuring NVIDIA Jetson System on Modules (SOMs). AAEON has officially signed an agreement with Canonical to provide customers with the NVIDIA Ubuntu operating system pre-installed on new BOXER-8200 systems. Systems with the NVIDIA Ubuntu OS will also ship with the Jetpack 4.5 drivers and toolkit package preinstalled. Additionally, AAEON announces a new customization service to provide Secureboot to clients, alongside other customization options.

AAEON is dedicated to delivering the most comprehensive platform solutions powered by NVIDIA Jetson SOMs. To meet the needs of their clients, AAEON has signed an agreement with Canonical to provide the official NVIDIA Ubuntu OS image on the entire range of BOXER-8200 series systems. Developers and customers who purchase new BOXER-8200 series systems can receive the system with the OS preinstalled, with no need to flash the image before starting the system up for the first time. The BOXER-8200 series includes the BOXER-822x platforms with Jetson Nano, BOXER-8240AI with Jetson AGX Xavier, BOXER-825x platforms with Jetson Xavier NX, and BOXER-823x platforms with Jetson TX2 NX (currently under development).

AAEON Announces ARES-WHI0 Server Board

AAEON, an industry leader in AI and IoT solutions, is excited to celebrate the launch of Intel's latest scalable platform, the 3rd Generation Intel Xeon Scalable Processors (formerly Ice Lake-SP). As an associate member of the Intel IoT Solutions Alliance, AAEON is bringing this new technology to market with the ARES-WHI0 industrial ATX server board and other future products.

The 3rd Generation Intel Xeon Scalable Processors deliver the next generation of high-end computing performance and support for vital data integrity and security technologies. This new generation of Xeon SP brings higher processing speeds and Intel's Deep Learning Boost technology, allowing for greater acceleration and more efficient processing for AI server applications.

Tenstorrent Selects SiFive Intelligence X280 for Next-Generation AI Processors

SiFive, Inc., the industry leader in RISC-V processors and silicon solutions, today announced that Tenstorrent, an AI semiconductor and software start-up developing next-generation computers, will license the new SiFive Intelligence X280 processor in its AI training and inference processor. SiFive will deliver more details of its SiFive Intelligence initiative including the SiFive Intelligence X280 processor at the Linley Spring Processor Conference on April 23rd.

Tenstorrent's novel approach to inference and training effectively and efficiently accommodates the exponential growth in the size of machine learning models while offering best-in-class performance.

Intel CEO on NVIDIA CPUs: They Are Responding to Us

NVIDIA has recently announced the company's first standalone Grace CPU, which will come out as a product in 2023. NVIDIA has designed Grace on the Arm ISA, likely ARM v9, to represent a new way that data centers are built and to deliver a whole new level of HPC and AI performance. However, the data center CPU market is considered one of the hardest to enter. The market is usually a duopoly between Intel and AMD, which supply x86 processors to server vendors. In the past few years, a few Arm CPUs have managed to enter the data center space; however, NVIDIA is aiming to deliver much more performance and grab a bigger piece of the market.

As a self-proclaimed leader in AI, Intel faces hard competition from NVIDIA in the coming years. In an interview with Fortune, Intel's new CEO Pat Gelsinger talked about NVIDIA and how the company sees the competition between the two. Mr. Gelsinger claims that Intel is a leader in CPUs featuring AI acceleration built into the chip, and that the company is not playing defense, but rather offense, against NVIDIA. You can check out the whole quote from the interview below.

OpenFive Tapes Out SoC for Advanced HPC/AI Solutions on TSMC 5 nm Technology

OpenFive, a leading provider of customizable, silicon-focused solutions with differentiated IP, today announced the successful tape out of a high-performance SoC on TSMC's N5 process, with integrated IP solutions targeted for cutting edge High Performance Computing (HPC)/AI, networking, and storage solutions.

The SoC features an OpenFive High Bandwidth Memory (HBM3) IP subsystem and D2D I/Os, as well as a SiFive E76 32-bit CPU core. The HBM3 interface supports 7.2 Gbps speeds, allowing high-throughput memories to feed domain-specific accelerators in compute-intensive applications including HPC, AI, networking, and storage. OpenFive's low-power, low-latency, and highly scalable D2D interface technology allows for expanding compute performance by connecting multiple dies together using an organic substrate or a silicon interposer in a 2.5D package.
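The per-stack bandwidth implied by a 7.2 Gbps HBM3 interface can be worked out directly. The sketch below assumes the conventional 1024-bit HBM bus width per stack, which the announcement does not state:

```python
PIN_RATE_GBPS = 7.2     # per-pin data rate quoted for the HBM3 interface
BUS_WIDTH_BITS = 1024   # assumed bus width per HBM stack (not stated above)

# 7.2 Gbps x 1024 pins / 8 bits-per-byte = 921.6 GB/s per stack
gb_per_s = PIN_RATE_GBPS * BUS_WIDTH_BITS / 8
print(gb_per_s)
```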

Global Server Shipment for 2021 Projected to Grow by More than 5% YoY, Says TrendForce

Enterprise demand for cloud services has been rising steadily in the past two years owing to the rapidly changing global markets and uncertainties brought about by the COVID-19 pandemic. TrendForce's investigations find that most enterprises have been prioritizing cloud service adoption across applications ranging from AI to other emerging technologies, as cloud services have relatively flexible costs. Case in point: demand from clients in the hyperscale data center segment constituted more than 40% of total demand for servers in 4Q20, while this figure may potentially approach 45% for 2021. For 2021, TrendForce expects global server shipments to increase by more than 5% YoY and ODM Direct server shipments to increase by more than 15% YoY.

Cloudflare Partners with NVIDIA to Bring AI to its Global Edge Network

Cloudflare, Inc., the security, performance, and reliability company helping to build a better Internet, today announced it is partnering with NVIDIA to bring AI to the edge at scale.

Today's applications use AI for a variety of tasks, from translating text on webpages to object recognition in images, making machine learning models a critical part of application development. Users expect this functionality to be fast and reliable, while developers want to keep proprietary machine learning models reliable and secure. Cloudflare seamlessly addresses their security, performance, and reliability needs, while NVIDIA provides developers with a broad range of AI-powered application frameworks, including Jarvis for natural language processing, Clara for healthcare and life sciences, and Morpheus for cybersecurity.

The combination of NVIDIA accelerated computing technology and Cloudflare's edge network will create a massive platform on which developers can deploy applications that use pre-trained or custom machine learning models in seconds. By leveraging the TensorFlow platform, developers can use familiar tools to build and test machine learning models, and then deploy them globally onto Cloudflare's edge network.

NVIDIA and Global Computer Makers Launch Industry-Standard Enterprise Server Platforms for AI

NVIDIA today introduced a new class of NVIDIA-Certified Systems, bringing AI within reach for organizations that run their applications on industry-standard enterprise data center infrastructure. These include high-volume enterprise servers from top manufacturers, which were announced in January and are now certified to run the NVIDIA AI Enterprise software suite—which is exclusively certified for VMware vSphere 7, the world's most widely used compute virtualization platform.

Further expanding the NVIDIA-Certified servers ecosystem is a new wave of systems featuring the NVIDIA A30 GPU for mainstream AI and data analytics and the NVIDIA A10 GPU for AI-enabled graphics, virtual workstations and mixed compute and graphics workloads, also announced today.

NVIDIA Extends Data Center Infrastructure Processing Roadmap with BlueField-3 DPU

NVIDIA today announced the NVIDIA BlueField-3 DPU, its next-generation data processing unit, to deliver the most powerful software-defined networking, storage and cybersecurity acceleration capabilities available for data centers.

The first DPU built for AI and accelerated computing, BlueField-3 lets every enterprise deliver applications at any scale with industry-leading performance and data center security. It is optimized for multi-tenant, cloud-native environments, offering software-defined, hardware-accelerated networking, storage, security and management services at data-center scale.

NVIDIA Announces Grace CPU for Giant AI and High Performance Computing Workloads

NVIDIA today announced its first data center CPU, an Arm-based processor that will deliver 10x the performance of today's fastest servers on the most complex AI and high performance computing workloads.

The result of more than 10,000 engineering years of work, the NVIDIA Grace CPU is designed to address the computing requirements for the world's most advanced applications—including natural language processing, recommender systems and AI supercomputing—that analyze enormous datasets requiring both ultra-fast compute performance and massive memory. It combines energy-efficient Arm CPU cores with an innovative low-power memory subsystem to deliver high performance with great efficiency.

NVIDIA Announces New DGX SuperPOD, the First Cloud-Native, Multi-Tenant Supercomputer, Opening World of AI to Enterprise

NVIDIA today unveiled the world's first cloud-native, multi-tenant AI supercomputer—the next-generation NVIDIA DGX SuperPOD featuring NVIDIA BlueField-2 DPUs. Fortifying the DGX SuperPOD with BlueField-2 DPUs—data processing units that offload, accelerate and isolate users' data—provides customers with secure connections to their AI infrastructure.

The company also announced NVIDIA Base Command, which enables multiple users and IT teams to securely access, share and operate their DGX SuperPOD infrastructure. Base Command coordinates AI training and operations on DGX SuperPOD infrastructure to enable the work of teams of data scientists and developers located around the globe.
Copyright © 2004-2021 www.techpowerup.com. All rights reserved.
All trademarks used are properties of their respective owners.