News Posts matching #AI

NVIDIA DLSS Gets Ported to 10 Additional Titles, Including the New Back 4 Blood Game

NVIDIA's Deep Learning Super Sampling (DLSS) technology has been one of the main selling points of GeForce RTX graphics cards. With broad adoption across many popular game titles, the gaming community has enjoyed the AI-powered upscaling technology that boosts frame rates and delivers better overall performance. Today, the company announced that DLSS has arrived in 10 additional game titles, including today's release of Back 4 Blood, as well as Baldur's Gate 3, Chivalry 2, Crysis Remastered Trilogy, Rise of the Tomb Raider, Shadow of the Tomb Raider, Sword and Fairy 7, and Swords of Legends Online.

With so many titles receiving the DLSS update, NVIDIA recommends installing the latest GeForce driver to achieve the best possible performance in the listed games. If you are wondering just how much DLSS adds, GeForce RTX GPUs see a 46% boost in FPS in the newly released Back 4 Blood, and similar performance gains carry over to the other titles that received the DLSS patch. You can expect more than double the number of frames in titles like Alan Wake Remastered, the Tomb Raider games, and FIST. For more information about performance at 4K resolution, see the slides supplied by NVIDIA below.
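To put those percentages in perspective, here is a quick back-of-the-envelope sketch that converts the stated uplifts into frame rates. The baseline FPS values are hypothetical placeholders, not figures from NVIDIA's slides.

```python
# Hypothetical illustration of the quoted DLSS uplifts; the baseline FPS values
# are invented for the example and are not NVIDIA's measured numbers.
def with_dlss(baseline_fps: float, uplift_pct: float) -> float:
    """Return the frame rate after applying a percentage uplift."""
    return baseline_fps * (1 + uplift_pct / 100)

# Back 4 Blood: the article cites a 46% boost with DLSS enabled.
print(f"Back 4 Blood: {with_dlss(60, 46):.0f} FPS from a 60 FPS baseline")

# "More than double the number of frames" corresponds to an uplift above 100%.
print(f"Other titles: {with_dlss(40, 110):.0f} FPS from a 40 FPS baseline")
```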

NVIDIA CEO Jensen Huang to Unveil New AI Technologies, Products in GTC Keynote

NVIDIA today announced that it will host a global, virtual GTC from Nov. 8-11, featuring a news-filled keynote by NVIDIA founder and CEO Jensen Huang and talks from some of the world's preeminent AI research and industry leaders. Huang's keynote will be livestreamed on Nov. 9 at 9 a.m. Central European Time/4 p.m. China Standard Time/12 a.m. Pacific Daylight Time, with a rebroadcast at 8 a.m. PDT for viewers in the Americas. Registration is free and is not required to view the keynote.

More than 200,000 developers, innovators, researchers and creators are expected to register for the event, which will focus on deep learning, data science, high performance computing, robotics, data center/networking and graphics. Speakers will share the latest breakthroughs that are transforming some of the world's largest industries, such as healthcare, transportation, manufacturing, retail and finance.

AAEON Introduces the BOXER-6643-TGU Compact Industrial System with Intel Tiger Lake Processor

AAEON, an industry leader in embedded solutions, introduces the BOXER-6643-TGU compact industrial system. With rugged construction, the system delivers the performance and innovative technologies of the 11th Generation Intel Core U processors (codenamed Tiger Lake) to applications in tough environments, providing a wide operating temperature range and 5G support to power embedded controller and Industrial IoT (IIoT) gateway applications.

The BOXER-6643-TGU is powered by the 11th Generation Intel Core U processors, delivering greater performance over previous generations, with innovative Intel technologies ensuring more accurate, secure data processing. With up to 64 GB of memory, the system allows users to utilize the full extent of the system's processing capabilities, and with the Intel Iris Xe embedded graphics, users can leverage more powerful GPU processing to power AI and Edge Computing industrial applications. With dual HDMI ports, the system can also support 4K high definition video on two monitors, perfect for powering digital signage.

ASUS Republic of Gamers Announces Moonlight White Gaming Peripherals

ASUS Republic of Gamers (ROG) today announced an all-new gaming peripherals lineup that channels the minimalist feel of monochrome through a striking Moonlight White color scheme. The ROG Strix Scope NX TKL tenkeyless (80%) mechanical RGB gaming keyboard, ROG Strix Impact II ambidextrous gaming mouse, ROG Strix Go Core gaming headset and ROG Cetra II Core in-ear gaming headphones are all now available in North America in this stunning colorway.

ROG has a long history of weaving Aura Sync into a huge ecosystem of devices to let gamers shine a light on their personalities through their gear - but many players also seek a minimalist look. That's why the ROG color palette is expanding to include the Moonlight White series, providing a commanding counterpart to the signature red-and-black color scheme.

NVIDIA Prepares to Deliver Deep Learning Anti-Aliasing Technology for Improved Visuals, Coming first to The Elder Scrolls Online

Some time ago, NVIDIA launched its Deep Learning Super Sampling (DLSS) technology to deliver AI-enhanced image upscaling to your favorite AAA titles. It uses proprietary algorithms developed by NVIDIA and relies on the computational power of the Tensor cores found in GeForce graphics cards. In the early days of DLSS, NVIDIA talked about an additional technology called DLSS2X, which was supposed to be based on the same underlying techniques as DLSS but intended to improve image quality at native resolution rather than upscale. That technology got its official name today: Deep Learning Anti-Aliasing, or DLAA for short.

DLAA uses technology similar to DLSS and brings NVIDIA's AI-powered anti-aliasing to video games. It runs on the Tensor cores found in GeForce graphics cards, delivering much better visual quality without sacrificing performance, since the work is handled by dedicated cores. The technology is said to be offered alongside DLSS and other anti-aliasing options in in-game settings. The first game to support it is The Elder Scrolls Online, where it is currently available on the public test server and will reach the general public later on.

Samsung Receives its First Global Carbon Footprint Certification for Logic Chips

Samsung Electronics Co., Ltd., a world leader in advanced semiconductor technology, today announced that four of its System LSI products received product carbon footprint label certification from the Carbon Trust, the first of Samsung's logic chips to do so. Having received the semiconductor industry's first carbon footprint accreditation for memory chips from the Carbon Trust in 2019, Samsung has now broadened its ESG (Environmental, Social, and Governance) spectrum with this global recognition of 'eco-friendly' logic chips. Samsung also earned the industry's first triple Carbon Trust Standard for Carbon, Water and Waste in June 2021.

The Carbon Trust is an independent and expert partner of organizations around the world that advises businesses on their opportunities in a sustainable, low carbon world. The Carbon Trust also measures and certifies the environmental footprint of organizations, supply chains and products. Of the various certification categories of the Carbon Trust, Samsung's System LSI products received the CO₂ Measured product carbon footprint label. The label certifies the chip's carbon footprint, which informs consumers of the impact that the product and its manufacturing process have on the environment. Receiving the CO₂ Measured label is a critical first step for carbon reduction, since it verifies the current carbon emissions of the product with globally recognized specifications (PAS 2050), which Samsung can use as a benchmark to measure future carbon reductions.

IBM Unveils New Generation of IBM Power Servers for Frictionless, Scalable Hybrid Cloud

IBM (NYSE: IBM) today announced the new IBM Power E1080 server, the first in a new family of servers based on the new IBM Power10 processor, designed specifically for hybrid cloud environments. The IBM Power10-equipped E1080 server is engineered to be one of the most secure server platforms available and is designed to help clients operate a secure, frictionless hybrid cloud experience across their entire IT infrastructure.

The IBM Power E1080 server is launching at a critical time for IT. As organizations around the world continue to adapt to unpredictable changes in consumer behaviors and needs, they need a platform that can deliver their applications and insights securely where and when they need them. The IBM Institute for Business Value's 2021 CEO Study found that, of the 3,000 CEOs surveyed, 56% emphasized the need to enhance operational agility and flexibility when asked what they'll most aggressively pursue over the next two to three years.

AAEON Unveils PICO-TGU4 Edge AI Board Powered by 11th Gen Core Processors

AAEON, an industry leader in AI Edge Computing solutions, announces the PICO-TGU4 compact PICO-ITX board powered by the 11th Generation Intel Core U processors. By leveraging AAEON's expertise and the cutting-edge technologies offered with this latest generation of processors, the PICO-TGU4 delivers performance and flexibility to power the next generation of industrial AI and machine vision applications.

The 11th Generation Intel Core processors (codenamed Tiger Lake) are the third generation of Intel's 10 nm microarchitecture, delivering roughly 15-20% better performance than the previous generation of processors. The PICO-TGU4 offers users the choice of Intel Core U i3/i5/i7 or Intel Celeron processors to power their projects. These processors support a range of technologies to ensure data integrity, accuracy and security, including on-board TPM 2.0 and in-band memory ECC (with select SKUs). Combined with up to 32 GB of on-board LPDDR4x memory, the PICO-TGU4 helps unlock the performance needed to deploy a wide range of embedded and edge computing applications.

Tachyum Boots Linux on Prodigy FPGA

Tachyum Inc. today announced that it has successfully executed the Linux boot process on the field-programmable gate array (FPGA) prototype of its Prodigy Universal Processor, within two months of taking delivery of the IO motherboard from manufacturing. This achievement proves the stability of the Prodigy emulation system and allows the company to move forward with additional testing before advancing to tape-out.

Tachyum engineers were able to perform the Linux boot, execute a short user-mode program and shut down the system on the fully functional FPGA emulation system. Not only does this successful test prove that the basic processor is stable, but also that interrupts, exceptions, timing, and system-mode transitions work as intended. This is a key milestone that dramatically reduces risk: booting and reliably running large, complex software like Linux on the Tachyum FPGA processor prototype shows that verification and hardware stability are past the most difficult turning point, and verification and testing should complete successfully in the coming months. Designers are now shifting their attention to debug and verification, running hundreds of trillions of test cycles over the next few months, along with large-scale user-mode applications and compatibility testing to bring the processor to production quality.

IBM Unveils On-Chip Accelerated Artificial Intelligence Processor

At the annual Hot Chips conference, IBM (NYSE: IBM) today unveiled details of the upcoming IBM Telum Processor, designed to bring deep learning inference to enterprise workloads to help address fraud in real time. Telum is IBM's first processor to contain on-chip acceleration for AI inferencing while a transaction is taking place. Three years in development, this breakthrough on-chip hardware acceleration is designed to help customers achieve business insights at scale across banking, finance, trading and insurance applications, as well as customer interactions. A Telum-based system is planned for the first half of 2022.

Today, businesses typically apply detection techniques to catch fraud after it occurs, a process that can be time consuming and compute-intensive due to the limitations of today's technology, particularly when fraud analysis and detection is conducted far away from mission critical transactions and data. Due to latency requirements, complex fraud detection often cannot be completed in real-time - meaning a bad actor could have already successfully purchased goods with a stolen credit card before the retailer is aware fraud has taken place.

NVIDIA Announces Financial Results for Second Quarter Fiscal 2022

NVIDIA (NASDAQ: NVDA) today reported record revenue for the second quarter ended August 1, 2021, of $6.51 billion, up 68 percent from a year earlier and up 15 percent from the previous quarter, with record revenue from the company's Gaming, Data Center and Professional Visualization platforms. GAAP earnings per diluted share for the quarter were $0.94, up 276 percent from a year ago and up 24 percent from the previous quarter. Non-GAAP earnings per diluted share were $1.04, up 89 percent from a year ago and up 14 percent from the previous quarter.

"NVIDIA's pioneering work in accelerated computing continues to advance graphics, scientific computing and AI," said Jensen Huang, founder and CEO of NVIDIA. "Enabled by the NVIDIA platform, developers are creating the most impactful technologies of our time - from natural language understanding and recommender systems, to autonomous vehicles and logistic centers, to digital biology and climate science, to metaverse worlds that obey the laws of physics.

Intel Powers Latest Amazon EC2 General Purpose Instances with 3rd Gen Intel Xeon Scalable Processors

Intel today announced that AWS customers can access the latest 3rd Gen Intel Xeon Scalable processors via the new Amazon Elastic Compute Cloud (Amazon EC2) M6i instances. Optimized for high-performance, general-purpose compute, the latest Intel-powered Amazon EC2 instances give customers increased flexibility and more choices when running their Intel-powered infrastructure within the AWS cloud. Today's news continues Intel and AWS' close collaboration, which has given customers scalable compute instances in the cloud for almost 15 years.

"Our latest 3rd Gen Intel Xeon Scalable processors are our highest performance data center CPU and provide AWS customers an excellent platform to run their most critical business applications. We look forward to continuing our long-term collaboration with AWS to deploy industry-leading technologies within AWS' cloud infrastructure." -Sandra Rivera, Intel executive vice president and general manager, Datacenter and AI Group.

Rambus Innovates 8.4 Gbps HBM3-ready Memory Subsystem

Rambus Inc., a premier chip and silicon IP provider making data faster and safer, today announced the Rambus HBM3-ready memory interface subsystem consisting of a fully-integrated PHY and digital controller. Supporting breakthrough data rates of up to 8.4 Gbps, the solution can deliver over a terabyte per second of bandwidth, more than double that of high-end HBM2E memory subsystems. With a market-leading position in HBM2/2E memory interface deployments, Rambus is ideally suited to enable customers' implementations of accelerators using next-generation HBM3 memory.

"The memory bandwidth requirements of AI/ML training are insatiable with leading-edge training models now surpassing billions of parameters," said Soo Kyoum Kim, associate vice president, Memory Semiconductors at IDC. "The Rambus HBM3-ready memory subsystem raises the bar for performance enabling state-of-the-art AI/ML and HPC applications."

NVIDIA Founder and CEO Jensen Huang to Receive Prestigious Robert N. Noyce Award

The Semiconductor Industry Association (SIA) today announced Jensen Huang, founder and CEO of NVIDIA and a trailblazer in building accelerated computing platforms, is the 2021 recipient of the industry's highest honor, the Robert N. Noyce Award. SIA presents the Noyce Award annually in recognition of a leader who has made outstanding contributions to the semiconductor industry in technology or public policy. Huang will accept the award at the SIA Awards Dinner on Nov. 18, 2021.

"Jensen Huang's extraordinary vision and tireless execution have greatly strengthened our industry, revolutionized computing, and advanced artificial intelligence," said John Neuffer, SIA president and CEO. "Jensen's accomplishments have fueled countless innovations—from gaming to scientific computing to self-driving cars—and he continues to advance technologies that will transform our industry and the world. We're pleased to recognize Jensen with the 2021 Robert N. Noyce Award for his many achievements in advancing semiconductor technology."

Penetration Rate of Ice Lake CPUs in Server Market Expected to Surpass 30% by Year's End as x86 Architecture Remains Dominant, Says TrendForce

While the server industry transitions to the latest generation of processors based on the x86 platform, the Intel Ice Lake and AMD Milan CPUs entered mass production earlier this year and were shipped at low volume in 1Q21 to certain customers, such as North American CSPs and telecommunication companies, according to TrendForce's latest investigations. These processors are expected to begin seeing widespread adoption in the server market in 3Q21. TrendForce believes that Ice Lake represents a step up in computing performance from the previous generation thanks to its higher scalability and support for more memory channels. In addition, the new normal that emerged in the post-pandemic era is expected to drive clients in the server sector to partially migrate to the Ice Lake platform, whose share of the server market is expected to surpass 30% in 4Q21.

MediaTek Announces Dimensity 920 and Dimensity 810 Chips for 5G Smartphones

MediaTek today announced the new Dimensity 920 and Dimensity 810 chipsets, the latest additions to its Dimensity 5G family. This debut gives smartphone makers the ability to provide boosted performance, brilliant imaging and smarter displays to their customers.

Designed for powerful 5G smartphones, the Dimensity 920 balances performance, power and cost to provide an incredible mobile experience. Built using the 6 nm high-performance manufacturing node, it supports intelligent displays and hardware-based 4K HDR video capture, while also offering a 9% boost in gaming performance compared to its predecessor, the Dimensity 900.

Xiaomi Announces CyberDog Powered by NVIDIA Jetson NX and Intel RealSense D450

Xiaomi today took another bold step in the exploration of future technology with its new bio-inspired quadruped robot - CyberDog. The launch of CyberDog is the culmination of Xiaomi's engineering prowess, condensed into an open source robot companion that developers can build upon.

CyberDog is Xiaomi's first foray into quadruped robotics for the open source community and developers worldwide. Robotics enthusiasts interested in CyberDog can compete or co-create with other like-minded Xiaomi Fans, together propelling the development and potential of quadruped robots.

MSI Announces Availability of Summit E16 Flip Convertible Business Laptop

It is often a dilemma for business users to choose between performance and portability when buying a laptop, but MSI is now offering the best of both worlds. If you're looking for a convertible commercial laptop with pen support and excellent graphics, the MSI Summit E16 Flip may be the solution for users who demand performance on the go.

After its announcement at CES 2021, the MSI Summit E16 Flip, a powerful yet beautifully designed laptop, is finally on the market. Not only does it exemplify the "Golden Ratio" with its 16:10 QHD+ display, its performance will not disappoint mobile commercial users. Moreover, the Dynamic Cooler Boost design helps distribute CPU and GPU performance across tasks accordingly. If you are an engineer, architect, designer, or even a startup owner who needs to pitch and present proposals, you will no longer struggle with loading and editing files on the spot, thanks to the 11th Gen Intel Core i7 processors and NVIDIA GeForce RTX 30 Series graphics. Along with support for the MSI Pen, the award-winning stylus introduced at CES 2021, the laptop boosts your productivity and lets you work more flexibly with its low latency, stable connection, long battery life, and customizable functions.

Intel Announces New Xeon W-3300 Processors

Intel today launched its newest generation Intel Xeon W-3300 processors, available today from its system integrator partners. Built for advanced workstation professionals, Intel Xeon W-3300 processors offer uncompromised performance, expanded platform capabilities, and enterprise-grade security and reliability in a single-socket solution.

Intel Xeon W-3300 processors are intelligently engineered to push the boundaries of performance, with a new processor core architecture that transforms what expert workstation users can accomplish on a workstation.

The Intel Xeon W-3300 processors are designed for next-gen professional applications with heavily threaded, input/output-intensive workloads. Use cases stretch across artificial intelligence (AI), architecture, engineering, construction (AEC), and media and entertainment (M&E). With a new processor core architecture to transform efficiency and advanced technologies to support data integrity, Intel Xeon W-3300 processors are equipped to deliver uncompromising workstation performance.

AAEON Introduces BOXER-8230AI Edge Computer Powered by NVIDIA Jetson TX2 NX

AAEON, a leader in AI edge solutions, announces the release of the BOXER-8230AI edge AI box PC powered by the NVIDIA Jetson TX2 NX System on Module (SOM). The BOXER-8230AI delivers powerful computing performance without breaking budgets, along with a rugged design and a diverse I/O layout that includes five Gigabit Ethernet ports. The BOXER-8230AI offers a solution that's perfect for intelligent applications including surveillance, smart factory, and smart retail.

The BOXER-8230AI platform brings flexibility to meet the needs of customers, with industrial design and storage capacity. With two available configurations, the BOXER-8230AI-A3 and BOXER-8230AI-A4 systems offer a flexible I/O loadout with five Gigabit Ethernet LAN ports to connect with IP cameras and other devices, as well as four USB 3.2 Gen 1 ports and two COM ports. Storage flexibility is provided with 16 GB of onboard eMMC storage, a microSD slot and a 2.5" SATA III bay (A4 model).

The BOXER-8230AI platform is powered by the NVIDIA Jetson TX2 NX SOM, delivering powerful AI edge computing without compromising costs. The Jetson TX2 NX delivers more than twice the performance of the NVIDIA Jetson Nano thanks to its six-core ARM processor and NVIDIA Pascal GPU with 256 CUDA cores. This allows the Jetson TX2 NX to achieve speeds up to 1.33 TFLOPS, and enables the system to power a wide range of AI Edge applications.
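The 1.33 TFLOPS figure is consistent with 256 Pascal CUDA cores if one assumes a GPU clock of roughly 1.3 GHz and double-rate FP16, neither of which is stated above; the estimate below is a rough sketch under those assumptions.

```python
# Rough sanity check of the 1.33 TFLOPS figure for the Jetson TX2 NX.
# The GPU clock and the double-rate FP16 behavior are assumptions, not article data.
cuda_cores = 256
gpu_clock_ghz = 1.3            # assumed boost clock
flops_per_core_per_cycle = 2   # one fused multiply-add per cycle

fp32_tflops = cuda_cores * gpu_clock_ghz * flops_per_core_per_cycle / 1000
fp16_tflops = fp32_tflops * 2  # assuming double-rate FP16 on the Pascal GPU

print(f"Estimated FP32 throughput: {fp32_tflops:.2f} TFLOPS")  # ~0.67
print(f"Estimated FP16 throughput: {fp16_tflops:.2f} TFLOPS")  # ~1.33
```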

AMD CDNA2 "Aldebaran" MI200 HPC Accelerator with 256 CU (16,384 cores) Imagined

AMD Instinct MI200 will be an important product for the company in the HPC and AI supercomputing market. It debuts the CDNA2 compute architecture and is based on a multi-chip module (MCM) codenamed "Aldebaran." PC enthusiast Locuza, who conjures up highly detailed architecture diagrams based on public information, imagined what "Aldebaran" could look like. The MCM contains two logic dies and eight HBM2E stacks. Each of the two dies has a 4096-bit HBM2E interface, which talks to 64 GB of memory (128 GB per package). A silicon interposer provides microscopic wiring among the ten dies.

Each of the two logic dies, or chiplets, has eight shader engines with 16 compute units (CU) each. The CDNA2 compute unit is capable of full-rate FP64, packed FP32 math, and Matrix Engines V2 (fixed-function hardware for matrix multiplication, accelerating DNN building, training, and AI inference). With 128 CUs per chiplet, and assuming the CDNA2 CU has 64 stream processors, one arrives at 8,192 SP. Two such dies add up to a whopping 16,384 stream processors, more than three times the count of the "Navi 21" RDNA2 silicon. Each die further features its own independent PCIe interface and XGMI (AMD's rival to CXL), an interconnect designed for high-density HPC scenarios. A rudimentary VCN (Video Core Next) block is also present. It's important to note here that neither the CDNA2 CU nor the "Aldebaran" MCM itself has a dual use as a GPU, since it lacks much of the hardware needed for graphics processing. The MI200 is expected to launch later this year.
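The shader counts quoted above follow from straightforward multiplication; the quick check below uses the same 64-stream-processors-per-CU assumption the article makes.

```python
# Quick check of the "Aldebaran" shader math described above.
# The 64 SP-per-CU figure is the article's own assumption.
shader_engines_per_die = 8
cus_per_shader_engine = 16
sp_per_cu = 64
dies = 2

cus_per_die = shader_engines_per_die * cus_per_shader_engine  # 128 CUs
sp_per_die = cus_per_die * sp_per_cu                          # 8,192 SP
total_sp = sp_per_die * dies                                  # 16,384 SP

navi21_sp = 5120  # "Navi 21" RDNA2 stream processor count
print(f"Total: {total_sp} SP, {total_sp / navi21_sp:.1f}x Navi 21")  # 3.2x
```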

MediaTek Launches Dimensity 5G Open Resource Architecture

MediaTek today announced the Dimensity 5G Open Resource Architecture that provides brands with more flexibility to customize key 5G mobile device features to address different market segments. The open resource architecture gives smartphone brands closer-to-metal access to customize features for cameras, displays, graphics, artificial intelligence (AI) processing units (APUs), sensors and connectivity sub-systems within the Dimensity 1200 chipset.

"MediaTek is collaborating with the world's largest smartphone brands to unlock customized consumer experiences that differentiate flagship 5G smartphones," said Dr. Yenchi Lee, Deputy General Manager of MediaTek's Wireless Communications Business Unit. "Whether it's novel multimedia features, unmatched performance, brilliant imaging or more synergy between smartphones and services, with our architecture device makers can tailor their devices to complement a variety of consumer lifestyles."

NVIDIA Releases Canvas App Beta

NVIDIA has released a public beta for Canvas, its AI/deep learning program that can turn simple brushstrokes into realistic landscape images. The tool has been in development at NVIDIA for several years and allows users to paint a simple scene using 15 different materials such as grass, rock, water, fog, snow, and trees. The program uses this sketch to generate a photorealistic background in nine different styles. The results can also be exported as an Adobe Photoshop PSD file for further enhancement and refinement. The application requires an NVIDIA RTX, Quadro RTX, or TITAN RTX graphics card with driver version 460.89 or later and is only available for Windows 10. You can now download the NVIDIA Canvas beta from the link below.

AI-Designed Microchips Now Outperform Human-Designed Ones

A recent Google study led by Mirhoseini et al. and published in Nature details how AI can be leveraged to improve upon the semiconductor design practices currently employed, themselves the result of more than 60 years of engineering and physics research. The paper describes a trained machine-learning 'agent' that can successfully place macro blocks, one by one, into a chip layout. This agent has a brain-inspired architecture known as a deep neural network and is trained using a paradigm called reinforcement learning, where positive changes to a design are committed to memory as possible solutions while negative changes are discarded, effectively allowing the neural network to build a decision tree of sorts that is optimized every step of the way.

The AI isn't applied to every stage of microchip design as of yet, but that will surely change in years to come. For now, the AI is only being employed in the chip floorplanning stage of microchip production, which is actually one of the more painstaking ones. Essentially, microchip designers have to place macro blocks on their semiconductor designs - pre-made arrangements of transistors whose placement relative to one another and to the rest of the chip's components is of critical importance for performance and efficiency targets. Remember that electrical signals have to traverse different chip components for a semiconductor to work, and the way these are arranged in the floorplanning stage can have a tremendous impact on the performance characteristics of a given chip. Image A, below, showcases the tidy design a human engineer would favor, while image B showcases the apparently chaotic nature of the AI's planning.
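The commit-improvements, discard-regressions idea described above can be illustrated with a toy placement loop. This is a deliberately simplified hill-climbing sketch for intuition only, not the deep reinforcement-learning agent from the Nature paper; the grid, block connectivity, and wirelength cost are all invented for the example.

```python
import random

# Toy stand-in for the accept/discard loop: propose a random move for one macro
# block, keep it if the total wirelength improves, revert it otherwise.
random.seed(0)

GRID = 16                                        # hypothetical floorplan grid
NETS = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # invented block connectivity
placement = {b: (random.randrange(GRID), random.randrange(GRID)) for b in range(4)}

def wirelength(p):
    """Total Manhattan distance over all connected block pairs."""
    return sum(abs(p[a][0] - p[b][0]) + abs(p[a][1] - p[b][1]) for a, b in NETS)

cost = wirelength(placement)
for _ in range(2000):
    block = random.randrange(4)
    previous = placement[block]
    placement[block] = (random.randrange(GRID), random.randrange(GRID))  # proposed move
    new_cost = wirelength(placement)
    if new_cost <= cost:
        cost = new_cost              # commit the improvement
    else:
        placement[block] = previous  # discard the regression

print(f"Final wirelength: {cost}, placement: {placement}")
```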

COLORFUL Launches GeForce RTX 3080 Ti and RTX 3070 Ti Graphics Cards

Colorful Technology Company Limited, a professional manufacturer of graphics cards, motherboards, all-in-one gaming and multimedia solutions, and high-performance storage, proudly introduces the COLORFUL iGame GeForce RTX 3080 Ti and RTX 3070 Ti graphics cards. The line-up consists of the Vulcan, Advanced OC, and NB models. The all-new GeForce RTX 3070 Ti NB takes on a new look with its improved cooling and mightier design. The COLORFUL iGame GeForce RTX 3080 Ti and RTX 3070 Ti come packed with premium features including One-Key Overclock, customizable RGB lighting, and more to cater to different types of power users, gamers, and PC enthusiasts.

Powered by the NVIDIA Ampere architecture, the GeForce RTX 3080 Ti delivers an incredible leap in performance and fidelity with acclaimed features such as ray tracing, NVIDIA DLSS performance-boosting AI, NVIDIA Reflex latency reduction, NVIDIA Broadcast streaming features, and additional memory that allows it to speed through the most popular creator applications as well.