News Posts matching #AI

GeForce NOW Gains NVIDIA DLSS 2.0 Support In Latest Update

NVIDIA's game streaming service GeForce NOW has gained support for NVIDIA Deep Learning Super Sampling (DLSS) 2.0 in its latest update. DLSS 2.0 uses the tensor cores found in RTX-series graphics cards to render games at a lower resolution and then reconstructs sharp, higher-resolution images using a trained AI model. The introduction of DLSS 2.0 to GeForce NOW should improve graphics quality on existing server hardware and deliver a smoother, stutter-free gaming experience. NVIDIA announced that Control would be the first game on the platform to support DLSS 2.0, with additional games such as MechWarrior 5: Mercenaries and Deliver Us The Moon to follow.
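
As a rough illustration of the savings involved (the exact render resolution per quality mode is NVIDIA's choice and isn't confirmed here): upscaling from 2560×1440 to 3840×2160 means the GPU shades only about 3.7 million pixels per frame instead of about 8.3 million - roughly 44 percent of the native workload - with the tensor-core network reconstructing the rest.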

NVIDIA Completes Acquisition of Mellanox

NVIDIA today announced the completion of its acquisition of Mellanox Technologies, Ltd., for a transaction value of $7 billion. The acquisition, initially announced on March 11, 2019, unites two of the world's leading companies in high performance and data center computing. Combining NVIDIA's leading computing expertise with Mellanox's high-performance networking technology, the move will enable customers to achieve higher performance, greater utilization of computing resources and lower operating costs.

"The expanding use of AI and data science is reshaping computing and data center architectures," said Jensen Huang, founder and CEO of NVIDIA. "With Mellanox, the new NVIDIA has end-to-end technologies from AI computing to networking, full-stack offerings from processors to software, and significant scale to advance next-generation data centers. Our combined expertise, supported by a rich ecosystem of partners, will meet the challenge of surging global demand for consumer internet services, and the application of AI and accelerated data science from cloud to edge to robotics."

NVIDIA Unveils RTX Voice, AI-based Audio Noise-Cancellation Software

Perhaps the biggest gripe about attending office calls and meetings from home these days is the background noise - everyone's home. NVIDIA has developed an interesting new piece of free software that helps desktop users cut background noise out of audio, called RTX Voice, released to the web as a beta. The app uses AI to filter out background noise not just on your end, but also from the audio of others in your meeting as you receive it (they don't need the app running on their end). The app leverages tensor cores and requires an NVIDIA GeForce RTX 20-series GPU, Windows 10, and GeForce driver R410 or later. RTX Voice runs in conjunction with your meeting software; supported applications include Cisco Webex, Zoom, Skype, Twitch, XSplit, OBS, Discord, and Slack. For more information and FAQs, visit the download link.
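
For readers curious how this class of filtering works in principle, below is a minimal, purely illustrative Python/NumPy sketch of spectral gating - a crude stand-in for the learned model RTX Voice actually runs on the tensor cores. The audio array, frame size, and threshold are hypothetical placeholders, not anything NVIDIA documents.

import numpy as np

def suppress_noise(audio, frame=1024, hop=512, threshold=0.05):
    # audio: 1-D float array of mono samples (an assumption of this sketch)
    window = np.hanning(frame)   # Hann frames at 50% overlap sum to roughly unity
    out = np.zeros(len(audio))
    for start in range(0, len(audio) - frame + 1, hop):
        chunk = audio[start:start + frame] * window
        spectrum = np.fft.rfft(chunk)
        magnitude = np.abs(spectrum)
        # Keep only bins that stand out against the frame's loudest content;
        # a learned suppressor would instead predict this mask per frequency bin.
        mask = magnitude > threshold * magnitude.max()
        out[start:start + frame] += np.fft.irfft(spectrum * mask, n=frame)
    return out

A real AI suppressor replaces the hand-tuned threshold with a per-frequency mask predicted by a neural network trained on pairs of clean and noisy speech.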

DOWNLOAD: NVIDIA RTX Voice beta

Xilinx Announces World's Highest Bandwidth, Highest Compute Density Adaptable Platform for Network and Cloud Acceleration

Xilinx, Inc. today announced Versal Premium, the third series in the Versal ACAP portfolio. The Versal Premium series features highly integrated, networked and power-optimized cores and the industry's highest bandwidth and compute density on an adaptable platform. Versal Premium is designed for the highest bandwidth networks operating in thermally and spatially constrained environments, as well as for cloud providers who need scalable, adaptable application acceleration.

Versal is the industry's first adaptive compute acceleration platform (ACAP), a revolutionary new category of heterogeneous compute devices with capabilities that far exceed those of conventional silicon architectures. Developed on TSMC's 7-nanometer process technology, Versal Premium combines software programmability with dynamically configurable hardware acceleration and pre-engineered connectivity and security features to enable a faster time-to-market. The Versal Premium series delivers up to 3X higher throughput compared to current generation FPGAs, with built-in Ethernet, Interlaken, and cryptographic engines that enable fast and secure networks. The series doubles the compute density of currently deployed mainstream FPGAs and provides the adaptability to keep pace with increasingly diverse and evolving cloud and networking workloads.

UNISOC Launches Next-Gen 5G SoC T7520 on 6 nm EUV Manufacturing Node

UNISOC, a leading global supplier of mobile communication and IoT chipsets, today officially launched its new-generation 5G SoC mobile platform - T7520. Using cutting-edge process technology, T7520 enables an optimized 5G experience with substantially enhanced AI computing and multimedia imaging processing capabilities while lowering power consumption.

T7520 is UNISOC's second-generation 5G smartphone platform. Built on a 6 nm EUV process and incorporating some of the latest design techniques, it offers substantially enhanced performance at lower power consumption than its predecessor.

Samsung Electronics Begins Mass Production at New EUV Manufacturing Line

Samsung Electronics, a world leader in advanced semiconductor technology, today announced that its new cutting-edge semiconductor fabrication line in Hwaseong, Korea, has begun mass production.

The facility, V1, is Samsung's first semiconductor production line dedicated to extreme ultraviolet (EUV) lithography and produces chips on process nodes of 7 nanometers (nm) and below. The V1 line broke ground in February 2018 and began test wafer production in the second half of 2019. Its first products will be delivered to customers in the first quarter.

VR as a Coping Mechanism for Loss: Meet Nayeon

VR has been hailed as the next coming of truly ingenious, engrossing, enveloping experiences, and to some extent, it already does offer those. There are still limitations to the technology and the level of realism it can impart (there is a whole slew of senses we would need to trigger for truly enveloping experiences, of course), but I feel we sometimes take a somewhat limited view of what VR can be. Of course, we can all imagine video games built in VR - and when we do, we likely imagine them as they were presented to us in Steven Spielberg's Ready Player One.

Then there are other use cases, such as real-estate experiences that place you right inside your future home and allow you to see the changes you'd make. Architectural design, engineering, game-world design, even strolls through museums, your mind a subatomic particle able to instantly travel to foreign countries and explore their marvels. All of this, mind you, without ever leaving the comfort of our homes, without the expense and wasted time of travelling or passport checks - all of it, however, simulated. But what if VR could go even further? What if VR could be used as a coping mechanism? What if you could meet your dead parents, siblings... or children? This is the story I bring to you today: of how VR was used to reunite a mother with her deceased seven-year-old daughter. This is the story of Ji-sung and her daughter Nayeon.

AMD Radeon Instinct MI100 "Arcturus" Hits the Radar, We Have its BIOS

AMD's upcoming large post-Navi graphics chip, codenamed "Arcturus," will debut as the "Radeon Instinct MI100", an AI/ML accelerator under the Radeon Instinct brand, which AMD calls "Server Accelerators." TechPowerUp has accessed its BIOS, which is now up on our VGA BIOS database. The card carries the device ID "0x1002 0x738C", which confirms "AMD" and "Arcturus". The BIOS also confirms a massive 32 GB of HBM2 memory, clocked at 1000 MHz real (possibly 1 TB/s of bandwidth, if the memory bus is 4096 bits wide).
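
The 1 TB/s figure checks out arithmetically if one assumes a standard 4096-bit HBM2 interface (four stacks of 1024 bits) and double data rate: 2 × 1000 MHz × 4096 bit ÷ 8 bit/byte = 1,024 GB/s of peak theoretical bandwidth, or roughly 1 TB/s.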

Both Samsung (KHA884901X) and Hynix (H5VR64ESA8H) memory are supported, which is an important capability for AMD's supply chain. From the ID string "MI100 D34303 A1 XL 200W 32GB 1000m" we can derive that the TDP limit is set to a surprisingly low 200 W, especially considering this is a 128 CU / 8,192-shader design. For comparison, Vega 64 and the Radeon Instinct MI60 have power budgets of around 300 W with 4,096 shaders, and the 5700 XT has 225 W with 2,560 shaders, so either AMD has achieved some monumental efficiency improvements with Arcturus, or the design is intentionally running constrained so that AMD doesn't reveal its hand to the partners doing early testing of the card.

Samsung Launches 3rd-Generation "Flashbolt" HBM2E Memory

Samsung Electronics, the world leader in advanced memory technology, today announced the market launch of 'Flashbolt', its third-generation High Bandwidth Memory 2E (HBM2E). The new 16-gigabyte (GB) HBM2E is uniquely suited to maximize high performance computing (HPC) systems and help system manufacturers to advance their supercomputers, AI-driven data analytics and state-of-the-art graphics systems in a timely manner.

"With the introduction of the highest performing DRAM available today, we are taking a critical step to enhance our role as the leading innovator in the fast-growing premium memory market," said Cheol Choi, executive vice president of Memory Sales & Marketing at Samsung Electronics. "Samsung will continue to deliver on its commitment to bring truly differentiated solutions as we reinforce our edge in the global memory marketplace."

Europe Readies its First Prototype of Custom HPC Processor

The European Processor Initiative (EPI) is Europe's project to kickstart homegrown development of custom processors tailored to the different usage models the European Union might need. EPI's first task is to create a custom processor for high-performance computing applications like machine learning, and the chip prototypes are already on their way. EPI chairman of the board Jean-Marc Denis recently spoke to The Next Platform and confirmed some information regarding the processor's design goals and launch timeframe.

Expected to be manufactured on TSMC's 6 nm EUV (N6) technology, the EPI processor will tape out at the end of 2020 or the beginning of 2021, and it is going to be heterogeneous: many different IPs will be present in its 2.5D package. The processor will use custom Arm CPU cores based on the "Zeus" iteration of the Neoverse server core, meant for general-purpose tasks like running the OS. For special-purpose compute, EPI will incorporate a chip named Titan - a RISC-V based processor that uses vector and tensor processing units for AI tasks. Titan will support the common AI number formats, including FP32, FP64, INT8, and bfloat16. The system will pair HBM memory with the Titan processor, provide DDR5 links for the CPU, and feature PCIe 5.0 for internal connectivity.
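
For context on those formats: bfloat16 is a 16-bit type with 1 sign bit, 8 exponent bits, and 7 mantissa bits, so it keeps the same dynamic range as FP32 (which also uses an 8-bit exponent) while halving storage and bandwidth at the cost of precision - the trade-off that has made it a de facto standard for AI training hardware.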

Samsung Starts Production of AI Chips for Baidu

Baidu, a leading Chinese-language Internet search provider, and Samsung Electronics, a world leader in advanced semiconductor technology, today announced that Baidu's first cloud-to-edge AI accelerator, Baidu KUNLUN, has completed its development and will be mass-produced early next year. Baidu KUNLUN chip is built on the company's advanced XPU, a home-grown neural processor architecture for cloud, edge, and AI, as well as Samsung's 14-nanometer (nm) process technology with its I-Cube (Interposer-Cube) package solution.

The chip offers 512 gigabytes per second (GBps) of memory bandwidth and supplies up to 260 tera operations per second (TOPS) at 150 watts. In addition, the new chip allows Ernie, a pre-training model for natural language processing, to infer three times faster than on a conventional GPU/FPGA-based accelerator. Leveraging the chip's limit-pushing computing power and power efficiency, Baidu can effectively support a wide variety of functions including large-scale AI workloads, such as search ranking, speech recognition, image processing, natural language processing, autonomous driving, and deep learning platforms like PaddlePaddle.
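
Taken at face value, those numbers work out to roughly 260 TOPS ÷ 150 W ≈ 1.7 TOPS per watt - a peak-rating figure, so sustained real-world workloads will land lower.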

NVIDIA Introduces DRIVE AGX Orin Platform

NVIDIA today introduced NVIDIA DRIVE AGX Orin, a highly advanced software-defined platform for autonomous vehicles and robots. The platform is powered by a new system-on-a-chip (SoC) called Orin, which consists of 17 billion transistors and is the result of four years of R&D investment. The Orin SoC integrates NVIDIA's next-generation GPU architecture and Arm Hercules CPU cores, as well as new deep learning and computer vision accelerators that, in aggregate, deliver 200 trillion operations per second—nearly 7x the performance of NVIDIA's previous generation Xavier SoC.
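
The claim is consistent with Xavier's commonly quoted rating of roughly 30 TOPS: 200 ÷ 30 ≈ 6.7, hence "nearly 7x".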

Orin is designed to handle the large number of applications and deep neural networks that run simultaneously in autonomous vehicles and robots, while achieving systematic safety standards such as ISO 26262 ASIL-D. Built as a software-defined platform, DRIVE AGX Orin is developed to enable architecturally compatible platforms that scale from a Level 2 to full self-driving Level 5 vehicle, enabling OEMs to develop large-scale and complex families of software products. Since both Orin and Xavier are programmable through open CUDA and TensorRT APIs and libraries, developers can leverage their investments across multiple product generations.

Intel Acquires Artificial Intelligence Chipmaker Habana Labs

Intel Corporation today announced that it has acquired Habana Labs, an Israel-based developer of programmable deep learning accelerators for the data center for approximately $2 billion. The combination strengthens Intel's artificial intelligence (AI) portfolio and accelerates its efforts in the nascent, fast-growing AI silicon market, which Intel expects to be greater than $25 billion by 2024 (1).

"This acquisition advances our AI strategy, which is to provide customers with solutions to fit every performance need - from the intelligent edge to the data center," said Navin Shenoy, executive vice president and general manager of the Data Platforms Group at Intel. "More specifically, Habana turbo-charges our AI offerings for the data center with a high-performance training processor family and a standards-based programming environment to address evolving AI workloads."

Intel's AI strategy is grounded in the belief that harnessing the power of AI to improve business outcomes requires a broad mix of technology - hardware and software - and full ecosystem support. Today, Intel AI solutions are helping customers turn data into business value and driving meaningful revenue for the company. In 2019, Intel expects to generate over $3.5 billion in AI-driven revenue, up more than 20 percent year-over-year. Together, Intel and Habana can accelerate the delivery of best-in-class AI products for the data center, addressing customers' evolving needs.

Intel in Negotiations for Habana Labs Acquisition

Intel is currently in negotiations to acquire Israeli AI chip startup Habana Labs, according to a person who spoke to Calcalist anonymously. If the deal materializes, Intel will pay between one and two billion USD, making it Intel's second-largest acquisition of an Israeli company. When asked about the potential deal, an Intel spokesperson stated that the company does not respond to rumors, while Habana Labs has yet to respond to Calcalist's request for comment.

Founded in 2016 by Israeli entrepreneur Avigdor Willenz, who also founded Galileo Technologies and Annapurna Labs, Habana Labs develops processors for training and inference of machine learning models. The acquisition would allow Intel to compete better in the AI processor market and gain customers that were previously exclusive to Habana Labs.

MediaTek Announces Dimensity & Dimensity 1000 5G SoC

MediaTek today unveiled Dimensity, MediaTek's family of powerful 5G system-on-chips (SoCs) offering an unrivaled combination of connectivity, multimedia, AI and imaging innovations for premium and flagship smartphones.

The MediaTek Dimensity 5G chipset family brings smart and fast together to power the world's most capable 5G devices. Dimensity represents a step toward a new era of mobility - the fifth dimension - to spur industry innovation and let consumers unlock the possibilities of 5G connectivity.

ASUS Deploying AI for Noise Cancellation Technology in Upcoming ROG Strix Go 2.4 Headset

ASUS' upcoming ROG Strix Go 2.4 headset may well grab more headlines than this one here at TechPowerUp, since the company has decided to integrate the latest buzzword into the upcoming product (conspicuously missing a "Gaming" suffix). According to ASUS, the ROG Strix Go 2.4's bi-directional boom mic uses a noise-cancelling algorithm to block out up to 95% of unwanted noise, ranging from rage-induced keyboard mashing to nagging significant others (just kidding, they never nag). ASUS says the algorithm behind these noise-cancellation features has been trained on a database of over 50 million records for tens of thousands of hours. But noise cancellation without decent output sound quality wouldn't a driving product make, so ASUS has fitted the headphones with its 40 mm ASUS Essence drivers, which the company says have a penchant for bass and have been conditioned in an airtight chamber to reduce parasitic sounds.

NVIDIA Announces Jetson Xavier NX, Smallest Supercomputer for AI at the Edge

NVIDIA today introduced Jetson Xavier NX, the world's smallest, most powerful AI supercomputer for robotic and embedded computing devices at the edge. With a compact form factor smaller than the size of a credit card, the energy-efficient Jetson Xavier NX module delivers server-class performance up to 21 TOPS for running modern AI workloads, and consumes as little as 10 watts of power.

Jetson Xavier NX opens the door for embedded edge computing devices that demand increased performance but are constrained by size, weight, power budgets or cost. These include small commercial robots, drones, intelligent high-resolution sensors for factory logistics and production lines, optical inspection, network video recorders, portable medical devices and other industrial IoT systems.

Rambus Achieves Industry-Leading GDDR6 Performance at 18 Gbps

Rambus Inc., a premier silicon IP and chip provider making data faster and safer, today announced it has achieved industry-leading 18 Gbps performance with the Rambus GDDR6 Memory PHY. Running at the industry's fastest data rate of 18 Gbps, the Rambus GDDR6 PHY IP delivers peak performance four-to-five times faster than current DDR4 solutions and continues the company's longstanding tradition of developing leading-edge products. The Rambus GDDR6 PHY pairs with the companion GDDR6 memory controller from the recent acquisition of Northwest Logic to provide a complete and optimized memory subsystem solution.
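
To put the per-pin figure in perspective: a single 32-bit GDDR6 device running at 18 Gbps delivers 18 Gbps × 32 ÷ 8 = 72 GB/s, so a hypothetical 256-bit graphics card interface built from eight such devices would reach 576 GB/s of peak bandwidth (an illustrative configuration, not one Rambus specifies here).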

Increased data usage in applications such as AI, ML, data center, networking and automotive systems is driving a need for higher bandwidth memory. The coming introduction of high-bandwidth 5G networks will exacerbate this challenge. Working closely with our memory partners, the Rambus GDDR6 solution gives system designers more options in selecting the memory system that meets both their bandwidth and cost requirements.

NVIDIA Launches the New Shield TV

NVIDIA today raised the bar higher still for streaming media players — unveiling the next generation of SHIELD TV, which delivers unmatched levels of home entertainment, gaming and AI capabilities right into the living room, starting at $149.

The two new SHIELD models — SHIELD TV and SHIELD TV Pro — provide exceptional visual and sound experiences. Their new Tegra X1+ processor, delivering up to 25 percent more performance than its predecessor, helps bring to life Dolby Vision for ultra-vivid imagery and Dolby Atmos for extraordinary audio. Its computational prowess dramatically improves picture quality by using AI to upscale HD video streams into 4K resolution.
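
For a sense of scale, 4K (3840 × 2160, about 8.3 million pixels) contains four times as many pixels as 1080p (1920 × 1080, about 2.1 million), so the AI upscaler is effectively synthesizing roughly three out of every four pixels shown on screen.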

New NVIDIA EGX Edge Supercomputing Platform Accelerates AI, IoT, 5G at the Edge

NVIDIA today announced the NVIDIA EGX Edge Supercomputing Platform - a high-performance, cloud-native platform that lets organizations harness rapidly streaming data from factory floors, manufacturing inspection lines and city streets to securely deliver next-generation AI, IoT and 5G-based services at scale, with low latency.

Early adopters of the platform - which combines NVIDIA CUDA-X software with NVIDIA-certified GPU servers and devices - include Walmart, BMW, Procter & Gamble, Samsung Electronics and NTT East, as well as the cities of San Francisco and Las Vegas.

Intel Could Unveil First Discrete 10 nm GPUs in mid-2020

According to sources close to DigiTimes, Intel will unveil its first discrete 10 nm graphics cards, named "Xe," very soon, with the first wave of Xe GPUs expected to arrive sometime in 2020. Said to launch mid-year, around July or August, the long-awaited product will see Intel start selling initial Xe GPU models to consumers, in the hope of gaining a share of the massive market that uses GPUs to accelerate all kinds of tasks.

Perhaps one of the most interesting notes DigiTimes reported is that "... Intel's GPUs have already received support from the upstream supply chain and has already been integrated into Intel's CPUs to be used in the datacenter and AI fields.", meaning that AIB partners already have access to the first 10 nm graphics chips, which are ready for system integration. The first generation of Xe graphics cards will cover almost the whole GPU market, including PC, data center, and AI applications, where NVIDIA currently holds the top spot.

TSMC Starts Shipping its 7nm+ Node Based on EUV Technology

TSMC today announced that its seven-nanometer plus (N7+), the industry's first commercially available Extreme Ultraviolet (EUV) lithography technology, is delivering customer products to market in high volume. The N7+ process with EUV technology is built on TSMC's successful 7 nm node and paves the way for 6 nm and more advanced technologies.

The N7+ ramp to volume production is one of the fastest on record. N7+, which began volume production in the second quarter of 2019, is achieving yields similar to the original N7 process, which has been in volume production for more than one year.

GLOBALFOUNDRIES Introduces 12LP+ FinFET Solution for Cloud and Edge AI Applications

GLOBALFOUNDRIES (GF), the world's leading specialty foundry, announced today at its Global Technology Conference the availability of 12LP+, an innovative new solution for AI training and inference applications. 12LP+ offers chip designers a best-in-class combination of performance, power and area, along with a set of key new features, a mature design and production ecosystem, cost-efficient development and fast time-to-market for high-growth cloud and edge AI applications.

Derived from GF's existing 12nm Leading Performance (12LP) platform, GF's new 12LP+ provides either a 20% increase in performance or a 40% reduction in power requirements over the base 12LP platform, plus a 15% improvement in logic area scaling. A key feature is a high-speed, low-power 0.5 V SRAM bit cell that supports the fast, power-efficient shuttling of data between processors and memory, an important requirement for AI applications in the computing and wired infrastructure markets.
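
The emphasis on a 0.5 V bit cell follows from the usual dynamic-power relationship P ∝ C·V²·f: dropping the SRAM supply from a nominal ~0.8 V (an assumed reference point, not a GF figure) to 0.5 V cuts the voltage-dependent term to (0.5 ÷ 0.8)² ≈ 0.39, roughly a 60% reduction at the same capacitance and switching frequency.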

Intel "Cascade Lake-X" HEDT CPU Lineup Starts at 10-core, Core i9-10900X Geekbenched

With its 10th generation Core X "Cascade Lake-X" HEDT processor series, Intel will not bother designing models with single-digit core-counts. The series is likely to start at 10 cores with the Core i9-10900X. This 10-core/20-thread processor features a quad-channel DDR4 memory interface, and comes with clock speeds of 3.70 GHz base, a 200 MHz speed-bump over the Core i9-9900X. The chip retains the mesh interconnect design and cache hierarchy of Intel's HEDT processors since "Skylake-X," with 1 MB of dedicated L2 cache per core, and 19.3 MB of shared L3 cache.

Geekbench tests run on the chip show it performing roughly on par with the i9-9900X, with the 200 MHz speed-bump expected to marginally improve multi-threaded performance. Where the "Cascade Lake-X" silicon is expected to one-up "Skylake-X" is in its support for DL Boost - new instructions that accelerate the low-precision matrix math used in building and training AI deep neural networks - and in pricing. Intel is expected to price its next-generation HEDT processors aggressively, nearly doubling cores per dollar.
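
On Cascade Lake, DL Boost takes the form of AVX-512 VNNI instructions that fuse a chain of 8-bit multiplies into a 32-bit accumulation. A rough NumPy sketch of what one VNNI-style step computes per accumulator lane (illustrative only, not Intel's intrinsics API):

import numpy as np

# One VNNI-style step: four unsigned 8-bit activations times four signed
# 8-bit weights, with the products summed into a 32-bit running accumulator.
a = np.array([12, 250, 7, 99], dtype=np.uint8)    # activations (u8)
w = np.array([-3, 17, 5, -120], dtype=np.int8)    # weights (s8)
acc = np.int32(1000)                              # running 32-bit accumulator
acc += np.sum(a.astype(np.int32) * w.astype(np.int32), dtype=np.int32)
print(acc)  # 1000 + (-36 + 4250 + 35 - 11880) = -6631

Doing this in one instruction per lane, instead of separate widen, multiply, and add steps, is what boosts INT8 inference throughput on these chips.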

OtherSide Entertainment Releases Pre-Alpha Gameplay Footage of System Shock 3

System Shock is one of the most revered, iconic franchises in PC gaming. One of the first games to thoroughly explore the trope of the rampant AI in a fully realized world (be it lore- or mechanics-wise), it left a sense of grandeur with those who actually got to play it before some of us gamers became too blinded by pretty, modern graphics.

Developer OtherSide Entertainment, who have the rights to develop further worlds in the franchise, have just released a pre-alpha gameplay teaser that at least serves to give us a taste of their art and mood direction. I got some flashbacks to the first Dead Space game whilst watching this, but that might just be me. This really does look like one of the darkest renditions of AI-powered reckonings we've seen to date. Here's hoping for a good, engaging storyline. Check out the teaser after the break.