News Posts matching #AI


NVIDIA Surpasses Intel in Market Cap

Yesterday, after the stock market closed, NVIDIA officially overtook Intel in market capitalization. In after-hours trading, NVIDIA (ticker: NVDA) stock stood at $411.20, giving the company a market cap of 251.31 billion USD. It marks a historic day for NVIDIA, which has historically been the smaller of the two companies; in the past, some even speculated that Intel could buy NVIDIA. Intel's (ticker: INTC) market cap now stands at 248.15 billion USD, slightly below NVIDIA's. Market cap is not the whole story, however: NVIDIA's stock price is fueled by the hype around machine learning and AI, while Intel's valuation does not rest on any potential bubble.

If we compare revenues, Intel performs far better: it posted 71.9 billion USD of revenue in 2019, against NVIDIA's 11.72 billion USD. Still, NVIDIA has done an impressive job, nearly doubling its revenue from $6.91 billion in 2017 to $11.72 billion in 2019, and market forecasts suggest the growth is not about to stop. With the recent acquisition of Mellanox, the company now has much bigger opportunities for expansion and growth.

Qualcomm Announces Snapdragon 865 Plus 5G Mobile Platform, Breaking the 3 GHz Barrier

Qualcomm Technologies, Inc. unveiled the Qualcomm Snapdragon 865 Plus 5G Mobile Platform, a follow-on to the flagship Snapdragon 865 that has powered more than 140 devices (announced or in development) - the most individual premium-tier designs powered by a single mobile platform this year. The new Snapdragon 865 Plus is designed to deliver increased performance across the board for superior gameplay and insanely fast Qualcomm Snapdragon Elite Gaming experiences, truly global 5G, and ultra-intuitive AI.

"As we work to scale 5G, we continue to invest in our premium tier, 8-series mobile platforms, to push the envelope in terms of performance and power efficiency and deliver the next generation of camera, AI and gaming experiences," said Alex Katouzian, senior vice president and general manager, mobile, Qualcomm Technologies, Inc. "Building upon the success of Snapdragon 865, the new Snapdragon 865 Plus will deliver enhanced performance for the next wave of flagship smartphones."

SK hynix Starts Mass-Production of HBM2E High-Speed DRAM

SK hynix announced that it has started full-scale mass production of its high-speed 'HBM2E' DRAM, only ten months after the company announced the development of the product in August last year. SK hynix's HBM2E delivers over 460 GB (gigabytes) per second of bandwidth across 1,024 I/Os (inputs/outputs), based on a per-pin speed of 3.6 Gbps (gigabits per second). It is the fastest DRAM solution in the industry, able to transmit 124 FHD (Full-HD) movies (3.7 GB each) per second. Each stack provides 16 GB of capacity by vertically stacking eight 16 Gb chips with TSV (Through-Silicon Via) technology, more than double that of the previous generation (HBM2).
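As a quick sanity check, the headline numbers follow directly from the per-pin speed and the I/O count. The short Python sketch below reproduces the 460 GB/s and 124-movies-per-second figures, with the 3.7 GB movie size taken from the press release:

```python
# Rough arithmetic behind SK hynix's HBM2E bandwidth claim.
PIN_SPEED_GBPS = 3.6   # gigabits per second, per pin
IO_COUNT = 1024        # I/Os per HBM2E stack
MOVIE_SIZE_GB = 3.7    # size of one FHD movie, per the press release

bandwidth_gbs = PIN_SPEED_GBPS * IO_COUNT / 8   # bits -> bytes
movies_per_second = bandwidth_gbs / MOVIE_SIZE_GB

print(f"Per-stack bandwidth: {bandwidth_gbs:.1f} GB/s")        # ~460.8 GB/s
print(f"FHD movies per second: {int(movies_per_second)}")      # 124
```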

HBM2E boasts high-speed, high-capacity, and low-power characteristics, making it an optimal memory solution for next-generation AI (Artificial Intelligence) systems such as deep-learning accelerators and high-performance computing, which all require high-level computing performance. Furthermore, it is expected to be applied to exascale supercomputers - high-performance computing systems capable of a quintillion calculations per second - that will lead research in next-generation basic and applied science, such as climate change, biomedicine, and space exploration.

Death Stranding with DLSS 2.0 Enables 4K-60 FPS on Any RTX 20-series GPU: Report

Ahead of its PC platform release on July 14, testing of a pre-release build by Tom's Hardware reveals that "Death Stranding" will offer 4K 60 frames per second on any NVIDIA RTX 20-series graphics card if DLSS 2.0 is enabled. NVIDIA's performance-enhancing feature renders the game at a resolution lower than that of the display head, and uses AI to reconstruct details. We've detailed DLSS 2.0 in an older article. The PC version has a frame-rate limit of 240 FPS, ultra-wide resolution support, and a photo mode (unsure if it's an Ansel implementation). It has rather relaxed recommended system requirements for 1080p 60 FPS gaming (sans DLSS).
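For context on what "renders at a lower resolution" means in practice, the sketch below computes illustrative internal render resolutions for AI upscaling to a 4K output. The per-axis scale factors are the commonly cited DLSS 2.0 presets, used here only as an assumption to show the idea; they are not taken from the article:

```python
# Illustrative internal render resolutions for DLSS-style upscaling to 4K.
# Per-axis scale factors are the commonly cited DLSS 2.0 presets (assumption).
TARGET = (3840, 2160)
PRESETS = {"Quality": 2 / 3, "Balanced": 0.58, "Performance": 0.5}

for name, scale in PRESETS.items():
    w, h = round(TARGET[0] * scale), round(TARGET[1] * scale)
    pixel_saving = 1 - scale ** 2   # fraction of pixels the GPU no longer shades
    print(f"{name:12s}: renders {w}x{h}, ~{pixel_saving:.0%} fewer pixels shaded")
```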

NVIDIA Unveils AI Platform to Minimize Downtime in Supercomputing Data Centers

NVIDIA today unveiled the NVIDIA Mellanox UFM Cyber-AI platform, which minimizes downtime in InfiniBand data centers by harnessing AI-powered analytics to detect security threats and operational issues, as well as predict network failures.

This extension of the UFM platform product portfolio — which has managed InfiniBand systems for nearly a decade — applies AI to learn a data center's operational cadence and network workload patterns, drawing on both real-time and historic telemetry and workload data. Against this baseline, it tracks the system's health and network modifications, and detects performance degradations, usage and profile changes.
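NVIDIA has not published the Cyber-AI internals, but the core idea of learning a baseline from telemetry and alerting on deviations can be illustrated with a minimal rolling z-score detector. Everything below (the window size, threshold, and the hypothetical congestion counter) is an assumption for illustration, not the UFM implementation:

```python
from collections import deque
import statistics

def detect_anomalies(samples, window=64, threshold=4.0):
    """Flag telemetry samples that deviate strongly from a rolling baseline.

    A toy stand-in for "learn the data center's normal cadence, then alert on
    deviations"; a real system models many correlated metrics, seasonality,
    and workload context rather than a single z-score.
    """
    history = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(samples):
        if len(history) >= window // 2:
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history) or 1e-9
            if abs(value - mean) / stdev > threshold:
                alerts.append((i, value))
        history.append(value)
    return alerts

# Hypothetical port-congestion counter: steady traffic, then a sudden spike.
telemetry = [100 + (i % 5) for i in range(200)] + [400, 410, 95, 100]
print(detect_anomalies(telemetry))  # flags the spike at indices 200 and 201
```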

Qualcomm Launches World's First 5G and AI-Enabled Robotics Platform

Qualcomm Technologies, Inc., today announced the Qualcomm Robotics RB5 platform - the Company's most advanced, integrated, comprehensive offering designed specifically for robotics. Building on the successful Qualcomm Robotics RB3 platform and its broad adoption in a wide array of robotics and drone products available today, the Qualcomm Robotics RB5 platform is comprised of an extensive set of hardware, software and development tools.

The Qualcomm Robotics RB5 platform is the first of its kind to bring together the Company's deep expertise in 5G and AI to empower developers and manufacturers to create the next generation of high-compute, low-power robots and drones for the consumer, enterprise, defense, industrial and professional service sectors - and the comprehensive Qualcomm Robotics RB5 Development Kit helps ensure developers have the customization and flexibility they need to make their visions a commercial reality. To date, Qualcomm Technologies has engaged many leading companies that have endorsed the Qualcomm Robotics RB5 platform, including 20+ early adopters in the process of evaluating the platform.
Qualcomm Robotics RB5 Platform

AAEON Releases Body Temperature Monitoring IoT Board

AAEON, a leading manufacturer of AI and IoT platforms, has worked with partners and developers to create solutions designed to address COVID-19 and other infectious diseases. While the current pandemic is winding down in some areas, many companies are rethinking what it means to do business in a world where awareness of how disease spreads is key. One key sector is banking, where many kinds of customers may visit on a daily basis. In Southeast Asia, AAEON is partnering with developers to deploy the BOXER-8170AI and the SCA-M01 IoT Node Board for body temperature and meeting room monitoring.

HTC Vive Launches VIVE XR Suite

HTC VIVE, a global leader in innovative technology, today officially announced that it will enter the cloud software business with the VIVE XR Suite at its hybrid event, "Journey into the Next Normal", which took place physically in Shanghai and online through the Engage virtual events platform. Comprising five separate applications covering remote collaboration, productivity, events, social and culture, the VIVE XR Suite gives users the tools they need to overcome the new challenges of working and living in a socially distant world. The VIVE XR Suite is targeted to launch in Q3 2020 in China, with additional regions to follow throughout the year.

The VIVE XR Suite comprises five major applications (VIVE Sync, VIVE Sessions, VIVE Campus, VIVE Social, and VIVE Museum) that address the daily needs of users around the world who are working, learning and living remotely. Although it is called an XR Suite, the software does not depend on VR/AR devices to function: all the applications run on existing PCs and laptops, and some even support modern smartphones, although PC VR or standalone VR devices are recommended for a fully immersive experience. Users will be able to log in to every app in the suite with a single account, across the various devices they own. The integrated application bundle, created in partnership with leading software companies in their respective areas, is meant to provide a seamless experience for both consumer and business users. The CEOs of all the software partners in the VIVE XR Suite (Immersive VR Education, VirBELA, VRChat, and Museum of Other Realities) attended the event live via video and within VR in avatar form.

ASUS Announces AI Noise-canceling Microphone Technology

New technology eliminates background noise for clear voice communication with ASUS AI Noise-Canceling Mic Adapter, ROG Strix Go, ROG Theta 7.1 headsets and more. ASUS today announced new AI Noise-Canceling Microphone (AI Mic) technology that intelligently eliminates unwanted background noise for clear voice communication for work or play. The new technology uses chipset-based machine learning to filter out and remove other human voices and ambient sounds like wind or traffic noise. This new technology is now available on the ASUS AI Noise Canceling Mic Adapter and the latest ROG headsets.

ASUS AI Noise-Canceling Mic Adapter is the world's first USB-C to 3.5 mm adapter with integrated AI Mic technology. It connects to any headset via a 3.5 mm audio jack to provide users with crystal-clear voice communication. The built‑in chipset handles all of the sound processing, so the adapter does not affect the performance of the mobile device, PC or laptop it is connected to. Weighing just 8 grams, the AI Noise-Canceling Mic Adapter includes exclusive ASUS Hyper-Grounding technology to prevent electromagnetic interference for noise-free audio. In select markets, it is available with a USB Type-C-to-Type‑A adapter.

AMD EPYC Processors Ecosystem Continues to Grow with Integration into New NVIDIA DGX A100

AMD today announced that the NVIDIA DGX A100, the third generation of the world's most advanced AI system, is the latest high-performance computing system featuring 2nd Gen AMD EPYC processors. Delivering 5 petaflops of AI performance, the elastic architecture of the NVIDIA DGX A100 enables enterprises to accelerate diverse AI workloads such as data analytics, training, and inference.

The NVIDIA DGX A100 leverages the 128 cores, DDR4-3200 memory, and PCIe 4.0 support of two AMD EPYC 7742 processors running at speeds of up to 3.4 GHz. The 2nd Gen AMD EPYC processor is the first and only current x86-architecture server processor to support PCIe 4.0, providing the leadership high-bandwidth I/O that is critical for high-performance computing and for connections between the CPU and other devices such as GPUs.

Aetina Launches New Edge AI Computer Powered by the NVIDIA Jetson

Aetina Corp., a provider of high-performance GPGPU solutions, announced the new AN110-XNX edge AI computer leveraging the powerful capabilities of the NVIDIA Jetson Xavier NX, expanding its range of edge AI systems built on the Jetson platform for applications in smart transportation, factories, retail, healthcare, AIoT, robotics, and more.

The AN110-XNX combines the NVIDIA Jetson Xavier NX and the Aetina AN110 carrier board in a compact form factor of 87.4 x 68.2 x 52 mm (with fan). The AN110-XNX supports the MIPI CSI-2 interface for 1x 4K or 2x FHD cameras to handle intensive AI workloads from ultra-high-resolution cameras for more accurate image analysis. It is as small as Aetina's AN110-NAO based on the NVIDIA Jetson Nano platform, but delivers more powerful AI computing via the new Jetson Xavier NX. With 384 CUDA cores, 48 Tensor Cores, and cloud-native capability, the Jetson Xavier NX delivers up to 21 TOPS and is an ideal platform to accelerate AI applications. Bundled with the latest NVIDIA JetPack 4.4 SDK, the energy-efficient module significantly expands the choices available to developers and customers looking for embedded edge-computing options that demand increased performance for AI workloads but are constrained by size, weight, power budget, or cost.

LG's 48-inch OLED Gaming TV with G-SYNC Goes on Sale This Month

LG is preparing to launch the latest addition to its gaming lineup of panels, and this time it goes big. Launching this month is LG's 48-inch OLED gaming TV with a 120 Hz refresh rate and G-SYNC support. To round out the impressive feature set, LG has priced the panel at $1,499 - pricey, but a tempting buy. Featuring 1 ms response time and low input lag, the 48CX TV is designed for gaming and fits NVIDIA's Big Format Gaming Display (BFGD) philosophy. Interestingly, the TV uses LG's a9 Gen3 AI processor, which upscales content so everything looks nice and crisp. AI is used to "authentically upscale lower resolution content, translating the source to 4K's 8.3+ million pixels. The technology is so good, you might mistake non-4K for true 4K."

XRSpace Launches 5G-connected VR headset

XRSPACE, the company pioneering the next generation of social reality, has today announced the launch of the world's first social VR platform designed for mass market users, combined with the first 5G mobile VR headset, delivering on the promise of a true social VR experience for all.

XRSPACE aims to create the social reality of the future - a world where people can interact both physically and virtually in a way that is contextual, familiar, immersive, interactive and personal. At a time when face-to-face interaction is restricted due to social distancing measures, XRSPACE is aiming to bring people together in a virtual world that is powered by cutting edge XR, AI, and computer vision technology, creating a new experience through 5G which is meaningful, anytime, anywhere.

NVIDIA GameGAN AI Recreates PAC-MAN From Gameplay Footage

Trained on 50,000 episodes of the game, a powerful new AI model created by NVIDIA Research, called NVIDIA GameGAN, can generate a fully functional version of PAC-MAN — without an underlying game engine. That means that even without understanding a game's fundamental rules, AI can recreate the game with convincing results.

GameGAN is the first neural network model that mimics a computer game engine by harnessing generative adversarial networks, or GANs. Made up of two competing neural networks, a generator and a discriminator, GAN-based models learn to create new content that's convincing enough to pass for the original.
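GameGAN itself is far more elaborate (it also conditions on controller input and past frames through a memory module), but the basic generator-versus-discriminator setup it builds on can be sketched in a few lines. The network sizes, data, and hyperparameters below are a generic PyTorch toy example, not NVIDIA's model:

```python
import torch
import torch.nn as nn

# Toy GAN: a generic generator/discriminator pair trained adversarially.
latent_dim, data_dim = 16, 64

G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_data = torch.randn(512, data_dim) * 0.5 + 2.0   # stand-in for "real" samples

for step in range(1000):
    real = real_data[torch.randint(0, 512, (32,))]
    fake = G(torch.randn(32, latent_dim))

    # Discriminator: push real samples toward 1, generated samples toward 0.
    loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: try to fool the discriminator into scoring fakes as real.
    loss_g = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```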

MSI Releases "Sound Tune" AI-Powered Noise Cancellation Software

MSI, a world leader in gaming and content creation hardware, unveils MSI Sound Tune, AI-powered noise-cancellation software that filters out unwanted background noise to remarkably enhance users' communication experience. "MSI is constantly pushing the boundaries of technology. The new MSI Sound Tune breaks the traditional boundaries of noise reduction by integrating deep learning AI technology into audio signal processing, allowing users who have to work from home amid the global pandemic to communicate clearly and stay productive with their tasks," said Harry Kao, MSI Technical Director.

MSI Sound Tune's AI-powered noise cancellation is achieved through a Deep Neural Network (DNN) trained on more than 0.5 billion synthetic noisy-speech samples, helping it mimic the way the human brain separates sounds. Traditional noise-reduction technology, with its manually tuned algorithms, does not work well in environments with complicated noise sources such as keyboard clatter, construction-site noise, coffee-shop chatter, or loud noise that overwhelms the voice. By comparison, MSI Sound Tune, emulating the human brain's excellent ability to distinguish voices from environmental noise, can extract the vocal sounds from complicated environments, giving users extra-clear, noise-free communication in a variety of surroundings.
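MSI has not published Sound Tune's architecture; a common way such DNN suppressors work is to predict a per-frequency mask over a noisy spectrum and keep only the speech-dominated bins. The sketch below is a generic illustration of that idea, where the model size and frame length are assumptions and the network is untrained:

```python
import torch
import torch.nn as nn

FFT_BINS = 257  # magnitude bins for a 512-sample frame (assumed frame size)

# Tiny mask-estimation network: noisy magnitude spectrum in, per-bin mask in
# [0, 1] out. Real products use far larger recurrent/convolutional models
# trained on huge synthetic noisy-speech corpora.
mask_net = nn.Sequential(
    nn.Linear(FFT_BINS, 256), nn.ReLU(),
    nn.Linear(256, FFT_BINS), nn.Sigmoid(),
)

def denoise_frame(noisy_frame: torch.Tensor) -> torch.Tensor:
    """Suppress noise in one audio frame via a learned spectral mask."""
    spectrum = torch.fft.rfft(noisy_frame)   # complex spectrum of the frame
    mask = mask_net(spectrum.abs())          # 1 = keep (speech), 0 = drop (noise)
    cleaned = spectrum * mask                # attenuate noise-dominated bins
    return torch.fft.irfft(cleaned, n=noisy_frame.shape[-1])

# Usage on a dummy 512-sample frame.
print(denoise_frame(torch.randn(512)).shape)  # torch.Size([512])
```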

Microsoft Builds a Supercomputer for AI

Microsoft held its Build 2020 conference for developers from all over the world and live-streamed it online. Among the announcements, Microsoft revealed a new supercomputer built for OpenAI, the company working on Artificial General Intelligence (AGI). The new supercomputer is part of Microsoft's Azure cloud infrastructure and will allow OpenAI developers to train very large-scale machine-learning models in the cloud. The supercomputer is said to be the fifth most powerful in the world, with "more than 285,000 CPU cores, 10,000 GPUs and 400 gigabits per second of network connectivity for each GPU server."

Specific details weren't announced, and we don't know which CPUs and GPUs go into this machine, but we can speculate that the latest NVIDIA A100 "Ampere" GPU could be used. The company hasn't yet submitted an entry to the Top500 list, so we can't check the FLOPS count to see exactly how much power it holds.
Microsoft Azure Data Center

Dell Announces New Generation Latitude, OptiPlex, and Precision Commercial Notebooks, Desktops, and Services

Dell Technologies unveiled the world's most intelligent and secure business PCs across its award-winning Latitude, Precision and OptiPlex portfolios to make work more efficient and safe - no matter the location. As the industry's most sustainable commercial PC portfolio, the new devices further advance Dell's commitment to sustainability with recycled materials, sustainable packaging, energy efficient designs and EPEAT Gold registrations.

Professionals can work smarter with Dell Optimizer, the automated Artificial Intelligence (AI)-based optimization technology, now available across Latitude, Precision and OptiPlex devices. The built-in software learns how each person works and adapts to their behavior to help them focus on the tasks that matter most. It works behind the scenes to improve overall application performance; enable faster log-in and secure lock outs; eliminate echoes and reduce background noise on conference calls; and extend battery run time.

Hot Chips 2020 Program Announced

Today the Hot Chips program committee officially announced the August conference line-up, posted to hotchips.org. For this first-ever live-streamed Hot Chips Symposium, the program is better than ever!

In a session on deep learning training for data centers, we have a mix of talks from the internet giant Google showcasing their TPUv2 and TPUv3, a talk from startup Cerebras on their 2nd gen wafer-scale AI solution, as well as ETH Zurich's 4096-core RISC-V based AI chip. And in deep learning inference, we have talks from several of China's biggest AI infrastructure companies: Baidu, Alibaba, and SenseTime. We also have some new startups that will showcase their interesting solutions—Lightmatter talking about its optical computing solution, and Tenstorrent giving a first look at its new architecture for AI.
Hot Chips

MediaTek Unveils 5G-Integrated Dimensity 1000+ Chip

MediaTek today announced enhancements to its Dimensity 5G chipset family with the Dimensity 1000+, an enhanced 5G-integrated chip with a number of leading technologies and upgrades for gaming, video and power-efficiency.

Dimensity 1000+ is based on the flagship performance of the Dimensity 1000 series and delivers a high-end, premium user experience. With years of accumulated experience in integrated chip technologies, MediaTek has made breakthroughs and innovations in all aspects and has become a pioneer in the 5G era.
MediaTek Dimensity 1000 SoC

ASUS Announces Tinker Edge R with AI Machine-Learning Capabilities

ASUS today announced Tinker Edge R, a single-board computer (SBC) specially designed for AI applications. It uses the Rockchip RK3399Pro and its integrated NPU, a machine-learning (ML) accelerator that speeds up processing efficiency, lowers power demands and makes it easier to build connected devices and intelligent applications.

With this integrated ML accelerator, Tinker Edge R can perform three tera-operations per second (3 TOPS) while maintaining low power consumption. It also features an optimized neural-network (NN) architecture, which means Tinker Edge R can support multiple ML frameworks and lets many common ML models be compiled and run easily, as sketched below.
ASUS Tinker Edge R
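To give a feel for what "compile and run" a model looks like on an edge board, here is a generic TensorFlow Lite inference flow running on the CPU; the board's own NPU path goes through Rockchip's RKNN toolchain instead, which is not shown. The model path and the 224x224 input shape are placeholders:

```python
import numpy as np
import tensorflow as tf  # TensorFlow Lite ships inside the tensorflow package

# Generic edge-inference flow: load a pre-converted .tflite model and run one
# frame through it. "model.tflite" is a placeholder file name.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

frame = np.random.rand(1, 224, 224, 3).astype(np.float32)  # stand-in camera frame
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]).shape)  # e.g. (1, num_classes)
```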

GeForce NOW Gains NVIDIA DLSS 2.0 Support In Latest Update

NVIDIA's game streaming service GeForce NOW has gained support for NVIDIA Deep Learning Super Sampling (DLSS) 2.0 in the latest update. DLSS 2.0 uses the tensor cores found in RTX-series graphics cards to render games at a lower resolution and then uses AI to construct sharp, higher-resolution images. The introduction of DLSS 2.0 to GeForce NOW should allow graphics quality to be improved on existing server hardware and deliver a smoother, stutter-free gaming experience. NVIDIA announced that Control would be the first game on the platform to support DLSS 2.0, with additional games such as MechWarrior 5: Mercenaries and Deliver Us The Moon to receive the feature in the future.

NVIDIA Completes Acquisition of Mellanox

NVIDIA today announced the completion of its acquisition of Mellanox Technologies, Ltd., for a transaction value of $7 billion. The acquisition, initially announced on March 11, 2019, unites two of the world's leading companies in high performance and data center computing. Combining NVIDIA's leading computing expertise with Mellanox's high-performance networking technology, the move will enable customers to achieve higher performance, greater utilization of computing resources and lower operating costs.

"The expanding use of AI and data science is reshaping computing and data center architectures," said Jensen Huang, founder and CEO of NVIDIA. "With Mellanox, the new NVIDIA has end-to-end technologies from AI computing to networking, full-stack offerings from processors to software, and significant scale to advance next-generation data centers. Our combined expertise, supported by a rich ecosystem of partners, will meet the challenge of surging global demand for consumer internet services, and the application of AI and accelerated data science from cloud to edge to robotics."
NVIDIA finishes acquiring Mellanox

NVIDIA Unveils RTX Voice, AI-based Audio Noise-Cancellation Software

Perhaps the biggest gripe about attending office calls and meetings from home these days is the background noise - everyone's home. NVIDIA has developed an interesting new piece of free software, called RTX Voice, that helps desktop users cut background noise out of audio; it has been released to the web as a beta. The app uses AI to filter out background noise not just at your end, but also from the audio of others in your meeting as you receive it (they don't need the app running on their end). The app leverages tensor cores and requires an NVIDIA GeForce RTX 20-series GPU, Windows 10, and GeForce drivers R410 or later. RTX Voice runs in conjunction with your meeting software; supported applications include Cisco Webex, Zoom, Skype, Twitch, XSplit, OBS, Discord, and Slack. For more information and FAQs, visit the download link.

DOWNLOAD: NVIDIA RTX Voice beta
NVIDIA RTX Voice

Xilinx Announces World's Highest Bandwidth, Highest Compute Density Adaptable Platform for Network and Cloud Acceleration

Xilinx, Inc. today announced Versal Premium, the third series in the Versal ACAP portfolio. The Versal Premium series features highly integrated, networked and power-optimized cores and the industry's highest bandwidth and compute density on an adaptable platform. Versal Premium is designed for the highest bandwidth networks operating in thermally and spatially constrained environments, as well as for cloud providers who need scalable, adaptable application acceleration.

Versal is the industry's first adaptive compute acceleration platform (ACAP), a revolutionary new category of heterogeneous compute devices with capabilities that far exceed those of conventional silicon architectures. Developed on TSMC's 7-nanometer process technology, Versal Premium combines software programmability with dynamically configurable hardware acceleration and pre-engineered connectivity and security features to enable a faster time-to-market. The Versal Premium series delivers up to 3X higher throughput compared to current generation FPGAs, with built-in Ethernet, Interlaken, and cryptographic engines that enable fast and secure networks. The series doubles the compute density of currently deployed mainstream FPGAs and provides the adaptability to keep pace with increasingly diverse and evolving cloud and networking workloads.
Xilinx Versal ACAP FPGA

UNISOC Launches Next-Gen 5G SoC T7520 on 6 nm EUV Manufacturing Node

UNISOC, a leading global supplier of mobile communication and IoT chipsets, today officially launched its new-generation 5G SoC mobile platform - T7520. Using cutting-edge process technology, T7520 enables an optimized 5G experience with substantially enhanced AI computing and multimedia imaging processing capabilities while lowering power consumption.

T7520 is UNISOC's second-generation 5G smartphone platform. Built on a 6 nm EUV process technology and empowered by some of the latest design techniques, it offers substantially enhanced performance at a lower level of power consumption than ever.