News Posts matching #AI


AMD 7nm EPYC "Rome" CPUs in Upcoming Finnish Supercomputer, 200,000 Cores Total

During the next year and a half, the Finnish IT Center for Science (CSC) will be purchasing a new supercomputer in two phases. The first phase consists of Atos' air-cooled BullSequana X400 cluster, which makes use of Intel's Cascade Lake Xeon processors along with Mellanox HDR InfiniBand for a theoretical performance of 2 petaflops. System memory per node will range from 96 GB up to 1.5 TB, and the entire system will be backed by a 4.9 PB Lustre parallel file system supplied by DDN. Furthermore, a separate partition of phase one will be dedicated to AI research and will feature 320 NVLink-connected NVIDIA V100 GPUs configured in 4-GPU nodes, with peak performance expected to reach 2.5 petaflops. Phase one will be brought online at some point in the summer of 2019.

Where things get interesting is in phase two, which is set for completion during the spring of 2020. Atos will be building CSC a liquid-cooled, HDR InfiniBand-connected BullSequana XH2000 supercomputer configured with 200,000 AMD EPYC "Rome" CPU cores, which for the mathematicians out there works out to 3,125 64-core AMD EPYC processors. Of course, all that x86 muscle will require a great deal of system memory; as such, each node will be equipped with 256 GB for good measure. Storage will consist of an 8 PB Lustre parallel file system, again provided by DDN. Overall, phase two will increase computing capacity by 6.4 petaflops (peak). With deals like this already being signed, it would appear AMD's next-generation EPYC processors are shaping up nicely, considering Intel had this market cornered for nearly a decade.
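The back-of-the-envelope math checks out; as a quick sketch (the per-core figure below is derived from the announced numbers, not quoted in the announcement):

```python
# Sanity-check the phase-two figures from the announcement.
total_cores = 200_000
cores_per_cpu = 64          # 64-core EPYC "Rome" parts

cpus = total_cores // cores_per_cpu
print(cpus)  # 3125 processors, as stated

# Peak capacity added by phase two, spread across those cores.
peak_flops = 6.4e15         # 6.4 petaflops
gflops_per_core = peak_flops / total_cores / 1e9
print(gflops_per_core)  # 32.0 GFLOPS per core (derived, not quoted)
```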

Intel Unveils a Clean-slate CPU Core Architecture Codenamed "Sunny Cove"

Intel today unveiled its first clean-slate CPU core micro-architecture since "Nehalem," codenamed "Sunny Cove." Over the past decade, the nine-odd generations of Core processors were based on incrementally refined descendants of "Nehalem," running all the way through "Coffee Lake." Intel now wants a clean-slate core design, much like AMD's "Zen" is a clean slate compared to "Stars" or, to a large extent, even "Bulldozer." This allows Intel to introduce significant gains in IPC (single-thread performance) over the current generation. Intel's IPC growth curve over the past three micro-architectures has remained flat, and only grew by single-digit percentages over the generations prior.

It's important to note here that "Sunny Cove" is the codename for the core design alone. Intel's earlier codenames were all-encompassing, covering not just the cores but also the uncore and entire dies. It's up to Intel's future chip designers to build dies combining many of these cores, a future-generation iGPU such as Gen11, and a next-generation uncore that probably integrates PCIe gen 4.0 and DDR5 memory. Intel's disclosure of "Sunny Cove" covers IPC gains, a new ISA (new instruction sets and hardware capabilities, including AVX-512), and improved scalability (the ability to increase core counts without running into latency problems).

Intel Unveils the Neural Compute Stick 2

Intel is hosting its first artificial intelligence (AI) developer conference in Beijing on Nov. 14 and 15. The company kicked off the event with the introduction of the Intel Neural Compute Stick 2 (Intel NCS 2), designed for building smarter AI algorithms and prototyping computer vision applications at the network edge. Based on the Intel Movidius Myriad X vision processing unit (VPU) and supported by the Intel Distribution of OpenVINO toolkit, the Intel NCS 2 affordably speeds the development of deep neural network inference applications while delivering a performance boost over the previous-generation neural compute stick. The Intel NCS 2 enables deep neural network testing, tuning and prototyping, so developers can go from prototyping into production, leveraging a range of Intel vision accelerator form factors in real-world applications.

"The first-generation Intel Neural Compute Stick sparked an entire community of AI developers into action with a form factor and price that didn't exist before. We're excited to see what the community creates next with the strong enhancement to compute power enabled with the new Intel Neural Compute Stick 2," said Naveen Rao, Intel corporate vice president and general manager of the AI Products Group.

Samsung Launches First Mobile SoC with AI-Accelerating Matrix Multiplication Cores

Samsung Electronics Co., Ltd., a world leader in advanced semiconductor technology, today announced its latest premium application processor (AP), the Exynos 9 Series 9820, equipped for on-device Artificial Intelligence (AI) applications. The Exynos 9820 features a fourth-generation custom CPU, 2.0-gigabits-per-second (Gbps) LTE Advanced Pro modem, and an enhanced neural processing unit (NPU) to bring new smart experiences to mobile devices.

"As AI-related services expand and their utilization diversifies in mobile devices, their processors require higher computational capabilities and efficiency," said Ben Hur, vice president of System LSI marketing at Samsung Electronics. "The AI capabilities in the Exynos 9 Series 9820 will provide a new dimension of performance in smart devices through an integrated NPU, high-performance fourth-generation custom CPU core, 2.0Gbps LTE modem and improved multimedia performance."

VIA Partners with Lucid to Develop Industry-Leading VIA Edge AI 3D Developer Kit

VIA Technologies, Inc. today announced that it is partnering with AI vision startup Lucid to deliver AI-based depth-sensing capabilities to more dual- and multi-camera devices in the security, retail, robotics and autonomous vehicle spaces. With Lucid's proprietary 3D Fusion Technology embedded into the VIA Edge AI 3D Developer Kit, security and retail cameras, robots, drones, and autonomous vehicles will now be able to easily capture accurate depth and 3D data with dual- or multi-camera setups while reducing the cost, power, and space requirements of previous hardware depth solutions. As VIA builds out its long-term Edge AI solutions roadmap, Lucid is adding camera- and machine learning-based depth capabilities on top of every platform.

The AI-enhanced 3D/depth solution developed by Lucid, known as 3D Fusion Technology, is currently deployed in many devices such as 3D cameras, security cameras, robots, and mobile phones, including the RED Hydrogen One, which launches in November without any additional emitter- or laser-based hardware components. In the VIA Edge AI 3D Developer Kit, the AI depth solution runs on the Qualcomm APQ8096SG embedded processor, which features the Qualcomm AI Engine along with support for multiple cameras, helping Lucid deliver superior performance compared to other hardware depth solutions with an industry-leading, pure machine learning-based software solution.

Intel Drafts Model Legislation to Spur Data Privacy Discussion

Intel Corporation released model legislation designed to inform policymakers and spur discussion on personal data privacy. Prompted by the rapid rise of new technologies like artificial intelligence (AI), Intel's model bill is open for review and comment from privacy experts and the public on an interactive website. The bill's language and comments received should provide useful insight for those interested in meaningful data privacy legislation.

"The collection of personal information is a growing concern. The US needs a privacy law that both protects consumer privacy and creates a framework in which important new industries can prosper. Our model bill is designed to spur discussion that helps inspire meaningful privacy legislation," said David Hoffman, Intel associate general counsel and global privacy officer.

Data are the lifeblood for many critical new industries, including precision medicine, automated driving, workplace safety, smart cities and others. But the growing amount of personal data collected, sometimes without consumers' awareness, raises serious privacy concerns.

Chinese State News Agency Debuts AI-powered Anchor for 24/7 Automated News Coverage

So, this doesn't really concern hardware, but alas, all advances - and particularly AI-related ones - are powered by the little silicon chips that could. This time, in a move that points squarely towards the future of news coverage, Xinhua, China's state-run news agency, unveiled the "world's first AI news anchor," created in collaboration with local search engine company Sogou. There are actually two independent versions of the same anchor - one for news coverage in English, and another for Mandarin.

The AI-infused anchors fuse the image and voice profiles of actual human anchors with artificial intelligence (AI) technology, which powers their speech, lip movements, and facial expressions, alongside reading, absorbing, and curating content that's then posted as video snippets generated by the AI. There is some work to be done until the result is actually indistinguishable from that of actual humans - but do we ever want AI renditions that are indistinguishable from humans? There are a number of problems that could arise from such an achievement, after all. But maybe that's a conversation for another day.

NVIDIA Introduces RAPIDS Open-Source GPU-Acceleration Platform

NVIDIA today announced a GPU-acceleration platform for data science and machine learning, with broad adoption from industry leaders, that enables even the largest companies to analyze massive amounts of data and make accurate business predictions at unprecedented speed.

RAPIDS open-source software gives data scientists a giant performance boost as they address highly complex business challenges, such as predicting credit card fraud, forecasting retail inventory and understanding customer buying behavior. Reflecting the growing consensus about the GPU's importance in data analytics, an array of companies is supporting RAPIDS - from pioneers in the open-source community, such as Databricks and Anaconda, to tech leaders like Hewlett Packard Enterprise, IBM and Oracle.

NVIDIA Announces New GeForce Experience Features Ahead of RTX Push

NVIDIA today announced new GeForce Experience features to be integrated and expanded in the wake of its RTX platform push. The new features include an increased number of Ansel-supporting titles (including the already-released Prey and Vampyr, as well as the upcoming Metro Exodus and Shadow of the Tomb Raider), plus RTX-exclusive features being implemented into the company's gaming system companion.

There are also some features being implemented that gamers will be able to take advantage of without explicit Ansel SDK integration by the game developer - which NVIDIA says will bring Ansel support (in some shape or form) to over 200 titles (150 more than the over 50 titles already supported via the SDK). And capitalizing on Battlefield V's relevance to the gaming crowd, NVIDIA also announced support for Ansel and its Highlights feature in the upcoming title.

Intel and Philips Accelerate Deep Learning Inference on CPUs in Medical Imaging

Using Intel Xeon Scalable processors and the OpenVINO toolkit, Intel and Philips tested two healthcare use cases for deep learning inference models: one on X-rays of bones for bone-age-prediction modeling, the other on CT scans of lungs for lung segmentation. In these tests, Intel and Philips achieved a speed improvement of 188 times for the bone-age-prediction model, and a 38 times speed improvement for the lung-segmentation model over the baseline measurements.

"Intel Xeon Scalable processors appear to be the right solution for this type of AI workload. Our customers can use their existing hardware to its maximum potential, while still aiming to achieve quality output resolution at exceptional speeds," said Vijayananda J., chief architect and fellow, Data Science and AI at Philips HealthSuite Insights.

NVIDIA Announces Turing-based Quadro RTX 8000, Quadro RTX 6000 and Quadro RTX 5000

NVIDIA today reinvented computer graphics with the launch of the NVIDIA Turing GPU architecture. The greatest leap since the invention of the CUDA GPU in 2006, Turing features new RT Cores to accelerate ray tracing and new Tensor Cores for AI inferencing which, together for the first time, make real-time ray tracing possible.

These two engines - along with more powerful compute for simulation and enhanced rasterization - usher in a new generation of hybrid rendering to address the $250 billion visual effects industry. Hybrid rendering enables cinematic-quality interactive experiences, amazing new effects powered by neural networks and fluid interactivity on highly complex models.

Five Years Too Late, Typo Fix Offers Improved AI in Aliens: Colonial Marines

It has been a long five years since Aliens: Colonial Marines launched as a hot mess, critically panned by gamers and critics alike. One of the reasons behind the negative reception was the game's poor AI: the Xenomorphs had a tendency to run straight into gunfire, or worse yet, would stand around or group up, making them easy targets. Suffice it to say, the Xenomorphs were far from scary. A typographical error has now been discovered as the reason behind some of those issues.

As noted on the ResetEra forums, a post by jamesdickinson963 on the ACM Overhaul ModDB page traced the problem to a spelling error in a single line of the game's ini file, which has "teather" instead of the proper "tether". This simple mistake, in theory, results in the "zone tether" failing to load the AI parameters attached to the broken bit of code.
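As reported on the ModDB page, the offending line lives in the game's PecanEngine.ini; the fix reads roughly as follows (the class names are as quoted in those reports, so treat the exact spelling as approximate):

```ini
; Broken: the "Teather" misspelling means the remapped class is never found,
; so the zone-tether AI parameters silently fail to load.
ClassRemapping=PecanGame.PecanSeqAct_AttachXenoToTether -> PecanGame.PecanSeqAct_AttachPawnToTeather

; Fixed: correct the spelling to "Tether".
ClassRemapping=PecanGame.PecanSeqAct_AttachXenoToTether -> PecanGame.PecanSeqAct_AttachPawnToTether
```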

Let's Go Driverless: Daimler, Bosch Select NVIDIA DRIVE for Robotaxi Fleets

(Editor's Note: NVIDIA continues to spread its wings in the AI and automotive markets, where it has rapidly become the de facto player. While the company's gaming products have certainly been the ones to project its image - and the profits - that allowed it to become one of the world's leading tech companies, it's hard to deny that AI and datacenter accelerators have become one of the chief profit drivers for the company. The company's vision for Level 4 and Level 5 autonomous driving and the future of our connected cities is an inspiring one, straight out of yesterday's science fiction. Here's hoping the human mind, laws and city design efforts accompany these huge technological leaps - or at least don't strangle them too much.)

Press a button on your smartphone and go. Daimler, Bosch and NVIDIA have joined forces to bring fully automated and driverless vehicles to city streets, and the effects will be felt far beyond the way we drive. While the world's billion cars travel 10 trillion miles per year, most of the time these vehicles are sitting idle, taking up valuable real estate while parked. And when driven, they are often stuck on congested roadways. Mobility services will solve these issues plaguing urban areas, capture underutilized capacity and revolutionize the way we travel.

Samsung Foundry and Arm Expand Collaboration to Drive High-Performance Computing Solutions

Samsung Electronics, a world leader in advanced semiconductor technology, today announced that its strategic foundry collaboration with Arm will be expanded to 7/5-nanometer (nm) FinFET process technology to remain a step ahead in the era of high-performance computing. Based on Samsung Foundry's 7LPP (7nm Low Power Plus) and 5LPE (5nm Low Power Early) process technologies, the Arm Artisan physical IP platform will enable 3GHz+ computing performance for Arm's Cortex-A76 processor.

Samsung's 7LPP process technology, the company's first to use extreme ultraviolet (EUV) lithography, will be ready for initial production in the second half of 2018, with its key IPs in development and expected to be completed by the first half of 2019. Samsung's 5LPE technology will allow greater area scaling and ultra-low-power benefits by building on the innovations of the 7LPP process technology.

Baidu Unveils 'Kunlun' High-Performance AI Chip

Baidu Inc. today announced Kunlun, China's first cloud-to-edge AI chip, built to accommodate the high performance requirements of a wide variety of AI scenarios. The announcement includes the training chip "818-300" and the inference chip "818-100". Kunlun can be applied to both cloud and edge scenarios, such as data centers, public clouds and autonomous vehicles.

Kunlun is a high-performance and cost-effective solution for the high processing demands of AI. It leverages Baidu's AI ecosystem, which includes AI scenarios like search ranking and deep learning frameworks like PaddlePaddle. Baidu's years of experience in optimizing the performance of these AI services and frameworks afforded the company the expertise required to build a world class AI chip.

NVIDIA Joins S&P 100 Stock Market Index

With tomorrow's opening bell, NVIDIA will join the Standard & Poor's S&P 100 index, replacing Time Warner, whose spot was freed up by its merger with AT&T. This marks a monumental moment for the company, as membership in the S&P 100 is reserved for only the largest and most important corporations in the US. From the tech sector, the list comprises illustrious names such as Apple, Amazon, Facebook, Alphabet (Google), IBM, Intel, Microsoft, Netflix, Oracle, PayPal, Qualcomm and Texas Instruments.

NVIDIA's stock has seen massive gains over the last few years, thanks to the company delivering record quarter after record quarter. Recent developments have transformed it from a mostly gaming-GPU manufacturer into a leader in the fields of GPU compute, AI and machine learning. This, of course, inspires investors, so NVIDIA stock has been highly sought after, now sitting above 265 USD, which puts the company's market capitalization at over 160 billion USD. Congratulations!

ASUS Introduces Full Lineup of PCI-E Servers Powered by NVIDIA Tesla GPUs

ASUS, the leading IT company in server systems, server motherboards, workstations and workstation motherboards, today announced support for the latest NVIDIA AI solutions with NVIDIA Tesla V100 Tensor Core 32GB GPUs and the Tesla P4 on its accelerated computing servers.

Artificial intelligence (AI) is translating data into meaningful insights, services and scientific breakthroughs. The size of the neural networks powering this AI revolution has grown tremendously. For instance, today's state-of-the-art neural network model for language translation, Google's MoE model, has 8 billion parameters, compared to the 100 million parameters of models from just two years ago.

To handle these massive models, NVIDIA Tesla V100 offers a 32GB memory configuration, which is double that of the previous generation. Providing 2X the memory improves deep learning training performance for next-generation AI models by up to 50 percent and improves developer productivity, allowing researchers to deliver more AI breakthroughs in less time. Increased memory allows HPC applications to run larger simulations more efficiently than ever before.

NGD Systems Delivers Industry-First 16TB NVMe Computational U.2 SSD

NGD Systems, Inc., the leader in computational storage, today announced the general availability (GA) of the 16-terabyte (TB) Catalina-2 U.2 NVMe solid state drive (SSD). The Catalina-2 is the first 16TB NVMe SSD that also offers NGD's powerful "In-Situ Processing" capabilities, and it does so without impacting the reliability, quality of service (QoS) or power consumption already delivered by current shipping NGD products.

The use of Arm multi-core processors in Catalina-2 provides users with a well-understood development environment and the combination of exceptional performance with low power consumption. The Arm-based In-Situ Processing platform allows NGD Systems to pack both high capacity and computational ability into the first 16TB 2.5-inch form factor package on the market. The NGD Catalina-2 U.2 NVMe SSD only consumes 12W (0.75W/TB) of power, compared to the 25W or more used by other NVMe solutions. This provides the highest energy efficiency in the industry.
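The quoted efficiency figure is simple division of drive power by capacity; a quick sketch (the 1.56 W/TB comparison is derived from the "25W or more" claim, not stated in the announcement):

```python
# Power-efficiency arithmetic behind the quoted figures.
capacity_tb = 16
catalina2_watts = 12
typical_nvme_watts = 25  # "25W or more" per the announcement

catalina2_w_per_tb = catalina2_watts / capacity_tb
typical_w_per_tb = typical_nvme_watts / capacity_tb

print(catalina2_w_per_tb)  # 0.75 W/TB, matching the quoted figure
print(typical_w_per_tb)    # 1.5625 W/TB for a 25W drive of equal capacity
```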

Intel's Mobileye Secures a Future-Focused Deal for 8 Million Self-Driving Systems in 2021

Intel's Mobileye, the AI and self-driving outfit the blue giant acquired last year for a cool $15.3 billion, has just announced, via an exclusive report to Reuters, that it has secured a contract to provide some 8 million self-driving systems to a European automaker. The deal is a future-focused one: by 2021, it will see distribution of Intel's EyeQ5 chip, which is designed for fully autonomous driving - an upgrade to the EyeQ4 that will be rolled out in the coming weeks, Reuters reports, according to Erez Dagan, senior vice president for advanced development and strategy at Mobileye.

Amnon Shashua, Mobileye's chief executive, said that "By the end of 2019, we expect over 100,000 Level 3 cars [where the car is self-driving but still allows for user intervention should the system be unable to progress for more than 10 seconds] with Mobileye installed." This deal is sure to make Intel even more of a player in the automotive space, where NVIDIA and a number of other high-profile companies have been making strides in recent years.

Probabilistic Computing Takes Artificial Intelligence to the Next Step

The potential impact of Artificial Intelligence (AI) has never been greater - but we'll only be successful if AI can deliver smarter and more intuitive answers. A key barrier to AI today is that natural data fed to a computer is largely unstructured and "noisy."

It's easy for humans to sort through natural data. For example: If you are driving a car on a residential street and see a ball roll in front of you, you would stop, assuming there is a small child not far behind that ball. Computers today don't do this. They are built to assist humans with precise productivity tasks. Making computers efficient at dealing with probabilities at scale is central to our ability to transform current systems and applications from advanced computational aids into intelligent partners for understanding and decision-making.
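As a toy sketch of what "dealing with probabilities" means here, Bayes' rule lets a system weigh noisy evidence against prior expectations - all numbers below are invented for illustration, not from Intel:

```python
# Toy probabilistic reasoning over noisy input (illustrative numbers only).
# Hypothesis H: a child is about to follow the ball into the street.
# Evidence E: a ball rolled in front of the car.

p_h = 0.3               # assumed prior P(H): chance a child is nearby
p_e_given_h = 0.8       # P(E | H): a nearby child often means a loose ball
p_e_given_not_h = 0.05  # P(E | not H): balls rarely appear otherwise

# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e

print(f"P(child | ball) = {p_h_given_e:.3f}")  # 0.873
```

The weak evidence of a rolling ball sharply raises the assumed 30% prior to roughly 87%, which is why the cautious driver brakes - and why machines need to compute with probabilities rather than certainties.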

A Very Real Intelligence Race: The White House Hosts 38 Tech Companies on AI

The White House today is hosting executives from 38 companies for a grueling day of navigating the as-yet murky waters of AI development. The meeting, which includes representatives from Microsoft, Intel, Google, Amazon, Pfizer, and Ford, among others, aims to gather thoughts and ideas on how to supercharge AI development in a sustainable, safe, and cost-effective way.

Fields such as agriculture, healthcare and transportation are being spearheaded as areas of interest (military applications, obviously, are being discussed elsewhere). The Washington Post quotes Michael Kratsios, deputy chief technology officer at the White House, as saying in a recent interview that "Whether you're a farmer in Iowa, an energy producer in Texas, a drug manufacturer in Boston, you are going to be using these techniques to drive your business going forward."

VIA Launches VIA Edge AI Developer Kit

VIA Technologies, Inc., today announced the launch of the VIA Edge AI Developer Kit, a highly-integrated package powered by the Qualcomm Snapdragon 820E Embedded Platform that simplifies the design, testing, and deployment of intelligent Edge AI systems and devices.

The kit combines the VIA SOM-9X20 SOM Module and SOMDB2 Carrier Board with a 13MP camera module that is optimized for intelligent real-time video capture, processing, and edge analysis. Edge AI application development is enabled by an Android 8.0 BSP, which includes support for the Snapdragon Neural Processing Engine (NPE) and full acceleration of the Qualcomm Hexagon DSP, Qualcomm Adreno 530 GPU, or Qualcomm Kryo CPU to power AI applications. A Linux BSP based on Yocto 2.0.3 is set to be released in June this year.

More Humane AI: Microsoft Launches "AI for Accessibility" Initiative

Microsoft at its Build conference today announced one of the better use cases for AI yet: empowering those with disabilities. Dubbed the AI for Accessibility initiative, this Microsoft program will see $25 million deployed across five years to further research and development specifically targeting challenges faced by people with disabilities in three key areas: human connection, employment and modern life. The $25 million budget will be used by Microsoft as seed grants for developers, universities, institutions and other Microsoft partners, with the Redmond-based company pledging to further invest in - and scale up - development of the most promising ideas born from this project. The AI bit comes from its implementation in inclusive design scenarios, scaled up through platforms, services, and different solutions.

Further, Microsoft will help partners include accessibility solutions on their products, which could allow for a base model for accessibility technologies on families of products. Microsoft President Brad Smith said there are about a billion people around the world with some kind of disability, either temporary or permanent, and it's for these people, and those that will come after, that Microsoft is committing to this investment.

Adobe and NVIDIA Announce Partnership to Deliver New AI Services

At Adobe Summit, Adobe and NVIDIA today announced a strategic partnership to rapidly enhance their industry-leading artificial intelligence (AI) and deep learning technologies. Building on years of collaboration, the companies will work to optimize the Adobe Sensei AI and machine learning (ML) framework for NVIDIA GPUs. The collaboration will speed time to market and improve performance of new Sensei-powered services for Adobe Creative Cloud and Experience Cloud customers and developers.

The partnership advances Adobe's strategy to extend the availability of Sensei APIs and to broaden the Sensei ecosystem to a new audience of developers, data scientists and partners. "Combining NVIDIA's best-in-class AI capabilities with Adobe's leading creative and digital experience solutions, all powered by Sensei, will allow us to deliver higher-performing AI services to customers and developers more quickly," said Shantanu Narayen, president and CEO, Adobe. "We're excited to partner with NVIDIA to push the boundaries of what's possible in creativity, marketing and exciting new areas like immersive media."

Intel FPGAs Accelerate Artificial Intelligence for Deep Learning in Microsoft's Bing

Artificial intelligence (AI) is transforming industries and changing how data is managed, interpreted and, most importantly, used to solve real problems for people and businesses faster than ever.

Today's Microsoft Bing Intelligent Search news demonstrates how Intel FPGA (field-programmable gate array) technology is powering some of the world's most advanced AI platforms. Advances to the Bing search engine with real-time AI will help people do more and learn more by going beyond standard search results. Bing Intelligent Search will provide answers instead of web pages, and enable a system that understands words and the meaning behind them, as well as the context and intent of a search.
