News Posts matching "AI"


FSP Leads the Way to AIoT and 5G at Computex 2019

FSP, one of the world's leading manufacturers of power supplies, is pleased to announce an extensive range of products for the Computex 2019 show in Taipei, Taiwan, from May 28 to June 1, 2019 - with a special focus on the fast-growing AIoT (Artificial Intelligence of Things) sector and 5G-ready products. FSP will be showing power supplies, chargers and other power-related products designed specifically for these markets. In addition, the company will be demonstrating a variety of specialized power supplies and PC cases for gamers.

The AIoT has the potential to drive revolutionary changes in numerous sectors, particularly in B2B, but relies on 24/7/365 uptime backed by a reliable power supply. FSP is ready to meet this demand. The key AIoT segments span a huge range of applications, including smart cities, transportation, logistics, environment, industrials, agriculture, utilities, smart buildings and consumer devices. For cloud and data-center services, FSP offers 1U, UPS and redundant PSUs, including CRPS. For edge computing (the so-called "fog" segment), FSP has Flex ATX, ATX and UPS products. Finally, to fully cover the vertical, FSP offers adapters, open-frame designs, Flex ATX, chargers and UPS units for edge computing and devices at the client end.

NVIDIA Announces Financial Results for First Quarter Fiscal 2020

NVIDIA today reported revenue for the first quarter ended April 28, 2019, of $2.22 billion compared with $3.21 billion a year earlier and $2.21 billion in the previous quarter. GAAP earnings per diluted share for the quarter were $0.64, compared with $1.98 a year ago and $0.92 in the previous quarter. Non-GAAP earnings per diluted share were $0.88 compared with $2.05 a year earlier and $0.80 in the previous quarter.

"NVIDIA is back on an upward trajectory," said Jensen Huang, founder and CEO of NVIDIA. "We've returned to growth in gaming, with nearly 100 new GeForce Max-Q laptops shipping. And NVIDIA RTX has gained broad industry support, making ray tracing the standard for next-generation gaming."

Microsoft Partners with Sony on Gaming and AI

Sony Corporation (Sony) and Microsoft Corp. (Microsoft) announced on Thursday that the two companies will partner on new innovations to enhance customer experiences in their direct-to-consumer entertainment platforms and AI solutions.

Under the memorandum of understanding signed by the parties, the two companies will explore joint development of future cloud solutions in Microsoft Azure to support their respective game and content-streaming services. In addition, the two companies will explore the use of current Microsoft Azure datacenter-based solutions for Sony's game and content-streaming services. By working together, the companies aim to deliver more enhanced entertainment experiences for their worldwide customers. These efforts will also include building better development platforms for the content creator community.

Western Digital Announces Automotive-grade iNAND EM132 eMMC Storage

Western Digital Corp. is addressing the automotive industry's increasing need for storage by equipping vehicle manufacturers and system solution providers with the technology and capacity to support both current and future applications including e-cockpits, Artificial Intelligence (AI) databases, ADAS, advanced infotainment systems, and autonomous computers. As the first 256GB e.MMC using 64-Layer 3D NAND TLC flash technology in the automotive market, the new Western Digital iNAND AT EM132 EFD extends the life of e.MMC beyond 2D NAND to meet evolving application needs and growing capacity requirements.

According to Neil Shah, partner and research director, Counterpoint Research, "Storage is one of the fastest growing semiconductor applications in a connected autonomous car. The advanced in-vehicle infotainment (IVI), AI and sensor-driven autonomous driving systems generate large amounts of data that needs to be processed and stored locally at the edge. The average capacity of storage required per vehicle is expected to balloon beyond 2TB by 2022."

Intel Xe GPUs to Support Raytracing Hardware Acceleration

Intel's upcoming Xe discrete GPUs will feature hardware-acceleration for real-time raytracing, similar to NVIDIA's "Turing" RTX chips, according to a company blog detailing how the company's Rendering Framework will work with the upcoming Xe architecture. The blog only mentions that the company's data-center GPUs support the feature, not whether its client-segment ones do. The data-center Xe GPUs are targeted at cloud-based gaming and cloud-computing providers, as well as those building large rendering farms.

"I'm pleased to share today that the Intel Xe architecture roadmap for data center optimized rendering includes ray tracing hardware acceleration support for the Intel Rendering Framework family of APIs and libraries," said Jim Jeffers, Sr. Principal Engineer and Sr. Director of Intel's Advanced Rendering and Visualization team. Intel did not go into technical details of the hardware itself. NVIDIA demonstrated that you need two major components on a modern GPU to achieve real-time raytracing: 1. fixed-function hardware that computes intersections of rays with triangles or surfaces (the RT cores, in NVIDIA's case), and 2. an "inexpensive" de-noiser. NVIDIA took the AI route to achieve the latter, deploying tensor cores (matrix-multiplication units), which accelerate the building and training of AI DNNs. Both tasks are achievable without fixed-function hardware, using programmable unified shaders, but at great performance cost. Intel, for its part, developed a CPU-based de-noiser that can leverage AVX-512.
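The first of those two components, the ray-triangle intersection test, can be sketched in a few lines. Below is a minimal Möller-Trumbore intersection routine in plain Python - purely illustrative, since actual RT cores implement this test (plus bounding-volume-hierarchy traversal) in fixed-function silicon:

```python
# Minimal Möller-Trumbore ray-triangle intersection in plain Python.
# Illustrative only: RT cores do this in fixed-function hardware.

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Return distance t along the ray to the hit point, or None on a miss."""
    edge1, edge2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, edge2)
    det = dot(edge1, h)
    if abs(det) < eps:           # ray parallel to triangle plane
        return None
    inv = 1.0 / det
    s = sub(origin, v0)
    u = inv * dot(s, h)          # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, edge1)
    v = inv * dot(direction, q)  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = inv * dot(edge2, q)      # distance along the ray
    return t if t > eps else None

# A ray fired straight at a triangle in the z=0 plane hits at t = 1.0:
print(ray_hits_triangle((0, 0, -1), (0, 0, 1),
                        (-1, -1, 0), (1, -1, 0), (0, 1, 0)))  # prints 1.0
```

A real GPU runs billions of these tests per second against hierarchies of millions of triangles, which is why fixed-function acceleration matters.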

Intel Reports First-Quarter 2019 Financial Results

Intel Corporation today reported first-quarter 2019 financial results. "Results for the first quarter were slightly higher than our January expectations. We shipped a strong mix of high performance products and continued spending discipline while ramping 10nm and managing a challenging NAND pricing environment. Looking ahead, we're taking a more cautious view of the year, although we expect market conditions to improve in the second half," said Bob Swan, Intel CEO. "Our team is focused on expanding our market opportunity, accelerating our innovation and improving execution while evolving our culture. We aim to capitalize on key technology inflections that set us up to play a larger role in our customers' success, while improving returns for our owners."

In the first quarter, the company generated approximately $5.0 billion in cash from operations, paid dividends of $1.4 billion and used $2.5 billion to repurchase 49 million shares of stock. In the first quarter, Intel achieved 4 percent growth in the PC-centric business while data-centric revenue declined 5 percent.

NVIDIA Responds to Tesla's In-house Full Self-driving Hardware Development

Tesla held an investor panel in the USA yesterday (April 22), with the entire event, which focused on autonomous vehicles, also streamed on YouTube. Many things were promised in the course of the event, most of which are outside the scope of this website, but the announcement of Tesla's first full self-driving hardware module made the news in more ways than one, as reported right here on TechPowerUp. We had noted how Tesla has traditionally relied on NVIDIA (and then Intel) microcontroller units, as well as NVIDIA self-driving modules, but the new in-house-built module steps away from the green camp in favor of more control over the feature set.

NVIDIA was quick to respond, saying Tesla's comparisons were incorrect: the NVIDIA Drive Xavier at 21 TOPS was not the right point of reference, which should instead have been NVIDIA's own full self-driving hardware, the Drive AGX Pegasus, capable of 320 TOPS. Oh, and NVIDIA also claimed Tesla erroneously reported Drive Xavier's performance as 21 TOPS instead of 30 TOPS. It is interesting how quickly one company recognized itself as the unnamed competition, especially at a time when Intel, via its Mobileye division, has also been giving Tesla a hard time. Perhaps this is a sign of things to come: self-driving cars, and AI computing in general, are becoming too big a market to be left to third-party suppliers, with larger companies opting for in-house hardware. This move does hurt NVIDIA's position in the field, with market speculation ongoing that it may end up losing other customers following Tesla's departure.

Western Digital Introduces Surveillance-Class Storage with Extreme Endurance For AI-Enabled Security

Western Digital Corp. today unveiled the new Western Digital WD Purple SC QD312 Extreme Endurance microSD card for designers and manufacturers of AI-enabled security cameras, smart video surveillance and advanced edge devices that capture and store video at higher bit rates than mainstream cameras. According to IHS Markit, global shipments of professional video surveillance cameras are expected to grow from 127 million to over 200 million between 2017 and 2022, and those with on-board storage are expected to grow by an average of approximately 19 percent per year.

With the migration to 4K and higher video resolutions, and the introduction of more smart cameras with built-in AI and improved local processing capabilities, surveillance cameras need to be able to store both video and raw data to facilitate these AI capabilities. As a result, storage with higher capacity, more intelligence and greater durability is increasingly required.

AMD President and CEO Dr. Lisa Su to Deliver COMPUTEX 2019 CEO Keynote

Taiwan External Trade Development Council (TAITRA) announced today that the 2019 COMPUTEX International Press Conference will be held with a Keynote by AMD President and CEO Dr. Lisa Su. The 2019 COMPUTEX International Press Conference & CEO Keynote is scheduled for Monday, May 27 at 10:00 AM in Room 201 of the Taipei International Convention Center (TICC) in Taipei, Taiwan with the keynote topic "The Next Generation of High-Performance Computing".

"COMPUTEX, as one of the global leading technology tradeshows, has continued to advance with the times for more than 30 years. This year, for the first time, a keynote speech will be held at the pre-show international press conference," said Mr. Walter Yeh, President & CEO, TAITRA. "Dr. Lisa Su received a special invitation to share insights about the next generation of high-performance computing. We look forward to her keynote attracting more companies to participate in COMPUTEX, bringing the latest industry insights, and jointly sharing the infinite possibilities of the technology ecosystem on this global stage."

Mellanox Not Quite Intel's Yet, NVIDIA Joins Competitive Bidding

In late January it was reported that Intel was looking to buy out Israeli networking hardware maker Mellanox Technologies, in what looked at the time like a cakewalk of a $6 billion deal - a 35 percent premium over Mellanox's valuation. Turns out, Intel hasn't closed the deal, and other big tech players have entered the fray for Mellanox, the most notable being NVIDIA. The GPU giant has reportedly offered Mellanox a competitive bid of $7 billion.

NVIDIA eyes a slice of the data-center networking hardware pie since the company has invested heavily in GPU-based AI accelerators and its own high-bandwidth interconnect dubbed NVLink, and now needs to complete its hardware ecosystem with NICs and switches under its own brand. Founded in 1999 in Yokneam, Israel, Mellanox designs high-performance network processors and fully-built NICs for a wide range of data-center-relevant interconnects. Intel is by far the biggest tech company operating in Israel, with not just R&D centers but also manufacturing sites, in stark contrast to NVIDIA, which opened its first R&D office there in 2017 with a few hundred employees.

Update: NVIDIA's bid for Mellanox stands at $7 billion.

Microsoft Unveils HoloLens 2 Mixed Reality Headset

Since the release of HoloLens in 2016 we have seen mixed reality transform the way work gets done. We have unlocked super-powers for hundreds of thousands of people who go to work every day. From construction sites to factory floors, from operating rooms to classrooms, HoloLens is changing how we work, learn, communicate and get things done.

We are entering a new era of computing, one in which the digital world goes beyond two-dimensional screens and enters the three-dimensional world. This new collaborative computing era will empower us all to achieve more, break boundaries and work together with greater ease and immediacy in 3D. Today, we are proud to introduce the world to Microsoft HoloLens 2. Our customers asked us to focus on three key areas to make HoloLens even better. They wanted HoloLens 2 to be even more immersive and more comfortable, and to accelerate the time-to-value.

AMD 7nm EPYC "Rome" CPUs in Upcoming Finnish Supercomputer, 200,000 Cores Total

During the next year and a half, the Finnish IT Center for Science (CSC) will be purchasing a new supercomputer in two phases. The first phase consists of Atos' air-cooled BullSequana X400 cluster which makes use of Intel's Cascade Lake Xeon processors along with Mellanox HDR InfiniBand for a theoretical performance of 2 petaflops. Meanwhile, system memory per node will range from 96 GB up to 1.5 TB with the entire system receiving a 4.9 PB Lustre parallel file system as well from DDN. Furthermore, a separate partition of phase one will be used for AI research and will feature 320 NVIDIA V100 NVLinked GPUs configured in 4-GPU nodes. It is expected that peak performance will reach 2.5 petaflops. Phase one will be brought online at some point in the summer of 2019.

Where things get interesting is phase two, which is set for completion during the spring of 2020. Atos will be building CSC a liquid-cooled, HDR-connected BullSequana XH2000 supercomputer configured with 200,000 AMD EPYC "Rome" CPU cores, which for the mathematicians out there works out to 3,125 64-core AMD EPYC processors. Of course, all that x86 muscle will require a great deal of system memory; as such, each node will be equipped with 256 GB for good measure. Storage will consist of an 8 PB Lustre parallel file system, again provided by DDN. Overall, phase two will increase computing capacity by 6.4 petaflops (peak). With deals like this already being signed, it would appear AMD's next-generation EPYC processors are shaping up nicely, considering Intel has had this market cornered for nearly a decade.

Intel Unveils a Clean-slate CPU Core Architecture Codenamed "Sunny Cove"

Intel today unveiled its first clean-slate CPU core micro-architecture since "Nehalem," codenamed "Sunny Cove." Over the past decade, the 9-odd generations of Core processors were based on incrementally refined descendants of "Nehalem," running all the way down to "Coffee Lake." Intel now wants a clean-slate core design, much like AMD "Zen" is a clean-slate compared to "Stars" or to a large extent even "Bulldozer." This allows Intel to introduce significant gains in IPC (single-thread performance) over the current generation. Intel's IPC growth curve over the past three micro-architectures has remained flat, and only grew single-digit percentages over the generations prior.

It's important to note here that "Sunny Cove" is the codename for the core design alone. Intel's earlier codenames were all-encompassing, covering not just cores but also the uncore, and entire dies. It's up to Intel's future chip designers to build dies combining many of these cores, a future-generation iGPU such as Gen11, and a next-generation uncore that probably integrates PCIe gen 4.0 and DDR5 memory. Intel's details on "Sunny Cove" extend to IPC gains, a new ISA (new instruction sets and hardware capabilities, including AVX-512), and improved scalability (the ability to increase core counts without running into latency problems).

Intel Unveils the Neural Compute Stick 2

Intel is hosting its first artificial intelligence (AI) developer conference in Beijing on Nov. 14 and 15. The company kicked off the event with the introduction of the Intel Neural Compute Stick 2 (Intel NCS 2), designed for building smarter AI algorithms and prototyping computer vision at the network edge. Based on the Intel Movidius Myriad X vision processing unit (VPU) and supported by the Intel Distribution of OpenVINO toolkit, the Intel NCS 2 affordably speeds the development of deep neural network inference applications while delivering a performance boost over the previous-generation neural compute stick. The Intel NCS 2 enables deep neural network testing, tuning and prototyping, so developers can go from prototyping into production leveraging a range of Intel vision accelerator form factors in real-world applications.

"The first-generation Intel Neural Compute Stick sparked an entire community of AI developers into action with a form factor and price that didn't exist before. We're excited to see what the community creates next with the strong enhancement to compute power enabled with the new Intel Neural Compute Stick 2," said Naveen Rao, Intel corporate vice president and general manager of the AI Products Group.

Samsung Launches First Mobile SoC with AI-Accelerating Matrix Multiplication Cores

Samsung Electronics Co., Ltd., a world leader in advanced semiconductor technology, today announced its latest premium application processor (AP), the Exynos 9 Series 9820, equipped for on-device Artificial Intelligence (AI) applications. The Exynos 9820 features a fourth-generation custom CPU, 2.0-gigabits-per-second (Gbps) LTE Advanced Pro modem, and an enhanced neural processing unit (NPU) to bring new smart experiences to mobile devices.

"As AI-related services expand and their utilization diversify in mobile devices, their processors require higher computational capabilities and efficiency," said Ben Hur, vice president of System LSI marketing at Samsung Electronics. "The AI capabilities in the Exynos 9 Series 9820 will provide a new dimension of performance in smart devices through an integrated NPU, high-performance fourth-generation custom CPU core, 2.0Gbps LTE modem and improved multimedia performance."
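The "matrix multiplication cores" of the headline refer to the NPU's bread-and-butter operation: a neural network's dense layers reduce to matrix-vector products plus an activation. A minimal pure-Python sketch follows - illustrative only, as Samsung has not published the Exynos NPU's programming model:

```python
# A dense neural-network layer is, at its core, a matrix-vector product -
# the operation an NPU's matrix-multiplication units accelerate.
# Pure-Python sketch for illustration; the Exynos NPU's actual
# programming interface is not public.

def matvec(W, x):
    """Multiply matrix W (a list of rows) by vector x."""
    return [sum(w * xj for w, xj in zip(row, x)) for row in W]

def dense_relu(W, b, x):
    """One dense layer with ReLU activation: max(0, W @ x + b)."""
    return [max(0.0, y + bi) for y, bi in zip(matvec(W, x), b)]

# Tiny worked example: two inputs, two output neurons.
W = [[1.0, -2.0],
     [0.5,  0.5]]
b = [0.0, -1.0]
x = [2.0, 1.0]
print(dense_relu(W, b, x))  # prints [0.0, 0.5]
```

On-device inference chains many such layers; dedicated matrix hardware performs these multiply-accumulates in parallel rather than one at a time as above.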

VIA Partners with Lucid to Develop Industry-Leading VIA Edge AI 3D Developer Kit

VIA Technologies, Inc. today announced that it is partnering with AI vision startup Lucid to deliver AI-based depth-sensing capabilities to more dual- and multi-camera devices in the security, retail, robotics and autonomous vehicle space. With Lucid's proprietary 3D Fusion Technology embedded into the VIA Edge AI 3D Developer Kit, security and retail cameras, robots, drones, and autonomous vehicles will now be able to easily capture accurate depth and 3D with dual- or multi-camera setups while reducing the cost, power, and space consumption of previous hardware depth solutions. As VIA builds out its long-term Edge AI solutions roadmap, Lucid is adding camera- and machine-learning-based depth capabilities on top of every platform.

The AI-enhanced 3D/depth solution developed by Lucid, known as 3D Fusion Technology, is currently deployed in many devices such as 3D cameras, security cameras, robots, and mobile phones - including the RED Hydrogen One, which is launching in November - without any additional emission- or laser-based hardware components. In the VIA Edge AI 3D Developer Kit, the AI depth solution runs on the Qualcomm APQ8096SG embedded processor, which features the Qualcomm AI Engine along with support for multiple cameras, helping Lucid provide superior performance compared to other hardware depth solutions and deliver an industry-leading, pure machine-learning-based software solution.

Intel Drafts Model Legislation to Spur Data Privacy Discussion

Intel Corporation released model legislation designed to inform policymakers and spur discussion on personal data privacy. Prompted by the rapid rise of new technologies like artificial intelligence (AI), Intel's model bill is open for review and comment from privacy experts and the public on an interactive website. The bill's language and comments received should provide useful insight for those interested in meaningful data privacy legislation.

"The collection of personal information is a growing concern. The US needs a privacy law that both protects consumer privacy and creates a framework in which important new industries can prosper. Our model bill is designed to spur discussion that helps inspire meaningful privacy legislation," said David Hoffman, Intel associate general counsel and global privacy officer.

Data are the lifeblood for many critical new industries, including precision medicine, automated driving, workplace safety, smart cities and others. But the growing amount of personal data collected, sometimes without consumers' awareness, raises serious privacy concerns.

Chinese State News Agency Debuts AI-powered Anchor for 24/7 Automated News Coverage

So, this doesn't really concern hardware, but alas, all advances - and particularly AI-related ones - are powered by the little silicon chips that could. This time, in a move that really does point toward the future of news coverage, Xinhua, China's state-run news agency, unveiled the "world's first AI news anchor," created in collaboration with local search-engine company Sogou. There are actually two independent versions of the same anchor - one for news coverage in English, and another for Mandarin.

The AI-infused anchors fuse the image and voice profiles of actual human anchors with artificial intelligence (AI) technology, which powers their speech, lip movements, and facial expressions, alongside reading, absorbing, and curating content that's then posted as video snippets generated by the AI. There is some work to be done until the result is actually indistinguishable from that of actual humans - but do we ever want AI renditions that are indistinguishable from humans? There are a number of problems that could arise from such an achievement, after all. But maybe that's a conversation for another day.

NVIDIA Introduces RAPIDS Open-Source GPU-Acceleration Platform

NVIDIA today announced a GPU-acceleration platform for data science and machine learning, with broad adoption from industry leaders, that enables even the largest companies to analyze massive amounts of data and make accurate business predictions at unprecedented speed.

RAPIDS open-source software gives data scientists a giant performance boost as they address highly complex business challenges, such as predicting credit card fraud, forecasting retail inventory and understanding customer buying behavior. Reflecting the growing consensus about the GPU's importance in data analytics, an array of companies is supporting RAPIDS - from pioneers in the open-source community, such as Databricks and Anaconda, to tech leaders like Hewlett Packard Enterprise, IBM and Oracle.

NVIDIA Announces New GeForce Experience Features Ahead of RTX Push

NVIDIA today announced new GeForce Experience features to be integrated and expanded in the wake of its RTX platform push. The new features include an increased number of Ansel-supporting titles (the already-released Prey and Vampyr, as well as the upcoming Metro Exodus and Shadow of the Tomb Raider), along with RTX-exclusive features being implemented into the company's gaming system companion.

There are also some features being implemented that gamers will be able to take advantage of without explicit Ansel SDK integration by the game's developer - which NVIDIA says will bring Ansel support (in some shape or form) to over 200 titles (150 more than the over 50 titles already supported via the SDK). And capitalizing on Battlefield V's relevance to the gaming crowd, NVIDIA also announced Ansel and Highlights support for the upcoming title.

Intel and Philips Accelerate Deep Learning Inference on CPUs in Medical Imaging

Using Intel Xeon Scalable processors and the OpenVINO toolkit, Intel and Philips tested two healthcare use cases for deep learning inference models: one on X-rays of bones for bone-age-prediction modeling, the other on CT scans of lungs for lung segmentation. In these tests, Intel and Philips achieved a speed improvement of 188 times for the bone-age-prediction model, and a 38 times speed improvement for the lung-segmentation model over the baseline measurements.

"Intel Xeon Scalable processors appear to be the right solution for this type of AI workload. Our customers can use their existing hardware to its maximum potential, while still aiming to achieve quality output resolution at exceptional speeds," said Vijayananda J., chief architect and fellow, Data Science and AI at Philips HealthSuite Insights.

NVIDIA Announces Turing-based Quadro RTX 8000, Quadro RTX 6000 and Quadro RTX 5000

NVIDIA today reinvented computer graphics with the launch of the NVIDIA Turing GPU architecture. The greatest leap since the invention of the CUDA GPU in 2006, Turing features new RT Cores to accelerate ray tracing and new Tensor Cores for AI inferencing which, together for the first time, make real-time ray tracing possible.

These two engines - along with more powerful compute for simulation and enhanced rasterization - usher in a new generation of hybrid rendering to address the $250 billion visual effects industry. Hybrid rendering enables cinematic-quality interactive experiences, amazing new effects powered by neural networks and fluid interactivity on highly complex models.

Five Years Too Late, Typo Fix Offers Improved AI in Aliens: Colonial Marines

It has been a long five years since Aliens: Colonial Marines launched as a hot mess, critically panned by gamers and critics alike. One of the reasons behind the negative reception was the game's poor AI: the Xenomorphs had a tendency to run straight into gunfire, or worse yet, would stand around or group up, making them easy targets. Suffice it to say, the Xenomorphs were far from scary. A typographical error has now been discovered as the reason behind some of those issues.

As noted on the ResetERA forums, a post by jamesdickinson963 on the ACM Overhaul ModDB page traced the problem to a spelling error in a single line within the game's ini file: "teather" where the proper "tether" was intended. This simple mistake, in theory, results in the "zone tether" failing to load the AI parameters attached to the broken bit of code.
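For illustration, the reported bug has the shape below. This is a hypothetical reconstruction, not a verbatim quote - the exact class names in the shipping ini file may differ - but the essence is a class remapping whose right-hand side points at a misspelled name, so the tether parameters never load:

```ini
; Hypothetical reconstruction of the offending ini line. The remapping
; targets a misspelled class ("Teather"), so it resolves to nothing and
; the zone-tether AI parameters are silently skipped:
ClassRemapping=PecanGame.PecanSeqAct_AttachXenoToTether -> PecanGame.PecanSeqAct_AttachPawnToTeather

; Correcting the spelling to "Tether" restores the intended AI behavior:
ClassRemapping=PecanGame.PecanSeqAct_AttachXenoToTether -> PecanGame.PecanSeqAct_AttachPawnToTether
```

Because the engine fails silently on an unresolvable class name, a one-letter typo was enough to disable a core AI behavior for five years.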

Let's Go Driverless: Daimler, Bosch Select NVIDIA DRIVE for Robotaxi Fleets

(Editor's Note: NVIDIA continues to spread its wings in the AI and automotive markets, where it has rapidly become the de facto player. While the company's gaming products have certainly been the ones to project its image - and the profits - that allowed it to become one of the world's leading tech companies, it's hard to deny that AI and datacenter accelerators have become one of the company's chief profit centers. The company's vision for Level 4 and Level 5 autonomous driving and the future of our connected cities is an inspiring one, straight out of yesterday's science fiction. Here's hoping the human mind, laws and city design efforts accompany these huge technological leaps - or at least don't strangle them too much.)

Press a button on your smartphone and go. Daimler, Bosch and NVIDIA have joined forces to bring fully automated and driverless vehicles to city streets, and the effects will be felt far beyond the way we drive. While the world's billion cars travel 10 trillion miles per year, most of the time these vehicles are sitting idle, taking up valuable real estate while parked. And when driven, they are often stuck on congested roadways. Mobility services will solve these issues plaguing urban areas, capture underutilized capacity and revolutionize the way we travel.

Samsung Foundry and Arm Expand Collaboration to Drive High-Performance Computing Solutions

Samsung Electronics, a world leader in advanced semiconductor technology, today announced that its strategic foundry collaboration with Arm will be expanded to 7/5-nanometer (nm) FinFET process technology to remain a step ahead in the era of high-performance computing. Based on Samsung Foundry's 7LPP (7nm Low Power Plus) and 5LPE (5nm Low Power Early) process technologies, the Arm Artisan physical IP platform will enable 3GHz+ computing performance for Arm's Cortex-A76 processor.

Samsung's 7LPP process technology will be ready for initial production in the second half of 2018. The first process technology to use extreme ultraviolet (EUV) lithography, 7LPP and its key IPs are in development and expected to be completed by the first half of 2019. Samsung's 5LPE technology will allow greater area scaling and ultra-low-power benefits built on the latest innovations of the 7LPP process technology.