News Posts matching #Artificial Intelligence


Lightelligence's Optical Processor Outperforms GPUs by 100 Times in Some of the Hardest Math Problems

Optical computing has been a research topic for many startups and established tech companies like Intel and IBM, all searching for a practical approach to a new way of computing. However, the most innovative solutions often come out of startups, and today is no exception. According to a report from EETimes, optical computing startup Lightelligence has developed a processor that outperforms regular GPUs by 100 times on some of the most challenging mathematical problems. As the report indicates, the Photonic Arithmetic Computing Engine (PACE) from Lightelligence outperforms regular GPUs, like NVIDIA's GeForce RTX 3080, by almost 100 times on the NP-complete class of problems.

More precisely, the PACE accelerator was tackling the Ising model, an example of a thermodynamic system used for understanding phase transitions, and it achieved some impressive results: compared to the RTX 3080, it reached a 100-times speed-up. All of that was performed using 12,000 optical devices integrated onto a circuit and running at a 1 GHz frequency. Even compared to Toshiba's purpose-built simulated bifurcation machine, an FPGA-based system designed specifically to tackle the Ising computation, the PACE is still 25 times faster. The PACE chip uses standard silicon photonics integration of Mach-Zehnder Interferometers (MZI) for computing, and MEMS to change the waveguide shape in the MZI.
Lightelligence Photonic Arithmetic Computing Engine
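For context, the Ising model the PACE targets assigns an energy to every configuration of binary spins coupled by a matrix, and minimizing that energy is the NP-hard search the accelerator speeds up. A minimal sketch of the model follows; the random couplings and the greedy flip heuristic here are illustrative assumptions, not Lightelligence's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
J = rng.standard_normal((n, n))
J = (J + J.T) / 2                # symmetric coupling matrix
np.fill_diagonal(J, 0)           # no self-coupling
s = rng.choice([-1, 1], size=n)  # random initial spin configuration

def energy(s, J):
    """Ising energy H(s) = -1/2 * sum_ij J_ij * s_i * s_j."""
    return -0.5 * s @ J @ s

e_start = energy(s, J)

# Greedy single-spin flips: accept a flip only if it lowers the energy.
for _ in range(1000):
    i = rng.integers(n)
    delta = 2 * s[i] * (J[i] @ s)  # energy change if spin i were flipped
    if delta < 0:
        s[i] = -s[i]
```

Even this toy version shows why the problem is hard: the number of spin configurations grows as 2^n, and greedy descent only finds a local minimum, which is what makes hardware that explores the landscape faster so attractive.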

QNAP and ULINK Release DA Drive Analyzer, AI-powered Drive Failure Prediction Tool for NAS

QNAP has partnered with ULINK Technology to launch the DA Drive Analyzer. By leveraging cloud-based AI, this drive failure prediction tool allows users to take proactive steps against server downtime and data loss by replacing drives before they fail. The DA Drive Analyzer leverages statistics generated by ULINK's cloud AI portal. Driven by the historical usage data of millions of drives contributed by users, the DA Drive Analyzer's drive health prediction applies machine learning to track historical behavior and can catch drive failure events that won't be flagged by traditional diagnostic tools relying on S.M.A.R.T. thresholds. Its user interface is also friendlier and more intuitive, allowing you to plan drive replacements based on clearly presented drive information.
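ULINK has not published its model, but the general technique it describes, training a classifier on historical drive attributes labeled by later failures, can be sketched. Everything below (the feature choice, the synthetic data, and plain logistic regression) is an illustrative assumption, not ULINK's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic S.M.A.R.T.-style features (illustrative only, not ULINK's data):
# e.g. reallocated sectors, pending sectors, power-on hours (normalized).
n = 500
X = rng.standard_normal((n, 3))
true_w = np.array([2.0, 1.5, 0.3])
y = ((X @ true_w + rng.standard_normal(n)) > 1.0).astype(float)  # 1 = drive later failed

# Logistic regression via gradient descent: instead of one fixed threshold
# per attribute (the classic S.M.A.R.T. approach), the model learns a
# weighting across attributes, so combinations of mild symptoms can flag a drive.
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted failure probability
    w -= 0.5 * (X.T @ (p - y)) / n
    b -= 0.5 * (p - y).mean()

pred = (1 / (1 + np.exp(-(X @ w + b)))) > 0.5
accuracy = (pred == y).mean()
```

The point of the sketch is the contrast with threshold-based diagnostics: a learned model can score a drive as risky even when no single attribute has crossed its vendor-set limit.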

"Artificial Intelligence is a new technology that has tackled many real-life problems. By applying this technology to disk failure prediction, ULINK can actively and continuously monitor drives, detect problems, predict failures, and notify end users with our unique cloud-based data processing system. We are fortunate to have worked with QNAP to create this service, and we believe that many will benefit from it," said Joseph Chen, CEO of ULINK Technology.

AMD Announces Ambitious Goal to Increase Energy Efficiency of Processors Running AI Training and High Performance Computing Applications 30x by 2025

AMD today announced a goal to deliver a 30x increase in energy efficiency for AMD EPYC CPUs and AMD Instinct accelerators in Artificial Intelligence (AI) training and High Performance Computing (HPC) applications running on accelerated compute nodes by 2025.1 Accomplishing this ambitious goal will require AMD to increase the energy efficiency of a compute node at a rate that is more than 2.5x faster than the aggregate industry-wide improvement made during the last five years.

Accelerated compute nodes are the most powerful and advanced computing systems in the world, used for scientific research and large-scale supercomputer simulations. They provide the computing capability used by scientists to achieve breakthroughs across many fields including material sciences, climate prediction, genomics, drug discovery and alternative energy. Accelerated nodes are also integral for training the AI neural networks currently used for activities including speech recognition, language translation and expert recommendation systems, with similar promising uses over the coming decade. The 30x goal would save billions of kilowatt hours of electricity in 2025, reducing the power required for these systems to complete a single calculation by 97% over five years.
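The 97% figure follows directly from the 30x goal: if efficiency (work per joule) rises 30x, the energy per calculation falls to 1/30 of the baseline. A quick check:

```python
# A 30x gain in energy efficiency means each calculation uses 1/30
# of the baseline energy - which is where AMD's 97% figure comes from.
baseline_energy = 1.0                  # normalized energy per calculation today
energy_2025 = baseline_energy / 30     # after the 30x efficiency goal
reduction = 1 - energy_2025            # fraction of energy saved per calculation
print(f"{reduction:.1%}")              # 96.7%, i.e. ~97%
```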

IBM Unveils On-Chip Accelerated Artificial Intelligence Processor

At the annual Hot Chips conference, IBM (NYSE: IBM) today unveiled details of the upcoming new IBM Telum Processor, designed to bring deep learning inference to enterprise workloads to help address fraud in real-time. Telum is IBM's first processor that contains on-chip acceleration for AI inferencing while a transaction is taking place. Three years in development, the breakthrough of this new on-chip hardware acceleration is designed to help customers achieve business insights at scale across banking, finance, trading, insurance applications and customer interactions. A Telum-based system is planned for the first half of 2022.

Today, businesses typically apply detection techniques to catch fraud after it occurs, a process that can be time consuming and compute-intensive due to the limitations of today's technology, particularly when fraud analysis and detection is conducted far away from mission critical transactions and data. Due to latency requirements, complex fraud detection often cannot be completed in real-time - meaning a bad actor could have already successfully purchased goods with a stolen credit card before the retailer is aware fraud has taken place.

IDC Forecasts Companies to Spend Almost $342 Billion on AI Solutions in 2021

Worldwide revenues for the artificial intelligence (AI) market, including software, hardware, and services, are estimated to grow 15.2% year over year in 2021 to $341.8 billion, according to the latest release of the International Data Corporation (IDC) Worldwide Semiannual Artificial Intelligence Tracker. The market is forecast to accelerate further in 2022 with 18.8% growth and remain on track to break the $500 billion mark by 2024. Among the three technology categories, AI Software occupies 88% of the overall AI market. However, in terms of growth, AI Hardware is estimated to grow the fastest over the next several years. From 2023 onwards, AI Services is forecast to become the fastest growing category.

Within the AI Software category, AI Applications has the lion's share at nearly 50% of revenues. In terms of growth, AI Platforms is the strongest with a five-year compound annual growth rate (CAGR) of 33.2%. The slowest will be AI System Infrastructure Software with a five-year CAGR of 14.4% while accounting for roughly 35% of all AI Software revenues. Within the AI Applications market, AI ERM is expected to grow slightly stronger than AI CRM over the next five years. Meanwhile, AI Lifecycle Software is forecast to grow the fastest among the markets within AI Platforms.
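IDC's headline numbers are internally consistent, which is easy to verify: 18.8% growth in 2022 lifts the market to roughly $406 billion, and passing $500 billion by 2024 then only requires a compound annual growth rate of about 13.5% from the 2021 base:

```python
# Sanity-checking IDC's figures (all values as publicly stated, in $B).
rev_2021 = 341.8                                # 2021 estimate
rev_2022 = rev_2021 * 1.188                     # 18.8% growth -> about $406B
cagr_to_500 = (500 / rev_2021) ** (1 / 3) - 1   # implied CAGR, 2021 -> 2024
```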

All Team Group Industrial Products Pass Military-Grade Certification

Leading memory provider TEAMGROUP today announced that all of its industrial storage products have passed military standards testing for shock and vibration resistance. The company's unique graphene-coated copper heatsink was also awarded a U.S. utility patent and is the world's first heat sink to be used for industrial SSDs. In response to the growing Artificial Intelligence of Things (AIoT) and edge computing trends, TEAMGROUP continues to improve the durability, reliability, and safety of its industrial products, striving to create innovative solutions in the field of industrial control.

TEAMGROUP's industrial product series, including industrial memory modules, SSDs and memory cards, have all been tested and certified to meet military shock (MIL-STD-202G and MIL-STD-883K) and vibration (MIL-STD-810G) standards. Whether focused on edge computing or 5G-related applications, these products are guaranteed to handle high-speed data processing and computing over long periods of continuous operation. TEAMGROUP has once again proven the stability and durability of its industrial products, which meet the demanding needs for data security and industrial control in extreme conditions.

Qualcomm Introduces New 5G Distributed Unit Accelerator Card to Drive Global 5G Virtualized RAN Growth

Qualcomm Technologies, Inc. today announced the expansion of its 5G RAN Platforms portfolio with the addition of the Qualcomm 5G DU X100 Accelerator Card. The Qualcomm 5G DU X100 is designed to give operators and infrastructure vendors the ability to readily reap the benefits of high-performance, low-latency, power-efficient 5G, while accelerating the cellular ecosystem's transition toward virtualized radio access networks.

The Qualcomm 5G DU X100 is a PCIe inline accelerator card with concurrent Sub-6 GHz and mmWave baseband support, designed to simplify 5G deployments as a turnkey solution with O-RAN fronthaul and 5G NR layer 1 High (L1 High) processing. The card plugs into standard Commercial-Off-The-Shelf (COTS) servers to offload CPUs from latency-sensitive and compute-intensive 5G baseband functions such as demodulation, beamforming, channel coding, and the Massive MIMO computation needed for high-capacity deployments. For use in public or private networks, the accelerator card aims to give carriers the ability to increase overall network capacity and fully realize the transformative potential of 5G.

Seagate Launches SkyHawk AI 18TB Hard Drive

Seagate Technology plc, a world leader in data storage and management solutions, today announced it is shipping 18 TB SkyHawk Artificial Intelligence drives in volume. SkyHawk AI is the world's first purpose-built hard drive for artificial intelligence (AI)-enabled surveillance solutions, enabling quicker and smarter decisions. The new drive supports deep learning and machine learning workload streams for edge applications with ImagePerfectAI.

The capacity to retain more data over time is required for deep learning systems to become smarter and more accurate in their predictive analysis, and behavior analysis requires significantly more data than traditional video capture. SkyHawk AI simultaneously sustains 32 AI streams alongside 64 video streams and supports both multi-bay NVRs and AI-enabled NVRs. SkyHawk AI offers a 550 TB/year workload rate, more than 3× that of standard surveillance hard drives, to manage the data deluge in complex video security environments without sacrificing performance. The drive intelligently adapts between traditional video workloads and video+AI workloads.

Asetek Collaborates With HPE to Deliver Next-Gen HPC Server Cooling Solutions

Asetek today announced a collaboration with Hewlett Packard Enterprise (HPE) to deliver its premium data center liquid cooling solutions in HPE Apollo Systems, which are high-performing and density-optimized to target high-performance computing (HPC) and Artificial Intelligence (AI) needs. The integration enables deployment of high wattage processors in high density configurations to support compute-intense workloads.

When developing its next-generation HPC server solutions, HPE worked closely with Asetek to define a plug-and-play HPC system that is integrated, installed, and serviced by HPE and serves as the ideal complement to HPE's Gen10 Plus platform. The resulting solution lets HPE maximize processor and interconnect performance by efficiently cooling high-density computing clusters. HPE will be deploying these DLC systems, which support warm-water cooling, this calendar year.

Qualcomm Launches World's First 5G and AI-Enabled Robotics Platform

Qualcomm Technologies, Inc. today announced the Qualcomm Robotics RB5 platform - the Company's most advanced, integrated, comprehensive offering designed specifically for robotics. Building on the successful Qualcomm Robotics RB3 platform and its broad adoption in a wide array of robotics and drone products available today, the Qualcomm Robotics RB5 platform comprises an extensive set of hardware, software and development tools.

The Qualcomm Robotics RB5 platform is the first of its kind to bring together the Company's deep expertise in 5G and AI to empower developers and manufacturers to create the next generation of high-compute, low-power robots and drones for the consumer, enterprise, defense, industrial and professional service sectors - and the comprehensive Qualcomm Robotics RB5 Development Kit helps ensure developers have the customization and flexibility they need to make their visions a commercial reality. To date, Qualcomm Technologies has engaged many leading companies that have endorsed the Qualcomm Robotics RB5 platform, including 20+ early adopters in the process of evaluating the platform.
Qualcomm Robotics RB5 Platform

Intel Showcases Intelligent Edge and Energy-efficient Performance Research

This week at the 2020 Symposia on VLSI Technology and Circuits, Intel will present a body of research and technical perspectives on the computing transformation driven by data that is increasingly distributed across the core, edge and endpoints. Chief Technology Officer Mike Mayberry will deliver a plenary keynote, "The Future of Compute: How Data Transformation is Reshaping VLSI," that highlights the importance of transitioning computing from a hardware/program-centric approach to a data/information-centric approach.

"The sheer volume of data flowing across distributed edge, network and cloud infrastructure demands energy-efficient, powerful processing to happen close to where the data is generated, but is often limited by bandwidth, memory and power resources. The research Intel Labs is showcasing at the VLSI Symposia highlights several novel approaches to more efficient computation that show promise for a range of applications - from robotics and augmented reality to machine vision and video analytics. This body of research is focused on addressing barriers to the movement and computation of data, which represent the biggest data challenges of the future," said Vivek K. De, Intel fellow and director of Circuit Technology Research, Intel Labs.

Microsoft is Replacing MSN Journalists with Artificial Intelligence

Microsoft is working on bringing the latest artificial intelligence technology everywhere it can - and wherever it works. According to reports from Business Insider and The Seattle Times, Microsoft is terminating its contracts with journalists and replacing them with artificial intelligence software. Between Wednesday and Thursday of last week, around 50 employees were informed that their contracts will not be renewed after June 30th. The journalists in question were responsible for Microsoft's MSN web portal, which will now use machine learning (ML) models to generate its news stream. For an application like this, Microsoft is presumably utilizing its Azure infrastructure to process everything in the cloud.

One ex-employee said that the MSN platform has been semi-automated for some time now and that this completes the automation. "Like all companies, we evaluate our business on a regular basis. This can result in increased investment in some places and, from time to time, re-deployment in others. These decisions are not the result of the current pandemic," a Microsoft spokesperson told The Seattle Times.

Dell Announces New Generation Latitude, OptiPlex, and Precision Commercial Notebooks, Desktops, and Services

Dell Technologies unveiled the world's most intelligent and secure business PCs across its award-winning Latitude, Precision and OptiPlex portfolios to make work more efficient and safe - no matter the location. As the industry's most sustainable commercial PC portfolio, the new devices further advance Dell's commitment to sustainability with recycled materials, sustainable packaging, energy efficient designs and EPEAT Gold registrations.

Professionals can work smarter with Dell Optimizer, the automated Artificial Intelligence (AI)-based optimization technology, now available across Latitude, Precision and OptiPlex devices. The built-in software learns how each person works and adapts to their behavior to help them focus on the tasks that matter most. It works behind the scenes to improve overall application performance; enable faster log-in and secure lock outs; eliminate echoes and reduce background noise on conference calls; and extend battery run time.

GLOBALFOUNDRIES Delivers Industry's First Production-ready eMRAM on 22FDX Platform

GLOBALFOUNDRIES (GF) today announced its embedded magnetoresistive non-volatile memory (eMRAM) on the company's 22 nm FD-SOI (22FDX) platform has entered production, and GF is working with several clients with multiple production tape-outs scheduled in 2020. Today's announcement represents a significant industry milestone, demonstrating the scalability of eMRAM as a cost-effective option at advanced process nodes for Internet of Things (IoT), general-purpose microcontrollers, automotive, edge-AI (Artificial Intelligence), and other low-power applications.

Designed as a replacement for high-volume embedded NOR flash (eFlash), GF's eMRAM allows designers to extend their existing IoT and microcontroller unit architectures to access the power and density benefits of technology nodes below 28 nm.

Intel Acquires Artificial Intelligence Chipmaker Habana Labs

Intel Corporation today announced that it has acquired Habana Labs, an Israel-based developer of programmable deep learning accelerators for the data center for approximately $2 billion. The combination strengthens Intel's artificial intelligence (AI) portfolio and accelerates its efforts in the nascent, fast-growing AI silicon market, which Intel expects to be greater than $25 billion by 2024 (1).

"This acquisition advances our AI strategy, which is to provide customers with solutions to fit every performance need - from the intelligent edge to the data center," said Navin Shenoy, executive vice president and general manager of the Data Platforms Group at Intel. "More specifically, Habana turbo-charges our AI offerings for the data center with a high-performance training processor family and a standards-based programming environment to address evolving AI workloads."

Intel's AI strategy is grounded in the belief that harnessing the power of AI to improve business outcomes requires a broad mix of technology - hardware and software - and full ecosystem support. Today, Intel AI solutions are helping customers turn data into business value and driving meaningful revenue for the company. In 2019, Intel expects to generate over $3.5 billion in AI-driven revenue, up more than 20 percent year-over-year. Together, Intel and Habana can accelerate the delivery of best-in-class AI products for the data center, addressing customers' evolving needs.

NVIDIA Leads the Edge AI Chipset Market but Competition is Intensifying: ABI Research

Diversity is the name of the game when it comes to the edge Artificial Intelligence (AI) chipset industry. In 2019, the AI industry is witnessing the continual migration of AI workloads, particularly AI inference, to edge devices, including on-premise servers, gateways, and end-devices and sensors. Based on the AI development in 17 vertical markets, ABI Research, a global tech market advisory firm, estimates that the edge AI chipset market will grow from US $2.6 billion in 2019 to US $7.6 billion by 2024, with no vendor commanding more than 40% of the market.
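ABI's forecast spans five years, so the implied compound annual growth rate is worth making explicit: growing from $2.6 billion in 2019 to $7.6 billion in 2024 requires roughly 24% growth per year:

```python
# Implied CAGR of ABI Research's edge AI chipset forecast ($B, as stated).
rev_2019 = 2.6
rev_2024 = 7.6
cagr = (rev_2024 / rev_2019) ** (1 / 5) - 1   # about 24% per year
```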

The frontrunner of this market is NVIDIA, with a 39% revenue share in the first half of 2019. The GPU vendor has a strong presence in key AI verticals that are currently leading in AI deployments, such as automotive, camera systems, robotics, and smart manufacturing. "In the face of different use cases, NVIDIA chooses to release GPU chipsets with different computational and power budgets. In combination with its large developer ecosystem and partnerships with academic and research institutions, the chipset vendor has developed a strong foothold in the edge AI industry," said Lian Jye Su, Principal Analyst at ABI Research.

NVIDIA is facing stiff competition from Intel with its comprehensive chipset portfolio, from Xeon CPU to Mobileye and Movidius Myriad. At the same time, FPGA vendors, such as Xilinx, QuickLogic, and Lattice Semiconductor, are creating compelling solutions for industrial AI applications. One missing vertical from NVIDIA's wide footprint is consumer electronics, specifically smartphones. In recent years, AI processing in smartphones has been driven by smartphone chipset manufacturers and smartphone vendors, such as Qualcomm, Huawei, and Apple. In smart home applications, MediaTek and Amlogic are making their presence known through the widespread adoption of voice control front ends and smart appliances.

Compute Express Link Consortium (CXL) Officially Incorporates

Today, Alibaba, Cisco, Dell EMC, Facebook, Google, Hewlett Packard Enterprise, Huawei, Intel Corporation and Microsoft announced the incorporation of the Compute Express Link (CXL) Consortium and unveiled the names of the newly-elected members of its Board of Directors. The core group of key industry partners announced their intent to incorporate in March 2019, and remains dedicated to advancing the CXL standard, a new high-speed CPU-to-Device and CPU-to-Memory interconnect that accelerates next-generation data center performance.

The five new CXL board members are as follows: Steve Fields, Fellow and Chief Engineer of Power Systems, IBM; Gaurav Singh, Corporate Vice President, Xilinx; Dong Wei, Standards Architect and Fellow at ARM Holdings; Nathan Kalyanasundharam, Senior Fellow at AMD Semiconductor; and Larrie Carr, Fellow, Technical Strategy and Architecture, Data Center Solutions, Microchip Technology Inc.

Intel's CEO Blames 10 nm Delay on being "Too Aggressive"

During Fortune's Brainstorm Tech conference in Aspen, Colorado, Intel CEO Bob Swan took the stage and talked about the company: where Intel is now, where it is headed, and how it plans to evolve. Particular focus was put on how Intel became "data centric" rather than "PC centric," and the struggles it encountered along the way.

However, when asked about the demise of Moore's Law, Swan detailed how aggressively the company approached the challenge. Instead of the regular doubling of transistor density every two years, Swan said that Intel has always targeted even greater densities so that it would stay the leader in the business.

SHERPA Consortium: If AI Could Feel, it Would Fear Cyber-attacks from People

Artificial intelligence (AI) is rapidly finding applications in nearly every walk of life: self-driving cars, social media networks, cyber security companies, and everything in between use it. But a new report published by the SHERPA consortium - an EU project studying the impact of AI on ethics and human rights that F-Secure joined in 2018 - finds that while human attackers have access to machine learning techniques, they currently focus most of their efforts on manipulating existing AI systems for malicious purposes instead of creating new attacks that use machine learning themselves.

The study's primary focus is on how malicious actors can abuse AI, machine learning, and smart information systems. The researchers identify a variety of potentially malicious uses for AI that are well within reach of today's attackers, including the creation of sophisticated disinformation and social engineering campaigns.

FSP Leads the Way to AIoT and 5G at Computex 2019

FSP, one of the world's leading manufacturers of power supplies, is pleased to announce an extensive range of products for the Computex 2019 show in Taipei, Taiwan, from May 28 to June 1, 2019 - with a special focus on the fast-growing AIoT (Artificial Intelligence of Things) sector and 5G-ready products. FSP will be showing power supplies, chargers and other power-related products designed specifically for these markets. In addition, they will be demonstrating a variety of specialized power supplies and PC cases for gamers.

The AIoT has the potential to drive revolutionary changes in numerous sectors, particularly in B2B, but it relies on 24/7/365 uptime backed by a reliable power supply, and FSP is ready to meet this demand. The key AIoT segments span a huge range of applications including smart cities, transportation, logistics, environment, industrials, agriculture, utilities, smart buildings and consumers. For those key segments, FSP offers 1U, UPS and redundant PSUs, which include CRPS for cloud and data center services. For edge computing (the so-called 'fog' segment), FSP has Flex ATX, ATX and UPS. Finally, to fully cover the vertical, FSP offers adapters, open frame designs, Flex ATX, chargers and UPS for edge computing and devices at the client end.

SuperMicro Gearing for Launch of New Gaming-Grade Motherboards With PCIe Gen4 and DDR5 Wave

SuperMicro may not be a household name in consumer motherboards right now, but the company was once a decent alternative in that market - or so I've been told by people much more knowledgeable than me in that regard, as I never laid hands on one. The company is now better known for its server products, where it has focused most of its attention in the past decade - an effort that earned it a solid third-place hold on that market. And if the company can command such a share in an environment with requirements as heavy as the server market's, it's likely those design decisions and developments will trickle down to the consumer side in any gaming-grade product the company decides to tackle.

To that end, SuperMicro is gearing up to reintroduce itself to the consumer market, accompanying the wave of new technologies arriving in the coming years - namely, PCIe Gen 4 and DDR5 memory. The company sees this as a perfect opportunity for a strong comeback to the consumer market, where it now offers only a handful of motherboard solutions for Intel's CPUs. One such example is the C9Z390-PGW motherboard, based on Intel's Z390 chipset, with its 10-phase VRM design, a PLX chip for doubling the available PCIe lanes, and 10 Gigabit LAN. But SuperMicro won't be delivering only on such "typical" consumer motherboard technologies - if the company has its way, anything from 5G, IoT, Machine Learning and Artificial Intelligence could be incorporated for some use case or another on consumer-grade motherboards, providing an axis of penetration for SuperMicro and its entire partner ecosystem.

Samsung Launches First Mobile SoC with AI-Accelerating Matrix Multiplication Cores

Samsung Electronics Co., Ltd., a world leader in advanced semiconductor technology, today announced its latest premium application processor (AP), the Exynos 9 Series 9820, equipped for on-device Artificial Intelligence (AI) applications. The Exynos 9820 features a fourth-generation custom CPU, 2.0-gigabits-per-second (Gbps) LTE Advanced Pro modem, and an enhanced neural processing unit (NPU) to bring new smart experiences to mobile devices.

"As AI-related services expand and their utilization diversifies in mobile devices, their processors require higher computational capabilities and efficiency," said Ben Hur, vice president of System LSI marketing at Samsung Electronics. "The AI capabilities in the Exynos 9 Series 9820 will provide a new dimension of performance in smart devices through an integrated NPU, high-performance fourth-generation custom CPU core, 2.0Gbps LTE modem and improved multimedia performance."

Samsung Electronics Starts Production of EUV-based 7nm LPP Process

Samsung Electronics, a world leader in advanced semiconductor technology, today announced that it has completed all process technology development and has started wafer production of its revolutionary process node, 7LPP, the 7-nanometer (nm) LPP (Low Power Plus) with extreme ultraviolet (EUV) lithography technology. The introduction of 7LPP is a clear demonstration of Samsung Foundry's technology roadmap evolution and provides customers with a definite path to 3nm. The commercialization of its newest process node, 7LPP gives customers the ability to build a full range of exciting new products that will push the boundaries of applications such as 5G, Artificial Intelligence, Enterprise and Hyperscale Datacenter, IoT, Automotive, and Networking.

"With the introduction of its EUV process node, Samsung has led a quiet revolution in the semiconductor industry," said Charlie Bae, executive vice president of foundry sales and marketing team at Samsung Electronics. "This fundamental shift in how wafers are manufactured gives our customers the opportunity to significantly improve their products' time to market with superior throughput, reduced layers, and better yields. We're confident that 7LPP will be an optimal choice not only for mobile and HPC, but also for a wide range of cutting-edge applications."

Let's Go Driverless: Daimler, Bosch Select NVIDIA DRIVE for Robotaxi Fleets

(Editor's Note: NVIDIA continues to spread its wings in the AI and automotive markets, where it has rapidly become the de facto player. While the company's gaming products have certainly been the ones to project its image - and the profits - that allowed it to become one of the world's leading tech companies, it's hard to dispute that AI and datacenter accelerators have become one of the chief profit drivers for the company. The company's vision for Level 4 and Level 5 autonomous driving and the future of our connected cities is an inspiring one, straight out of yesterday's science fiction. Here's hoping the human mind, laws and city design efforts accompany these huge technological leaps - or at least don't strangle them too much.)

Press a button on your smartphone and go. Daimler, Bosch and NVIDIA have joined forces to bring fully automated and driverless vehicles to city streets, and the effects will be felt far beyond the way we drive. While the world's billion cars travel 10 trillion miles per year, most of the time these vehicles are sitting idle, taking up valuable real estate while parked. And when driven, they are often stuck on congested roadways. Mobility services will solve these issues plaguing urban areas, capture underutilized capacity and revolutionize the way we travel.

Samsung Foundry and Arm Expand Collaboration to Drive High-Performance Computing Solutions

Samsung Electronics, a world leader in advanced semiconductor technology, today announced that its strategic foundry collaboration with Arm will be expanded to 7/5-nanometer (nm) FinFET process technology to remain a step ahead in the era of high-performance computing. Based on Samsung Foundry's 7LPP (7nm Low Power Plus) and 5LPE (5nm Low Power Early) process technologies, the Arm Artisan physical IP platform will enable 3GHz+ computing performance for Arm's Cortex-A76 processor.

Samsung's 7LPP process technology will be ready for its initial production in the second half of 2018. This first extreme ultraviolet (EUV) lithography process technology and its key IPs are in development and expected to be completed by the first half of 2019. Samsung's 5LPE technology will allow greater area scaling and ultra-low power benefits building on the latest innovations in the 7LPP process technology.
May 15th, 2024 14:26 EDT
