News Posts matching #AI


PCI-SIG Announces PCI Express 7.0 Specification to Reach 128 GT/s

PCI-SIG today announced that the PCI Express (PCIe) 7.0 specification will double the data rate to 128 GT/s and is targeted for release to members in 2025. "For 30 years the guiding principle of PCI-SIG has been, 'If we build it, they will come,'" observed Nathan Brookwood, Research Fellow at Insight 64. "Early parallel versions of PCI technology accommodated speeds of hundreds of megabytes/second, well matched to the graphics, storage and networking demands of the 1990s.

In 2003, PCI-SIG evolved to a serial design that supported speeds of gigabytes/second to accommodate faster solid-state disks and 100 Mb Ethernet. Almost like clockwork, PCI-SIG has doubled PCIe specification bandwidth every three years to meet the challenges of emerging applications and markets. Today's announcement of PCI-SIG's plan to double the channel's speed to 512 GB/s (bi-directionally) puts it on track to double PCIe specification performance for another 3-year cycle."
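The headline figures above follow from simple arithmetic. The sketch below (not from the announcement, and ignoring PAM4/FLIT encoding overhead) shows how 128 GT/s per lane across a x16 link yields the quoted 512 GB/s bi-directional number:

```python
def pcie_x16_bandwidth_gbs(gt_per_s: float, lanes: int = 16) -> float:
    """Approximate one-direction bandwidth in GB/s: GT/s x lanes / 8 bits per byte."""
    return gt_per_s * lanes / 8

# 128 GT/s per lane over 16 lanes is roughly 256 GB/s each way,
# or 512 GB/s counting both directions, matching the quoted figure.
print(pcie_x16_bandwidth_gbs(128))      # 256.0
print(pcie_x16_bandwidth_gbs(128) * 2)  # 512.0
```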

SiFive Enhances Popular X280 Processor IP to Meet Accelerated Demand for Vector Processing

SiFive Inc., the founder and leader of RISC-V computing, today announced the release of the latest version of its SiFive Intelligence X280 processor, which introduces significant new features: scalability up to a 16-core cache-coherent complex, WorldGuard trusted protection, and VCIX (Vector Coprocessor Interface eXtension), a new interface allowing seamless integration between the X280 vector unit and customer-designed external AI accelerators or other coprocessors. Collectively, these enhanced features deliver unmatched scalability, security, and interoperability to the SiFive X280, the most widely adopted implementation of the RISC-V Vector extension. This latest version of the X280 is a powerful solution for those looking for alternatives to legacy SIMD-style architectures.

Publicly available since April 2021, the SiFive Intelligence X280 has seen rapid adoption as customers gravitate towards its unique combination of performance, power efficiency, and an intuitive programming model. The X280 has claimed double-digit design wins in the past six months alone, in a wide variety of data-driven applications, including AI inference, image processing, datacenter acceleration, and automotive use cases.

Supermicro Announces Global Availability of Intelligent Edge Systems featuring Intel Xeon D Processors

Super Micro Computer, Inc., a global leader in enterprise computing, storage, networking, and green computing technology, is announcing the global availability of a wide range of systems for edge computing, including the recently announced SuperEdge and a range of Intel Xeon D processor-based systems with up to 20 cores, built-in 25 GbE networking, and an operating temperature range of -40°C to 85°C. These systems are optimized for the Intelligent Edge, where responsive, latency-reducing systems are required. In addition, locating powerful AI systems closer to where data is generated reduces network traffic and lowers response time.

"We are pleased to be able to deliver our high-volume optimized edge systems worldwide due to our expanded factories and capacity," said Charles Liang, president and CEO of Supermicro. "Our latest systems are designed for harsh conditions with the highest performing CPUs and GPUs that are available. Supermicro has the unique ability to quickly deliver industry-optimized solutions from the edge to the data center by using our Building Block Solution approach. We are also committed to lessening our servers' carbon footprint, resulting in fewer emissions and a greener planet."

AMD CDNA3 Architecture Sees the Inevitable Fusion of Compute Units and x86 CPU at Massive Scale

AMD in its 2022 Financial Analyst Day presentation unveiled its next-generation CDNA3 compute architecture, which will deliver something we've been expecting for a while: a compute accelerator that combines a large number of compute units for scalar processing with a large number of x86-64 CPU cores, based on some future "Zen" microarchitecture, on a single package. The presence of CPU cores on the package would eliminate the need for the system to have an EPYC or Xeon processor at its head; clusters of Instinct CDNA3 processors could run themselves without a host CPU and its system memory.

The Instinct CDNA3 processor will feature an advanced packaging technology that brings various IP blocks together as chiplets, each based on a node most economical to it, without compromising on its function. The package features stacked HBM memory, and this memory is shared not just by the compute units and x86 cores, but also forms part of large shared memory pools accessible across packages. 4th Generation Infinity Fabric ties it all together.

AAEON Unveils UP Squared 6000 Edge Computing Kit

AAEON continues to innovate and improve with the introduction of their UP Squared 6000 Edge Computing Kit, which offers customers elite, exclusive features in a plug-and-play industrial turnkey solution powered by the Intel Atom x6425RE SoC (formerly Elkhart Lake).

An upgrade from the UP Squared 6000 Edge, the UP Squared 6000 Edge Computing Kit provides a wealth of additional features while outgrowing its predecessor's form factor by only 1 centimeter in height. This centimeter houses an integrated carrier board containing a HAT2-compatible 40-pin PSE header, doubling the expansion options available compared to previous iterations of the UP Board series. This is in addition to an already impressive set of I/O, which features four Gigabit Ethernet ports, two COM ports, three USB 3.2 ports, and three M.2 sockets for AI, SSD, 5G, and Wi-Fi 5/6 modules.

Intel Announces "Rialto Bridge" Accelerated AI and HPC Processor

During the International Supercomputing Conference on May 31, 2022, in Hamburg, Germany, Jeff McVeigh, vice president and general manager of the Super Compute Group at Intel Corporation, announced Rialto Bridge, Intel's data center graphics processing unit (GPU). Using the same architecture as the Intel data center GPU Ponte Vecchio and combining enhanced tiles with Intel's next process node, Rialto Bridge will offer up to 160 Xe cores, more FLOPs, more I/O bandwidth and higher TDP limits for significantly increased density, performance and efficiency.

"As we embark on the exascale era and sprint towards zettascale, the technology industry's contribution to global carbon emissions is also growing. It has been estimated that by 2030, between 3% and 7% of global energy production will be consumed by data centers, with computing infrastructure being a top driver of new electricity use," said Jeff McVeigh, vice president and general manager of the Super Compute Group at Intel Corporation.

Ayar Labs Partners with NVIDIA to Deliver Light-Based Interconnect for AI Architectures

Ayar Labs, the leader in chip-to-chip optical connectivity, is working with NVIDIA to develop groundbreaking artificial intelligence (AI) infrastructure based on optical I/O technology to meet the future demands of AI and high performance computing (HPC) workloads. The collaboration will focus on integrating Ayar Labs' technology into scale-out architectures enabled by high-bandwidth, low-latency, ultra-low-power optical-based interconnects for future NVIDIA products. Together, the companies plan to accelerate the development and adoption of optical I/O technology to support the explosive growth of AI and machine learning (ML) applications and data volumes.

Optical I/O uniquely changes the performance and power trajectories of system designs by enabling compute, memory and networking ASICs to communicate with dramatically increased bandwidth, at lower latency, over longer distances and at a fraction of the power of existing electrical I/O solutions. The technology is also foundational to enabling emerging heterogeneous compute systems, disaggregated/pooled designs, and unified memory architectures that are critical to accelerating future data center innovation.

AMD Selects Google Cloud to Provide Additional Scale for Chip Design Workloads

Google Cloud and AMD today announced a technology partnership in which AMD will run electronic design automation (EDA) for its chip-design workloads on Google Cloud, further extending the on-premises capabilities of AMD data centers. AMD will also leverage Google Cloud's global networking, storage, artificial intelligence, and machine learning capabilities to further improve upon its hybrid and multicloud strategy for these EDA workloads.

Scale, elasticity, and efficient utilization of resources play critical roles in chip design, particularly given that the demand for compute processing grows with each node advancement. To remain flexible and scale easily, AMD will add Google Cloud's newest compute-optimized C2D VM instance, powered by 3rd Gen AMD EPYC processors, to its suite of resources focused on EDA workloads. By leveraging Google Cloud, AMD anticipates being able to run more designs in parallel, giving the team more flexibility to manage short-term compute demands, without reducing allocation on long-term projects.

AMD Robotics Starter Kit Kick-Starts the Intelligent Factory of the Future

Today AMD announced the Kria KR260 Robotics Starter Kit, the latest addition to the Kria portfolio of adaptive system-on-modules (SOMs) and developer kits. A scalable and out-of-the-box development platform for robotics, the Kria KR260 offers a seamless path to production deployment with the existing Kria K26 adaptive SOMs. With native ROS 2 support, the standard framework for robotics application development, and pre-built interfaces for robotics and industrial solutions, the new SOM starter kit enables rapid development of hardware-accelerated applications for robotics, machine vision and industrial communication and control.

"The Kria KR260 Robotics Starter Kit builds on the success of our Kria SOMs and KV260 Vision AI Starter Kit for AI and embedded developers, providing roboticists with a complete, out-of-the-box solution for this rapidly growing application space," said Chetan Khona, senior director of Industrial, Vision, Healthcare and Sciences Markets at AMD. "Roboticists will now be able to work in their standard development environment on a platform that has all the interfaces and capabilities needed to be up and running in less than an hour. The KR260 Starter Kit is an ideal platform to accelerate robotics innovation and easily take ideas to production at scale."

GrAI Matter Labs Unveils Sparsity-Native AI SoC

GrAI Matter Labs, a pioneer of brain-inspired ultra-low latency computing, announced today that it will be unveiling GrAI VIP, a full-stack AI system-on-chip platform, to partners and customers at GLOBAL INDUSTRIE, May 17th-20th, 2022. At GLOBAL INDUSTRIE, GML will demonstrate a live event-based, brain-inspired computing solution for purpose-built, efficient inference in a real-world application of robotics using the Life-Ready GrAI VIP chip. GrAI VIP is an industry-first near-sensor AI solution with 16-bit floating-point capability that achieves best-in-class performance with a low-power envelope. It opens up unparalleled applications that rely on understanding and transformations of signals produced by a multitude of sensors at the edge in Robotics, AR/VR, Smart Homes, Infotainment in automobiles and more.

"GrAI VIP is ready to deliver Life-Ready AI to industrial automation applications and revolutionize systems such as pick & place robots, cobots, and warehouse robots, as demonstrated at the show," said Ingolf Held, CEO of GrAI Matter Labs. "GrAI Matter Labs has a pipeline of over $1 million in pre-orders, and we are thrilled to enable our early-access partners and customers in industrial automation, consumer electronics, defence and more, with our GrAI VIP M.2 cards sampling today." "GML is targeting the $1 billion+ fast-growing market (20%+ per year) of endpoint AI with a unique approach backed by innovative technology," said Karl Freund, Founder and Principal Analyst at Cambrian-AI Research. "GML's 'Life-Ready' AI provides solutions that heretofore were simply impossible at such a low footprint and power." AI application developers looking for high-fidelity, low-latency responses for their edge algorithms can now get early access to the GrAI VIP platform and drive game-changing products in industrial automation, consumer electronics, and more.

MediaTek Unveils New AIoT Platform Stack and Introduces the Genio 1200 AIoT Chip

MediaTek today unveiled its new Genio platform for AIoT devices and introduced the first chip in the Genio family, the Genio 1200, designed for premium AIoT products. MediaTek Genio is a complete platform stack for the AIoT with powerful and ultra-efficient chipsets, open platform software development kits (SDKs) and a developer portal with comprehensive resources and tools. This all-in-one platform makes it easy for brands to develop innovative consumer, enterprise and industrial smart applications at the premium, mid-range and entry levels, and bring these devices to market faster. With MediaTek Genio, customers have access to all the hardware, software and resources needed to go from concept to design and manufacturing.

Customers can choose from a range of Genio chips to suit their product needs, and then use MediaTek's developer resources and the Yocto Linux open platform SDK to customize their designs. MediaTek also makes it easy for customers to access its partners' system hardware and software, and leverage partners' networks and sales channels. By offering an integrated, easy-to-use platform, MediaTek Genio reduces development costs and speeds up time to market, while providing long-term support for operating system updates and security patches that extend the product lifecycle. "Today MediaTek powers the most popular AIoT devices on the market. As the industry enters the next era of innovation, MediaTek's Genio platform delivers flexibility, scalability and development support brands need to cater to the latest market demands," said Jerry Yu, MediaTek Corporate Senior Vice President and General Manager of MediaTek's Computing, Connectivity and Metaverse Business Group. "We look forward to seeing the new user experiences brands bring to life with the Genio 1200 and its powerful AI capability, support for 4K displays and advanced imaging features."

Habana Labs Launches Second-generation AI Deep Learning Processors

Today at the Intel Vision conference, Habana Labs, an Intel company, announced its second-generation deep learning processors, the Habana Gaudi 2 Training and Habana Greco Inference processors. The processors are purpose-built for AI deep learning applications, are implemented in 7nm technology, and build upon Habana's high-efficiency architecture to provide customers with higher-performance model training and inferencing for computer vision and natural language applications in the data center. At Intel Vision, Habana Labs revealed that Gaudi2's training throughput for the ResNet-50 computer vision model and the BERT natural language processing model is twice that of the Nvidia A100-80GB GPU.

"The launch of Habana's new deep learning processors is a prime example of Intel executing on its AI strategy to give customers a wide array of solution choices - from cloud to edge - addressing the growing number and complex nature of AI workloads. Gaudi2 can help Intel customers train increasingly large and complex deep learning workloads with speed and efficiency, and we're anticipating the inference efficiencies that Greco will bring."—Sandra Rivera, Intel executive vice president and general manager of the Datacenter and AI Group

Tachyum Delivers the Highest AI and HPC Performance with the Launch of the World's First Universal Processor

Tachyum today launched the world's first universal processor, Prodigy, which unifies the functionality of a CPU, GPU and TPU in a single processor, creating a homogeneous architecture, while delivering massive performance improvements at a cost many times less than competing products.

After the company undertook its mission to conquer the processor performance plateau in nanometer-class chips and the systems they power, Tachyum has succeeded by launching its first commercial product. The Prodigy Cloud/AI/HPC supercomputer processor chip offers 4x the performance of the fastest Xeon, 3x more raw performance than NVIDIA's H100 on HPC, 6x more raw performance on AI training and inference workloads, and up to 10x performance at the same power. Prodigy is poised to overcome the challenges of increasing data center power consumption, low server utilization and stalled performance scaling.

Ayar Labs Raises $130 Million for Light-based Chip-to-Chip Communication

Ayar Labs, the leader in chip-to-chip optical connectivity, today announced that the company has secured $130 million in additional financing led by Boardman Bay Capital Management to drive the commercialization of its breakthrough optical I/O solution. Hewlett Packard Enterprise (HPE) and NVIDIA entered this investment round, joining existing strategic investors Applied Ventures LLC, GlobalFoundries, Intel Capital, and Lockheed Martin Ventures. Other new strategic and financial investors participating in the round include Agave SPV, Atreides Capital, Berkeley Frontier Fund, IAG Capital Partners, Infinitum Capital, Nautilus Venture Partners, and Tyche Partners. They join existing investors such as BlueSky Capital, Founders Fund, Playground Global, and TechU Venture Partners.

"As a successful technology-focused crossover fund operating for over a decade, Ayar Labs represents our largest private investment to date," said Will Graves, Chief Investment Officer at Boardman Bay Capital Management. "We believe that silicon photonics-based optical interconnects in the data center and telecommunications markets represent a massive new opportunity and that Ayar Labs is the leader in this emerging space with proven technology, a fantastic team, and the right ecosystem partners and strategy."

Polyn Technology Delivers NASP Test Chip for Tiny AI

Polyn Technology announced today that its first Neuromorphic Analog Signal Processor (NASP) chip is packaged and evaluated, demonstrating proof of the technology's brain-mimicking architecture. It is the first Tiny AI true analog design to be used next to sensors. Polyn Technology is an innovative provider of ultra-low-power, high-performance NASP technology and a producer of unique Tiny AI chips and their associated IP. "This achievement validates the intensive work of our multinational team," said Aleksandr Timofeev, CEO and founder of Polyn Technology. "Our chip represents the most advanced technology bridging analog computations and the digital core. It is designed with neuroscience in mind, replicating the pre-processing that the primary cortical area of the human brain does at the periphery before learning at the center."

The NASP chip enables full data processing disaggregation between the sensor node and the cloud; it truly embodies the Tiny AI concept. The NASP test chip contains several neural networks. The chip is implemented in 55 nm CMOS technology. Its design proves the NASP "neuron" model as well as the scalability of the technology and efficiency of the chip design automation tools developed by Polyn. "Our first chip is created from trained neural networks by NASP Compiler and synthesis tools that generated Netlist and the silicon engineering files from the software math model simulation. We will continue to refine our technology for creation of new generation chips," said Yaakov Milstain, COO of Polyn. Polyn anticipates the chip will be available to customers in the first quarter of 2023 as its first wearables product, with a fusion of PPG and IMU sensors for the most accurate heart rate measurement along with recognition and tracking of human activity.

Seagate and Phison Partner to Develop New Enterprise SSDs

Seagate Technology Holdings plc, a world leader in mass-data storage infrastructure solutions, and Phison Electronics Corp., a global leader in NAND flash controller and storage solutions, announced today plans to expand their SSD portfolio of next-gen high-performance, high-density enterprise NVMe SSDs. The new SSDs will help enterprises lower total cost of ownership (TCO) through increased storage density, lower power consumption, and higher performance. The companies also announced that they have entered a long-term partnership that will strengthen the development cycle and distribution of enterprise-class SSDs.

Seagate and Phison have collaborated on Seagate's mainstream SATA SSD products since 2017. That close cooperation has continued through the company's performance-leading line of FireCuda consumer gaming NVMe PCIe Gen4 x4 SSDs and the world's first purpose-built NAS NVMe SSDs. The partnership will now focus on meeting evolving global enterprise demand for denser, faster, and smarter storage infrastructure that complements HDD storage to enable comprehensive enterprise applications such as hyperscale data centers, high-performance computing and AI.

Microsoft Readies Security Feature-Update for Windows 11 Needing an OS Reinstall to Use

Microsoft is giving final touches to what it refers to as a groundbreaking new security feature update for the Windows 11 operating system, which should significantly improve application-level security and safeguard you from malicious apps based on the way they behave. Trouble is, to use the feature, you will have to reinstall your operating system (i.e. a clean reinstall) if you're on the current release of Windows 11, or any build older than the one that carries this update.

The Smart App Control feature by default blocks untrusted or uncertified applications from running on your PC, and unlike browser-level protections such as SmartScreen, is baked directly into the OS and monitors application code at a process level to detect potentially malicious application behavior. It does this using a combination of code-signing by the application publisher and an AI model for trust within the Microsoft cloud. The OS keeps in touch with the cloud 24x7 (whenever the PC is up) to receive the latest threat intelligence and AI model updates. It's very likely that Smart App Control will be part of the next significant version milestone of Windows 11 (such as "22H2"), which means everyone on 22H1 or older will have to reinstall to use it.
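The trust decision described above amounts to combining a signature check with a cloud reputation verdict. The sketch below is purely illustrative: the function name, the 0-1 trust score, and the threshold are assumptions for the sake of the example, not Microsoft's actual implementation.

```python
def should_allow(app_signed: bool, cloud_trust_score: float,
                 threshold: float = 0.9) -> bool:
    """Illustrative trust decision: a validly signed app runs; an unsigned
    app is allowed only if the cloud reputation model trusts it."""
    if app_signed:
        return True  # valid publisher code signature
    return cloud_trust_score >= threshold  # fall back to the cloud AI verdict

print(should_allow(True, 0.0))    # True  (signed, runs regardless of score)
print(should_allow(False, 0.95))  # True  (unsigned but cloud-trusted)
print(should_allow(False, 0.30))  # False (unsigned and untrusted: blocked)
```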

Oosto Launches Edge-Based Vision AI Appliance to Revolutionize the TCO Calculus of Video Analytics

Oosto announced today the launch of the Oosto Vision AI Appliance, a revolutionary near-edge device that delivers the power and security of Vision AI in a palm-sized device, allowing organizations to protect people, customers and assets more affordably while reducing IT complexity. The Vision AI Appliance is based on the NVIDIA Jetson Xavier NX system on module and is equipped with Oosto's state-of-the-art neural network models for video analytics that are optimized to support low-power devices. Significantly, this was achieved without compromising on superior performance, security or recognition accuracy which historically required compute-intensive GPU processing servers.

This announcement is part of Oosto's ongoing commitment to edge computing. In 2021, the company embedded its Vision AI algorithms on Ambarella's CV2x family of edge AI vision SoCs. Moving the workload to the chip level and to near-edge devices enables organizations to perform real-time video analysis. Significant processing power is required to analyze every frame of a video feed and perform a variety of instant computations to determine if the person entering a building is an authorized employee or on a watchlist (e.g., a VIP or security threat).

Seagate Announces 20TB Variant of SkyHawk AI Hard Drive

AI-enabled video imaging and analytics systems require support for heavy workloads that process and analyze multiple streams and thousands of hours of video data. Today during the ISC West trade show in Las Vegas, Nev., Seagate Technology Holdings plc, a world leader in mass-data storage infrastructure solutions, launched the new SkyHawk AI 20 TB hard disk drive (HDD). The mass-capacity HDD adds to the company's leading line of video imaging and analytics (VIA) devices.

Purposely designed for network video recorders (NVRs) enabled with AI for edge security applications, Seagate's SkyHawk AI 20 TB HDD intelligently adapts to the scale of the users' AI environment, supporting up to 64 HD video streams and 32 AI streams. With this capability to support scalable AI workloads, SkyHawk AI meets the growing needs of advanced VIA systems to analyze and record video footage while simultaneously supporting GPU analytics.

Google Uses Artificial Intelligence to Develop Faster and Smaller Hardware Accelerators

Designing Artificial Intelligence / Machine Learning hardware accelerators takes effort from hardware engineers working in conjunction with scientists in the AI/ML field itself. A few years ago, we started seeing AI incorporated into parts of electronic design automation (EDA) software tools, helping chip designers speed up the process of creating hardware. Until now, AI's role in EDA has mostly been limited to a couple of tasks, like placement and routing, and having those automated is a huge deal. However, the power of AI for chip design is not going to stop there. Researchers at Google and UC Berkeley have created a research project in which AI designs AI-tailored accelerators that are smaller and faster than anything humans have made.

In the published paper, the researchers present PRIME, a framework that creates AI processors based on a database of blueprints. The PRIME framework feeds off an offline database containing accelerator designs and their corresponding performance metrics (e.g., latency, power) to design next-generation hardware accelerators. According to Google, PRIME can do so without further hardware simulation and has processors ready for use. As per the paper, PRIME improves performance over state-of-the-art simulation-driven methods by as much as 1.2x-1.5x, while reducing the required total simulation time by 93% to 99%. The framework is also capable of architecting accelerators for unseen applications.
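The offline, data-driven idea behind PRIME can be sketched as choosing designs from a logged database of past simulations rather than launching new ones. The design parameters, database contents, and selection rule below are illustrative stand-ins, not PRIME's actual learned surrogate model:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Design:
    pe_count: int  # number of processing elements (illustrative parameter)
    sram_kb: int   # on-chip buffer size in KB (illustrative parameter)

# Offline database: designs that were simulated in the past,
# mapped to their measured latency in milliseconds.
database = {
    Design(64, 512): 9.1,
    Design(128, 512): 6.4,
    Design(128, 1024): 5.2,
    Design(256, 1024): 5.5,  # bigger is not always faster
}

def best_design(db: dict) -> Design:
    """Pick the lowest-latency design using only logged data,
    i.e. without running any new hardware simulation."""
    return min(db, key=db.get)

print(best_design(database))  # Design(pe_count=128, sram_kb=1024)
```

In the real framework, a learned model generalizes beyond the logged points; the example only captures the core shift from online simulation to offline data.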

HYTE Launches Y60 Mid-Tower PC Case - A New Angle on Design

[Editor's note: We have posted our HYTE Y60 review here.]

HYTE, the new PC components, peripherals, and lifestyle brand of iBUYPOWER, today launched the new Y60 mid-tower PC case. Taking a new angle on design, the Y60 boasts a style unlike any other case available on the market.

The uniquely constructed Y60 ATX case features a three-part, bezel-less, tempered glass front and side panel that provides an unobstructed internal view from a left, right, or center orientation. When the glass panels are removed, chamfered molding on the ceiling and floor of the case draws the eye inward, creating a modern aesthetic and allowing the system to be displayed in an open-air format. Users will have the option to choose from three colorways (white and black, black and black, and red and black) to best fit the aesthetic of their setup.

ASRock Industrial Announces New Range of Industrial Motherboards with 12th Gen Intel Core Processors

ASRock Industrial launches a new range of industrial motherboards powered by 12th Gen Intel Core Processors (Alder Lake-S) with up to 16 cores and 24 threads, supporting the new Intel 600 Series W680, Q670, and H610 chipsets. The boards feature high computing power with performance hybrid architecture and enhanced AI capabilities, plus rich I/O and expansion options: up to quad 4K@60 Hz displays, USB 3.2 Gen2x2 (20 Gbit/s), triple Intel 2.5 GbE LANs with real-time TSN, multiple M.2 Key M slots, ECC memory, TPM 2.0, and wide voltage support. The new series covers comprehensive form factors, including industrial Mini-ITX, Micro-ATX, and ATX motherboards for diverse applications, such as factory automation, kiosks, digital signage, smart cities, medical, and Edge AIoT applications.

Marvell Introduces Industry's First 800G Multimode Electro-Optics Platform for Cloud Data Centers

Marvell (NASDAQ: MRVL) today announced the industry's first 800 Gbps (8x 100 Gbps) multimode platform solution, which enables data center infrastructure to achieve dramatically higher speeds for short-reach optical modules and Active Optical Cable (AOC) applications. As artificial intelligence (AI), machine learning (ML) and high-performance computing (HPC) applications continue to drive greater bandwidth requirements, cloud-optimized solutions are needed that can bring lower power, latency and cost to short-range data center interconnections. The new 800G platform, which includes Marvell's PAM4 DSP with a multimode transimpedance amplifier (TIA) and driver, enables faster data center speeds scaling to 800 Gbps, using conventional cost-effective vertical-cavity surface-emitting laser (VCSEL) technology while accelerating time-to-market with plug-and-play deployment.

Today's data centers are packed with equipment utilizing optical modules or AOCs connected by multimode optical fiber optimized for communication over short distances within data centers. This 100G per lane multimode fiber provides cost-effective, low-power, short-reach connectivity. To support multi-gigabit transmissions, multimode architectures often use VCSEL transmitters, which offer low cost, reliability, power efficiency and easy deployment.

Intel Releases OpenVINO 2022.1 to Advance AI Inferencing for Developers

Since OpenVINO launched in 2018, Intel has enabled hundreds of thousands of developers to dramatically accelerate AI inferencing performance, starting at the edge and extending to the enterprise and the client. Today, ahead of MWC Barcelona 2022, the company launched a new version of the Intel Distribution of OpenVINO Toolkit. New features are built upon three-and-a-half years of developer feedback and include a greater selection of deep learning models, more device portability choices and higher inferencing performance with fewer code changes.

"The latest release of OpenVINO 2022.1 builds on more than three years of learnings from hundreds of thousands of developers to simplify and automate optimizations. The latest upgrade adds hardware auto-discovery and automatic optimization, so software developers can achieve optimal performance on every platform. This software plus Intel silicon enables a significant AI ROI advantage and is deployed easily into the Intel-based solutions in your network," said Adam Burns, vice president, OpenVINO Developer Tools in the Network and Edge Group.

Intel Updates Technology Roadmap with Data Center Processors and Game Streaming Service

At Intel's 2022 Investor Meeting, Chief Executive Officer Pat Gelsinger and Intel's business leaders outlined key elements of the company's strategy and path for long-term growth. Intel's long-term plans will capitalize on transformative growth during an era of unprecedented demand for semiconductors. Among the presentations, Intel announced product roadmaps across its major business units and key execution milestones, including Accelerated Computing Systems and Graphics, Intel Foundry Services, Software and Advanced Technology, Network and Edge, and Technology Development. For more from Intel's Investor Meeting 2022, including the presentations and news, please visit the Intel Newsroom and Intel.com's Investor Meeting site.