News Posts matching #TensorFlow


Intel Announces Broadest Product Portfolio for Moving, Storing, and Processing Data

On Tuesday, Intel unveiled a new portfolio of data-centric solutions consisting of 2nd-Generation Intel Xeon Scalable processors, Intel Optane DC memory and storage solutions, and software and platform technologies optimized to help its customers extract more value from their data. Intel's latest data center solutions target a wide range of use cases within cloud computing, network infrastructure and intelligent edge applications, and support high-growth workloads, including AI and 5G.

Building on more than 20 years of world-class data center platforms and deep customer collaboration, Intel's data center solutions target server, network, storage, internet of things (IoT) applications and workstations. The portfolio of products advances Intel's data-centric strategy to pursue a massive $300 billion data-driven market opportunity.

Micron 5210 ION SSD Now Generally Available

Micron Technology, Inc., today announced the next step toward market leadership for its quad-level cell (QLC) NAND technology: immediate broad market availability of the Micron 5210 ION enterprise SATA SSD, the world's first QLC SSD, which began shipping to select customers and partners in May of this year. Now available through global distributors, the 5210 ION extends Micron's lead in the QLC market, enabling the replacement of hard disk drives (HDDs) with SSDs and building on Micron's recent launch of the Crucial P1 NVMe QLC SSD for consumer markets.

Enterprise storage needs are increasing as data center applications deliver real-time user insights and intelligent and enhanced user experiences, leveraging artificial intelligence (AI), machine learning, big data and real-time analytics. At the same time, there is a growing consumer need for higher storage capacity to support digital experiences. QLC SSDs are uniquely designed to address these requirements.

QNAP Introduces the TS-2888X AI-ready NAS

QNAP Systems, Inc. introduces the brand-new TS-2888X AI-Ready NAS, an all-in-one AI solution combining robust storage with a ready-to-use software environment that simplifies AI workflows with high cost-efficiency. Built on next-gen Intel Xeon W processors with up to 18 cores and employing a hybrid storage architecture with eight hard drives and twenty high-performance SSDs (including four U.2 SSDs), the TS-2888X also supports up to four high-end graphics cards and runs QNAP's AI developer package "QuAI". The TS-2888X packs everything required for machine learning, helping organizations quickly and easily implement AI applications.

"Compared with typical AI workstations, the TS-2888X combines high-performance computing with huge-capacity storage to greatly reduce latency, accelerate data transfer, and eliminate performance bottlenecks caused by network connectivity," said David Tsao, Product Manager of QNAP. "Integrating AI-focused hardware and software reduces the time and complexity of implementing and managing AI tasks, making the TS-2888X the ideal AI solution for most organizations."

AMD and Xilinx Announce a New World Record for AI Inference

At today's Xilinx Developer Forum in San Jose, Calif., Xilinx CEO Victor Peng was joined by AMD CTO Mark Papermaster for a Guinness. But not the kind that comes in a pint - the kind that comes in a record book. The companies revealed that AMD and Xilinx have been working jointly to connect AMD EPYC CPUs with the new Xilinx Alveo line of acceleration cards for high-performance, real-time AI inference processing. To back it up, they revealed a world-record inference throughput of 30,000 images per second!

The impressive system, which will be featured in the Alveo ecosystem zone at XDF today, leverages two AMD EPYC 7551 server CPUs with their industry-leading PCIe connectivity, along with eight of the freshly announced Xilinx Alveo U250 acceleration cards. The inference performance is powered by Xilinx ML Suite, which allows developers to optimize and deploy accelerated inference and supports numerous machine learning frameworks such as TensorFlow. The benchmark was performed on GoogLeNet, a widely used convolutional neural network.
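Throughput figures like this are typically obtained by timing batched inference over a fixed set of images and dividing the image count by the elapsed time. As an illustration only (the actual Xilinx ML Suite APIs are not shown here, and `infer_fn` is a hypothetical stand-in for any framework's inference call), a minimal Python sketch of such a measurement might look like:

```python
import time

def measure_throughput(infer_fn, batches, batch_size):
    """Time infer_fn over all batches and return images processed per second.

    infer_fn is a placeholder for a real inference call (e.g. a TensorFlow
    model invocation); batches is any sequence of input batches.
    """
    start = time.perf_counter()
    for batch in batches:
        infer_fn(batch)
    elapsed = time.perf_counter() - start
    return (len(batches) * batch_size) / elapsed

if __name__ == "__main__":
    # Dummy inference function that simulates ~1 ms of work per batch.
    dummy = lambda batch: time.sleep(0.001)
    ips = measure_throughput(dummy, batches=[None] * 50, batch_size=8)
    print(f"{ips:.0f} images/sec")
```

The same harness works for any backend: swap `dummy` for a real model call and feed it preprocessed image batches.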

VIA Launches ALTA DS 3 Edge AI System Powered by Qualcomm Snapdragon 820E

VIA Technologies, Inc., today announced the launch of the VIA ALTA DS 3 Edge AI system. Powered by the Qualcomm Snapdragon 820E Embedded Platform, the system enables the rapid development and deployment of intelligent signage, kiosk, and access control devices that require real-time image and video capture, processing, and display capabilities.

The VIA ALTA DS 3 harnesses the cutting-edge compute, graphics, and AI processing capabilities of the Qualcomm Snapdragon 820E Embedded Platform to facilitate the creation of vibrant new user experiences by allowing customers to combine their own AI applications with immersive multimedia signage display content in a compact, low-power system.

The Laceli AI Compute Stick is Here to Compete Against Intel's Movidius

Gyrfalcon Technology Inc., an emerging AI chip maker based in Silicon Valley, has launched its Laceli AI Compute Stick, following Intel Movidius' announcement of its deep learning Neural Compute Stick in July of last year. Built around the company's first ultra-low-power, high-performance AI processor, the Lightspeeur 2801S, the Laceli AI Compute Stick delivers 2.8 TOPS within 0.3 W of power, roughly 90 times the efficiency of the Movidius USB stick (0.1 TOPS within 1 W of power).
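The efficiency claim follows directly from the quoted specs, using TOPS per watt as the metric:

```python
# Efficiency comparison from the quoted specs (TOPS per watt).
laceli_tops, laceli_watts = 2.8, 0.3       # Laceli AI Compute Stick
movidius_tops, movidius_watts = 0.1, 1.0   # Movidius Neural Compute Stick

laceli_eff = laceli_tops / laceli_watts        # ~9.33 TOPS/W
movidius_eff = movidius_tops / movidius_watts  # 0.10 TOPS/W
ratio = laceli_eff / movidius_eff              # ~93x, matching the "roughly 90 times" claim

print(f"Laceli: {laceli_eff:.2f} TOPS/W, Movidius: {movidius_eff:.2f} TOPS/W, ratio: {ratio:.0f}x")
```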

Lightspeeur is based on Gyrfalcon Technology's APiM architecture, which uses memory itself as the AI processing unit, eliminating the large data movements that drive high power consumption. The architecture features true on-chip parallelism and in-situ computing, removing memory bottlenecks: it has roughly 28,000 parallel computing cores and does not require external memory for AI inference.