News Posts matching "FPGA"


Intel to Acquire eASIC to Bolster FPGA Talent and Solutions

Intel is competing to win in the largest-ever addressable market for silicon, which is being driven by the explosion of data and the need to process, analyze, store and share it. This dynamic is fueling demand for computing solutions of all kinds. Of course Intel is known for world-class CPUs, but today we offer a broader range of custom computing solutions to help customers tackle all kinds of workloads - in the cloud, over the network and at the edge. In recent years, Intel has expanded its products and introduced breakthrough innovations in memory, modems, purpose-built ASICs, vision processing units and field programmable gate arrays (FPGAs).

FPGAs are experiencing expanding adoption due to their versatility and real-time performance. These devices can be programmed anytime - even after equipment has been shipped to customers. FPGAs contain a mixture of logic, memory and digital signal processing blocks that can implement any desired function with extremely high throughput and very low latency. This makes FPGAs ideal for many critical cloud and edge applications, and Intel's Programmable Solutions Group revenue has grown by double digits as customers use FPGAs to accelerate artificial intelligence, among other applications.
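The throughput-versus-latency behavior described above comes from deep pipelining: once a pipelined datapath is full, it emits one result per clock cycle regardless of how many stages each sample passes through. A toy software model (a sketch for intuition, not real FPGA tooling) makes the difference concrete:

```python
# Toy model of a pipelined FPGA datapath vs. a non-pipelined one.
# Once the pipeline is full, one result emerges per clock cycle
# regardless of pipeline depth.

def pipeline_cycles(num_samples: int, depth: int) -> int:
    """Cycles to push num_samples through a `depth`-stage pipeline."""
    return depth + (num_samples - 1)   # fill latency + 1 result/cycle

def sequential_cycles(num_samples: int, depth: int) -> int:
    """Cycles if each sample finishes all stages before the next starts."""
    return depth * num_samples

if __name__ == "__main__":
    n, stages = 1_000_000, 8
    print(pipeline_cycles(n, stages))    # 1000007 cycles
    print(sequential_cycles(n, stages))  # 8000000 cycles
```

For a million samples through an 8-stage pipeline, the pipelined datapath needs barely more than one cycle per sample, which is why fixed-function FPGA pipelines deliver both high throughput and low, deterministic latency.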

Baidu Unveils 'Kunlun' High-Performance AI Chip

Baidu Inc. today announced Kunlun, China's first cloud-to-edge AI chip, built to accommodate the high-performance requirements of a wide variety of AI scenarios. The announcement includes the training chip "818-300" and the inference chip "818-100". Kunlun can be applied to both cloud and edge scenarios, such as data centers, public clouds and autonomous vehicles.

Kunlun is a high-performance and cost-effective solution for the high processing demands of AI. It leverages Baidu's AI ecosystem, which includes AI scenarios like search ranking and deep learning frameworks like PaddlePaddle. Baidu's years of experience in optimizing the performance of these AI services and frameworks afforded the company the expertise required to build a world class AI chip.

Samsung Doubles its HBM2 Output, May Still Fall Short of Demand

Samsung has reportedly doubled its manufacturing output of HBM2 (high-bandwidth memory 2) stacks. Despite this, the company may still fall short of the demand for HBM2, according to HPC expert Glenn K. Lockwood, tweeting from ISC 2018, the annual HPC industry event held from June 24th to 28th in Frankfurt. There, Samsung presented its 2nd-generation "Aquabolt" HBM2 memory, which is up to 8 times faster than GDDR5, delivering up to 307 GB/s of bandwidth from a single stack.
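The 307 GB/s per-stack figure is consistent with back-of-the-envelope arithmetic from Samsung's published Aquabolt specs (2.4 Gbps per pin across a 1024-bit stack interface); the single GDDR5 chip used as a comparison point below is an assumption for illustration:

```python
# Back-of-the-envelope check of the per-stack HBM2 bandwidth claim.
# Aquabolt spec: 2.4 Gbps per pin across a 1024-bit stack interface.

aquabolt_pin_gbps = 2.4      # per-pin data rate (Gbps)
hbm2_bus_width    = 1024     # bits per stack
hbm2_gbs = aquabolt_pin_gbps * hbm2_bus_width / 8   # GB/s per stack

# Assumed comparison point: one 32-bit GDDR5 chip at 8 Gbps per pin.
gddr5_gbs = 8.0 * 32 / 8

print(f"HBM2 stack: {hbm2_gbs:.1f} GB/s")    # 307.2 GB/s
print(f"GDDR5 chip: {gddr5_gbs:.1f} GB/s")   # 32.0 GB/s
print(f"ratio: {hbm2_gbs / gddr5_gbs:.1f}x") # 9.6x
```

The exact multiplier versus GDDR5 depends on which GDDR5 configuration one assumes, which is why Samsung's marketing says "up to 8 times faster."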

While HBM2 is uncommon on consumer graphics cards (barring AMD's flagship Radeon RX Vega series and NVIDIA's TITAN V), the memory type is in high demand for HPC accelerators that are mostly GPU-based, such as the AMD Radeon Instinct series and NVIDIA Tesla. The HPC industry itself is riding the gold rush of AI research based on deep learning and neural nets. FPGAs, chips that can be reprogrammed for specific applications, are the other class of devices soaking up HBM2 inventories. High demand, coupled with high DRAM prices, could keep HBM2 too expensive for mainstream client applications.

NVIDIA G-Sync HDR Module Adds $500 to Monitor Pricing

PCPer had the opportunity to disassemble the ASUS ROG Swift PG27UQ, a 27" 4K 144 Hz G-Sync HDR monitor, and found that the G-Sync module is a newer version than the one used on 1st-generation G-Sync monitors (which of course do not support 4K / 144 Hz / HDR). The module is powered by an FPGA made by Altera (Intel-owned since 2015). The exact model is the Arria 10 GX 480, a high-performance 20-nanometer SoC that provides enough bandwidth and LVDS pins to process the data stream.

The FPGA is sold in low quantities for $2,000 at Digikey and Mouser. Assuming that NVIDIA buys thousands, PCPer suggests that this chip alone adds $500 to the monitor's cost. The BOM cost is further increased by 3 GB of DDR4 memory on the module. Combined with licensing fees for G-Sync, this explains why these monitors are so expensive.
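PCPer's $500 estimate implicitly assumes a steep volume discount off the single-unit price; the discount rate in this sketch is purely a hypothetical assumption, not a quoted figure:

```python
# Rough sketch of PCPer's reasoning: volume pricing on the Arria 10 GX 480.
# The 75% volume discount is an illustrative assumption, not a quoted figure.

list_price_usd  = 2000    # single-unit price at Digikey/Mouser
volume_discount = 0.75    # assumed discount when buying thousands of units

volume_price = list_price_usd * (1 - volume_discount)
print(f"Estimated FPGA cost per monitor: ${volume_price:.0f}")  # $500
```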

Say Hello to the Next Generation of Arduino Boards, Introducing FPGA Solutions

We're excited to kick off Maker Faire Bay Area by expanding our IoT lineup with two new boards: the MKR Vidor 4000 and the Uno WiFi Rev 2.

The MKR Vidor 4000 is the first-ever Arduino based on an FPGA chip, equipped with a SAM D21 microcontroller, a u-blox Nina W102 WiFi module, and an ECC508 crypto chip for secure connection to local networks and the Internet. MKR Vidor 4000 is the latest addition to the MKR family, designed for a wide range of IoT applications, with its distinctive form factor and substantial computational power for high performance. The board will be coupled with an innovative development environment, which aims to democratize and radically simplify access to the world of FPGAs.

Intel Stratix 10: Capable of 10 Trillion Calculations per Second

(Editor's Note: Intel says the Stratix 10 contains some 30 billion transistors - more than triple, they say, the amount found in today's fastest desktop CPUs. They would be the ones to know, since Intel stopped disclosing transistor counts for its CPUs some time ago. The amount of data these FPGAs can process in a single second is nothing short of mind-blowing, though: Intel says they can process the data equivalent of 420 Blu-ray Discs... in just one second. If that doesn't spell an unimaginable future in terms of processing power, I don't know what does.)

Because of the Intel Stratix 10's unique design, it can whip through calculations at blinding speeds - often 10 to 100 times faster than the chips in consumer devices. Intel Stratix 10 FPGAs - the latest version came out in February - are capable of 10 TFLOPS, or 10 trillion floating point operations per second. The Stratix 10 is the fastest chip of its kind in the world.
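Intel's "420 Blu-ray Discs in one second" figure from the note above can be sanity-checked with simple arithmetic; the 50 GB dual-layer disc capacity is an assumption (a 25 GB single-layer disc would halve the result):

```python
# Sanity check of Intel's "420 Blu-ray Discs in one second" claim,
# assuming a 50 GB dual-layer disc (25 GB single-layer halves this).

discs_per_second = 420
gb_per_disc      = 50    # dual-layer Blu-ray capacity (assumption)

throughput_tb_s = discs_per_second * gb_per_disc / 1000
print(f"{throughput_tb_s:.1f} TB/s")  # 21.0 TB/s
```

That works out to roughly 21 TB of data per second, which is aggregate I/O across the whole device rather than a single memory interface.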

Intel's Ice Lake Xeon Processor Details Leaked: LGA 4189, 8-Channel Memory

The Power Stamp Alliance (PSA) has posted some details on Intel's upcoming high-performance 10 nm architecture. Code-named Ice Lake, the Xeon parts of this design will apparently usher in yet another new socket (LGA 4189, compared to the LGA 3647 socket used by current Skylake-SP and upcoming Cascade Lake designs). TDP is also shown as increasing with Intel's Ice Lake designs, at "up to" 230 W TDP - more than the Skylake or Cascade Lake-based platforms, which just screams higher core counts (and other features such as OmniPath or on-package FPGAs).

Digging a little deeper into the documentation released by the PSA shows Intel's Ice Lake natively supporting 8-channel memory as well, which makes sense considering the ever-growing needs in both memory capacity and throughput. More than an interesting, unexpected development, it's a sign of the times.
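As a rough illustration of what two extra channels buy over the 6-channel Skylake-SP platform, peak bandwidth scales linearly with channel count. The DDR4-2933 data rate below is an assumption for the sketch (Ice Lake-SP's supported memory speed was not public at the time):

```python
# Illustrative peak-bandwidth comparison: 6 vs 8 DDR4 channels.
# The DDR4-2933 data rate is an assumed figure for this sketch.

mt_per_s       = 2933    # mega-transfers per second per channel (assumed)
bytes_per_xfer = 8       # 64-bit data channel

def peak_gb_s(channels: int) -> float:
    return channels * mt_per_s * bytes_per_xfer / 1000

print(f"6-channel: {peak_gb_s(6):.1f} GB/s")  # 140.8 GB/s
print(f"8-channel: {peak_gb_s(8):.1f} GB/s")  # 187.7 GB/s
```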

Intel FPGAs Accelerate Artificial Intelligence for Deep Learning in Microsoft's Bing

Artificial intelligence (AI) is transforming industries and changing how data is managed, interpreted and, most importantly, used to solve real problems for people and businesses faster than ever.

Today's Microsoft Bing Intelligent Search news demonstrates how Intel FPGA (field programmable gate array) technology is powering some of the world's most advanced AI platforms. Advances to the Bing search engine with real-time AI will help people do more and learn more by going beyond standard search results. Bing Intelligent Search will provide answers instead of web pages, and enable a system that understands words and the meaning behind them, as well as the context and intent of a search.

Xilinx Unveils Their Revolutionary Adaptive Compute Acceleration Platform

Xilinx, Inc., the leader in adaptive and intelligent computing, today announced a new breakthrough product category called adaptive compute acceleration platform (ACAP) that goes far beyond the capabilities of an FPGA. An ACAP is a highly integrated multi-core heterogeneous compute platform that can be changed at the hardware level to adapt to the needs of a wide range of applications and workloads. An ACAP's adaptability, which can be exercised dynamically during operation, delivers levels of performance and performance per watt that are unmatched by CPUs or GPUs.

An ACAP is ideally suited to accelerate a broad set of applications in the emerging era of big data and artificial intelligence. These include: video transcoding, database, data compression, search, AI inference, genomics, machine vision, computational storage and network acceleration. Software and hardware developers will be able to design ACAP-based products for end point, edge and cloud applications. The first ACAP product family, codenamed "Everest," will be developed in TSMC 7nm process technology and will tape out later this year.

Intel Unveils Discrete GPU Prototype Development

Intel is making progress in its development of a new discrete GPU architecture, after its failed attempt with "Larrabee" (which ended up as an HPC accelerator) and ancient attempts such as the i740. This comes in the wake of the company's high-profile hiring of Raja Koduri, AMD's former Radeon Technologies Group (RTG) head. At the IEEE International Solid-State Circuits Conference (ISSCC) in San Francisco, the company unveiled slides pointing to the direction in which its GPU development is headed. That direction is essentially scaling up its existing iGPU architecture and bolstering it with mechanisms to better sustain high clock speeds.

The company's first 14 nm dGPU prototype, shown as a test-chip at the ISSCC, is a 2-chip solution. The first chip contains two key components, the GPU itself and a system agent; the second chip is an FPGA that interfaces with the system bus. The GPU component, as it stands now, is based on Intel's Gen 9 architecture, and features three execution unit (EU) clusters. Don't derive numbers from this yet, as Intel is only trying to demonstrate a proof of concept. The three clusters are wired to a sophisticated management mechanism that efficiently controls the power and clock speed of each individual EU. There's also a double-clock mechanism that doubles clock speeds (of the boost state) beyond what today's Gen 9 EUs can handle on Intel iGPUs. Once a suitable level of energy efficiency is achieved, Intel will use newer generations of EUs, and scale up EU counts taking advantage of newer fab processes, to develop bigger discrete GPUs.

Intel Unveils Industry's First FPGA Integrated with HBM - Built for Acceleration

Intel today announced the availability of the Intel Stratix 10 MX FPGA, the industry's first field programmable gate array (FPGA) with integrated High Bandwidth Memory DRAM (HBM2). By integrating the FPGA and the HBM2, Intel Stratix 10 MX FPGAs offer up to 10 times the memory bandwidth when compared with standalone DDR memory solutions. These bandwidth capabilities make Intel Stratix 10 MX FPGAs the essential multi-function accelerators for high-performance computing (HPC), data centers, network functions virtualization (NFV), and broadcast applications that require hardware accelerators to speed up mass data movements and stream data pipeline frameworks.
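The "up to 10 times" claim depends on which DDR configuration is used as the baseline. A rough comparison, taking the widely reported 512 GB/s aggregate HBM2 figure for Stratix 10 MX and assuming DDR4-2666 with 64-bit channels (both assumptions for this sketch):

```python
# Illustrative comparison of integrated HBM2 vs. conventional DDR4 bandwidth.
# 512 GB/s is the reported aggregate HBM2 figure for Stratix 10 MX;
# the DDR4 configurations below are assumptions for this sketch.

hbm2_aggregate_gb_s = 512
ddr4_channel_gb_s   = 2666 * 8 / 1000   # DDR4-2666, one 64-bit channel

for channels in (1, 2, 4):
    ddr = channels * ddr4_channel_gb_s
    print(f"{channels} ch DDR4: {ddr:5.1f} GB/s -> "
          f"HBM2 is {hbm2_aggregate_gb_s / ddr:4.1f}x")
    # e.g. "1 ch DDR4:  21.3 GB/s -> HBM2 is 24.0x"
```

Against one to four DDR4 channels the advantage ranges from roughly 24x down to 6x, so a claim of "up to 10x" is plausible for a typical standalone-DDR board layout.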

In HPC environments, the ability to compress and decompress data before or after mass data movements is paramount. HBM2-based FPGAs can compress and accelerate larger data movements compared with stand-alone FPGAs. In High-Performance Data Analytics (HPDA) environments, streaming data pipeline frameworks like Apache Kafka and Apache Spark Streaming require real-time hardware acceleration. Intel Stratix 10 MX FPGAs can simultaneously read/write data and encrypt/decrypt data in real time without burdening the host CPU's resources.

Fujitsu's WAN Acceleration Technology Delivers Transfer Speeds Up to 40 Gbps

Fujitsu Laboratories Ltd. today announced the development of WAN acceleration technology that can deliver transfer speeds up to 40Gbps for migration of large volumes of data between clouds, using servers equipped with field-programmable gate arrays (FPGAs).

Connections in wide area networks (WANs) between clouds are moving from 1Gbps lines to 10Gbps lines, but with the recent advance of digital technology, including IoT and AI, there is an even greater demand for faster high-speed data transfers as huge volumes of data are collected in the cloud. Until now the effective transfer speed of WAN connections has been raised using techniques to reduce the volume of data, such as compression and deduplication. However, with WAN lines of 10Gbps there are enormous volumes of data to be processed, and existing WAN acceleration technologies usable in cloud servers have not been able to sufficiently raise the effective transfer rate.
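The deduplication technique mentioned above can be sketched in software: split the stream into chunks, hash each chunk, and send only chunks that have not been seen before, replacing repeats with references. This is a minimal model; real WAN optimizers (and Fujitsu's FPGA offload) use content-defined chunking and hardware hashing for line-rate operation:

```python
import hashlib

def deduplicate(stream: bytes, chunk_size: int = 4096):
    """Split `stream` into fixed-size chunks; return (unique_chunks, refs).

    Only previously unseen chunks would be sent over the WAN; repeats
    are replaced by a reference to an earlier chunk. A minimal model --
    real WAN accelerators use content-defined chunking instead.
    """
    seen = {}                 # chunk digest -> index into `unique`
    unique, refs = [], []
    for i in range(0, len(stream), chunk_size):
        chunk = stream[i:i + chunk_size]
        digest = hashlib.sha256(chunk).digest()
        if digest not in seen:
            seen[digest] = len(unique)
            unique.append(chunk)
        refs.append(seen[digest])
    return unique, refs

if __name__ == "__main__":
    data = b"A" * 8192 + b"B" * 4096 + b"A" * 4096   # 16 KB, repetitive
    unique, refs = deduplicate(data)
    sent = sum(len(c) for c in unique)
    print(f"sent {sent} of {len(data)} bytes")  # sent 8192 of 16384 bytes
```

On this repetitive 16 KB example only half the bytes cross the wire; the effective transfer rate scales with how much redundancy the stream contains, which is exactly the gain hardware offload preserves at 10 Gbps and beyond.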

Intel Reports Q3-2017 Financial Results

Intel Corporation today reported third-quarter 2017 financial results. "We executed well in the third quarter with strong results across the business, and we're on track to a record year," said Brian Krzanich, Intel CEO. "I'm excited about our progress and our future. Intel's product line-up is the strongest it has ever been with more innovation on the way for artificial intelligence, autonomous driving and more." "In the third quarter, we delivered record earnings, exceeded our EPS expectations, and increased our profit expectations for the full year," said Bob Swan, Intel CFO. "We feel great about Intel's transformation and where we are nine months into our three year plan." In the third quarter, the company generated approximately $6.3 billion in cash from operations, paid dividends of $1.3 billion, and used $1.1 billion to repurchase 31 million shares of stock.

In the third quarter, Intel saw strength across the business. The data center, Internet of Things and memory businesses all achieved record quarterly revenue, and Intel extended its performance leadership with the launches of 8th Gen Intel Core and Intel Xeon Scalable processors. Intel's FPGA business, the Programmable Solutions Group, is experiencing strong momentum, winning designs with automotive and cloud service provider customers that advance Intel's position in artificial intelligence. The company also furthered its autonomous driving efforts with exciting customer wins and the completion of the Mobileye tender offer, four months earlier than expected.

Intel Starts Shipping its High-End Stratix 10 SX FPGA

Intel today announced it has begun shipping its Intel Stratix 10 SX FPGA - the only high-end FPGA family with an integrated quad-core ARM Cortex-A53 processor. With densities greater than 1 million logic elements (MLE), Intel Stratix 10 SX FPGAs provide the flexibility and low-latency benefit of integrating an ARM processor with a high-performance, high-density FPGA needed to tackle the design challenges of next-generation, high-performance systems.

By integrating the FPGA and the ARM processor, Intel Stratix 10 SX FPGAs provide an ideal solution for 5G wireless communication, software defined radios, secure computing for military applications, network function virtualization (NFV) and data center acceleration.

Intel's 10 nm Technology Bound for FPGAs First; Wafer Showcased

Intel is undoubtedly at the forefront of silicon processing technology these days, and has been for a long time. Being a fully integrated company from the bottom up - through the design and actual production of its silicon semiconductors - really does have a way of either paying off tremendously (as has been the case with Intel) or not at all (as was the case with AMD). It has been true for a while now that fabrication processes' nm ratings don't mean much in the industry: different companies use different calculations to arrive at a 22 nm or 14 nm claim, with some components in one company's process being almost double the size of the same components in a competitor's equivalent. Intel has always been one of the more adamant defenders of an industry-wide categorization, both to avoid confusion and - naturally - to put its process leadership into perspective.