Baidu Inc. today announced Kunlun, China's first cloud-to-edge AI chip, built to meet the high performance requirements of a wide variety of AI scenarios. The announcement includes the training chip "818-300" and the inference chip "818-100". Kunlun can be applied to both cloud and edge scenarios, such as data centers, public clouds, and autonomous vehicles.
Kunlun is a high-performance and cost-effective solution for the high processing demands of AI. It leverages Baidu's AI ecosystem, which includes AI scenarios like search ranking and deep learning frameworks like PaddlePaddle. Baidu's years of experience in optimizing the performance of these AI services and frameworks afforded the company the expertise required to build a world class AI chip.
In 2011, Baidu started developing an FPGA-based AI accelerator for deep learning and began using GPUs in data centers. Kunlun, which is made up of thousands of small cores, offers computational capability nearly 30 times that of the original FPGA-based accelerator. Other key specifications include Samsung's 14 nm process, 512 GB/s of memory bandwidth, and 260 TOPS of throughput at 100 watts of power consumption.
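As a quick illustration of what those headline figures imply, the sketch below derives two common accelerator metrics from the quoted specifications alone. The performance-per-watt and roofline-style ops-per-byte ratios are our own back-of-the-envelope derivations, not numbers published by Baidu.

```python
# Back-of-the-envelope metrics from the quoted Kunlun specs.
# Illustrative only: derived solely from the numbers in this article.

peak_tops = 260    # peak throughput, tera-operations per second
power_watts = 100  # quoted power consumption
mem_bw_gbs = 512   # memory bandwidth, GB/s

# Performance per watt, a common way to compare AI accelerators.
tops_per_watt = peak_tops / power_watts
print(f"Efficiency: {tops_per_watt:.1f} TOPS/W")  # 2.6 TOPS/W

# Operations per byte of memory traffic: the arithmetic intensity a
# workload would need to keep the compute units busy (roofline-style).
ops_per_byte = (peak_tops * 1e12) / (mem_bw_gbs * 1e9)
print(f"Ops per byte of bandwidth: {ops_per_byte:.0f}")  # ~508
```

The high ops-per-byte ratio suggests that, like most dense AI accelerators, the chip would favor workloads with heavy data reuse (e.g. large matrix multiplications) over bandwidth-bound ones.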
In addition to supporting common open-source deep learning algorithms, the Kunlun chip supports a wide variety of AI applications, including voice recognition, search ranking, natural language processing, autonomous driving, and large-scale recommendations.
With the rapid emergence of AI applications, dramatically increasing demands are being placed on computational power. Traditional chips limit how much computing power is available and thus how far AI technologies can advance. Baidu developed this chip, designed specifically for large-scale AI workloads, in answer to that demand, and believes it will enable significant advancements in the open AI ecosystem.
Baidu plans to continue to iterate upon this chip, developing it progressively to enable the expansion of an open AI ecosystem. As part of this, Baidu will continue to create "chip power" to meet the needs of various fields including intelligent vehicles, intelligent devices, voice recognition and image recognition.
View at TechPowerUp Main Site