Tuesday, November 14th 2017

AMD Expands EPYC Availability, Introduces ROCm 1.7 With TensorFlow Support

AMD has been steadily increasing output and availability of its latest take on the server market with its EPYC CPUs. These are 32-core, 64-thread monsters that deliver a better feature set in 1P configurations than even some of Intel's 2P setups, and reception for these AMD processors has been pretty warm as a result. The use of an MCM design to create a 4-way cluster of small 8-core processor packages has allowed AMD to improve yields with minimal retooling and changes to its manufacturing lines, which in turn has boosted margins for a company that sorely needed a breakout product.
Now, AMD is announcing that it's pushing wider availability of EPYC-powered solutions: OEMs, distributors, and system integrators are expanding their EPYC-touting portfolios. ASUS, BOXX, GIGABYTE, HPE (Hewlett Packard Enterprise), Penguin Computing, Supermicro, and Tyan all have some manner of new products they are introducing to market: ASUS is betting heavily on HPC and virtualization through its RS720A-E9 and RS700A-E9 1U/2U servers; BOXX chose to combine EPYC CPUs with Radeon Instinct accelerators for multi-GPU and deep learning compute solutions; GIGABYTE is forging ahead with rackmount servers; and Supermicro is moving from its well-known tower form factors to 1U, 2U, and 4U designs for HPC and storage.
Hardware, however, tends to be propped up and allowed to reach new heights by its software. This is why AMD has also doubled down on its ROCm (Radeon Open Compute platform) efforts for high performance computing and open standards with the latest 1.7 version. Besides the usual coding improvements (version 1.7 of AMD's ROCm adds multi-GPU support for the latest hardware), there's also added support for the TensorFlow and Caffe machine learning frameworks in the MIOpen libraries. TensorFlow is a huge deal right now in the machine learning field, as you know: NVIDIA's Volta architecture doubled down on Tensor Cores, for one, and Google's own efforts in the field revolve around TensorFlow.