Machine Learning — TensorFlow
Artificial intelligence is everywhere these days. Machine-learning algorithms are taking the grunt work out of many tasks that previously could only be performed by humans. Before a deep-learning model can solve problems, it must be trained: a large set of training data is evaluated repeatedly to build up a neural network that can later be put to work, a step called inference. Google's Python-based TensorFlow is one of the most popular machine-learning packages and supports both CPUs and GPUs. Because setting up TensorFlow for the GPU is a bit complicated, a lot of algorithm development and training on small data sets still happens on the CPU. CPU training can also beat the GPU when problem sizes exceed typical GPU memory capacities.
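The train-then-infer workflow described above can be sketched with a tiny TensorFlow 2.x model. This is a minimal illustration, assuming the `tensorflow` package is installed; the synthetic data, model, and hyperparameters are all made up for the example:

```python
import numpy as np
import tensorflow as tf

# TensorFlow runs on the CPU automatically when no GPU is configured.
print("GPUs visible:", tf.config.list_physical_devices("GPU"))

# Synthetic training data: learn y = 2x + 1 with a little noise.
x = np.random.rand(256, 1).astype("float32")
y = 2.0 * x + 1.0 + 0.05 * np.random.randn(256, 1).astype("float32")

# A one-layer linear model.
model = tf.keras.Sequential([tf.keras.Input(shape=(1,)),
                             tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")

# Training: the data set is evaluated repeatedly (50 epochs here).
model.fit(x, y, epochs=50, verbose=0)

# Inference: the trained network is put to work on new input.
pred = model.predict(np.array([[0.5]], dtype="float32"), verbose=0)
print(pred.shape)  # (1, 1)
```

The same script works unchanged on a GPU build of TensorFlow, which is what makes the framework convenient for developing on the CPU first.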
Physics Simulation — Euler3D
Engineering makes wide use of the finite element method (FEM), which can simulate fluid flow (CFD), heat transfer, and structural stability to verify whether a final product meets design requirements. An FEM solver breaks the system up into a large number of simple parts, called finite elements, that all interact with each other. This is a computationally demanding mathematical task that is very difficult to parallelize on GPUs. Our Euler3D benchmark is fully parallelized to make the most of multiple CPU cores, but it also puts a lot of stress on the memory subsystem.
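The element idea can be shown in miniature with a one-dimensional finite element solver for -u'' = 1 on [0, 1] with u(0) = u(1) = 0. This is a toy sketch in NumPy, not related to Euler3D's implementation; each element contributes a small local matrix, and neighbouring elements interact through their shared node:

```python
import numpy as np

n = 8                          # number of linear elements on [0, 1]
h = 1.0 / n
K = np.zeros((n + 1, n + 1))   # global stiffness matrix
F = np.zeros(n + 1)            # global load vector
k_local = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h  # per-element stiffness

# Each element couples only its two end nodes; summing the local
# matrices into K is how the simple parts "interact" with each other.
for e in range(n):
    idx = [e, e + 1]
    K[np.ix_(idx, idx)] += k_local
    F[idx] += h / 2.0          # load contribution of f(x) = 1

# Dirichlet boundary conditions u(0) = u(1) = 0: solve interior nodes only.
u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], F[1:-1])

x = np.linspace(0.0, 1.0, n + 1)
exact = x * (1.0 - x) / 2.0    # analytic solution of -u'' = 1
print(np.max(np.abs(u - exact)))
```

For this particular problem the linear-element solution matches the exact solution at the nodes; real 3D solvers assemble and solve the same kind of sparse system with millions of unknowns, which is where the memory subsystem gets hammered.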
Brain Neuron Simulation
In order to better understand how brains work, biological and medical researchers use software to simulate neurons and their interactions with each other. Scientists hope that this can ultimately lead to an understanding of how biological intelligence emerges. Just like our physics simulation test, this is a highly complex, memory-intensive problem that is best solved on CPUs—GPUs aren't well suited to these algorithms.
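The kind of computation such simulators perform can be sketched with a single leaky integrate-and-fire neuron, one of the simplest spiking-neuron models. This is an illustrative toy, not the model any particular package uses, and all parameter values below are made up for the example:

```python
# Leaky integrate-and-fire neuron, forward-Euler integration.
dt, t_end = 0.1e-3, 0.1            # time step and duration (seconds)
tau, v_rest = 10e-3, -70e-3        # membrane time constant, resting potential
v_thresh, v_reset = -50e-3, -70e-3 # spike threshold and reset potential
r_m, i_in = 10e6, 2.5e-9           # membrane resistance, constant input current

v = v_rest
spikes = 0
for _ in range(int(t_end / dt)):
    # One Euler step of the membrane equation dv/dt = (-(v - v_rest) + R*I)/tau.
    v += dt * (-(v - v_rest) + r_m * i_in) / tau
    if v >= v_thresh:              # threshold crossed: emit a spike and reset
        spikes += 1
        v = v_reset
print("spikes in 100 ms:", spikes)
```

A real simulation runs millions of such neurons coupled through sparse, irregular synaptic connections, and it is that scattered memory access pattern, rather than raw arithmetic, that makes the workload a poor fit for GPUs.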