NVIDIA today announced that, for the second year in a row, the world's most energy-efficient petaflop-class supercomputer is powered by NVIDIA Tesla GPUs. The Tsubame 2.0 system at the Tokyo Institute of Technology's Global Scientific Information and Computing Center (GSIC) ranks as the greenest petaflop-class supercomputer on the recently released Green500 list. Published twice annually, the Green500 list rates the 500 most energy-efficient supercomputers based on performance achieved relative to power consumed.

Tsubame 2.0 is a heterogeneous supercomputer, combining both CPUs and GPUs, used to accelerate a range of scientific and industrial research in Japan. With sustained performance of 1.19 petaflops while consuming 1.2 megawatts, Tsubame 2.0 delivers 958 megaflops of processing power per watt of energy. It is 3.4 times more energy efficient than the next-closest x86 CPU-only petaflop system, the Cielo Cray supercomputer at Los Alamos National Laboratory, which delivers 278 megaflops per watt.

In the race to exascale computing, power efficiency has become the defining element of computing performance. Heterogeneous GPU-accelerated systems are inherently more energy efficient than CPU-only systems because applications can take advantage of the different processors for different jobs: the sequential parts of an application run on CPUs, while the data- and compute-intensive parts are accelerated by the massively parallel GPU.

Tsubame 2.0 comprises HP ProLiant SL390 servers with Intel Xeon CPUs accelerated by NVIDIA Tesla GPUs. The Tesla GPUs provide more than 80 percent of its performance, enabling Tsubame 2.0 to achieve high levels of performance with very low power usage. This year, two of the five finalists for the prestigious Gordon Bell Prize ran on Tsubame 2.0, including the winner for Special Achievement in Scalability and Time-to-Solution.

The latest Green500 list underscores the energy efficiency of heterogeneous computer design.
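The division of labor described above can be sketched in CUDA C. This minimal example is illustrative only (it is not from the announcement, and the names are hypothetical): the setup loop is the sequential part that runs on the CPU, and the `saxpy` kernel is the data-parallel part executed by many GPU threads at once.

```cuda
#include <cstdio>
#include <cstdlib>

// Data-parallel part: each GPU thread computes one element of y = a*x + y.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Sequential part: allocation and setup run on the CPU.
    float *x = (float *)malloc(bytes), *y = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Copy inputs to the GPU, launch the kernel, copy the result back.
    float *dx, *dy;
    cudaMalloc(&dx, bytes); cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, y, bytes, cudaMemcpyHostToDevice);
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
    cudaMemcpy(y, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %.1f\n", y[0]);  // 2*1 + 2 = 4
    cudaFree(dx); cudaFree(dy); free(x); free(y);
    return 0;
}
```

The host orchestrates data movement and kernel launches while the GPU does the bulk of the arithmetic, which is why a system like Tsubame 2.0 can draw most of its performance from its accelerators.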
Five of the world's 10 most efficient systems, and 22 of the top 30 most efficient systems, combine GPUs with CPUs.

Tesla GPUs are massively parallel accelerators based on the CUDA parallel computing architecture. Application developers can accelerate their applications using CUDA C, CUDA C++, or CUDA Fortran, or by using simple, easy-to-use directive-based compilers.

For more information about Tsubame 2.0, visit the Tokyo Institute of Technology Global Scientific Information and Computing Center website. To learn more about Tesla GPUs, visit the Tesla website. To learn more about CUDA, visit the CUDA website.