Wednesday, November 16th 2011

NVIDIA, Cray, PGI, CAPS Unveil 'OpenACC' Programming Standard for Parallel Computing

In an effort to make it easier for programmers to take advantage of parallel computing, NVIDIA, Cray Inc., the Portland Group (PGI), and CAPS enterprise announced today a new parallel-programming standard, known as OpenACC.

Initially developed by PGI, Cray, and NVIDIA, with support from CAPS, OpenACC is a new open parallel programming standard designed to enable the millions of scientific and technical programmers to easily take advantage of the transformative power of heterogeneous CPU/GPU computing systems.

OpenACC allows parallel programmers to provide simple hints, known as "directives," to the compiler, identifying which areas of code to accelerate, without requiring programmers to modify or adapt the underlying code itself. By exposing parallelism to the compiler, directives allow the compiler to do the detailed work of mapping the computation onto the accelerator.

OpenACC is anticipated to benefit a broad range of programmers working in chemistry, biology, physics, data analytics, weather and climate, intelligence, and many other fields. Existing compilers from Cray, PGI and CAPS are expected to provide initial support for the OpenACC standard beginning in the first quarter of 2012. The OpenACC standard is fully compatible and interoperable with the NVIDIA CUDA parallel programming architecture, which is designed to allow detailed control over the accelerator for maximum performance tuning.

Directives provide a common code base that is multi-platform and multi-vendor compatible, offering an ideal way to preserve investment in legacy applications by enabling an easy migration path to accelerated computing. Based on recent data, the majority of developers have reported 2x to 10x increases in application performance in as little as two weeks when using existing directive-based compilers.

Quotes
"Compiler directives are integral as we continue the build-out and deployment of the Titan GPU-accelerated supercomputer," said Buddy Bland, Titan project director at Oak Ridge National Laboratory, referring to a system expected to be the world's fastest supercomputer. "Our ultimate goal is to have all Titan supercomputing code run on hybrid CPU/GPU nodes, and OpenACC will enable programmers to develop portable applications that maximize the performance and power efficiency benefits of this architecture."

"OpenACC represents a major development for the scientific community," said Jeffrey Vetter, joint professor in the Computational Science and Engineering School of the College of Computing at Georgia Institute of Technology. "Programming models for open science by definition need to be flexible, open and portable across multiple platforms; OpenACC is well designed to fill this need. It provides a valuable new tool to empower the vast numbers of domain scientists who could benefit from application acceleration, but who may not have the funding or expertise to port their code to emerging architectures."

"I am enthusiastic about the future of accelerator technologies," said Michael Wong, CEO of the OpenMP Architecture Review Board. "The OpenACC announcement highlights the technically impressive initiative undertaken by members of the OpenMP Working Group on Accelerators. I look forward to working with all four companies within the OpenMP organization to merge OpenACC with other ideas to create a common specification which extends OpenMP to support accelerators. We look forward to incorporating accelerator support with the full support of all OpenMP members in a future version of the OpenMP specification."

More information about OpenACC, as well as the OpenACC specification, is available today at OpenACC-standard.org. OpenACC is based largely on the PGI and Cray accelerator programming models. NVIDIA, Cray, PGI, and CAPS are members of the OpenMP subcommittee on accelerators, and intend to work within that organization to come to a common standard.

Developers interested in trying directive-based programming can obtain a one-month free trial of the PGI Accelerator Fortran and C compilers at NVIDIA's new 2x in 4 weeks web site, or by contacting Cray.

12 Comments on NVIDIA, Cray, PGI, CAPS Unveil 'OpenACC' Programming Standard for Parallel Computing

#1
qubit
Overclocked quantum bit
Thanks to Recus for the lead. :toast:
#2
hhumas
Nvidia always shows us the way
#3
Mussels
Moderprator
by: hhumas
Nvidia always shows us the way
god i hate blatant fanboyism like this >.>

nice of you to join a thread about some complicated breakthrough for parallel computing, and trash it instantly with a pointless fanboy comment that makes out like nvidia is the best thing since sliced bread.



on TOPIC, the fact that they call this multi vendor compatible interests me. maybe the next iteration of PhysX, will actually work properly and be useful, since it'll be openly supported on CPU, and all brands of GPU simultaneously.
#4
Neuromancer
by: hhumas
Nvidia always shows us the way
thought this article came out last week except it was AMD...

EDIT: ok I can't find it. But I know this might delve into another stupid fanboyish argument, but didn't AMD already have completely open parallel processing applications available? If so and this is NVIDIA's first foray into it, the fact that Cray would jump on it means nothing, since they did support it on AMD before also, but if everyone accepts it that is the difference maker, yes? And it would be up to AMD to follow suit?

I.e., at this point it means nothing to us as end users. It is up to the companies we support (end user applications) to support one or the other before it affects us.


(Sorry about my affect/effect I can never remember)
#5
1c3d0g
Excellent! I love these supercomputer standards initiatives as they will slowly but surely trickle down to us consumers. :)
#6
Steevo
OpenCL anyone?


Oh wait, that's open and free to use and good for the consumer.
#7
qubit
Overclocked quantum bit
by: Steevo
OpenCL anyone?


Oh wait, that's open and free to use and good for the consumer.
We don't want that.
#8
RejZoR
What's the point of yet another "standard"? Being open or not, we need just one and so far OpenCL is the only one. And even OCL is not used much if at all...
#9
Sihastru
The point is that OpenCL is taking too long to become usable. OpenCL is in a "garage development" state, it's incomplete and even a little out of date.

The only reason you know about OpenCL is because of AMD. Their "Close to Metal" initiative failed, they didn't have a real answer to CUDA's C programming language/compiler and they pushed this open so-called "standard". (The same way they did 3D, outsourcing it to 3rd party companies.)

Just recently Khronos announced version 1.2 of OpenCL, and I have a feeling it was a response to today's OpenACC announcement. More or less a desperate move.
#10
Jarman
Is it a coincidence that the Larrabee successor isn't a million miles away and Nvidia has decided to embrace open source, or is that just me??
#11
pantherx12
by: Sihastru
The point is that OpenCL is taking too long to become usable. OpenCL is in a "garage development" state, it's incomplete and even a little out of date.

.
Tell you what would have helped with that: nvidia, Cray etc. chipping in instead of making a separate standard.


This is the kind of shit that holds everyone back.


By the by, Wolfram Mathematica utilises OpenCL and it works beautifully.
#12
qubit
Overclocked quantum bit
by: pantherx12
Tell you what would have helped with that: nvidia, Cray etc. chipping in instead of making a separate standard.


This is the kind of shit that holds everyone back.


By the by, Wolfram Mathematica utilises OpenCL and it works beautifully.
+1 Competition and egos getting in the way of improvements for everyone. :shadedshu