Thursday, October 1st 2009

NVIDIA Unveils Next Generation CUDA GPU Architecture – Codenamed "Fermi"

NVIDIA Corp. today introduced its next generation CUDA GPU architecture, codenamed "Fermi". An entirely new ground-up design, the "Fermi" architecture is the foundation for the world's first computational graphics processing units (GPUs), delivering breakthroughs in both graphics and GPU computing.

"NVIDIA and the Fermi team have taken a giant step towards making GPUs attractive for a broader class of programs," said Dave Patterson, director Parallel Computing Research Laboratory, U.C. Berkeley and co-author of Computer Architecture: A Quantitative Approach. "I believe history will record Fermi as a significant milestone."
Presented at the company's inaugural GPU Technology Conference in San Jose, California, "Fermi" delivers a feature set that accelerates performance on a wider array of computational applications than ever before. Joining NVIDIA's press conference was Oak Ridge National Laboratory, which announced plans for a new supercomputer that will use NVIDIA GPUs based on the "Fermi" architecture. "Fermi" also garnered the support of leading organizations including Bloomberg, Cray, Dell, HP, IBM and Microsoft.

"It is completely clear that GPUs are now general purpose parallel computing processors with amazing graphics, and not just graphics chips anymore," said Jen-Hsun Huang, co-founder and CEO of NVIDIA. "The Fermi architecture, the integrated tools, libraries and engines are the direct results of the insights we have gained from working with thousands of CUDA developers around the world. We will look back in the coming years and see that Fermi started the new GPU industry."

As the foundation for NVIDIA's family of next-generation GeForce, Quadro and Tesla GPUs, "Fermi" features a host of new technologies that are "must-haves" for the computing space, including:
  • C++, complementing existing support for C, Fortran, Java, Python, OpenCL and DirectCompute.
  • ECC, a critical requirement for datacenters and supercomputing centers deploying GPUs on a large scale
  • 512 CUDA Cores featuring the new IEEE 754-2008 floating-point standard, surpassing even the most advanced CPUs
  • 8x the peak double precision arithmetic performance over NVIDIA's last generation GPU. Double precision is critical for high-performance computing (HPC) applications such as linear algebra, numerical simulation, and quantum chemistry
  • NVIDIA Parallel DataCache - the world's first true cache hierarchy in a GPU that speeds up algorithms such as physics solvers, raytracing, and sparse matrix multiplication where data addresses are not known beforehand
  • NVIDIA GigaThread Engine with support for concurrent kernel execution, where different kernels of the same application context can execute on the GPU at the same time (e.g. PhysX fluid and rigid body solvers); see the sketch after this list
  • Nexus - the world's first fully integrated heterogeneous computing application development environment within Microsoft Visual Studio
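A minimal CUDA sketch of how the concurrent kernel execution and double-precision support listed above might be exercised, assuming a Fermi-class GPU and the CUDA runtime API; the scale and offset kernels, sizes, and launch parameters are illustrative assumptions rather than NVIDIA sample code:

#include <cuda_runtime.h>
#include <cstdio>

// Two small, independent double-precision kernels (illustrative only).
__global__ void scale(double *x, double a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

__global__ void offset(double *y, double b, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] += b;
}

int main()
{
    const int n = 1 << 20;
    double *x, *y;
    // Device buffers are left uninitialized; this only illustrates scheduling.
    cudaMalloc((void **)&x, n * sizeof(double));
    cudaMalloc((void **)&y, n * sizeof(double));

    // Separate streams: on hardware that supports concurrent kernel
    // execution, the two launches below may overlap instead of running
    // back to back.
    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    const int block = 256;
    const int grid = (n + block - 1) / block;
    scale<<<grid, block, 0, s1>>>(x, 2.0, n);
    offset<<<grid, block, 0, s2>>>(y, 1.0, n);

    cudaDeviceSynchronize();   // wait for both streams to finish

    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    cudaFree(x);
    cudaFree(y);
    printf("done\n");
    return 0;
}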

49 Comments on NVIDIA Unveils Next Generation CUDA GPU Architecture – Codenamed "Fermi"

#3
shevanel
Now I see why they were smashing DX11... seems they're not too concerned with it, or it was an easy task compared to the other features.
#4
FordGT90Concept
"I go fast!1!11!1!"
DX11 will render CUDA obsolete within 5 years. NVIDIA is shaking a finger at DX11 because they know it and want to protect their intellectual property.
#5
shevanel
I would think that would be based on hardware.
#6
pr0n Inspector
CUDA is cross-platform but nvidia only. DX11 is Windows only. Take your pick.
#7
FordGT90Concept
"I go fast!1!11!1!"
Intel will and AMD does support DX11. NVIDIA must support DX11 to stay competitive in the graphics market. I doubt NVIDIA would kill GeForce to save CUDA/Tesla although I'm certain the thought crossed their mind.
#8
pr0n Inspector
FordGT90Concept: Intel will and AMD does support DX11. NVIDIA must support DX11 to stay competitive in the graphics market. I doubt NVIDIA would kill GeForce to save CUDA/Tesla although I'm certain the thought crossed their mind.
Uh of course nVidia will support DX11, and? I failed to relate your post to mine.
#9
Zubasa
pr0n Inspector: CUDA is cross-platform but nvidia only. DX11 is Windows only. Take your pick.
I will just pick OpenCL :laugh:
#10
pr0n Inspector
Zubasa: I will just pick OpenCL :laugh:
Hey don't mention an open standard in a discussion about proprietary ones.;)
#11
FordGT90Concept
"I go fast!1!11!1!"
pr0n Inspector: Uh of course nVidia will support DX11, and? I failed to relate your post to mine.
My brain fart, I suppose. :(

Edit: Ah, I think it was more or less directed at shevanel's post.
Zubasa: I will just pick OpenCL :laugh:
OpenGL is to Direct3D as OpenCL is to DirectCompute. So yeah, Windows only software will be inclined to use the DirectX variety while cross-platform software will use the Open variety. There's not much room for CUDA, I'm afraid.
#12
phanbuey
FordGT90Concept: My brain fart, I suppose. :(

Edit: Ah, I think it was more or less directed at shevanel's post.

OpenGL is to Direct3D as OpenCL is to DirectCompute. So yeah, Windows only software will be inclined to use the DirectX variety while cross-platform software will use the Open variety. There's not much room for CUDA, I'm afraid.
That is true, everything else being equal. However, CUDA supports C++ and a plethora of other languages. From what I have heard, it's a simple solution to use: just drop in the libraries and go. So if you are a *insert application here* developer who does not know OpenCL and you have all Fortran, C, or whatever developers on your team, CUDA is tons cheaper, faster, and more convenient than OpenCL.

Now I'm always wary of proprietary stuff, but sometimes a proprietary standard blows away the open-source one in terms of actual performance and functionality. I definitely think that this is the case here.
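To make the "drop in the libraries and go" point above concrete, here is a minimal, hypothetical sketch of handing one routine of an existing C code base to a CUDA kernel; the saxpy kernel and run_saxpy wrapper are made-up names for illustration, not part of any shipped library:

#include <cuda_runtime.h>
#include <stdio.h>

// A CUDA kernel doing a simple SAXPY (y = a*x + y) in parallel.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

// Plain C-style wrapper: an existing application could call this
// without restructuring the rest of its code around a new API.
void run_saxpy(int n, float a, const float *x_host, float *y_host)
{
    float *x, *y;
    size_t bytes = n * sizeof(float);
    cudaMalloc((void **)&x, bytes);
    cudaMalloc((void **)&y, bytes);
    cudaMemcpy(x, x_host, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(y, y_host, bytes, cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, a, x, y);

    cudaMemcpy(y_host, y, bytes, cudaMemcpyDeviceToHost);
    cudaFree(x);
    cudaFree(y);
}

int main(void)
{
    float x[4] = {1, 2, 3, 4}, y[4] = {10, 20, 30, 40};
    run_saxpy(4, 2.0f, x, y);
    for (int i = 0; i < 4; i++) printf("%g ", y[i]);  // expected: 12 24 36 48
    printf("\n");
    return 0;
}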
#13
Unregistered
Now, will it do all it claims to AND be a 5870 killer? If yes, then ATI must be getting a little po'd at having their launches spoiled.
#14
HalfAHertz
Well, most supercomputers run on Unix or Linux, and they don't like to play together with DX, so CUDA has the same chances as OpenGL/CL HW acceleration, which is also in its infancy. At the moment Nvidia is the only GPU manufacturer going for the server/HPC environment, so I think CUDA is here to stay.
#15
Sihastru
FordGT90Concept: OpenGL is to Direct3D as OpenCL is to DirectCompute. So yeah, Windows only software will be inclined to use the DirectX variety while cross-platform software will use the Open variety. There's not much room for CUDA, I'm afraid.
CUDA is not Windows only. OpenCL has been supported for a while now. The Linux driver supports both. Everything included in the Windows version of the driver is also included in the Linux version of the driver. nVidia is cross-platform, at least as far as the major platforms go.
#16
pantherx12
Sihastru: CUDA is not Windows only. OpenCL has been supported for a while now. The Linux driver supports both. Everything included in the Windows version of the driver is also included in the Linux version of the driver. nVidia is cross-platform.
I don't think that's what he meant.

I think he meant Linux types will use open source, whilst Windows types will use DX11 rather than CUDA.
#17
newtekie1
Semi-Retired Folder
Since when can DX11 do the things CUDA can? Besides physics, what does DX11 offer that CUDA does? They are two different technologies. Last I checked, DX11 has nothing to do with parallel computing.
#18
aj28
LaidLawJones: Now, will it do all it claims to AND be a 5870 killer? If yes, then ATI must be getting a little po'd at having their launches spoiled.
Something tells me this thing is still several months out... AMD hard launches a great new GPU and the best nVidia can scrounge up is a few slides and some guy from U.C. Berkeley? I don't think nVidia is spoiling it at all, nor will they. The only thing at stake here is Huang's ego when gamers and general consumers alike choose AMD, because at some point you've gotta accept that it's a graphics card, not a co-processor. You can't design one to compete against the other...

nVidia is going to face (and probably already has faced) massive technical issues on this one, only to be compounded by a ridiculous TDP and a price they can't possibly turn profitable. Maybe if Larrabee were out we'd be looking at a different competitive landscape, but I think for now gamers are more interested in gaming than spending an extra $100-200 to fold proteins.

(That said, this may end up benefiting their Quadro line significantly. Those sales are way too low-volume to save them if this thing fails in the consumer market though...)
#19
gumpty
aj28: (That said, this may end up benefiting their Quadro line significantly. Those sales are way too low-volume to save them if this thing fails in the consumer market though...)
That said, that market also has much higher margins.

I see this as a direction shift from Nvidia: they're starting to look at different areas for revenue (HPC etc.). They'll still be big in the discrete GPU market, but it won't be their sole focus. They may lose market share to ATI (and eventually Intel), but if they offset that with increased profit elsewhere then it won't matter. Indeed, they may be more stable as a company with a more diverse business model.
#20
ShinyG
I think nVidia are a little bit behind with their mentality. They need to think for the future and invest in the same open source standards, or at least in universally accepted standards. Right now, it seems to me like they're trying to use their power to push for their own standards, which is perfectly natural in the business world, but as they lose discrete graphics market share to ATi/AMD and eventually Intel, as gumpty predicted, they will lose the power to enforce these proprietary standards.
#21
buggalugs
Nvidia is obsolete

For Nvidia to not have a DX11 card ready in 2009 is a major fail. This card could be 4-5 months away, and I doubt even die-hard nvidia lovers will be prepared to wait until next year while there are 5850s and 5870s around.
#22
Sihastru
ShinyG: I think nVidia are a little bit behind with their mentality. They need to think for the future and invest in the same open source standards, or at least in universally accepted standards. Right now, it seems to me like they're trying to use their power to push for their own standards, which is perfectly natural in the business world, but as they lose discrete graphics market share to ATi/AMD and eventually Intel, as gumpty predicted, they will lose the power to enforce these proprietary standards.
Which of the open source standards is supported by ATi/AMD or Intel and not supported by nVidia?
buggalugs: For Nvidia to not have a DX11 card ready in 2009 is a major fail. This card could be 4-5 months away, and I doubt even die-hard nvidia lovers will be prepared to wait until next year while there are 5850s and 5870s around.
No it's not, since there are no DX11 titles to play. In 3 months maybe there will be a few; nVidia will have its cards just in time for that. The 5850 and 5870 are just an incremental upgrade. For what Fermi is promising, I would wait another year.
#23
AsRock
TPU addict
Sounds to me like this might be a 5870 killer... And please don't give me that fanboy BS, as I like AMD/ATI much more. BUT if those shaders work as well as the older ones did, this is going to kick ass.
#24
Steevo
Promising.

ATI started with high-precision stream cores back on the X1K, and now it has branched to DX11, with CUDA as the competing platform. This is all going to come down to consumers; this will be another "format war".

I am liking the offerings of the green team this round: I love the native code drop-in and the expected performance at common tasks and folding, but I will probably hate the price. ATI might have a real problem here if they don't get their ass in gear with some software to run on their hardware, and show it to be as good as or better than NV. I for one am tired of paying either company for a card, hearing all the options and only having a few actually made and working. I bought a high-end high-def camcorder, and a card I understood could/was going to handle the format and do it quickly. I still use CPU-based software to manipulate my movies. FAIL......
#25
FordGT90Concept
"I go fast!1!11!1!"
newtekie1: Since when can DX11 do the things CUDA can? Besides physics, what does DX11 offer that CUDA does? They are two different technologies. Last I checked, DX11 has nothing to do with parallel computing.
Since it includes DirectCompute (an API inside the DirectX 11 package).

DX11 offers unified sound support, unified input support, unified networking support, and more that CUDA does not. But that's not what you were asking. CUDA and DirectCompute are virtually the same with one caveat: Microsoft will flex their industry muscles to get developers to use it and developers will want to use it because the same code will work on NVIDIA, AMD, and Intel GPUs.

DirectCompute has everything to do with parallel computing. That is the reason why it was authored.