
NVIDIA Unveils Next Generation CUDA GPU Architecture – Codenamed "Fermi"

Discussion in 'News' started by btarunr, Oct 1, 2009.

  1. btarunr

    btarunr Editor & Senior Moderator Staff Member

    Joined:
    Oct 9, 2007
    Messages:
    27,678 (11.60/day)
    Thanks Received:
    13,418
    Location:
    Hyderabad, India
    NVIDIA Corp. today introduced its next generation CUDA GPU architecture, codenamed “Fermi”. An entirely new ground-up design, the “Fermi” architecture is the foundation for the world’s first computational graphics processing units (GPUs), delivering breakthroughs in both graphics and GPU computing.

    “NVIDIA and the Fermi team have taken a giant step towards making GPUs attractive for a broader class of programs,” said Dave Patterson, director of the Parallel Computing Research Laboratory at U.C. Berkeley and co-author of Computer Architecture: A Quantitative Approach. “I believe history will record Fermi as a significant milestone.”

    Presented at the company’s inaugural GPU Technology Conference in San Jose, California, “Fermi” delivers a feature set that accelerates performance on a wider array of computational applications than ever before. Joining NVIDIA’s press conference was Oak Ridge National Laboratory, which announced plans for a new supercomputer that will use NVIDIA GPUs based on the “Fermi” architecture. “Fermi” also garnered the support of leading organizations including Bloomberg, Cray, Dell, HP, IBM and Microsoft.

    “It is completely clear that GPUs are now general purpose parallel computing processors with amazing graphics, and not just graphics chips anymore,” said Jen-Hsun Huang, co-founder and CEO of NVIDIA. “The Fermi architecture, the integrated tools, libraries and engines are the direct results of the insights we have gained from working with thousands of CUDA developers around the world. We will look back in the coming years and see that Fermi started the new GPU industry.”

    As the foundation for NVIDIA’s family of next-generation GPUs (namely GeForce, Quadro and Tesla), “Fermi” features a host of new technologies that are “must-have” features for the computing space, including:
    • C++, complementing existing support for C, Fortran, Java, Python, OpenCL and DirectCompute.
    • ECC, a critical requirement for datacenters and supercomputing centers deploying GPUs on a large scale
    • 512 CUDA Cores featuring the new IEEE 754-2008 floating-point standard, surpassing even the most advanced CPUs
    • 8x the peak double precision arithmetic performance over NVIDIA’s last generation GPU. Double precision is critical for high-performance computing (HPC) applications such as linear algebra, numerical simulation, and quantum chemistry
    • NVIDIA Parallel DataCache - the world’s first true cache hierarchy in a GPU that speeds up algorithms such as physics solvers, raytracing, and sparse matrix multiplication where data addresses are not known beforehand
    • NVIDIA GigaThread Engine with support for concurrent kernel execution, where different kernels of the same application context can execute on the GPU at the same time (eg: PhysX fluid and rigid body solvers)
    • Nexus – the world’s first fully integrated heterogeneous computing application development environment within Microsoft Visual Studio
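    To make the double-precision and concurrent-kernel bullets above concrete, here is a minimal CUDA sketch; the kernel names and launch parameters are hypothetical stand-ins (echoing the fluid/rigid-body example in the release), not anything NVIDIA has published. Two small FP64 kernels are issued into separate streams, which is the launch pattern a Fermi-class GPU can overlap via the GigaThread engine.

    #include <cuda_runtime.h>
    #include <cstdio>

    // Hypothetical stand-ins for two independent solver passes; both use
    // double precision so they exercise the IEEE 754-2008 FP64 units.
    __global__ void fluidStep(double *p, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) p[i] *= 0.5;
    }

    __global__ void rigidStep(double *p, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) p[i] += 1.0;
    }

    int main()
    {
        const int n = 1 << 20;
        double *a, *b;
        cudaMalloc((void **)&a, n * sizeof(double));
        cudaMalloc((void **)&b, n * sizeof(double));

        // Two streams in the same application context; on Fermi the GigaThread
        // engine can execute these kernels concurrently instead of serially.
        cudaStream_t s1, s2;
        cudaStreamCreate(&s1);
        cudaStreamCreate(&s2);
        fluidStep<<<(n + 255) / 256, 256, 0, s1>>>(a, n);
        rigidStep<<<(n + 255) / 256, 256, 0, s2>>>(b, n);
        cudaDeviceSynchronize();

        printf("both kernels finished\n");
        cudaStreamDestroy(s1);
        cudaStreamDestroy(s2);
        cudaFree(a);
        cudaFree(b);
        return 0;
    }

    Whether the two kernels actually overlap depends on each one leaving enough of the machine idle; the sketch only shows the launch pattern, not a guaranteed speed-up.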
  2. Velvet Wafer

    Velvet Wafer New Member

    Joined:
    Jun 16, 2009
    Messages:
    5,123 (2.90/day)
    Thanks Received:
    990
    Location:
    North of Germany
    I somehow sense a bad fail... :nutkick:
    ;)
  3. pr0n Inspector

    pr0n Inspector

    Joined:
    Dec 8, 2008
    Messages:
    1,298 (0.66/day)
    Thanks Received:
    156
    Now that it's confirmed, folders rejoice?
    1c3d0g says thanks.
  4. shevanel

    shevanel New Member

    Joined:
    Jul 27, 2009
    Messages:
    3,479 (2.01/day)
    Thanks Received:
    406
    Location:
    Leesburg, FL
    Now I see why they were bashing DX11... seems they're not too concerned with it, or it was an easy task compared to the other features.
  5. FordGT90Concept

    FordGT90Concept "I go fast!1!11!1!"

    Joined:
    Oct 13, 2008
    Messages:
    12,985 (6.44/day)
    Thanks Received:
    3,092
    Location:
    IA, USA
    DX11 will render CUDA obsolete within 5 years. NVIDIA is shaking a finger at DX11 because they know it and want to protect their intellectual property.
    Crunching for Team TPU
  6. shevanel

    shevanel New Member

    Joined:
    Jul 27, 2009
    Messages:
    3,479 (2.01/day)
    Thanks Received:
    406
    Location:
    Leesburg, FL
    I would think that would be based on hardware.
  7. pr0n Inspector

    pr0n Inspector

    Joined:
    Dec 8, 2008
    Messages:
    1,298 (0.66/day)
    Thanks Received:
    156
    CUDA is cross-platform but NVIDIA-only. DX11 is Windows-only. Take your pick.
  8. FordGT90Concept

    FordGT90Concept "I go fast!1!11!1!"

    Joined:
    Oct 13, 2008
    Messages:
    12,985 (6.44/day)
    Thanks Received:
    3,092
    Location:
    IA, USA
    Intel will support DX11 and AMD already does. NVIDIA must support DX11 to stay competitive in the graphics market. I doubt NVIDIA would kill GeForce to save CUDA/Tesla, although I'm certain the thought crossed their mind.
    shevanel says thanks.
    Crunching for Team TPU
  9. pr0n Inspector

    pr0n Inspector

    Joined:
    Dec 8, 2008
    Messages:
    1,298 (0.66/day)
    Thanks Received:
    156
    Uh, of course nVidia will support DX11, and? I fail to see how your post relates to mine.
  10. Zubasa

    Zubasa

    Joined:
    Oct 1, 2006
    Messages:
    3,979 (1.44/day)
    Thanks Received:
    456
    Location:
    Hong Kong
    I will just pick OpenCL :laugh:
  11. pr0n Inspector

    pr0n Inspector

    Joined:
    Dec 8, 2008
    Messages:
    1,298 (0.66/day)
    Thanks Received:
    156
    Hey, don't mention an open standard in a discussion about proprietary ones. ;)
  12. FordGT90Concept

    FordGT90Concept "I go fast!1!11!1!"

    Joined:
    Oct 13, 2008
    Messages:
    12,985 (6.44/day)
    Thanks Received:
    3,092
    Location:
    IA, USA
    My brain fart, I suppose. :(

    Edit: Ah, I think it was more or less directed at shevanel's post.

    OpenGL is to Direct3D as OpenCL is to DirectCompute. So yeah, Windows-only software will be inclined to use the DirectX variety, while cross-platform software will use the Open variety. There's not much room for CUDA, I'm afraid.
    Last edited: Oct 1, 2009
    Crunching for Team TPU
  13. phanbuey

    phanbuey

    Joined:
    Nov 13, 2007
    Messages:
    5,198 (2.21/day)
    Thanks Received:
    973
    Location:
    Miami
    That is true, everything else being equal. However, CUDA supports C++ and a plethora of other languages. From what I have heard, it's a simple solution to use: just drop in the libraries and go. So if you are a *insert application here* developer who does not know OpenCL and you have all FORTRAN, C, or whatever developers on your team, CUDA is tons cheaper, faster, and more convenient than OpenCL.

    Now, I'm always wary of proprietary stuff, but sometimes a proprietary standard blows away the open one in terms of actual performance and functionality. I definitely think that is the case here.
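    As a sketch of that "drop in the libraries and go" workflow (assuming the cuBLAS library that ships with the CUDA toolkit and its legacy C interface; error checking omitted), a plain C program can call a GPU BLAS routine without anyone on the team writing a kernel or any OpenCL:

    #include <cublas.h>
    #include <stdio.h>

    int main(void)
    {
        const int n = 4;
        double x[] = {1, 2, 3, 4}, y[] = {4, 3, 2, 1};
        double *dx, *dy;

        cublasInit();                                  // start the library
        cublasAlloc(n, sizeof(double), (void **)&dx);  // device buffers
        cublasAlloc(n, sizeof(double), (void **)&dy);
        cublasSetVector(n, sizeof(double), x, 1, dx, 1);
        cublasSetVector(n, sizeof(double), y, 1, dy, 1);

        double dot = cublasDdot(n, dx, 1, dy, 1);      // dot product runs on the GPU
        printf("dot = %f\n", dot);                     // expect 20.000000

        cublasFree(dx);
        cublasFree(dy);
        cublasShutdown();
        return 0;
    }

    Link against cublas and the rest of the code stays ordinary C, which is the convenience argument in a nutshell.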
    1c3d0g says thanks.
  14. LaidLawJones Guest

    Now, will it do all it claims to AND be a 5870 killer? If yes, then ATI must be getting a little po'd at having their launches spoiled.
  15. HalfAHertz

    HalfAHertz

    Joined:
    May 4, 2009
    Messages:
    1,831 (1.01/day)
    Thanks Received:
    360
    Location:
    Singapore
    Well, most supercomputers run on Unix or Linux, which don't play well with DX, so CUDA has the same chances as OpenGL/CL hardware acceleration, which is also in its infancy. At the moment Nvidia is the only GPU manufacturer going after the server/HPC environment, so I think CUDA is here to stay.
  16. Sihastru

    Sihastru

    Joined:
    Apr 26, 2009
    Messages:
    346 (0.19/day)
    Thanks Received:
    63
    Location:
    You are here.
    CUDA is not Windows-only. OpenCL has been supported for a while now, and the Linux driver supports both; everything included in the Windows version of the driver is also included in the Linux version. nVidia is cross-platform, at least as far as the major platforms go.
  17. pantherx12

    pantherx12 New Member

    Joined:
    Jan 2, 2009
    Messages:
    9,714 (5.02/day)
    Thanks Received:
    1,698
    Location:
    ENGLAND-LAND-LAND
    I don't think that's what he meant.

    I think he meant Linux types will use the open standards, whilst Windows types will use DX11 rather than CUDA.
  18. newtekie1

    newtekie1 Semi-Retired Folder

    Joined:
    Nov 22, 2005
    Messages:
    19,486 (6.35/day)
    Thanks Received:
    5,727
    Since when can DX11 do the things CUDA can? Besides physics, what does DX11 offer that CUDA does? They are two different technologies. Last I checked, DX11 has nothing to do with parallel computing.
    Crunching for Team TPU 25 Million points folded for TPU
  19. aj28 New Member

    Joined:
    Jun 18, 2008
    Messages:
    352 (0.17/day)
    Thanks Received:
    35
    Something tells me this thing is still several months out... AMD hard-launches a great new GPU and the best nVidia can scrounge up is a few slides and some guy from U.C. Berkeley? I don't think nVidia is spoiling it at all, nor will they. The only thing at stake here is Huang's ego when gamers and general consumers alike choose AMD, because at some point you've got to accept that it's a graphics card, not a co-processor. You can't design one to compete against the other...

    nVidia is going to face (and probably already has faced) massive technical issues on this one, compounded by a ridiculous TDP and a price they can't possibly turn profitable. Maybe if Larrabee were out we'd be looking at a different competitive landscape, but I think for now gamers are more interested in gaming than in spending an extra $100-200 to fold proteins.

    (That said, this may end up benefiting their Quadro line significantly. Those sales are way too low-volume to save them if this thing fails in the consumer market, though...)
  20. gumpty

    gumpty

    Joined:
    Apr 29, 2008
    Messages:
    744 (0.34/day)
    Thanks Received:
    134
    Location:
    Auckland
    That said, that market also has much higher margins.

    I see this as a direction shift from Nvidia: they're starting to look at different areas for revenue (HPC etc.). They'll still be big in the discrete GPU market, but it won't be their sole focus. They may lose market share to ATI (and eventually Intel), but if they offset that with increased profit elsewhere then it won't matter. Indeed, they may be more stable as a company with a more diverse business model.
  21. ShinyG

    ShinyG New Member

    Joined:
    Sep 17, 2005
    Messages:
    186 (0.06/day)
    Thanks Received:
    13
    Location:
    Romania
    I think nVidia are a little bit behind in their mentality. They need to think about the future and invest in the same open standards, or at least in universally accepted standards. Right now, it seems to me like they're trying to use their power to push their own standards, which is perfectly natural in the business world, but as they lose discrete graphics market share to ATi/AMD and eventually Intel, as gumpty predicted, they will lose the power to enforce those proprietary standards.
  22. buggalugs

    buggalugs

    Joined:
    Jul 19, 2008
    Messages:
    896 (0.43/day)
    Thanks Received:
    132
    Location:
    Australia
    Nvidia is obsolete

    For Nvidia to not have a DX11 card ready in 2009 is a major fail. This card could be 4-5 months away, and I doubt even die-hard Nvidia lovers will be prepared to wait until next year while there are 5850s and 5870s around.
  23. Sihastru

    Sihastru

    Joined:
    Apr 26, 2009
    Messages:
    346 (0.19/day)
    Thanks Received:
    63
    Location:
    You are here.
    Which of the open standards is supported by ATi/AMD or Intel and not supported by nVidia?

    No, it's not. There are no DX11 titles to play yet; in 3 months maybe there will be a few, and nVidia will have its cards just in time for that. The 5850 and 5870 are just an incremental upgrade. For what Fermi is promising, I would wait another year.
  24. AsRock

    AsRock TPU addict

    Joined:
    Jun 23, 2007
    Messages:
    10,455 (4.19/day)
    Thanks Received:
    1,565
    Location:
    US
    Sounds to me like this might be a 5870 killer... And please don't give me that fanboy BS, as I like AMD/ATI much more. BUT if those shaders work as well as the older ones did, this is going to kick ass.
  25. Steevo

    Steevo

    Joined:
    Nov 4, 2005
    Messages:
    7,989 (2.59/day)
    Thanks Received:
    1,084
    Promising.

    ATI started with high-precision stream cores back with the X1K series, and now it has branched into DX11, with CUDA as the competing platform. This is all going to come down to consumers; this will be another "format war".

    I like the offerings of the green team this round: I love the native code drop-in and the expected performance at common tasks and folding, but I will probably hate the price. ATI might have a real problem here if they don't get their ass in gear with some software to run on their hardware and show it to be as good as or better than NV's. I for one am tired of paying either company for a card, hearing all the options, and only having a few actually made and working. I bought a high-end high-def camcorder, and a card I understood could/was going to handle the format and do it quickly. I still use CPU-based software to manipulate my movies. FAIL...
    10 Million points folded for TPU
