
NVIDIA GT300 ''Fermi'' Detailed

Discussion in 'News' started by btarunr, Sep 30, 2009.

  1. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.23/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
    Yeah that's true. Sorry Erocker if it seemed directed at you.

    It's just that the subject gets brought up again every two posts and, really, it has already been explained by many members. I don't think it's crazy to believe in the innocence of thousands of developers (individuals) who, IMO, are being insulted by the people presuming guilt. TBH, that's what gets me angry.
     
    skylamer and erocker say thanks.
  2. AlienIsGOD

    AlienIsGOD

    Joined:
    Aug 9, 2008
    Messages:
    3,706 (1.44/day)
    Thanks Received:
    1,789
    Location:
    Kingston, Ontario Canada
    WOW, 512 cores!!! Without a doubt this will be better than a 5870, but I'm thinking the power draw will be larger than the 5870's too.
     
    skylamer says thanks.
  3. newtekie1

    newtekie1 Semi-Retired Folder

    Joined:
    Nov 22, 2005
    Messages:
    21,598 (6.05/day)
    Thanks Received:
    7,496
    You realize that ATi had a similar program to TWIMTBP back in 2002, right? Oddly enough*, I remember seeing it all over the place in games like Unreal Tournament and in Source-based games**, both of which ran better on ATi hardware due to ATi's aid in development. Surprising that you would actually switch to them when they were in the middle of doing exactly what you are complaining about now...

    *I say oddly enough, because the Batman game that has caused so much uproar recently is actually based on an Unreal Engine.
    **Valve removed the ATi branding once ATi stopped working with them, and most other developers, to improve games before release.

    That worked wonderfully in the past: it probably allowed nVidia to compete better, eliminated consumer confusion, and lowered prices for the consumer, so I can't see how it was really a bad thing.

    However, this likely won't work with the upcoming generation of cards, as DX11 support will be required.
     
    1c3d0g and skylamer say thanks.
    Crunching for Team TPU More than 25k PPD
  4. Mistral

    Mistral

    Joined:
    Feb 23, 2008
    Messages:
    430 (0.16/day)
    Thanks Received:
    62
    Location:
    Montreal
    I'm all for lightning-fast encode times, but please explain how running C and Fortran would help that, since that part is a bit foggy to me. If nVidia can squeeze in extra "features" and keep prices and performance "competitive", all is peachy. We'll need to wait and see if that's the case, though.
     
    skylamer says thanks.
  5. PP Mguire

    PP Mguire New Member

    Joined:
    Aug 15, 2008
    Messages:
    5,005 (1.95/day)
    Thanks Received:
    453
    Location:
    Venus, Texas
    I do that now with a GTX280....:wtf:

    We gotta love paper launches and the wars starting over what somebody said... not actual proof and hard-launch benches.
     
    skylamer says thanks.
  6. aCid888*

    aCid888* New Member

    Joined:
    May 19, 2008
    Messages:
    2,754 (1.03/day)
    Thanks Received:
    645
    Location:
    In a state of flux...
    Why does every topic have to be derailed in some way by fanboys or general bullshit that has nothing to do with the subject at hand? :shadedshu



    We can sit here all day and chat about how card A will beat card B and get all enraged about it... or we can wait until it's actually released and base our views on solid facts.

    I know which way I'd prefer... but that said, this card does look to be a beast in the making; I just have my doubts about the way nVidia will choose to price it, as they often put a hefty price on 5% more "power".
     
    Binge, PP Mguire and skylamer say thanks.
  7. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.23/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
    If the chip can truly run C code natively*, it means a programmer doesn't have to do anything special to code a program to run on the GT300. They can just write it as they would to run it on the CPU. The only difference is that instead of thinking they have 4 cores available, they have to make their code suitable for running on 512. Previously it was as if, in order to write a book, you had to learn French, because that was what the GPU could understand; with GT300 you can write in English as you always have. A lot, if not most, applications and games are programmed in C/C++, and Fortran is widely used in science and industry.

    * I say that because it seems too good to be true TBH.
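    A rough toy sketch in plain C of the idea (the function name and the CORES count here are made up for illustration; on real hardware the work-splitting would be done by the driver/runtime and the cores would run concurrently, not in a serial loop):

```c
#include <stdio.h>

enum { CORES = 512 };  /* stand-in for the GT300's claimed core count */

/* Partition the iteration space so "core" c takes every CORES-th
   element: the same ordinary C loop body, just spread over more
   workers than the 4 you would target on a quad-core CPU. */
void scale_all(const int *in, int *out, int n) {
    for (int c = 0; c < CORES; c++)          /* one pass per "core" */
        for (int i = c; i < n; i += CORES)   /* that core's elements */
            out[i] = in[i] * 2;
}
```

The point being: the loop body stays ordinary C; only the partitioning changes when you go from 4 workers to 512.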
     
  8. soldier242 New Member

    Joined:
    Aug 13, 2008
    Messages:
    145 (0.06/day)
    Thanks Received:
    6
    Location:
    Badgecave ... somewhere in Dresden
    If that's all true, then the single-core GTX 380? will even beat the crap out of the 5870 X2 and still won't be tired after doing so... damn
     
  9. HalfAHertz

    HalfAHertz

    Joined:
    May 4, 2009
    Messages:
    1,925 (0.83/day)
    Thanks Received:
    404
    Location:
    Singapore
    I don't think it will be able to run C natively. I think this can only be done on x86 and RISC.
    Anyway, this sure sounds like a true powerhouse and really stresses that Nvidia wants to shatter the idea of the graphics card as just a means of entertainment.

    I think a lot of businesses and scientific laboratories are ready for a massively parallel alternative to the CPU. And once they adopt it, gamers and, more importantly, ordinary users are bound to follow.

    I mean come on, think about it for a second. Is there any better pick-up line than: "Hey baby, wanna come down to my crib and check out my quadruple-pumped super computer?"
     
    PP Mguire says thanks.
  10. PP Mguire

    PP Mguire New Member

    Joined:
    Aug 15, 2008
    Messages:
    5,005 (1.95/day)
    Thanks Received:
    453
    Location:
    Venus, Texas
    If I had it, she would :roll:
     
  11. Kaleid

    Joined:
    Jun 30, 2008
    Messages:
    119 (0.05/day)
    Thanks Received:
    16
    A monster... but boy, it won't be cheap with that transistor count plus a more expensive memory system.

    Likely too hot for my taste.

    Hopefully it will lower 5850 prices a bit, though. I might pick one of those up... or even wait for Juniper XT.
     
  12. El Fiendo

    El Fiendo

    Joined:
    Aug 22, 2008
    Messages:
    2,304 (0.90/day)
    Thanks Received:
    1,165
    Location:
    Edmonton, Alberta
    I hate to say it but I sure hope they suck at folding. I hope they provide no real gain over the current NVIDIA offering in terms of daily points produced.

    If it turns out they do rock the folding world and see great gains, I'll probably start scheming ways to change out 6 GTX 260 216s for 6 of these and a much lighter wallet.
     
  13. erocker

    erocker Super Moderator Staff Member

    Joined:
    Jul 19, 2006
    Messages:
    40,771 (12.24/day)
    Thanks Received:
    15,681
    Heh, and I keep mixing up this thread with the damn Batman thread. I should quit bitching as I'm pretty content with my current setup anyways.

    Cheers. :toast:
     
  14. happita

    happita

    Joined:
    Aug 7, 2007
    Messages:
    2,464 (0.84/day)
    Thanks Received:
    491
    It's ok, we all know you live in the batcave, which has a bitchin' setup ;)
     
  15. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.23/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
    That's exactly what they are saying the chip does.

    Implementing an ISA inside the GPU. That's what they say. An ISA is something too specific to be misinterpreted, IMO. Although there's no such thing yet as an ISA for C/C++ or Fortran, since they are compiled to run on x86 or PPC processors, it is true that many constructs in C/C++ map directly onto the x86 instruction set, and over time x86 has absorbed the most successful of them, making the instruction set grow. It can now probably be said that, for the basic things, C/C++ = x86. I suppose it's the same with Fortran, but I don't know Fortran myself, so I can't speak to that.

    All in all, what they are claiming is that they have implemented an ISA for those programming languages, so they are effectively claiming that for every core function in C/C++ and Fortran there is an instruction in the GPU that can execute it. In a way they have completely bypassed the CPU, except for the first instruction that is required to move execution to the GPU. Yes, Intel does have something to worry about.

    If the above is true, they will certainly own in folding. Not only would they be much faster, but there's not going to be a need for a GPU client to begin with. Just a pair of lines to make the CPU client run on the GPU. :eek:

    Now that I think about it, it might mean that GT300 could be the only processor inside a gaming console too, but it would run normal code very slowly. The truth is that the CPU is still very much needed to run normal code, because GPUs don't have branch prediction (although I wouldn't bet a penny on that at this point, just in case) and that is needed. Then again, C and Fortran have conditional expressions as core functions, so the ability to run them should be there, although at a high performance penalty compared to a CPU. A coder may take advantage of the raw power of the GPU and perform massive speculative execution, though.

    Sorry for the jargon and overall digression. :eek:
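    To illustrate the branch-prediction point: hardware without branch prediction can often dodge a conditional entirely by computing a mask and selecting, which is roughly how SIMD-style hardware predicates per-lane conditionals (toy C sketch, the function names are mine):

```c
/* Branchy version: fine on a CPU with branch prediction. */
int clamp_branch(int x) { return x < 0 ? 0 : x; }

/* Predicated version: no branch at all. The mask is all-ones
   when x >= 0 and zero otherwise, so the AND selects x or 0. */
int clamp_predicated(int x) {
    int mask = -(x >= 0);
    return x & mask;
}
```

Both "sides" effectively get computed every time, which is exactly the performance penalty (and the speculative flavour) mentioned above.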
     
    Last edited: Sep 30, 2009
  16. El Fiendo

    El Fiendo

    Joined:
    Aug 22, 2008
    Messages:
    2,304 (0.90/day)
    Thanks Received:
    1,165
    Location:
    Edmonton, Alberta
    And I bet my basement would sound like a bunch of harpies getting gang banged by a roving group of banshees with 6 GT300s added to my setups.
     
  17. Millenia

    Millenia New Member

    Joined:
    Apr 20, 2008
    Messages:
    99 (0.04/day)
    Thanks Received:
    8
    Location:
    Turenki, Finland
    I'm going with ATI/AMD again, whether this is much better or not, due to my mobo, but of course I'd like to see it be at least competitive to drive the prices down.
     
  18. eidairaman1

    eidairaman1

    Joined:
    Jul 2, 2007
    Messages:
    14,206 (4.76/day)
    Thanks Received:
    2,072
    Sorry, UT 99 was a TWIMTBP title, along with UT2K3/4 and UT3, so I don't see your point. The only thing I really saw was the oft-delayed HL2 with an ATI badge on it.

     
  19. HalfAHertz

    HalfAHertz

    Joined:
    May 4, 2009
    Messages:
    1,925 (0.83/day)
    Thanks Received:
    404
    Location:
    Singapore
    I dunno, the biggest problem I see with coding C on a non-x86 architecture is the complexity of the code. The SIMD/MIMD architecture of GPUs is closer to RISC, and from what I know it's much harder to write code for RISC than it is for x86, but once you have working code, the benefits can be enormous.

    I'd love to see Nvidia's solution from a nerd's POV rather than anything else. If they really accomplished what they state here, that would render Larrabee useless and obsolete before it even comes out and create some serious competition in the HPC market.
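    A small example of the kind of restructuring meant above: the scalar loop is trivial to write, while the SIMD-flavoured version makes the programmer expose independent work by hand (plain C sketch, names invented here):

```c
/* Straightforward scalar sum: one dependent add per iteration. */
long sum_scalar(const int *a, int n) {
    long s = 0;
    for (int i = 0; i < n; i++) s += a[i];
    return s;
}

/* SIMD-style rewrite: four independent accumulators so four lanes
   (or cores) could work in parallel; the leftover tail is scalar. */
long sum_simd_style(const int *a, int n) {
    long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += a[i]; s1 += a[i + 1]; s2 += a[i + 2]; s3 += a[i + 3];
    }
    long s = s0 + s1 + s2 + s3;
    for (; i < n; i++) s += a[i];
    return s;
}
```

Same result either way, but the second form is the extra work you sign up for when targeting wide-parallel hardware.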
     
  20. El Fiendo

    El Fiendo

    Joined:
    Aug 22, 2008
    Messages:
    2,304 (0.90/day)
    Thanks Received:
    1,165
    Location:
    Edmonton, Alberta
  21. newtekie1

    newtekie1 Semi-Retired Folder

    Joined:
    Nov 22, 2005
    Messages:
    21,598 (6.05/day)
    Thanks Received:
    7,496
    Maybe you're right, but I could have sworn UT2K3 had GITG branding on it. Maybe not, though; it's been so long.

    The main point still stands though, ATi had/has a similar program that did the exact same thing.
     
    eidairaman1 says thanks.
    Crunching for Team TPU More than 25k PPD
  22. Kaleid

    Joined:
    Jun 30, 2008
    Messages:
    119 (0.05/day)
    Thanks Received:
    16
    There will be some waiting..

    "Then timing is just as valid, because while Fermi currently exists on paper, it's not a product yet. Fermi is late. Clock speeds, configurations and price points have yet to be finalized. NVIDIA just recently got working chips back and it's going to be at least two months before I see the first samples. Widespread availability won't be until at least Q1 2010.
    I asked two people at NVIDIA why Fermi is late; NVIDIA's VP of Product Marketing, Ujesh Desai and NVIDIA's VP of GPU Engineering, Jonah Alben. Ujesh responded: because designing GPUs this big is "fucking hard".

    Source:
    http://www.anandtech.com/video/showdoc.aspx?i=3651

    Another informative article:
    http://www.techreport.com/articles.x/17670
     
    Last edited: Sep 30, 2009
    Benetanegia, eidairaman1 and erocker say thanks.
  23. Steevo

    Steevo

    Joined:
    Nov 4, 2005
    Messages:
    9,059 (2.52/day)
    Thanks Received:
    1,650
    100 draw distance and all high settings? You are delusional, mistaken, or full of shit.

    My 1GB video card can't run it. It isn't the processor power required, it's the vmem, plain and simple.
     
    10 Million points folded for TPU
  24. El Fiendo

    El Fiendo

    Joined:
    Aug 22, 2008
    Messages:
    2,304 (0.90/day)
    Thanks Received:
    1,165
    Location:
    Edmonton, Alberta
    For anyone who wants to watch webcasts of NVIDIA's GPU Tech Conference.

    Linky

    Apparently it's all in 3D this year. An interesting side effect is that the press can't get any decent shots of the slides they show. The question is whether or not that was intentional, to help keep people guessing.
     
    Binge says thanks.
  25. Kaleid

    Joined:
    Jun 30, 2008
    Messages:
    119 (0.05/day)
    Thanks Received:
    16
