
GK110 Packs 2880 CUDA Cores, 384-bit Memory Interface: Die-Shot

Discussion in 'News' started by btarunr, May 17, 2012.

  1. btarunr

    btarunr Editor & Senior Moderator Staff Member

    Joined:
    Oct 9, 2007
    Messages:
    28,368 (11.31/day)
    Thanks Received:
    13,607
    Location:
    Hyderabad, India
    With its competition checked thanks to the good performance of its GK104 silicon, NVIDIA was bold enough to release die-shots of its GK110 silicon, which made its market entry as the Tesla K20 GPU-compute accelerator. This opened the floodgates of speculation from various sources about the minute details of the new chip. We found the most plausible of these to be from Beyond3D community member "fellix". The source of the image appears to have charted out the component layout of the chip through pattern recognition and educated guesswork.

    It identifies the 7.1 billion-transistor GK110 silicon as having 15 streaming multiprocessors (SMX). A little earlier this week, sources close to NVIDIA confirmed the SMX count to TechPowerUp. NVIDIA revealed that the chip will retain the SMX design of GK104, in which each SMX holds 192 CUDA cores. Going by that, GK110 has a total of 2880 cores. Blocks of SMX units surround a centrally-located command processor, along with six setup pipelines and a portion holding the ROPs and memory controllers. There are a total of six GDDR5 PHYs, which could amount to a 384-bit wide memory interface. The chip talks to the rest of the system over PCI-Express 3.0.
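    A quick sanity check of those figures (a minimal sketch only; the 64-bit width per GDDR5 PHY is an assumption based on typical designs, not something NVIDIA has confirmed):

    ```python
    # Back-of-the-envelope check of the specs quoted above.
    SMX_COUNT = 15          # streaming multiprocessors identified on the die-shot
    CORES_PER_SMX = 192     # CUDA cores per SMX, carried over from GK104
    GDDR5_PHYS = 6          # memory PHYs visible on the die
    BITS_PER_PHY = 64       # assumed width of each GDDR5 channel

    total_cores = SMX_COUNT * CORES_PER_SMX   # 2880 CUDA cores
    bus_width = GDDR5_PHYS * BITS_PER_PHY     # 384-bit memory interface
    print(f"{total_cores} CUDA cores, {bus_width}-bit memory interface")
    ```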

    [Image: annotated GK110 die-shot]

    Source: Beyond3D Forum
  2. Hustler New Member

    Joined:
    Jul 29, 2010
    Messages:
    100 (0.07/day)
    Thanks Received:
    39
    Sigh...enough already with these bullshit uber high end nonsense cards that will only sell to basement dwelling nerds jerking off to a few benchmark scores.

    Give us the $200 660 Ti with 2 GB of VRAM and low power draw, you know, a card that the majority of PC gamers can actually afford to buy, or are sensible enough not to want OTT crap like a 690.
    Scheich, Law-II, Assimilator and 11 others say thanks.
  3. Chappy

    Chappy New Member

    Joined:
    Oct 3, 2011
    Messages:
    82 (0.08/day)
    Thanks Received:
    6
    Any news on when I'll get to lay my hands on these chips? Late 2013? :ohwell:
  4. hardcore_gamer

    hardcore_gamer

    Joined:
    Jan 25, 2011
    Messages:
    379 (0.29/day)
    Thanks Received:
    170
    Location:
    Fabry Perot cavity,AlGaAs-GaAs Heterojunction
    Nvidia should fix the yield issues and make 680s available before making SKUs with an even bigger die.
    heky says thanks.
  5. entropy13

    entropy13

    Joined:
    Mar 2, 2009
    Messages:
    4,915 (2.46/day)
    Thanks Received:
    1,193
    It took 3 posts. :laugh::laugh::laugh:
    AsRock, 1c3d0g and erixx say thanks.
  6. the54thvoid

    the54thvoid

    Joined:
    Dec 14, 2009
    Messages:
    3,241 (1.89/day)
    Thanks Received:
    1,470
    Location:
    Glasgow - home of formal profanity
    It's a tech site. GK110 is NOT for gaming. It's a compute card. This card is of interest to the scientific and HPC market - it's definitely newsworthy.

    Anandtech also has a nice breakdown of K10 and K20.

    http://www.anandtech.com/show/5840/...s-gk104-based-tesla-k10-gk110-based-tesla-k20

    Adds this about the CUDA cores:

    D007, 1c3d0g, mdbrotha03 and 1 other person say thanks.
  7. btarunr

    btarunr Editor & Senior Moderator Staff Member

    Joined:
    Oct 9, 2007
    Messages:
    28,368 (11.31/day)
    Thanks Received:
    13,607
    Location:
    Hyderabad, India
    Autumn-Winter. Probably as "GTX 780".
    Chappy and hardcore_gamer say thanks.
  8. hardcore_gamer

    hardcore_gamer

    Joined:
    Jan 25, 2011
    Messages:
    379 (0.29/day)
    Thanks Received:
    170
    Location:
    Fabry Perot cavity,AlGaAs-GaAs Heterojunction
    I'm having a connection error; the Wi-Fi signal strength is very low in my lab.
  9. hardcore_gamer

    hardcore_gamer

    Joined:
    Jan 25, 2011
    Messages:
    379 (0.29/day)
    Thanks Received:
    170
    Location:
    Fabry Perot cavity,AlGaAs-GaAs Heterojunction
    I think there is going to be a GeForce version of this card for gaming.
  10. the54thvoid

    the54thvoid

    Joined:
    Dec 14, 2009
    Messages:
    3,241 (1.89/day)
    Thanks Received:
    1,470
    Location:
    Glasgow - home of formal profanity
    I honestly can't say either way, but the fact that GK110 has far more CUDA cores aimed at double-precision (compute-centric) work means the GK110 architecture will have some compute-only design.

    Tesla cards are always clocked low for power efficiency, but a fast-clocked GK110 will consume quite a bit of power. I don't know if Nvidia has any plans to release GK110 as a desktop part. Maybe there will be a revision of GK110 to GK114 for desktop as a GTX 7xx card.
  11. nikko New Member

    Joined:
    Dec 16, 2011
    Messages:
    42 (0.04/day)
    Thanks Received:
    2
    A derivative with 2 SMX per GPC across 5 GPCs and 4 GDDR5 PHYs follows easily from this one; that's 1920 of the new, perfected FP64-capable cores for a mid-range card.

    And history repeats itself, like the 8800 GT (128 cores) to the GTX 280 (240 cores), separated by 9 months, with the 8800 GT dropping to $160, $110 and $86 shortly after that. The GTX 670 is the 8800 GT comparable in this case.
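    The arithmetic behind that speculated cut-down part, purely as a sketch of the poster's numbers (the 2-SMX-per-GPC, 5-GPC, 4-PHY configuration is speculation, not a confirmed SKU):

    ```python
    # Speculative mid-range derivative of GK110 discussed above (not a confirmed part).
    SMX_PER_GPC = 2
    GPC_COUNT = 5
    CORES_PER_SMX = 192
    GDDR5_PHYS = 4
    BITS_PER_PHY = 64       # assumed width of each GDDR5 channel

    cores = SMX_PER_GPC * GPC_COUNT * CORES_PER_SMX   # 1920 CUDA cores
    bus = GDDR5_PHYS * BITS_PER_PHY                   # 256-bit memory interface
    print(f"{cores} cores, {bus}-bit bus")
    ```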
    Last edited: May 17, 2012
  12. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.49/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
    Hmm, 960 full-rate FP64 cores is definitely something noteworthy.

    But what I'd like to know is whether the SMX is composed of 192 FP32 + 64 FP64 shaders, or only 192 shaders of which 64 are DP-capable. And is either of those options really so much better than what they did on Fermi (for an HPC part, I mean; for gaming there is no doubt)? Because 7 billion transistors is quite a lot; it would allow for a Fermi-based chip with at least 1280 SPs, I'm sure. How that would translate to performance and perf/watt is another story, but remember that a large part of why Kepler is so much more efficient is that NVIDIA worked closely with TSMC from the start, something they never did for Fermi. The sheer architectural benefit on the perf/watt front is not so clear to me since I heard of such a relationship*. For GK107 the benefit is clearer, but Kepler does not seem to scale as well as Fermi did as you add SM(X)s. Or maybe it's just GK104 that has too many; admittedly it's not like we have many chips to compare. Of course GK110 might/should use dynamic schedulers if they really want good HPC performance in all situations, and that might be the culprit of the "poor" scaling, so we'll see. And I'm just rambling so...

    *Or lack of relationship with Fermi, because I admit that I used to take such collaboration between a foundry and its customers for granted; I never thought it would be something "extraordinary".

    Remember that all Fermi chips, including low-end ones, had DP-capable shaders (1:4 ratio), and GF100/110 had 1:2 DP shaders. Now gaming-oriented Kepler chips have a lot less DP capability, which does not mean that GK110 is less aimed at gaming than the entire Fermi line was. For example, there's no mention of a reduced number of texture mapping units, and except for the additional FP64 shaders the SMXs are supposedly equal, so that means they didn't want to compromise gaming performance.
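    Under the "192 FP32 + 64 FP64 per SMX" reading questioned above, the unit counts work out as follows (a sketch of the arithmetic only; which reading is correct was still an open question at this point in the thread):

    ```python
    # Per-SMX unit mix under the 192 + 64 reading discussed above.
    FP32_PER_SMX = 192
    FP64_PER_SMX = 64
    SMX_COUNT = 15

    dp_sp_ratio = FP32_PER_SMX // FP64_PER_SMX     # 1:3 DP:SP rate per SMX
    total_fp64 = SMX_COUNT * FP64_PER_SMX          # 960 dedicated FP64 units chip-wide
    print(f"DP:SP = 1:{dp_sp_ratio}, {total_fp64} FP64 units in total")
    ```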
    Last edited: May 17, 2012
  13. Aquinus

    Aquinus Resident Wat-man

    Joined:
    Jan 28, 2012
    Messages:
    6,159 (6.58/day)
    Thanks Received:
    2,022
    Location:
    Concord, NH
    Confucius say user who double posts didn't read the rules.
    Please don't double post, there is an edit button for a reason. Thanks. :)
    1c3d0g says thanks.
  14. techtard

    techtard

    Joined:
    Sep 4, 2009
    Messages:
    930 (0.51/day)
    Thanks Received:
    204
    Gotta love angry poor people who lack reading skills. Stop being such a self-entitled whiner.
    mdbrotha03 says thanks.
  15. Shihabyooo

    Shihabyooo

    Joined:
    Jan 10, 2011
    Messages:
    566 (0.43/day)
    Thanks Received:
    110
    Location:
    A sad excuse of a country called Sudan.
    Ahem...
    You do realize that this is a tech site?
  16. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.49/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
    They have posted the white paper:

    http://www.nvidia.com/content/PDF/kepler/NVIDIA-Kepler-GK110-Architecture-Whitepaper.pdf

    Lots of interesting stuff. After a quick look at it, it does look like everything related to scheduling and warp creation is not only back to GF100 levels, but goes a lot further. Honestly, looking at how they crammed in 2880 FP32 and 960 FP64 units and all the other stuff that is close to 2x that of GK104, was it really necessary to simplify/cripple GK104's GPGPU capabilities so much? Apparently not on an area-efficiency basis; maybe for perf/watt? Not really, if their claim of 3x perf/watt is true. Maybe it was just so that GPGPU users had only one option: GK110-based parts. Damn you nvidia.
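    A rough peak-throughput sketch from those unit counts (the clock is a placeholder assumption; no GK110 clocks had been announced at this point, and a shipping part may well disable SMXs for yield):

    ```python
    # Hypothetical peak-FLOPS estimate for a full 15-SMX GK110; the clock is a guess.
    FP32_CORES = 2880
    FP64_UNITS = 960
    FLOPS_PER_UNIT_PER_CLOCK = 2    # one fused multiply-add counts as two FLOPs
    clock_ghz = 0.7                 # placeholder Tesla-style clock, not announced

    sp_gflops = FP32_CORES * FLOPS_PER_UNIT_PER_CLOCK * clock_ghz   # ~4032 GFLOPS SP
    dp_gflops = FP64_UNITS * FLOPS_PER_UNIT_PER_CLOCK * clock_ghz   # ~1344 GFLOPS DP
    print(f"~{sp_gflops:.0f} GFLOPS SP, ~{dp_gflops:.0f} GFLOPS DP at {clock_ghz} GHz")
    ```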

    Ok. I'll continue reading.
    Law-II and Crap Daddy say thanks.
  17. Completely Bonkers New Member

    Joined:
    Feb 6, 2007
    Messages:
    2,580 (0.94/day)
    Thanks Received:
    516
    I don't know why you are all taking Hustler so literally. What's with all this self-entitlement to criticise a guy who is excited about the affordable 660 Ti that we are all still waiting for? Why get your knickers in a twist about a little preamble said out of frustration because of the wait? I count 3 pedantic humour nazis. Really! :rolleyes:
  18. SIGSEGV

    SIGSEGV

    Joined:
    Mar 31, 2012
    Messages:
    505 (0.58/day)
    Thanks Received:
    107
    What a pity, most of the peeps here are still dreaming that this card will become the GTX 780 :eek:
    NVIDIA has clearly split its gaming cards and professional cards...
    so this is a Tesla card for GPU computation :)
  19. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.49/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
    I don't know why you'd say that. It's not profitable to create a chip only for the low-volume HPC market; economies of scale. It retains all the gaming stuff too; Nvidia didn't back down on anything in that regard, something they did do in the Fermi generation. This chip will most definitely become a GeForce eventually. Expecting it to be the GTX 780, and not a GTX 685 for example, is actually on the pessimistic/realistic side. We all expect this to come late or in 2013 now, and thus GTX 780. To dream would be to expect Nvidia to create a new chip for the GTX 780, instead of "milking" Kepler and taking full advantage of the opportunity that AMD so kindly gave them.

    You don't waste silicon on 240 texture units unless you want the part to have great gaming performance. Not even Quadros need texturing power; professional graphics is all about polygons.
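    For reference, the 240 texture-unit figure follows from keeping GK104's 16 TMUs per SMX, which is an assumption rather than something visible in the die-shot:

    ```python
    # Texture-unit count implied above, assuming GK110 keeps 16 TMUs per SMX like GK104.
    SMX_COUNT = 15
    TMUS_PER_SMX = 16
    print(SMX_COUNT * TMUS_PER_SMX)   # 240 texture mapping units
    ```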
  20. Prima.Vera

    Prima.Vera

    Joined:
    Sep 15, 2011
    Messages:
    2,202 (2.06/day)
    Thanks Received:
    286
    Did you read the article properly, or are you just enjoying trolling and being silly??! :slap:

    Those GPUs are not for "nerds jerking off" or even PC gaming, but for professional use, like CAD workstations, 3D simulation servers, etc., not for the average Joe. :shadedshu:shadedshu:shadedshu
  21. Frick

    Frick Fishfaced Nincompoop

    Joined:
    Feb 27, 2006
    Messages:
    10,477 (3.38/day)
    Thanks Received:
    2,138
    Humour nazis? Where is humour involved?
  22. erixx

    erixx

    Joined:
    Mar 24, 2010
    Messages:
    3,258 (2.02/day)
    Thanks Received:
    435
    ....in his moustache :)
  23. atikkur

    Joined:
    May 3, 2012
    Messages:
    45 (0.05/day)
    Thanks Received:
    0
    Location:
    Jakarta
    OK, if it becomes a Tesla, just fine, but benchmarks will still be needed when it actually arrives, just to know how far their compute side has progressed. I like the idea of compute for gaming though, so we can simulate everything for more lifelike graphics. It's just a shame that NVIDIA, who started this, seems to be backing off their own idea, or maybe it's too much to ask game devs to implement it? Don't know. I believe this could be a GeForce product too, in the next generation.
  24. SIGSEGV

    SIGSEGV

    Joined:
    Mar 31, 2012
    Messages:
    505 (0.58/day)
    Thanks Received:
    107
    Yeah, you can still expect it to come, nothing wrong with that :rolleyes:
    NVIDIA hasn't yet released an official statement about a GTX 780 or a GTX 685 :) I'm sorry, I'm not psychic, so the fact for me right now is that GK110 is a Tesla card, and NVIDIA has clearly already split its gaming cards and professional cards. ;)
  25. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.49/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
    Nvidia didn't make any official statement about the GTX 680 even 2 weeks before it launched; same for the GTX 690, same for the 670. What makes you think they will make a statement about a card that would be launching in 6+ months (more like 9 months)? And what makes you think that's a clear sign of Nvidia splitting their business? Don't be ridiculous.
    wolf says thanks.
