Thursday, May 17th 2012

GK110 Packs 2880 CUDA Cores, 384-bit Memory Interface: Die-Shot

With its competition held in check thanks to the strong performance of its GK104 silicon, NVIDIA was bold enough to release die-shots of its GK110 silicon, which makes its market entry as the Tesla K20 GPU-compute accelerator. This opened the floodgates to speculation from various sources about the minute details of the new chip. We found one of the most plausible analyses, by Beyond3D community member "fellix": its author appears to have charted out the chip's component layout through pattern recognition and educated guesswork.

It identifies the 7.1 billion-transistor GK110 silicon as having 15 streaming multiprocessors (SMX). A little earlier this week, sources close to NVIDIA confirmed the SMX count to TechPowerUp. NVIDIA revealed that the chip retains the SMX design of GK104, in which each unit holds 192 CUDA cores. Going by that, GK110 has a total of 2880 cores. Blocks of SMX units surround a centrally-located command processor, along with six setup pipelines and a portion holding the ROPs and memory controllers. There are a total of six GDDR5 PHYs, which could amount to a 384-bit wide memory interface. The chip talks to the rest of the system over PCI-Express 3.0.
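The headline numbers follow directly from that layout. As a quick sanity check (assuming the conventional 64-bit width per GDDR5 PHY, which the die-shot analysis doesn't state explicitly):

```python
# Back-of-the-envelope check of the GK110 figures from the die-shot analysis.
SMX_COUNT = 15          # streaming multiprocessors identified on the die
CORES_PER_SMX = 192     # CUDA cores per SMX, carried over from GK104

cuda_cores = SMX_COUNT * CORES_PER_SMX
print(cuda_cores)       # 2880 CUDA cores

GDDR5_PHYS = 6          # memory PHYs visible on the die
BITS_PER_PHY = 64       # each GDDR5 PHY is conventionally 64 bits wide

bus_width = GDDR5_PHYS * BITS_PER_PHY
print(bus_width)        # 384-bit memory interface
```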

Source: Beyond3D Forum

65 Comments on GK110 Packs 2880 CUDA Cores, 384-bit Memory Interface: Die-Shot

#1
Hustler
Sigh...enough already with these bullshit uber high end nonsense cards that will only sell to basement dwelling nerds jerking off to a few benchmark scores.

Give us the $200 660 Ti with 2GB VRAM and low power draw, you know, a card that the majority of PC gamers can actually afford to buy, or are sensible enough not to want OTT crap like a 690.
#2
Chappy
Any news on when I'll get to lay my hands on these chips? Late 2013? :ohwell:
#3
hardcore_gamer
Nvidia should fix the yield issues and make 680s available before making SKUs with an even bigger die.
#4
entropy13
by: hardcore_gamer
Nvidia should fix the yield issues and make 680s available before making SKUs with an even bigger die.
It took 3 posts. :laugh::laugh::laugh:
#5
the54thvoid
by: Hustler
Sigh...enough already with these bullshit uber high end nonsense cards that will only sell to basement dwelling nerds jerking off to a few benchmark scores.

Give us the $200 660 Ti with 2GB VRAM and low power draw, you know, a card that the majority of PC gamers can actually afford to buy, or are sensible enough not to want OTT crap like a 690.
It's a tech site. GK110 is NOT for gaming. It's a compute card. This card is of interest to the scientific and HPC market - it's definitely newsworthy.

Anandtech also has a nice breakdown of K10 and K20.

http://www.anandtech.com/show/5840/gtc-2012-part-1-nvidia-announces-gk104-based-tesla-k10-gk110-based-tesla-k20

It adds this about the CUDA cores:
GK110 SMXes will contain 192 CUDA cores (just like GK104), but deviating from GK104 they will contain 64 CUDA FP64 cores (up from 8), which combined with the much larger SMX count is what will make K20 so much more powerful at double precision math than K10.
#6
btarunr
Editor & Senior Moderator
by: Chappy
Any news on when I'll get to lay my hands on these chips? Late 2013? :ohwell:
Autumn-Winter. Probably as "GTX 780".
#7
hardcore_gamer
by: entropy13
It took 3 posts.
I'm having a connection error here. The Wi-Fi signal strength is very low in my lab.
#8
hardcore_gamer
by: the54thvoid
It's a tech site. GK110 is NOT for gaming. It's a compute card. This card is of interest to the scientific and HPC market - it's definitely newsworthy.
I think there is going to be a GeForce version of this card for gaming.
#9
the54thvoid
by: hardcore_gamer
I think there is going to be a GeForce version of this card for gaming.
I honestly can't say either way, but the fact that GK110 has far more CUDA cores aimed at double-precision (compute-centric) work means the GK110 architecture will have some compute-only design.

Tesla cards are always clocked low for power efficiency, but a fast-clocked GK110 will consume quite a bit of power. I don't know if Nvidia have any plans to release GK110 as a desktop part. Maybe there will be a revision of GK110 to GK114 for the desktop as a GTX 7xx card.
#10
nikko
A 2 SMX x 5 GPC + 4 GDDR5 PHY part derives easily from this one; that's 1920 of the new perfected FP64 cores for a mid-range card.

And history repeats itself, like with the 8800 GT's 128 cores against the GTX 280's 240 cores, separated by 9 months, the 8800 GT dropping to $160, $110, and $86 shortly after that; the GTX 670 being comparable to the 8800 GT in this case.
#11
Benetanegia
GK110 SMXes will contain 192 CUDA cores (just like GK104), but deviating from GK104 they will contain 64 CUDA FP64 cores (up from 8), which combined with the much larger SMX count is what will make K20 so much more powerful at double precision math than K10.
Hmm 960 full-rate FP64 cores is something noteworthy, definitely.

But what I'd like to know is whether the SMX is composed of 192 FP32 + 64 FP64 shaders, or of only 192 shaders of which 64 are DP-capable. And is either of those options really so much better than what they did on Fermi (for an HPC part I mean, for gaming there is no doubt)? Because 7 billion transistors is quite a lot; it would allow for a Fermi-based chip with at least 1280 SPs, I'm sure. How that would translate to performance and perf/watt is another story, but remember that a large part of why Kepler is so much more efficient is that Nvidia worked closely with TSMC from the start, something they never did for Fermi. The sheer architectural benefit on the perf/watt front is not so clear to me since I heard of that relationship*. For GK107 the benefit is clearer, but Kepler does not seem to scale as well as Fermi did as you add SM(X)es. Or maybe it's just GK104 that has too many; admittedly it's not like we have many chips to compare. Of course GK110 might/should use dynamic schedulers if they really want good HPC performance in all situations, and that might be the culprit of the "poor" scaling, so we'll see. And I'm just rambling so...

*Or the lack of such a relationship for Fermi, because I admit that I used to take collaboration between a foundry and its customers for granted; I never thought it would be something "extraordinary".

by: the54thvoid
I honestly can't say either way, but the fact that GK110 has far more CUDA cores aimed at double-precision (compute-centric) work means the GK110 architecture will have some compute-only design.
Remember that all Fermi chips, including low-end ones, had DP-capable shaders (1:4 ratio), and GF100/110 had 1:2 DP shaders. Now the gaming-oriented Kepler chips have far less DP capability, which does not mean that GK110 is less aimed at gaming than the entire Fermi line was. For example, there's no mention of a reduced number of texture mapping units, and except for the additional FP64 shaders the SMXes are supposedly identical, so they didn't want to compromise gaming performance.
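To put the counts behind those figures in numbers (a quick sketch; the 8-SMX configuration of GK104 is the chip's known layout, not something stated in this thread):

```python
# FP64 unit counts implied by the per-SMX figures discussed above:
# GK104 carries 8 FP64 units in each of its 8 SMXes, while GK110 is
# said to carry 64 FP64 units in each of its 15 SMXes.
def fp64_units(units_per_smx: int, smx_count: int) -> int:
    """Total FP64 (double-precision) units on a chip."""
    return units_per_smx * smx_count

print(fp64_units(8, 8))    # GK104: 64 FP64 units
print(fp64_units(64, 15))  # GK110: 960 FP64 units, the figure quoted above
```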
#12
Aquinus
Resident Wat-man
by: hardcore_gamer
I'm having a connection error here. The Wi-Fi signal strength is very low in my lab.
by: hardcore_gamer
I think there is going to be a GeForce version of this card for gaming.
Confucius say user who double posts didn't read the rules.
Please don't double post, there is an edit button for a reason. Thanks. :)
#13
techtard
by: Hustler
Sigh...enough already with these bullshit uber high end nonsense cards that will only sell to basement dwelling nerds jerking off to a few benchmark scores.

Give us the $200 660Ti with 2GB Vram and low power draw, you know, a card that the majority of Pc gamers can actually afford to buy or sensible enough not to want OTT crap like a 690.
Gotta love angry poor people who lack reading skills. Stop being such a self-entitled whiner.
#14
Shihabyooo
by: Hustler
Sigh...enough already with these bullshit uber high end nonsense cards that will only sell to basement dwelling nerds jerking off to a few benchmark scores.
Ahem..
You do realize that this is a tech site?
#15
Benetanegia
They have posted the white paper:

http://www.nvidia.com/content/PDF/kepler/NVIDIA-Kepler-GK110-Architecture-Whitepaper.pdf

Lots of interesting stuff. After a quick look at it, it looks like everything related to scheduling and warp creation is not only back to GF100 levels but goes a lot further. Honestly, looking at how they crammed in 2880 FP32 and 960 FP64 cores, plus all the other stuff that's close to 2x GK104, was it really necessary to simplify/cripple GK104's GPGPU capabilities so much? Apparently not on an area-efficiency basis; maybe for perf/watt? Not really, if their claim of 3x perf/watt is true. Maybe it was just so that GPGPU users had only one option: GK110-based parts. Damn you, Nvidia.

Ok. I'll continue reading.
#16
Completely Bonkers
I don't know why you are all taking Hustler so literally. What's with all this entitlement to criticise a guy who is excited about the affordable 660 Ti that we are all still waiting for? Why get your knickers in a twist about a little preamble said out of frustration with the wait? I count 3 pedantic humour nazis. Really! :rolleyes:
#17
SIGSEGV
what a pity, most peeps here are still dreaming that this card will become the GTX 780 :o
Nvidia has clearly split gaming cards and professional cards..
so, these are Tesla cards for GPU computation :)
#18
Benetanegia
by: SIGSEGV
what a pity, most peeps here are still dreaming that this card will become the GTX 780 :o
I don't know why you'd say that. It's not profitable to create a chip only for the low-volume HPC market. Economies of scale. It retains all the gaming stuff too; Nvidia didn't back down on anything in that regard, something they did do on the Fermi generation. This chip will most definitely become a GeForce eventually. Expecting it to be the GTX 780 and not the GTX 685, for example, is actually on the pessimistic/realistic side. We all expect this to come late or in 2013 now, and thus as the GTX 780. To dream would be to expect Nvidia to create a new chip for the GTX 780, instead of "milking" Kepler and taking full advantage of the opportunity that AMD so kindly gave them.

by: SIGSEGV
Nvidia has clearly split gaming cards and professional cards..
so, these are Tesla cards for GPU computation :)
You don't waste silicon on 240 texture units unless you want the part to have great gaming performance. Not even Quadros need texturing power; professional graphics is all about polygons.
#19
Prima.Vera
by: Hustler
Sigh...enough already with these bullshit uber high end nonsense cards that will only sell to basement dwelling nerds jerking off to a few benchmark scores.
Did you read the article properly, or are you just enjoying trolling and being silly??! :slap:

Those GPUs are not for "nerds jerking off" or even PC gaming, but for professional use, like CAD workstations, 3D simulation servers, etc., not for the average Joe. :shadedshu:
#20
Frick
Fishfaced Nincompoop
by: Completely Bonkers
I don't know why you are all taking Hustler so literally. What's with all this entitlement to criticise a guy who is excited about the affordable 660 Ti that we are all still waiting for? Why get your knickers in a twist about a little preamble said out of frustration with the wait? I count 3 pedantic humour nazis. Really! :rolleyes:
Humour nazis? Where is humour involved?
#21
erixx
....in his moustache :)
#22
atikkur
OK, if it becomes a Tesla, just fine, but benchmarks are still needed when it actually comes, just to see how far their compute stuff has progressed. I like the idea of compute for gaming though, so we can simulate everything for more lifelike graphics. It's just a shame that Nvidia, who started this, seems to be backing off of their own idea. Or maybe it's too much to ask for game devs to implement it? Don't know. I believe this could be a GeForce product too, in the next generation.
#23
SIGSEGV
by: Benetanegia
I don't know why you'd say that. It's not profitable to create a chip only for the low-volume HPC market. Economies of scale. It retains all the gaming stuff too; Nvidia didn't back down on anything in that regard, something they did do on the Fermi generation. This chip will most definitely become a GeForce eventually. Expecting it to be the GTX 780 and not the GTX 685, for example, is actually on the pessimistic/realistic side. We all expect this to come late or in 2013 now, and thus as the GTX 780. To dream would be to expect Nvidia to create a new chip for the GTX 780, instead of "milking" Kepler and taking full advantage of the opportunity that AMD so kindly gave them.



You don't waste silicon on 240 texture units unless you want the part to have great gaming performance. Not even Quadros need texturing power; professional graphics is all about polygons.
yeah, still you can expect it to come, nothing wrong with that :rolleyes:
even Nvidia hasn't yet released an official statement about a GTX 780 or a GTX 685 :), I'm sorry, I'm not a psychic, so the fact for me now is that GK110 is a Tesla card, and also that Nvidia has clearly already split its gaming cards and professional cards. ;)
#24
Benetanegia
by: SIGSEGV
yeah, still you can expect it to come, nothing wrong with that :rolleyes:
even Nvidia hasn't yet released an official statement about a GTX 780 or a GTX 685 :), I'm sorry, I'm not a psychic, so the fact for me now is that GK110 is a Tesla card, and also that Nvidia has clearly already split its gaming cards and professional cards. ;)
Nvidia didn't make any official statement about the GTX 680 even 2 weeks before it launched, same for the GTX 690, same for the 670. What makes you think they will make a statement about a card that would be launching in 6+ months (more like 9 months)? And what makes you think that's a clear sign of Nvidia splitting their business? Don't be ridiculous.
#25
SIGSEGV
by: Benetanegia
Nvidia didn't make any official statement about the GTX 680 even 2 weeks before it launched, same for the GTX 690, same for the 670. What makes you think they will make a statement about a card that would be launching in 6+ months (more like 9 months)? And what makes you think that's a clear sign of Nvidia splitting their business? Don't be ridiculous.
maybe you should read the full story of Fermi and Kepler and the previous generations of Nvidia gaming cards ;)