
GK110 Packs 2880 CUDA Cores, 384-bit Memory Interface: Die-Shot

btarunr

Editor & Senior Moderator
With its competition checked thanks to the strong performance of its GK104 silicon, NVIDIA was bold enough to release die-shots of its GK110 silicon, which made its market entry as the Tesla K20 GPU-compute accelerator. This opened the floodgates of speculation about the new chip's finer details, from various sources. We found the most plausible of these to be by Beyond3D community member "fellix". The image's author appears to have charted out the chip's component layout through pattern recognition and educated guesswork.

It identifies the 7.1 billion-transistor GK110 silicon as having 15 streaming multiprocessors (SMX). Earlier this week, sources close to NVIDIA confirmed the SMX count to TechPowerUp. NVIDIA revealed that the chip retains the SMX design of GK104, in which each unit holds 192 CUDA cores. Going by that, GK110 has a total of 2,880 cores. Blocks of SMX units surround a centrally-located command processor, along with six setup pipelines and a portion holding the ROPs and memory controllers. There are a total of six GDDR5 PHYs, which could amount to a 384-bit wide memory interface. The chip talks to the rest of the system over PCI-Express 3.0.
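The arithmetic behind those headline numbers can be sketched as follows. The per-SMX core count and the six-PHY count come from the article above; the 64-bit-per-PHY width is an assumption that matches GDDR5 convention:

```python
# Sketch of the GK110 figures quoted above (assumed layout, not official specs).
SMX_COUNT = 15        # streaming multiprocessors, per the die-shot analysis
CORES_PER_SMX = 192   # CUDA cores per SMX, carried over from GK104
GDDR5_PHYS = 6        # memory PHYs visible on the die
BITS_PER_PHY = 64     # assumed width of each GDDR5 PHY

total_cores = SMX_COUNT * CORES_PER_SMX   # 2880 CUDA cores
bus_width = GDDR5_PHYS * BITS_PER_PHY     # 384-bit memory interface

print(total_cores, bus_width)  # 2880 384
```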



 
Sigh...enough already with these bullshit uber high end nonsense cards that will only sell to basement dwelling nerds jerking off to a few benchmark scores.

Give us the $200 660Ti with 2GB Vram and low power draw, you know, a card that the majority of Pc gamers can actually afford to buy or sensible enough not to want OTT crap like a 690.
 
Any news on when I'll get to lay my hands on these chips? Late 2013? :ohwell:
 
Nvidia should fix the yield issues and make 680s available before making SKUs with an even bigger die.
 
Sigh...enough already with these bullshit uber high end nonsense cards that will only sell to basement dwelling nerds jerking off to a few benchmark scores.

Give us the $200 660Ti with 2GB Vram and low power draw, you know, a card that the majority of Pc gamers can actually afford to buy or sensible enough not to want OTT crap like a 690.

It's a tech site. GK110 is NOT for gaming. It's a compute card. This card is of interest to the scientific and HPC market - it's definitely newsworthy.

Anandtech also has a nice breakdown of K10 and K20.

http://www.anandtech.com/show/5840/...s-gk104-based-tesla-k10-gk110-based-tesla-k20

It adds this about the CUDA cores:

GK110 SMXes will contain 192 CUDA cores (just like GK104), but deviating from GK104 they will contain 64 CUDA FP64 cores (up from 8), which combined with the much larger SMX count is what will make K20 so much more powerful at double precision math than K10.
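The double-precision math in that quote works out as a quick sketch. The 15-SMX figure is taken from the article above, and GK104's 8-SMX layout (1536 cores / 192 per SMX) is an assumption on my part:

```python
# Double-precision core totals implied by the quoted breakdown (assumptions noted).
SMX_GK110 = 15            # full GK110, per the article above
SMX_GK104 = 8             # assumed: 1536 cores / 192 cores per SMX
FP64_PER_SMX_GK110 = 64   # per the quoted Anandtech breakdown
FP64_PER_SMX_GK104 = 8    # "up from 8"

fp64_gk110 = SMX_GK110 * FP64_PER_SMX_GK110   # 960 FP64 cores on a full GK110
fp64_gk104 = SMX_GK104 * FP64_PER_SMX_GK104   # 64 FP64 cores on GK104

print(fp64_gk110, fp64_gk104)  # 960 64
```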
 
It's a tech site. GK110 is NOT for gaming. It's a compute card. This card is of interest to the scientific and HPC market - it's definitely newsworthy.

I think there is going to be a GeForce version of this card for gaming.
 
I think there is going to be a GeForce version of this card for gaming.

I honestly can't say either way, but the fact that GK110 has far more CUDA cores aimed at double-precision (compute-centric) work means the GK110 architecture will have some compute-only design elements.

Tesla cards are always clocked low for power efficiency, but a fast-clocked GK110 will consume quite a bit of power. I don't know if Nvidia has any plans to release GK110 as a desktop part. Maybe there will be a revision of GK110 to GK114 for the desktop, as a GTX 7xx card.
 
A 2 SMX x 5 GPC + 4 GDDR5 PHY configuration derives easily from this one: 1920 of the new, perfected FP64 cores for a mid-range card, that is.

And history repeats itself, like the 8800 GT (128 cores) to the GTX 280 (240 cores), separated by 9 months, with the 8800 GT dropping to $160, $110 and $86 shortly after that. The GTX 670 is comparable to the 8800 GT in this case.
 
GK110 SMXes will contain 192 CUDA cores (just like GK104), but deviating from GK104 they will contain 64 CUDA FP64 cores (up from 8), which combined with the much larger SMX count is what will make K20 so much more powerful at double precision math than K10.

Hmm 960 full-rate FP64 cores is something noteworthy, definitely.

But what I'd like to know is whether the SMX is composed of 192 FP32 + 64 FP64 shaders, or of only 192 shaders of which 64 are DP-capable. And is either option really so much better than what they did on Fermi (for an HPC part, I mean; for gaming there is no doubt)? Because 7 billion transistors is quite a lot; it would allow for a Fermi-based chip with at least 1280 SPs, I'm sure.

How that would translate to performance and perf/watt is another story, but remember that a large part of why Kepler is so much more efficient is that NVIDIA worked closely with TSMC from the start, something they never did for Fermi. The sheer architectural benefit on the perf/watt front is not so clear to me since I heard of that relationship*. For GK107 the benefit is clearer, but Kepler does not seem to scale as well as Fermi did as you add SM(X)s. Or maybe it's just GK104 that has too many; admittedly, it's not like we have many chips to compare. Of course, GK110 might/should use dynamic schedulers if they really want good HPC performance in all situations, and that might be the culprit of the "poor" scaling, so we'll see. And I'm just rambling so...

*Or the lack of such a relationship for Fermi; I admit that I used to take such collaboration between a foundry and its customers for granted, and never thought it would be something "extraordinary".

I honestly can't say either way, but the fact that GK110 has far more CUDA cores aimed at double-precision (compute-centric) work means the GK110 architecture will have some compute-only design elements.

Remember that all Fermi chips, including low-end ones, had DP-capable shaders (1:4 ratio), and GF100/110 had 1:2 DP shaders. Gaming-oriented Kepler chips now have far less DP capability, which does not mean that GK110 is less aimed at gaming than the entire Fermi line was. For example, there's no mention of a reduced number of texture mapping units, and except for the additional FP64 shaders the SMXes are supposedly equal, so they didn't want to compromise gaming performance.
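The DP:SP ratios being compared in this thread can be lined up in a quick sketch. The Fermi ratios are from this post; GK110's assumes the 64-of-192 per-SMX split quoted earlier (none of these are official specs):

```python
from fractions import Fraction

# DP:SP shader ratios as discussed in the thread (assumed, not official specs)
ratios = {
    "GF100/GF110": Fraction(1, 2),    # full Fermi: 1:2 DP rate
    "other Fermi": Fraction(1, 4),    # low-end Fermi chips: 1:4
    "GK110":       Fraction(64, 192), # 64 FP64 per 192 FP32 -> 1:3
}

for chip, r in ratios.items():
    print(f"{chip}: 1:{r.denominator // r.numerator}")
```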
 
Sigh...enough already with these bullshit uber high end nonsense cards that will only sell to basement dwelling nerds jerking off to a few benchmark scores.

Give us the $200 660Ti with 2GB Vram and low power draw, you know, a card that the majority of Pc gamers can actually afford to buy or sensible enough not to want OTT crap like a 690.

Gotta love angry poor people who lack reading skills. Stop being such a self entitled whiner.
 
Sigh...enough already with these bullshit uber high end nonsense cards that will only sell to basement dwelling nerds jerking off to a few benchmark scores.

Ahem..
You do realize that this is a tech site ?
 
They have posted the white paper:

http://www.nvidia.com/content/PDF/kepler/NVIDIA-Kepler-GK110-Architecture-Whitepaper.pdf

Lots of interesting stuff. After a quick look at it, it does look like everything related to scheduling and warp creation is not only back to GF100 levels, but goes a lot further. Honestly, looking at how they crammed in 2880 FP32 and 960 FP64 cores and all the other stuff that is close to 2x that of GK104, was it really necessary to simplify/cripple GK104's GPGPU capabilities so much? Apparently not on an area-efficiency basis; maybe for perf/watt? Not really, if their claim of 3x perf/watt is true. Maybe it was just so that GPGPU users would have only one option: GK110-based parts. Damn you, NVIDIA.

Ok. I'll continue reading.
 
I don't know why you are all taking Hustler so literally. What's with all this entitlement to criticise a guy who is excited about the affordable 660 Ti that we are all still waiting for? Why get your knickers in a twist about a little preamble said out of frustration over the wait? I count 3 pedantic humour nazis. Really! :rolleyes:
 
What a pity; most peeps here are still dreaming that this card will become the GTX 780 :o
NVIDIA has clearly split its gaming cards and professional cards.
So, these are Tesla cards, for GPU computation :)
 
What a pity; most peeps here are still dreaming that this card will become the GTX 780 :o

I don't know why you'd say that. It's not profitable to create a chip only for the low-volume HPC market: economies of scale. It retains all the gaming hardware too; NVIDIA didn't back down on anything in that regard, something they did do in the Fermi generation. This chip will most definitely become a GeForce eventually. Expecting it to be the GTX 780, and not a GTX 685 for example, is actually on the pessimistic/realistic side. We all expect this to come late, or in 2013 now, and thus as the GTX 780. To dream would be to expect NVIDIA to create a new chip for the GTX 780, instead of "milking" Kepler and taking full advantage of the opportunity that AMD so kindly gave them.

NVIDIA has clearly split its gaming cards and professional cards.
So, these are Tesla cards, for GPU computation :)

You don't waste silicon on 240 texture units unless you want the part to have great gaming performance. Not even Quadros need texturing power; professional graphics is all about polygons.
 
Sigh...enough already with these bullshit uber high end nonsense cards that will only sell to basement dwelling nerds jerking off to a few benchmark scores.

Did you read the article properly, or are you just enjoying trolling and being silly? :slap:

Those GPUs are not for "nerds jerking off" or even PC gaming, but for professional use, like CAD workstations, 3D simulation servers, etc., not for the average Joe. :shadedshu:shadedshu:shadedshu
 
I don't know why you are all taking Hustler so literally. What's with all this entitlement to criticise a guy who is excited about the affordable 660 Ti that we are all still waiting for? Why get your knickers in a twist about a little preamble said out of frustration over the wait? I count 3 pedantic humour nazis. Really! :rolleyes:

Humour nazis? Where is humour involved?
 
OK, if it becomes a Tesla, just fine, but benchmarks are still needed when it actually arrives, just to know how far their compute push has progressed. I like the idea of compute for gaming, though, so we can simulate everything for more lifelike graphics. It's just a shame that NVIDIA, who started this, seems to be backing off its own idea; or maybe it's too much to ask game devs to implement it? I don't know. I believe this could become a GeForce product too, in the next generation.
 
I don't know why you'd say that. It's not profitable to create a chip only for the low-volume HPC market: economies of scale. It retains all the gaming hardware too; NVIDIA didn't back down on anything in that regard, something they did do in the Fermi generation. This chip will most definitely become a GeForce eventually. Expecting it to be the GTX 780, and not a GTX 685 for example, is actually on the pessimistic/realistic side. We all expect this to come late, or in 2013 now, and thus as the GTX 780. To dream would be to expect NVIDIA to create a new chip for the GTX 780, instead of "milking" Kepler and taking full advantage of the opportunity that AMD so kindly gave them.



You don't waste silicon on 240 texture units unless you want the part to have great gaming performance. Not even Quadros need texturing power; professional graphics is all about polygons.

Yeah, you can still expect it to come, nothing wrong with that :rolleyes:
Even NVIDIA hasn't yet released an official statement about a GTX 780 or a GTX 685 :). I'm sorry, I'm not psychic, so the fact for me now is that GK110 is a Tesla card, and NVIDIA has clearly already split its gaming cards and professional cards. ;)
 
Yeah, you can still expect it to come, nothing wrong with that :rolleyes:
Even NVIDIA hasn't yet released an official statement about a GTX 780 or a GTX 685 :). I'm sorry, I'm not psychic, so the fact for me now is that GK110 is a Tesla card, and NVIDIA has clearly already split its gaming cards and professional cards. ;)

Nvidia didn't make any official statement about the GTX 680 even 2 weeks before it launched; same for the GTX 690, same for the 670. What makes you think they will make a statement about a card that would be launching in 6+ months (more like 9 months)? And what makes you think that's a clear sign of NVIDIA splitting their business? Don't be ridiculous.
 