
GK104 Silicon Roughly As Big As G92b

btarunr

Editor & Senior Moderator
When the first PCB shot of the GK104 reference board surfaced, it sent punters estimating the die area of the GK104 GPU, which they pinned at somewhere around 320 mm². A newer close-up picture of the GK104 helped calculate the figure more accurately, down to around 300 mm² (±5 mm²). This calculation also takes into account that the GK104 chip package is as big as that of the G92, and the die just about as big; it was compared alongside a 55 nm G92b chip. GK104 is NVIDIA's newest performance GPU built on the Kepler architecture, on which SKUs such as the GeForce GTX 680 are based.
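(Just to show the sort of arithmetic behind an estimate like that, here is a rough sketch of scaling a die off a reference of known size in the same photo; the reference dimension and pixel counts below are placeholders, not the actual measurements used.)

```python
# Rough sketch of the die-size estimate: measure the die and a reference of
# known physical size (here, the package edge) in pixels on the same photo,
# then convert. All numbers below are illustrative placeholders.
package_width_mm = 40.0        # assumed reference dimension, not the real figure
package_width_px = 800.0       # measured off the photo (placeholder)
die_width_px, die_height_px = 346.0, 340.0   # placeholder pixel measurements

mm_per_px = package_width_mm / package_width_px
die_width_mm = die_width_px * mm_per_px
die_height_mm = die_height_px * mm_per_px
die_area_mm2 = die_width_mm * die_height_mm

print(f"~{die_width_mm:.1f} x {die_height_mm:.1f} mm = ~{die_area_mm2:.0f} mm^2")
```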



View at TechPowerUp Main Site
 
Many Thanks to Duckula for the tip.
 
Let's hope it costs the same and becomes the new bang-per-buck king like the G92 was :)
 
Wishful thinking; they are using it in the GTX 680, so it will surely be expensive.

All we really have is rumors, but originally GK104 was intended to replace the mid-range offerings of the 5xx series. Those cards don't sell for a fortune, so it's conceivable that these won't either. I'm more inclined to think that Nvidia and AMD are just going to price-fix, so what would have been the GTX660 becomes the GTX680, and is priced right around the 7970.

In an ideal world we'd see Nvidia continue their original plans and price their highest-end GK104 around $300, offering performance on par with or better than the HD7970, but I have my doubts.
 
Wasn't the NDA for reviewers supposed to be lifted today?
 
All we really have is rumors, but originally GK104 was intended to replace the mid-range offerings of the 5xx series. Those cards don't sell for a fortune, so it's conceivable that these won't either. I'm more inclined to think that Nvidia and AMD are just going to price-fix, so what would have been the GTX660 becomes the GTX680, and is priced right around the 7970.

In an ideal world we'd see Nvidia continue their original plans and price their highest-end GK104 around $300, offering performance on par with or better than the HD7970, but I have my doubts.

If it performs as fast as a 7970, they will price it accordingly; I bet my hat on it.

Wasn't the NDA for reviewers supposed to be lifted today?
Somehow I heard the same thing.
 
Maybe it lifts at noon? :P
 
How much longer do I have to wait! I am sitting on a dead 8800GT (died a week ago :(). I need a GFX Card quickly!
 
Wasn't the NDA for reviewers supposed to be lifted today?

I read that somewhere as well, but I also saw on SA that they were allegedly having a press event today.
 
Wishful thinking; they are using it in the GTX 680, so it will surely be expensive.

Unless they moved the naming scheme around, with the GTX680 now being the high-to-mid-range card :D
 
The way I am reading this is that nVidia has once again been way too over-ambitious with its high-end chip, the GK100. That has little to no chance of making it out this year, so nVidia needs a 'high-end' chip to make some money on.

Because the highly tailored shaders seem to work wonders in games set up to take advantage of them, i.e. games in the TWIMTBP program, nVidia are thinking they can overclock the 660 Ti and call it the 680, then cripple the others to make the slower cards.

If the rumors are to be believed, the GK104 will destroy some games (the optimised ones), but in 'fairly' developed games AMD's 7970 is the more powerful card.

Once again it comes down to price, but this round it's also about what games you play and whether nVidia 'helped' code them!
 
If the rumors are to be believed, the GK104 will destroy some games (the optimised ones), but in 'fairly' developed games AMD's 7970 is the more powerful card.

There are tons of TWIMTBP games that run faster on Radeon cards than GeForce. It means nothing.
 
An NDA date would be nice.
 
Well I wouldn't say it's roughly the same size. It's 1 mm bigger in both directions, so that falls in line with the 300 mm² number, maybe even slightly smaller if the comparison is properly made, because G92b was usually measured at 260 mm².

In any case it's a lot smaller than first reported, and without knowing anything else my performance expectations are certainly lower now. An almost 20% smaller die, assuming the same perf/mm² or perf/transistor, is going to produce a chip that is 20% slower than I first expected. If GK104 is faster than Tahiti, even if only by 5% or 10%, it would be very remarkable IMO, considering Tahiti has been measured at 360-380 mm², and considering where Nvidia is coming from.
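For what it's worth, the back-of-the-envelope math with the figures quoted in this thread (and the big assumption of equal perf/mm²) looks like this:

```python
# Back-of-the-envelope comparison, assuming equal perf/mm^2 (a big "if").
# Areas are the rough figures quoted in this thread; take them with salt.
gk104_area = 300.0                   # the newer ~300 mm^2 (+/- 5) estimate
tahiti_area = (360.0 + 380.0) / 2    # midpoint of the 360-380 mm^2 measurements

ratio = gk104_area / tahiti_area
print(f"GK104 is ~{1 - ratio:.0%} smaller than Tahiti")                      # ~19%
print(f"Equal perf/mm^2 would imply ~{ratio:.0%} of Tahiti's performance")   # ~81%
```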
 
Can I get my G92c now, please?
 
Well I wouldn't say it's roughly the same size. It's 1 mm bigger in both directions, so that falls in line with the 300 mm² number, maybe even slightly smaller if the comparison is properly made, because G92b was usually measured at 260 mm².

In any case it's a lot smaller than first reported, and without knowing anything else my performance expectations are certainly lower now. An almost 20% smaller die, assuming the same perf/mm² or perf/transistor, is going to produce a chip that is 20% slower than I first expected. If GK104 is faster than Tahiti, even if only by 5% or 10%, it would be very remarkable IMO, considering Tahiti has been measured at 360-380 mm², and considering where Nvidia is coming from.

I told you it wouldn't be much faster than the 7970 (in case it is, which I still doubt), but you're always over-optimistic about Nvidia products.
 
An NDA date would be nice.

Dude, we had a thread last week with W1zz saying he just went to the Kepler event. LATE LAST WEEK.

If W1zz could travel halfway across the world to get home, build up a rig for the card, get it tested, and write a proper review in like 3 days... man, he'd have had to not sleep since starting!


But that also helps put things into perspective... W1zz says nVidia gave him a chocolate cookie, and posted a pic. The NDA must expire relatively soon, but I cannot imagine that nV wouldn't give reviewers a week, at least, to get a review done. That puts things at some time next week, IMHO.

Of course, Z77 and IB are supposed to launch soon too, or something... and I haven't the foggiest idea when that is, although I'm supposed to be TPU's board reviewer. Clearly I'm not doing any launch reviews there, nor do I have access right now to unreleased hardware, so perhaps my judgement on NDAs and the time frame given to publish a review is not accurate. Oh well.

GTX570 for $250 local, on that day? That'd be awesome.

G92B stuck around for quite some time. I hope this GPU does as well.
 
Dude, we had a thread last week with W1zz saying he just went to the Kepler event. LATE LAST WEEK.

And Nvidia ate the cookie thread, so NO cookie :cry:
 
Given what's being said about "Dynamic Profiles", if Nvidia is now using GK104 and employing whiz-bang, rapidly changing profiles, what instrumentation might they have added to the chip in what might've been a running change? I mean, I would think that for this "Dynamic Clock" thing to really perform "on the fly", various sensors, data monitoring points, plus pins to feed such profiles in energetically in real time would be the right approach, and it would be unrivaled if implemented at the chip level.

The question I will want answered when this finally makes it "out into the wild": did Nvidia have this figured into the chip early in the design, or is this just a way of making a "silk purse from a sow's ear"?

If Nvidia has really applied the idea from the get-go, they might now be able to provide the granularity, response, and speed to implement smooth transitioning in actual gameplay. Or it could be an afterthought once they saw the GK110 was gimped, and the only other play was to call up the "B-leaguer" and pump it with steroids in the hope it can become proficient at every play and be the "game changer".

I really think that Nvidia is working from "nobody normally detects over 60-70 FPS", so they'll implement a throttling device that maintains that FPS by varying the power/clocks to limit it. It has merit: like a racecar on a windy track, excessive HP can be hard to apply, but a car with "enough HP" and a nitrous bottle to provide a quick boost when it needs to stretch its legs on the straightaways is a threat. When it's not an endurance race the nitrous bottle has advantages, though long term, good, well-modulated HP is what has the stamina to do it for hours on end without wreaking havoc mechanically.
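Purely to illustrate that "nitrous when you need it" idea, a toy control loop might look like the sketch below. This is my speculation, not a description of how Nvidia's dynamic clocks actually work, and all the clock numbers are made up:

```python
# Toy sketch of an FPS-targeted clock governor -- pure speculation, not a
# description of Nvidia's actual Dynamic Clock implementation.
TARGET_FPS = 60
BASE_MHZ, BOOST_MHZ, STEP_MHZ = 700, 1000, 25   # made-up placeholder clocks

def adjust_clock(current_mhz: int, measured_fps: float) -> int:
    """Nudge the core clock toward whatever just holds the FPS target."""
    if measured_fps < TARGET_FPS and current_mhz < BOOST_MHZ:
        return min(current_mhz + STEP_MHZ, BOOST_MHZ)   # hit the "nitrous"
    if measured_fps > TARGET_FPS + 5 and current_mhz > BASE_MHZ:
        return max(current_mhz - STEP_MHZ, BASE_MHZ)    # back off, save power/heat
    return current_mhz

# Example: a demanding scene drops FPS, so the clock steps up over a few frames.
clock = BASE_MHZ
for fps in (72, 68, 55, 52, 58, 61, 66):
    clock = adjust_clock(clock, fps)
    print(f"fps={fps:>3}  ->  clock={clock} MHz")
```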

If this is just a face-saving measure by Nvidia but it works and is a true "game changer", will Nvidia provide the added reward and price it accordingly to really put AMD in a bind? We wait...
 