
NVIDIA Kepler Refresh GPU Family Detailed

Discussion in 'News' started by btarunr, Oct 17, 2012.

  1. MxPhenom 216

    MxPhenom 216 Corsair Fanboy

    Joined:
    Aug 31, 2010
    Messages:
    9,885 (6.81/day)
    Thanks Received:
    2,183
    Location:
    Seattle, WA
    It doesn't matter. If you look at the most recent performance numbers, they trade blows. That's how it has been for the last 2-3 generations. The only reason the 680 truly looks like the better card is because it consumes a lot less power than the 7970 for the same performance range.
  2. BigMack70

    Joined:
    Mar 23, 2012
    Messages:
    498 (0.56/day)
    Thanks Received:
    111
    @cadaveca

    Unless I'm seriously misunderstanding you, you're arguing that GK110 is/was scrapped due to inherent problems with die size even though we're sitting here reading and commenting on an article that is proposing that GK110 is going to be released just fine for the 7xx series cards.

    That makes no sense.

    Much more likely is that their yields sucked last year on this chip and so they bumped GK104 up a tier from 660ti to 680, while putting GK110 on the back burner until they got yield issues fixed.
  3. cadaveca

    cadaveca My name is Dave

    Joined:
    Apr 10, 2006
    Messages:
    13,820 (4.52/day)
    Thanks Received:
    6,898
    Location:
    Edmonton, Alberta
    Neither does a doubling of the transistor count with only a 55% performance increase...unless...it's so big they had to drop clocks by 40%...


    :eek:



    :shadedshu

    What's in the news doesn't make sense. Period. I'm not gonna argue it's bogus...if you don't realize that yourself...well...I won't argue with you.


    :D


    It's twice the size of the GTX 680, this news says, but only 55% faster.

    We know it'll use the same silicon process as the current GTX 680...how can die size NOT be a problem?


    If it wasn't, then a doubling of the same transistors should equal a doubling of performance, no?
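    (A minimal sketch of that scaling arithmetic, assuming the rumored figures and naive linear scaling of performance with transistor count; illustrative only, not confirmed specs:)

    Code:
    # Naive scaling arithmetic behind the argument above (rumored figures).
    gk104_transistors = 3.54e9   # GTX 680 (GK104)
    gk110_transistors = 7.1e9    # GK110 per the rumors, roughly double

    ideal_speedup   = gk110_transistors / gk104_transistors  # ~2.0x if linear
    claimed_speedup = 1.55                                    # "55% faster" per the news

    # If performance ~ transistor count * clock, the implied clock factor is:
    implied_clock = claimed_speedup / ideal_speedup
    print(f"implied clock: {implied_clock:.1%} of GTX 680's")  # ~77%, a ~23% drop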
  4. BigMack70

    Joined:
    Mar 23, 2012
    Messages:
    498 (0.56/day)
    Thanks Received:
    111
  5. cadaveca

    cadaveca My name is Dave

    Joined:
    Apr 10, 2006
    Messages:
    13,820 (4.52/day)
    Thanks Received:
    6,898
    Location:
    Edmonton, Alberta
    It would still leave it twice as large as GTX680...
  6. NHKS

    NHKS New Member

    Joined:
    Sep 28, 2011
    Messages:
    596 (0.56/day)
    Thanks Received:
    375
    that's right.. 'no one can say for sure', but I am inclined to believe your case.. also that, at least on paper, i.e. in the 'plans', Nvidia could possibly have included the GK1x0 (100 or 110) in the GTX 6xx line-up, but since it's a big chip (500+ mm²) and TSMC's 28nm process was in its nascent stages, Nvidia might have changed plans anticipating poor yields (they might have made a pre-production study too)

    you seem to make a valid point, sir.. but I am not convinced just by looking at the pics (they are zoomed at slightly different levels, going by the match-stick size).. moreover, based on the calculation:
    Tahiti ≈ 365 mm²
    GK104 ≈ 295 mm²

    Difference in size ≈ 24%
    Difference in # of transistors ≈ 33%

    looking at the above numbers, Tahiti does pack more transistors, but the die sizes are not all that close either
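    (A quick sanity check of those numbers, relative transistor density only; a sketch using the approximate figures quoted above:)

    Code:
    # Relative density check using the rough figures quoted above.
    tahiti_area = 365.0   # mm^2, approx
    gk104_area  = 295.0   # mm^2, approx

    area_ratio       = tahiti_area / gk104_area  # ~1.24 -> "24% bigger"
    transistor_ratio = 1.33                      # "~33% more transistors"

    # Tahiti's transistor density relative to GK104:
    relative_density = transistor_ratio / area_ratio
    print(f"Tahiti: ~{(relative_density - 1) * 100:.0f}% more per mm^2")  # ~7%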
  7. BigMack70

    Joined:
    Mar 23, 2012
    Messages:
    498 (0.56/day)
    Thanks Received:
    111
    Hey, if this is the smoking gun that, for you, means all of this isn't possible/makes no sense, then that's fine.

    However, there's nothing that inherently makes that the case. S|A's article (linked above) offers a fairly good explanation here, I think.

    And of course, we're only dealing in rumorville, so we'll see what happens.
  8. cadaveca

    cadaveca My name is Dave

    Joined:
    Apr 10, 2006
    Messages:
    13,820 (4.52/day)
    Thanks Received:
    6,898
    Location:
    Edmonton, Alberta
    Yeah, and Tahiti has a 384-bit bus, so it really needs to be physically bigger, with more connections to the PCB for the added RAM chips.

    See, to me, a mid-range chip is under 250 mm², like GTX 660 and HD 7870. All these claims of the GTX 680 being mid-range do not make sense.

    No, really, the smoking gun is that design schedules ALWAYS work this way.


    See, nVidia and AMD are both constrained by what TSMC offers. They both buy wafers from TSMC; TSMC makes all the chips for both, and as such they are even using the same process.


    AMD packs 33% more transistors into the HD 7970. It's not 33% bigger.

    Nvidia may be able to further increase transistor density, for sure, but not by enough to qualify the GTX 680 as "mid-range".
    Last edited: Oct 17, 2012
  9. BigMack70

    Joined:
    Mar 23, 2012
    Messages:
    498 (0.56/day)
    Thanks Received:
    111
    That's because you a priori exclude the possibility of GK110's existence/plausibility based on size. If GK110 exists as stated, then it defines what the high-end chip is, and GK104 is comfortably midrange in comparison.
  10. cadaveca

    cadaveca My name is Dave

    Joined:
    Apr 10, 2006
    Messages:
    13,820 (4.52/day)
    Thanks Received:
    6,898
    Location:
    Edmonton, Alberta
    What I am denying is the ability to cool a chip that large, yes.

    I'm not denying it might have been planned...but reality says, since chips take like 2 years to design, that they knew from day one it wasn't going to happen. They knew LONG before those "claims" came out that the GTX 680 was the chip we got.




    GK110 or GK100 or whatever...was NEVER meant to be GTX 680. Nor was it meant to compete with the current HD7970.


    I'm not denying that a new GPU is coming, either. :p
  11. BigMack70

    Joined:
    Mar 23, 2012
    Messages:
    498 (0.56/day)
    Thanks Received:
    111
    All I've been arguing is that they planned GK110 as the GTX 680 and had to scrap it and bump the GK104 up a level.

    Basically, I'm arguing that GK104 was not drawn up as a high end GPU. It wound up filling that role just fine, but that doesn't mean it was planned that way.
    NHKS says thanks.
  12. eidairaman1

    eidairaman1

    Joined:
    Jul 2, 2007
    Messages:
    12,212 (4.68/day)
    Thanks Received:
    1,405
    I can only guess the reason this is happening is because they're losing a little market share, unless they're gearing up to launch a new series after the HD 8s come out.
  13. Casecutter

    Joined:
    Apr 19, 2011
    Messages:
    1,133 (0.93/day)
    Thanks Received:
    82
    Location:
    So. Cal.
    That’s going to be the question… has TSMC got their process to the point where it makes parts that are viable for gaming enthusiasts and not out of bounds on power? I think that, with salvaged dies from Tesla and a more tailored second-gen Boost mapping, this is saying they can, but I would say it won't be $550. Something tells me these will all be like GTX 690s: Nvidia only allowing a singular design and construction as a factory release.

    Nvidia can minimize the shortcomings and extol the virtues to deliver a card that enthusiasts will excuse, just to proclaim its presence in the market: how great thou art! (for $600 and a 280W TDP)

    I think it was always a TSMC issue that caused both their woes, but yes, once Nvidia got good silicon, GK104 surprised them as to what could be wrung out of it; they had to use Boost to ensure the chips wouldn't commit hara-kiri. This time around they'll get more aggressive with Boost, and more tolerant of heat and power, so that's where the gains will really come from, but it will effectively quell any OC'ing.
  14. cadaveca

    cadaveca My name is Dave

    Joined:
    Apr 10, 2006
    Messages:
    13,820 (4.52/day)
    Thanks Received:
    6,898
    Location:
    Edmonton, Alberta
    It HAD to be. You can only fit so many transistors into so much space. :p There is no way it could have ever worked, just like AMD's 2560-shader Cypress couldn't work either.
  15. BigMack70

    Joined:
    Mar 23, 2012
    Messages:
    498 (0.56/day)
    Thanks Received:
    111
    So Nvidia couldn't possibly have just made a design mistake?? :wtf:

    Because companies never do that...
  16. NHKS

    NHKS New Member

    Joined:
    Sep 28, 2011
    Messages:
    596 (0.56/day)
    Thanks Received:
    375
    if you expect mid-range chips to have sub-250 mm² die sizes, then note that GF104 (GTX 460) & even GF114 were well over 300 mm².. as for me, I am going by the 'naming' convention of the Fermi gen.. it had GF100 & GF110 as the high-end chips, so the same could be said for Kepler (knowing that GK110 exists)...

    anyways, with due respect, I wish to end it here, to each his own (it's all speculation)..
    BigMack70 says thanks.
  17. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.49/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
    It may not make sense to you, but it makes all the sense in the world. You are arguing against history. Are you going to suggest that GF104 was not a midrange chip? It was 332 mm², significantly bigger than GK104 and definitely bigger than your 250 mm² figure.

    All Nvidia high-end chips (GPU + HPC) of past generations have been close to or bigger than 500 mm²: G80 was 484 mm², GT200 576 mm², GF100 520 mm².

    Time for a reality check, man. GK100/110 IS the high-end chip. A chip that Nvidia decided was not economically feasible these past months, when TSMC supply was so constrained and yields (for everybody) were not good. End of story. It really is. There's no problem with it other than that, and the fact that by being bigger it's going to have lower yields and a lower number of dies per wafer; nothing that Nvidia hasn't done previously or that they are afraid of. GK106 took long to release too. Was it because it was not possible? No, because it was economically less "interesting" than GK104, and so was GK110. If they could win with a 294 mm² chip, there was absolutely no reason to release the big one and take lower margins, as they had to with the first Fermi "generation". HPC moves slower and relies on designs like the Oak Ridge supercomputer that would not have been ready back at the time, so more reason to delay.
    NHKS and BigMack70 say thanks.
  18. cadaveca

    cadaveca My name is Dave

    Joined:
    Apr 10, 2006
    Messages:
    13,820 (4.52/day)
    Thanks Received:
    6,898
    Location:
    Edmonton, Alberta
    Oh, I never meant to say that my expectations are the same as what the industry sets, but yes, if a 28nm, and let me repeat...a 28nm chip is over 250 mm², then yes, I would not consider it a mid-range chip. If you need more than that space (and neither AMD nor Nvidia did), then you've got some serious engineering issues, for sure.


    Of course bigger processes took up more space. :p


    Silly.:roll:


    I never said GK100 or GK110 is NOT the high-end chip...it sure is...but it was NEVER meant to be the GTX 680.

    TSMC had yield issues. :p That is comical. Yeah, blame the infant technology. :laugh:

    Of course it was horrible. nVidia KNEW it would be, as did AMD...and they dealt with it, as they have with every process.


    Actually, no, I think nVidia did NOT make a big mistake at all, and that GK100 was planned for next year ALWAYS, rather than this January or whatever.


    It's not like Kepler is some new thing...it's a tweaked Fermi. nVidia admitted big mistakes with Fermi, so I do expect that they were extra-cautious with Kepler.


    As will be the next chip.
    Last edited: Oct 17, 2012
  19. BigMack70

    Joined:
    Mar 23, 2012
    Messages:
    498 (0.56/day)
    Thanks Received:
    111
    I like that you repeatedly just a priori dismiss dozens and dozens of reputable stories/rumors from the past year for no real reason other than your own theories. :laugh:
  20. cadaveca

    cadaveca My name is Dave

    Joined:
    Apr 10, 2006
    Messages:
    13,820 (4.52/day)
    Thanks Received:
    6,898
    Location:
    Edmonton, Alberta
    Stories and rumours. Yep.


    Except, of course, as a reviewer, I do have a bit more info than the average joe, although, not as much as many other reviewers do, I'm sure.


    See, the difference between me and other reviewers is that I do this for fun, as a hobby...and not for cash.


    I'm not posting news for hits, because that garners money for the site with ads...


    TPU isn't built upon that, at all.

    This is speculation, after all, not fact, so yeah, I offer a different perspective...So?

    At the end of the day, it's me playing with the hardware NOW you guys want to buy IN THE FUTURE. I don't really care who has the faster chips, who is cheaper, or what you buy...this stuff just shows up on my doorstep, with ZERO cost.



    I'm just not afraid to be wrong. :laugh: In the future, we can say "look, this was right, and this wasn't"...and I won't care if I'm wrong. :p You might...but I won't.:roll:
  21. BigMack70

    Joined:
    Mar 23, 2012
    Messages:
    498 (0.56/day)
    Thanks Received:
    111
    Pulling rank as a reviewer doesn't mean rumors/stories are untrustworthy just because you don't believe them and/or they don't fit your ideas of what is or is not going on. Maybe if we were talking about some isolated or crazy things, but not when we're talking about widespread info.
  22. cadaveca

    cadaveca My name is Dave

    Joined:
    Apr 10, 2006
    Messages:
    13,820 (4.52/day)
    Thanks Received:
    6,898
    Location:
    Edmonton, Alberta
    If I had actual info about an unreleased product, I wouldn't be able to talk about it.

    That's where me being a reviewer is important.


    Who cares that I review stuff. It's not important, really. Like, really...big deal...I get to play with broken stuff 9/10 times when it's pre-release. I've said it before, I'd much rather have stuff later, but I guess some OEMs value my feedback prior to launch. That's like the whole "ES is better for OC" BS.

    The fact I do that for them, for free...well...it's not as big of a deal as most seem to think it is. I actually think it's kind of the opposite...

    At the same time, though, those that DO have info about unreleased products, like myself, also cannot say much, except what they are allowed to say, or their info cannot be real.


    THAT is a fact I learned as a reviewer, that many seem to not know. That is just how it works. Either this info is force-fed, or it's fake.
  23. eidairaman1

    eidairaman1

    Joined:
    Jul 2, 2007
    Messages:
    12,212 (4.68/day)
    Thanks Received:
    1,405
    Well, that's how they improve it later, but honestly it's for PR when the NDA is lifted too, bro.
  24. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.49/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
    Explain why GK110 wastes so much space on 240 texture mapping units and ROPs, and tessellators and whatnot, if it was never meant for a high-end gaming card? ;)

    Of course they dealt with it. They released the mid-range chip as the high-end card knowing that it would be able to compete with AMD's fastest chip. :laugh:

    No one's blaming the "infant tech". Both AMD and Nvidia design their chips according to TSMC's guidance on the process. They have to, since they have to design the chips long before TSMC is ready for production. They design around that guidance and weigh the feasibility and profitability based on it. Guidance is one thing and reality is often a very different one. Of course AMD, having been a fabbed chip maker in the past, knows better than Nvidia how to deal with that. We are not discussing that, so to the point: trying to deny that volume and yield issues are TSMC's problem is stupid. Their guidance for the process and reality didn't match, and everyone has suffered from it, be it Nvidia, Qualcomm or AMD, even if AMD has not been as vocal. Each company has very different things to address in their conference calls, and trying to extract any conclusions from whether they talk about TSMC issues or not is again stupid. AMD is in far more trouble and has many more things to excuse than having to explain why profit margins on the GPU business are slightly lower than expected.

    So imagine we are Nvidia. 28nm is not as good as it was "promised" to be. We get close to Kepler release dates. Volume is not good, and yields are not good either; worse than 40nm, as Jen-Hsun Huang said. Nvidia had 2 options: repeat GF100, or release GK104 as the high end. The answer is simple. On a wafer you can have 201 GK104 die candidates, and ~100 GK110 candidates. Again, knowing that GK104 would be close to Tahiti performance or beat it, it's an easy choice*. GK104 at $500. There was no price point at which GK110 would have been more profitable, no matter how much faster than the HD 7970 it could have been. With the severely low 28nm volume, they would never be able to sell enough GK110 cards so as to be more profitable than they have been with GK104, even if they had achieved 100% market share.

    * More so when you know that the next node will not be ready until 2-3 years later. You'll have to do a refresh and you'll have to make it appealing, faster; so by doing what they did, they can kill 2 birds with one stone.
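    (For what it's worth, the standard dies-per-wafer approximation on a 300 mm wafer reproduces those candidate counts almost exactly; a rough sketch, with the ~550 mm² GK110 die area taken from the rumors, not confirmed:)

    Code:
    import math

    # Dies-per-wafer estimate: wafer area / die area, minus an edge-loss
    # correction term. Ignores defect yield entirely.
    def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
        radius = wafer_diameter_mm / 2
        return int(math.pi * radius**2 / die_area_mm2
                   - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

    print(dies_per_wafer(294))  # GK104 at ~294 mm^2 -> 201 candidates
    print(dies_per_wafer(550))  # GK110 at ~550 mm^2 (rumored) -> 100 candidates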
    Last edited: Oct 17, 2012
  25. BigMack70

    Joined:
    Mar 23, 2012
    Messages:
    498 (0.56/day)
    Thanks Received:
    111
    What I'm saying is that your status as a reviewer gives no inherent credibility to your dismissal of tech rumors/stories (sorry to break it to you...). That might be true if the stories were from people clueless about tech, or if everyone who is well informed about GPUs agreed with you, but that's not the case. When you get corroborating evidence from many reliable and semi-reliable tech sources, there's something to it.

    http://en.wikipedia.org/wiki/Argument_from_authority#Disagreement
    http://en.wikipedia.org/wiki/Appeal_to_accomplishment
