
NVIDIA Kepler Refresh GPU Family Detailed

Discussion in 'News' started by btarunr, Oct 17, 2012.

  1. james888

    james888

    Joined:
    Jun 27, 2011
    Messages:
    4,279 (3.81/day)
    Thanks Received:
    1,423
That is probably how it will turn out.
    Crunching for Team TPU
  2. crazyeyesreaper

    crazyeyesreaper Chief Broken Rig

    Joined:
    Mar 25, 2009
    Messages:
    8,131 (4.18/day)
    Thanks Received:
    2,745
    Location:
    04578
I love how everyone is saying AMD will have a hard time competing :roll: Did everyone forget that the yawn that is the 7970 GHz Edition still beat out the GTX 680, and that this gen the two companies are for the most part equal at the typical price points?

The 8970 is expected to be 40% faster than the 7970.

The GTX 780 is expected to be 40-55% faster than the 680.

Add in overclocking on both and we end up with the exact same situation as this generation. So in reality it just plain doesn't matter, lol. Performance is all I care about, along with who gets product onto store shelves and from there into my hands. It doesn't matter who's fastest if it takes 6 months for stock to catch up.
  3. hoodlum New Member

    Joined:
    Oct 17, 2012
    Messages:
    1 (0.00/day)
    Thanks Received:
    0
    Low Power?

If you go back to the original linked article, the performance gains for GK114 and GK116 will only be 5-15%. That seems quite low considering the improvements to memory bandwidth, shaders, ROPs, etc., which suggests NVIDIA may be focusing on lower TDP rather than pure performance increases. And prices will be increasing too.

I think people may be disappointed by the time these are released. I suspect AMD will show similar improvements next year as well, with more focus on TDP.
  4. BigMack70

    Joined:
    Mar 23, 2012
    Messages:
    498 (0.58/day)
    Thanks Received:
    111
    I think, from reading pretty much every review of these cards, that the general impression this round is (wrongly) more favorable to Nvidia than AMD, and this carries over into forums/etc.

AMD did this to themselves because they released their 79xx cards horribly underclocked (especially the 7950) and at price points that were too high. They didn't make a move on either front soon enough, so when Kepler finally hit, reviewers were left looking at a situation where the 7970 was outperformed by a cheaper card. Then the 670 came in, trashed the 7950, and competed with AMD's previously $550 card at $150 less.

    Those things defined the impressions most people have of this round. AMD then made the mistake of releasing their GHz edition as a reference card for reviewers, and most reviewers then dismissed it as too loud/etc.

    You have to do a decent amount of homework before you start realizing that both companies at this point in time are pretty much dead even, and most people don't like to think that hard.

    If AMD had released their 7970 clocked around 1050/1500 MHz for $500 at launch, and their 7950 at maybe 950/1400 for $400, I can guarantee you that the impressions would be different. Pretty much every single 7970/7950 will hit those clocks without messing with voltages, so I have no idea why they got so conservative. But they didn't make those moves, and so here we are.
  5. crazyeyesreaper

    crazyeyesreaper Chief Broken Rig

    Joined:
    Mar 25, 2009
    Messages:
    8,131 (4.18/day)
    Thanks Received:
    2,745
    Location:
    04578
They were conservative in order to get better yields. Most chips can do 1050, yes, but not all of them can at the proper voltage or TDP level, and they also have to harvest chips for the 7950. Lower clocks meant more usable chips, and more usable chips means greater volume to put on store shelves (see the sketch at the end of this post).

Regardless, the refresh will probably see NVIDIA take the lead, but not by a whole lot; they have more room to play with when it comes to TDP than AMD does right now.
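To illustrate the binning logic, here's a toy sketch (every number in it is invented purely for illustration, not real Tahiti data): if the maximum stable clock of the dies coming off the line is roughly normally distributed, a lower shipping clock qualifies far more dies.

```python
from statistics import NormalDist

# Toy model: hypothetical distribution of each die's max stable clock (MHz)
# at the rated voltage. Both parameters are invented for illustration.
max_clock = NormalDist(mu=1050, sigma=75)

for target_mhz in (925, 1000, 1050):
    share = 1 - max_clock.cdf(target_mhz)  # fraction of dies that make the bin
    print(f"ship at {target_mhz} MHz: {share:.0%} of dies qualify")
```

With these made-up numbers, shipping at 925 MHz qualifies roughly 95% of dies versus about 50% at 1050 MHz, which is exactly the volume argument above.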
  6. BigMack70

    Joined:
    Mar 23, 2012
    Messages:
    498 (0.58/day)
    Thanks Received:
    111
    I understand they did it for better yields, but I haven't seen a 7970 that wouldn't do 1050 on stock volts. I'm sure they're out there, but they've gotta be a tiny minority. I think AMD just flat out screwed up figuring out how they needed to clock their cards for viable yields.
  7. crazyeyesreaper

    crazyeyesreaper Chief Broken Rig

    Joined:
    Mar 25, 2009
    Messages:
    8,131 (4.18/day)
    Thanks Received:
    2,745
    Location:
    04578
Probably, but it doesn't matter much; most overclocked 7970s on the market were already at 1000-1100 MHz before the GHz Edition dropped, lol. But I digress. Looking at the info available, if AMD limits themselves to 32 ROPs again but increases shader count, they will be beaten by NVIDIA. Should AMD wise up and increase the ROP count to 48, they stand a good chance of being within reach, in that pre-overclocked models should fare well against a stock 780. Time will tell, of course.
  8. james888

    james888

    Joined:
    Jun 27, 2011
    Messages:
    4,279 (3.81/day)
    Thanks Received:
    1,423
    Can you explain what a ROP is and why it is/might be bottlenecking the 7970?
    Crunching for Team TPU
  9. crazyeyesreaper

    crazyeyesreaper Chief Broken Rig

    Joined:
    Mar 25, 2009
    Messages:
    8,131 (4.18/day)
    Thanks Received:
    2,745
    Location:
    04578
    http://en.wikipedia.org/wiki/Render_Output_unit

    Look back at the 5850 and 5870

Clock both to the same speed and the 5850, with fewer shaders but the same ROP count, was within 1-2% of the 5870. So the increased shader count didn't do a whole hell of a lot.

With GCN, shaders scale a bit better, yes, but notice:

the 7870, with 1280 GCN stream processors and 32 ROPs, can take on the 7950, which has 32 ROPs and 1792 shaders.

Looking at previous GPUs:

7770 = 640 shaders, 16 ROPs, 10 Compute Units, 40 TMUs - 3DMark 11 P3500
7870 = 1280 shaders, 32 ROPs, 20 Compute Units, 80 TMUs - 3DMark 11 P6600
7970 = 2048 shaders, 32 ROPs, 32 Compute Units, 128 TMUs - 3DMark 11 P8000

What the 7970 probably would have looked like, following AMD's previous design philosophy:
1920 shaders, 48 ROPs, 30 Compute Units, 120 TMUs, plus a higher GPU clock

For the 8970, staying on the same 28nm, it's looking like AMD will push for 2500-2600 shaders. Many are saying 2560, but no one knows for sure yet.

That's a 25% increase in shaders. However, we can see from the 7870 to the 7950 that a 20-30% increase in shaders didn't do much for performance.

AMD needs more ROPs and higher clocks for GCN to scale well with a large number of stream processors.

So by just increasing shaders AMD won't get far. They will need to up the number of Compute Units as well as TMUs, and with that the ROP count needs to be bumped up to maintain a balanced GPU design. Tweaks in architecture will help, but a simple bump in shaders would mean a heavily clocked 7970 could possibly catch the 8970: if that 40% figure is measured against the 925 MHz stock card, a fully overclocked 7970 is already pulling as much as 20% faster on average, which would leave a stock 8970 just 20% ahead of it. A better balanced, more optimized design is necessary.

NVIDIA already has their design finished; AMD, on the other hand, we can only hope didn't screw the pooch.
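To put rough numbers on that balance argument, here's a quick sketch of theoretical peaks (the 48-ROP part is the hypothetical layout above, not a real card):

```python
# Theoretical peak rates: pixel fill scales with ROPs * clock,
# FP32 throughput with shaders * 2 FLOPs * clock.

def fill_rate_gpix(rops, clock_mhz):
    """Peak pixel fill rate in Gpixels/s (1 pixel per ROP per clock)."""
    return rops * clock_mhz / 1000.0

def shader_gflops(shaders, clock_mhz):
    """Peak FP32 throughput in GFLOPS (2 FLOPs per shader per clock)."""
    return 2 * shaders * clock_mhz / 1000.0

cards = {
    "7870 (32 ROPs, 1280 SPs, 1000 MHz)": (32, 1280, 1000),
    "7950 (32 ROPs, 1792 SPs, 800 MHz)": (32, 1792, 800),
    "7970 (32 ROPs, 2048 SPs, 925 MHz)": (32, 2048, 925),
    "hypothetical 48-ROP part (1920 SPs, 925 MHz)": (48, 1920, 925),
}

for name, (rops, sps, mhz) in cards.items():
    print(f"{name}: {fill_rate_gpix(rops, mhz):.1f} Gpix/s, "
          f"{shader_gflops(sps, mhz):.0f} GFLOPS")
```

Note how the 7950 has about 12% more shader throughput than the 7870 but a lower fill rate, which is why the two end up so close; the 7970's extra shaders pile onto the same 32 ROPs.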
    Last edited: Oct 17, 2012
    james888 says thanks.
  10. Xzibit

    Xzibit

    Joined:
    Apr 30, 2012
    Messages:
    1,102 (1.35/day)
    Thanks Received:
    247
That deals with capacity, something NVIDIA complains very little about. For the past three quarters NVIDIA has been complaining about wafer yields, ever since they moved to paying per wafer instead of per working die. Look up any NVIDIA transcript this year and 28nm yield issues, along with margins, will be the dominant fallback.

NVIDIA is currently in talks with Samsung to use its 28nm fabs, but Samsung is more expensive, and NVIDIA only uses Samsung for initial fabbing of designs, looking to GlobalFoundries and TSMC for production. Samsung will have an open slot given their recent litigation with Apple, and companies like Qualcomm, NVIDIA and others will be looking to fill it. Samsung will charge a premium, I'm sure.
  11. GoldenTiger

    GoldenTiger

    Joined:
    May 18, 2005
    Messages:
    37 (0.01/day)
    Thanks Received:
    10
It proves nothing. In fact, if anything, it shows NVIDIA didn't have a great, available GK100. Now that GK110 has come out well, they may be releasing it as the high-end. You really need to not be so hung up on codenames.

Considering the Tesla K20 card's specs were recently outed accidentally by a CAD vendor, with GK110-based cards up for order... and 3DCenter tends to be pretty knowledgeable... I would put my bet on this rumor being fairly accurate, pending good clock speeds at release for the GeForce variant.

    Also, a useful post from OCN and my reply:

    -----

Exactly... and we may see further optimizations à la GF104 vs. GF114. I doubt it'll come in at "just" 700 MHz, but if it does, it's still not outside the realm of possibility that it could be 50% faster out of the box.
  12. BigMack70

    Joined:
    Mar 23, 2012
    Messages:
    498 (0.58/day)
    Thanks Received:
    111
    There's more than enough evidence to substantiate that the GK104 was drawn up to be the 660ti and not the 680...
  13. GoldenTiger

    GoldenTiger

    Joined:
    May 18, 2005
    Messages:
    37 (0.01/day)
    Thanks Received:
    10
    Oh, perhaps it was originally, but GK100 was certainly not "held back" so they could "put out a midrange card as high-end for mad profits!!!!" as some people like to proclaim.
  14. cadaveca

    cadaveca My name is Dave

    Joined:
    Apr 10, 2006
    Messages:
    13,758 (4.55/day)
    Thanks Received:
    6,800
    Location:
    Edmonton, Alberta
This is what I always thought. If NVIDIA could truly release a card twice as fast as what AMD has, using the same foundry, then they would, since that would ensure far more sales and profit than selling something that "saves on costs" instead.

In fact, had NVIDIA done this, it would, to a degree, amount to price fixing, which of course is illegal.

    Of course, now that both cards are here, and we can see the physical size of each chip, we can easily tell that this is certainly NOT the case, at all, so whatever, it's all just marketing drivel.

    In fact, it wouldn't really be any different than AMD talking about Steamroller. :p "Man, we got this chip coming...";)
  15. BigMack70

    Joined:
    Mar 23, 2012
    Messages:
    498 (0.58/day)
    Thanks Received:
    111
    You are correct... the idea that it was intentionally held back is nonsense. However, the chip did disappear among a ton of rumors about yield problems, so it seems best to reason that they were forced into holding it back due to poor yields. Fortunately for them, they were able to hit the performance target they needed (set by AMD) with GK104.

    Wound up being a big win for them on the business side of things (because it IS a midrange card from a manufacturing point of view, with a high end price) and a loss for consumers (who lost out on potentially much greater performance).
  16. crazyeyesreaper

    crazyeyesreaper Chief Broken Rig

    Joined:
    Mar 25, 2009
    Messages:
    8,131 (4.18/day)
    Thanks Received:
    2,745
    Location:
    04578
More likely it was held back because NVIDIA needed to release something rather than face ongoing delays like they did with Fermi, aka the GTX 480. GK104 offered plenty of performance and allowed them to keep GK110 in the wings for a refresh. It essentially gave them a performance boost for the next series without needing much more input, and instead gave them time to further tweak the chip.

It's better to release a product when it's truly ready than to release early with massive issues. My guess is that with Kepler, NVIDIA learned from their mistakes with Fermi, and to great effect.
  17. Xzibit

    Xzibit

    Joined:
    Apr 30, 2012
    Messages:
    1,102 (1.35/day)
    Thanks Received:
    247
If GK104 were a true mid-size chip, NVIDIA would be making out like thieves with a very profitable mid-range part. That's not what NVIDIA has been saying in their quarterly reports and conference calls to investors. They have been voicing concerns about production, yields and margins since their first report this year. That theory doesn't really reflect NVIDIA's own stance, and it makes less sense given that AMD has gained market share in the discrete graphics sector for two straight quarters.

I think that's more of a forum myth driven by fanboyism.

Think about it. As a company you're losing market share, and sales are down a million units from quarter to quarter. You'd think it would be the opposite if you were selling a mid-range chip at great profit for the high-end market.

If for some weird reason it were true, then it's a horrible design and execution.
  18. renz496

    Joined:
    Mar 24, 2012
    Messages:
    86 (0.10/day)
    Thanks Received:
    7
The way I heard it, GK100 was not held back but scrapped and redesigned into GK110. IMO, if AMD had been able to put out much better performance from the 7970 on launch day, maybe NVIDIA would have been forced to use that scrapped GK100 as their flagship. But luckily for NVIDIA, AMD chose to be conservative with the 7970's clocks, and somehow NVIDIA was able to make GK104 match the 7970's performance. lol, I think originally NVIDIA wanted GK104 clocked around 700 MHz and intended to market the card with the "overclocker's dream" slogan, just like they did with the 460 and 560. :roll:
  19. atikkur

    Joined:
    May 3, 2012
    Messages:
    45 (0.06/day)
    Thanks Received:
    0
    Location:
    Jakarta
I only buy NVIDIA at the revision stage, that is, the second refresh after a major architecture change. GK110 looks sweet.
  20. BigMack70

    Joined:
    Mar 23, 2012
    Messages:
    498 (0.58/day)
    Thanks Received:
    111
    If you assume that GK104 was drawn up originally to be the 680, as it eventually was, you have to come up with an explanation for:

- All the rumors and leaked info until late Jan/Feb of this year, which had the GTX 680 being based on GK110. That wasn't one or two isolated rumors... there was a ton of info floating around indicating that to be the case. Almost NOTHING indicated GK104 to be the high-end chip, not until GK110 completely disappeared and rumors of yield problems started cropping up all over.
- The limited memory bus (256-bit) on the GK104, which is typically reserved for mid-level cards and not high-end.
- The PCB design itself, most notably as it appears on the 670 (which is close to being a half-length PCB in the reference designs).

If you assume that GK110 was originally supposed to be the 680 and GK104 the 660 Ti, as I do, it makes sense of the above information quite well. As for NVIDIA not "making out like [a thief]", the explanation for that is readily apparent in their yield problems, which affected GK104 as well (remember, the GTX 680 was basically a paper launch for 2+ months). Also, aren't desktop GPUs a relatively low-profit/revenue area anyway from a business perspective?

We'll never know with 100% certainty, but I think it makes better sense of the available data that the original GTX 6xx lineup was to include both GK110 (680/670?) and GK104 (660 Ti/660).
    Last edited: Oct 17, 2012
    NHKS says thanks.
  21. cadaveca

    cadaveca My name is Dave

    Joined:
    Apr 10, 2006
    Messages:
    13,758 (4.55/day)
    Thanks Received:
    6,800
    Location:
    Edmonton, Alberta
    You do not have to explain anything.

    Period.


    Die sizes say GK100 or whatever was never possible.

HD 7970: [die shot]

GTX 680: [die shot]


    Note how the AMD chip has nearly 33% more transistors, but is barely physically larger than GTX 680.

    If nVidia could have fit more functionality into the same space, they would have.


    They could have planned to release something different all they wanted, but if they had, that chip would have to have been quite a bit larger than HD 7970 is.

Since NVIDIA is selling a chip that is much the same size as the 7970, per wafer they aren't getting that many more chips.


    If Nvidia is selling a mid-range chip as high-end, they either have HUGE HUGE HUGE design issues,


    OR AMD is doing the exact same thing.


    :roll:


Fact is, the GTX 680 ain't no mid-range chip, unless you believe that most of the chip is deactivated.
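For a sense of scale on the per-wafer point, here's a standard first-order dies-per-wafer estimate (a sketch only; it ignores defect density and binning, and the die areas are the approximate figures reported at the time):

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """First-order estimate: wafer area / die area, minus an edge-loss term."""
    radius = wafer_diameter_mm / 2.0
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

print("GTX 680 (~294 mm^2):", dies_per_wafer(294))  # ~201 candidate dies
print("HD 7970 (~352 mm^2):", dies_per_wafer(352))  # ~165 candidate dies
```

Roughly 20% more candidate dies per wafer for the smaller chip: an edge, but hardly mid-range economics.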
    NHKS says thanks.
  22. BigMack70

    Joined:
    Mar 23, 2012
    Messages:
    498 (0.58/day)
    Thanks Received:
    111
    This doesn't make much sense... why do we now have rumors of that same GK110 being released? Die size constraints will still be there... if the die size were the inherent problem here, GK110 would have been scrapped and we wouldn't be reading this article right now.
  23. cadaveca

    cadaveca My name is Dave

    Joined:
    Apr 10, 2006
    Messages:
    13,758 (4.55/day)
    Thanks Received:
    6,800
    Location:
    Edmonton, Alberta
    it WAS scrapped.
  24. BigMack70

    Joined:
    Mar 23, 2012
    Messages:
    498 (0.58/day)
    Thanks Received:
    111
    Did you read the article?

    Doesn't sound like "scrapped" to me... unless you want to argue that this is just bogus, which it could be.
  25. cadaveca

    cadaveca My name is Dave

    Joined:
    Apr 10, 2006
    Messages:
    13,758 (4.55/day)
    Thanks Received:
    6,800
    Location:
    Edmonton, Alberta
    I'm not arguing that it is bogus.

    Not at all.

    But the fact of the matter is that what nVidia can do with TSMC's 28nm, AMD can as well.

And AMD is already 33% more efficient in its use of die space.

If you believe the 7.1 billion transistor figure, then it must be more than twice as big as the current GTX 680 silicon (3,078 million transistors, BTW), or the current GTX 680 really is a horrible, horrible design and it's a feat of wonder that NVIDIA managed to get it stable.

    And how does a doubling of transistors only equal a 55% increase in performance?

    Oh, I read it just fine. :p


    Argue that it's bogus... :roll:
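Putting that question in numbers (a sketch using the figures quoted in this thread; the GK110 count is still only a rumor):

```python
# Transistor budget vs. rumored performance gain, per the figures above.
gk104_transistors = 3.078e9  # GTX 680 figure quoted in this thread
gk110_transistors = 7.1e9    # rumored GK110 figure
perf_gain = 0.55             # upper end of the rumored 40-55% over the GTX 680

ratio = gk110_transistors / gk104_transistors
print(f"transistor ratio: {ratio:.2f}x")                              # ~2.31x
print(f"performance per transistor: {(1 + perf_gain) / ratio:.2f}x")  # ~0.67x
```

By these figures performance per transistor drops by about a third; presumably a chunk of that budget goes to compute features rather than gaming throughput.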
