
NVIDIA Kepler Refresh GPU Family Detailed

Discussion in 'News' started by btarunr, Oct 17, 2012.

  1. blanarahul

    blanarahul

    Joined:
    Dec 17, 2011
    Messages:
    116 (0.11/day)
    Thanks Received:
    7
    Hmmm. Nvidia. One request: try to release these GPUs without GPU Boost. It really hampers overclocking. If the GTX 680 didn't have GPU Boost, it would have easily reached 1.4 GHz with good binning.
     
  2. BigMack70

    Joined:
    Mar 23, 2012
    Messages:
    506 (0.54/day)
    Thanks Received:
    114
    +1 I would LOVE an option to disable GPU Boost. Maybe add a dual BIOS, or put an option in the control panel or something.

    If they do that and start allowing voltage control again, they'll have a far superior product for people wanting to overclock. GPU Boost nonsense + no voltage control kept me away from the GTX 680.
     
  3. eidairaman1

    eidairaman1

    Joined:
    Jul 2, 2007
    Messages:
    12,947 (4.85/day)
    Thanks Received:
    1,624
    With the trend NV is following, especially after forcing EVGA to disable voltage tuning, I honestly don't think they will listen to customer feedback in this sense.
     
  4. BigMack70

    Joined:
    Mar 23, 2012
    Messages:
    506 (0.54/day)
    Thanks Received:
    114
    I know, and I'm very sad about that. I often prefer Nvidia's GPUs, but if AMD offers me something that I can turn into a superior product via overclocking while Nvidia cripples that capability, as happened with the 7970 and 680, I'll take AMD's offering every time.
     
  5. cadaveca

    cadaveca My name is Dave

    Joined:
    Apr 10, 2006
    Messages:
    14,049 (4.51/day)
    Thanks Received:
    7,203
    Location:
    Edmonton, Alberta
    I never said it did. I said that you must assume that what I post IS speculation only, since I do what I do, and any real info about un-released products I cannot post.


    And likewise, the same applies to any tech site.


    That is all. GK110 is unreleased; nobody except Nvidia employees and those who work at Nvidia board partners knows anything about it, and none of them can comment due to NDA.


    So anything, anything at all about it...is questionable.


    Heck, it might not even actually exist, and is only an idea.


    Post a pic of a GK100 chip, full specs and everything else official from Nvidia, and I'll stop my speculation.

    Otherwise, if you don't like my post... that's just too bad. The report button is to the left, if you like.

    You can say all you like that it was planned; you have no proof, and neither do I. And neither of us, if we did, could post it. So I can think and post what I like, and so can you. It's no big deal... only you are making it a big deal that I do not agree with this news.
     
  6. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.44/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
    Argument from ignorance. You ARE claiming both that GK100 never existed and that it didn't exist because it cannot be made, based on the fact that we cannot provide proof to disprove your theory. You are the only one claiming anything, using this argument-from-ignorance fallacy to back it up.

    The rest of us are just saying that it is entirely possible and probable that GK100 existed and was simply delayed or slightly redesigned into GK110, in a move similar to GF100 -> GF110. The proof, though rumors, is out there and has been for a long time. Rumors about chips don't always end up being entirely true, but there's always some truth to them. GK100 was mentioned many times. GK110 DOES exist. 2+2=4

    All in all, Nvidia has already shipped cards based on the 7.1-billion-transistor GK110 chip, so the notion that such a chip cannot be made is obviously false.
     
  7. BigMack70

    Joined:
    Mar 23, 2012
    Messages:
    506 (0.54/day)
    Thanks Received:
    114
    I've just never seen someone so ready to cavalierly dismiss a multitude of tech rumors based on their own idea of what is or is not possible from a manufacturing perspective...

    To each his own I guess.
     
  8. cadaveca

    cadaveca My name is Dave

    Joined:
    Apr 10, 2006
    Messages:
    14,049 (4.51/day)
    Thanks Received:
    7,203
    Location:
    Edmonton, Alberta
    Nah, actually, I'm claiming this since I know all the specs of GK110 already. I even have a die shot. And yeah, like you said, it is now for sale.

    You can find info just as easy, too.

    And because of this, I do think Nvidia knew long before AMD's 7970 release that GK110 was not possible (which is when that news of the GTX 680 being a mid-range chip appeared), and as such it was never meant to be the GTX 680. Is GK110 the ultimate Kepler design? Sure. But it was NEVER intended to be released as the GTX 680. It was always meant as a Tesla GPGPU card.

    Likewise, AMD knew that Steamroller and Excavator were coming, and that they are the "big daddy" of the Bulldozer design, but that doesn't mean that Bulldozer or Piledriver are mid-range chips.
     
  9. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.44/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
    Everybody knows the specs and has seen die shots, for a long time already. That means nothing to the discussion at hand. Specs and die shots say nothing about whether it is feasible to build or not (it IS; it's already been created AND shipped to customers), and they certainly say nothing regarding the intentions of Nvidia.

    If GK100/110 were so unfeasible as a gaming card that it was never meant to be one, they would have designed a new chip to fill in that massive ~250mm^2 gap between GK104 and GK110, instead of using GK110 as the refreshed high-end card. If GK110 were purely an HPC chip, it wouldn't have so many gaming features wasting space either.

    EDIT: SteamRoller, etc. Worst analogy ever.
     
  10. cadaveca

    cadaveca My name is Dave

    Joined:
    Apr 10, 2006
    Messages:
    14,049 (4.51/day)
    Thanks Received:
    7,203
    Location:
    Edmonton, Alberta
    I dunno. You know, the one thing that nVidia is really good at is getting the most dollar out of R&D, and designing another chip kinda goes against that mantra.

    I mean, it's like dropping the hot clock. They knew they had to.

    Keeping within the 300W power envelope with the full-blown Kepler design was obviously not possible, proved by Fermi, IMHO.

    Jen Hsun said "The interconnecting mesh was the problem" for Fermi. That mesh...is cache.

    And gaming doesn't need that cache. But... HPC does. Gaming needs power-savings, and dropping the hotclock and lowering cache and DP lowered power consumption enough that GTX680 is pretty damn good.

    GK104..was that chip you just mentioned.


    HPC is more money. WAY MORE MONEY. So for THAT market, yes, a customized chip makes sense.


    See, Fermi was the original here. GF100 is the original, NOT GK100 or GK110.


    If nvidia started with Kepler as the new core design, then I would have sided with you guys, for sure, but really, to me, Kepler is a bunch of customized Fermi designs, customized in such a way to deliver the best product possible for the lowest cost, for each market.

    You may think the Steamroller analogy is wrong here, but to me, that is EXACTLY what Kepler is. And you know what..nVidia says the same thing, too. :p


    The hotclock, to me, and the lack of DP functionality say it all. The hotclock lets you use less die space, but requires more power. DP functionality also requires more power, because it requires more cache. Dropping 128 bits of memory control... again, to save on power...


    If the current GTX680 was meant to be a mid-range chip, after doing all that to save on power, damn, Nvidia really does suck hard. :p
     
  11. Casecutter

    Joined:
    Apr 19, 2011
    Messages:
    1,167 (0.91/day)
    Thanks Received:
    87
    Location:
    So. Cal.
    Sure, and if you click the checkbox to enable OC, or break the seal on the switch, and then it bricks? I would love to have heard how GK104 could do with the dynamic nanny turned off... While you may believe it might find 1.4 GHz... would it live on for any duration?

    I speculate it wouldn't, or Nvidia wouldn't have put such restrictions in place without good reason. Will they still have it that way next generation? Yes, almost assuredly, but at that point better TDP and improved clock and thermal profiles will mean there's no gain over operating at an exaggerated fixed clock. I think for mainstream both sides will continue to refine boost-type control. It provides them the best of both worlds: lower claimed power usage with the highest FPS return.
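The boost-type control being debated here can be sketched as a simple feedback loop: raise the clock while under the power limit, back off when over it. A minimal toy model (the bin size and clock numbers are illustrative, loosely modeled on GTX 680 figures, and NOT Nvidia's actual algorithm):

```python
def boost_step(clock_mhz, power_w, power_limit_w,
               base_mhz=1006, max_boost_mhz=1110, step_mhz=13):
    """One tick of a toy GPU-Boost-style controller.

    Illustrative only: raise the clock one bin while under the power
    limit, drop one bin when over it, and never go below the base clock
    or above the top boost bin.
    """
    if power_w < power_limit_w and clock_mhz + step_mhz <= max_boost_mhz:
        return clock_mhz + step_mhz          # headroom left: boost up
    if power_w > power_limit_w and clock_mhz - step_mhz >= base_mhz:
        return clock_mhz - step_mhz          # over budget: back off
    return clock_mhz                         # hold (at a limit, or on target)

# Under the limit, the clock climbs one 13 MHz bin per tick;
# over the limit, it falls back toward the base clock.
print(boost_step(1006, power_w=150, power_limit_w=170))  # -> 1019
print(boost_step(1097, power_w=180, power_limit_w=170))  # -> 1084
```

This is why "turning the dynamic nanny off" is not free: the same loop that caps overclocks is what lets the card advertise a low TDP while opportunistically clocking higher.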
     
  12. [H]@RD5TUFF

    Joined:
    Nov 13, 2009
    Messages:
    5,615 (3.11/day)
    Thanks Received:
    1,707
    Location:
    San Diego, CA
    :roll::roll::roll::roll::roll::roll:

    It's always entertaining when fanboys get butt-hurt over people's opinions. The fact is a stock reference 7970 is slower than a stock reference 680; I know it hurts you to accept this, but it is true. As for the GHz Edition, compare it to a 680 that is factory overclocked and the result is the same. My 680 Classified walks all over any 7970, so :cry::cry: less and check facts more.
     
  13. BigMack70

    Joined:
    Mar 23, 2012
    Messages:
    506 (0.54/day)
    Thanks Received:
    114
    Guys, we should only look to cadaveca now for tech rumors, this guy obviously knows what's up and we can't trust dozens of other knowledgeable people/sites. They all just make stuff up and obviously only release info to get page views.

    :rolleyes:
     
    [H]@RD5TUFF says thanks.
  14. BigMack70

    Joined:
    Mar 23, 2012
    Messages:
    506 (0.54/day)
    Thanks Received:
    114
    My 1200/1800 Lightnings very seriously doubt that... OC'd 680s and 7970s are about even overall, trading blows depending on the game/benchmark.

    The 7970 gets beat by the 680, sure, but pricing has updated to reflect that now: the 7970 is priced about the same as the 670, and the GHz Edition around the 680.
     
  15. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.44/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
    :laugh: at Kepler being Fermi. Sure and Fermi is Tesla arch (GT200). :laugh:

    If we go by similarities, as in "they look the same to me with a few tweaks," we can go back to G80 days. Same on AMD's side. But you know what? They have very little in common. Abandoning hot-clocks is not a trivial thing. Tripling the number of SPs on a similar transistor budget is not trivial either, and it denotes exactly the opposite of what you're saying. Fermi and Kepler schematics may look the same, but they aren't the same at all.

    As to the rest: it makes little sense to think that GK104 is the only thing they had planned. In previous generations they created 500 mm^2 chips that were 60%-80% faster than their previous gen, and AMD was close, 15%-20% behind. But on this gen they said: "You know what? What the heck. Let's create a 300mm^2 chip that is only 25% faster than our previous gen. Let's make the smallest (by far) jump in performance that we've ever had, let's just leave all that potential there. Later we'll make GK110 a 550 mm^2 chip, so we know we can do it, and it's going to be a refresh part so it IS going to be a gaming card, but for now, let's not make a 450mm^2 chip, or a 350mm^2, no, no sir, a 294mm^2 with a 256-bit interface that will clearly be the bottleneck even at 6000 MHz; let's just let AMD rip us a new one..."

    EDIT: If GK110 had not been fabbed and shipped to customers already, you'd have the start of a point. But since it's already been shipped, it means that it's physically possible to create a 7.1 b chip and make it economically viable (the process hasn't changed much in 6 months). So like I said, something in the middle, like a 5b-transistor and/or 400mm^2 chip, would be entirely possible, and Nvidia would have gone with that, because AMD's trend has been upwards in regards to die size and there's no way in hell Nvidia would have tried to compete with a 294mm^2 chip when they knew 100% that AMD had a bigger chip AND they have historically been more competent at making more in less area. Nvidia can be a lot of things, but they are not stupid and would not commit suicide.
     
    Last edited: Oct 17, 2012
  16. cadaveca

    cadaveca My name is Dave

    Joined:
    Apr 10, 2006
    Messages:
    14,049 (4.51/day)
    Thanks Received:
    7,203
    Location:
    Edmonton, Alberta
    Yep. :p


    The fact that you can't ignore that bit says something.


    What, I cannot speculate myself?

    And when you can't attack my points, you go after my character? lulz.

    As if I want to be the source of rumours. :p Yes, I want to be a gossip queen.


    Like, do you get that? I'm not the one that posted the news...BTA didn't either...he just brought it here for us to discuss...

    These same sites you trust get it wrong just as often as right. "Oh yeah, Bulldozer is awesome, smokes Intel outright"... yeah, that worked...


    The HD 7990 from AMD in August... but it was PowerColor...


    Rumours are usually only part-truths, so counting them all as fact... is not my prerogative. :p

    Well, that's just it. This is complicated stuff.

    I am not saying at all that GK104 was the only thing... it isn't. But GK110 was never meant to be a GTX part. Kepler is where GeForce and Tesla become truly separate products.


    And yeah, it probably did work exactly like that... 300mm^2... the best they could get IN THAT SPACE, since die size dictates how many chips you can get per wafer. You know, designs do work like that, so they can optimize wafer usage... right?
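The wafer-usage point above is easy to put numbers on with the classic first-order gross-dies-per-wafer estimate (wafer area over die area, minus an edge-loss term). The die areas below are the ~294mm^2 and ~550mm^2 figures quoted in this thread; the formula is a textbook approximation, not foundry data:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """First-order gross dies per wafer: usable wafer area divided by
    die area, minus a correction for partial dies lost at the edge."""
    radius = wafer_diameter_mm / 2
    wafer_area = math.pi * radius * radius
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

# ~294mm^2 (GK104-class) vs ~550mm^2 (GK110-class) on a 300 mm wafer:
print(dies_per_wafer(294))  # -> 201 gross dies
print(dies_per_wafer(550))  # -> 100 gross dies
```

Roughly twice as many candidate dies per wafer for the smaller chip, before yield even enters the picture, which is the economic argument for a ~300mm^2 gaming part.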
     
    Last edited: Oct 17, 2012
  17. BigMack70

    Joined:
    Mar 23, 2012
    Messages:
    506 (0.54/day)
    Thanks Received:
    114
    Didn't mean it as an attack on your character, I'm just saying that your last couple posts had an "I know what I'm talking about because I'm a reviewer and you peons don't" flavor to them, that's all.

    Could just be reading them wrong, I suppose, but I think not.

    Anyways, rumors are rumors, but they exist for a reason, and this particular family of rumors has been around for almost a year now... plenty long enough to indicate there's something to it.

    Enough debating about rumors for me, though.
     
  18. cadaveca

    cadaveca My name is Dave

    Joined:
    Apr 10, 2006
    Messages:
    14,049 (4.51/day)
    Thanks Received:
    7,203
    Location:
    Edmonton, Alberta
    Yeah, you're reading that wrong. I was saying explicitly that I don't know WTF I'm talking about here, since I'm a reviewer. If I did know what I was talking about, I'd not be able to discuss it.
     
  19. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.44/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
    Oh gosh. Before you say that one more time, can you please explain at least once why it has so many units that are completely useless in a Tesla card?

    Looking at the whitepaper, anyone who knows a damn about GPUs can see that GK110 has been designed to be a fast GPU as much as it's been designed to be a fast HPC chip. Even GF100/110 was castrated in that regard compared to GF104, and G80 and G9x had the same kind of castration, but in Kepler, the family where "GeForce and Tesla become truly separate products," they chose to maintain all those unnecessary TMUs, tessellators and geometry engines.

    - If GK104 were at least close to 400mm^2, your argument would hold some water. At 294mm^2 it does not.
    - If GK104 were 384 bits, your argument would hold water. At 256 bits, it does not.
    - If GK110 didn't exist and had not released 6 months after GK104 did...
    - If GK110 had no gaming features and wasn't used as the high-end refresh card...
    - If GK104 had been named GK100... you get it.
     
    Last edited: Oct 17, 2012
  20. cadaveca

    cadaveca My name is Dave

    Joined:
    Apr 10, 2006
    Messages:
    14,049 (4.51/day)
    Thanks Received:
    7,203
    Location:
    Edmonton, Alberta
    Because all those things are needed for medical imaging. HPC products still need 3D video capability too. Medical imaging is a very vast market, worth billions. 3D is not gaming. That's where you miss some things. :p

    And no, I do not agree with the summation that GK110 was intended to be a "fast GPU". The needed die size says that is not really possible.


    But, since it's for HPC, where precision is needed over speed as a priority, that's OK, and lowered clocks, but greater functionality, makes sense.


    However, for the desktop market, where speed wins overall, the functionality side isn't so much needed, so it was stripped out. This makes for two distinct product lines with staggered releases, hence not competing with each other.

    I mean likewise, what do all those HPC features have to do with a gaming product? :p
     
  21. [H]@RD5TUFF

    Joined:
    Nov 13, 2009
    Messages:
    5,615 (3.11/day)
    Thanks Received:
    1,707
    Location:
    San Diego, CA
    You're confusing value in a debate about performance; not the same thing at all, nor valid in any way. :shadedshu
     
  22. Protagonist

    Protagonist

    Joined:
    Sep 7, 2010
    Messages:
    723 (0.48/day)
    Thanks Received:
    125
    Yes, there is something, and what we know for sure is GK110 TESLA/QUADRO... for now.

    And as cadaveca said, the info we have right now is just rumors and speculation; let's just wait, and sooner or later we will all know for sure.
     
    Caribana says thanks.
  23. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.44/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
    Medical imaging is not HPC. Maybe you should have been clearer. That being said, Nvidia has announced GK110-based Tesla, but no Quadro:

    http://www.techpowerup.com/170096/N...tion-Revolution-With-Kepler-Architecture.html

    Their Maximus platform is composed of GK104-based Quadro and GK110-based Tesla cards. So I think that you're missing much more than I am.

    And oh, I don't doubt there will be a GK110-based Quadro, but it's not been announced yet AFAIK. I've only heard about them in the same rumors as the GeForce part, so... ;)

    And yet it all points to Nvidia using it. And in the past they have used chips of the same size and quite successfully.

    And an HPC chip has never been profitable on its own, and I don't think it is right now either.
     
  24. cadaveca

    cadaveca My name is Dave

    Joined:
    Apr 10, 2006
    Messages:
    14,049 (4.51/day)
    Thanks Received:
    7,203
    Location:
    Edmonton, Alberta
    I bet nVidia would disagree.


    For me, medical imaging is part of the HPC market. Precise imaging isn't needed just for medical uses either; anything that needs an accurate picture, from oil and gas exploration to military uses, falls under the same usage. Both Tesla and Quadro cards are meant to be used together, building an infrastructure that can scale to consumer demands, called Maximus. If you need more rendering power, say for movie production, you've got it; if you need more compute, for stock market simulation, that's there too. So I fail to see how you've posted much that supports your stance there. Nvidia doesn't build single GPUs... they build compute infrastructure.


    Welcome to 2012.


    Did you read that press release?


    :p

    I mean, that whole press release is nVidia claiming it IS profitable, or they wouldn't be marketing towards it. :p



    In fact, that press release kinda proves my whole original point, now doesn't it? ;) GK104 for imaging (3D, Quadro and GeForce), GK110 for compute (Tesla).


    Like, maybe I'm crazy...but...well...whatever. I'm gonna play some BF3. :p
     
    Last edited: Oct 17, 2012
  25. crazyeyesreaper

    crazyeyesreaper Chief Broken Rig

    Joined:
    Mar 25, 2009
    Messages:
    8,161 (4.01/day)
    Thanks Received:
    2,805
    Location:
    04578
    I couldn't care less if you want to call me a fanboy, [H]@RD5TUFF, but honestly it just makes you look like a child.

    I couldn't care less about your Classified 680s, blah blah; I still had my card months before the 680 was available and enjoyed roughly the same performance.

    Simple fact is, if I want to be a dick and pull useless numbers, the 7970 holds the World Record for 3DMark 11 Extreme, Heaven Extreme, among others.

    When both cards are overclocked they perform the same; they excel in certain games over their rival and vice versa:

    AvP favors AMD
    BF3 favors NVIDIA
    etc
     
