
GeForce GTX 680 Features Speed Boost, Arrives This Month, etc., etc.

Discussion in 'News' started by btarunr, Mar 7, 2012.

  1. Crap Daddy

    Crap Daddy

    Joined:
    Oct 29, 2010
    Messages:
    2,738 (2.16/day)
    Thanks Received:
    1,043
    Well, we still don't know the whole story, especially the price of the GTX 680.
    So there is still a chance, because NV has to fill out the lineup all the way down to the GTX 560 Ti/HD 7850 level, which are $200-250 cards, and they have only one chip ready. The GK106 is nowhere to be seen (in fact it's more mysterious than the GK110).
  2. TheMailMan78

    TheMailMan78 Banstick Dummy

    Joined:
    Jun 3, 2007
    Messages:
    20,635 (8.21/day)
    Thanks Received:
    7,244
    Tenerife could be the 8900 series too. Right now Tahiti is top tier.....unless AMD goes something like 7995 or some crap.

    I never flamed you for the price. The 7970 is way overpriced.
  3. xkche

    xkche

    Joined:
    Feb 4, 2009
    Messages:
    143 (0.08/day)
    Thanks Received:
    5
    I see the performance of the HD 7870 so close to the HD 7950... maybe the HD 7900 is being limited by drivers until NVIDIA releases the GTX 600???

    Maybe I'm paranoid.... @.@
  4. Casecutter

    Joined:
    Apr 19, 2011
    Messages:
    1,069 (0.98/day)
    Thanks Received:
    75
    Location:
    So. Cal.
    Exactly, and if this works out and blindsides AMD... kudos.

    But here's me thinking… What happened, or is happening, with GK110? Why so late? If GK104 came out this great, why not follow up with a GK110 "death blow" at any price? Or is it not working out right? How can a bigger die not be working, and why can't they correct it? ...

    Are they giving AMD time to engineer and release a re-spin? Something doesn't add up here; I mean, if it's that revolutionary in size, performance, efficiency, and price… why aren't they compelled to stand the market on its ear?
  5. erocker

    erocker Super Moderator Staff Member

    Joined:
    Jul 19, 2006
    Messages:
    39,139 (13.83/day)
    Thanks Received:
    13,587
    It's all semantics and naming. All the 8900 series will basically be is a beefed-up Tahiti. Same architecture.
  6. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.60/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
    And GK104 is top tier now.

    I never said you did, but oh, I was flamed by many, because 15% over the GTX 580 was miraculous, and Nvidia would never come up with something much faster, and if they did it would cost $1000 and draw 500 W and whatnot.
  7. HumanSmoke

    HumanSmoke

    Joined:
    Sep 7, 2011
    Messages:
    1,065 (1.12/day)
    Thanks Received:
    315
    Could be any number of reasons:
    1. The larger GPU is obviously going to need a wider memory bus. Nvidia are lagging in memory controller implementation at the present time - hardly surprising, since the GDDR5 controller was basically pioneered by AMD. Witness the relatively slow memory clocks on Fermi.
    A 384-bit (or wider) bus is likely a necessity for workstation, and particularly HPC, use, and whatever else GK110 is, it will primarily earn back its ROI in the pro market.
    2. Likewise cache.
    3. Double precision optimization?
    4. Maybe the sheer size of the die is problematic for yield, heat dissipation, etc. Not an unknown factor with large GPUs in general and Nvidia's large monolithic dies in particular.
  8. erixx

    erixx

    Joined:
    Mar 24, 2010
    Messages:
    3,080 (2.07/day)
    Thanks Received:
    386
    Opening another beer before going to bed. Wake me up when we can order this. :)
  9. Dj-ElectriC

    Dj-ElectriC

    Joined:
    Aug 13, 2010
    Messages:
    2,080 (1.55/day)
    Thanks Received:
    802
    Misleading naming is misleading
  10. v12dock

    v12dock

    Joined:
    Dec 18, 2008
    Messages:
    1,509 (0.77/day)
    Thanks Received:
    286
    Nvidia is no longer this magical, super-powerful, mysterious company that worshipers once believed in; performance levels are well within what I expected. It's going to be tricky picking a GPU for a build I have coming up in late April.
  11. TheMailMan78

    TheMailMan78 Banstick Dummy

    Joined:
    Jun 3, 2007
    Messages:
    20,635 (8.21/day)
    Thanks Received:
    7,244
    Then what will they call the next one? A 780 in the same year? Sorry, I'm not buying it.

    Well.....as it's been said, we haven't seen the price or power draw yet. Could be 1000 bucks with a 500 W power draw for 10% faster than the 7970, lol. I doubt it.....but NVIDIOTS would pay for it. I wouldn't put it past NVIDIA to charge it knowing this.
    hardcore_gamer and Nihilus say thanks.
  12. OneCool

    OneCool

    Joined:
    Nov 27, 2005
    Messages:
    832 (0.27/day)
    Thanks Received:
    64
    Location:
    Look behind you!!
    Sounds like they're stressing a mid-range chip to be top dog.


    Something is telling me they're adding voltage to get the clocks up to compete.

    "Speed Boost" come on!! They already have 3 clock profiles now. Why add some other kind of voltage control unless you're worried the damn thing is going to overheat in 3D situations? I can just hear the fan going up and down, up and down :rolleyes:

    I hope I'm wrong but ...... we shall see :shadedshu
  13. bear jesus

    bear jesus New Member

    Joined:
    Aug 12, 2010
    Messages:
    1,535 (1.14/day)
    Thanks Received:
    200
    Location:
    Britland
    I have to wonder what effect the "clock speed-boost feature" could have on overclocking, and if it could be turned off.

    Hopefully GK104 clocks well, because if it is only a relatively small percentage ahead of a stock 7970, then surely 7970s with high clocks (1.1 GHz+) would be close behind or in theory even beat it.
    Whatever happens, it looks like things could get interesting, but in a kind of unexpected way.

    As far as the name goes: obviously, after seeing all the dual mid-range GPU cards, Nvidia chose to make the 680 just 660 SLI on a chip, but the yields failed them, so now the 660 is the 680 and GK100 is the 780 for when AMD brings out the 89xx cards :p
  14. HumanSmoke

    HumanSmoke

    Joined:
    Sep 7, 2011
    Messages:
    1,065 (1.12/day)
    Thanks Received:
    315
    So when's launch day for the GTX 780? I'd like to get my pre-order in.

    BTW:
    HD 2900 series ....May 2007
    HD 3870 series.....Nov 2007

    So, not exactly unheard of, even if you use the "same year" terminology rather than a calendar year. If we're talking the same architecture, you might want to check the GF100/GF104 launch timetable.

    Something to be said for building a brand. Maybe if ATi/AMD had shown more than a passing interest in dev support (GiTG) and pro graphics we wouldn't be looking at this situation.

    Still, no pleasing some people....as your avatar proclaims.
  15. farquaid New Member

    Joined:
    Jul 21, 2009
    Messages:
    18 (0.01/day)
    Thanks Received:
    1
    The question I ask is how well it will work. It would be a really good thing if it can remove dips in the fps. Those happen very suddenly, so I think it would be hard to boost clock speed instantaneously, and if it doesn't boost speed instantaneously, it would have to predict the future.
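    To illustrate the timing problem, here is a minimal sketch in Python, assuming a purely reactive scheme (all names and numbers are hypothetical, not NVIDIA's actual algorithm): the driver only learns a frame was slow after rendering it, so the boost lands on the next frame.

    Code:
    BASE_CLOCK = 1000   # MHz, hypothetical base clock
    BOOST_CLOCK = 1100  # MHz, hypothetical boost bin

    def simulate(frame_loads):
        """frame_loads: relative render cost per frame (1.0 = normal)."""
        clock = BASE_CLOCK
        for i, load in enumerate(frame_loads):
            # Frame time scales with load and inversely with clock speed.
            frame_time_ms = 16.7 * load * (BASE_CLOCK / clock)
            print(f"frame {i}: load={load:.1f} clock={clock} MHz "
                  f"time={frame_time_ms:.1f} ms")
            # React AFTER the frame finishes: the boost only helps the next one.
            clock = BOOST_CLOCK if load > 1.2 else BASE_CLOCK

    simulate([1.0, 1.0, 2.0, 2.0, 1.0])  # the first heavy frame still dips

    Note the first heavy frame still renders at base clock; only the second heavy frame benefits, which is exactly the "predict the future" problem.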
    bear jesus says thanks.
  16. TheMailMan78

    TheMailMan78 Banstick Dummy

    Joined:
    Jun 3, 2007
    Messages:
    20,635 (8.21/day)
    Thanks Received:
    7,244
    And if you owned a 2900 series you would also know what a bitter taste that left in your mouth. Why do you think it's not commonplace anymore? Hmmmmm.

    Also, I love all the "But, but AMD does it too" crap. Some of it isn't even remotely the same, yet people use it as an excuse for what NVIDIA is doing. Guess what? This thread is about NVIDIA, not AMD.

    There I bit. Ya happy? Do you really wanna troll me?
  17. Crap Daddy

    Crap Daddy

    Joined:
    Oct 29, 2010
    Messages:
    2,738 (2.16/day)
    Thanks Received:
    1,043
    I heard there's a GTX780 special edition handmade and signed by Jen Hsun Huang waiting for you in the lobby at NV headquarters in Santa Clara. I heard it beats the heck out of Tenerife.
  18. TheMailMan78

    TheMailMan78 Banstick Dummy

    Joined:
    Jun 3, 2007
    Messages:
    20,635 (8.21/day)
    Thanks Received:
    7,244
    Buying plane ticket nowz!
  19. the54thvoid

    the54thvoid

    Joined:
    Dec 14, 2009
    Messages:
    3,128 (1.97/day)
    Thanks Received:
    1,354
    Location:
    Glasgow - home of formal profanity
    I still don't really know why folk say the 7970 is overpriced. It's a consumer product made by a private company for profit. The stark reality is that it is better than the 580 by a reasonable margin and can overclock hugely, without any hassle, to make it vastly superior (to me that means 40-50% faster).

    The 3GB AMD card is on par with (or cheaper than) the 3GB GTX 580 versions. Likewise, the 6970 was priced reasonably high at launch (although the premium to move to the 580 was not proportional to its superiority). The 7970 has to be priced higher than the previous best-performing single-GPU card - that is just reality.

    As for the 680, if it has a lower production cost (than the 580 had) then it is not unreasonable to assume it will sell at a competitive price. Many reports mention it is an efficient chip, unlike Fermi. If that is the case, it does not need an exorbitant price tag. NV marketing knows how to sell (for better or worse, ethically) - it is not unreasonable to suggest they release a superior card and use AMD's high pricing to make consumers take a second look at AMD's prices. A "hey, look at those AMD douchebags ripping you off" scenario.

    As for people harping on about how AMD will just release higher-clocked cards to 'hump' the 680, that's an invalid point. IF GK104 is efficient and conservatively clocked, then it may also be an overclocking dream - we don't know yet. My 580 can run at 950 (a 23% overclock). A 7970 at stock is 925, and a lot of reviewers topped out at 1125 (the TPU review hit 1075). That's a 21% overclock. Okay, so my 580 is a Lightning, but the point is the same: overclocking can be done on both sides.

    The 680 will also be the contemporary top-tier NV card. It doesn't matter if it is not the uber-performing card of myth. It is NV's top card and possibly the world's top-performing single-GPU card. If all the reasonable rumours are true, GK110 (112, whatever), the daddy Kepler card, IS the be-all and end-all, and NV are in no rush with it. They've seen Tahiti and thought, "oh, is that it?" and focussed on the GK104 launch because they know they can beat it. It's a strong possibility that whatever AMD come up with, Big GK will win. Reasoning?
    GCN is AMD's new design. They'll evolve their compute design, for better or worse, to compete with GK. NV have CUDA well under control. They can shrink it onto the current fab process and make it a monster.

    I really think this round of gfx cards are little 'offerings'. AMD saying, "oh looky at our new compute stuff" and NV saying, "oh looky at our new efficient card". I think Q4 2012 will be when the real shit hits the fan and both camps make tweaks and redesigns that establish their proper power play.

    Oh, Charlie at S/A says TSMC has halted ALL 28nm processes for now due to an issue.
    http://semiaccurate.com/2012/03/07/tsmc-suddenly-halts-28nm-production/

    Anyway, all of this is just my own logical opinion. I'm just as eager as anyone to see the real benchmarks from reviews.
    m1dg3t says thanks.
  20. bear jesus

    bear jesus New Member

    Joined:
    Aug 12, 2010
    Messages:
    1,535 (1.14/day)
    Thanks Received:
    200
    Location:
    Britland
    That is a very good point. If it could respond fast enough and with enough of a boost, it could in theory improve the gameplay experience across the board by at least dampening the fps dips.

    I would expect it to act kind of like AMD's PowerTune, but in reverse.
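    If that guess is right, the logic might look something like this minimal sketch, assuming PowerTune throttles the clock down when estimated power exceeds the budget, so the "reverse" raises the clock while power stays under budget (the names and numbers are hypothetical placeholders, not either vendor's real firmware):

    Code:
    POWER_BUDGET_W = 195                      # hypothetical board power budget
    BASE_MHZ, MAX_BOOST_MHZ, STEP_MHZ = 1000, 1100, 25

    def boost_step(clock_mhz, est_power_w):
        """One control-loop tick: move the clock toward whichever limit
        (power budget or top boost bin) binds first."""
        if est_power_w > POWER_BUDGET_W:
            return max(BASE_MHZ, clock_mhz - STEP_MHZ)  # PowerTune-style throttle
        if clock_mhz < MAX_BOOST_MHZ:
            return clock_mhz + STEP_MHZ                 # the "reverse": boost up
        return clock_mhz

    clock = BASE_MHZ
    for power in [150, 160, 170, 200, 180]:   # simulated power readings in watts
        clock = boost_step(clock, power)
        print(f"power={power} W -> clock={clock} MHz")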
  21. Casecutter

    Joined:
    Apr 19, 2011
    Messages:
    1,069 (0.98/day)
    Thanks Received:
    75
    Location:
    So. Cal.
    The reference HD 7950 3GB is showing 16% better performance @2560x with a $550 MSRP, while a GTX 580 1.5GB originally had an MSRP of $500! So AMD gave another 1.5GB of memory, 16% more performance, and better efficiency, and at the time didn't see Nvidia challenging with a GK104, so that price was not totally out of line.
  22. phanbuey

    phanbuey

    Joined:
    Nov 13, 2007
    Messages:
    5,198 (2.21/day)
    Thanks Received:
    973
    Location:
    Miami
    I wondered that same thing; there is definitely no obvious way of doing it... Turbo Boost makes sense because it can detect when an application is bound by clock speed because it is single-threaded, and then boosts the core running that thread....

    Unless it dynamically overclocks the bottlenecking parts of the GPU, I don't see how it could benefit. I mean, it is clear that it will save power by doing this, but power saving always = more latency and reduced performance. Maybe it detects a safe overclock and applies it during games? The only other option is if the card boosts to a clock that is unstable long-term but stable for short bursts.
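    Taking that last guess literally, here's a minimal sketch, assuming a boost bin that is only trusted for short bursts, so the card rides it briefly and then backs off even if nothing has gone wrong yet (the names, limits, and frame counts are all made up for illustration):

    Code:
    BURST_LIMIT_FRAMES = 120        # hypothetical duration the burst bin is trusted for
    BASE_MHZ, BURST_MHZ = 1000, 1150

    def clock_for_frame(heavy_frame, frames_in_burst):
        """Return (clock, updated burst counter) for one frame."""
        if heavy_frame and frames_in_burst < BURST_LIMIT_FRAMES:
            return BURST_MHZ, frames_in_burst + 1   # short, bounded burst
        return BASE_MHZ, 0                          # back off to the safe clock

    burst = 0
    for i, heavy in enumerate([False, True, True, True, False]):
        mhz, burst = clock_for_frame(heavy, burst)
        print(f"frame {i}: heavy={heavy} -> {mhz} MHz (burst frames={burst})")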
    Last edited: Mar 7, 2012
  23. newtekie1

    newtekie1 Semi-Retired Folder

    Joined:
    Nov 22, 2005
    Messages:
    19,484 (6.35/day)
    Thanks Received:
    5,726
    I don't see it as such. I think what they are talking about is more of a short speed boost that, if run constantly, would overheat the card. When really heavy loads are detected, the card overclocks itself for a short period of time, at speeds that would overwhelm the cooler if sustained for long.

    The cards already do power saving when not under load, but this detects extremely heavy load and cranks up the speeds to push through it. For example:

    Imagine you are playing an FPS and someone throws a grenade and there is an explosion. This is an instance of high load, where a normal card would experience a framerate drop (or lag spike). But GK104 detects this high load and momentarily boosts the clock speed to help mitigate the lag.

    Using your example, it would be a 750HP engine that has to use a 650HP engine's cooling system due to space constraints, but you can push a button and for a few seconds get 750HP.

    They already have the "Render 3 Frames in Advance" option, so....

    But I think it could be a matter of only taking a frame or two to boost the speed.

    Frame 1: This frame is really hard to render.
    Frame 2: Speed boost kicks in.

    We know the cards are already measuring load, so it probably isn't hard to detect hard-to-render frames and give a momentary speed boost.
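    As a toy version of that two-frame example, assuming the driver already knows each frame's render time (the budget, clocks, and temperature ceiling below are hypothetical, not NVIDIA's actual heuristic):

    Code:
    TARGET_MS = 16.7              # 60 fps frame-time budget
    BASE_MHZ, BOOST_MHZ = 1000, 1100
    TEMP_CEILING_C = 90           # hypothetical: no boosting while this hot

    def next_clock(last_frame_ms, gpu_temp_c):
        """'Frame 1 was hard to render' -> 'frame 2 boosts', unless too hot."""
        if gpu_temp_c >= TEMP_CEILING_C:
            return BASE_MHZ               # protect the 650HP cooling system
        if last_frame_ms > TARGET_MS:
            return BOOST_MHZ              # frame 2: speed boost kicks in
        return BASE_MHZ

    print(next_clock(25.0, 70))   # hard frame, cool card -> 1100
    print(next_clock(25.0, 92))   # hard frame, hot card  -> 1000
    print(next_clock(12.0, 70))   # easy frame            -> 1000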
    phanbuey says thanks.
    Crunching for Team TPU 25 Million points folded for TPU
  24. phanbuey

    phanbuey

    Joined:
    Nov 13, 2007
    Messages:
    5,198 (2.21/day)
    Thanks Received:
    973
    Location:
    Miami
    ^^ Can't wait to see the reviews and what this does to aftermarket overclocking.

    There is probably a time limit too... what if you're playing a game that gives the card an all-round general hard time...
  25. newtekie1

    newtekie1 Semi-Retired Folder

    Joined:
    Nov 22, 2005
    Messages:
    19,484 (6.35/day)
    Thanks Received:
    5,726
    Actually, now that I think about it, it only really has to detect framerate. The drivers are already monitoring framerate in real time; that is how OSD programs like FRAPS work. So, when it detects a drop in framerate, speed boost kicks in for 30 seconds (or whatever) to help through it. Then there would be some kind of cool-off period between boosts to keep the card from overheating, as well as a maximum temperature for the cards to run at, beyond which there will be no speed boosts until the card cools off.
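    Put together, a minimal sketch of that scheme, assuming FRAPS-style framerate sampling plus a timed boost window, a cool-off period, and a temperature ceiling (every threshold below is a made-up placeholder):

    Code:
    BOOST_WINDOW_S = 30.0     # "30 seconds (or whatever)"
    COOLOFF_S = 15.0          # hypothetical rest period between boosts
    FPS_FLOOR = 50.0          # hypothetical: boost when fps falls below this
    TEMP_MAX_C = 90.0         # hypothetical: no boosting past this temperature

    class SpeedBoost:
        def __init__(self):
            self.boost_until = 0.0   # boost stays active until this timestamp
            self.ready_at = 0.0      # earliest time the next boost may start

        def boosted(self, now_s, fps, temp_c):
            """Return True if the boost clock should be applied right now."""
            if temp_c >= TEMP_MAX_C:
                return False                      # too hot: never boost
            if now_s < self.boost_until:
                return True                       # inside an active boost window
            if fps < FPS_FLOOR and now_s >= self.ready_at:
                self.boost_until = now_s + BOOST_WINDOW_S
                self.ready_at = self.boost_until + COOLOFF_S
                return True                       # fps dipped: start a new window
            return False                          # steady state or cooling off

    sb = SpeedBoost()
    for t, fps, temp in [(0, 60, 70), (1, 45, 72), (20, 60, 80), (32, 45, 82)]:
        print(f"t={t}s fps={fps} temp={temp}C boost={sb.boosted(t, fps, temp)}")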
    Crunching for Team TPU 25 Million points folded for TPU
