
GK104 Dynamic Clock Adjustment Detailed

Discussion in 'News' started by btarunr, Mar 8, 2012.

  1. btarunr

    btarunr Editor & Senior Moderator Staff Member

    Joined:
    Oct 9, 2007
    Messages:
    28,195 (11.43/day)
    Thanks Received:
    13,568
    Location:
    Hyderabad, India
    With its GeForce Kepler family, at least the higher-end parts, NVIDIA will introduce what it calls Dynamic Clock Adjustment, which adjusts the clock speeds of the GPU below and above the baseline clock speeds depending on the load. The approach is similar to how CPU vendors do it (Intel Turbo Boost and AMD Turbo Core). Turning down clock speeds under low loads is not new to discrete GPUs; going above the baseline dynamically, however, is.

    There has been quite some confusion over whether NVIDIA will continue to use "hot clocks" with GK104; theories for and against the notion have been reinforced by conflicting reports. However, we now know that punters with both views were looking at it from a binary viewpoint. The new Dynamic Clock Adjustment is similar and complementary to "hot clocks", but differs in that Kepler GPUs come with a large number of power plans (dozens), and operate taking load, temperature, and power consumption into account.


    The baseline core clock of GK104's implementation will be similar to that of the GeForce GTX 480: 705 MHz, which clocks down to 300 MHz at the lowest load, while the geometric domain (the de facto "core") clocks up to 950 MHz under high load. The CUDA core clock domain (the de facto "CUDA cores") will not maintain synchrony with the "core"; it will independently clock itself all the way up to 1411 MHz when the load is at 100%.

    Source: VR-Zone
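    The scheme the article describes (clocks chosen from load, temperature, and power draw) can be sketched as a toy governor. To be clear, this is a speculative illustration: the thresholds, TDP figure, and temperature limit below are my own assumptions; only the clock endpoints (300/705/950 MHz core, 1411 MHz CUDA) come from the article.

```python
# Toy sketch of a dynamic clock governor like the one described above.
# All thresholds and limits are illustrative guesses, not NVIDIA's
# firmware values; only the 300/705/950 MHz core and 1411 MHz CUDA
# clock endpoints are taken from the article.

def select_core_clock(load: float, temp_c: float, power_w: float,
                      tdp_w: float = 195.0, temp_limit_c: float = 95.0) -> int:
    """Pick a core clock (MHz) from load, temperature, and power draw."""
    if load < 0.10:                     # near-idle: drop to the floor clock
        return 300
    if temp_c >= temp_limit_c or power_w >= tdp_w:
        return 705                      # hold the baseline when limits are hit
    # Otherwise scale between baseline (705) and max boost (950) with load.
    return int(705 + (950 - 705) * min(load, 1.0))

def cuda_core_clock(core_mhz: int) -> int:
    """CUDA cores clock independently, reaching 1411 MHz at full boost."""
    # Assumed proportional mapping; the article only gives the endpoints.
    return int(core_mhz * 1411 / 950)

print(select_core_clock(0.05, 60, 80))   # near-idle case
print(select_core_clock(1.00, 70, 150))  # full load within limits
```

    The point of the sketch is that, unlike a simple 2D/3D clock switch, the output depends on three inputs at once, which is what would distinguish this from older throttling schemes.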
  2. Casecutter

    Joined:
    Apr 19, 2011
    Messages:
    1,123 (0.95/day)
    Thanks Received:
    82
    Location:
    So. Cal.
    Nothing like a dangling-carrot release... dribble... dribble... then I'm no longer stimulated.

    So what, 3-4 different SKUs with varying levels of dynamic adjustment? How does that affect OC'ing? Sounds like they took OC'ing away, or you get one or the other. This is just down-clocking to save power that provides a boost when needed, and it saves face, because maybe the GK104 isn't as good as they/we were told.

    Man, I hope they got the bugs out and the response is right. This is a big "what if"... if it doesn't work flawlessly, do you want to be the first to drop $450-550 on their software smoke and mirrors?

    Hmm... I think the smart ones will stick with (or jump to) the conventional, established way of doing it, rather than be their guinea pig.

    Bend over for their version of a pipe dream! :twitch:
    Last edited: Mar 8, 2012
  3. m1dg3t

    m1dg3t

    Joined:
    May 22, 2010
    Messages:
    2,246 (1.49/day)
    Thanks Received:
    513
    Location:
    Canada
    Don't ATi cards already have a "throttle" function? They run at XXX clocks for desktop/2D/media, then when gaming/rendering at 100% load they ramp up to full clocks?

    Could you not merge this with the other thread about the same topic? It would be nice to have all the info in the same place :eek:
  4. cadaveca

    cadaveca My name is Dave

    Joined:
    Apr 10, 2006
    Messages:
    13,743 (4.56/day)
    Thanks Received:
    6,766
    Location:
    Edmonton, Alberta
    I do not understand the purpose of this. The way it is presented suggests to me that nVidia had another Fermi on their hands, and the card cannot handle high clocks all the time without having issues. This seems the opposite of power saving to me, as lowering the clocks under lower load would lead to higher GPU utilization, which just doesn't make sense.


    It's as if, when they let the card run at high FPS, it can pull too much current? I mean, there's no point in running 300 FPS in Unreal or Quake 4, and in those apps a slower GPU would still give reasonable framerates when downclocked. So they are saving power by limiting FPS?

    I HAZ CONFUUZ!!!
  5. Casecutter

    Joined:
    Apr 19, 2011
    Messages:
    1,123 (0.95/day)
    Thanks Received:
    82
    Location:
    So. Cal.
    Yea, but this sounds like they will have it (a large number of power-plan adjustments) fluctuating between dozens of profiles right while you play the game.

    You're focusing for that headshot... (dynamically dumps clocks)... boom, you're dead. :eek:
    Last edited: Mar 8, 2012
  6. sanadanosa

    sanadanosa

    Joined:
    Jun 13, 2011
    Messages:
    125 (0.11/day)
    Thanks Received:
    15
    Location:
    Semarang, Indonesia
    Fermi does it too. I think what they mean is something like a dynamic overclock. Just look at the difference between Intel SpeedStep and Intel Turbo Boost.
    Last edited: Mar 8, 2012
  7. dir_d

    dir_d

    Joined:
    Sep 1, 2009
    Messages:
    848 (0.48/day)
    Thanks Received:
    110
    Location:
    Manteca, Ca
    I'm with you on this one; I guess we will have to wait for W1zz to explain it fully.
  8. Inceptor

    Inceptor

    Joined:
    Sep 21, 2011
    Messages:
    497 (0.49/day)
    Thanks Received:
    119
    It sounds to me like a way to regulate power usage and, at the same time, increase minimum framerates.
    Here's my speculation:
    If they increase clocks too far, it eats too much power and/or is unstable, but performance is already good at lower power, so to give a turbo *oomph* and increase overall performance, they use this feature in some kind of burst mode.

    There's been talk of doing this kind of thing with mobile processors in order to momentarily increase performance when needed.

    There's nothing bad about the idea behind it.
    As to whether it allows you to overclock, there's no information you can go on to make any kind of comment about overclockability. So why trash the thread with "OMG! It's horrible! The worst feature ever! Fail1!!" comments?
  9. bill_d New Member

    Joined:
    Mar 9, 2008
    Messages:
    35 (0.02/day)
    Thanks Received:
    0
    Sounds like a way to compare a heavily overclocked card to a stock 7970 card: "oh look, 10% faster".
  10. Casecutter

    Joined:
    Apr 19, 2011
    Messages:
    1,123 (0.95/day)
    Thanks Received:
    82
    Location:
    So. Cal.
    I brought up a legitimate question: will this permit overclocking, and if it does, how will that affect this feature's operation?

    I didn't trash the thread; there wasn't one... I never said "it's horrible! the worst feature ever!" But are you subconsciously thinking that?
  11. Inceptor

    Inceptor

    Joined:
    Sep 21, 2011
    Messages:
    497 (0.49/day)
    Thanks Received:
    119
    Actually, I find this quite interesting, and for me it doesn't detract from buying a 6xx-series GPU at all.
    But maybe I should have clarified and extended the comment to include all (three?) threads discussing the new NV GPU. Too much juvenile idiocy gets posted; that was my exasperation with it all.
  12. ZoneDymo

    ZoneDymo

    Joined:
    Feb 11, 2009
    Messages:
    331 (0.17/day)
    Thanks Received:
    40
    I don't get why this news is posted. Sure, it's new news regarding the card, but these clocks mean nothing to anyone, seeing as it's a new card.
    Like the information that it's 10% faster than an HD 7970 in BF3: that tells us something, because we know how fast the HD 7970 is in BF3. Again, these clocks mean nothing at this point.

    This card needs to run the gauntlet of tests asap.
  13. theoneandonlymrk

    theoneandonlymrk

    Joined:
    Mar 10, 2010
    Messages:
    3,328 (2.10/day)
    Thanks Received:
    544
    Location:
    Manchester uk
    I read that as:

    With its yield issues on the GeForce Kepler family, at least the higher-end parts (there's one part, let's be real here; the lower SKUs are just more useless than the higher SKUs), NVIDIA will introduce what it calls Dynamic GPU-Stability-Saving Adjustment, which adjusts the clock speeds of the GPU below and above the baseline clock speeds, depending on the load and temperature. The approach to this would be similar to how CPU vendors do it, i.e. useless in most scenarios (and turned off for those in the know). Turning down clock speeds under low loads is not new to discrete GPUs; however, going above the baseline dynamically...

    Look, if I make a 1.1 GHz GPU tomorrow or tonight and then set up its firmware to normally run at 0.9 GHz but boost to 1.1 during heavy load spells, until it heats up to a certain point and then drops back down to 0.9, what use is that? I've an amp that goes to 10; I don't want or need one that's got 11 on the effin' knob. I just want more noise, and scribing 11 on the dial doesn't make it louder. I'm seeing this as yet another underhanded way of selling off less-than-ideal silicon... simples.
  14. Casecutter

    Joined:
    Apr 19, 2011
    Messages:
    1,123 (0.95/day)
    Thanks Received:
    82
    Location:
    So. Cal.
    I agree there's a lot of rhetoric, but this is truly earth-shattering news, which has all the makings of good/bad depending.

    I suppose Nvidia isn't going to be releasing full bore (could be just one top SKU) on opening day, so if there are issues they can minimize them and do damage control. I do hope they have upped their CS, or that the AIBs have got up to speed on the particulars of this new Dynamic Clock Adjustment operation.
  15. Aquinus

    Aquinus Resident Wat-man

    Joined:
    Jan 28, 2012
    Messages:
    5,887 (6.59/day)
    Thanks Received:
    1,888
    Location:
    Concord, NH
    ...It sounds like another way of describing what GPUs are already doing. It sounds like a marketing ploy. GPUs already down-clock in low-load situations. nVidia needs to stop egging us on and just release the damn thing. :confused:
  16. theoneandonlymrk

    theoneandonlymrk

    Joined:
    Mar 10, 2010
    Messages:
    3,328 (2.10/day)
    Thanks Received:
    544
    Location:
    Manchester uk
    Totally agree, and this Spinal Tap-inspired BS is nonsense. I can't believe some of you are hailing this as a feature; it simply isn't. Both companies have used dynamic clocks for the last few years... so how's this in any way a new feature? I could rephrase their PR as:

    "We have clocked it a bit lower as standard in 3D, but don't worry: on occasion, when we decide, we will use a profile that will allow a slight OC until heat or power overcomes its stability." But then, that would be the truth and not very good PR :rolleyes:

    W1zz's review may prove them right, and/or it may still OC nicely, but I doubt it; they clearly need to get rid of some iffy stock, in my eyes. :wtf:

    I'll STILL be getting one either way, as hybrid PhysX is worth the effort to me, but marketing BS and fanboyistic backing grates on my nerves... carry on, my piece has been said; I'll rant no more.
  17. the54thvoid

    the54thvoid

    Joined:
    Dec 14, 2009
    Messages:
    3,204 (1.92/day)
    Thanks Received:
    1,429
    Location:
    Glasgow - home of formal profanity
    To me it means:

    The GPU can dynamically alter clocks to meet a steady-fps solution, allowing reduced power usage in real terms.

    It is not the same as an idle-state GPU. When 3D rendering initiates on a standard GPU, it kicks up to the steady clock rate (i.e. 772 MHz for the GTX 580 or 925 MHz for the 7970) and doesn't budge from that during the 3D session.

    Or, when using media on the PC, the GF110 clocks at about 405 MHz.

    This sounds like a good thing to me: a variable clock domain that allows steady fps (perhaps software/hardware detects optimum rates and adjusts clocks to match).

    I would happily buy a card that dumps a steady 60 fps on my screen (or 120 for 3D purposes). I don't need my GTX 580 at 832 MHz giving me 300 fps. I'd rather have a steady 60 fps and core clocks of 405 MHz (or whatever), reducing power usage.
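    The steady-fps idea above could be sketched as a simple feedback loop. To be clear, this is speculation on the mechanism, not anything NVIDIA has confirmed; the step size is made up, and the 405-832 MHz bounds are just borrowed from the GTX 580 figures mentioned.

```python
# Toy feedback loop for a steady-fps governor, as speculated above.
# Step size and clock bounds are illustrative, not real firmware values.

def adjust_clock(current_mhz: int, measured_fps: float,
                 target_fps: float = 60.0,
                 min_mhz: int = 405, max_mhz: int = 832,
                 step_mhz: int = 13) -> int:
    """Nudge the core clock toward the lowest value that holds target fps."""
    if measured_fps < target_fps:
        return min(current_mhz + step_mhz, max_mhz)   # falling short: clock up
    if measured_fps > target_fps * 1.1:
        return max(current_mhz - step_mhz, min_mhz)   # lots of headroom: save power
    return current_mhz                                 # on target: hold steady

clock = 832
for fps in (300, 290, 250, 120, 70, 61):   # easy scene: governor clocks down
    clock = adjust_clock(clock, fps)
print(clock)
```

    The design choice here is to only hold the clock once measured fps sits inside a small band around the target, so the governor doesn't oscillate every frame.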

    Let's wait for Monday 12th to see (rumoured NDA lift).
  18. Aquinus

    Aquinus Resident Wat-man

    Joined:
    Jan 28, 2012
    Messages:
    5,887 (6.59/day)
    Thanks Received:
    1,888
    Location:
    Concord, NH
    The one problem is that this can be implemented at the driver level.
  19. Casecutter

    Joined:
    Apr 19, 2011
    Messages:
    1,123 (0.95/day)
    Thanks Received:
    82
    Location:
    So. Cal.
    Kind of a further adaptation of what the GTX 480 had when it saw it was running FurMark... that wasn't the worst, as you can't lose while running that. This will be chugging away until an extremely demanding spike is required, then bam... 950 MHz! I can see how it helps by quickly bouncing the clock(s) 35% in a millisecond, but the power section will need to be very robust to provide the inrush of current to supply the demand.

    In theory it's been done, but I can't think of a GPU example where such an implementation has been "up-clocking" so quickly or dramatically, and the implications when enthralled in the split-second heat of battle are going to be grueling.

    Will Nvidia have such a profile as firmware (hard-wired) or more of a driver/software-type program? We wait for release day.
  20. Fairlady-z

    Joined:
    Feb 22, 2012
    Messages:
    122 (0.14/day)
    Thanks Received:
    14
    This is pretty interesting if you ask me, and I just bought two HD 7970s, as I just got out of an SLI GTX 570 system that was nothing short of a dream. I did want to see what team red had to offer, as I like to mix things up from time to time.

    Now my only concern with this feature is what type of micro-stutter will occur during SLI? I mean, how fast is this to respond when you have two or three cards in SLI? Anyway, I am sure it's a crazy-fast card and most of our concerns will be washed away once the NDA lifts.
  21. xBruce88x

    xBruce88x

    Joined:
    Oct 29, 2009
    Messages:
    2,356 (1.37/day)
    Thanks Received:
    544
    or once W1zzard gets his hands on it.
  22. DRDNA

    DRDNA

    Joined:
    Feb 19, 2006
    Messages:
    4,768 (1.56/day)
    Thanks Received:
    561
    Location:
    New York
    Unless these babies clock as well as ATI's, they have achieved absolutely nothing.
  23. buggalugs

    buggalugs

    Joined:
    Jul 19, 2008
    Messages:
    919 (0.42/day)
    Thanks Received:
    135
    Location:
    Australia
    I can see it now... guys with this nVidia card will be in the forums complaining that their GPU is clocking down too much and they're losing FPS, and they'll be asking how they can lock the clocks to max speed, etc.

    This turbo thing could be ok for guys who run stock GPU speeds and are too scared to overclock but most enthusiasts would prefer the GPU at max clocks when they're gaming.

    I guess we'll see how it turns out....
  24. jpierce55

    Joined:
    Oct 7, 2006
    Messages:
    1,335 (0.47/day)
    Thanks Received:
    91
    I was thinking that if AMD had done this, the 680 would not have 10% over the 7970.

    Rumored 10%, that is.
  25. Aquinus

    Aquinus Resident Wat-man

    Joined:
    Jan 28, 2012
    Messages:
    5,887 (6.59/day)
    Thanks Received:
    1,888
    Location:
    Concord, NH
    Source?
