Thursday, March 8th 2012

GK104 Dynamic Clock Adjustment Detailed

With its GeForce Kepler family, at least the higher-end parts, NVIDIA will introduce what it calls Dynamic Clock Adjustment, which adjusts GPU clock speeds below and above the baseline clock speed, depending on the load. The approach is similar to how CPU vendors do it (Intel Turbo Boost and AMD Turbo Core). Turning down clock speeds under low load is not new to discrete GPUs; dynamically going above the baseline, however, is.

There has been quite some confusion over whether NVIDIA will continue to use "hot clocks" with GK104; theories for and against the notion have been reinforced by conflicting reports. We now know that punters on both sides were looking at it from a binary viewpoint. The new Dynamic Clock Adjustment is similar and complementary to "hot clocks", but differs in that Kepler GPUs come with a large number of power plans (dozens), and operate taking into account load, temperature, and power consumption.

The baseline core clock of GK104's implementation will be similar to that of the GeForce GTX 480: 705 MHz. It clocks down to 300 MHz when the load is lowest, and the geometry domain (the de facto "core") clocks up to 950 MHz under high load. The CUDA core clock domain (the de facto "CUDA cores") will not maintain synchrony with the "core"; it independently clocks itself all the way up to 1411 MHz when the load is at 100%.

Source: VR-Zone
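The behavior described, many power plans selected by load, temperature, and power draw, can be pictured as a simple clock governor. The sketch below is purely illustrative: the 300/705/950 MHz clocks come from this article, while the power-plan table, thresholds, and selection logic are assumptions, not NVIDIA's actual implementation.

```python
# Illustrative sketch of a load/temperature/power-aware clock governor.
# Clock values (300/705/950 MHz) come from the article; the thresholds,
# power-plan table, and limits below are assumptions for illustration.

# Hypothetical power plans: (minimum load %, core clock in MHz), highest first.
POWER_PLANS = [
    (90, 950),   # near-full load: boost above baseline
    (50, 705),   # typical 3D load: baseline clock
    (0,  300),   # idle / desktop: lowest clock
]

TEMP_LIMIT_C = 95    # assumed thermal ceiling
POWER_LIMIT_W = 195  # assumed board power ceiling

def select_core_clock(load_pct, temp_c, power_w):
    """Pick the highest clock whose load threshold is met, then back
    off to the baseline if temperature or board power is over limit."""
    clock = POWER_PLANS[-1][1]
    for min_load, mhz in POWER_PLANS:
        if load_pct >= min_load:
            clock = mhz
            break
    # Never hold a boost clock while running hot or over the power budget.
    if (temp_c > TEMP_LIMIT_C or power_w > POWER_LIMIT_W) and clock > 705:
        clock = 705
    return clock

print(select_core_clock(5, 40, 30))     # idle -> 300
print(select_core_clock(70, 70, 150))   # gaming -> 705
print(select_core_clock(100, 75, 180))  # heavy load with headroom -> 950
print(select_core_clock(100, 98, 180))  # heavy load but too hot -> 705
```

The key difference from older 2D/3D profile switching is the boost tier: the governor can exceed the baseline clock, but only while the temperature and power readings allow it.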

56 Comments on GK104 Dynamic Clock Adjustment Detailed

#1
Casecutter
Nothing like a dangling-carrot release: dribble... dribble, then I'm no longer stimulated.

So what, 3-4 different SKUs with varying levels of dynamic adjustment? How does that affect OC'ing... Sounds like they took OC'ing away, or you get one or the other. This is just down-clocking to save power that provides a boost when needed, and it saves face, because maybe the GK104 isn't as good as they/we were told.

Man, I hope they got the bugs out and the response is right. This is a big "what if"... if it doesn't work flawlessly, do you want to be the first to drop $450-550 on their software smoke and mirrors?

Hmm... I think the smart ones will maintain/jump to the conventional, established way of doing it, rather than be their guinea pig.

Bend over for their version of a pipe dream! :twitch:
#2
m1dg3t
Don't ATi cards already have a "throttle" function? They run at XXX clocks for desktop/2D/media, then when gaming/rendering at 100% load they ramp up to full clocks?

Could you not merge this with the other thread about the same topic? Would be nice to have all the info in the same place :o
#3
cadaveca
My name is Dave
I do not understand the purpose of this. The way it is presented suggests to me that nVidia had another Fermi on their hands, and the card cannot handle high clocks all the time without having issues. This seems the opposite of power saving to me, as lowering the clocks under lower load would lead to higher GPU utilization, which just doesn't make sense.


It's like if they let the card run high FPS, it can pull too much current? I mean, there's no point in running 300 FPS in Unreal or Quake 4, and in these apps, a slower GPU would still give reasonable framerates when downclocked. So they are saving power by limiting FPS?

I HAZ CONFUUZ!!!
#4
Casecutter
by: m1dg3t
then when gaming/rendering or 100% load they ramp up to full clock's?
Yea, but this sounds like they will have it (that large number of power plans) fluctuating between dozens of profiles right while you play the game.

You're focusing for that head shot... (dynamically dumps clocks)... boom, you're dead. :eek:
#5
sanadanosa
by: m1dg3t
Don't ATi card's already have a "throttle" function? They run @ XXX clock's for desktop/2d/media then when gaming/rendering or 100% load they ramp up to full clock's?

Could you not merge this with the other thread about the same topic? Would be nice to have all the info in the same place :o
Fermi does it too. I think what they mean is something like a dynamic overclock. Just look at the difference between Intel SpeedStep and Intel Turbo Boost.
#6
dir_d
by: cadaveca
I do not understand the purpose of this. The way it is presented suggests to me that nVidia had another Fermi on their hands, and the card cannot handle high clocks all the time without having issues. This seems the opposite of power saving to me, as lowering the clocks under lower load would lead to higher GPU utilization, which just doesn't make sense.


It's like if they let the card run high FPS, it can pull too much current? I mean, there's no point in running 300 FPS in Unreal or Quake 4, and in these apps, a slower GPU would still give reasonable framerates when downclocked. So they are saving power by limiting FPS?

I HAZ CONFUUZ!!!
I'm with you on this one. I guess we will have to wait for W1zz to explain it fully.
#7
Inceptor
It sounds to me like a way to regulate power usage, and at the same time increase minimum framerates.
Here's my speculation:
If they increase clocks too far, it eats too much power and/or is unstable, but its performance is already good at lower power, so to give a turbo *oomph* and increase overall performance, they use this feature in some kind of burst mode.

There's been talk of doing this kind of thing with mobile processors in order to momentarily increase performance when needed.

There's nothing bad about the idea behind it.
As to whether it allows you to overclock, there's no information you can go on to make any kind of comment about overclockability. So why trash the thread with "OMG! it's horrible! the worst feature ever! Fail1!!" comments?
#8
bill_d
Sounds like a way to compare a heavily overclocked card to a stock 7970 card: "oh look, 10% faster".
#9
Casecutter
by: Inceptor
As to whether it allows you to overclock, there's no information you can go on to make any kind of comment based on overclockability. So, why trash the thread with "OMG! it's horrible! the worst feature ever! Fail1!!" comments
I brought up a legitimate question: will this permit overclocking, and if it does, how will that affect this feature's operation?

I didn't trash the thread; there wasn't one... I never said "it's horrible! the worst feature ever!" But are you subconsciously thinking that?
#10
Inceptor
by: Casecutter
I brought up a legitimate question will this permit overclocking and if it does how will that effect this features operation?

I didn't trash the thread there wasn't one... I never said it's horrible! the worst feature ever! But are you subconsciously thinking that?
Actually, I find this quite interesting, and for me it doesn't detract from my buying a 6xx-series GPU at all.
But maybe I should have clarified, and extended the comment to include all (three?) threads discussing the new NV GPU. Too much juvenile idiocy gets posted; that was my exasperation with it all.
#11
ZoneDymo
I don't get why this news is posted. Sure, it's new news regarding the card, but these clocks mean nothing to anyone, seeing as it's a new card.
Unlike the information that it's 10% faster than an HD 7970 in BF3, which tells us something because we know how fast the HD 7970 is in BF3, these clocks mean nothing at this point.

This card needs to run the gauntlet of tests asap.
#12
theoneandonlymrk
by: btarunr
With its GeForce Kepler family, at least the higher-end parts, NVIDIA will introduce what it calls Dynamic Clock Adjustment, which adjusts the clock speeds of the GPU below, and above the base-line clock speeds, depending on the load. The approach to this would be similar to how CPU vendors do it (Intel Turbo Boost and AMD Turbo Core). Turning down clock speeds under low loads is not new to discrete GPUs, however, going above the base-line dynamically, is.
I read that as:

With its yield issues on the GeForce Kepler family, at least the higher-end parts (there's one part, let's be real here; the lower SKUs are just more useless than the higher SKUs), NVIDIA will introduce what it calls Dynamic GPU-Stability-Saving Adjustment, which adjusts the clock speeds of the GPU below and above the baseline clock speeds, depending on the load and temperature. The approach to this would be similar to how CPU vendors do it (i.e. useless in most scenarios, and turned off for those in the know). Turning down clock speeds under low loads is not new to discrete GPUs, however, going above the baseline dynamically...

Look, if I make a 1.1 GHz GPU tomorrow and then set up its firmware to normally run at 0.9 GHz, but boosting to 1.1 (11!) during heavy load spells until it heats up to a certain point and drops back down to 0.9, what use is that? I've an amp that goes to 10; I don't want or need one that's got 11 on the effin' knob. I just want a bigger amp with more noise, and scribing 11 on the dial doesn't make it louder. I'm seeing this as yet another underhanded way of selling off less-than-ideal silicon. Simples.
#13
Casecutter
by: Inceptor
Too much juvenile idiocy gets posted, that was my exasperation with it all.
I agree there's a lot of rhetoric, but this is truly earth-shattering news, which has all the makings of good/bad, depending.

I suppose NVIDIA isn't going to be releasing full bore (could be just one top SKU) on opening day, so if there are issues they can minimize them and do damage control. I do hope they have upped their CS, or the AIBs have got up to speed on the particulars of this new Dynamic Clock Adjustment operation.
#14
Aquinus
Resident Wat-man
...It sounds like another way of describing what GPUs are already doing. It sounds like a marketing ploy. GPUs already down-clock in low-load situations. nVidia needs to stop egging us on and just release the damn thing. :confused:
#15
theoneandonlymrk
by: Aquinus
...It sounds like another way of describing what GPUs are already doing. It sounds like a marketing ploy. GPUs already down-clock in low-load situations. nVidia needs to stop egging us on and just release the damn thing.
Totally agree, and this Spinal Tap-inspired BS is nonsense. I can't believe some of you are hailing this as a feature; it simply isn't. Both companies have used dynamic clocks for the last few years... so how is this in any way a new feature? I could rephrase their PR as:

"We have clocked it a bit lower as standard in 3D, but don't worry: on occasion, when we decide, we will use a profile that allows a slight OC until heat or power overcomes its stability", but then that would be the truth and not very good PR :rolleyes:

W1zz's review may prove them right and/or it may still OC nicely, but I doubt it; they clearly need to get rid of some iffy stock, in my eyes. :wtf:

I'll STILL be getting one either way, as hybrid PhysX is worth the effort to me, but marketing BS and fanboi-istic backing grates on my nerves. Carry on; my piece has been said, I'll rant no more.
#16
the54thvoid
To me it means:

The gpu can dynamically alter clocks to meet a steady fps solution, allowing reduced power usage in real terms.

It is not the same as an idle-state GPU. When 3D rendering initiates on a standard GPU, it kicks in to its steady clock rate (i.e. 772 MHz for the GTX 580 or 925 MHz for the 7970) and doesn't budge from that during the 3D session.

Or, when using media on the PC, the GF110 clocks at about 405 MHz.

This to me sounds like a good thing: a variable clock domain that allows steady fps (perhaps software/hardware detects optimum rates and adjusts clocks to match).

I would happily buy a card that dumps a steady 60 fps on my screen (or 120 for 3D purposes). I don't need my GTX 580 at 832 MHz giving me 300 fps. I'd rather have a steady 60 fps and core clocks of 405 MHz (or whatever), reducing power usage.

Let's wait for Monday 12th to see (rumoured NDA lift).
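The steady-fps idea in this comment can be sketched as a small feedback loop. This is purely illustrative: the 300-950 MHz clock range comes from the article above, while the target frame rate, step size, and control logic are assumptions made for the example, not how the hardware necessarily works.

```python
# Illustrative feedback loop for the "steady fps" idea: nudge the core
# clock up when below the target frame rate, down when comfortably above.
# The 300-950 MHz range comes from the article; everything else is assumed.

MIN_MHZ, MAX_MHZ, STEP_MHZ = 300, 950, 25
TARGET_FPS = 60

def adjust_clock(current_mhz, measured_fps):
    """Raise the clock when below target, lower it when there is
    wasteful headroom, and clamp to the supported clock range."""
    if measured_fps < TARGET_FPS:
        current_mhz += STEP_MHZ           # falling short: clock up
    elif measured_fps > TARGET_FPS * 1.2:
        current_mhz -= STEP_MHZ           # well above target: clock down
    return max(MIN_MHZ, min(MAX_MHZ, current_mhz))

print(adjust_clock(705, 45))    # below target -> 730
print(adjust_clock(705, 300))   # far above target -> 680
print(adjust_clock(950, 45))    # already at the ceiling -> 950
```

Run once per frame (or per short interval), a loop like this converges toward the lowest clock that still holds the target frame rate, which is exactly the power-saving behavior described.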
#17
Aquinus
Resident Wat-man
by: the54thvoid
To me it means:

The gpu can dynamically alter clocks to meet a steady fps solution, allowing reduced power usage in real terms.

It is not the same as an idle state gpu. When the 3D rendering initiates on a standard gpu it kicks in to the steady clock rate (i.e. 772 for gtx 580 or 925 for 7970.) and it doesn't budge from that during the 3D session.

Or using media on the PC, the GF110 clocks at about 405MHz.

This to me sounds like a good thing. A variable clocks domain that allows steady fps (perhaps software/hardware detects optimum rates and adjusts clocks to meet).

I would happily buy a card that dumps a steady 60fps on my screen (or 120 for 3D purposes). I dont need my GTX 580 at 832 giving me 300fps. I'd rather have a steady 60fps and core clocks of 405 (or whatever) reducing power usage.

Let's wait for Monday 12th to see (rumoured NDA lift).
The one problem is that this can be implemented at the driver level.
#18
Casecutter
by: Aquinus
It sounds like another way of describing what GPUs are already doing
Kind of a further adaptation of what the GTX 480 had when it saw it was running FurMark... that wasn't the worst case, as you can't lose while running that. This will be chugging away until an extremely demanding spike is required, then bam... 950 MHz! I can see how it helps by quickly bouncing the clock(s) up 35% in a millisecond, but the power section will need to be very robust to provide the inrush of current to supply the demand.

In theory it's been done, but I can't think of a GPU example where such an implementation has been "up-clocking" so quickly or dramatically, and the implications when enthralled in the split-second heat of battle are going to be grueling.

Will NVIDIA have such profiles as firmware (hard-wired), or more of a driver/software-type program? We wait for release day.
#19
Fairlady-z
This is pretty interesting if you ask me, and I just bought two HD 7970s, as I just got out of an SLI GTX 570 system that was nothing short of a dream. I did want to see what team red had to offer, as I like to mix things up from time to time.

Now my only concern with this feature is what type of micro-stutter will occur during SLI? I mean, how fast is this to respond when you have two or three cards in SLI? Anyway, I am sure it's a crazy fast card and most of our concerns will be washed away once the NDA lifts.
#20
xBruce88x
or once W1zzard gets his hands on it.
#21
DRDNA
Unless these babies clock as well as ATI's, they will have achieved absolutely nothing.
#22
buggalugs
I can see it now... guys with this NVIDIA card will be in the forums complaining that their GPU is clocking down too much and they're losing FPS, and they'll be asking how to lock the clocks to max speed, etc. etc.

This turbo thing could be OK for guys who run stock GPU speeds and are too scared to overclock, but most enthusiasts would prefer the GPU at max clocks when they're gaming.

I guess we'll see how it turns out....
#23
jpierce55
by: DRDNA
unless these babies clock as well as ATI's then they have achieved absolutely nothing.
I was thinking that if AMD had done this, the 680 would not have its 10% over the 7970.

Rumored 10%
#24
Aquinus
Resident Wat-man
by: jpierce55
I was thinking if AMD had done this the 680 would not have 10% over the 7970.

Rumored 10%
Source?
#25
[H]@RD5TUFF
Good idea IMO; as long as it's implemented in an intelligent manner and performs as well as, if not better than, AMD's, NVIDIA has a winner on their hands.