
Memory Clock Speed incorrect on Sensor tab

Nicholas Steel

With VRAM clocks at default, the following is observed:
1901MHz in GPU-Z Sensor tab (incorrect value)
2002MHz in GPU-Z Graphics Card tab (correct value)
1901MHz in AIDA 64 Overclocking section
3802MHz in MSI Afterburner

With VRAM clocks increased by 202MHz in MSI Afterburner the following is observed:
1952MHz in GPU-Z Sensor tab
2103MHz in GPU-Z Graphics Card tab*
1952MHz in AIDA 64 Overclocking section
4006MHz in MSI Afterburner

*I get the feeling MSI Afterburner is applying the 202MHz offset to a theoretical doubled rate, which is why only +101MHz shows up in GPU-Z's Graphics Card tab. In other words, this part of GPU-Z is showing the correct value, and clock speed increases entered via MSI Afterburner are visually doubled compared to the actual increase being applied (MSI Afterburner seems to be showing the theoretical doubling of speed attributed to DDR).
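A quick back-of-the-envelope check of that theory (treating the GDDR5 effective rate as exactly 2x the real clock is my assumption, not something stated by GPU-Z itself):

```python
# Sanity check of the "offset applied to the doubled rate" theory above.
actual_clock = 2002                  # MHz, GPU-Z Graphics Card tab at stock
effective_rate = actual_clock * 2    # 4004 MHz, the DDR-"doubled" figure

offset = 202                         # MHz, as entered in MSI Afterburner
new_actual = (effective_rate + offset) / 2
print(new_actual)                    # 2103.0 -> matches the Graphics Card tab after the OC
```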

GPU-Z 2.41.0
Nvidia Driver 472.12
Windows 10 21H2
 
Screenshots would help.
 
Top image compilation is without an overclock. Bottom image compilation is with a 202MHz VRAM overclock.

No overclock.png


========== ========== ========== ==========

Yes overclocked.png


202MHz VRAM overclock applied.

Per my original post, GPU-Z's Graphics Card tab seems to be correct while everything else is wrong.

Additionally, nothing seems to be able to tell whether the GPU is currently boosting. Everything reports 1607MHz for the GPU core regardless of what activity I'm doing, instead of the clock speed increasing up to 1683MHz. If I manually overclock the core via MSI Afterburner by 16MHz to make it 1630MHz, neither MSI Afterburner, GPU-Z's Sensors tab nor AIDA64's Overclocking section will report the increase.
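One way to take the monitoring apps out of the equation is to poll the driver directly through NVML while a game is running. A minimal sketch using the pynvml bindings (pynvml and the one-second sampling interval are my choices, not something used in this thread):

```python
# Poll the core and memory clocks straight from the NVIDIA driver via NVML.
# If the card is boosting, the core reading should climb past 1607MHz here too.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)      # first GPU in the system

try:
    for _ in range(30):                            # roughly 30 seconds of samples
        core = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)
        mem = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_MEM)
        print(f"core {core} MHz, memory {mem} MHz")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```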

Oddly, it increases both the GPU Clock and Boost values on the Graphics Card tab of GPU-Z. I'm not sure why the Boost value would be affected, and I'm still unsure if the card is ever boosting:
1641115323101.png


Hmmm, Special_K reports a variable GPU core clock speed topping out at 1.82GHz in Assassin's Creed Valhalla, which is around 137MHz higher than the supposed 1683MHz boost clock that the TechPowerUp GPU database and GPU-Z report as what the card would boost up to...
 
So... uh, I can't reproduce the issues anymore and I have no fricken clue why. I'm running the same driver version and the same version of all the monitoring apps and they all report:

Either 2002MHz, 2003MHz or 4006MHz for VRAM*
Up to 1850MHz for the GPU Core clock speed while gaming*^

* In the GPU-Z Sensor tab, the clock speed tachometer around the numerical display in MSI Afterburner and AIDA 64's Overclocking section.
^ I learned GPU Boost 3.0 lets the GPU boost beyond the advertised boost value.

Maybe a cumulative update to Windows fixed something?
 
Here's mine, stock VRAM with a GPU curve limited to 1600MHz.
1219 vs 1187.7
Testing with Unigine Heaven, both Heaven and Afterburner show 9501MHz - divided by 8, that matches the second (Sensors) result - while the first would work out to 9752MHz.
1641357055961.png



I can add +1400 stable to my VRAM
Now it's 1394 vs 1362.8 - the same 31MHz difference, with the Sensors result being the more accurate one.
1641357018904.png
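For reference, the divide-by-8 relationship above written out (the factor of 8 for the GDDR6X effective rate is inferred from the numbers in this post, not taken from any of the tools' documentation):

```python
# The divide-by-8 check from this post, written out.
effective_rate = 9501              # MHz, reported by Heaven and Afterburner
sensors_clock = effective_rate / 8 # 1187.625 -> matches the 1187.7 Sensors reading

main_tab_clock = 1219              # MHz, GPU-Z main (Graphics Card) tab
implied_rate = main_tab_clock * 8  # 9752 -> does not match the 9501 actually reported
print(sensors_clock, implied_rate)
```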
 

Attachments

  • 1641356940531.png
So... uh, I can't reproduce the issues anymore and I have no fricken clue why. I'm running the same driver version and the same version of all the monitoring apps and they all report:

Either 2002MHz, 2003MHz or 4006MHz for VRAM*
Up to 1850MHz for the GPU Core clock speed while gaming*^

* In the GPU-Z Sensor tab, the clock speed tachometer around the numerical display in MSI Afterburner and AIDA 64's Overclocking section.
^ I learned GPU Boost 3.0 lets the GPU boost beyond the advertised boost value.

Maybe a cumulative update to Windows fixed something?
Your screenshot shows a 4% load on your GPU. Some graphics cards have multiple load states for VRAM (as well as GPU). I'm guessing your particular model is one of those. As soon as the load is high enough, the driver puts both the GPU and VRAM into their highest available load state.

Also, in the nvidia Control Panel in the 3D settings menu, under the power management options, what option have you selected?
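A quick way to confirm that behaviour is to read the current performance state alongside the load, straight from the driver. A small sketch with the pynvml bindings (pynvml is just one option for this, not something used in the thread):

```python
# Read the performance state (P0 = fastest), GPU load and VRAM clock via NVML.
# At ~4% load you'd expect a higher-numbered P-state and a reduced memory clock.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

pstate = pynvml.nvmlDeviceGetPerformanceState(handle)  # 0 = P0, 8 = P8, etc.
util = pynvml.nvmlDeviceGetUtilizationRates(handle)    # .gpu / .memory in percent
mem_clock = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_MEM)

print(f"P{pstate}, GPU load {util.gpu}%, VRAM clock {mem_clock} MHz")
pynvml.nvmlShutdown()
```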
 
I was sure I was testing while a game was running, but it's plausible that I wasn't. As for the Nvidia Control Panel I had it set to Prefer Maximum Performance.

By the way, did you know the "Power Management Mode" setting in the Global Profile is constantly overridden by the profiles for Desktop Window Manager (dwm.exe), Windows Explorer (explorer.exe) and Microsoft Shell Experience Host, rendering the Global Profile's Power Management Mode setting redundant the moment you log in to your Windows account? If not, now you know why the clock speeds still drop below stock when idle despite setting the Global Profile to Prefer Maximum Performance (those 3 profiles are set to Adaptive by default and are always in effect).
 
I was sure I was testing while a game was running, but it's plausible that I wasn't.
That's it. ;)

As for the Nvidia Control Panel I had it set to Prefer Maximum Performance.

By the way, did you know the "Power Management Mode" setting in the Global Profile is constantly overridden by the profiles for Desktop Window Manager (dwm.exe), Windows Explorer (explorer.exe) and Microsoft Shell Experience Host, rendering the Global Profile's Power Management Mode setting redundant the moment you log in to your Windows account? If not, now you know why the clock speeds still drop below stock when idle despite setting the Global Profile to Prefer Maximum Performance (those 3 profiles are set to Adaptive by default and are always in effect).
Why would you not want your clocks to drop below stock load clocks at idle? What's the point of 30-40+ W of power consumption for nothing? Idle clocks are meant to decrease power consumption, hardware wear and heat. I personally prefer "Adaptive" in the nvidia settings and letting Windows do what it does.
 
Well guys, if you look at what I posted together with his, we're seeing some sort of offset that doesn't belong there - and isn't always there, either.

@W1zzard any ideas?
 
Your screenshot shows that the VRAM is clocked in several steps between idle and full load. I still think that the same screenshot with a constant 100% load would show the correct values, and nothing is wrong.
 
It was at load - I had Heaven running windowed beside it - but taking the screenshots caused the usage to dip.

I could try and redo it, but the main thing is I was seeing that flat offset between the numbers.
 
What's the output of nvidia-smi?
 
Power saving was screwing with me - the moment I clicked out of a game, everything dropped.
Never used nvidia-smi before, here's what I got:


Unsure if having two instances of GPU-Z open is part of the problem, but it seems when you open the main page it grabs the current clock and goes with that? 500MHz is definitely not my base clock.

Sensors tab matches the smi output once you do the math.

1641458469152.png
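If anyone wants to grab those nvidia-smi clock readings repeatedly instead of eyeballing the full table, the fields can be queried in CSV form; a small sketch (the query flags are standard nvidia-smi options, the Python wrapper around them is mine):

```python
# Query the same clocks nvidia-smi shows, in an easy-to-parse CSV form.
import subprocess

out = subprocess.check_output(
    ["nvidia-smi",
     "--query-gpu=clocks.gr,clocks.mem",
     "--format=csv,noheader,nounits"],
    text=True,
)

first_gpu = out.strip().splitlines()[0]               # one line per GPU
core_mhz, mem_mhz = (int(v) for v in first_gpu.split(", "))
print(f"core {core_mhz} MHz, memory {mem_mhz} MHz")   # compare against the Sensors tab
```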
 
Strange. Here's what my card looks like during a Superposition run (main window screenshot taken at idle):
gpuz.gif
load.gif


Also, I've noticed that both yours and OP's card run at PCI-e x8. Why is that? Your base/boost clock detection seems to be off too.
 
Unsure if having two instances of GPU-Z open is part of the problem, but it seems when you open the main page it grabs the current clock and goes with that? 500MHz is definitely not my base clock.
This is an NVIDIA driver bug on older GPUs, but should go away after a couple of refreshes (the 1st tab refreshes things periodically)

Sensors tab matches the smi output once you do the math.
Good, so GPU-Z seems to be correct.
 
This is an NVIDIA driver bug on older GPUs, but should go away after a couple of refreshes (the 1st tab refreshes things periodically)
I'm on an RTX3090 with the latest drivers :p
(The moment the 3090Ti is announced, BAM it's old now...)


Shouldn't the front page show the 'max' clocks and the Sensors tab show the current/possibly lower ones? Because the problem I'm seeing is that the main page is reporting lower, which is what throws me (and others) off.
 
I'm on an RTX3090 with the latest drivers :p
(The moment the 3090Ti is announced, BAM it's old now...)
I think he meant an old driver bug that still hasn't been fixed. :ohwell:

Shouldn't the front page show the 'max' clocks and the Sensors tab show the current/possibly lower ones? Because the problem I'm seeing is that the main page is reporting lower, which is what throws me (and others) off.
That, and also that you're on PCI-e x8 for some reason.
 
I'm on an RTX3090 with the latest drivers
I think he meant an old driver bug that still hasn't been fixed.
Oh, I thought you had the 1070 Ti from the start of the thread. This difference on an RTX 3090 is strange indeed, and it's the first time I'm hearing about it. Can you check a few older drivers?
 
Also, I've noticed that both yours and OP's card run at PCI-e x8. Why is that? Your base/boost clock detection seems to be off too.
For me, laziness. My previous computer used an original Asus P6T motherboard with an Intel i7 920 CPU. That motherboard featured a multiplexer, which meant that either of the 2 slots closest to the CPU would operate at x16 speed so long as the other one was unoccupied. I did not realize that multiplexers had fallen out of fashion when assembling my current computer and assumed the card would also operate in x16 mode in the 2nd slot closest to the CPU, but that is not the case, and I've been too lazy to move it (plus the extra distance from the CPU helps keep the CPU cooler).

There are various reviews showing the performance difference between PCI-E 3.0 x16 and x8 is marginal at best, even with a higher-tier video card than the 1070 Ti.
 
I see. I'm wondering whether that could be a reason for your weird clocks. My mind says it's doubtful, but it may be worth trying to relocate your card into the first slot nevertheless.
 
That, and also that you're on PCI-e x8 for some reason.
My WD PCI-E SSD was in the second GPU slot, so that was expected.
 