
VRAM doesn't downclock with 2 high refresh rate monitors

Hi,
I have an RTX 4090 and 2 OLED monitors. The main display is an Aorus 1440p 360 Hz @ 10-bit, the secondary is an MSI 4K 240 Hz @ 10-bit. Both use DSC, and both are connected via HDMI 2.1 cables and ports (my 4090 has 2 HDMI 2.1 ports).

Nvidia Control Panel is set to 1440p, 360 Hz, 10-bit for the Aorus, and 4K, 240 Hz, 10-bit for the MSI.

MSI Afterburner, GPU-Z, and every other monitoring application show my VRAM clock at 1,327 MHz (full clock) at all times, no matter what is shown on the monitors, even at idle. I use a completely black background with no icons, and the taskbar set to auto-hide.

Driver version as of today, just released October 1st, 2024: 565.90.

Power consumption is also high with 2 monitors active (likely related to the VRAM running at full clock).

When only one monitor is active (the other in power saving, off, or disabled), the idle VRAM clock goes back down to 50 MHz.

Is this a bug? Does Nvidia know about it? Is there any workaround?
 
That's the most common issue when refresh rates don't match. The GPU can't figure out when it will next need to send data to each monitor, so it runs the VRAM at full throttle, just in case.
 
My workaround has been reducing my card's power target significantly during everyday usage. Not ideal, I know :banghead:

I do it via the command line without extra tools. Example:

nvidia-smi.exe -pl 41
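To undo the limit later, you can query the card's default power limit and set it back. A sketch, assuming an RTX 4090 (whose default power limit is typically 450 W; use whatever value nvidia-smi actually reports for your card, and note setting the limit usually requires administrator rights):

```shell
# Show the card's power readings and limits, including "Default Power Limit"
nvidia-smi -q -d POWER

# Restore the power limit (450 W assumed here for a 4090; substitute the
# "Default Power Limit" value the query above reported)
nvidia-smi -pl 450
```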
 
My workaround has been reducing my card's power target significantly during everyday usage. Not ideal, I know :banghead:

I do it via the command line without extra tools. Example:
Thanks man!
Do you have a command to adjust fan speed from the command line as well (no tools or applications)?
 
That's the most common issue when refresh rates don't match. The GPU can't figure out when it will next need to send data to each monitor, so it runs the VRAM at full throttle, just in case.
It shouldn't be a problem, imo. I'm using a 144 Hz 1440p ultrawide and a 7" 800p touchscreen at 43 Hz, and my VRAM clock is fine.

Combining two high-refresh displays is more likely the culprit.

Any work around it?
What CPU do you have? Any chance you can connect the secondary display to the iGPU? Or do you need both for gaming?
 
Both must have FreeSync/G-Sync enabled, and if the throughput is still too much, you have to reduce the refresh rate of the second monitor.
RDNA3 is the worst. You can have two displays, but it only idles at 8-10 W if all except one are at 60 Hz, and FreeSync must be enabled.
In my case: 360 Hz + 75 Hz = 75 W at idle.
360 Hz + 60 Hz = 8 W at idle.
 
It shouldn't be a problem, imo. I'm using a 144 Hz 1440p ultrawide and a 7" 800p touchscreen at 43 Hz, and my VRAM clock is fine.

Combining two high-refresh displays is more likely the culprit.
Fixed refresh has been figured out (though even that can regress from time to time). VRR, I expect, is a much taller order.
 
Fixed refresh has been figured out (though even that can regress from time to time). VRR, I expect, is a much taller order.
VRR shouldn't kick in on the Windows desktop anyway. At least my monitor shows a constant 144 Hz when I'm not in a game.
 
It's the total bandwidth required for the uncompressed image on your displays, and it's expected behaviour on both AMD and Nvidia GPUs.

Multiple high-res, high-refresh, 10-bit displays simply need more bandwidth than the low-power clock states can supply, so the GPU has to crank the VRAM frequency up to an active state. Your display config requires as much VRAM bandwidth as ten equivalent 60 Hz panels!

If you want your GPU to enter a low power state, you'll want to reduce the refresh rate of one or both screens when not gaming. To streamline the process, you can either write a PowerShell script or grab something like MonitorSwitcher.
 
My 4070 Ti Super doesn't downclock on the desktop with HDR and VRR enabled.
One monitor? Resolution? Refresh rate? Cable type?

It's the total bandwidth required for the uncompressed image on your displays, and it's expected behaviour on both AMD and Nvidia GPUs.

Multiple high-res, high-refresh, 10-bit displays simply need more bandwidth than the low-power clock states can supply, so the GPU has to crank the VRAM frequency up to an active state. Your display config requires as much VRAM bandwidth as ten equivalent 60 Hz panels!

If you want your GPU to enter a low power state, you'll want to reduce the refresh rate of one or both screens when not gaming. To streamline the process, you can either write a PowerShell script or grab something like MonitorSwitcher.
Even doing nothing, sitting on a pitch-black desktop without even icons on said desktop?

That doesn't make sense.

I would understand it when running high-res video or a 2D/3D scene; otherwise there's no need for the huge bandwidth you mentioned.

It shouldn't be a problem, imo. I'm using a 144 Hz 1440p ultrawide and a 7" 800p touchscreen at 43 Hz, and my VRAM clock is fine.

Combining two high-refresh displays is more likely the culprit.


What CPU do you have? Any chance you can connect the secondary display to the iGPU? Or do you need both for gaming?
Unfortunately, I use both for gaming: the 360 Hz is for high-fps titles, and the 4K for more immersive titles.
 
Even doing nothing, sitting on a pitch-black desktop without even icons on said desktop?
Yes.

It doesn't matter what the content of the screen is. Each frame sits in the buffer, and you're asking for the memory buffer to hold this much data per second:

(360 Hz × 10 bpp × 3,686,400 pixels) + (240 Hz × 10 bpp × 8,294,400 pixels) =
13,271,040,000 + 19,906,560,000 =
33,177,600,000 bits per second per buffer

So your VRAM needs a bare minimum of 66.35 Gbps of bandwidth with double-buffering, and a bare minimum of 99.5 Gbps with triple-buffering, which is evidently too much for 50 MHz idle memory clocks. The 4090 has a peak bandwidth of 1008 Gbps when the VRAM is running at max speed, so 66.35 is about 7% of that. I'm pretty sure idle clocks are closer to 4% of peak, which means you're almost certainly exceeding the idle bandwidth even of a 4090!

I don't have a 4090 on hand, so I don't know what its lowest active VRAM clock is, but until you reduce your total bandwidth somehow, you're going to need higher-clocked memory to meet the minimum needs of the two screen buffers with double-buffered output. Display stream compression happens after the buffer; all it does is squeeze that data down a thin cable. It can't reduce the raw number of bits your display buffers need to hold.

Check that your issue really is exceeded minimum bandwidth: reduce the refresh rate on both screens to 60 Hz, reboot, and make sure your VRAM clocks are at idle again. If you still have high VRAM clocks at very low refresh rates, it might be another underlying issue, such as the driver set to max-performance mode or other power-saving options in your OS.
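The arithmetic above can be sanity-checked with a short script. It reproduces the figures as posted, which count 10 bits per pixel; a later reply in this thread points out that per-channel depth would multiply the totals by three:

```python
# Minimum scanout bandwidth for the two panels described in the post.
# Assumes 10 bits per pixel, as in the original calculation.

def scanout_bits_per_second(width, height, refresh_hz, bits_per_pixel=10):
    """Bits per second one frame buffer must sustain for this display."""
    return width * height * refresh_hz * bits_per_pixel

aorus = scanout_bits_per_second(2560, 1440, 360)  # 1440p @ 360 Hz
msi = scanout_bits_per_second(3840, 2160, 240)    # 4K @ 240 Hz
total = aorus + msi

print(aorus)                       # 13271040000 bits/s
print(msi)                        # 19906560000 bits/s
print(total)                      # 33177600000 bits/s per buffer
print(round(2 * total / 1e9, 2))  # 66.36 Gbps with double-buffering
```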
 
FWIW, I started having this problem after installing 565.90, and it did not happen on 561.09. Single display, 3840x1600 @ 144 Hz, G-Sync Ultimate monitor. This seems to happen every 10th or so driver update, and Nvidia fixes it on the next release...
 
This is expected bad behaviour, and it seriously needs to be addressed because of the massive power usage increase for an idling system. You used to be able to manually create a custom resolution for the high-refresh monitor with a reduced total pixel clock, but sadly I've been unable to do this on my most recent monitor.

Maybe it will work with yours, though. I think the utility was CRU. It was a popular tool for overclocking monitors too, but that also no longer works on the last monitor I purchased.

By "no longer works", I mean everything appears to succeed, but the custom resolution never shows up anywhere and can't be used. Custom resolutions in NVCP also never show up.

In my case, I have a 144 Hz 4K centre monitor and 2 x 1080p monitors on either side of it. I've found that the system idles perfectly if I set the main monitor to 120 Hz. So I use 120 Hz for everything but gaming, and when I game I just switch it to 144 Hz.
 
Check that your issue really is exceeded minimum bandwidth: reduce the refresh rate on both screens to 60 Hz, reboot, and make sure your VRAM clocks are at idle again. If you still have high VRAM clocks at very low refresh rates, it might be another underlying issue, such as the driver set to max-performance mode or other power-saving options in your OS.
Inconvenient. Even if the VRAM downclocks with both monitors at 60 Hz, everything outside of gaming will be choppy at 60 Hz. That's not exactly why I bought 2 expensive high-refresh monitors. I can't stand 60 Hz even when moving the mouse cursor on the desktop.

Plus, the constant switching between 60 Hz and 240/360 Hz will create other issues.

I checked everything power-related in Control Panel/Settings/NVCP, and nothing points to anything being at max power 24/7.

FWIW, I started having this problem after installing 565.90, and it did not happen on 561.09. Single display, 3840x1600 @ 144 Hz, G-Sync Ultimate monitor. This seems to happen every 10th or so driver update, and Nvidia fixes it on the next release...
I'm also suspecting a driver issue
 
Inconvenient. Even if the VRAM downclocks with both monitors at 60 Hz, everything outside of gaming will be choppy at 60 Hz. That's not exactly why I bought 2 expensive high-refresh monitors. I can't stand 60 Hz even when moving the mouse cursor on the desktop.

Plus, the constant switching between 60 Hz and 240/360 Hz will create other issues.

I checked everything power-related in Control Panel/Settings/NVCP, and nothing points to anything being at max power.


I'm also suspecting a driver issue

Did you actually see lower VRAM clocks on previous drivers? Your setup is pretty hefty in terms of display bandwidth required. With those 2 panels, I wouldn't expect anything less than full VRAM clock from any GPU available today.

Yes, GeForce tends to offer more intermediate VRAM P-states than Radeon, but too much display is simply too much display. For the driver team, it's always a balancing act to find the right cutoff for a given P-state: set the VRAM clock too low and you get artifacting/blackouts; force it too high and people complain about the power consumption. I'm sure there are manual ways to force the card into a specific P-state if you're into that, but otherwise the drivers dictate the behaviour, probably as a reasonable compromise between the above issues.

A black, empty desktop and a hidden taskbar change nothing about the hardware requirements to run the displays on Nvidia. Having less 2D/3D load affects RDNA somewhat, since the VRAM clock is more dynamic there when a fixed full clock isn't required, but GeForce doesn't run its VRAM nearly as dynamically as RDNA does below full VRAM clock. It depends exclusively on your displays.

You have something like 40% higher display bandwidth requirements on 2 displays than I have on 3. Usually my 4070 Ti runs single and dual 1440p165 at 7 W/10 W respectively, both at a low VRAM clock somewhere in the 100-200 MHz region. But if I add the third 4K120 panel, it jumps to a much higher P-state and draws 40 W (at the last intermediate P-state, roughly half VRAM clock), pretty much regardless of the refresh rate set. It stands to reason that another 30-40% more display on top of that would default to full clock.
 
Inconvenient. Even if the VRAM downclocks with both monitors at 60 Hz, everything outside of gaming will be choppy at 60 Hz. That's not exactly why I bought 2 expensive high-refresh monitors. I can't stand 60 Hz even when moving the mouse cursor on the desktop.
An inconvenient truth is still the truth.

Try 120 Hz. It's a huge improvement over 60 Hz desktop look and feel without the insane bandwidth requirements. Most people's sensitivity to framerate increases follows a curve of diminishing returns: you're most of the way there at 90 Hz, and it's unlikely half the population can identify the difference between 120 and 180 Hz. My "60 Hz" is closer to 85 Hz, but even then I'm happy with a dynamic refresh rate on my laptop: 83 Hz in power-saving desktop mode and 165 Hz in games.

Your GPU will likely downclock the VRAM at some combination of reduced resolutions and refresh rates. To know exactly where that threshold is, you'd have to ask Nvidia, but on lesser GPUs (RTX 3060 Ti, 3080) I've seen two 165 Hz 1440p displays be too much for idle states. At least with a 4090 your idle-state threshold should be higher, but clearly it's not high enough for what you're asking of it.
 
Ah, so that's the reason why my 3080 runs its VRAM all day at full blast.

and it's unlikely half the population can identify the difference between 120 and 180 Hz.
Hell, I don't see any difference above 120 Hz, no matter what the refresh rate is.
 
Ah, so that's the reason why my 3080 runs its VRAM all day at full blast.

Hell, I don't see any difference above 120 Hz, no matter what the refresh rate is.
Have you tried 120 Hz on both panels? I never bothered to check the exact cutoff for idle VRAM clocks when I had the 3080 in there, but I know 2x 75 Hz was just fine.
 
It doesn't matter what the content of the screen is. Each frame sits in the buffer, and you're asking for the memory buffer to hold this much data per second:

(360 Hz × 10 bpp × 3,686,400 pixels) + (240 Hz × 10 bpp × 8,294,400 pixels) =
13,271,040,000 + 19,906,560,000 =
33,177,600,000 bits per second per buffer
Multiply this by three because RGB.

I don't know how to do exact calculations, but if 10 bpp per colour is actually stored as 2 bytes per colour, or 6 bytes per pixel in total, then the combined frame buffer size is 68.6 MiB. That comes dangerously close to the L2 cache size of 72 MiB. The card's behaviour is probably quite different when the frame buffer fits in the cache (which is only possible in an idle state, when the cache isn't used for much else).
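That frame-buffer estimate checks out. A quick sketch, under the same assumption that each 10-bit channel is stored as 2 bytes (6 bytes per RGB pixel):

```python
# Combined frame-buffer footprint of both panels, assuming 16-bit storage
# per colour channel (3 channels x 2 bytes = 6 bytes per pixel).

BYTES_PER_PIXEL = 6

pixels = 2560 * 1440 + 3840 * 2160          # 1440p panel + 4K panel
framebuffer_bytes = pixels * BYTES_PER_PIXEL

print(framebuffer_bytes)                    # 71884800 bytes
print(round(framebuffer_bytes / 2**20, 1))  # 68.6 MiB, vs 72 MiB of L2
```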
 
My work around has been reducing my cards powertarget significantly during average usage. Not ideal I know :banghead:

I do it via the command line without extra tools. Example:
How can I undo this command? I think it's causing issues in games for me.

I know I shouldn't have tried it, because I have a single 60 Hz monitor, but there you have it :banghead:
I did it just as an experiment, because my GPU shows 35 W idle power consumption whereas the guy I bought it from showed proper idle figures; from what I could gather, there's a possibility of a display-related issue with this one.
 
Using a 3070 Ti with a 240 Hz and a 60 Hz monitor; the card downclocks fine.

Probably some software you've got running, like that garbage called Chrome.
 
Have you tried 120Hz on both panels? I never bothered to check what the exact cutoff was for idle VRAM clocks when I had the 3080 in there, but I know 2x 75Hz was just fine.
My media panel is a 60Hz one so nope.
 
RX 6700 XT here with July WHQL drivers. 1080p, 8-bit, VRR enabled.

150 Hz: can idle at ~10 MHz VRAM.
170 Hz: VRAM cranks to 11, +25 W power consumption.
 
FWIW, I started having this problem after installing 565.90, and it did not happen on 561.09. Single display, 3840x1600 @ 144 Hz, G-Sync Ultimate monitor. This seems to happen every 10th or so driver update, and Nvidia fixes it on the next release...
So downgrade, then, and report the problem.
 