
Running Discord Lowers NVIDIA GPU Memory Clocks by 200 MHz, Company Posts Workaround

btarunr

Editor & Senior Moderator
The Windows app of Discord, the popular social-networking software, apparently trims the memory clock of NVIDIA GPUs by a seemingly innocuous 200 MHz, or so observe gamers. NVIDIA GeForce GPUs dynamically adjust memory clock speeds in response to load as part of their power management. Ideally, under gaming workloads, the GPU should hit its maximum rated memory frequency, but some keen-eyed gamers with monitoring tools noticed that with the Discord app running in the background, the memory clock tops out 200 MHz short (i.e., if it should reach 7000 MHz, it tops out at 6800 MHz). Even under the infernal stress of Furmark, which is designed to push the graphics card to its maximum rated clock speeds until it runs into thermal limits, the memory clock is seen falling 200 MHz short.

NVIDIA took note of this issue and assured users that a fix is on the way in a future GeForce driver update. In the meantime, it posted a DIY workaround that involves downloading the GeForce 3D Profile Manager utility, having the utility "export SLI profiles" (applicable even to single-GPU machines), editing the exported SLI profiles file as a plaintext document, and importing the profile back. This essentially alters how the driver behaves while the Discord app is running. The NVIDIA 3D Profile Manager utility can be downloaded from here, and step-by-step instructions on using it to fix this issue, here.
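Anyone who wants to confirm they are affected can simply poll the memory clock with `nvidia-smi` while Discord is open and again with it closed. A minimal sketch of that check in Python (the `--query-gpu=clocks.mem` field is a standard nvidia-smi query; the 6800/7000 figures at the bottom are just the deltas reported in this article, not live readings):

```python
import subprocess

def memory_clock_mhz(gpu_index: int = 0) -> int:
    """Read the current GPU memory clock in MHz via nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=clocks.mem",
         "--format=csv,noheader,nounits", "-i", str(gpu_index)],
        text=True,
    )
    return int(out.strip())

def clock_delta(with_discord: int, without_discord: int) -> int:
    """Difference between the two readings; ~200 MHz matches the reported bug."""
    return without_discord - with_discord

# Using the figures quoted in the article above:
print(clock_delta(with_discord=6800, without_discord=7000))  # 200
```

Run `memory_clock_mhz()` under load with Discord open, close Discord, run it again, and compare the two readings.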



Update Feb 6th: NVIDIA released a GeForce driver application profile, automatically downloaded by your driver, which should fix this issue. You don't need GeForce Experience to receive the update.

View at TechPowerUp Main Site | Source
 
By the way, this would be evidence of a software bug. All software can have bugs, as it is created by humans, and humans can make mistakes. This is not evidence of bad software or bad drivers. Remember that the next time you read a post regarding similar bugs on Reddit or some other corner of the web that tries to use the occurrence to malign your favorite companies' competitors.
 
While I very much agree this doesn't mean bad drivers or bad software, it still seems like a clear driver bug; Discord should not be allowed to change the frequency like that in general.

To me it mostly seems like a weird/interesting issue, and I would love to hear the cause; sadly, I don't think we will ever know, as NVIDIA tends to be quite tight-lipped about the details.
 
The Netflix app, Chrome-based browsers, Discord, Viber, and many other programs have problems with GPU acceleration on both AMD and NVIDIA drivers. Bad programming for sure.
 
While I very much agree this doesn't mean bad drivers or bad software, it still seems like a clear driver bug; Discord should not be allowed to change the frequency like that in general.

To me it mostly seems like a weird/interesting issue, and I would love to hear the cause; sadly, I don't think we will ever know, as NVIDIA tends to be quite tight-lipped about the details.
my 2 cents:
Discord uses HW acceleration for almost everything (including video playback), plus extensive HW encode support for video/desktop sharing (NVENC); my guess is that the drivers are putting the card in "video decode" mode and not overriding that when you open a game.

Now I'm super curious and will have to test this when I get back home. 200 MHz is not that innocuous; it negates all the overclocking/power limit removal I've done to my GPU.
 
The Netflix app, Chrome-based browsers, Discord, Viber, and many other programs have problems with GPU acceleration on both AMD and NVIDIA drivers. Bad programming for sure.
Had this issue with GOG Galaxy when it was in its original beta stage years ago. A similar issue also presented itself with the Razer Synapse software years back. I know GOG fixed the issue and I'm sure Razer did, too, but I haven't used their software in almost a decade now (crap, I'm getting old).
 
I find this thread and the discussion it has sparked a bit strange. I don't see a problem with reducing frequency and power consumption when they don't need to be at maximum for good performance.
 
While I very much agree this doesn't mean bad drivers or bad software, it still seems like a clear driver bug; Discord should not be allowed to change the frequency like that in general.

To me it mostly seems like a weird/interesting issue, and I would love to hear the cause; sadly, I don't think we will ever know, as NVIDIA tends to be quite tight-lipped about the details.

It's very much not a driver bug; something Discord did must be triggering a long-known limitation that NVIDIA imposes on gaming cards. Dropping memory clocks by 200 MHz is intended behavior on the GeForce segment: whenever the driver detects a compute application, it reduces the power state from P0 to P2. Mining, for example (or any CUDA compute application, really) triggers the same behavior, which is why some people overclock their memory with those additional 200 MHz already accounted for; even if the target clock is not stable, after the reduction it may very well be.

As far as I know, the same issue should not affect the enterprise GPUs at all.
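The P0-to-P2 drop described above can be spot-checked from the command line: `nvidia-smi -q -d PERFORMANCE` reports a "Performance State" field. A small Python sketch that extracts it (the sample string at the bottom is illustrative, and `current_pstate` assumes nvidia-smi is on PATH):

```python
import re
import subprocess

def parse_pstate(report: str) -> str:
    """Extract the P-state (e.g. 'P0', 'P2') from nvidia-smi -q output."""
    m = re.search(r"Performance State\s*:\s*(P\d+)", report)
    if not m:
        raise ValueError("no Performance State field found")
    return m.group(1)

def current_pstate() -> str:
    """Query the live GPU; requires an NVIDIA driver to be installed."""
    out = subprocess.check_output(
        ["nvidia-smi", "-q", "-d", "PERFORMANCE"], text=True)
    return parse_pstate(out)

# Abridged sample output, for illustration only:
sample = "GPU 00000000:01:00.0\n    Performance State     : P2\n"
print(parse_pstate(sample))  # P2
```

If the card sits in P2 during a game with Discord open but in P0 with it closed, that matches the behavior described in this thread.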
 
Now i'm super curious and will have to test this when i get back home, 200mhz is not that innocuous, it negates all the overclocking/power limit removal i've done to my gpu

With Discord open, my 3060 Ti's memory runs at 6801 MHz in games, and then 7001 if I tab out and close Discord.
I did a quick test in Cyberpunk where I'm 100% GPU limited, and it changed nothing in terms of performance, not even a single FPS on my end, so whatever, I will just wait for an official fix instead of tinkering around.
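The negligible FPS impact checks out on paper. A back-of-the-envelope estimate, assuming the 3060 Ti's 256-bit bus and the common convention that monitoring tools report half the GDDR6 effective data rate (both assumptions from public specs, not from this thread):

```python
def bandwidth_gbs(reported_clock_mhz: float, bus_bits: int = 256) -> float:
    """Memory bandwidth in GB/s, assuming the reported MHz is half the
    effective Gbps-per-pin data rate (7000 MHz -> 14 Gbps GDDR6)."""
    return reported_clock_mhz * 2 / 1000 * bus_bits / 8

stock = bandwidth_gbs(7000)   # 448.0 GB/s at the full clock
bugged = bandwidth_gbs(6800)  # 435.2 GB/s with Discord open
print(round((1 - bugged / stock) * 100, 1))  # 2.9 (% bandwidth lost)
```

A roughly 3% bandwidth cut rarely translates into a visible FPS difference, even in a fully GPU-limited scene.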
 
No bug with my 4090. Furmark and HWiNFO show the same memory clock that is set in Afterburner. :p Maybe Afterburner overrides the bug.
And no change on my 4090 when using stock settings on it, while closing and opening Discord during a Furmark run.
 
The Netflix app, Chrome-based browsers, Discord, Viber, and many other programs have problems with GPU acceleration on both AMD and NVIDIA drivers. Bad programming for sure.
Add most of the other streaming apps to this list.....ie Paramount+, Hulu, Prime etc....

I have tried them all, and their performance is not as good as accessing the sites directly through a browser, so perhaps this "bug" is in those apps instead of the GPU drivers?
 
No bug with my 4090. Furmark and HWiNFO show the same memory clock that is set in Afterburner. :p Maybe Afterburner overrides the bug.
And no change on my 4090 when using stock settings on it, while closing and opening Discord during a Furmark run.
Nah, I have Afterburner running and noticed it was slower than normal (it's actually 500 MHz on the memory, BTW) after playing a lot of BF2042 last night.

Not that it really affects framerate at all. I already had my memory overclocked by 750 MHz, so it's not like I was running slower than stock anyway.
 
my 2 cents:
Discord uses HW acceleration for almost everything (including video playback), plus extensive HW encode support for video/desktop sharing (NVENC); my guess is that the drivers are putting the card in "video decode" mode and not overriding that when you open a game.
Yup. Sounds like there's a piece of code being called out of order in the power-state evaluation codepath, with the result that the driver settles on a P-state lower than it should. If that code is well written, it's an easy fix; if not, it could be a PITA to get right. And 200 MHz is such a small dip that it explains why this was missed in QA; it's within the margin of error.
 
This is a real facepalmy problem. You just couldn't make this sht up.
 
I found out today that SignalRGB also drops my 3080 Ti's memory by 250 MHz. While benchmarking Heaven, I saw my memory run at 9250 MHz with SignalRGB on. When I close it, my memory runs at 9500 MHz.
 
With Discord open, my 3060 Ti's memory runs at 6801 MHz in games, and then 7001 if I tab out and close Discord.
I did a quick test in Cyberpunk where I'm 100% GPU limited, and it changed nothing in terms of performance, not even a single FPS on my end, so whatever, I will just wait for an official fix instead of tinkering around.
So does the RTX 3060/Ti use the same VRAM standard as the GTX 1660 Super? The same variant of GDDR6? 7000 is the default on the GTX 1660 Super, which is what I have in my Intel Comet Lake build.
 
So does the RTX 3060/Ti use the same VRAM standard as the GTX 1660 Super? The same variant of GDDR6? 7000 is the default on the GTX 1660 Super, which is what I have in my Intel Comet Lake build.

Yeah, it's the same type of 14 Gbps GDDR6 memory. The 3070 uses it as well; it's only the 3070 Ti and up that use GDDR6X.

Yup. Sounds like there's a piece of code being called out of order in the power-state evaluation codepath, with the result that the driver settles on a P-state lower than it should. If that code is well written, it's an easy fix; if not, it could be a PITA to get right. And 200 MHz is such a small dip that it explains why this was missed in QA; it's within the margin of error.

The problem has been around for some time. I think Discord must be using NVDEC, and thus it was brought to people's attention somehow.

NVDEC forces GPU into P2 CUDA state -> much higher power consumption than with VDPAU - Graphics / Linux / Linux - NVIDIA Developer Forums

There is more or less of a workaround on Windows:

How to turn off Nvidia's Force P2 Power State : nvidia (reddit.com)

But it really boils down to an intentional gimping of GeForce's compute performance by limiting the VRAM bus ever so slightly when CUDA is running. My suspicion that it has something to do with NVENC/NVDEC usage comes from the fact that Discord was recently updated to support AV1 broadcasting when the user has an Ada GPU installed.
 
I've spent two days trying to solve this problem: I reinstalled the driver, updated the GPU BIOS, and restored my PC. The problem seemed resolved, but today it returned. Then I found this post and tested with Discord closed, and apparently that solved my memory clock problem.

For me, someone who is very anxious and careful with my hardware, those 200 MHz of memory make a lot of difference hehehe
 
Damn it. I use Discord whenever gaming with my peeps (I have it running and using the voice chat) and am running an NVIDIA GPU. I also use Amazon Prime Video, Disney+, Netflix, etc. Until NVIDIA actually fixes this issue with a driver update, I guess I'm going to have to try the workaround in the meantime.
 
Just tested on my 2060 Super; the GPU clock does not change.
 
I wonder if this is related to the Electron version Discord uses for its desktop app? From my understanding, Discord is stuck on a very old version of Electron because of their spaghetti code (which causes a lot of problems on the Linux side).
 
I gave up on the Discord app long ago. It doesn't even let you use a narrow window size next to a stream popout, while the web-based version does. But neither one lets you hide the server channel list, which is *profane word here* stupid.
 