
RX 5700 XT - 75 Hz/144 Hz/non-standard refresh rates causing VRAM clock to run at max

Did some testing.

Just changing bpp from 8 to 6 made no difference at 3440x1440 60Hz and 100Hz.

Only with a custom resolution and CVT-RB does my VRAM downclock. That locks me to 6bpp, however.

After doing the comparisons, dithering is especially noticeable on darker and grayscale images at 6bpp. Dark grays have a lot of visible dithering.

100Hz 8bpp and 6bpp
8bpp 100Hz
6bpp 100Hz

60Hz 8bpp and 6bpp
60Hz 8bpp
xOZlFPO.png

Custom Resolution: 100Hz CVT-RB
Custom Resolution Settings
100Hz CVT-RB 6bpp

I used Radeon software for stats since I'll likely be doing a bug report.

Please let me know how severely flawed my testing methodology is.
 
I would think this post supports what I just said. It's not like I'm making shit up to stir your GPU pots up... what I shared is my genuine real-world experience.

4K 120Hz is above the 600 MHz value, so it clocks up mem speed
4K 60Hz is below the 600 MHz value, so it does NOT clock up mem speed

The dots connect for me... what am I missing?
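
For anyone wanting to sanity-check their own display mode: the 600 MHz figure reads like a pixel clock threshold, and the pixel clock is just the total pixels per frame (blanking included) times the refresh rate. A quick sketch, using the standard CTA-861 timing totals for 3840x2160 (4400x2250); your monitor's EDID may use different totals:

```python
# Pixel clock = horizontal total * vertical total * refresh rate.
# Totals include the blanking intervals, not just the visible pixels;
# 4400 x 2250 is the standard CTA-861 total for 3840x2160.

def pixel_clock_mhz(h_total: int, v_total: int, refresh_hz: float) -> float:
    return h_total * v_total * refresh_hz / 1_000_000

print(pixel_clock_mhz(4400, 2250, 60))   # 594.0 MHz -> just under 600, VRAM can idle
print(pixel_clock_mhz(4400, 2250, 120))  # 1188.0 MHz -> well over, VRAM stays ramped
```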

@Bomby569 you didn't share your "native res"?
According to the quoted post... "144Hz is 559MHz" (I'm assuming this value is for a 1440p res based on how it is written). With that said, this would mean that a 1440p or lower res with a 144Hz or lower refresh rate wouldn't push the GPU to clock up mem speed, which aligns exactly with what you're stating... this isn't a surprise to me... the dots still connect (also assuming you're using a 1440p res setting until you share otherwise)

The post by @Mussels aligns with my personal experience. You cannot call that "bizarre" or "BS"... perhaps coincidence, but c'mon, as far as I can tell what they wrote is accurate. I also take note Mussels wrote "some GPUs raise their clock speeds", which I take to mean this may not be how every GPU operates. In my experience over the last 4 or 5 years, which involved an RX 580 and an RX 5700 XT, this is exactly what happened for me. YMMV apparently. A 3060 Ti may be an apple to my RX 5700 XT's orange, making this comparison pointless.


Hmm, but yours still clocks up mem at 1440p 144Hz? That does NOT align with the post from Mussels...?! I don't have a 144Hz display to test anything there myself. Any chance you use more than one display? I mean, you both can't be right (or can you?), so now I don't know what to think.
Remember that different GPUs support different HDMI and DP standards

559 MHz is fine for HDMI 2.1 and DP 1.4 GPUs, but if your GPU or monitor uses the older standards...
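
To put rough numbers on that (a sketch: the DP figures assume 24 bpp, 4 lanes and 8b/10b overhead, and ignore DSC, chroma subsampling and audio):

```python
# Approximate max pixel clocks per link standard at 24 bpp, no DSC.
# HDMI (TMDS) limits are the spec's max TMDS clock; DP limits are
# lane rate * 4 lanes * 0.8 (8b/10b encoding) / 24 bits per pixel.

BITS_PER_PIXEL = 24

def dp_max_pixel_clock_mhz(lane_rate_gbps: float, lanes: int = 4) -> float:
    payload_gbps = lane_rate_gbps * lanes * 0.8  # strip 8b/10b overhead
    return payload_gbps * 1000 / BITS_PER_PIXEL

print("HDMI 1.4 (TMDS):", 340.0)                         # spec limit
print("HDMI 2.0 (TMDS):", 600.0)                         # spec limit
print("DP 1.2 (HBR2 x4):", dp_max_pixel_clock_mhz(5.4))  # 720.0 MHz
print("DP 1.4 (HBR3 x4):", dp_max_pixel_clock_mhz(8.1))  # 1080.0 MHz
```

A 559 MHz mode squeaks under HDMI 2.0's limit but is far beyond HDMI 1.4, which lines up with the point about older standards.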
 
If you go to the top, I specifically said it ISN'T A FIX; it's a way to know that it's an AMD driver problem, the same problem that plagued the 5700 for years and was never solved.
But I've already identified it as a driver issue. That should really be plenty clear from my initial post: I stated that the issue appeared after a semi-recent driver update (I was already aware of the issue in general, so I know it wasn't there before, and I have also seen the increased idle power consumption). As there have been no hardware changes or notable third-party software changes, that makes it overwhelmingly likely that this is a driver issue.

I really don't see why you would think I need help pinning this down as a driver issue; that was never a question here. If you had actually read my posts you'd also see that
disabling [my secondary monitor] and disabling my Aquasuite monitoring desktop overlay finally brought me down to idle clocks
So I'm hardly in need of ways to check whether my GPU is capable of clocking down at all.
You have to wait for AMD to fix it, and good luck with that; apparently people with the 5700 are still waiting.
I'm well aware of that. But as I said, given that the issue only appeared 1-2 driver releases ago, it shouldn't be a massive issue. That the most buggy GPU in the past decade or more still has issues is hardly indicative of the general speed of bug fixes. I still don't trust them to fix it immediately, as this issue has been present for years across multiple generations, but I just wanted to point out that it only recently appeared on my RDNA2 GPU. Nothing more than that.
But there is a workaround; I found it on Reddit. I'm sorry, but I can't find it now; it was something about changing the monitor range in some tool. Maybe someone remembers. It's not a fix, it's a workaround, but it solves the problem.
If I find that post I'll post it here.
That's the thing though: I want to use my PC, not spend hours troubleshooting an unimportant idle power issue. It's annoying, yes, and AMD needs to be called out on this and get around to fixing it permanently, but it's not something I'm willing to spend time or energy fixing. I said above that it ultimately doesn't affect me in any real way, so it's more about the principle of the matter in my case.

I get that you're just trying to give advice here, but your advice is completely missing the point of my post and isn't suited to my "problem", so ultimately it's not useful. It might be to someone else, but I just came here to point out that this issue can also appear on 6000-series GPUs with recent drivers.
 
Remember that different GPUs support different HDMI and DP standards

559 MHz is fine for HDMI 2.1 and DP 1.4 GPUs, but if your GPU or monitor uses the older standards...

Yea, idk how you calculate that stuff (I'm sure Google could teach me!) but it was helpful nonetheless.

For everybody, I want to correct/clarify what I said. I stated it was a 120Hz refresh rate that forces my GPU to run mem clocks at max, and that is true: not until I select 60Hz or anything under will the mem clock drop to normal idle speed, ~200 MHz. However, I wrongly stated that lowering the resolution from 4K (implying a res less than 4K but still with a 120Hz refresh rate) would also lower the mem clock... not true.

I checked this situation again last night and I now realize that each time I was changing resolutions, my TV automatically goes to either 59.94 or 60Hz. So for example I would switch from 4K 120Hz to 3200x1800 and ASSUMED it was still set at 120Hz... well, it was automatically dropping to either 59.94 or 60Hz. So it is 100% the refresh rate that makes my GPU's mem clocks ramp up/down when just using a desktop. Even something much lower like 1080p with a lower refresh of 100Hz still keeps the mem clock pegged on the desktop. In other words, any res with a refresh rate above 60Hz pegs the mem clock.

I don't consider this a problem; I just assumed for all purposes this is how it works. The GPU is working harder to make those frames faster, so it's going to need more power and/or faster clocks. It ends up being the difference of around 8-10W with a 60Hz or under refresh rate versus 30-32W when going above 60Hz. Which, %-wise... is a considerable jump of damn near 200% more power. In context though, this isn't concerning; no reason to fret over +/-20W when doing low-load tasks like browsing folders or the internet.
 
Well, I'll be damned. Just updated to AMD's most recent optional driver (22.2.1), and power consumption since reboot is sitting in the 12-25W range, rather than the 30-40W range previously, and memory clocks are scaling from a reported 0-~1300MHz. It's not noted in the changelog, but they definitely fixed this, whatever the issue was. (Worth noting: the previous install was a clean install, while this was not.) Here's hoping they'll go back and do the same for previous generation cards too, ASAP.
UCTzD21.png

This was with both monitors running at their max refresh rate and the Aquasuite desktop overlay active, btw. Doing various things (opening Firefox, Radeon Software, etc.) saw some spikes past 30W, but there's still a very notable overall drop.
 
Well, I'll be damned. Just updated to AMD's most recent optional driver (22.2.1), and power consumption since reboot is sitting in the 12-25W range, rather than the 30-40W range previously, and memory clocks are scaling from a reported 0-~1300MHz. It's not noted in the changelog, but they definitely fixed this, whatever the issue was. (Worth noting: the previous install was a clean install, while this was not.) Here's hoping they'll go back and do the same for previous generation cards too, ASAP.
UCTzD21.png

This was with both monitors running at their max refresh rate and the Aquasuite desktop overlay active, btw. Doing various things (opening Firefox, Radeon Software, etc.) saw some spikes past 30W, but there's still a very notable overall drop.
Were you using DisplayPort or HDMI?
 
Well, I'll be damned. Just updated to AMD's most recent optional driver (22.2.1), and power consumption since reboot is sitting in the 12-25W range, rather than the 30-40W range previously, and memory clocks are scaling from a reported 0-~1300MHz. It's not noted in the changelog, but they definitely fixed this, whatever the issue was. (Worth noting: the previous install was a clean install, while this was not.) Here's hoping they'll go back and do the same for previous generation cards too, ASAP.
UCTzD21.png

This was with both monitors running at their max refresh rate and the Aquasuite desktop overlay active, btw. Doing various things (opening Firefox, Radeon Software, etc.) saw some spikes past 30W, but there's still a very notable overall drop.
Sadly not the case for me on my 5700 XT.

Still at 1750MHz and 30-40W power usage if I don't use my custom resolution.

snFI1zi.png


What version of DP are your monitors using? Mine only supports 1.2.
 
Sadly not the case for me on my 5700 XT.

Still at 1750MHz and 30-40W power usage if I don't use my custom resolution.

snFI1zi.png


What version of DP are your monitors using? Mine only supports 1.2.
Not quite sure, but the U2711 is from 2011, so it's not a recent revision. I haven't checked since updating but before the U2711 (1440p60) was running at 4 lanes of HBR with the AOC (1080p75) at 4 lanes of RBR - in theory both of those could be 1st gen DP. I doubt either of them supports anything more advanced than DP1.2.
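
As a rough sanity check that those link rates cover the modes involved (a sketch: 24 bpp assumed, audio and secondary-data overhead ignored, and the 1080p75 pixel clock is an estimate):

```python
# DP lane rates before encoding: RBR = 1.62 Gbps, HBR = 2.7 Gbps per lane.
# Usable payload = lane_rate * lanes * 0.8 (8b/10b encoding).

def dp_payload_gbps(lane_rate_gbps: float, lanes: int = 4) -> float:
    return lane_rate_gbps * lanes * 0.8

def mode_gbps(pixel_clock_mhz: float, bpp: int = 24) -> float:
    return pixel_clock_mhz * bpp / 1000

print(dp_payload_gbps(2.7))    # HBR x4: 8.64 Gbps
print(mode_gbps(241.5))        # 1440p60 CVT-RB (~241.5 MHz): ~5.8 Gbps -> fits
print(dp_payload_gbps(1.62))   # RBR x4: ~5.18 Gbps
print(mode_gbps(185.0))        # 1080p75 (~185 MHz, estimated): ~4.4 Gbps -> fits
```

So both configurations are plausible even on first-gen DP link rates.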
 
Not quite sure, but the U2711 is from 2011, so it's not a recent revision. I haven't checked since updating but before the U2711 (1440p60) was running at 4 lanes of HBR with the AOC (1080p75) at 4 lanes of RBR - in theory both of those could be 1st gen DP. I doubt either of them supports anything more advanced than DP1.2.
Looks like both of your monitors support DP 1.2.

Maybe AMD only fixed it for RDNA2 for now. Here's hoping they roll it out to RDNA1 and earlier (if older cards have the same issue).
 
Looks like both of your monitors support DP 1.2.

Maybe AMD only fixed it for RDNA2 for now. Here's hoping they roll it out to RDNA1 and earlier (if older cards have the same issue).
Given that the issue only appeared 1-2 driver releases ago I'm suspecting it's a separate thing (new/significantly changed memory controller, maybe?). IMO they should still make fixing this properly and universally a high priority though, given the prevalence of the issue.
 
To reiterate, this high VRAM clock is no bug. There's an algorithm in the drivers (all vendors) that determines whether or not the memory can idle and still hit the vBlank timings. If it can't, AMD always goes to full VRAM clock, whereas NVIDIA, I believe, has a VRAM clock at about 500 MHz before jumping to full. This is why AMD's power consumption jumps quicker than NVIDIA's, and it's not something that can be fixed in drivers.
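
A conceptual sketch of that decision, not AMD's actual driver logic: memory clock switches are timed to vertical blanking so the screen doesn't glitch, so if the blanking window is shorter than the time the memory needs to retrain, the driver pins the clock instead. The timing totals and the retraining figure below are placeholders:

```python
# Conceptual model only: if the per-frame vblank window is too short for a
# memory clock switch (retraining), pin VRAM at max instead of letting it idle.

RETRAIN_US = 350.0  # placeholder; real retraining time is hardware-specific

def vblank_us(v_active: int, v_total: int, refresh_hz: float) -> float:
    # time spent in vertical blanking each frame, in microseconds
    return (v_total - v_active) / (v_total * refresh_hz) * 1e6

def vram_can_idle(v_active: int, v_total: int, refresh_hz: float) -> bool:
    return vblank_us(v_active, v_total, refresh_hz) > RETRAIN_US

# 1440p with an assumed 1481-line total: the same blanking lines shrink
# in wall-clock time as the refresh rate rises.
print(vram_can_idle(1440, 1481, 60))   # True  (~461 us of vblank)
print(vram_can_idle(1440, 1481, 144))  # False (~192 us of vblank)
```

That would also explain why the CRU/CVT-RB tweaks earlier in the thread help: changing the blanking changes how much time the driver has to work with.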
 
Decided to try out CRU and it lets me do CVT-RB while keeping 8bpp. VRAM downclocks as normal.

TwZ4zPQ.png


Looks like I can use GPU scaling again which is nice.
 
To reiterate, this high VRAM clock is no bug. There's an algorithm in the drivers (all vendors) that determines whether or not the memory can idle and still hit the vBlank timings. If it can't, AMD always goes to full VRAM clock, whereas NVIDIA, I believe, has a VRAM clock at about 500 MHz before jumping to full. This is why AMD's power consumption jumps quicker than NVIDIA's, and it's not something that can be fixed in drivers.
That might have been true previously, but at least on my 6900XT the VRAM clock at idle normally scales freely between a reported 0MHz and ~1300MHz (and I've seen at least 100, 166, ~300, ~400, ~500, ~600, ~800 and ~900MHz in there). It doesn't seem like it has any discrete clock states outside of full load, or at least there is a degree of granularity far beyond previous solutions. It might still be that it has a failsafe for vBlank that sets VRAM speed to max, but that would seem extremely odd given the free scaling otherwise. Also it was most definitely a bug in my case, seeing how the issue appeared with one driver release only to disappear a release or two later.

AFAIK GPUs since at least Polaris have had support for intermediate memory clock states of some sort, so I don't see how adjusting this to avoid this power consumption issue wouldn't qualify as fixing a bug, nor how it would be impossible. It's undesirable and unnecessary behaviour, even if it is intended on some level. It might require a firmware update if it needs adjustment of a VRAM clock profile or some such, but it should still be doable.
 
That might have been true previously, but at least on my 6900XT the VRAM clock at idle normally scales freely between a reported 0MHz and ~1300MHz (and I've seen at least 100, 166, ~300, ~400, ~500, ~600, ~800 and ~900MHz in there).
It's interesting to read this and then see W1zzard's recurring problems with power consumption in benchmarks, where he can sometimes reproduce this and sometimes the VRAM simply runs at high clocks. AMD certainly needs to do better; why has Nvidia had this worked out for an eternity while AMD is still struggling? Everything else seems to be fine, though.
 
It's interesting to read this and then see W1zzard's recurring problems with power consumption in benchmarks, where he can sometimes reproduce this and sometimes the VRAM simply runs at high clocks. AMD certainly needs to do better; why has Nvidia had this worked out for an eternity while AMD is still struggling? Everything else seems to be fine, though.
Yeah, it's pretty weird. They really need to figure this out, and in a way that isn't "if something won't work at minimum clock, go to max and stay there".
 
That might have been true previously, but at least on my 6900XT the VRAM clock at idle normally scales freely between a reported 0MHz and ~1300MHz (and I've seen at least 100, 166, ~300, ~400, ~500, ~600, ~800 and ~900MHz in there). It doesn't seem like it has any discrete clock states outside of full load, or at least there is a degree of granularity far beyond previous solutions. It might still be that it has a failsafe for vBlank that sets VRAM speed to max, but that would seem extremely odd given the free scaling otherwise. Also it was most definitely a bug in my case, seeing how the issue appeared with one driver release only to disappear a release or two later.
Do you only have one monitor connected? AMD did try to improve the high memory clock in a driver and that might be what you're seeing.

For AMD, there are only two memory clock states: variable or static. Variable can be anywhere from 0 MHz to max. Static is just max. The vBlank timings force it from variable to static.


AFAIK GPUs since at least Polaris have had support for intermediate memory clock states of some sort, so I don't see how adjusting this to avoid this power consumption issue wouldn't qualify as fixing a bug, nor how it would be impossible. It's undesirable and unnecessary behaviour, even if it is intended on some level. It might require a firmware update if it needs adjustment of a VRAM clock profile or some such, but it should still be doable.
I think a vBlank occurs every frame so if you're running at 144 Hz, you'll hit a vBlank every 6.94 ms. Meanwhile, GDDR6 is pretty high latency and it gets worse the more data (pixels * frequency) is involved:

On top of that, you only get minimal latency when the GDDR6 is running at maximum clock speed; when you're running at a fraction of max, the latency soars. Let's do some math...

3840 * 2160 * 24 / 8 / 1,000,000 = 24.8832 MB
1440 * 900 * 24 / 8 / 1,000,000 = 3.888 MB
Add them together: 28.7712 MB
If you look at the graph above, this is about 114 ns for RX 6800 XT and 268 ns for RTX 3090.

Every frame of mine requires that much data to be, at bare minimum, read out to send to the monitors. It doesn't sound like much, but remember that in the space of a second this translates to 3.2 GB of data with the 4K at 120 Hz (8.33 ms per frame) or 3.8 GB with the 4K at 144 Hz (6.94 ms). 8.33 ms is a big enough span of time for the memory, at idle, to service, but 6.94 ms isn't, necessitating the memory to run at a much higher clock (in AMD's case, maximum clock) so it is always primed to service the vBlank.
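
The same arithmetic in a quick sketch (the secondary monitor's refresh rate is assumed to be 60 Hz, which reproduces the rounded figures above):

```python
# Framebuffer size at 24-bit colour, then total scanout reads per second.
def frame_mb(width: int, height: int, bpp: int = 24) -> float:
    return width * height * bpp / 8 / 1_000_000

primary = frame_mb(3840, 2160)   # 24.8832 MB
secondary = frame_mb(1440, 900)  # 3.888 MB

# Secondary refresh assumed at 60 Hz (not stated in the post).
print(primary * 120 + secondary * 60)  # ~3219 MB/s -> "3.2 GB"
print(primary * 144 + secondary * 60)  # ~3816 MB/s -> "3.8 GB"
```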

The (expensive) solution to this problem is HBM2, where there's a veritable mountain of memory bandwidth available to tap all of the time. Vega and Fiji cards aren't affected by the high clock issue because in most reasonable cases, the memory can easily handle the vBlank requests.


TL;DR: If AMD constrained the VRAM clocks, there's a chance you could see tearing on the desktop. In their view, a little extra power consumption is worth it to completely eliminate that risk.
 
The (expensive) solution to this problem is HBM2, where there's a veritable mountain of memory bandwidth available to tap all of the time. Vega and Fiji cards aren't affected by the high clock issue because in most reasonable cases, the memory can easily handle the vBlank requests.
Yea, we all know about that; it's an expensive and lackluster "solution" to a problem they've had for too long, and only on a few cards. Not good enough.

TL;DR: If AMD constrained the VRAM clocks, there's a chance you could see tearing on the desktop. In their view, a little extra power consumption is worth it to completely eliminate that risk.
As long as Nvidia can do it better, I don't see a reason to excuse AMD here, and if you add it up across millions of devices, this is unnecessary energy waste.
 
Do you only have one monitor connected? AMD did try to improve the high memory clock in a driver and that might be what you're seeing.

For AMD, there are only two memory clock states: variable or static. Variable can be anywhere from 0 MHz to max. Static is just max. The vBlank timings force it from variable to static.



I think a vBlank occurs every frame so if you're running at 144 Hz, you'll hit a vBlank every 6.94 ms. Meanwhile, GDDR6 is pretty high latency and it gets worse the more data (pixels * frequency) is involved:

On top of that, you only get minimal latency when the GDDR6 is running at maximum clock speed; when you're running at a fraction of max, the latency soars. Let's do some math...

3840 * 2160 * 24 / 8 / 1,000,000 = 24.8832 MB
1440 * 900 * 24 / 8 / 1,000,000 = 3.888 MB
Add them together: 28.7712 MB
If you look at the graph above, this is about 114 ns for RX 6800 XT and 268 ns for RTX 3090.

Every frame of mine requires that much data to be, at bare minimum, read out to send to the monitors. It doesn't sound like much, but remember that in the space of a second this translates to 3.2 GB of data with the 4K at 120 Hz (8.33 ms per frame) or 3.8 GB with the 4K at 144 Hz (6.94 ms). 8.33 ms is a big enough span of time for the memory, at idle, to service, but 6.94 ms isn't, necessitating the memory to run at a much higher clock (in AMD's case, maximum clock) so it is always primed to service the vBlank.

The (expensive) solution to this problem is HBM2, where there's a veritable mountain of memory bandwidth available to tap all of the time. Vega and Fiji cards aren't affected by the high clock issue because in most reasonable cases, the memory can easily handle the vBlank requests.


TL;DR: If AMD constrained the VRAM clocks, there's a chance you could see tearing on the desktop. In their view, a little extra power consumption is worth it to completely eliminate that risk.
I understand that, I just think they could tune their dynamic VRAM scaling to avoid going to the static mode quite as often. It's absolutely understandable as a fallback; I just think the threshold for going there is too low. This shouldn't be necessary in what are, after all, quite ordinary usage scenarios. The dynamic setting ought to be able to avoid the lowest clock speeds if the output resolution necessitates it, for example.

For my case, as I mentioned above I have one 1440p60 (10-bit) and one 1080p75 (8-bit) monitor, and before the recent update I could only get it to clock down if I went to a single monitor and closed any applications exerting even a very minor load on the GPU.
 
I understand that, I just think they could tune their dynamic VRAM scaling to avoid going to the static mode quite as often. It's absolutely understandable as a fallback; I just think the threshold for going there is too low. This shouldn't be necessary in what are, after all, quite ordinary usage scenarios. The dynamic setting ought to be able to avoid the lowest clock speeds if the output resolution necessitates it, for example.

For my case, as I mentioned above I have one 1440p60 (10-bit) and one 1080p75 (8-bit) monitor, and before the recent update I could only get it to clock down if I went to a single monitor and closed any applications exerting even a very minor load on the GPU.
So the newest driver brought tangible benefits to this situation?
 
So the newest driver brought tangible benefits to this situation?
For my 6900 XT that was problem-free up until a couple of driver releases ago, yes. I used to have idle power draws in the 10-20W range, which jumped to 30-40W a driver update or two ago, but now it's back down to where it should be.
 
For my 6900 XT that was problem-free up until a couple of driver releases ago, yes. I used to have idle power draws in the 10-20W range, which jumped to 30-40W a driver update or two ago, but now it's back down to where it should be.
Oh my god AMD.
 
Oh my god AMD.
Meh. Stuff happens, most likely they updated something unrelated and didn't check if it affected memory clocks. Given that it was fixed almost immediately I don't see the issue. Nothing unique to AMD about that.
 
Meh. Stuff happens, most likely they updated something unrelated and didn't check if it affected memory clocks. Given that it was fixed almost immediately I don't see the issue. Nothing unique to AMD about that.
No, I'm not trying to overdramatize it, but this is a bit sloppy; I'm not used to this stuff with Nvidia. I also had a Radeon HD 5000 series card, and it ran static high memory clocks. I didn't like it, as it made the card audible and wasted energy unnecessarily. With a high-end 6900 XT I would probably not care, e.g. your situation.
 
As another data point, my 6600 XT has clocked down properly at all resolutions since I got it about 2 months ago, though that's only through 3 or so driver updates so far. And the idle power draw is ridiculously low at 4W; my gaming PC's total idle power usage is 24-25W from the wall after this change. Crazy low.
 