
AMD RX 9070 XT & RX 9070 non-XT thread (OC, undervolt, benchmarks, ...)

After years of using NVCP, Nvidia Inspector, MSI Afterburner, and recently the Nvidia App to configure my GPU, it's nice to do all of that with just Adrenalin. I still really miss Inspector, though.
 
The GPU is bigger than your case :laugh:
Lol, I didn't expect it to be bigger than my 6800; I almost struggled for space. Speaking of which, the hidden 12VHPWR design was annoying to route. I managed to get the cable sitting at a bend I can accept, but it wasn't easy (or maybe I'm just bad at building PCs).

Update: I've been benchmarking in Cyberpunk and caught my GPU clock peaking as high as 3000 MHz. The screenshots below show it going beyond the advertised 2700 MHz at 4K (rendering at 4K but downscaled to 1080p), RT disabled, no upscaling. It maintains this very easily across different scenes. Am I missing something here? I have yet to touch any overclocking/undervolting settings.

[screenshots]


On another note, I'd like to rant for a second about path tracing in this game. For whatever reason, enabling the setting leads to an eventual crash, and not just of the game but of the GPU. Twice I have had to reset my PC whilst testing path tracing, because it's fine for a minute or two, then it freezes, crashes, and there's no display output. I don't know if this is slowly damaging my GPU, but I don't really want to keep resetting it like this.

Cyberpunk has done this to me ever since I was on AM4 on an entirely different setup. I don't know why it happens; no reviewers seem to report the same problem, but forum posts show multiple other users crashing with path tracing, even on 4090 GPUs.
 
Hi, I have a question for you guys: do you know what could prevent a card like my XFX 9070 XT QuickSilver Magnetic Air Edition (non-OC) from reaching its boost clock?

I ran several GPU stress tests, and the max core clock I can reach is 2430 MHz. That's weird. I have tried a lot of settings, including stock ones, by the way. Could this be a PSU issue? I only have a 700 W Silver-certified unit, while XFX recommends at least an 800 W PSU. But my whole system draws little power: Ryzen 5800X, 32 GB RAM, 2 SSDs, 1 NVMe drive, no LEDs, nothing else. Moreover, I can raise the power limit in Adrenalin without any issue, which results in more power draw (330 W) and higher benchmark scores. So, as I understand it, this is not a power-limit issue; the card probably has a mechanism that throttles it so the system doesn't crash.

So I'll repeat my question: what can stop the core clock from reaching its boost frequency? I have looked in the BIOS of my motherboard (MPG X570 GAMING EDGE WIFI (MS-7C37)), but there are only boost settings for the CPU, not the GPU. I'm kind of stuck.
 
What games do you play? Resolution? Do you use VSync or frame cap/limiter? What is your PSU?

A GPU won't reach its maximum clocks unless it needs to. So, if you play games like CS2 capped at 144 Hz, it may never boost past 2.5 GHz.
Also, in games like Anno 1800, your CPU may be bottlenecking your GPU.

You are really living on the edge with that PSU. Get an 850 W Gold or, for more future-proofing, a 1000 W Gold PSU. You can get a solid 850 W Gold unit for around €120 incl. VAT. Don't buy a Gigabyte PSU.
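(A rough way to sanity-check that PSU headroom; a sketch where the CPU package limit and the "rest of system" figure are assumptions, not measurements.)

Code:
# Rough peak-load estimate for the system described above.
gpu_peak   = 330   # W, observed with the raised power limit
cpu_ppt    = 142   # W, stock package power limit of a Ryzen 7 5800X
rest       = 50    # W, motherboard, RAM, SSDs, fans (rough assumption)
psu_rating = 700   # W

load = gpu_peak + cpu_ppt + rest
print(f"Estimated peak load: {load} W ({load / psu_rating:.0%} of the PSU)")

# ~522 W, about 75% of a 700 W unit. Workable on paper, but transient
# GPU spikes can briefly exceed the average draw, hence the 850 W advice.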
 
Here are the numbers I promised:

System:
Sapphire Nitro+ RX 9070
7600X (stock)
32GB Corsair Vengeance DDR5 CL30 6000MHz (EXPO disabled)
[screenshot]


3DMark 9070/7600X Global Scores:
Average: 6222
Best: 6850

My scores:
  • Stock power limit (240 W): 6251 (62.52 FPS)
  • -10% power limit (220 W): 6007 (60.07 FPS)
  • -20% power limit (200 W): 5723 (57.25 FPS)
  • -30% power limit (170 W): 5373 (53.74 FPS)

Each -10% power limit step costs about 2-3 FPS.
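(For anyone curious, the efficiency trend hiding in those numbers is easy to pull out; a quick sketch in Python using the scores above, with the nominal limits standing in for measured draw.)

Code:
# Score per watt at each power limit, using the nominal limits and the
# scores quoted above (not measured wall draw).
results = {240: 6251, 220: 6007, 200: 5723, 170: 5373}

for watts, score in results.items():
    print(f"{watts} W: {score} -> {score / watts:.1f} points/W")

# Prints roughly 26.0, 27.3, 28.6 and 31.6 points/W: efficiency keeps
# rising as the limit drops, even though absolute FPS falls.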

I then decided to mess around...
+10% power limit (270 W), +1000 MHz max frequency offset, -100 mV offset, fast memory timings, memory at 2800 MHz: 6931 (69.31 FPS).
[screenshots]

This is by no means optimized, so I bet I could push it to 7000 if I really tried.

Anyway, regarding the cooler itself: temperatures are great. I don't think I've seen this thing even touch 60°C yet, but that's to be expected, since this cooler was designed for the XT to begin with.
 
And what's the score with memory at default clocks?
 
I take it you mean whilst keeping everything else the same as in that last result?
[screenshots]

My CPU clocks fluctuate a lot. I don't know why that is, but it doesn't negatively affect my frame times or anything.
 
Yes, exactly, thanks. So you got roughly a +3% performance gain from OCing the memory from 2518 MHz to 2800 MHz. Could you please retest at 2750 MHz?
How did memory temps and power draw change when you went to 2800?

Thanks.
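(Side note: that gain is easy to put in perspective; a sketch using only the two clocks and the ~3% figure quoted above.)

Code:
# Memory OC scaling implied by the quoted numbers: an ~11% clock bump
# bought roughly 3% performance.
stock_clk, oc_clk = 2518, 2800   # MHz
clock_gain = oc_clk / stock_clk - 1

print(f"Clock increase: {clock_gain:.1%}")                  # ~11.2%
print(f"Perf gained per unit of clock: ~{0.03 / clock_gain:.2f}")

# Only about a quarter of the extra memory clock shows up as score,
# suggesting this benchmark isn't strongly bandwidth-bound on the 9070.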
 
Without changing anything else:
At 2518 MHz, GPU power is 270 W and memory temps peaked at 80°C.
At 2800 MHz, GPU power is 270 W and memory temps peaked at 82°C.

Power draw seems to be strictly tied to my power limit; the card just uses the extra headroom to push GPU clocks.

Here's the result from 2750 MHz on the memory:
[screenshots]
 

I believe your card is the Sapphire RX 9070 Nitro+; if so, try removing the secondary backplate. Some users reported 5-6°C lower temperatures without the secondary backplate on the 9070 XT Nitro+.
 
Oh? Interesting... I'll take it off and carry on to see how the numbers change after an hour or so.

To update: I really can't say I'm noticing a difference after removing the backplate.
 
It seems the ECC algorithm was affecting your results at the 2800 MHz VRAM clock. See how you got it to legendary? :D Sometimes less is more.
I wanted to see how memory power draw changed with frequency. HWiNFO shows this info (alongside the memory junction temperature).
 
[screenshot]

Which one is that here?
 
Mostly "GPU Memory Power (VDDIO)". VDDCI_MEM should be negligible.

The card is idling and the VRAM is at 60°C?!
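(To track that sensor over a whole run rather than eyeballing it, HWiNFO can log its sensors to a CSV file; below is a minimal parsing sketch. The filename and the exact column header are assumptions; check the header row of your own log for the real sensor name.)

Code:
import csv

LOG = "hwinfo_log.csv"                      # assumed filename
SENSOR = "GPU Memory Power (VDDIO) [W]"     # assumed column header

# Collect every numeric sample for that one column.
values = []
with open(LOG, newline="", encoding="utf-8", errors="ignore") as f:
    for row in csv.DictReader(f):
        try:
            values.append(float(row[SENSOR]))
        except (KeyError, ValueError, TypeError):
            continue  # skip blanks and repeated header rows

if values:
    print(f"{SENSOR}: avg {sum(values) / len(values):.1f} W, "
          f"max {max(values):.1f} W over {len(values)} samples")
else:
    print("Sensor column not found; check the header name in your log.")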
 
Why do you think that matters?
It might not matter, but it seems a bit high (in my experience) for a system that's idling.

I can compare it to an RX 5700 XT with a 44°C VRAM temp at idle, and an RX 7800 XT with around 49-51°C memory junction temperature at idle.
The 5700 XT (Sapphire Pulse) runs substantially lower memory clocks (1.75 GHz) despite using the same GDDR6 technology; the 7800 XT (ASRock Phantom Gaming) runs 2.45 GHz, and the 9070 (XT) runs 2.54 GHz.

For the same memory technology and roughly the same clocks, an almost 10°C difference at idle is... interesting. I'm not saying it's an issue; it's just something I'm not used to.
 
Mostly "GPU Memory Power (VDDIO)". VDDCI_MEM should be negligible.

The card is idling and VRAM has 60°C?!
Memory power at 2518 MHz: 27-28 W.
At 2800 MHz: 28-30 W.

And yes, apparently most of these cards do idle with VRAM around 60°C.
 
Many thanks for your input. A 3% performance increase at the cost of 2 watts of power draw and 2°C? I expected much worse.

I'm sorry, I previously mixed up some of your screenshots.
You posted a score of 6931 at 2800 MHz, 6803 at 2750 MHz, and 6754 at 2518 MHz.
My bad; it looks like error correction has not kicked in yet at 2800 MHz. Apologies.
 
What's error correction? I'm still new to all this. xD
 
Memory chips tend to make computational errors, and the error rate increases with higher clocks (OC). Instead of the memory visibly showing signs of instability, error correction is applied first. This mechanism needs some resources in order to work.

This is a very brief explanation. The important thing for you to know is that once error correction kicks in, GPU performance starts to drop. Maybe at 2850-2900 MHz memory clocks you'll see scores similar to plain 2518 MHz, or even worse. With GDDR6, transfers that fail the error check are retransmitted, so part of the effective memory bandwidth goes to error correction and less is left for rendering. Thus, higher VRAM clocks will not always yield higher scores or FPS.
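(One way to catch this without guesswork: sweep the memory clock, record the score at each step, and flag any higher clock that scores worse than the best one. A sketch of the bookkeeping; the first three points are real numbers from this thread, while the 2900 MHz score is made up purely for illustration.)

Code:
# Memory clock sweep vs. benchmark score. A clock above the sweet spot
# that scores *worse* is likely paying an error-correction penalty
# rather than being outright unstable.
sweep = [(2518, 6754), (2750, 6803), (2800, 6931),
         (2900, 6760)]   # last pair is hypothetical

best_clk, best_score = max(sweep, key=lambda p: p[1])
print(f"Sweet spot: {best_clk} MHz (score {best_score})")

for clk, score in sweep:
    if clk > best_clk and score < best_score:
        print(f"{clk} MHz -> {score}: regression, error correction likely active")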
 
Oh okay, that explains why I saw performance dips above 2800 MHz. I was still stable at 2900 MHz but was getting worse scores. Good to know, thanks!
 
Yes, I think it was some kind of thermal limit, because since I adjusted the fan curve, a positive PL increases performance as expected, at least up to a certain point. I actually created a spreadsheet to track my 3DMark tests: Time Spy, Steel Nomad, and Speed Way.

Anyway, I had to dial my undervolt back to -80 mV, since some strange behavior happens beyond -90 mV. In both 3DMark Steel Nomad and Unigine Heaven I sometimes get a driver crash. When I check the monitoring software, it seems to follow a similar pattern: core clocks ramping up close to their max limit, followed by a VCore spike and then a crash. I tried limiting the core clock to a lower value, but the same behaviour happens. I didn't experience it in Forza or God of War, so I can't figure it out. Maybe it's related to P-state transitions, or it could be a driver issue, or my card just can't handle that curve anymore, I don't know. I'll give -85 mV a shot in a few days.


I also played with the PL down to -5%, keeping the same -80 mV undervolt. The differences, for just those three tests (so basically DX12, high settings), are as follows (see the sketch at the end of this post for how the percentages are derived):
  • PL 0% // -80 mV undervolt -> 104.16% gain // 99.18% max draw // 105.02% relative performance (score per watt consumed)
  • PL +5% // -80 mV undervolt -> 104.69% gain // 99.73% max draw // 104.97% relative performance (score per watt consumed)
  • PL -5% // -80 mV undervolt -> 103.75% gain // 97.52% max draw // 106.39% relative performance (score per watt consumed)
Raising the PL further, towards +10%, led to worse results with the same undervolt. I still need to figure more of this out; this is my first real attempt at tuning an AMD card since 2010 or so, lol.

edit: adjusted percentages
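(For reference, the three percentages in the list above reduce to simple ratios against a stock baseline run. A sketch of the bookkeeping; the raw scores and wattages are placeholders, since only the percentages were posted.)

Code:
# How the listed percentages are derived. Raw numbers are placeholders.
baseline = {"score": 7000, "draw": 300.0}    # hypothetical stock run
tuned    = {"score": 7262, "draw": 292.6}    # hypothetical tuned run

gain     = tuned["score"] / baseline["score"]            # "gain"
max_draw = tuned["draw"]  / baseline["draw"]             # "max draw"
rel_perf = gain / max_draw                               # score per watt, relative

print(f"{gain:.2%} gain // {max_draw:.2%} max draw // {rel_perf:.2%} relative perf")

# Sanity check against the posted "-5% PL" row: 103.75 / 97.52 = 106.39,
# which matches the listed relative performance exactly.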
I registered because I've been searching for someone with the same behaviour on a 9070 XT. Mine is also a Pulse. Any power limit increase causes a regression. My sweet spot, at least in Steel Nomad, is the stock 304 W limit. No matter where the UV offset sits, any PL increase gives less performance than 0% PL.

I got to 7442, I think it was, at 304 W. For gaming I also apply a -300 MHz offset, which gives better-than-stock FPS at 230-250 W draw.

But yeah, it's very odd that it doesn't like any more power. Temps are good for me too; compared to what I see others posting, mine are largely the same or better, especially memory.

Did you do any more tinkering, or find a way of tuning the curve itself?
 
Maybe there is some hard-coded limitation on power draw? Perhaps it won't go past +5% because it only has 2x 8-pin connectors?
I know 75 W can be drawn from the PCIe slot. Still, maybe they limit the power draw to under 320 W at all costs.

A slightly OCed version of the Pulse is the Pure. It has +40 MHz on core clocks and a 315 W TGP.
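(For what it's worth, the spec-level budget for that connector layout is simple arithmetic; a sketch using the standard PCIe figures. Whether the card's firmware caps power below that is a separate question.)

Code:
# Spec-level power budget for a card fed by 2x 8-pin plus the slot.
EIGHT_PIN_W = 150   # W per PCIe 8-pin connector, per spec
SLOT_W      = 75    # W from the PCIe x16 slot, per spec

budget = 2 * EIGHT_PIN_W + SLOT_W
print(f"Connector spec budget: {budget} W")       # 375 W
print(f"304 W stock + 10% = {304 * 1.10:.0f} W")  # 334 W, still inside spec

# So a ~335 W draw isn't connector-limited on paper; any hard cap below
# 375 W would be a firmware / board-design choice.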
 
I don't think it's that, as the reported power matches the increased percentage: around 335 W at +10%. Temps go up as well, indicating it really is drawing that much.
 

Nice to hear I'm not alone on this! And yes, I did some more tinkering, logging voltage and frequency in different scenarios and generating graphs to compare objective vs effective clocks.

I continued to see some gains at +5% PL, but beyond that, especially near the +10% mark, results started getting worse. As a wild idea, just to test, I also tried reducing the power limit on top of the undervolt. Interestingly, -5% PL gave better results than +5% with the same -80 mV undervolt, achieving 7-8% improvements in some areas instead of just 5-6%.

As for temps, I set a mildly aggressive fan curve, and so far everything's been good; temps are well under control without noticeable noise from the GPU. For now I've settled on a stable tune: -70 mV, -40 MHz, and -2% PL. That gives me a solid 5-6% performance boost, full gaming stability, and lower overall power consumption at a 215 W TBP limit.

PS: Sorry for the possible spam and off-topic in your thread, @LittleBro, but would you like to join the club as well? https://www.techpowerup.com/forums/threads/rx-9000-series-gpu-owners-club.333786 Everybody's welcome! :) Also, I think it could be useful to cross-reference this post after the list, so people can go straight to the OC/tuning stuff on the 9070. Let me know! :toast:
 