
5600X safe voltage for 4.85GHz

Joined
Jun 29, 2019
Messages
136 (0.06/day)
Processor AMD Ryzen 5 5600X @4.8Ghz PBO
Motherboard MSI B550-A PRO
Cooling Hyper 212 Black Edition
Memory 4x8GB Crucial Ballistix 3800Mhz CL16
Video Card(s) Gigabyte RTX 3070 Gaming OC
Storage 980 PRO 500GB, 860 EVO 500GB, 850 EVO 500GB
Display(s) LG 24GN600-B
Case CM 690 III
Audio Device(s) Creative Sound Blaster Z
Power Supply EVGA SuperNOVA G2 650w
Mouse Cooler Master MM711 Matte Black
Keyboard Corsair K95 Platinum - Cherry MX Brown
Software Windows 11
Hi,
I want to run my 5600X at 4.85GHz for 1T tasks, plus whatever can be achieved at nT, with a reasonable, long-term-adequate voltage that won't degrade the CPU in any noticeable way over the next 2-3 years.

I have PBO2 enabled with curve optimizer. Currently I have the following setup:
  • 125 PPT
  • 75 TDC
  • 100 EDC
  • 1X Scalar
  • Curve optimizer: -20 on core 1,2,4,5 /// -16 core 3 /// -12 core 6
  • +200MHz Fmax override
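For a rough sense of what those counts mean in volts: one Curve Optimizer step is commonly reckoned at about 3-5 mV on the V-F curve (AMD doesn't publish an exact figure), so a quick sketch of the implied per-core undervolt looks like this (the 4 mV/step midpoint is my assumption):

```python
# Rough per-core undervolt implied by the Curve Optimizer counts above.
# Assumption: one CO step shifts the V-F curve by ~3-5 mV; 4 mV is a
# hypothetical midpoint, since AMD doesn't publish the exact value.
MV_PER_STEP = 4

co_counts = {1: -20, 2: -20, 3: -16, 4: -20, 5: -20, 6: -12}

offsets_mv = {core: steps * MV_PER_STEP for core, steps in co_counts.items()}
for core, mv in offsets_mv.items():
    print(f"Core {core}: ~{mv} mV")
```

On this reading, cores 3 and 6 run roughly 16 mV and 32 mV less undervolted than the rest, which is what the later question about per-core degradation is really about.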
When playing some games (GTA Online in the screenshot below), the highest voltage I see is 1.350 (a brief peak); it sits around 1.337-1.344. Temps are in the mid 50s (52-56°C) and power draw is 60-70 W.
Example:
4.85 - Copy.png


Idle voltage is between 1.269-1.288.
It seems stable so far: Cinebench, CPU-Z, AIDA64 and Linpack all pass short runs (a couple of minutes max; I refuse to use Prime95).

The question is: is this safe for long-term use (the use case from the screenshot)? Can I optimize this a bit more? I'd appreciate any tips from people who know better.
Another question: cores 3 and 6 carry smaller (less negative) curve optimizer offsets, which means higher voltage (they're unstable otherwise). Does that mean they will degrade faster than the rest? Core 6 is the fastest according to Ryzen Master, followed by core 3. I can only check the individual VIDs, and they seem to be identical; how is that the case when cores 3 and 6 have different curve optimizer values?

Thanks.
 
If you want the CPU to run cooler you could try 90 W PPT and 75 EDC. Mind you, just running PBO has voided the warranty (sad but true), so if your computer isn't restarting under heavy load or at idle, your curve is probably OK.

A curve of -10 -10 -5 -10 -10 -5, if cores 3 and 6 are the best, should be completely stable; you might even manage -15 instead of -10, but I generally find anything beyond that isn't stable. I also run Prime95 Blend to make sure there are no issues with the curve (a necessary evil for checking that your system is stable under load), because it will fail workers if there is any instability in the RAM or CPU.

1.35v peak on auto is OK; it's normal to see that during a game.
 
I think the temps are perfectly fine, actually. Even with Linpack I saw a peak of about 78°C. For a tiny Hyper 212 Black Edition (with two fans), that's pretty good.
CPUs very rarely actually die, so even if I've lost the warranty it shouldn't be a big deal. (Besides, how could they prove I was using PBO?)

I had -16 on all cores, with -8 and -6 on cores 3 and 6, at the stock 4.65GHz, and it has been rock solid for a couple of months now.
Sorry, but Prime95 is just stupid. It draws an insane amount of power and represents no real-world use case at all, not to mention it can degrade the CPU just by testing. I simply avoid it.
 
Whatever PBO decides is fine. As long as you are not using manual or positive offset voltages, don't concern yourself with what the voltage is.
 
I figured that if 4.85GHz is only stable at relatively high voltages (I don't know whether 1.344 counts), I'd simply take it down a couple of notches to 4.75GHz or so, although I'd hate to lose the killer 1T performance uplift from 4.85.
 
Hehehe, look at the SVI2 TFN sensor; that's the real core voltage. Try more like 1.5v :)
Remember, voltage scales back with CPU load when using PBO. Don't worry about what it is; the FIT scaler will handle it.
 
SVI2 TFN is what's being shown in my OSD in the screenshot above, by the way. But it doesn't show individual cores' voltages; you only see those via the VIDs (which I assume are not very accurate?).
 

VID is only a voltage request. Neither AMD nor Intel have products that can vary Vcore for individual cores. CO gives you the ability to change the clock of an individual core for a given Vcore value by changing V-F, but Vcore applies to all cores equally.
 
except for chips with FIVR
 

Huh?

If you meant VID for Haswell, sure, but VID is not relevant in that way for Ryzen.

There is a single Vcore (VDDCR_CPU) value that applies chip-wide. Even on Alder Lake, which advertises some "per-core adaptive" control, the applied Vcore does not change per core.


The 4.85 you see in your OSD doesn't mean much; 4850 is simply the max PBO ceiling for a 5600X (+200). On Ryzen, if you only watch the "Core Clock" metric without turning on Snapshot Polling in HWiNFO, or without keeping an eye on effective clock instead, you always get an unreasonably optimistic picture of clocks (i.e. it makes you think it's running 4.85GHz all-core). It's not *really* lying per se, but the cores most likely aren't hitting 4850 constantly or for any meaningful amount of time, despite what it looks like.
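To put toy numbers on that (entirely made up, just to show the mechanism): effective clock is a residency-weighted average over the whole interval, sleep time included, while a discrete "Core Clock" sample only sees whatever the core was running at while awake.

```python
# Hypothetical 1-second window for one core:
# (frequency in MHz, fraction of the window spent at it)
residency = [(4850, 0.15), (4400, 0.10), (0, 0.75)]  # asleep 75% of the time

# Effective clock averages over everything, including sleep time.
effective_mhz = sum(f * t for f, t in residency)

# A discrete poll that lands while the core is awake reports ~4850,
# even though the time-weighted average is far lower.
print(f"effective: {effective_mhz:.0f} MHz vs. ~4850 MHz discrete")
```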

Either of those two options will give you a better picture of what the CPU is doing. CPU package power and per-core HWInfo power can also give you an idea of the extent the CPU is really being loaded.

125 W is a fair bit for a 5600X, but I mention package power because unless you bench balls-to-the-wall all-core at 125 W all the time, aggressive PBO shouldn't be a worry for CPU longevity. Ryzen is more afraid of current, and in games you're usually far short of 100 W package power.

@freeagent is the one who benches his 5600X within an inch of its life :laugh: he can probably tell you more about longevity
 
I'm aware of the difference between the usual clock speed readings and effective clock speeds. Like I said, I'm not really all that interested in the all-core frequency; I want the better single-core performance from the +200MHz boost more than anything else.

I could have raised EDC further along with PPT, but I capped them at 100 A and 125 W respectively for safety, even if that means lower all-core clocks. In Cinebench, for example, I certainly cannot hold 4.85, or even 4.75, and I'm perfectly fine with that as long as whatever boost I do get doesn't come with unsafe voltages/amps. 100 A EDC is only +10 A over stock, so I assume it should be fine even if it runs at 100 A (correct me if I'm wrong, please). In Cinebench it absolutely does reach the 100 A EDC limit, by the way.

125 W is never really reached unless I'm running Linpack or Cinebench or something (110-120 W). I set the limit a bit high on purpose to account for microsecond peaks.
you bench balls to the wall all-core 125W all the time
I actually avoid benchmarking/stress testing as much as possible. That's why I said I only tested with AIDA, Cinebench, etc. for a short period at a time; I only ran the AIDA stress test for 15 seconds, lol. And Prime95 will never be downloaded to my PC ever again after I used it on my previous Haswell CPU.
and in games you're probably far short of 100W package power usually.
Yeah not even close. The absolute highest I've seen tonight is with Cyberpunk 2077, pulling 75+ watts.

Do you happen to know what EDC value is good for long-term use (regardless of what my limit is set to)?
 
Set it to :all of it: and forget about it.
If it's not running at 90°C+, who cares.
 
Give it the beans with PBO, you won't hurt anything. I have seen a 5600X at 155 W PPT; Ryzen is pretty tough. Static clocks aren't so good if you are reaching for the upper end, since it draws hella current at the top of the clock range. PBO is nice and safe no matter your settings. When you get into the upper regions of your power limits the CPU really starts to shine, but you have to keep it cool to see the benefits, or else it will just protect itself.
 

Usually you figure out the EDC that works best for you through trial and error: whichever value gives you the best performance. Even with the same CPU SKU and the same board, the same PPT/TDC/EDC won't give the same result.

If you don't live for benchmarking and just want ST perf, I honestly wouldn't daily a 5600X past 100 W. Maybe set the usual 88/60/90, which gives a bit more headroom than the stock 76 W.

As for the EDC value, iirc AGESA is currently somewhat bugged, so even on 2-CCD parts there isn't much point in going past stock (140 A). I'd just leave it unless you see pronounced ST or MT gains from raising it.

If you want better ST performance, run some CoreCycler and read your effective clocks for each core to get an idea of where they stand, then see whether you can influence those clocks with Curve Optimizer settings. Not much else will affect ST perf.
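If you log such a run, a few lines of scripting can rank the cores afterwards. A sketch, assuming you've exported per-core effective clocks to CSV (HWiNFO can log sensors to CSV, but the column names below are hypothetical and depend on your sensor layout):

```python
import csv
import io

# Stand-in for an exported sensor log; headers are hypothetical.
log = io.StringIO(
    "Core 0 T0 Effective Clock,Core 1 T0 Effective Clock\n"
    "4790,4610\n"
    "4820,4650\n"
)

rows = list(csv.DictReader(log))
avg = {col: sum(float(r[col]) for r in rows) / len(rows) for col in rows[0]}

# Print cores from fastest to slowest by average effective clock.
for col, mhz in sorted(avg.items(), key=lambda kv: -kv[1]):
    print(f"{col}: {mhz:.0f} MHz")
```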
 
To be fair, the only time I see high PPT is with Linpack or the like. ST also enjoys the high limits :cool:

Edit:

I bought my 5600X on 2/25/21 and it has been run like that since I figured out how to get it to scale; I got my 5900X on 5/24/21 and it has been run hard since I got it.

Since I run Linpack, and have run it a lot, I can see the effect of power limits on GFlops output. Lower power = lower GFlops.

Edit again:

Basically, what I am saying is: if you want high performance, you have to double the number on the box. If the box says 65 W, you want at least 120 W; if the box says 105 W, you want at least 220-230 W.

With a curve of course :)
 
I run mine at 4.80-4.85GHz on all cores with PBO (it varies a bit with load) at an average voltage of 1.319v; peak voltage is 1.356v. I am water-cooled so temps are no problem for me, about 60°C under full load. I should probably try to push a bit more, but it's running so well I don't want to mess with it. As others have said, with PBO let 'er rip!
 
If I run a static 4700MHz on my 5600X, I get OTP (or maybe it was OCP...) at roughly 70-75°C under a hardcore load :D 4600 is OK for me at ~1.325v, good for all loads I think... somewhere around there anyway, could be 1.35 :laugh:

On my 5900X, for Linpack and a static OC, 4500MHz at 1.225v is my limit; after that I can't control the temps. I can fold for weeks at 4600 and 1.35v though, no problem.
 
I think effective clock is most meaningful under all-core (100%) loads. On lightly or moderately threaded loads like gaming, the effective clocks mostly just show which cores are loaded more. In that situation you can really only use the average discrete clock to roughly judge how sustained the boost is.

Below is gaming on a 5900X for 28-29 minutes. Look at the average core clocks: the highest is 4.8GHz for the 1st core, 4.75GHz for the 2nd, 4.55GHz for the 3rd, and so on (in CPPC order). Max clock says 4.96GHz, yet max effective clock is 4.9GHz. Average SVI2 core voltage is ~1.40V.

The per-core average effective clocks are low because threads don't stay on the same cores throughout the game session; they only tend to follow CPPC order. The "Average Effective Clock" sensor right below the individual effective clocks can tell you something about the CPU as a whole. Average thread usage is 81%, while average total CPU usage is 13-14%. Active core count is 3.4 (out of 12), which agrees with the total active (C0) core state at ~28%.

It's not easy to conclude what is really happening on a Ryzen, but you can compare your own results across configurations on the exact same load. With games that isn't easy.
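For what it's worth, the active-core figure above checks out arithmetically:

```python
# 3.4 active cores out of 12 should match the ~28% package C0 residency.
active_cores = 3.4
total_cores = 12

c0_fraction = active_cores / total_cores
print(f"{c0_fraction:.1%}")  # ~28.3%, in line with the ~28% C0 reading
```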

The following is with Snapshot CPU Polling enabled.
The thermal CPU limit is 75°C, hence the red on 70+°C.
+50MHz boost override and -CO (8~18).

HWiNFO64_30min_FarCryND_5900X_5700XT_capped_60FPS.png
 
lol i just put it to offset +0.15 mV and ratio 4.8, scalar 2X... ASUS Prime B450M-A II.
 
Out of curiosity, why do you need the overvoltage, both offset and scalar? Scalar is effectively a bypass of the CPU's silicon health management. 2X is not all that bad compared with the 1X default, and it's definitely not 5~10X (why settings that high even exist, I don't know), but I would like to understand the thinking behind the overvolting in general.

Ryzen 5000 boosts better/higher with some undervolting, but not via the offset; Curve Optimizer is a very nice tool for that.
 
Are all CPU errors caused by too-low voltage (or the like) manifested as blue screens or hard crashes?
Say a game crashes on me but the PC is still running fine; is it possible that my PBO settings are the problem?
 
That may be the cause if you run too much negative Curve Optimizer, too high a boost override, or too-high FCLK/UCLK/MCLK speeds.
You can check everything in Windows Event Viewer for WHEA errors/warnings (Event ID 18 or 19).
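On a live system you'd filter in Event Viewer directly, or with PowerShell's `Get-WinEvent`. Purely to illustrate the filter (the event records below are invented, not real log output), a sketch:

```python
# Pick out WHEA hardware-error records (Event ID 18 or 19) from a list
# of System-log events. The records here are made-up examples.
WHEA_IDS = {18, 19}

events = [
    {"id": 7036, "source": "Service Control Manager"},
    {"id": 19, "source": "WHEA-Logger"},   # WHEA corrected-error entry
    {"id": 1001, "source": "Windows Error Reporting"},
]

whea = [e for e in events if e["id"] in WHEA_IDS]
print(f"{len(whea)} WHEA event(s) found")
```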
 
There are no WHEA errors or warnings at all. The same game crashed just now, and I went to check the notifications on TechPowerUp.
 

Attachments

  • events.png

WHEA errors don't always show up reliably in the logs, especially for a crash/reboot, since that's a fatal error. IMO BSODs aren't a particularly reliable signal either; I've only seen lots of BSODs with blatantly unstable memory benching or bad hardware (board/CPU).

For Curve Optimizer there are a number of stress-testing methods out there. OCCT has a dedicated feature now, there's always the CoreCycler script, and there are some others.

Games are not a substitute for CO testing and memtesting.
 
Sorry for reviving this thread, but I have to agree with tabascosauz. My Curve Optimizer offsets were indeed unstable; after getting random crashes in games (no bluescreens whatsoever), a couple of nights of CoreCycler confirmed it. If anyone else has the same questions I did, check out der8auer's experiment with two 5600Xs and a 5800X, running them for 6 months 24/7 at 1.45v:

TL;DW: 1.35v should be perfectly fine even as a static voltage.
 