
PLL Tuning *really* helps with DDR5

Just wanted to throw out some advice to anyone potentially trying to push XMP or higher speeds on their memory kit. Setting the MC (memory controller) PLL voltage to 1.00v, 1.05v, or 1.10v seems to really help. I've had my 2x24GB kit since they were released but found myself incapable of reaching the stated 7200 XMP speeds until my recent attempt at PLL tuning. I've had to run speeds of 6600 to 6800, since speeds above 7000 would only be stable for a short period of time.

If you're struggling to achieve stability, I definitely recommend pushing your MC PLL anywhere from 1.00v to 1.10v. I believe the default is 0.90v on all boards. I was also able to stabilize 50x cache rather than 48x by setting the Ring PLL to 1.05v.
 
What effect does this have on the RAM lifetime?
 
What effect does this have on the RAM lifetime?
For the most part (and from what I understand), the most dangerous voltage for DDR5 is VDDQ. VDD handles the internal components for chip-to-chip communication on the actual DDR5 modules (which is why it helps with achieving tighter timings, as opposed to higher frequency). Setting VDD to 1.6v or higher doesn't affect temperatures to a meaningful degree. VDDQ, however, is the voltage powering the actual memory ICs, and it has a massive effect on temperature. Setting VDDQ higher than 1.45v without active cooling is not safe for the lifetime of the module. You'll also likely encounter instability before module lifetime becomes an issue, due to how temperature-sensitive DDR5 currently is.

As for the MC PLL voltage, this should theoretically have a non-existent effect on the RAM's lifetime. Moving from 0.90v to 1.00v ~ 1.10v is almost within the margin of error. Many people on the xOC forums have been running 1.20v and higher without issue since 13th gen's launch. You'll see voltage rollover before it becomes a safety issue. I personally tested where the rollover point for stability was, and I encountered memtest errors at just 1.25v PLL.
 
I wonder what the PLL limit is before it damages the CPU.

You're also on a Z690 motherboard, but it's a 2-DIMM board, so the upper limit is going to be around 7800.
 
If the default value is 0.9V, then a >20% increase to 1.1V seems awfully excessive; I'd be careful with that.
 
If the default value is 0.9V, then a >20% increase to 1.1V seems awfully excessive; I'd be careful with that.
Thanks! I'll keep an eye out for degradation and try to keep the PLLs as low as I can. I know prior generations were happy to accept 1.25v & higher, but ADL and RPL do seem more fragile.

@Vya Domus yep. When people cite XOC forums, you know it's probably not safe long term.
Yeah... these people have their chips delidded and under massive water blocks as well. I'm just running an AIO with a power limit and a strict temperature limit. I wish DDR5 weren't so temperamental with current-generation boards. My IMC is fine, but the board itself is only rated for 6800, and it shows.
 
Gonna try this out; if I fry it, it'll be an excuse to buy Zen 5. DDR5 with these IMCs loves to be stable for days at a time and then not. Really hard to get these stable.
 
Gonna try this out; if I fry it, it'll be an excuse to buy Zen 5. DDR5 with these IMCs loves to be stable for days at a time and then not. Really hard to get these stable.
In my experience, setting the IMC PLL to 1.05 has helped tremendously. I did more tuning today and went from 6800 (prior to PLL tuning) to 7600.

My settings are as follows:
36-46-36-38-84 @ 7600 (2x24GB)
tCKE needed to be set at 16 (was 8 prior)
tRRD_L needed to be set at 12 (was 8 to 10 prior; you may need 14 depending on lottery)
tREFI is at 70,000

1.45 VDD
1.45 VDDQ

1.45 IMC (might be able to run 1.40)
1.10 SA (probably can be reduced to 1.05)

You might be able to get away with a lower VDDQ or VDD, so I would use my settings as a starting point and lower from there. I hit voltage rollover on the IMC at 1.50v, so 7800 isn't attainable. The memory also rolls over at 1.50 VDDQ.

For what it's worth, I've been running 1.05 across most of my PLLs (except ring, which is at 1.00) for around 3 weeks now with no issues. I'll certainly update you if anything occurs.

 
I used to own this board; my only gripe is that it's really cramped down there, and the heatsinks Asus made for the board are too chunky. In terms of overclocking, the highest I was able to get was 7400 MT/s with an SP 84 13600K. My PLLs were at 1.1 for SA and MC, and my ring bus runs at 4800 MHz with just 0.99. I ran this build for almost a year without issues once booted (memory training during cold boots is hit or miss at times).
 
I used to own this board; my only gripe is that it's really cramped down there, and the heatsinks Asus made for the board are too chunky. In terms of overclocking, the highest I was able to get was 7400 MT/s with an SP 84 13600K. My PLLs were at 1.1 for SA and MC, and my ring bus runs at 4800 MHz with just 0.99. I ran this build for almost a year without issues once booted (memory training during cold boots is hit or miss at times).
Yep, that's my experience as well. Cold boots at anything beyond 6800 are very troublesome. Disabling retraining only causes errors to form.

I also noticed a massive power draw reduction going from 7200 down to 6000 with tight timings: 220W full load (DDR5 6000) vs 248W full load (7200). But the Timespy CPU score only changed by around 3%.

I know this is obsessing over single digit percentage points, but I find it very interesting how large the power draw difference is when the only thing changed is the memory speed and a few primaries. The performance benefit from running higher speeds doesn't seem worth the power draw, especially when benchmarks show tuned 7200 vs tuned 6000 is almost margin of error.

I've gone down to 6000 CL26 and it opened up enough thermal headroom for me to run 5.5 with hyperthreading, rather than having to disable HT.
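Those figures can be put side by side as a quick perf-per-watt check. A minimal sketch: the wattages and the ~3% score delta come from the posts above, but the absolute scores are normalized placeholders (6000 config = 100), not real Timespy results.

```python
# Perf-per-watt comparison using the figures quoted above:
# 220 W full load at DDR5-6000 vs 248 W at 7200, ~3% CPU score difference.
# Scores are normalized (6000 config = 100), not measured Timespy values.
def perf_per_watt(score: float, watts: float) -> float:
    return score / watts

ppw_6000 = perf_per_watt(100.0, 220.0)   # tuned DDR5-6000
ppw_7200 = perf_per_watt(103.0, 248.0)   # tuned DDR5-7200 (+3% score)

# ~13% more power for ~3% more score: 6000 comes out ahead on efficiency.
print(f"6000: {ppw_6000:.3f}/W, 7200: {ppw_7200:.3f}/W")
```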

 
I know this is obsessing over single digit percentage points, but I find it very interesting how large the power draw difference is when the only thing changed is the memory speed and a few primaries.
Higher frequency + lower CAS = more voltage.

I haven't observed a higher power draw with faster memory, but I also haven't been looking. I'm always sitting at 265-287W for 5.5 P / 4.3 E
The performance benefit from running higher speeds doesn't seem worth the power draw, especially when benchmarks show tuned 7200 vs tuned 6000 is almost margin of error.
It would be interesting to know how scientific / precise these numbers are. For example: is the CPU frequency locked, is the ring ratio locked, are C-states disabled? Are the secondary timings motherboard defaults or manually set, and what's the tREFI value? Is this one benchmark run or an average of 3-5?

It could be a clear 4 FPS win or a +/- 4 FPS margin of error.
 
Higher frequency + lower CAS = more voltage.

I haven't observed a higher power draw with faster memory, but I also haven't been looking. I'm always sitting at 265-287W for 5.5 P / 4.3 E

It would be interesting to know how scientific / precise these numbers are. For example: is the CPU frequency locked, is the ring ratio locked, are C-states disabled? Are the secondary timings motherboard defaults or manually set, and what's the tREFI value? Is this one benchmark run or an average of 3-5?

It could be a clear 4 FPS win or a +/- 4 FPS margin of error.
I just did the testing in Timespy and found the difference to be 3.45% in CPU score, from 6000 CL26 (base) to 6800 CL32 (+3.45%).
But the power draw really goes up when the memory is at 6800 or higher. I'm beginning to wonder if the reason people have unstable silicon from the factory (14900K / KS) is that they're running high memory speeds, creating larger vdroop...

Here's 3 images from hwinfo64.
Same voltages, same clock speeds; the only difference is the memory: 6000, 6400, and 6800 respectively. This was taken during one run of R20 each, so hardly scientific, but it does show an upward trend, and the disparity only grows with heavier workloads. The biggest I've seen is about 30 watts in OCCT's Extreme CPU stability test.

As a note, these were taken at 5.3P / 4.2E, so mostly stock - just undervolted to 1.220v load in all 3 scenarios.

 
I also noticed a massive power draw reduction going from 7200 down to 6000 with tight timings: 220W full load (DDR5 6000) vs 248W full load (7200). But the Timespy CPU score only changed by around 3%.

I don't really concern myself with power consumption; the sweet spot for these chips is around 6800-7200 MT/s, so if the board can do it, why not. I don't use this machine for benchmarking, though; I have another monster in my closet for that purpose.
 
Just wanted to throw out some advice to anyone potentially trying to push XMP or higher speeds on their memory kit. Setting the MC (memory controller) PLL voltage to 1.00v, 1.05v, or 1.10v seems to really help. I've had my 2x24GB kit since they were released but found myself incapable of reaching the stated 7200 XMP speeds until my recent attempt at PLL tuning. I've had to run speeds of 6600 to 6800, since speeds above 7000 would only be stable for a short period of time.

If you're struggling to achieve stability, I definitely recommend pushing your MC PLL anywhere from 1.00v to 1.10v. I believe the default is 0.90v on all boards. I was also able to stabilize 50x cache rather than 48x by setting the Ring PLL to 1.05v.

Not sure if you're still here, but I believe the max PLL possible is 1.125v, because the trim is in 15mv steps and can only go up to 0x0F (hex), which is 15 decimal: 900mv + 15mv × 15 = 900mv + 225mv = 1.125v.
This is well defined on Arrow Lake, since there you enter an offset in decimal (0-15), and there is also a sub-value with the same 0-15 range (0x0F maximum); Raptor Lake doesn't have those sub-values. Shamino mentioned these in his Arrow Lake guide (and I think I did in the guide I wrote on ROG and OCN).

Entering a value higher than 1.125v and rebooting will use the last valid value set on the previous boot. Turning the computer off and powering it on with a BIOS value already higher than 1.125v will use 0.9v (the default). This is much different from PLL Termination/CPU Standby (those seem linked somehow; not setting both to the same value can cause an 00 POST code until a hard power cycle).
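The trim arithmetic above can be sketched in a few lines. This is just an illustration of the 4-bit, 15 mV encoding described in the post; the constant and function names are mine, not any documented register layout.

```python
# Model of the MC PLL trim described above: a 0.900 V default plus a
# 4-bit trim code (0x0-0xF) in 15 mV steps, giving a 1.125 V ceiling.
DEFAULT_V = 0.900   # assumed default PLL voltage
STEP_V = 0.015      # 15 mV per trim step
MAX_TRIM = 0x0F     # 4-bit field -> 15 decimal

def pll_voltage(trim: int) -> float:
    """Resulting PLL voltage for a given trim code (0-15)."""
    if not 0 <= trim <= MAX_TRIM:
        raise ValueError("trim must fit in 4 bits (0-15)")
    return round(DEFAULT_V + STEP_V * trim, 3)

print(pll_voltage(MAX_TRIM))  # 1.125 -> the maximum noted above
```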
 
I've had to run speeds of 6600 to 6800, since speeds above 7000 would only be stable for a short period of time.
The board only supports up to 7200 MT/s; that's why there's some struggle when overclocking slower kits like ours from 6000 MT/s to 6800+...

My TUF Z790 acts exactly the same way. I do use a looser timing set due to earlier-release Samsung chips, but PLL at 0.9v seems OK.
 
Thanks! I'll keep an eye out for degradation and try to keep the PLLs as low as I can. I know prior generations were happy to accept 1.25v & higher, but ADL and RPL do seem more fragile.


Yeah... these people have their chips delidded and under massive water blocks as well. I'm just running an AIO with a power limit and a strict temperature limit. I wish DDR5 weren't so temperamental with current-generation boards. My IMC is fine, but the board itself is only rated for 6800, and it shows.
What can you do? The higher the operating frequency, the more the quality and length of the traces factor in :(
 