
Overclocking RAM on Sabertooth X99

Joined
Mar 18, 2008
Messages
5,717 (0.97/day)
Geez man, you are gonna torture the living shit out of your RAM. Is it really worth it to squeeze out that tiny amount of performance? I mean, X99 is not Ryzen; it does not benefit a whole lot from faster RAM.
 
Joined
Oct 2, 2004
Messages
13,791 (1.93/day)
Torture is when you run it at insane voltages and clocks. Mine sort of has the clocks due to the timings, but it's running at very conservative voltages. Besides, this is the first time ever that I've actually overclocked RAM, and I've owned OC-ready systems for 15 years. Let me have the fun :D
 
Joined
Feb 2, 2015
Messages
2,707 (0.80/day)
Location
On The Highway To Hell \m/
I'm wondering about the 4th number in 13-13-13-32. In general it's the sum of the first three numbers, but sometimes it's lower. Is there any general rule for how low it's good to go? I've tried it and it goes all the way down to 13, making it 13-13-13-13, which just doesn't make sense. But it worked lol
tRAS too low can erase data from the memory before it has a chance to be found/used, which wastes that data. tRAS too high can keep data in the memory for longer than it needs to be stored, which wastes time by preventing the memory from being refilled with new data more quickly.
tRAS Timing: Minimum RAS Active Time. The minimum time between a row being activated and being precharged (deactivated). A row cannot be precharged until tRAS has elapsed. The lower this is, the faster the performance, but if it is set too low, it can cause data corruption by closing the row too soon.

tRAS = tCL + tRCD + tRP (+/- 1), so that everything gets enough time before the bank is closed.

e.g.: 2.5-3-3-8, where the last number, 8, is the tRAS timing. (2.5-3-3-8 is just an example set of memory timings.)

https://www.techpowerup.com/articles/overclocking/AMD/memory/131
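If you want to sanity-check these rules of thumb without doing the arithmetic by hand, here's a minimal Python sketch. The 13-13-13 and 22 figures are just the ones discussed in this thread, and the formulas are only rough guidelines, not JEDEC requirements:
Code:
def tras_sum_rule(tcl, trcd, trp):
    """Old DDR/DDR2/DDR3-era guideline: tRAS ~= tCL + tRCD + tRP (+/- 1)."""
    return tcl + trcd + trp

def tras_double_cl_rule(tcl, offset=2):
    """Looser DDR4-era pattern (defaults discussed below sit near 2x tCL plus a bit)."""
    return 2 * tcl + offset

print(tras_sum_rule(13, 13, 13))     # 39 (old sum rule)
print(tras_double_cl_rule(13))       # 28 (roughly 2x CAS)
print(tras_double_cl_rule(22, -2))   # 42, matching the XMP 22 -> 42 example later in the thread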
 
Joined
Mar 24, 2010
Messages
5,047 (0.98/day)
Location
Iberian Peninsula
[Attachment: SPD.jpg]
Updated image! On my X99, the tRAS readings in the default profiles are mostly 2x the previous SPD value plus 2 or 4, not the sum of all three, so we have 15-15-15-35. Based on what MrGenius said, I think this should be kept on the conservative side.
It is a little bit over or under 2x... check the XMP value (22 -> 42) compared to SPD (15 -> 35, 14 -> 34).

I myself am still tweaking, mainly because when I find something fast (AIDA bench) and stable (RealBench), after some weeks I notice something I dislike... and I'm back at it again... I haven't even started to seriously look into System Agent (on auto, because the gurus say turn it up or down until you find the best, well... mmm) or DRAM voltage (going over 1.35 V doesn't seem to do much).
 
Joined
Oct 2, 2004
Messages
13,791 (1.93/day)
It used to be that way (the sum) with older RAM like DDR2 or DDR3: 4-4-4-12 or 5-5-5-15 were the usual timings. It changed with DDR4. So, you're saying in my particular case 13-13-13-26 would make the most sense? Maybe up to 13-13-13-28?
 
Joined
Mar 24, 2010
Messages
5,047 (0.98/day)
Location
Iberian Peninsula
Somehow, my screenie did not include the XMP values. Updated it above. I would just say, regarding the 4th value, it should not be equal to the rest (not 13-13-13-13) but more like what you say: 13..26, 13..28, yes. Makes sense.
 
Joined
Oct 2, 2004
Messages
13,791 (1.93/day)
Yeah, I tried 13-13-13-13 when I was decreasing it and observing how far down it would go. And it went all the way down because it's not affecting the actual "tightness" of the RAM, just how long the rows are allowed to stay active before being closed, which can go as low as you want to set it. The first three values are not so generous; they always stop at some point.
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,232 (2.62/day)
As I might have mentioned in another thread, pay attention to page faults, especially when playing with tRAS. You may also want to try looser timings, because as @MrGenius says, this can make the data be "held open" longer. Either way, when the data cannot be accessed as required, this can cause a hard fault, and a slight delay as the row is "activated" again so the data can be accessed.

Think of memory as a page of paper written on with ink that disappears over time; it needs to be refreshed periodically. The refresh also has to sync with reads and writes, each handled by a different person. If they're not coordinated correctly, these three people are going to get in each other's way, and the faster they work, the less margin for error there is, making their little dance that much more difficult. There will be times when only one is touching the page, or two, or three, and there are delays (timings) added into what they do in order to ensure that this delicate balance keeps working.

Reads = data outbound

Writes = data inbound

Refresh = well, that's obvious.
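To put a rough number on how much of the memory's time refresh actually eats, here's a quick calculation using typical JEDEC-ish DDR4 figures (not values read from this board):
Code:
# Rough DDR4 refresh overhead. tREFI is the average interval between refresh
# commands, tRFC is how long each refresh keeps the rank busy.
tREFI_ns = 7800   # ~7.8 us at normal operating temperature
tRFC_ns = 350     # typical for an 8 Gb DDR4 die

print(f"~{tRFC_ns / tREFI_ns:.1%} of the time is spent refreshing")   # ~4.5%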

So, with that in mind, changing one timing isn't always the best approach. When you do, you are merely playing with the window that one timing has in relation to all the other timings, so when you then adjust another timing, your previous work might become null and void.

Also, even more important than these first four timings are the tertiary timings, which go into fine details on what each timing is doing. Think of those three arms writing on the paper; do they go up, down, left, right, and how long do they have in order to make those moves? That's what the tertiary timings are for. Adjusting these can have a much larger impact on performance, since it will tighten up the movements, rather than making sure all three operations (read, write, refresh) sync right (with respect to the clockspeed).


I hope that maybe helps you put a perspective on what it is you are changing, and how they are all important and linked together. ASUS was one of the first brands to offer us access to these timings way back when, and as such, they do tend to have a leg up on other brands when it comes to memory tweaking, but at the same time, they also then understand how this is a feature that some are willing to pay for, so not all boards offer the "flexibility" required.

To get the most out of DDR4 actually requires far higher voltages than you are pushing, but your board isn't one designed to offer the flexibility to get the most out of your sticks. It will still allow you to play lots, though. Just understand that sometimes you'll run into hard walls that you simply won't be able to jump over, because the BIOS isn't always willing. So, when the BIOS isn't willing, you gotta whip it into shape and tell it what to do! That means playing with these other timings!
 
Joined
Oct 2, 2004
Messages
13,791 (1.93/day)
Secondary and tertiary timings are already set as low as they can go using the DRAM CLK Period setting, which basically adjusts all the secondary and tertiary timings to suit a specific DRAM speed with just one setting.

As for the clocks, I'm actually super happy with what I have. I literally don't know anyone with a 6-core CPU running at such a clock and such an insanely low voltage. And the outcome with the RAM is also pretty sweet. 2666 MHz with such timings is pretty impressive imo. Plus, from what I've researched, 2666 MHz actually makes the biggest gains on X99, only to be beaten by 3200 MHz in one instance, I think it was Cinebench. So, even if it's not the absolute extreme, I'm really happy because I've gained a lot without even pushing the hardware at all in terms of voltages. Like I've said before, all this is achieved with voltages that don't even get colored yellow in the BIOS. That's pretty impressive by itself imo.
 
Joined
Mar 24, 2010
Messages
5,047 (0.98/day)
Location
Iberian Peninsula
Bravo, cadaveca, man! That must be the best lesson I have ever heard about RAM!
I wish we had a place here to create a knowledge base. For now I have created a OneNote notebook for this kind of masterclass :)

I said voltage does not help; totally wrong, for sure. But I have loaded some of the ROG RAM profiles in the BIOS ("single-sided Samsung 8x4 sticks, 1.5 V, 3200 MHz") and it ups the voltages and lowers all the settings, but in the end I do not get better benchmarks, and such extreme settings make me afraid of BSODs or worse for 24/7 use.
 
Joined
Oct 2, 2004
Messages
13,791 (1.93/day)
The main problem with cache is degradation. And it's not voltage- but clock-induced, which means no matter what you do, it'll degrade faster. Work on the CPU clock instead; it's more important for most things and contributes to CPU degradation the least.
 
Joined
Dec 31, 2009
Messages
19,366 (3.71/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
How do cache hits affect the life of the CPU more than the main clock on the CPU?
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,232 (2.62/day)
Secondary and tertiary timings are already set as low as they can go using the DRAM CLK Period setting, which basically adjusts all the secondary and tertiary timings to suit a specific DRAM speed with just one setting.
That's the sort of thing you need to take over yourself. The DRAM CLK period should be set according to CAS, and will change with each IC in use; some are more tolerant than others. But again, that setting simply applies automatic profiles rather than "going full manual". It's like the difference between an automatic car and a manual (this may be the one and only time that a car analogy actually truly suits PCs). Most car enthusiasts will prefer a manual; it puts you in more control of the power (speed) at any given moment.
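For reference, the clock period itself is just the inverse of the actual memory clock (the BIOS setting of the same name picks its profile based on it, as I understand it), so any timing in cycles can be converted to nanoseconds. A quick sketch, using only speed/CAS pairs mentioned in this thread as illustrations:
Code:
# DDR transfers twice per clock, so DDR4-2666 runs a ~1333 MHz clock.
def clk_period_ns(mt_per_s):
    return 1000.0 / (mt_per_s / 2)   # nanoseconds per memory clock cycle

def timing_ns(cycles, mt_per_s):
    return cycles * clk_period_ns(mt_per_s)

print(round(clk_period_ns(2666), 3))   # ~0.75 ns per cycle
print(round(timing_ns(13, 2666), 2))   # CL13 at 2666 MT/s ~= 9.75 ns
print(round(timing_ns(15, 3200), 2))   # CL15 at 3200 MT/s ~= 9.38 ns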

It's good that you are close to satisfied with your current set-up, but what I'd suggest is buying a completely different set of sticks and then trying again. Not to get better performance, but simply to do it all over again in a different way. Micron, Hynix and Samsung ICs all have slightly different tolerances in timings and voltage that actually make them very very different.

The main problem with cache is degradation. And it's not voltage- but clock-induced, which means no matter what you do, it'll degrade faster. Work on the CPU clock instead; it's more important for most things and contributes to CPU degradation the least.

Have you killed any chips yourself? Or are you just taking someone else's info and regurgitating it? :p Once you have a few chips (of different SKUs), you'll see that the cache speed is not the same for all of them. So... there really isn't such a thing, or those chips with 3500 MHz cache speeds instead of 3000 wouldn't have a higher speed... they'd be more likely to die!

How do cache hits affect the life of the CPU more than the main clock on the CPU?

Cache is an area of the chip without any monitoring, and because of that, it is hard to tell what's going on there until you have a CPU that is starting to die, and you then figure out which voltage domain needs boosting in order to increase stability. Often, with my own chips, it has been the CPU cache that goes first, not the cores. Why or how... I dunno.
 
Joined
Oct 2, 2004
Messages
13,791 (1.93/day)
I literally don't understand what you just said for the second quote about cache... Because then you just confirmed what I said in the third one...
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,232 (2.62/day)
I literally don't understand what you just said for the second quote about cache... Because then you just confirmed what I said in the third one...
All chips in the HEDT space are the same CPU, with some parts disabled. If it were speed alone that caused cache to degrade, then Intel would not have some chips with higher cache speeds than others... it would be the same across every CPU, since they are all the same physical chip.

However, cache does seem to be the first to go, but I do not believe it is due to clock alone. What causes electro-migration (which is what kills chips) is not the clock... it's the current. Sure, you could argue that the current drawn is a result of the clock, but it is not exactly that simple.

Also, cache voltage is not the same for every chip. It varies, just as core voltage and VCCSA do.


Every Intel CPU I have killed (at least one from each generation) has been due to cache-related problems. Some of these CPUs were at stock, some were overclocked. I actually had two 3770K CPUs die within weeks of each other, one overclocked and one at stock (they were from the same batch). Both were resurrected for a short while by increasing the cache voltage.

Cache voltage isn't just for the CPU cache though; it is also for the ring bus that connects the CPU's parts together. The L3 in these Intel CPU designs is part of the ring bus, but I actually think it is that bus that dies, not the cache. It just so happens that to the end user, these are one and the same thing.

(pic stolen: http://www.qdpma.com)

 
Joined
Oct 2, 2004
Messages
13,791 (1.93/day)
I got all of this stuff locked to a fixed voltage now. And it's the stock voltage. Are you saying locking all voltages to low values should give you the ability to OC the cache with a far lower chance of killing it? Because in my case, nothing can go higher since nothing is set to AUTO anymore; it can only result in system instability or a crash.
 
Joined
Dec 31, 2009
Messages
19,366 (3.71/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
That is the same concept with anything... yes. IN GENERAL, more voltage should yield faster clocks. There are of course other variables involved.
 
Joined
Oct 2, 2004
Messages
13,791 (1.93/day)
No, I'm always thinking the other way around: how much can I get out of something with the minimum voltage possible? Which is why I'm wondering what the reason is for the cache degradation everyone is warning about. At first I thought it was just voltage, but then someone here said it's just the clock of the cache. Might have been cadaveca, not sure. So, I'm unsure what it is exactly now. Of course I want to clock the hell out of the cache as well, because why the hell not, I've got everything else overclocked. But I really don't want to kill it too soon because I intend to run it for a while.
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,232 (2.62/day)
I got all of this stuff locked to a fixed voltage now. And it's the stock voltage. Are you saying locking all voltages to low values should give you the ability to OC the cache with a far lower chance of killing it? Because in my case, nothing can go higher since nothing is set to AUTO anymore; it can only result in system instability or a crash.
Why do you think, when Intel has these chips run with certain settings, that overriding these settings is a good idea?

To make testing easier, OK. Because you want to OC like on past platforms, OK. I understand these. But they're the wrong approach.

When you set a voltage, what you are really doing is changing the waveform that the signaling rides on. Running at higher speeds can cause this waveform to collapse, so we increase the voltage (and with it the amplitude of the waveform), so that when current draw causes the wave to drop, it doesn't fall so far that the CPU can't read it.

See this picture to illustrate: [attached waveform image, showing jitter/noise increasing with speed, not included]
You see that waveform? That's what the signaling for CPUs and memory looks like. The peaks and valleys are the 1s and 0s. As shown in that pic, as you increase speed, you increase the jitter/noise. Although that example is for graphics memory, the same applies to system RAM, to cache, and to CPUs. I am most definitely NOT saying what you posted above. I am saying EXACTLY what I meant. Anyway, increasing the voltage makes it easier to "see" the peaks and valleys through the noise. However, because the noise is so high, you hit a limit pretty quickly, and voltage stops helping.

The correct approach to cache/ring clocking is to leave the default voltage behaviors in place and let it scale the multiplier and voltage according to usage, since that will minimize electro-migration. You can adjust the speed and voltage a bit higher, sure, but keep in mind, cache is memory too! As such, you can't just increase the speed like with system RAM. There are timings that need adjustment as well! Yet we do not have access to those timings, so what we can do is very limited, just like it would be with system RAM if we didn't adjust timings at all.

Is this getting more complex and harder to understand? It should be, since understanding what's going on is really a very complicated thing.
 
Joined
Oct 2, 2004
Messages
13,791 (1.93/day)
It's not complicated, I understand the basics of all this stuff. What I'm asking about is just the cache degradation everyone always freaks out about when you mention it. What conditions contribute to its degradation the most? Is it just voltage, or is it clock speed, or a mix of both? What I'm asking is whether the degradation is almost nonexistent if you run it at a higher clock without any voltage increase. Of course there will be some, since it's running at a higher clock, like everything. But how significant would it be? That's what I'm asking. Cache is no different than other things when it comes to overclocking: you get a specific clock range to work with at a given voltage. It's why I have 4.5 GHz on all 6 cores using just 1.125 V. I could have just set it to 1.3 V and left it at that. But I went to the length of testing how low it would go while still being fully stable. It's cooler, uses less power, but runs at the same high clock.
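For what it's worth, that "how low can it go while staying stable" search is basically a bisection. A rough sketch of the bookkeeping in Python; the is_stable callback here is only a stand-in for whatever stress test you actually run, so treat this as a note-taking helper, not something that can touch the BIOS:
Code:
def lowest_stable_voltage(unstable_lo, stable_hi, is_stable, step=0.005):
    """Bisect between a known-unstable low voltage and a known-stable high one."""
    while stable_hi - unstable_lo > step:
        mid = round((unstable_lo + stable_hi) / 2, 3)
        if is_stable(mid):
            stable_hi = mid        # still stable, try lower next
        else:
            unstable_lo = mid      # crashed/failed, needs more voltage
    return stable_hi

# Fake example: pretend anything at or above 1.125 V passes the stress test.
print(lowest_stable_voltage(1.000, 1.300, lambda v: v >= 1.125))   # converges just above 1.125 V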

Well, I'd apply the same logic to the cache then. Keep the voltage as low as possible and try to get the cache clock higher than the stock 3 GHz with that. Now, as you say, when the clock is too high and the voltage too low, distinguishing the waveform becomes a problem and you experience stability issues. But what if I don't? Now, this is what I'm asking here in relation to cache degradation. Do you get what I'm saying here? I'd say voltage is the biggest contributor, but I'm still asking here...
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,232 (2.62/day)
Well, I'd apply the same logic to the cache then. Keep the voltage as low as possible and try to get the cache clock higher than the stock 3 GHz with that. Now, as you say, when the clock is too high and the voltage too low, distinguishing the waveform becomes a problem and you experience stability issues. But what if I don't? Now, this is what I'm asking here in relation to cache degradation. Do you get what I'm saying here? I'd say voltage is the biggest contributor, but I'm still asking here...


Like I said above, I am not sure what causes what seems to be cache death. It could simply be a single ring stop that ceases to function, maybe a specific area... without advanced microscopes and crap to see what's really going on, there is no real way to know. But what I can say is that for all these dead chips, boosting the ring/cache voltage let them live a bit longer. And like I said, even the stock-clocked CPU died in similar fashion. So it's hard to avoid something you are unaware of...

What is obvious is that the cache is a large portion of the chip's physical surface, and does consume a fair bit of power (which you can easily see by the power increases when clocking it up), but we don't have much in the way of monitoring this domain, so we don't know exact temps or anything, and without any sort of feedback other than stability, it is nigh on impossible to know how far is too far. With 3 GHz stock, going up to 3.6 GHz is a healthy 20% boost, 4 GHz is 33%... how often can you clock anything else up 30+%? And does this 30% increase have a tangible benefit? That's where I like to stop off... when there's no benefit. Most things will do 10%. GPUs, CPUs, memory, that's pretty standard. So decide how far you want to push. You know there is a risk, and that there is no real way to measure that risk, so then maybe you understand why my suggestions about cache clocking seem rather conservative.
 
Joined
Oct 2, 2004
Messages
13,791 (1.93/day)
boosting the ring/cache voltage let them live a bit longer

Aaaaand now I'm confused again... How is the cache the only thing I've ever come across that's supposed to live longer with more volts? Or are you saying that becomes the case when it degrades and you have to compensate for it with even more volts? Less voltage usually means a longer life for components. I'm trying to get this straight so I know whether I can start working on undervolting the cache to lower heat and extend its life further or not. I'm planning on keeping it stock clocked, but I would try to play with the voltage if it makes sense. The cores already run at voltages lower than the CPU does at stock in AUTO mode, so...
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,232 (2.62/day)
Or are you saying that becomes the case when it degrades and you have to compensate for it with even more volts?

This... and only this. I've never had cores or a memory controller degrade/die, so those seem very safe to push.

BTW, lowering voltage doesn't mean something will live longer. It could lead to higher current draw, and current is deadly. Intel themselves have said that undervolting is bad, too. You don't want to run the lowest voltage possible... you want to have a bit of overhead. Temperature isn't a bad thing either, as stated by Intel. Overheating CPU cores will lead to throttling and eventual shutdown long before a CPU reaches a dangerous level. That's why they sell CPUs with coolers that can have 90°C load temps.

On some boards (e.g., ASUS), when you OC, current limits are removed automagically, and that is bad. Intel has settings for this for a reason... to prevent current overdraw.
 