
Mismatched Memory Speeds for upcoming CPU Reviews?

Use?

  • Same memory speed for all (DDR5-6000 CL28 or CL38)

    Votes: 26 41.9%
  • Mismatched but fair memory speeds (actual freq and timing TBD)

    Votes: 32 51.6%
  • I don't care

    Votes: 4 6.5%

  • Total voters
    62

W1zzard

Administrator
Staff member
In the past we've run all processors in CPU reviews at the same memory speed.

For DDR5, this was DDR5-6000 36-36-36-76, and people have complained that these settings are too conservative.

So for this round I'm thinking, give every platform what works well with it, but still make it as fair as possible:

[attached table: proposed per-platform memory configurations]

This would be DDR5-6000 CL30 scaled to other frequencies, while keeping the actual nanosecond timings the same.
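For reference, "same nanoseconds" here just means scaling the CAS latency linearly with the data rate. A minimal sketch of the arithmetic (helper names are my own, not anything from the review methodology):

```python
def cas_latency_ns(mts: int, cl: int) -> float:
    """Absolute CAS latency in nanoseconds.
    DDR transfers twice per clock, so the I/O clock in MHz is MT/s divided by 2."""
    return cl * 2000 / mts

def equivalent_cl(target_mts: int, ref_mts: int = 6000, ref_cl: int = 30) -> int:
    """CL at target_mts that keeps the same absolute latency as the reference kit."""
    return round(cas_latency_ns(ref_mts, ref_cl) * target_mts / 2000)

print(cas_latency_ns(6000, 30))   # 10.0 ns for DDR5-6000 CL30
print(equivalent_cl(8000))        # CL40 keeps the same 10 ns at DDR5-8000
print(equivalent_cl(5600))        # CL28 at DDR5-5600
```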

I think this is slightly unfair to AMD though, because Intel gets higher frequency, and still same nanoseconds. On the other hand, AMD could have engineered their MC to be able to run higher speeds? But Intel could have engineered their CPU to work better with lower frequency?

How about this?

[attached table: revised per-platform proposal]
 
No matter what you do, people are going to whine that you are biased against either AMD or Intel. The correct thing to do is instruct the mods to give those people a vacation.
 
IMO going for the same frequency makes more sense, especially given that you often use frequencies that are "standard" in the sense that they're what people will buy and use out of the box.

Those who can tune their memory or are interested in better performance can look into overclocking avenues and get an idea of the performance uplift they may see with faster memory.

A note on faster memory, with one or two extra tests to show the potential uplift, is already more than enough, and that's something you already do as well.

The only thing I can think of is raising the frequency from 6000 to 6400 for all new products and leaving it at that, as long as it works without fiddling much with settings.
 
from 6000 to 6400 to all new products and leave it at that
It will be impossible on my AMD Zen 5 CPUs to run at that speed in 1:1 mode without voltage increases (which hurt their efficiency score). There's also no way to ensure a lucky sample for pre-launch seeding. Even 6200 will be challenging on some, so 6000 is the only option that will realistically work on all CPUs, and near-future ones.
 
Price might also be worth considering, especially in lower-tier CPU reviews. I don't think anyone buying an i5 is gonna spend almost double on RAM to get 7600 instead of 6000 (and rightfully so, imo).
 
I personally don't care either way because I think there are arguments both ways.

On one hand, running Intel systems at a far lower frequency than they can easily handle may misrepresent their performance a bit, especially in non-gaming applications. In games, this probably matters a bit less, since latency is typically more important there, and you seem to have tried to keep that similar.

On the other hand, how many people care about the higher speed results if those speeds have prices that make few choose them?

It seems to come down to which market of buyers you are more concerned with: the minority who may min-max tweak RAM on an Intel Core i9/NVIDIA x90 system, or the "buy the sweet-spot price/frequency, set XMP, and call it a day" masses?
Price might also be worth considering, especially in lower-tier CPU reviews. I don't think anyone buying an i5 is gonna spend almost double on RAM to get 7600 instead of 6000 (and rightfully so, imo).
This is a consideration too; faster RAM becomes more unlikely in the case of mainstream SKUs, yet if you test those with slower speeds and higher SKUs with faster speeds, it may create an imbalance in Intel's own lineup results.
 
You're stuck between a rock and a hard place, so "fair" in terms of the same price might be the easiest way out.

Intel definitely benefits from DDR5-8000 but that stuff costs so much that it seriously distorts the cost of the CPU being reviewed because you cannot buy a CPU and run it using no RAM.

Testing Intel with 6000 and 8000 is probably the best answer but that's a lot of extra work and as mentioned already, it's irrelevant to the overwhelming majority of buyers interested in performance/$ at any SKU except the flagships.

Whatever answer you pick, you're not going to please everyone, and you shouldn't have to - just say why you picked what you did and ignore the haters. Making it fair in terms of timings is definitely not fair in terms of price, but as long as you point that out in the conclusion and testing methodology I don't think anyone complaining has a leg to stand on.
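To put rough numbers on that distortion (the prices and the ~5% uplift below are made-up illustrative assumptions, not measured data):

```python
def perf_per_dollar(relative_perf: float, cpu_price: float, ram_price: float) -> float:
    """Performance per dollar of the CPU + RAM combo (RAM cost included,
    since you cannot run the CPU with no RAM)."""
    return relative_perf / (cpu_price + ram_price)

# Hypothetical $400 CPU with a cheap vs. an expensive kit
value_6000 = perf_per_dollar(100.0, 400.0, 80.0)   # baseline DDR5-6000
value_8000 = perf_per_dollar(105.0, 400.0, 250.0)  # DDR5-8000, assumed ~5% faster

print(f"{value_6000:.3f} vs {value_8000:.3f}")
```

Even with the fast kit winning outright on performance, its perf/$ comes out lower because the kit costs roughly three times as much.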
 
IMHO the "perfect" approach is to do all of this:

• Test with the most common RAM speed for this price range (5600 C38 for low i5/Ultra 5s; 7000+ for top tier Intel CPUs; 6000 C30 for AM5 SKUs).
• Test with the fastest RAM compatible with any CPU (6000 C30 or something).
• Test with RAM at JEDEC speeds.
• Test with ridiculously fast RAM.

But it has a major flaw: it'll cost you hundreds upon hundreds of extra hours' worth of work.

If I were in your shoes, I'd just do the middle-ground testing and add "YMMV if you run different RAM" in the conclusion. People whining about you testing X but not Y will emerge no matter what, and 6000 C30 tests are already informative enough.
 
I think this is slightly unfair to AMD though, because Intel gets higher frequency, and still same nanoseconds. On the other hand, AMD could have engineered their MC to be able to run higher speeds? But Intel could have engineered their CPU to work better with lower frequency?
Why make that your problem? Both AMD and Intel made their own decisions based on their own marketing goals.

I agree with @Assimilator and no matter what you do, someone will complain. So I say, do nothing. Test the CPUs at their factory default settings.

How can Ford, Chevy, and Ram all claim to build the #1 pickup truck and all still be right? Because one can tow the heaviest trailer, another can carry the most bricks, and the other gets the best gas mileage. Oh, and then Toyota can claim #1 because they have the best 0-60 mph times. No clue about Nissan, but no doubt they have a claim too.

There is NO WAY you can really do a proper, "fair" or "perfect" A/B comparison here anyway. The only way that would be possible is if you could test on the exact same platform. That is, with the exact same motherboard with both AMD and Intel processors. And everyone knows that is impossible. At least with the trucks, you can run them on the same test track, tow the same trailer or carry the same load of bricks.

So IMO, the best you can do is "try" to pick equivalent motherboards, use the same RAM, same drives, same graphics solution (unless integrated) and same applications for your comparison tests. Then if there are any users who complain it is not fair, send them to the CPU maker.
 
Yeah, not possible, because it takes too much time
Exactly.

I'm more okay with sticking to DDR5-6000 for all platforms because that's the speed that people are actually buying. 32GB DDR5-6000 CL30 is like $80 right now, compared to $170-250 for 8000. Nobody is buying that unless they are "money no object, don't care what I'm buying" people, and those people don't read reviews anyway, they just buy the most expensive stuff that is compatible without caring about its value.

If Intel's application/game performance scaled more significantly with the move from 6000 to 8000, I'd be tempted to say that more expensive RAM would be more relevant. But whilst the gains are measurable, they completely break the performance/$ charts, and unless you're already buying an i9/Ultra 9, your money is better spent on jumping to a higher processor SKU than on trying to squeeze more out of an Ultra 5 or Ultra 7 by throwing $90-170 at faster RAM.
 
It will be impossible on my AMD Zen 5 CPUs to run at that speed in 1:1 mode without voltage increases (which hurt their efficiency score). There's also no way to ensure a lucky sample for pre-launch seeding. Even 6200 will be challenging on some, so 6000 is the only option that will realistically work on all CPUs, and near-future ones.
Eh, then keep it as it is, this would totally go against my idea of "not fiddling much".
Most users will just enable XMP/EXPO and leave it at that. Going too much into specifics would deviate from how most people use their systems. IMO your results are most representative of what the average joe will get.

Yeah, not possible, because it takes too much time
That'd be the closest you would get to good, and people would still complain that you did not test their specific 7333MHz setup with really specific timings :p
 
That'd be the closest you would get to good, and people would still complain that you did not test their specific 7333MHz setup with really specific timings :p
This is why it's not worth worrying too much about. You'll never please everyone and you'll kill yourself trying to...
 
Maybe for AMD at high memory speeds, just clearly state the 1:2 IF ratio reduction.

However, the Fabric clock is really important to AMD performance, just as CPU cache is important for Intel memory latency.

Perhaps the reviews for AMD and Intel should simply differ instead of using similar settings. It's not exactly an apples-to-apples comparison anyway.

My 2 pennies.

PS. I liked the reviews either way. Straight to the point, with unbiased comments. Perfect.
 
Two different kits for both, one at 6000 MT/s and one high-speed, maybe 7600 or higher. You might just need a clone of yourself to catch up, no big deal ;).
 
I agree that this is a “damned if you do, damned if you don’t” situation. I know a lot of people would argue to push every piece of HW as far as you can to see “maximized” performance, but that just isn’t what most consumers do. I enjoy your reviews specifically because they are very representative of actual common use scenarios. Nothing outrageous and extreme to the point of being unrealistic. As such, I think keeping what’s there is fine. It might hobble Intel chips a tad, but it’s not like 6000 on Intel is completely terrible and it keeps the playing field even while also being somewhat plausible.
 
Am I right in thinking that the only application that really benefits hugely from bandwidth on Arrow Lake is AI inference?
So yes, it matters for AI inference but why are you running that on a CPU and not a GPU?!
 
No matter what you do, people are going to whine that you are biased against either AMD or Intel. The correct thing to do is instruct the mods to give those people a vacation.
Not only that, I can state with the utmost confidence that @W1zzard is biased against both AMD and Intel. And Qualcomm. And Apple. And even Cyrix :P
 
For me, I'm running two kits with new and old Micron ICs. Not intentionally: I thought a stick of the older RAM had started to become unstable, but it turned out to be W11 24H2 after release. I threw the old sticks in, which stresses the IMC, so the maximum I can get is 5866 with tightened primary timings of 34-42-42-76. It doesn't seem to bother games, and it has the added benefit of approaching the theoretical bandwidth because of the extra ranks and banks.
 
Given that AMD benefits so greatly from memory timings, 6000 CL32 seems like a good compromise. Perhaps consider throwing the timings and speed out the window, running some benchmarks to get the raw read/write speeds and latency, and just going with whatever works for both that's in the same range.
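A crude sketch of that idea in Python: not a calibrated tool like AIDA64's memory benchmark, just a bulk-copy timing to show the "measure, then match" principle:

```python
import time

def copy_bandwidth_gbs(size_mb: int = 64, runs: int = 5) -> float:
    """Rough memory-copy throughput in GB/s.

    Counts one full read of src plus one full write of dst per run,
    and keeps the best (fastest) run to reduce scheduling noise.
    """
    src = bytearray(size_mb * 1024 * 1024)
    dst = bytearray(size_mb * 1024 * 1024)
    best = float("inf")
    for _ in range(runs):
        t0 = time.perf_counter()
        dst[:] = src  # bulk slice assignment copies the whole buffer
        best = min(best, time.perf_counter() - t0)
    return 2 * size_mb / 1024 / best

print(f"~{copy_bandwidth_gbs():.1f} GB/s")
```

You could run something like this (or, more realistically, AIDA64) on each platform at several kit settings and pick configurations whose measured bandwidth and latency land in the same range.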
 
With an X3D chip, RAM clocks and timings hardly make any difference. @ir_cow showed this on the Discord server.
 
6000 is the most realistic and universally accepted use case and that's what should be used in my opinion. It works on all DDR5 CPU generations and platforms and is fairly representative of stock performance on all vendors.
 
With an X3D chip, RAM clocks and timings hardly make any difference. @ir_cow showed this on the Discord server.
I was surprised it was so little. Once the review drops, you'll see all the other games too. Everything could be considered margin of error for FPS, though each game is run 3 times for an average.

The only areas where I saw improvement are Blender and Cinebench. Still, it isn't enough to make a major impact.

3DMark and AIDA64 also show distinct differences per memory kit, which doesn't really matter because they're synthetic benchmarks. This would indicate that yes, 6000 CL26 is superior for X3D, but we need much more powerful video cards to see that advantage. I don't think the RTX 5090 is enough, and certainly not the RTX 4090.

If you have a 9800X3D, buy the cheapest RAM you can find and spend more on the video card. I know that's not what memory vendors want me to say, but it's the truth.
 
With an X3D chip, RAM clocks and timings hardly make any difference. @ir_cow showed this on the Discord server.
CPU reviews are not about games only. Plenty other workloads will run out of the 3D cache and be sensitive to RAM settings.
 