
FPS Scaling with DDR5 - Sharing Results

ir_cow

Staff member
Joined
Sep 4, 2008
Messages
5,064 (0.83/day)
Location
USA
I did a few tests today to see what the benefits of running high-speed kits may be. While I only collected the full data set for two games, the results end up being more or less game dependent, just like DDR4. The talk tends to be only about GPU bottlenecks and CPU bottlenecks; however, memory can play a role in the 1% lows and the overall frame rate.

This is a very basic test at stock settings. After I did this, I re-ran 6000 CL30 with the CPU fixed at 5.3 GHz and found no gains in Cyberpunk and ~4 fps AVG in Far Cry 5. This means clock speed and the GPU play a more vital role in the overall frame rate than memory does. If you are trying to squeeze out the last bit of performance in competitive gaming, it makes sense to spend $$$ on RAM. Otherwise, a better video card would ultimately yield a higher frame rate until you become CPU bound; at that point the cycle continues.

If anyone was wondering what 4K would look like: it makes absolutely no difference what the memory speed is. It is completely GPU bound, as expected.

DDR5-6800 32-42-42-72 1T
DDR5-6600 32-39-39-76 1T
DDR5-6400 32-39-39-76 1T
DDR5-6000 30-36-36-76 1T
DDR5-6000 40-40-40-76 2T

Cyberpunk 2077 1920 x 1080 - Ultra Preset
i9-12900K (Stock). NVIDIA RTX 3080 Founders Edition (Stock)
         6800 CL32 (XMP OC)  6600 CL32 (XMP)  6400 CL32  6000 CL30  6000 CL40  4800 CL40
AVG      124.73              123.97           123.52     123.77     124.29     121.8
Min      88.44               87.39            86.01      87.16      86.73      85.91
Max      176.15              170.81           168.57     170.22     168.01     164.7
Time     64.24               64.24            64.25      64.26      64.25      64.25
Frames   8013                7964             7942       7953       7986       7826

FarCry 5 1920 x 1080 - High preset
i9-12900K (Stock). NVIDIA RTX 3080 Founders Edition (Stock)
         6800 CL32 (XMP OC)  6600 CL32 (XMP)  6400 CL32  6000 CL30  6000 CL40  4800 CL40
AVG      216                 216              215        214        210        198
Min      174                 176              168        173        171        152
Max      268                 269              270        270        268        265
Time     -                   -                -          -          -          -
Frames   12752               12726            12692      12647      12410      11694
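For anyone double-checking the tables, the AVG row is just total frames divided by benchmark time. A quick sketch, using the 6800 CL32 column of the Cyberpunk table above:

```python
# AVG fps = total frames / benchmark time (seconds).
# Values taken from the 6800 CL32 (XMP OC) column of the Cyberpunk table.
frames = 8013
time_s = 64.24
avg_fps = frames / time_s
print(f"{avg_fps:.1f} fps")  # ~124.7, matching the reported AVG of 124.73
```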
 
This is great -- I've noticed some interesting results with tRFC tuning. Any way to test those?
 
I don't have time right now, but I'll look into it. On paper it seems like it makes a difference; I'm not sure it would be detectable in real-world applications. Maybe in games more sensitive to memory, like Far Cry 5 and Borderlands 3?
 
Is there not roughly a -7% difference in AVG/Min in FC5 from 4800 CL40 to the rest, though? I'm sure more games would yield similar results. On the whole there is little difference, but those who wish to squeeze every last bit of performance from their systems will argue it's worthwhile overclocking RAM and tweaking timings, and it is also fun* (sometimes) :roll:
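As a sanity check on that estimate, the drop can be computed straight from the AVG/Min values in the Far Cry 5 table (it comes out nearer -8% on AVG):

```python
# Relative drop of the 4800 CL40 kit vs the 6800 CL32 kit in Far Cry 5,
# using the AVG and Min values from the table above.
avg_fast, avg_slow = 216, 198
min_fast, min_slow = 174, 152
avg_drop = (avg_slow - avg_fast) / avg_fast * 100
min_drop = (min_slow - min_fast) / min_fast * 100
print(f"AVG: {avg_drop:.1f}%  Min: {min_drop:.1f}%")  # AVG: -8.3%  Min: -12.6%
```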
 
The way I see it, the more you are CPU bound, the more memory helps. This means that to get the most out of it, you need to lower the graphical settings or have a video card that is not being fully utilized in that scenario. Basically competitive gaming. These 360 Hz 1080p monitors exist. You can have a monster video card and still be limited in frames because of the CPU. Once you max out the CPU clock speed, you are only left with memory to get those last few fps.
 
You are mostly GPU bound; try medium details and RAM would scale more :)
 
Why not try 1440p instead? That would give a better perspective on how the RAM settings scale. You need to make sure the GPU is utilized 100%, otherwise it is false data.
You can actually tell that's the case:
the CL30 6000 config has fewer frames than the CL40 6000 in CP, at first glance.
 
No, 1440p makes it even more GPU bound. 720p at low details is much better if you want to test RAM/CPU scaling, but 1080p medium is more real-life.
 
Seems to be GPU bottlenecked.
Could you post an AIDA64 mem bench of that 6800 1T on your Tachyon? Nice settings =)
 
No, 1440p makes it even more GPU bound. 720p at low details is much better if you want to test RAM/CPU scaling, but 1080p medium is more real-life.
If that is the case, then the test does not make a lot of sense. Memory with tighter timings comes out worse than memory with loose timings. Weird.
 
I think it's run-to-run variation. He hasn't tweaked subtimings, which can have an effect as well; some might be slower at high speed on auto.
 
Why not try 1440p instead? That would give a better perspective on how the RAM settings scale. You need to make sure the GPU is utilized 100%, otherwise it is false data.
You can actually tell that's the case:
the CL30 6000 config has fewer frames than the CL40 6000 in CP, at first glance.
If the GPU is utilized 100%, then there will be zero scaling from memory. It's like overclocking the CPU: if the GPU is your bottleneck, then it won't do anything.

If you are completely CPU bound, there should be a 20+% difference between no-XMP 4800 C40 and fully tuned 6000 C30.

No, 1440p makes it even more GPU bound. 720p at low details is much better if you want to test RAM/CPU scaling, but 1080p medium is more real-life.
Actually, low resolution with ultra details is the way to go. Some settings affect the CPU by a lot, and turning everything to low skews the results. A prime example is RT: it pummels the CPU hard. What I run for CPU/memory benching in Cyberpunk is everything ultra + RT at 720p with DLSS Ultra Performance.
 
Yes, you are right: some settings can be CPU dependent, and some are very GPU dependent. Resolution itself is what scales most with the GPU.
 
Good sciencing there.

If I could be bothered I'd graph absolute latency in nanoseconds against FPS but I suspect the results would be "lowest latency wins".

Even the lowest-bandwidth config you tested (DDR5-4800) has more bandwidth than games can use, so it just comes down to latency, same as DDR4 vs DDR5, provided the DDR4 is fast enough to exceed the bandwidth requirements of the application/game being tested. IIRC DDR4-3600 is enough bandwidth that you need to start hunting around to find things which respond to more bandwidth.
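The latency graph is easy enough to approximate from the timings alone: true CAS latency in nanoseconds is CL cycles divided by the real clock, i.e. 2000 × CL / (MT/s). A sketch for the kits tested (standard formula, not measured values):

```python
# True CAS latency in ns: CL / (data rate / 2) * 1000 = 2000 * CL / rate
kits = [
    ("DDR5-6800 CL32", 6800, 32),
    ("DDR5-6600 CL32", 6600, 32),
    ("DDR5-6400 CL32", 6400, 32),
    ("DDR5-6000 CL30", 6000, 30),
    ("DDR5-6000 CL40", 6000, 40),
    ("DDR5-4800 CL40", 4800, 40),
]
for name, rate, cl in kits:
    print(f"{name}: {2000 * cl / rate:.2f} ns")
# DDR5-6800 CL32 comes out lowest (~9.41 ns) and DDR5-4800 CL40 highest
# (~16.67 ns), which lines up with 4800 CL40 being slowest in both games.
```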
 
Quite a few factors determine where in the system a bottleneck lies... not to mention, it's constantly shifting depending on what the software and environment are doing.

As long as it's smooth and doesn't stutter, the additional few FPS here and there are not noticeable.
 
People, this was a quick test for "casual" gamers. If we really wanted to be precise: lock the CPU so it's constant, lock the GPU as well, and use the lowest resolution and lowest graphics. I think it would be unplayable. Yet I know people who play R6S like that just to get the frames.

The takeaway is still that once you become CPU BOUND, memory does provide measurable improvements in frame rate in some games.

I've done this before with Ryzen and DDR4. 720p is just for academics and worthless besides showing what is "best". I personally play at 2K max settings, which is still GPU bound in most games.
 
And realistically, if you're not GPU bound anyway, you should be buying a better monitor so that you are GPU-bound.

GPUs are the stupidly-expensive bit these days, don't waste them!
 
Ah yes. If you can output the frames but the monitor isn't displaying them, what's the point, eh?
 
You guys should revive this thread for Zen 4 and Alder/Raptor Lake.
 
720p can be quite good for measuring FSR/DLSS performance :)
 