
DDR5 Memory Performance Scaling with Alder Lake Core i9-12900K

Looking forward to those 8000 MHz CL40 modules...
 
Hmmm, so an apples-to-apples comparison would be DDR4-3200 22-22-22 (JEDEC, 1.2 V) vs. DDR5-3200 22-22-22?
And same number of ranks. And same command rate (1T or 2T). Aaand same gear ratio. And even then, one thing is inevitably different: two 64-bit vs. four 32-bit channels. That's *IF* Alder Lake IMC actually makes use of four independent channels - it's not mandatory, and it's not to be taken for granted.
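On the two 64-bit vs. four 32-bit channels point: a minimal back-of-the-envelope sketch (function name and the DDR5-3200 figure are illustrative, not from the review) showing that the split changes access granularity, not raw peak bandwidth:

```python
# Peak transfer rate: transfers/s * bytes per transfer * number of channels.
# A DDR4 DIMM exposes one 64-bit channel; a DDR5 DIMM splits the same total
# width into two independent 32-bit subchannels.

def peak_bandwidth_gbs(mt_per_s: int, channel_bits: int, channels: int) -> float:
    """Theoretical peak bandwidth in GB/s for a given channel layout."""
    return mt_per_s * 1e6 * (channel_bits / 8) * channels / 1e9

ddr4 = peak_bandwidth_gbs(3200, 64, 2)  # two DDR4 DIMMs: 2 x 64-bit channels
ddr5 = peak_bandwidth_gbs(3200, 32, 4)  # two DDR5 DIMMs: 4 x 32-bit subchannels
print(ddr4, ddr5)  # both 51.2 GB/s -- same raw bandwidth, finer granularity
```

So at equal transfer rates the totals are identical; any win from four subchannels would have to come from the IMC actually scheduling them independently.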
 
Best review out there yet on DDR4 vs DDR5, very nice. Looks like we have to wait for DDR5-6000+ with some lower latencies than current to become readily available for it to really be worth it over a good DDR4 kit. Making a guess, 2H 2022 for 2nd generation of DDR5.
 
Give DDR5 another year then I'll switch.
 
Excellent review. I'm pleasantly surprised by the results of the DDR5-4800 CL30 configuration.
 
And same number of ranks. And same command rate (1T or 2T). Aaand same gear ratio. And even then, one thing is inevitably different: two 64-bit vs. four 32-bit channels. That's *IF* Alder Lake IMC actually makes use of four independent channels - it's not mandatory, and it's not to be taken for granted.
Indeed. Shouldn't that in theory yield more performance if implemented properly?
 
Hmmm, so an apples-to-apples comparison would be DDR4-3200 22-22-22 (JEDEC, 1.2 V) vs. DDR5-3200 22-22-22?

Reminds me of the old AMD CPUs that had a memory controller that could handle two generations of RAM.
Even just a DDR4-2400 CL36 (and/or DDR4-2800 CL36, DDR4-3200 CL36) as a further scientific like-for-like would've been interesting, tbh.
 
Seeing this really does make it apparent how hard a good DDR5 kit will be to get for a while. This isn't the first place that's struggling to get much beyond the 6000 CL36 mark.

I hope when I get some time I'll see what's needed to get me to Gear 1 and whether it benefits much. I'm sure it'll be better, but whether it's actually worth the trouble is another story. Either way, DDR4 was the right call for now.
Still, it's roughly comparable to DDR4-3000 CL18 in triple or quad channel from a relative standpoint, given how DDR5 is set up relative to DDR4 with the IMC. It's actually somewhat better than I was expecting at this early stage for DDR5.
 
In reference to the article topic:
I'm going to seem like a smart-a$$ here, but really, who called it? This happens with every new generation of memory. DDR4 will be the best choice for at least 18 to 24 months until DDR5 is refined enough and costs have come down.

People, stick with DDR4 for now. Enjoy!
 
In reference to the article topic:
I'm going to seem like a smart-a$$ here, but really, who called it? This happens with every new generation of memory. DDR4 will be the best choice for at least 18 to 24 months until DDR5 is refined enough and costs have come down.

People, stick with DDR4 for now. Enjoy!
Someone's got to get the ball rolling, though; like AMD adopting PCIe 4.0, which led to major investments into consumer-grade PCIe 4.0 hardware despite PCIe 3.0 having been "good enough".

This time, it's Intel's turn (esp. after getting some flak for not having PCIe 4.0 mobos out until well after AMD's shift). AMD looks to be playing the waiting game for at least a year, if their timelines hold up, and will release a DDR5 CPU as DDR5 matures. Given that most DDR4/DDR5 comparisons show a relative dead heat once cost and tuning time/effort are included, AMD can afford to give up that leadership (being first to include whatever new tech).
 
I have a 2x16 GB 3600 CL16 dual rank kit. I hope I can stick with it for a while.

I wish you had done a 3600 CL16 vs. 4800 CL30 frametime comparison. That would be most valuable.


Overall it seems that current games just do not need that much bandwidth. And I do not expect that to change soon, considering console specs and the fact that most games are targeting 60 (even 120) FPS on those machines.
 
Someone's got to get the ball rolling, though; like AMD adopting PCIe 4.0, which led to major investments into consumer-grade PCIe 4.0 hardware despite PCIe 3.0 having been "good enough".

This time, it's Intel's turn (esp. after getting some flak for not having PCIe 4.0 mobos out until well after AMD's shift). AMD looks to be playing the waiting game for at least a year, if their timelines hold up, and will release a DDR5 CPU as DDR5 matures. Given that most DDR4/DDR5 comparisons show a relative dead heat once cost and tuning time/effort are included, AMD can afford to give up that leadership (being first to include whatever new tech).
There is always going to be the section of buyers that want the best, can afford to pay for it and will be the ones who drive tech forward. For everyone else, it's worth their time to balance out cost vs benefit.
 
Good review, though I feel the graphs should be easier to read. In any case, I feel that most people won't feel/observe any tangible difference between DDR5 and a decent-spec DDR4 kit. While the charts may make it sound like a double-digit percentage difference, that can be a small difference when looking at the actual time to complete a task. Considering DDR5 is (1) too rare and (2) too expensive, I feel it makes sense to wait till the second generation or late next year. The only platform that uses DDR5 right now is Intel Alder Lake, and it's not exactly an easy transition.
 
Computer technology progress is getting so slow these days. I have a DDR3 system from 2013 that can easily compete in some of these benchmarks with both DDR4 and DDR5. DDR3 was the last major performance leap we had in PC memory performance.
 
Thanks man. As always very extensive, but "typical German" :D ... Love the general overview, and it shows/proves what was expected for the first few months of the switch from one generation of system memory to the next :)

None from me. I think they're fine. Anyone who takes the time to read the legend that is provided to establish context will never have any problem understanding the graphs.
Same. I think they're fine. It takes far less time to read/understand what's shown here than Wizz needed to collect and showcase the data :D
 
Computer technology progress is getting so slow these days. I have a DDR3 system from 2013 that can easily compete in some of these benchmarks with both DDR4 and DDR5. DDR3 was the last major performance leap we had in PC memory performance.
I would not be so sure of that. DDR3 might be enough for the CPU you have, but a Haswell i5 will not do you much good in modern games.

And you seem to be misunderstanding the phrase "performance leap". If you look at memory bandwidth benchmarks, the performance leap is huge (basically linear), way bigger than with any previous memory, which always started out at low clocks.
The problem is that pretty much nothing can utilize this performance. Bandwidth has to be utilized, just like cores in a CPU. A game might be able to use all 16 cores in a CPU, but it will not get more performance from doing so, because there is simply not enough workload.

DDR4 was not very useful with quad core CPUs at launch. But then came more demanding games that started utilizing 6 and 8 cores, and faster DDR4 made a huge difference to minimum framerates in those games.
Zen 3 and Alder Lake also have a huge amount of cache, which reduces the benefit of faster memory.

DDR5 will become useful, but it will take some time.
 
I would not be so sure of that. DDR3 might be enough for the CPU you have, but a Haswell i5 will not do you much good in modern games.

And you seem to be misunderstanding the phrase "performance leap". If you look at memory bandwidth benchmarks, the performance leap is huge (basically linear), way bigger than with any previous memory, which always started out at low clocks.
The problem is that pretty much nothing can utilize this performance. Bandwidth has to be utilized, just like cores in a CPU. A game might be able to use all 16 cores in a CPU, but it will not get more performance from doing so, because there is simply not enough workload.

DDR4 was not very useful with quad core CPUs at launch. But then came more demanding games that started utilizing 6 and 8 cores, and faster DDR4 made a huge difference to minimum framerates in those games.
Zen 3 and Alder Lake also have a huge amount of cache, which reduces the benefit of faster memory.

DDR5 will become useful, but it will take some time.
Higher memory bandwidth in newer generations comes at the cost of a huge increase in memory latency (in clock cycles). Overall, DDR5 and DDR4 are better than DDR3, but in practice, in most scenarios, real-world performance is not that much better.
Let me explain with an example. In 2013, 2400 MHz CL10 was an average memory speed and dirt cheap (much faster DDR3 kits existed at that time). Now, in 2021, an average DDR5 kit is around 4800 MHz CL40. After 8 years, RAM speed has increased 100 percent, but on the other hand, memory latency has increased a whopping 300 percent. It means real-world performance can't be that much better. In my opinion, that's disappointing.
 
The math is not simple, though. Bandwidth and latency are two different things. DDR5 has similar overall latency because of much higher clock speeds. But higher bandwidth does not help in applications that do not need it.
Similar reason why new versions of PCI-Express are pretty much useless in gaming. And why VRAM bandwidth on modern graphics cards is not that helpful in old games. And why NVMe SSDs are marginally faster than SATA ones.

There is always some bottleneck that limits other components. System memory is not a bottleneck with current hardware and software.

DDR4 provided a huge improvement to frametimes and 1% lows a few years after launch. Same will happen with DDR5, but not any time soon.
 
The problem is that pretty much nothing can utilize this performance. Bandwidth has to be utilized, just like cores in a CPU.
Very true. Looking at the graphs to find out which applications scale linearly with bandwidth (4000 vs. 4800), the only ones I can find are Comsol and 7-zip compression. Both very well multithreaded, I suppose. But both also behave in a very weird way: when RAM speed drops from 4800 to 2400, the performance drops to less than one half.

Let me explain with an example. In 2013, 2400 MHz CL10 was an average memory speed and dirt cheap (much faster DDR3 kits existed at that time). Now, in 2021, an average DDR5 kit is around 4800 MHz CL40. After 8 years, RAM speed has increased 100 percent, but on the other hand, memory latency has increased a whopping 300 percent. It means real-world performance can't be that much better. In my opinion, that's disappointing.
Like it or not, latency in nanoseconds hasn't improved much since ... ever. New museum-grade modules that you can buy today (much easier to buy than DDR5, hah) are DDR-333 CL 2.5 or DDR-400 CL 3 or SDR-133 CL 2. All of those calculate to 15 ns.
With that in mind, it's a little unfair to say it's increased by 300%. DDR5 is again around 15 ns, your DDR3-2400 CL10 example is 8.3 ns, making DDR5 80% slower. As for DDR4, it becomes really costly once you get to 8.7 ns or below.
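The ~15 ns figures above fall out of simple arithmetic; here's a minimal sketch to check them (helper name is my own, and the divide-by-two assumes DDR, so SDR-133 CL2 would use the full 133 MHz clock instead):

```python
def first_word_latency_ns(mt_per_s: float, cas: float) -> float:
    """CAS latency in ns: CL cycles divided by the memory clock,
    which for DDR is half the transfer rate (MT/s)."""
    clock_mhz = mt_per_s / 2
    return cas / clock_mhz * 1000  # cycles / MHz -> microseconds -> *1000 -> ns

print(first_word_latency_ns(400, 3))    # DDR-400  CL3   -> 15.0 ns
print(first_word_latency_ns(2400, 10))  # DDR3-2400 CL10 -> ~8.3 ns
print(first_word_latency_ns(4800, 40))  # DDR5-4800 CL40 -> ~16.7 ns
```

The DDR5-4800 CL40 result lands right back at the historical ~15 ns plateau, while the tuned DDR3-2400 CL10 kit sits well below it, which is the 80% gap described above.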

Indeed. Shouldn't that in theory yield more performance if implemented properly?
Probably - or it wouldn't be worth the added complexity.

And yet, in a way, DDR5 is twice as slow, or half as fast, at same clock speed. How so? The minimum unit of data transfer between CPU and RAM is one cache line, which is 64 bytes. This amount of data is moved in:
- 8 transfers, or 4 clock cycles in DDR4, or 2 ns in DDR4-4000, which has a 64-bit channel
- 16 transfers, or 8 clock cycles in DDR5, or 4 ns in DDR5-4000, which has a 32-bit channel (if implemented properly).
I'm sure that a specific microbenchmark could be devised that could measure a significant difference in favour of DDR4. It would need to have a very bad pattern of memory access, keeping just one 32-bit channel active, while the other one(s) would be idle.
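The cache-line arithmetic above can be sanity-checked with a small sketch (helper name is my own; this models a single channel in isolation, ignoring any overlap between subchannels):

```python
def cacheline_transfer_ns(mt_per_s: float, channel_bits: int,
                          line_bytes: int = 64) -> float:
    """Time to move one cache line over a single memory channel:
    number of beats needed divided by the transfer rate."""
    beats = line_bytes / (channel_bits / 8)    # transfers per cache line
    return beats / (mt_per_s * 1e6) * 1e9      # seconds -> nanoseconds

print(cacheline_transfer_ns(4000, 64))  # DDR4-4000, 64-bit channel:  8 beats -> 2 ns
print(cacheline_transfer_ns(4000, 32))  # DDR5-4000, 32-bit channel: 16 beats -> 4 ns
```

This matches the 2 ns vs. 4 ns figures above: per channel, DDR5 takes twice as long per cache line, so it only breaks even when the independent subchannels are actually kept busy in parallel.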
 
The math is not simple, though. Bandwidth and latency are two different things. DDR5 has similar overall latency because of much higher clock speeds. But higher bandwidth does not help in applications that do not need it.

I think single-use benchmarks have long missed common scenarios. It's extremely difficult to find, for example, a gaming + streaming benchmark. Or a gaming + encoding-to-disk bench. Or, to look at something more practical, using Snagit to record video from an MS Teams meeting.

I think some of the differences between processors as well as memory subsystems are far more significant than people think. Optimum Tech (YouTube) is about the only one I know of that does some of what it calls 'Hybrid Workload' testing, though this mostly revolves around gaming + a few methods of encoding/saving video and streaming.

The results really show the power of DDR5 under some circumstances. You are looking at an 87% increase in performance using DDR5 vs DDR4 for this use case.

I think this type of thing calls into question the validity of a lot of single use benchmarks. I for one, never ever just have one thing going on my PC, and there are many variations on that from something simple like streaming iTunes music or Spotify + browser + outlook + gaming to much more intense scenarios. That's quite normal for many folks I think.


 
The question is: Who does that? Why not just use your GPU to do realtime transcoding?
(Yes, there's a time and place for software transcoding since it produces much smaller files, but you usually don't want to do that in realtime ...)

What makes you think he is doing this on the CPU?

CPU encode would probably require *less* main memory bandwidth than streaming the data to the GPU, since the GPU is going to be like 20x faster consuming that data. Most likely he is using Adobe Premiere.
 