
AMD Ryzen 7000 "Raphael" to Ship with DDR5-5200 Native Support

Your post is absolutely correct. The problem for those Xeon/EPYC users is that they're all using registered memory, so the best they can manage right now is something like 3200C22. And not all of their programs would benefit either.
Yeah, very few real-world applications are limited by memory bandwidth. Massive SQL databases *can* push bandwidth, but more commonly storage IO is the bottleneck there. Custom applications or big data are potentially viable candidates, but the only time I ever really run into bandwidth limitations is when the hardware is a host for VDI and multiple users are all working in large image/media applications like Premiere/After Effects.

The slower ECC is definitely worse from a performance standpoint, but when bandwidth is the problem, slower 2133 MT/s ECC isn't an issue, because the server (a typical dual-socket Xeon Silver/Gold) has two 6-channel memory controllers joined by two or three ~10.4 GT/s UPI links. It's not quite as good as having 12 local memory channels, but realistically there is 6x the channel count of a typical dual-channel consumer solution, so running 2133 instead of 4000 MT/s RAM isn't the end of the world: it still works out to roughly 2-3x more bandwidth than the fastest dual-channel consumer platform money can buy, despite the pedestrian ECC 2133 clockspeeds.

People often cite Photoshop as a bandwidth-heavy application, and they're not wrong; Photoshop filters and transforms will use all the bandwidth available. It's just that the operation takes fractions of a second, so lower bandwidth means the operation you perform half a dozen times an hour takes 0.5 seconds to run instead of 0.3 seconds. Yes, the bandwidth makes it measurably quicker, but not in a way that impacts anyone in the real world.
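To put rough numbers on that channel-count argument, here's a back-of-the-envelope sketch (theoretical peaks only; the 12 channels assume a dual-socket box with 6 channels per CPU, and real NUMA/cross-socket traffic will land lower than this):

```python
# Theoretical peak = transfers/s x 8 bytes per 64-bit channel x channel count.
# Illustrative figures, not measured results.

def peak_bandwidth_gbs(data_rate_mts: float, channels: int) -> float:
    """Theoretical peak bandwidth in GB/s for 64-bit (8-byte) channels."""
    return data_rate_mts * 8 * channels / 1000

server = peak_bandwidth_gbs(2133, channels=12)   # 2S Xeon, 6 channels per socket
desktop = peak_bandwidth_gbs(4000, channels=2)   # fast dual-channel consumer kit

print(f"Server : {server:.1f} GB/s")        # ~204.8 GB/s
print(f"Desktop: {desktop:.1f} GB/s")       # 64.0 GB/s
print(f"Ratio  : {server / desktop:.1f}x")  # ~3.2x on paper, despite the slow ECC DIMMs
```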
 
True. I often laugh at people who tell me they upgraded something. And I'm like 'No, you did NOT upgrade, you moved your bottleneck elsewhere.'
 
people adding more cores to gaming systems, wooo you upgraded something you dont even USE!
 
Well, adding more cores to a dual-core...
Though I know that's not what you meant ;)
 
people adding more cores to gaming systems, wooo you upgraded something you dont even USE!
Would you consider more cores if you were going to stream/record gameplay? I've read from other users' posts that it would be a benefit.

But I do think people should do more than just game on a PC. There's so much to learn!
 
Would you consider more cores if you were going to stream/record gameplay? I've read from other user posts that would be a benefit.
True! And this is why I was looking forward to the 5900X3D like they showed with the prototype. I'm not willing to settle for an 8-core.
But I do think people should do more than just game on a PC. There's so much to learn!
Also true!
 
Would you consider more cores if you were going to stream/record gameplay? I've read from other users' posts that it would be a benefit.

But I do think people should do more than just game on a PC. There's so much to learn!
When streaming, are you using CPU or GPU encoding? The answer lies there.

I use GPU encoding, so I don't need or use the extra cores most of the time.
That said, I do occasionally rip and encode DVDs I own (kids' shows that aren't available online), so an 8-core made sense for me as a secondary benefit there.
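For anyone wondering what that choice looks like in practice, a minimal sketch (assuming ffmpeg is installed and an NVIDIA card for NVENC; the file names are just placeholders):

```python
import subprocess

def encode(src: str, dst: str, use_gpu: bool) -> None:
    """Encode to H.264 on the GPU (NVENC) or on the CPU cores (libx264)."""
    codec = "h264_nvenc" if use_gpu else "libx264"
    subprocess.run(["ffmpeg", "-i", src, "-c:v", codec, "-c:a", "copy", dst], check=True)

# NVENC barely touches the CPU; libx264 will happily load up every core you have.
encode("ripped_episode.vob", "episode.mp4", use_gpu=True)
```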
 
Not sure if it's smart not to bring a DDR4 option. :confused: DDR5 is still at least twice as expensive as DDR4.
DDR5 (32GB) starts at around 200€, DDR4 (32GB) starts at around 100€.

There will be tons of people who go with an Intel platform because of the cheaper DDR4 kits.
 
It's common sense to run your RAM and FCLK at or close to 1:1 to get the most out of it. Going any higher seems negligible, or gives you worse performance, as faster RAM has higher latency/looser timings. But I guess it depends on the use case and what kind of work you're doing.
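As a quick illustration of the 1:1 rule on DDR4 Ryzen (the memory clock is half the DDR data rate, and FCLK ideally matches it; the ~1900-2000 MHz ceiling below is the commonly reported silicon limit, not a guarantee):

```python
def fclk_for_1to1(ddr_data_rate_mts: int) -> int:
    """FCLK (MHz) needed to run 1:1 with a given DDR4 data rate (MT/s)."""
    return ddr_data_rate_mts // 2

for kit in (3200, 3600, 3800, 4000):
    print(f"DDR4-{kit}: MEMCLK {kit // 2} MHz -> FCLK {fclk_for_1to1(kit)} MHz for 1:1")
# DDR4-4000 would need FCLK 2000 MHz, which most Zen 2/3 chips can't hold,
# so the controller falls back to a 1:2 ratio and latency gets worse.
```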
 
You can't compare clockspeeds anyway because AMD and Intel use very different memory timings which dramatically affect access latency.

The clockspeed gives you total theoretical bandwidth but neither AMD nor Intel platforms ever managed to reach those theoretical numbers with DDR4, not even with purely synthetic bandwidth tests.
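To make that concrete, a small sketch of what the clockspeed does and doesn't tell you (illustrative numbers only):

```python
def peak_gbs(data_rate_mts: float, channels: int = 2) -> float:
    """Theoretical peak bandwidth in GB/s for 64-bit channels."""
    return data_rate_mts * 8 * channels / 1000

def cas_latency_ns(data_rate_mts: float, cl: int) -> float:
    """First-word CAS latency in ns: CL cycles at half the data rate."""
    return cl * 2000 / data_rate_mts

print(f"DDR4-3200 dual-channel peak: {peak_gbs(3200):.1f} GB/s")  # 51.2 GB/s on paper
print(f"DDR4-3600 CL14: {cas_latency_ns(3600, 14):.2f} ns")       # ~7.78 ns
print(f"DDR4-3600 CL18: {cas_latency_ns(3600, 18):.2f} ns")       # 10.00 ns
# Same clockspeed, very different access latency - and even synthetic benchmarks
# report measured bandwidth below that 51.2 GB/s paper figure.
```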
And now we can't compare? When Intel released Alder Lake with DDR5, it got compared against AMD Ryzen running DDR4 only... and now we can't compare? It's comparable now that they both have DDR5.
 
He's not saying you can't compare them, he's saying it's not a direct apples to apples comparison - like with DDR4, Intel preferred high clock speeds while Ryzen preferred lower latency
(and when Intel users used to rant about how Intel was better for lower latencies in AIDA64, until that flipped and Ryzen has lower latency while Intel has higher low...)


Anyway, we all certainly will compare the shit out of them - it's just not going to be simple
 
He's not saying you can't compare them, he's saying it's not a direct apples to apples comparison
Yeah, that's what I meant.
I guess I should have said "you can't compare performance on memory clockspeed alone"
 
So finally DDR5 will move in the right direction… 5200 is still not enough, to be honest, but this would mean 6000+ kits will be supported with ease.
I’m still wondering what the “sweet spot” will be for DDR5…

So... intel people?


Is this a good start for DDR5?
I know Zen1 was a bit iffy even reaching 3200, but Zen2/3 settled on DDR4-3800 in the end - while Intel users can zoom a bit higher

How's this compare to first-gen Intel DDR5 (stock/OC)?
Yes it is, but coming almost one year later to the game, this is hardly unexpected.
Problem is, at the current price point a DDR5-only platform will be hard to swallow for many.

people adding more cores to gaming systems, wooo you upgraded something you dont even USE!
That’s not going to last forever. Games are moving to a multithreaded approach with new engines. It will still take a while, but it is already happening.
 
Not sure if it's smart not to bring a DDR4 option. :confused: DDR5 is still at least twice as expensive as DDR4.
DDR5 (32GB) starts at around 200€, DDR4 (32GB) starts at around 100€.

There will be tons of people who go with an Intel platform because of the cheaper DDR4 kits.


If you're waiting an entire year after Alder Lake anyway, DDR5 availability will have calmed down by then.

AM4 had the same single memory type to support (too bad it took them until Zen 2 before they supported LPDDR4X, but after that, power consumption was tamed!)

The fact that the Zen 3+ notebook refresh is already using DDR5 at roughly the same power levels means we shouldn't expect any initial problems with AM5 power consumption.
 
True! And this is why I was looking forward to the 5900X3D like they showed with the prototype. I'm not willing to settle for an 8-core.

Also true!

With how the 5800X3D is being received and selling, I'm kind of surprised they didn't want to do a 5950X3D.
 
Where AMD is likely going to have a good advantage with a 5200 MT/s base is the timings. It seems logical that they would tighten them up a bit for such a speed.

The JEDEC spec for DDR5 is 16.25 ns for tCL, tRCD, and tRP; that's not going to change regardless of frequency.
It'll be DDR5-5200 42-42-42
At DDR5-6500 we'll hit 52-52-52
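In other words (rough arithmetic; the actual JEDEC bins round slightly differently):

```python
def latency_ns(cl: int, data_rate_mts: int) -> float:
    """Absolute latency in ns for CL cycles at half the DDR data rate."""
    return cl * 2000 / data_rate_mts

print(f"DDR5-5200 CL42: {latency_ns(42, 5200):.2f} ns")  # ~16.2 ns
print(f"DDR5-6500 CL52: {latency_ns(52, 6500):.2f} ns")  # ~16.0 ns
# More cycles at the higher data rate, but the same ~16 ns of real time
# before the first data comes back.
```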

He's not saying you can't compare them, he's saying it's not a direct apples to apples comparison - like with DDR4, Intel preferred high clock speeds while Ryzen preferred lower latency
(and when Intel users used to rant about how Intel was better for lower latencies in AIDA64, until that flipped and Ryzen has lower latency while Intel has higher low...)


Anyway, we all certainly will compare the shit out of them - it's just not going to be simple

Wut? You always want the highest possible frequency, as you're overclocking the entire memory subsystem that way, and not just the DIMMs. The primary timings will complete in similar amounts of real time regardless of frequency most of the time; it's hardly unreasonable to expect a B-die kit capable of 1800 MHz 14-14-14 to also run 2200 MHz 17-17-17.

The reason "AMD overclockers" think timings matter more than frequency is because a multitude of reasons
  • They've only overclocked Ryzen CPUs of the non-Cezanne and -Renoir kind.
  • Vermeer and Matisse can't have the IMC running faster than FCLK, meaning that if you try to push memory beyond the FCLK limit you end up halving the IMC's frequency (UCLK).
  • Summit Ridge and Pinnacle Ridge have slow IMCs that are often incapable of running at high memory frequency, even though they exhibit the typical scaling of higher memory frequency being universally better for performance.
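A rough sketch of both points (the 1900 MHz FCLK figure is the commonly cited Vermeer limit, and the B-die numbers are hypothetical):

```python
def tcl_ns(mem_clock_mhz: float, cl: int) -> float:
    """Absolute tCL in ns at a given memory clock (MHz)."""
    return cl / mem_clock_mhz * 1000

print(f"1800 MHz CL14: {tcl_ns(1800, 14):.2f} ns")  # ~7.78 ns
print(f"2200 MHz CL17: {tcl_ns(2200, 17):.2f} ns")  # ~7.73 ns - same real latency

def uclk_mhz(mem_clock_mhz: float, fclk_limit_mhz: float = 1900) -> float:
    """UCLK runs 1:1 with MEMCLK up to the FCLK limit, then falls to 1:2."""
    return mem_clock_mhz if mem_clock_mhz <= fclk_limit_mhz else mem_clock_mhz / 2

print(f"MEMCLK 1800 MHz -> UCLK {uclk_mhz(1800):.0f} MHz")  # 1:1
print(f"MEMCLK 2000 MHz -> UCLK {uclk_mhz(2000):.0f} MHz")  # forced 1:2, IMC at half speed
```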
As for comparing AMD and Intel, it'll be perfectly viable. The names of subtimings will certainly differ, but the primary timings will remain the same.

Judging by how limited AM4 overclocking ended up becoming after Matisse, the broken FCLK implementation beyond 1900 MHz on Vermeer, and the significant limitations for the 5800X3D, I doubt AM5 will improve overclocking in any significant manner, but I hope I'm wrong.
 
I was really looking forward to the 5900X3D that AMD showed off originally.

Yeah, that would have sold out as well. Wonder if we'll get something in a few months, I mean there must be rejects, right?
 
We'll wait and see if the difference is worth the wait. Also, I doubt AMD is gonna be selling them at a competitive price, knowing how many consumers they've pissed off with a revised R7 5800X3D and a slew of non-X CPUs that were meant to be released back when the 5000 series lineup was new.

That's what happens when there is no competition in the market. Intel also kept prices sky-high while there was no Zen.

However, you also forgot to mention the much longer socket/mobo support compared to Intel, and the option to build an even cheaper system using B-series mobos, on which Zen CPUs can still be overclocked.
 
Yeah, that would have sold out as well. Wonder if we'll get something in a few months, I mean there must be rejects, right?
I think they discovered it had no benefit for gaming, and the extra heat lowered the all-core performance (look at all the hate the 5800X3D got in the few tests where it was slower due to being clocked 100 MHz lower).

Multiple CCDs with this tech will come with Zen 5 IMO, with a revised heatspreader controlling the temperatures better.
 
So... intel people?


Is this a good start for DDR5?
I know Zen1 was a bit iffy even reaching 3200, but Zen2/3 settled on DDR4-3800 in the end - while Intel users can zoom a bit higher

How's this compare to first-gen Intel DDR5 (stock/OC)?
I've wasted over a month of my life trying to get DDR5 stable even just at XMP. One kit of G.Skill 6000C36 wasn't even stable at 3600C40, another was stable at 6000C36 after finding the extremely narrow sweet spots for voltages. My 6400C32 kit was stable after a lot of tweaking... for like 3 months, and now it's only stable at 6000C30.
 
And what happens when you run it at the default settings (removing XMP and letting the motherboard automatically set the RAM settings)?
 