Wednesday, October 17th 2018

Cadence, Micron Update on DDR5: Still On Track, 1.36x Performance Increase Over DDR4 at Same Data Rate

DDR5 will be the next step in DDR memory tech, again bringing increased transfer speeds over the previous specification from JEDEC (the standards body responsible for the DDR specifications). The new memory technology will also bring the customary reduction in operating voltage - the new version pushes the 64-bit link down to 1.1V from 1.2V, and doubles burst length to 16 from 8. In addition, DDR5 lets voltage regulators ride on the memory module rather than the motherboard. CPU vendors are also expected to expand the number of DDR channels on their processors from 12 to 16, which could drive main memory sizes to 128 GB from 64 GB today.
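As a rough sanity check on what those data rates mean in practice, peak theoretical bandwidth is simply the data rate times the bus width - a minimal sketch (function name and figures are illustrative, not from Cadence's material):

```python
def peak_bandwidth_gbs(data_rate_mts: int, bus_width_bits: int = 64) -> float:
    """Peak theoretical bandwidth in GB/s for one 64-bit DDR link."""
    return data_rate_mts * (bus_width_bits / 8) / 1000

print(peak_bandwidth_gbs(3200))  # DDR4-3200 -> 25.6 GB/s
print(peak_bandwidth_gbs(4800))  # DDR5-4800 -> 38.4 GB/s
```

Real-world throughput is lower than these peaks; that gap is exactly where DDR5's efficiency improvements come in.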

DDR5 is being developed with particular attention to the professional environment, where ever-larger amounts of addressable memory are required. One of the guiding principles of DDR5's development is a density increase (enabling 16 Gbit chips) that allows for larger volumes of memory (and thus data processing) in the environments that need it. Reduced power consumption also plays a role here, but all of this will come at a cost: latency. For end-users, though, this increased latency will be offset by the usual suspects (DDR memory companies such as Crucial and Corsair, to name a couple starting with the letter C) via tighter timings and increased operating frequencies. JEDEC's specification for DDR5 is set at 4800 MT/s, but the memory tech is expected to scale to 6400 MT/s, and you know overclocking- and performance-focused companies will walk all over the standard.
Some other performance- and stability-centric features of DDR5 include the use of two independent 32/40-bit channels per module (without/with ECC, respectively); improved command bus efficiency, with per-channel 7-bit Address (Add)/Command (Cmd) buses; better refresh schemes; and an increased bank group count for additional performance. There will also be support for on-die termination (a particularly important feature for Ryzen's memory stability and overclocking prowess), which enables cleaner signals with less electrical disruption and feedback, and improved stability at higher data rates. Furthermore, high-end DDR5 DIMMs will have their own voltage regulators and PMICs, improving power delivery and other variables.
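One way to see why the narrower 32-bit sub-channels pair with the doubled burst length: the bytes delivered per burst stay at 64, matching a typical CPU cache line. A quick sketch of that arithmetic (illustrative only):

```python
def burst_bytes(channel_bits: int, burst_length: int) -> int:
    """Bytes delivered by one burst on one (sub-)channel."""
    return channel_bits // 8 * burst_length

print(burst_bytes(64, 8))   # DDR4: one 64-bit channel, BL8      -> 64 bytes
print(burst_bytes(32, 16))  # DDR5: one 32-bit sub-channel, BL16 -> 64 bytes
```

The upshot is that each DDR5 module can serve two independent 64-byte requests concurrently instead of one, which helps command bus efficiency without changing the cache-line granularity CPUs expect.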
Cadence says that comparing DDR4-3200 against DDR5-3200 already shows a 1.36x increase in bandwidth - yes, at the same data rate. Add in the frequency increase (and consider the density increase as well), and there's a 1.87x increase in performance when comparing DDR4-3200 to DDR5-4800. The ramp for server/datacenter and other professional environments will start in 2019, with the technology entering the consumer market in 2020 - interestingly, the same timeframe AMD themselves gave their Zen architecture. Sources: Cadence, via AnandTech
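Reading between the lines of those figures: the raw data-rate step from 3200 to 4800 MT/s is 1.5x, so Cadence's 1.87x overall gain implies roughly a 1.25x efficiency factor on top of the rate increase. This is a back-of-the-envelope reading of the quoted numbers, not a breakdown Cadence itself provided:

```python
rate_scaling = 4800 / 3200       # raw data-rate step, DDR4-3200 -> DDR5-4800
overall_gain = 1.87              # Cadence's quoted performance increase
efficiency_gain = overall_gain / rate_scaling

print(round(rate_scaling, 2))    # 1.5
print(round(efficiency_gain, 2)) # 1.25
```

Note that the efficiency factor here (about 1.25x) is smaller than the 1.36x measured at matched data rates, suggesting effective bandwidth does not scale perfectly linearly with frequency.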

32 Comments on Cadence, Micron Update on DDR5: Still On Track, 1.36x Performance Increase Over DDR4 at Same Data Rate

#1
john_
Why not just have one type of memory? With memory technologies moving much faster on the GPU market, companies should be looking at bringing GDDR6 to the desktop as the main system memory. That custom APU that AMD created for Subor proved that GDDR5 can be used as system memory, not just in consoles like the PS4 and Xbox but also in PCs. So GDDR6 should have been the type to go with for future desktop systems, because it is here now and will be used in graphics cards for years.
Posted on Reply
#2
Fx
Good stuff. This sounds like a solid advancement all the way around.
Posted on Reply
#3
Frick
Fishfaced Nincompoop
john_ said:
Why not just have one type of memory? With memory technologies moving much faster on the GPU market, companies should be looking at bringing GDDR6 to the desktop as the main system memory. That custom APU that AMD created for Subor proved that GDDR5 can be used as system memory, not just in consoles like the PS4 and Xbox but also in PCs. So GDDR6 should have been the type to go with for future desktop systems.
I assume they would cost a lot more. I'd rather have 8GB slow memory than 4GB speedy memory, and we need prices to go down, not up.
Posted on Reply
#4
R-T-B
Frick said:
I assume they would cost a lot more. I'd rather have 8GB slow memory than 4GB speedy memory, and we need prices to go down, not up.
Pretty much.

Dual channel is 128-bit IIRC. GDDR6 needs large channel widths to perform well in a video card. It would probably suffer being used as main memory with a dual-channel 128-bit controller; you'd need at least quad channel.
Posted on Reply
#5
efikkan
john_ said:
Why not just have one type of memory? With memory technologies moving much faster on the GPU market, companies should be looking at bringing GDDR6 to the desktop as the main system memory. That custom APU that AMD created for Subor proved that GDDR5 can be used as system memory, not just in consoles like the PS4 and Xbox but also in PCs. So GDDR6 should have been the type to go with for future desktop systems, because it is here now and will be used in graphics cards for years.
Because different use cases have different requirements.
Desktop CPUs access memory in 64-byte blocks scattered across the address space, so their memory is latency-optimized.
GPUs access memory mostly as continuous streams of data, which is why GDDR is bandwidth-optimized.
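A toy illustration of that access-pattern difference - counting distinct 64-byte lines touched rather than timing anything (names and address patterns here are made up for illustration):

```python
CACHE_LINE = 64  # bytes per cache line / DRAM burst

def lines_touched(addresses, line_size=CACHE_LINE):
    """Distinct cache lines a sequence of byte addresses pulls in."""
    return len({a // line_size for a in addresses})

scattered = [i * 4096 for i in range(100)]  # pointer-chase style: 1 line each
streamed = list(range(0, 400, 4))           # 100 contiguous 4-byte reads

print(lines_touched(scattered))  # 100 lines for 100 accesses (latency-bound)
print(lines_touched(streamed))   # 7 lines for 100 accesses (bandwidth-friendly)
```

The scattered pattern pays a full DRAM access per request, so latency dominates; the streamed pattern amortizes each line over many reads, so raw bandwidth dominates.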
Posted on Reply
#6
theoneandonlymrk
john_ said:
Why not just have one type of memory? With memory technologies moving much faster on the GPU market, companies should be looking at bringing GDDR6 to the desktop as the main system memory. That custom APU that AMD created for Subor proved that GDDR5 can be used as system memory, not just in consoles like the PS4 and Xbox but also in PCs. So GDDR6 should have been the type to go with for future desktop systems, because it is here now and will be used in graphics cards for years.
Yeah, I see what you are saying, but it's a bit short-sighted; the same argument was made at DDR4's introduction. Not possible.
First, there is not enough GDDR6 production to accommodate it; second, JEDEC standards for memory are what enable any buyer to buy DDR4 and have it work.
New memory types are still in the special stage where the interconnection standards have not been set or fully finalized, so only companies with the R&D time and knowledge are able to use them in their products.
The Subor uses GDDR5, 3 - nah, 5 - years after DDR4's introduction, and after big dog Sony, GDDR5 is well known and qualified at this point.
Posted on Reply
#7
bug
How does bandwidth increase at the same data rate? Isn't data rate freq x bits transferred?
Posted on Reply
#8
CheapMeat
bug said:
How does bandwidth increase at the same data rate? Isn't data rate freq x bits transferred?
I believe it's because each DDR5 DIMM can now read and write at the same time, unlike DDR4 and below, which could only do one, wait, then the other. To me, this is a huge, awesome part of DDR5.
Posted on Reply
#9
bug
CheapMeat said:
I believe it's because each DDR5 DIMM can now read and write at the same time, unlike DDR4 and below, which could only do one, wait, then the other. To me, this is a huge, awesome part of DDR5.
That would be something in the 2x ballpark, not 1.36x. But who knows, maybe you're right.
Posted on Reply
#10
efikkan
bug said:
That would be something in the 2x ballpark, not 1.36x. But who knows, maybe you're right.
It's about transferring multiple bits per "clock" - DDR is Double Data Rate. Most of the gains in GDDR5 and GDDR6 come from increasing this data rate while retaining the clock, yielding a higher "effective clock".
Read about Quad Data Rate.
The image in the article is a visualization of these signals.
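A quick sketch of that multiplier idea (the clock figures are illustrative):

```python
def effective_mts(bus_clock_mhz: float, transfers_per_clock: int) -> float:
    """Effective data rate in MT/s: bus clock times transfers per clock."""
    return bus_clock_mhz * transfers_per_clock

print(effective_mts(1600, 2))  # DDR: 1600 MHz bus clock -> 3200 MT/s
print(effective_mts(800, 4))   # QDR: 800 MHz bus clock  -> same 3200 MT/s
```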
Posted on Reply
#11
lexluthermiester
R-T-B said:
you'd need at least quad channel.
Seems doable to me.
Posted on Reply
#12
Prima.Vera
Meanwhile I'm still rocking with DDR3 @ 2133 MHz :D:oops:
Posted on Reply
#13
Toothless
Tech, Games, and TPU!
Prima.Vera said:
Meanwhile I'm still rocking with DDR3 @ 2133 MHz :D:oops:
I mean, there are people rocking DDR.
Posted on Reply
#14
CheapMeat
Hah, same. I have one system with 16GB, two systems with 32GB, and one with 64GB of DDR3-2133 as well. I want everything as a hardware enthusiast, but I hold back. I read TPU & Anand daily. But I'm still on Z77 and X79 systems, just waiting for a few reasons. I think when DDR5 is available, it'll be my time to get new systems to tinker with. I do think DDR5 is a much more significant change than DDR3 to DDR4 because, as mentioned, there's a lot more to it than just the MHz & GB bump.

I have 6 add-on cards that use DDR1 RAM also. :P
Posted on Reply
#15
R-T-B
lexluthermiester said:
Seems doable to me.
And I'm not even sure that would work, but I'm no expert. The only certain thing is it'd drive controller cost up.
Posted on Reply
#16
bug
efikkan said:
It's about transferring multiple bits per "clock" - DDR is Double Data Rate. Most of the gains in GDDR5 and GDDR6 come from increasing this data rate while retaining the clock, yielding a higher "effective clock".
Read about Quad Data Rate.
The image in the article is a visualization of these signals.

Still doesn't make sense to me. You mean where DDR4 transfers 8 bits, DDR5 transfers 11?
I'm sure this is explained in the proposed spec somewhere, but since (shockingly) I don't have the time to scour the spec for one detail, I was hoping someone more knowledgeable was lurking on these forums instead.
Posted on Reply
#17
lexluthermiester
R-T-B said:
And I'm not even sure that would work, but I'm no expert. The only certain thing is it'd drive controller cost up.
By how much, though? The PS4 has 8GB of GDDR5 for system RAM - look at its price. It doesn't seem like it would be all that pricey if Intel and AMD just did it. AMD already has, so they have an edge.
Posted on Reply
#18
bug
lexluthermiester said:
By how much, though? The PS4 has 8GB of GDDR5 for system RAM - look at its price. It doesn't seem like it would be all that pricey if Intel and AMD just did it. AMD already has, so they have an edge.
I'd argue the memory controller is the one part that gets redesigned/tweaked even when the memory type doesn't change. Rearranging some transistors and clearing up the signal path will be negligible, looking at the whole package.
I mean, we've been through this more times than I care to remember. Whether the memory controller was on the motherboard or in the CPU, we never had to deal with significant price increases.
Posted on Reply
#19
lexluthermiester
bug said:
Whether the memory controller was on the motherboard or in the CPU, we never had to deal with significant price increases.
Exactly. So jumping up to a new memory standard is likely a trivial effort.
Posted on Reply
#20
bug
lexluthermiester said:
Exactly. So jumping up to a new memory standard is likely a trivial effort.
It's not trivial per se, but it is trivial in the grand scheme of things. It's been done before, the know-how is there.
Posted on Reply
#21
StrayKAT
2020 sounds interesting. New CPU arches? New mem.. new PCIe maybe? And Intel entering the GPU game.
Posted on Reply
#22
bug
StrayKAT said:
2020 sounds interesting. New CPU arches? New mem.. new PCIe maybe? And Intel entering the GPU game.
I believe the PCIe 4.0 standard was finalized this year ;)
Posted on Reply
#23
XiGMAKiD
StrayKAT said:
2020 sounds interesting. New CPU arches? New mem.. new PCIe maybe? And Intel entering the GPU game.
Pretty much all that
Posted on Reply
#24
las
The timings look horrible on DDR5; it will take a few years before it's worth changing from high-end DDR4.

DDR4 also sucked at launch.
Posted on Reply
#25
lexluthermiester
las said:
The timings look horrible on DDR5; it will take a few years before it's worth changing from high-end DDR4.

DDR4 also sucked at launch.
Rubbish. Done right, GDDR5 would be an excellent performer, as it already is for a few systems and has been for video cards.
Posted on Reply