
DDR6 Memory Arrives in 2027 with 8,800-17,600 MT/s Speeds

AleksandarK

News Editor
Staff member
Joined
Aug 19, 2017
Messages
3,267 (1.12/day)
The semiconductor industry has officially accelerated its next-generation memory development, with the DDR6 standard now on the horizon. Although enthusiasts won't find these modules available until 2027, key players Samsung, Micron, and SK Hynix have already moved past prototype stages and embarked on rigorous validation cycles. In partnership with Intel, AMD, and NVIDIA, they're targeting an initial throughput of 8,800 MT/s, with plans to scale up to a staggering 17,600 MT/s, almost doubling the ceiling of today's DDR5. This increase is driven by DDR6's 4×24-bit sub-channel architecture, a departure from DDR5's 2×32-bit sub-channel structure that requires entirely new approaches to signal integrity. To overcome the physical limits that DIMM form factors face at higher speeds, the industry is betting on CAMM2. Early indications are that server platforms will lead the change, with high‑end notebooks following suit once manufacturing ramps up.
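For a rough sense of scale, the quoted transfer rates translate into theoretical per-module bandwidth as follows. This is a back-of-the-envelope sketch: the 96-bit total width for DDR6 is simply 4×24 bits from the figure above, and any ECC or sideband overhead in those 24-bit sub-channels is ignored since JEDEC hasn't published the details yet.

```python
def peak_bandwidth_gbs(transfer_rate_mts: float, data_width_bits: int) -> float:
    """Theoretical peak bandwidth in GB/s: transfers per second times bytes per transfer."""
    return transfer_rate_mts * 1e6 * (data_width_bits / 8) / 1e9

# DDR5 module: 2 x 32-bit sub-channels = 64 bits per transfer
print(peak_bandwidth_gbs(6400, 2 * 32))    # 51.2 GB/s for a DDR5-6400 module
# DDR6 module (per the article): 4 x 24-bit sub-channels = 96 bits per transfer
print(peak_bandwidth_gbs(8800, 4 * 24))    # 105.6 GB/s at the 8,800 MT/s launch speed
print(peak_bandwidth_gbs(17600, 4 * 24))   # 211.2 GB/s at the 17,600 MT/s target
```

The wider 96-bit module interface is why DDR6 at 8,800 MT/s already outruns a DDR5 module at the same transfer rate, before the speed scaling even kicks in.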

Behind the scenes, timelines are being mapped: platform validation is slated for 2026, server deployments in 2027, and broader consumer availability thereafter. This phased rollout mirrors the DDR5 journey; however, analysts predict that DDR6's architectural leap could accelerate adoption in AI and high-performance computing environments. Of course, cutting-edge technology comes with a premium: initial DDR6 modules are expected to carry price tags reminiscent of DDR5's 2021 debut, potentially limiting early adoption to hyperscale data centers and AI research labs. Yet given HPC and AI's appetite for bandwidth, memory makers are targeting launch as soon as possible to satisfy the massive deployment of compute. By 2027, CAMM2‑based modules running at DDR6 speeds may well define the new standard for high‑performance systems.



View at TechPowerUp Main Site | Source
 
Hopefully DDR6 fixes one of the main problems with DDR5, which has been stability with all slots occupied.
It seems like we are moving to CAMM2, so...
Unless motherboards only support single-channel CAMM2 modules, the current design only allows for a single module.
I guess CPU memory controllers could change and support two modules, maybe one on each side of the motherboard?
 
a staggering 17,600 MT/s, effectively doubling the ceiling of today's DDR5
Mmm, sorry, but no. Shops already have DDR5-9600 in stock. Doubling that would give 19,200.
 
If CAMM2 is going to go mainstream, then it would be nice if they could improve the mounting design. Latches of some sort rather than screws.

Also the RGB potential on them is huge, I bet all the gaming brands are already counting the profits.
Couldn't latches bring potential issues with the "Compression Attached" part of CAMM2? Overtightening those screws would as well, I guess...
 
I wish we could have longer-lasting standards (or at least some backwards compatibility), as it makes for much easier troubleshooting when other systems have compatible RAM.

Hopefully DDR6 fixes one of the main problems with DDR5, which has been stability with all slots occupied.
The problem is signal integrity with high frequencies, low voltage, long traces and sensitive sub-timings for the DIMMs, all of which will only get more challenging with faster memory. There will probably be more complex signal encoding too, like with GDDR7. Running multiple DIMMs per channel is going away.

I hope this CAMM2 standard doesn't get established for desktop and server. It takes up far too much space with anything more than 2 channels.
 
There is a really good chance I may adopt this in 2028 if the world calms down with all the AI sensationalism and overclocking gets exciting again.
Not even the massive operational fires in cutthroat companies, just this pompous idea that engineers are obsolete and we need this hardware.
Hardware has been evolving in very weird directions for a while now. It's obvious corners are being cut to get around stumbling blocks and quotas.
I'm currently on: 2GB DDR2, 16GB DDR3, 64GB DDR4. I'm skipping DDR5.

Whatever the job, I have all the memory that I need to be productive from an entry level Ryzen, FX or Athlon.
We are users first, and mainly use small general-purpose computers to get through the day. AI inference isn't a priority.
The push for bigger and faster memory at this moment in time is constant ham-fisting of AI into places nobody asked for it.
In fact, stripping out such features is the main request that ordinary users have whenever this is all forced on them.

If CAMM2 gets adopted on server and desktop, those better be some big honkin chonkin 256GB octo-rank modules.
You want to romance adoption? Do that and fix stability.
 
Too soon, too fast. I still have a DDR4 motherboard, simply because DDR5 ones are expensive. Changing CPU, motherboard, and RAM to AM5/DDR5 costs as much as 3/4 of my current setup. DDR6 is going to push the price range even further.
 
It seems like we are moving to CAMM2, so...
Unless motherboards only support single-channel CAMM2 modules, the current design only allows for a single module.
I guess CPU memory controllers could change and support two modules, maybe one on each side of the motherboard?
As long as CAMM2 has density equivalent to 4 slots occupied with the highest-capacity DIMMs, that should be a decent compromise. The only advantage CAMM2 offers is slimness and improved airflow; otherwise it takes up too much space on boards.
 
CAMM2 doesn't matter for me. My next computer, possibly my last before an eventual hybrid classical/quantum PC, will be a next-generation DDR5 build; not sure whether Zen 6 or Nova Lake.
 
Too soon, too fast. I still have a DDR4 motherboard, simply because DDR5 ones are expensive. Changing CPU, motherboard, and RAM to AM5/DDR5 costs as much as 3/4 of my current setup. DDR6 is going to push the price range even further.

Agreed on the pricing.

The one saving grace on the CPU/mobo/RAM side is that at least the components at the mid-to-low end of the curve are pretty powerful with respect to software.

A used 12600K system or a 7600 (non-X) setup still slaps hard and can do everything -- great for the money, whereas GPUs are completely the opposite: anything below a 4070 and you're digging through settings menus trying to get things to run smoothly.
 
The problem is signal integrity with high frequencies, low voltage, long traces and sensitive sub-timings for the DIMMs, all of which will only get more challenging with faster memory. There will probably be more complex signal encoding too, like with GDDR7. Running multiple DIMMs per channel is going away.

I hope this CAMM2 standard doesn't get established for desktop and server. It takes up far too much space with anything more than 2 channels.
With DDR5, workstation platforms have already dropped 2 slots per channel, and even dropped support for ECC/non-ECC DIMMs in favour of RDIMMs; even desktops on the Intel platform now support clocked DIMMs, mitigating some of the timing issues.
 
There is a really good chance I may adopt this in 2028 if the world calms down with all the AI sensationalism and overclocking gets exciting again.
Not even the massive operational fires in cutthroat companies, just this pompous idea that engineers are obsolete and we need this hardware.
Hardware has been evolving in very weird directions for a while now. It's obvious corners are being cut to get around stumbling blocks and quotas.
I'm currently on: 2GB DDR2, 16GB DDR3, 64GB DDR4. I'm skipping DDR5.

Whatever the job, I have all the memory that I need to be productive from an entry level Ryzen, FX or Athlon.
We are users first, and mainly use small general-purpose computers to get through the day. AI inference isn't a priority.
The push for bigger and faster memory at this moment in time is constant ham-fisting of AI into places nobody asked for it.
In fact, stripping out such features is the main request that ordinary users have whenever this is all forced on them.

If CAMM2 gets adopted on server and desktop, those better be some big honkin chonkin 256GB octo-rank modules.
You want to romance adoption? Do that and fix stability.
OC is never coming back :D The reason is simple: $$$
 
With DDR5, workstation platforms have already dropped 2 slots per channel, and even dropped support for ECC/non-ECC DIMMs in favour of RDIMMs; even desktops on the Intel platform now support clocked DIMMs, mitigating some of the timing issues.
Both Xeon W and Threadripper still support 2DPC, but more and more motherboards are dropping it.

Clock drivers help achieve higher clocks, but still don't allow for 2DPC at those clocks.

Workstation and server platforms use RDIMMs, which already have clock drivers. The "replacement" for 2DPC is MRDIMM, which allows for even higher bandwidth and higher capacity.

The big disadvantage of only one DIMM per channel is that if you want to upgrade memory, you basically have to replace all of it.
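The MRDIMM trick mentioned above can be sketched in a couple of lines. This is an illustration, not a spec quote: the idea is that two ranks each run at the DRAM's native rate while the host interface multiplexes between them, so the bus moves data at roughly twice the per-rank rate. The 4,400 MT/s figure is an assumed per-rank example.

```python
def mrdimm_effective_rate(per_rank_mts: int, muxed_ranks: int = 2) -> int:
    """Effective host-bus transfer rate when `muxed_ranks` ranks are
    interleaved (multiplexed) onto one channel, MRDIMM-style."""
    return per_rank_mts * muxed_ranks

# e.g. two DDR5 ranks at 4,400 MT/s each, multiplexed onto the bus:
print(mrdimm_effective_rate(4400))   # 8800 MT/s seen by the host
```

That is also why capacity scales along with bandwidth: both ranks are populated with DRAM, unlike a single-rank 1DPC setup.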
 
I hope this CAMM2 standard doesn't get established for desktop and server. It's takes up far too much space with anything more than 2 channels.
I believe CAMM modules should be flat enough to be mounted on both sides of the motherboard.
 
I believe CAMM modules should be flat enough to be mounted on both sides of the motherboard.
Yet another ridiculous idea; mounting stuff on the back side of the motherboard. We'd need special cases to make it possible to service the memory, etc.
And soon cases will need to be twice as large to have enough airflow on both sides…
 
Too soon, too fast. I still have DDR4 motherboard, simply because DDR5 ones are expensive. Changing CPU, MB and RAM to AM5/DDR5 costs as much as 3/4 of my current setup. DDR6 is going to push the price range even further.

Not too soon or too fast. I am on DDR4 for the exact same reasons. This might get prices down finally. Then again we can just as well wait for AM6 and DDR6.
 
Yet another ridiculous idea; mounting stuff on the back side of the motherboard. We'd need special cases to make it possible to service the memory, etc.
And soon cases will need to be twice as large to have enough airflow on both sides…
There already are motherboards with an M.2 slot on the backside, for what it's worth.

 
There already are motherboards with an M.2 slot on the backside, for what it's worth.
I know, I've had one. It meant practically disassembling the whole computer to change an SSD.
Just because the idea isn't new, doesn't change the fact that it's a terrible idea.
 
If CAMM2 is going to go mainstream, then it would be nice if they could improve the mounting design. Latches of some sort rather than screws.

Also the RGB potential on them is huge, I bet all the gaming brands are already counting the profits.
JEDEC mentioned in early documentation that work on the mounting is needed to make it more consumer friendly, but so far nothing seems to have happened.
It's going to be hard to get the right pressure on the "shim" that goes between the motherboard and the CAMM2 module though, and it might even require a specific torque, so as not to damage the pins on the shim.

As long as CAMM2 has density equivalence to 4 slots occupied with highest capacity DIMMs that should be a decent compromise. Only advantage CAMM2 offers is slimness and improved airflow but otherwise it takes up too much space on boards.
As I said, single channel CAMM2 modules can be stacked. This will likely cause some issues though, unless it just means getting two "shims" at different heights, vs a single, wider shim for dual channel CAMM2 memory. Not sure how the mounting will work though, as the single channel setup looks like it would need additional supports.

 
I guess CPU memory controllers could change and support two modules, maybe one on each side of the motherboard?
17,000 MT/s memory with no heatsink, jammed in between the motherboard and the motherboard tray? I don't think that works.
 
4×24-bit sub-channel architecture
Am I the only one who finds this 24-bit width weird? The 32-bit channel for DDR5 made a lot of sense... but here, none of the usual 2^n access sizes are divisible by 24. For dual channel, 4×24×2 = 192 bits per transaction; that's an odd 3× 64-bit words...
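Spelling out that arithmetic as a quick sketch (the 4×24-bit figure comes from the article; whether those 24 bits include ECC or other sideband signals hasn't been published):

```python
# Sub-channel widths as discussed: DDR5 = 2 x 32-bit, DDR6 (per the article) = 4 x 24-bit.
ddr6_bits = 4 * 24 * 2       # four 24-bit sub-channels per module, two channels
print(ddr6_bits)             # 192 bits per transfer across both channels
print(ddr6_bits // 64)       # 3 -- an odd three 64-bit words

# A 32-bit DDR5 sub-channel delivers a 64-byte cache line in one clean BL16 burst:
print(32 * 16)               # 512 bits = 64 bytes
# A 24-bit sub-channel can't hit 512 bits with any whole burst length:
print(512 % 24)              # 8 -- 24 doesn't divide 512 evenly
```

So unless the 24 bits hide non-data signals, fetching a 64-byte cache line from one 24-bit sub-channel needs either a non-power-of-two burst or some over-fetch.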
 