
SK Hynix Throws a Jab: CAMM is Coming to Desktop PCs

AleksandarK

News Editor
Staff member
In a surprising turn of events, SK Hynix has hinted at the possibility of the Compression Attached Memory Module (CAMM) standard, initially designed for laptops, being introduced to desktop PCs. This revelation came from a comment made by an SK Hynix representative to the Korean tech outlet ITSubIssub at CES 2024 in Las Vegas. According to the representative, a first implementation is underway, but there are no specific details. CAMM, a memory standard developed by Dell in 2022, was certified to replace SO-DIMM as the official standard for laptop memory. However, the transition to desktop PCs could significantly disrupt the desktop memory market. CAMM modules, unlike the vertical DRAM sticks currently in use, lie flat against the board and are screwed into a socket. This design change would necessitate a complete overhaul of the desktop motherboard layout.

The thin, flat design of CAMM modules could also limit the number that can be installed on an ATX board. However, CAMM2, the updated version of the standard that also covers desktops, was announced by JEDEC just a month ago. It is designed for DDR5 memory, but it is expected to become mainstream with the introduction of DDR6 around 2025. While CAMM allows for higher speeds and densities in mobile memory, its advantages for desktops over traditional memory sticks are yet to be fully understood. Although low-power CAMM modules could offer energy savings, this is typically more relevant for mobile devices than desktops. As we move towards DDR6 and DDR7, more information about CAMM for desktops will be needed to understand its potential benefits. JEDEC's official wording on the new standard indicates that "DDR5 CAMM2s are intended for performance notebooks and mainstream desktops, while LPDDR5/5X CAMM2s target a broader range of notebooks and certain server market segments." So, we can expect to see CAMM2 in both desktops and some server applications.



View at TechPowerUp Main Site | Source
 
Yeah nope, not until DDR6 I bet. As it is, Intel & AMD are having (some) issues moving the latest DDR5-based parts & this will only exacerbate that situation!
 
Yes, if this doesn't make the DIMMs or the mobos more expensive and we get better latency compared to standard full-size DIMMs.
 
Yes, will use, as long as you can find some without A@&!ing RGB on them.
 
So instead of a slot, there's a socket and screws. It sounds great for laptops, but what difference does it make for desktop?
 
It certainly can be useful on the Mini-ITX form factor, with modules being moved to the rear side of the motherboard.
 
Most importantly, what is the cost/perf of CAMM vs DDR4/5?
 
It certainly can be useful on the Mini-ITX form factor, with modules being moved to the rear side of the motherboard.
Moving the modules to the back side of the board is the only way I could see this working on desktop.
 
Given that some mATX and ITX boards already move M.2s to the back, adding CAMMs to the back of all mobos could be an option, potentially shortening the traces to the CPU and lowering latency. It would also clear up some topside real estate to move M.2s closer to the CPU, again reducing trace lengths while permitting more redrivers or adding M.2-style WiFi/BT options in the former spaces between PCIe slots. Or maybe capitalize on ever faster NVMe as cache/RAMDisk using the M.2 closest to the CPU, while still using the current first NVMe M.2 slot as the main drive. Or they can pull a page from ASUS's Strix ITX board and just create an M.2 sandwich stack next to the CPU, or M.2 "RAM-cards" like ASUS already does with their regular flagship boards.

This would also allow for larger CPUs on existing mATX and ATX standards, such as a newer Threadripper mATX or ATX board, or slightly larger next-gen CPUs with more lanes as add-in cards come back into vogue: streaming cards, storage cards, maybe an audio card, a future dedicated AI card (or a second GPU used for AI purposes), etc.

Sure, the only loss would be in RGB details since no more RGB or thematic RAM heatsinks, but with in-computer LCD/LED/OLED screens apparently becoming the newest, hottest trend and cheap enough to implement on various fans and cooler tops, the extra real-estate topside would allow for larger waterblocks with screens that could cool just the CPU, or the CPU and VRMs, or CPU, VRMs, and the NVMe drive next to the CPU.
 
back side of the board is the only way I could see this working on desktop.
And that generally wouldn't work with desktops because the physical distance between CPU & memory needs to be relatively short!
 
And that generally wouldn't work with desktops because the physical distance between CPU & memory needs to be relatively short!
Which means they could be arranged around the back of the CPU socket. Can't get shorter than a thru-board via connection!
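Quick back-of-envelope on the distance argument (every figure below is an assumption for illustration, including the trace lengths and the hypothetical DDR5-6000 CL40 kit): shorter routing does trim propagation delay, but the savings are fractions of a nanosecond against the DRAM's own access time.

```python
# Quick back-of-envelope: how much does trace length actually matter?
# Every number below is an assumption for illustration, not a measurement.

SIGNAL_SPEED_M_PER_S = 1.5e8   # ~0.5c, typical for signals in FR-4 PCB traces (assumed)
CAS_LATENCY_NS = 40 / 3.0      # hypothetical DDR5-6000 CL40 kit: 40 cycles at 3 GHz ≈ 13.3 ns

def trace_delay_ns(length_cm: float) -> float:
    """One-way propagation delay along a trace of the given length."""
    return (length_cm / 100) / SIGNAL_SPEED_M_PER_S * 1e9

# Rough trace-length guesses: conventional DIMM slot, rear-mounted CAMM, thru-board via
for length_cm in (8.0, 4.0, 1.0):
    d = trace_delay_ns(length_cm)
    print(f"{length_cm:4.1f} cm trace: ~{d:.2f} ns ({d / CAS_LATENCY_NS:.0%} of CAS latency alone)")
```

Under those assumptions even the longest run costs around half a nanosecond, so the win from rear mounting is more about signal integrity margins than raw latency.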
 
Unless it comes with super duper RGB it will fail.
 
You'll probably also have to get new cases then & thicker(?) boards. Can't imagine installing new memory as with the current board layouts!
 
No thanks, cooling requirements for faster kits and/or overclocking will be very VERY difficult to meet by slapping them on the rear of the motherboard.
 
It certainly can be useful on the Mini-ITX form factor, with modules being moved to the rear side of the motherboard.
Why not do this on all motherboards? You could place one on each side of the CPU socket and move the CPU socket slightly further away from the power regulation circuitry. No need for the CPU coolers to have clearance for the RAM any more.

You can see it in this video, although he doesn't remove the "shim".


So instead of a slot, there's a socket and screws. It sounds great for laptops, but what difference does it make for desktop?
There's no socket, instead there's a "shim" with connectors that connects the pads on the CAMM to the pads on the motherboard.

Most importantly, what is the cost/perf of CAMM vs DDR4/5?
Apparently the latency is improved. Cost shouldn't be any higher and it's using DDR5 chips in this instance, so no difference there either.

Which means they could be arranged around the back of the CPU socket. Can't get shorter than a thru-board via connection!
That won't work mechanically.

No thanks, cooling requirements for faster kits and/or overclocking will be very VERY difficult to meet by slapping them on the rear of the motherboard.
Why? RAM doesn't get very hot and it would be super easy to put a heatsink on the CAMM modules, just like on normal DIMMs.
 
Why? RAM doesn't get very hot and it would be super easy to put a heatsink on the CAMM modules, just like on normal DIMMs.

Unless it's a dual-chamber case there's little to no airflow on the back side of the motherboard tray; heat radiating off the back of the CPU socket is going to cook the ICs, and the more they go over 50 °C, the less stability, less frequency, etc.

I'd be willing to bet no modern or revised DDR5 to come in the next year would be able to provide high-frequency kits without crippling latency in this kind of format, even if this is meant to be implemented much further down the road. Without much more efficient DRAM, heat will definitely be a problem on top of being a frequency-limiting factor.
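Crude steady-state sketch of that heat concern, using T_module ≈ T_local_ambient + P × R_θ (every figure is an assumption for illustration, not a datasheet or measured value):

```python
# Crude steady-state estimate of rear-mounted module temperature.
# All figures are illustrative assumptions, not datasheet or measured values.

AMBIENT_BEHIND_BOARD_C = 45   # air trapped between motherboard tray and side panel (assumed)
MODULE_POWER_W = 5            # a heavily loaded DDR5 module, roughly (assumed)

SCENARIOS = {
    "bare module, still air": 10.0,   # °C/W thermal resistance to that air (assumed)
    "heatsink + some airflow": 4.0,   # °C/W with a backplate-style heatsink (assumed)
}

for label, r_theta in SCENARIOS.items():
    t_module = AMBIENT_BEHIND_BOARD_C + MODULE_POWER_W * r_theta
    print(f"{label:>24}: ~{t_module:.0f} °C")
```

With those assumed numbers a bare module behind the tray does end up uncomfortably hot, while a modest heatsink plus some directed airflow pulls it back toward DIMM-like temperatures, which is roughly where both sides of this argument land.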
 
At CES, some motherboard manufacturers showed boards with all the power connectors on the back of the board.
These require new cases, so I'd expect such cases to change to accommodate cooling for rear-mounted RAM if that's the way things go.
 
You can see it in this video, although he doesn't remove the "shim".
Actually he does, accidentally:
[screenshot from the video]
 
Actually he does, accidentally:
I didn't watch that far :p
At least that makes it very clear that the part that has the biggest chance of getting accidentally damaged can be swapped out.

Unless it's a dual-chamber case there's little to no airflow on the back side of the motherboard tray; heat radiating off the back of the CPU socket is going to cook the ICs, and the more they go over 50 °C, the less stability, less frequency, etc.

I'd be willing to bet no modern or revised DDR5 to come in the next year would be able to provide high-frequency kits without crippling latency in this kind of format, even if this is meant to be implemented much further down the road. Without much more efficient DRAM, heat will definitely be a problem on top of being a frequency-limiting factor.
Yeah no, if that was the case, then laptops would be dying every five minutes.

The latency isn't about the chips themselves, but rather between the memory module and the CPU socket. Two different things. Sorry if that wasn't clear.
 
I didn't watch that far :p
At least that makes it very clear that the part that has the biggest chance of getting accidentally damaged can be swapped out.


Yeah no, if that was the case, then laptops would be dying every five minutes.

The latency isn't about the chips themselves, but rather between the memory module and the CPU socket. Two different things. Sorry if that wasn't clear.

Show me a laptop running DDR5-8000 at C36 and 1.45 V; you're clearly misunderstanding the premise.

Don't forget the CAMM modules would be sitting next to a 100-250 W CPU instead of one running at 6-50 W.
 
There's also the fact that there'd be fewer modules, so more heat density & less frequency or margin for OCing, if any!
 
Given that some mATX and ITX boards already move M.2s to the back, adding CAMMs to the back of all mobos could be an option, potentially shortening the traces to the CPU and lowering latency. It would also clear up some topside real estate to move M.2s closer to the CPU, again reducing trace lengths while permitting more redrivers or adding M.2-style WiFi/BT options in the former spaces between PCIe slots. Or maybe capitalize on ever faster NVMe as cache/RAMDisk using the M.2 closest to the CPU, while still using the current first NVMe M.2 slot as the main drive. Or they can pull a page from ASUS's Strix ITX board and just create an M.2 sandwich stack next to the CPU, or M.2 "RAM-cards" like ASUS already does with their regular flagship boards.

This would also allow for larger CPUs on existing mATX and ATX standards, such as a newer Threadripper mATX or ATX board, or slightly larger next-gen CPUs with more lanes as add-in cards come back into vogue: streaming cards, storage cards, maybe an audio card, a future dedicated AI card (or a second GPU used for AI purposes), etc.

Sure, the only loss would be in RGB details since no more RGB or thematic RAM heatsinks, but with in-computer LCD/LED/OLED screens apparently becoming the newest, hottest trend and cheap enough to implement on various fans and cooler tops, the extra real-estate topside would allow for larger waterblocks with screens that could cool just the CPU, or the CPU and VRMs, or CPU, VRMs, and the NVMe drive next to the CPU.
NAND only has high bandwidth; its latency, compared to DRAM, is abysmal. In fact, NAND's bandwidth is actually lower than DRAM's too, as it relies on accessing many NAND devices in parallel, whereas DRAM's bandwidth can be utilized from a single device. NAND can never substitute for a RAM disk. As far as latency is concerned, decreasing the distance will help, but not as much as you might think. Propagation delay is a much smaller contributor to DRAM latency than other factors inherent to DRAM. Methods to improve the average latency of DRAM (link to PDF) have been proposed, but as far as I know, they haven't been implemented.
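To put rough, order-of-magnitude numbers on both points (the latencies below are assumed ballpark figures, not benchmarks):

```python
# Ballpark latency comparison: DRAM access vs NAND read, plus the share that
# trace propagation contributes. Figures are assumptions for illustration only.

DRAM_ACCESS_NS = 90        # loaded DRAM access as seen from the CPU (assumed)
NVME_NAND_READ_US = 80     # typical TLC NAND read served over NVMe (assumed)
TRACE_DELAY_NS = 0.5       # roughly 8 cm of PCB trace at ~0.5c (assumed)

print(f"NAND read is ~{NVME_NAND_READ_US * 1000 / DRAM_ACCESS_NS:.0f}x a DRAM access")
print(f"Trace propagation is only ~{TRACE_DELAY_NS / DRAM_ACCESS_NS:.1%} of a DRAM access")
```

Under those assumptions a NAND read sits almost three orders of magnitude above a DRAM access, and the trace itself is well under one percent of the DRAM figure.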
 
Show me a laptop running DDR5-8000 at C36 and 1.45 V; you're clearly misunderstanding the premise.

Don't forget the CAMM modules would be sitting next to a 100-250 W CPU instead of one running at 6-50 W.
No, I did not. With a CKD there's no need for excess voltages at those kinds of speeds.

Your CPU doesn't have a cooler? Also, I guess you've missed out on gaming laptops with 100W+ GPUs in them that sit next to the RAM?

NAND only has high bandwidth; its latency, compared to DRAM, is abysmal. In fact, bandwidth is actually lower than DRAM too as the bandwidth relies on accessing many NAND devices in parallel whereas DRAM's bandwidth can be utilized from a single device. NAND can never be a RAM disk. As far as latency is concerned, decreasing the distance will help, but not as much as you might think. Propagation delay is a much smaller contributor to DRAM latency than other factors inherent to DRAM. Methods to improve the average latency of DRAM (link to PDF) have been proposed, but as far as I know, they haven't been implemented.
CXL and similar things can be used for RAM though, but it's unlikely to show up in consumer devices any time soon.
 
No, I did not. With a CKD there's no need for excess voltages at those kinds of speeds.

Your CPU doesn't have a cooler? Also, I guess you've missed out on gaming laptops with 100W+ GPUs in them that sit next to the RAM?


CXL and similar things can be used for RAM though, but it's unlikely to show up in consumer devices any time soon.
I concur; CXL is unlikely to show up for consumers anytime soon, because consumers don't require that much DRAM. If I recall correctly, CXL uses PCIe; that would increase latency substantially compared to regular DIMMs, but for large memory footprint applications, more memory, even if it's slower, would increase performance.
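A tiny sketch of that trade-off (the latencies and the working-set split are assumptions for illustration): memory that is slower than a local DIMM can still win once the working set no longer fits in local DRAM, because the alternative is paging to NVMe.

```python
# Sketch of the "more but slower memory" trade-off. All latencies and the
# working-set split are illustrative assumptions, not benchmark results.

DIMM_NS = 100           # direct-attached DRAM access (assumed)
CXL_NS = 250            # CXL.mem access over PCIe, extra controller/link hops (assumed)
SSD_FAULT_NS = 100_000  # cost of a page fault served from NVMe (assumed)

def avg_access_ns(fit_fraction: float, slow_ns: float) -> float:
    """Average access cost when only fit_fraction of the working set is in local DRAM."""
    return fit_fraction * DIMM_NS + (1 - fit_fraction) * slow_ns

# Working set 50% larger than local DRAM, i.e. 2/3 of (uniform) accesses hit local DIMMs.
print(f"spill to NVMe: ~{avg_access_ns(2 / 3, SSD_FAULT_NS):,.0f} ns average access")
print(f"spill to CXL : ~{avg_access_ns(2 / 3, CXL_NS):,.0f} ns average access")
```

With those assumed numbers the CXL-backed case stays within a small multiple of pure DIMM latency, while spilling to NVMe blows the average up by orders of magnitude, which is why the slower tier still helps large-memory workloads.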
 
No, I did not. With a CKD there's no need for excess voltages at those kinds of speeds.

Your CPU doesn't have a cooler? Also, I guess you've missed out on gaming laptops with 100W+ GPUs in them that sit next to the RAM?


CXL and similar things can be used for RAM though, but it's unlikely to show up in consumer devices any time soon.

None of that is in the same realm as desktop. DDR5 up to 7000? No mention of timings (which are likely abysmal)?

The cooling scenario is entirely different, with parts consuming a fraction of what desktop parts use, all while having everything in a laptop strapped to a unified heatpipe/vapor-chamber cooler with blower fans making your ears bleed as soon as you put on a load that's going to max the available TDP.

The format is a terrible idea for desktops; heat will undoubtedly be an issue. Comparing DDR5-7000 with loose timings (C48+) at a low 1.1-1.2 V isn't the same thing as a desktop setup. Go put a Gen 4 NVMe drive on the back of an ITX board and see what happens to temps.
 