
ASUS DDR4 "Double Capacity DIMM" Form-factor a Workaround to Low DRAM Chip Densities

btarunr

Editor & Senior Moderator
32-gigabyte DDR4 UDIMMs are a reality. Samsung recently announced the development of a 32 GB DDR4 dual-rank UDIMM using higher-density DRAM chips. Those chips, however, are unlikely to be available anytime soon, a situation compounded by Samsung's reported scumbaggery. In the midst of all this, motherboard major ASUS designed its own non-JEDEC UDIMM standard, called "Double Capacity DIMM" or DC DIMM, with the likes of G.Skill and Zadak designing the first modules. The point of these modules is to max out the CPU memory controller's limit despite the motherboard having fewer memory slots. Possible use cases include LGA1151 mini-ITX motherboards with just one slot per memory channel (2 slots in all), or certain LGA2066 boards with just four slots (one slot per channel).

There is no word on the memory chip configuration of these modules, but it's highly likely they are dual-rank. The first DDR4 DC modules could be 32 GB each, letting you max out the memory controller limit of 8th gen and 9th gen Core processors with just two modules. ASUS is heavily marketing this standard with its upcoming motherboards based on Intel's Z390 Express chipset, so it remains to be seen whether other ASUS motherboards (or other motherboards in general) will support it. Ironically, the Zadak-made module shown in ASUS marketing materials uses DRAM chips made by Samsung.
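
For a quick sense of the capacity math implied above, here is a minimal sketch; the 32 GB per DC module figure and the 64 GB limit of current 8th/9th gen Core memory controllers are taken as assumptions from the text above, not confirmed specifications:

Code:
# Back-of-the-envelope capacity check. The figures below are assumptions:
# 32 GB per DC DIMM (rumored) and a 64 GB memory controller limit on
# current 8th/9th gen Core CPUs.
CONTROLLER_LIMIT_GB = 64
DC_DIMM_GB = 32         # rumored capacity of the first DC modules
STANDARD_DIMM_GB = 16   # largest widely available UDIMM today

print(CONTROLLER_LIMIT_GB // DC_DIMM_GB)        # 2 -> two DC DIMMs hit the cap
print(CONTROLLER_LIMIT_GB // STANDARD_DIMM_GB)  # 4 -> four standard UDIMMs needed

In other words, a board with only two slots (one per channel) could reach the same total as a four-slot board filled with 16 GB modules.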



 
Nice idea, Asus.
Too bad you still can't put 128GB on a mainstream consumer motherboard.
 
Seems... like an obvious move. Just make taller memory! Lord knows we've already had heatsinks that tall. I wonder why nobody thought of this before... though heatsink compatibility is once again going to become a factor for anyone using these modules and an aftermarket air cooler.

@xkm1948 imagine the possibilities
 
Seems... like an obvious move. Just make taller memory! Lord knows we've already had heatsinks that tall. I wonder why nobody thought of this before... though heatsink compatibility is once again going to become a factor for anyone using these modules and an aftermarket air cooler.

@xkm1948 imagine the possibilities

Because it's hard to do this while retaining signal integrity. They'd most likely need to use some kind of buffer chip, similar to RDIMMs.
 
Nice idea, Asus.
Too bad you still can't put 128GB on a mainstream consumer motherboard.

Well... yet. Technically, there shouldn't be any issue actually using 128GB with a Z370-based system. The CPU doesn't really care about it.
The statements that X amount of memory is the maximum supported simply reflect the highest module capacity available at the time, just like how in the P55 era 16GB was the listed maximum, yet when 8GB sticks came out, most systems actually supported 32GB without issues.
 
This would be very nice. RAM is usually much lower in height than a CPU cooler in any kind of case.
I would vote for removing the two DIMM slots closest to the CPU so that CPU coolers could have more space there.

But there may be bad news. They will most likely remove the furthest slots, not the closest, because if you keep the furthest slots and put double-capacity RAM in them, you get a much longer distance between the CPU and the RAM chips. Imagine a motherboard with 6 slots on one side (the traditional layout on consumer boards): if you plug this RAM into slots 3 and 4, it will technically be similar to occupying slots 3 through 6.
 
Well... yet. Technically, there shouldn't be any issue actually using 128GB with a Z370-based system. The CPU doesn't really care about it.
The statements that X amount of memory is the maximum supported simply reflect the highest module capacity available at the time, just like how in the P55 era 16GB was the listed maximum, yet when 8GB sticks came out, most systems actually supported 32GB without issues.
This. Then again, why? Even though it's very likely possible, what computing scenario would require 128GB of system RAM in a desktop environment? Other than UHD raw video editing, I can't think of even one...
 
This. Then again, why? Even though it's very likely possible, what computing scenario would require 128GB of system RAM in a desktop environment? Other than UHD raw video editing, I can't think of even one...
Genomics like our buddy @xkm1948 does?
 
Genomics like our buddy @xkm1948 does?
I could be wrong here, but such a task would not benefit from that much RAM without also having additional CPUs to scale the usage. Am I right? If I understand things correctly, that kind of work only benefits from 32GB to 48GB of RAM; beyond that the returns diminish quickly, and beyond 64GB there's almost nothing.
 
You are already at 64GB on most mainstream boards (4x16GB), and 128GB on HEDT. It's edge-case scenarios that need more than this, edge cases that are likely already served by the workstation and server markets.

Shrug.
 
Is it possible to have dual rank on a single-sided module?
 
I would vote for removing the two DIMM slots closest to the CPU so that CPU coolers could have more space there.
But are you aware of the fact that the distance between RAM and CPU has a significant impact on memory latency and signal quality? :p

If not for that - sure, we could move the RAM wherever we wanted.
Putting memory as close to the processor as possible is one of the largest engineering problems in computers today ("slightly" more serious than just colliding coolers :p).
 
But are you aware of the fact that the distance between RAM and CPU has a significant impact on memory latency and signal quality? :p

If not for that - sure, we could move the RAM wherever we wanted.
Putting memory as close to the processor as possible is one of the largest engineering problems in computers today ("slightly" more serious than just colliding coolers :p).
Then why don't they mount RAM slots on the back of the board, where they can be closer?
 
This. Then again, why? Even though it's very likely possible, what computing scenario would require 128GB of system RAM in a desktop environment? Other than UHD raw video editing, I can't think of even one...
google: "in-memory database"
 
They will only cost an arm and a leg
That's not enough; you'd most likely have to throw in a kidney and part of your liver at current memory prices...

Then why don't they mount RAM slots on the back of the board, where they can be closer?
Because, space? Also, have you ever looked at the back of a PCB? You actually have quite a lot of components whose pins go through all the layers of the PCB, so you can't just place things wherever you'd like. It's not impossible to do what you're suggesting, though: look at some NAS boards, which put memory slots on the rear of the PCB, but only SO-DIMMs, not full-size DIMMs.
Also, cooling on that side would suck in an ATX case.
 
Then why don't they mount RAM slots on the back of the board, where they can be closer?
Given the design of most modern computer cases, this may be a good idea. Most motherboard trays have a large cutout behind the CPU socket, which would allow easy access.

phanteks_evolve_x_2_sm.jpg
 
This would be very nice. RAM is usually much lower in height than a CPU cooler in any kind of case.
I would vote for removing the two DIMM slots closest to the CPU so that CPU coolers could have more space there.

But there may be bad news. They will most likely remove the furthest slots, not the closest, because if you keep the furthest slots and put double-capacity RAM in them, you get a much longer distance between the CPU and the RAM chips. Imagine a motherboard with 6 slots on one side (the traditional layout on consumer boards): if you plug this RAM into slots 3 and 4, it will technically be similar to occupying slots 3 through 6.
Why not just angle the slots downward, like laptops do? You could get them closer to the CPU socket and have better cooler compatibility.
 
Could these be slots to leverage Optane DIMMs or something of the sort?
 
Then why don't they mount RAM slots on the back of the board, where they can be closer?
Because of the standard: you have limited space behind the mobo.
Also, the back side of the mobo usually gets hardly any airflow, which would be a disaster for RAM. It's bad enough we've started putting NVMe drives there.

But sure, in most devices (especially passively cooled ones) both sides of the PCB are used.
With the advent of hyper-fast SSDs, why would this be needed?
Current SSDs are not even close (Optane is getting there).
In a typical database server, you have an array of SCSI or SSD drives, then a fast SSD cache, and sometimes a RAM cache on top of that. And it's still visibly slower than an in-memory alternative.

Think about a humble JOIN of two tables on a single equality condition.
How this works in a disk-based database: the engine pulls those two columns with row IDs into RAM, performs the JOIN, and then fetches the actual data by row ID. That data is kept in RAM until you discard it. If it doesn't fit... it's written back to the drives...
Think about the number of I/O operations, memory allocations and so on. In a single JOIN.
Now write a large query with multiple joins, aggregations, analytic functions and so on, and it runs through tens of disk<->RAM cycles.

Moreover, in-memory databases open up many interesting optimization possibilities.
Example:
If you want to speed up joins in a normal database, you create indexes on the columns used as foreign keys. This makes fast (O(log n)) searches possible. All very nice.
An in-memory database can store pointers directly to rows of other tables, so joining tables is almost free (O(1), constant time).

The speed of systems like SAP HANA is just mind-blowing.
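
To make the last point concrete, here is a minimal Python sketch (toy tables, made-up names, not any real engine) contrasting a join that probes a sorted index for every row, roughly the O(log n)-per-probe pattern of a disk-based engine, with an in-memory layout that stores a direct reference to the matching row (O(1) per row):

Code:
import bisect

# Toy tables: orders reference customers via a foreign key.
customers = [{"id": i, "name": f"cust{i}"} for i in range(100_000)]
orders = [{"id": n, "customer_id": n % 100_000} for n in range(500_000)]

# "Disk-style" join: binary-search a sorted index on customers.id for every
# order row -- O(log n) per probe, on top of the page I/O a real engine pays.
index_keys = [c["id"] for c in customers]  # already sorted by construction

def join_via_index():
    result = []
    for o in orders:
        pos = bisect.bisect_left(index_keys, o["customer_id"])
        result.append((o["id"], customers[pos]["name"]))
    return result

# "In-memory" join: resolve the foreign key once and keep a direct reference
# to the customer row inside each order. The join itself is plain pointer
# chasing -- O(1) per row, no searching at all.
for o in orders:
    o["customer"] = customers[o["customer_id"]]

def join_via_pointer():
    return [(o["id"], o["customer"]["name"]) for o in orders]

A real disk-based engine also pays for I/O whenever the working set doesn't fit in cache, which this sketch doesn't model; the point is just the per-row lookup cost that the pointer layout avoids.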
 
Current SSDs are not even close (Optane is getting there).
In a typical database server, you have an array of SCSI or SSD drives, then a fast SSD cache, and sometimes a RAM cache on top of that. And it's still visibly slower than an in-memory alternative.

Think about a humble JOIN of two tables on a single equality condition.
How this works in a disk-based database: the engine pulls those two columns with row IDs into RAM, performs the JOIN, and then fetches the actual data by row ID. That data is kept in RAM until you discard it. If it doesn't fit... it's written back to the drives...
Think about the number of I/O operations, memory allocations and so on. In a single JOIN.
Now write a large query with multiple joins, aggregations, analytic functions and so on, and it runs through tens of disk<->RAM cycles.

Moreover, in-memory databases open up many interesting optimization possibilities.
Example:
If you want to speed up joins in a normal database, you create indexes on the columns used as foreign keys. This makes fast (O(log n)) searches possible. All very nice.
An in-memory database can store pointers directly to rows of other tables, so joining tables is almost free (O(1), constant time).

The speed of systems like SAP HANA is just mind-blowing.
Yes, but that's server work. I'm talking about desktop workflow scenarios.
 
I could be wrong here, but such a task would not benefit from that much RAM without also having additional CPUs to scale the usage. Am I right? If I understand things correctly, that kind of work only benefits from 32GB to 48GB of RAM; beyond that the returns diminish quickly, and beyond 64GB there's almost nothing.


Nah, you are wrong. It would be nice if I could have 1TB~2TB of DRAM per local CPU. In bioinformatics, especially with huge data sets, the more RAM the better. I was constantly out of RAM when performing a 17-sample (in triplicate) microbiome analysis; I kept maxing out my 128GB of RAM.
 
Yes, but that's server work. I'm talking about desktop workflow scenarios.
No. In-memory DBs are often (if not most of the time) deployed on workstations. They're perfect for advanced analytics, machine learning and so on.
You don't want that kind of load on a database used by other people.

You don't use in-memory databases for storing data, especially on a production system. It's RAM, after all.
 