
ASUS DDR4 "Double Capacity DIMM" Form-factor a Workaround to Low DRAM Chip Densities

btarunr

Editor & Senior Moderator
Staff member
32-gigabyte DDR4 UDIMMs are a reality. Samsung recently announced the development of a 32 GB DDR4 dual-rank UDIMM using higher-density DRAM chips. Those chips, however, are unlikely to be available anytime soon, compounded by Samsung's reported scumbaggery in the making. In the midst of all this, motherboard major ASUS has designed its own non-JEDEC UDIMM standard, called "Double Capacity DIMM" or DC DIMM, with the likes of G.Skill and Zadak designing the first models. The point of these modules is to max out the CPU memory controller's limit despite the motherboard having fewer memory slots. Possible use-cases include LGA1151 mini-ITX motherboards with just one slot per memory channel (2 slots in all), or certain LGA2066 boards with just four slots (one slot per channel).

There is no word on the memory chip configuration of these modules, but it's highly likely they are dual-rank. The first DDR4 DC modules could be 32 GB, letting you max out the memory controller limit of 8th- and 9th-gen Core processors with just two modules. ASUS is heavily marketing this standard with its upcoming motherboards based on Intel's Z390 Express chipset, so it remains to be seen whether other ASUS motherboards (or other motherboards in general) will support it. Ironically, the Zadak-made module shown in ASUS marketing materials uses DRAM chips made by Samsung.
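A quick sanity check of the capacity math, as a rough Python sketch (the 64 GB figure is the documented memory ceiling of 8th/9th-gen Core processors; the slot counts match the use-cases above):

# Back-of-the-envelope capacity math (illustrative, not an ASUS spec).
CONTROLLER_LIMIT_GB = 64  # 8th/9th-gen Core memory controller ceiling

def max_capacity_gb(slots: int, module_gb: int) -> int:
    # Total memory is capped by both slot count and the controller limit.
    return min(slots * module_gb, CONTROLLER_LIMIT_GB)

print(max_capacity_gb(slots=2, module_gb=16))  # 32 -- regular UDIMMs on a 2-slot mini-ITX board
print(max_capacity_gb(slots=2, module_gb=32))  # 64 -- two 32 GB DC DIMMs hit the controller cap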



 
Nice idea, Asus.
Too bad you still can't put 128GB on a mainstream consumer motherboard.
 

hat

Enthusiast
Seems... like an obvious move. Just make taller memory! Lord knows we've already had heatsinks that tall. I wonder why nobody thought of this before... though heatsink compatibility is once again going to become a factor for anyone using these modules and an aftermarket air cooler.

@xkm1948 imagine the possibilities
 

FordGT90Concept

"I go fast!1!11!1!"

TheLostSwede

News Editor
Seems... like an obvious move. Just make taller memory! Lord knows we've already had heatsinks that tall. I wonder why nobody thought of this before... though heatsink compatibility is once again going to become a factor for anyone using these modules and an aftermarket air cooler.

@xkm1948 imagine the possibilities

Because it's hard to do this and retain signal integrity. They would most likely need to use some kind of buffer chip, similar to RDIMMs.
 
Nice idea, Asus.
Too bad you still can't put 128GB on a mainstream consumer motherboard.

Well... yet. Technically, there shouldn't be any issue actually using 128GB with a Z370-based system. The CPU doesn't really care about it.
The statements that only X amount of memory is supported simply reflect the highest capacity available at the time, just like how in the P55 era 16GB was the maximum, and when 8GB sticks came out most systems actually supported 32GB without issue.
 
This would be very nice. RAM is usually much lower in height than a CPU cooler in any kind of case.
I would vote for removing the two DIMM slots closest to the CPU, so that CPU coolers could have more space there.

But there may be bad news: they will most likely remove the farthest slots, not the closest, because if you kept the farthest slots and put a double-capacity module in them, you would get a much longer distance between the CPU and the RAM chips. Imagine a motherboard with 6 slots on one side (the traditional layout on consumer boards): if you plug this RAM into slots 3 and 4, it is technically similar to occupying slots 3 through 6.
 
Well... yet. Technically, there shouldn't be any issue actually using 128GB with a Z370-based system. The CPU doesn't really care about it.
The statements that only X amount of memory is supported simply reflect the highest capacity available at the time, just like how in the P55 era 16GB was the maximum, and when 8GB sticks came out most systems actually supported 32GB without issue.
This. Then again, why? Even though it's very likely possible, what computing scenario would require 128GB of system RAM in a desktop environment? Other than raw UHD video editing, I can't think of even one.
 

Aquinus

Resident Wat-man
This. Then again, why? Even though it's very likely possible, what computing scenario would require 128GB of system RAM in a desktop environment? Other than raw UHD video editing, I can't think of even one.
Genomics like our buddy @xkm1948 does?
 
Genomics like our buddy @xkm1948 does?
I could be wrong here, but such a task would not benefit from that much RAM without also having additional CPUs to scale the usage. Am I right? If I understand things correctly, that kind of work only benefits up to 32GB or 48GB of RAM; beyond that the returns diminish rapidly, and beyond 64GB there's almost nothing.
 
You are already at 64GB on most mainstream boards (4x16GB), and 128GB on HEDT. It's edge-case scenarios that need more than this, edge cases that are likely already served by the workstation and server markets.

Shrug.
 
Is it possible to have dual rank on a single-sided module?
 
I would vote for removing the two DIMM slots closest to the CPU, so that CPU coolers could have more space there.
But are you aware of the fact that the distance between RAM and CPU has a significant impact on memory latency and signal quality? :p

If not for that - sure, we could move the RAM wherever we wanted.
Putting memory as close to the processor as possible is one of the largest engineering problems in computers today ("slightly" more serious than just colliding coolers :p).
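To get a rough feel for why distance matters, here is a sketch (the propagation speed and trace lengths below are ballpark assumptions, not measurements from any board):

# Rough flight-time math for a DDR4 data line (ballpark figures only).
SIGNAL_CM_PER_NS = 15.0  # ~half the speed of light in typical PCB material

def flight_time_ps(trace_cm: float) -> float:
    # Time for a signal edge to traverse the trace, in picoseconds.
    return trace_cm / SIGNAL_CM_PER_NS * 1000.0

BIT_TIME_PS = 1e12 / 3.2e9  # one transfer at DDR4-3200: ~312.5 ps

for cm in (5, 8, 12):
    print(f"{cm:>2} cm trace: {flight_time_ps(cm):.0f} ps "
          f"(bit time is {BIT_TIME_PS:.1f} ps)")
# A few extra centimetres of trace eat a large fraction of one bit time,
# which is why slot placement and trace-length matching are so constrained.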
 
But are you aware of the fact that the distance between RAM and CPU has a significant impact on memory latency and signal quality? :p

If not for that - sure, we could move the RAM wherever we wanted.
Putting memory as close to the processor as possible is one of the largest engineering problems in computers today ("slightly" more serious than just colliding coolers :p).
Then why don't they mount RAM slots on the back of the board, where they can be closer?
 
This. Then again, why? Even though it's very likely possible, what computing scenario would require 128GB of system RAM in a desktop environment? Other than raw UHD video editing, I can't think of even one.
google: "in-memory database"
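For anyone who doesn't want to google it: a minimal sketch of the idea using Python's built-in sqlite3 module, which can keep an entire database in RAM:

import sqlite3

# ":memory:" hosts the whole database in RAM -- no disk I/O at all.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE frames (id INTEGER PRIMARY KEY, data BLOB)")
db.executemany("INSERT INTO frames (data) VALUES (?)",
               [(b"\x00" * 1024,) for _ in range(1000)])
(count,) = db.execute("SELECT COUNT(*) FROM frames").fetchone()
print(count)  # 1000 rows, served entirely from memory

The bigger the working set you want to hold this way, the more RAM you need, which is the 128GB use-case.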
 

TheLostSwede

News Editor
They will only cost an arm and a leg
That's not enough; you'll most likely have to throw in a kidney and part of your liver at current memory prices...

Then why don't they mount RAM slots on the back of the board, where they can be closer?
Because, space? Also, have you ever looked at the back of a PCB? Quite a lot of components go through all the layers of the PCB, so you can't just place things wherever you'd like. It's not impossible to do what you're suggesting, though; some NAS boards have memory slots on the rear of the PCB, but only SO-DIMMs, not full-size DIMMs.
Also, the cooling on that side will suck in an ATX case.
 
Then why don't they mount RAM slots on the back of the board, where they can be closer?
Given the design of most modern computer cases, this may be a good idea. Most motherboard trays have a large cutout behind the CPU, which would allow easy access.

(attached image: phanteks_evolve_x_2_sm.jpg)
 
This would be very nice. RAM is usually much lower in height than a CPU cooler in any kind of case.
I would vote for removing the two DIMM slots closest to the CPU, so that CPU coolers could have more space there.

But there may be bad news: they will most likely remove the farthest slots, not the closest, because if you kept the farthest slots and put a double-capacity module in them, you would get a much longer distance between the CPU and the RAM chips. Imagine a motherboard with 6 slots on one side (the traditional layout on consumer boards): if you plug this RAM into slots 3 and 4, it is technically similar to occupying slots 3 through 6.
Why not just angle the slots downward, like laptops do? You could get them closer to the CPU socket and have better cooler compatibility.
 
Could these be slots to leverage Optane DIMMs or something of the sort?
 
Then why don't they mount RAM slots on the back of the board, where they can be closer?
Because of the standard. You have limited space behind the mobo.
Also, the back side of the mobo usually has hardly any airflow, which would be a disaster for RAM. It's bad enough we've started putting NVMe drives there.

But sure, in most devices (especially passively cooled ones), both sides of the PCB are used.
With the advent of hyper-fast SSDs, why would this be needed?
Current SSDs are not even close (Optane is getting there).
In a typical database server you have an array of SCSI or SSD drives, then a fast SSD cache, and sometimes a RAM cache on top of that. It's still visibly slower than an in-memory alternative.

Think about a humble JOIN of two tables on a single equality condition.
How this works in a disk database: the engine pulls the two join columns with row IDs into RAM, performs the JOIN, and then fetches the data by row ID. That data is kept in RAM until you discard it; if it doesn't fit... it's paged back out to the drives...
Think about the number of I/O operations, memory allocations and so on, in a single JOIN.
Write a large query with multiple joins, aggregations, analytic functions and so on, and it goes through tens of disk<->RAM cycles.

Moreover, in-memory databases open up many interesting optimization possibilities.
Example:
If you want to speed up joins in a normal database, you create indexes on the foreign keys, which makes fast ( O(log n) ) searches possible. All very nice.
An in-memory database can instead store pointers to other tables, so joining tables is almost free (O(1), constant time).

The speed of systems like SAP HANA is just mind-blowing.
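To make the index-versus-pointer contrast concrete, a toy sketch (the table sizes and names are made up; real engines are far more sophisticated):

import bisect

# Toy "rows": orders reference customers by key.
customers = [{"id": i, "name": f"cust{i}"} for i in range(100_000)]
ids = [c["id"] for c in customers]  # a sorted index on the join key

def join_via_index(customer_id: int) -> dict:
    # Disk-style engine: every join probe is an O(log n) index search.
    return customers[bisect.bisect_left(ids, customer_id)]

# In-memory engine: the order row holds a direct pointer, O(1) to follow.
order = {"id": 42, "customer": customers[54321]}

print(join_via_index(54321)["name"])  # index walk: O(log n)
print(order["customer"]["name"])      # pointer dereference: O(1)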
 
Current SSDs are not even close (Optane is getting there).
In a typical database server you have an array of SCSI or SSD drives, then a fast SSD cache, and sometimes a RAM cache on top of that. It's still visibly slower than an in-memory alternative.

Think about a humble JOIN of two tables on a single equality condition.
How this works in a disk database: the engine pulls the two join columns with row IDs into RAM, performs the JOIN, and then fetches the data by row ID. That data is kept in RAM until you discard it; if it doesn't fit... it's paged back out to the drives...
Think about the number of I/O operations, memory allocations and so on, in a single JOIN.
Write a large query with multiple joins, aggregations, analytic functions and so on, and it goes through tens of disk<->RAM cycles.

Moreover, in-memory databases open up many interesting optimization possibilities.
Example:
If you want to speed up joins in a normal database, you create indexes on the foreign keys, which makes fast ( O(log n) ) searches possible. All very nice.
An in-memory database can instead store pointers to other tables, so joining tables is almost free (O(1), constant time).

The speed of systems like SAP HANA is just mind-blowing.
Yes, but that's server work. I'm talking about desktop workflow scenarios.
 
I could be wrong here, but such a task would not benefit from that much RAM without also having additional CPUs to scale the usage. Am I right? If I understand things correctly, that kind of work only benefits up to 32GB or 48GB of RAM; beyond that the returns diminish rapidly, and beyond 64GB there's almost nothing.


Nah, you are wrong. It would be nice if I could have 1TB~2TB of DRAM per local CPU. In bioinformatics, especially with huge data sets, the more RAM the better. I was constantly out of memory when performing a 17-sample (in triplicate) microbiome analysis, constantly maxing out my 128GB of RAM.
 
Yes, but that's server work. I'm talking about desktop workflow scenarios.
No. In-memory DBs are often (if not most of the time) deployed on workstations. They're perfect for advanced analytics, machine learning and so on.
You don't want that kind of load on a database used by other people.

You don't use in-memory databases for storing data, especially on a production system. It's RAM, after all.
 