Micron Shows Off "Tall Form Factor" 256 GB DDR5-8800 MCRDIMM

Micron representatives showcased new products at last week's NVIDIA GTC event, and one eye-catching DIMM is all set for deployment in next-generation servers. Tom's Hardware spent some time at Micron's booth and learned that the "Tall Form Factor" 256 GB DDR5-8800 Multiplexer Combined Ranks (MCR) DIMM is being prepared for future enterprise processor platforms, including Intel's Xeon Scalable "Granite Rapids" family. A lone "tall" prototype module was exhibited, but company representatives indicated that standard-height MCRDIMMs are in development; Tom's Hardware also found out that these will be compact enough to fit in 1U-sized server systems. According to their in-person report: "(Micron's) 256 GB MCRDIMMs are based on monolithic 32 Gb DDR5 ICs, but the tall one places 80 DRAM chips on both sides of the module, whereas the standard one uses 2Hi stacked packages, which means that they run slightly hotter due to less space for thermal dissipation. In any case, the tall module consumes around 20 W, which isn't bad as Micron's 128 GB DDR5-8000 RDIMM consumes 10 W at DDR5-4800."
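For readers who want to sanity-check the quoted figures, the rough Python sketch below shows how 80 monolithic 32 Gb dies map to 256 GB of usable capacity, and how the 20 W figure compares per gigabyte with the 128 GB RDIMM mentioned above. The assumption that 32 of every 40 DDR5 bits carry data (with the remainder reserved for ECC) reflects typical server RDIMM organization and is not something Micron has confirmed here.

```python
# Back-of-the-envelope check of the tall MCRDIMM's capacity and power figures.
# The 32-data-bits-of-40 ECC split is an assumption based on common DDR5 server
# RDIMM organization, not a detail confirmed by Micron.

DIE_DENSITY_GBIT = 32    # monolithic 32 Gb DDR5 ICs
DIE_COUNT = 80           # DRAM chips across both sides of the tall module
DATA_FRACTION = 32 / 40  # assumed: 32 data bits of every 40-bit DDR5 subchannel

raw_gb = DIE_COUNT * DIE_DENSITY_GBIT / 8  # 320 GB of raw DRAM on the module
usable_gb = raw_gb * DATA_FRACTION         # 256 GB presented to the host
print(f"raw: {raw_gb:.0f} GB, usable: {usable_gb:.0f} GB")

# Power per gigabyte, using the figures quoted by Tom's Hardware.
mcrdimm_w_per_gb = 20 / 256  # ~0.078 W/GB for the 256 GB MCRDIMM
rdimm_w_per_gb = 10 / 128    # ~0.078 W/GB for the 128 GB RDIMM at DDR5-4800
print(f"MCRDIMM: {mcrdimm_w_per_gb:.3f} W/GB, RDIMM: {rdimm_w_per_gb:.3f} W/GB")
```

By this simple arithmetic the tall module lands at roughly the same power per gigabyte as the smaller RDIMM while running at a far higher data rate, which is why the 20 W figure "isn't bad."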

In a recent earnings call, Micron CEO Sanjay Mehrotra commented on his company's latest technology: "we (have) started sampling our 256 GB MCRDIMM module, which further enhances performance and increases DRAM content per server." Next-gen Intel Xeon platforms are expected to support 12 or 24 memory slots per processor socket, so enabled datacenter machines could be specced with total DDR5-8800 memory capacities of 3 TB or 6 TB. AnandTech has summarized the benefits of Micron's new part: "Multiplexer Combined Ranks (MCR) DIMMs are dual-rank memory modules featuring a specialized buffer that allows both ranks to operate simultaneously. This buffer enables the two physical ranks to operate as though they were separate modules working in parallel, which allows for concurrent retrieval of 128 bytes of data from both ranks per clock cycle—compared to 64 bytes per cycle when it comes to regular memory modules—effectively doubling performance of a single module." The added complexity is offset by significant performance gains, ideal for advanced server-side AI crunching in the future.
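As a purely illustrative sketch, the Python snippet below works through the two numbers in the paragraph above: the per-socket capacities implied by 12 or 24 slots of 256 GB modules, and the 64-versus-128-byte fetch that AnandTech describes. The slot counts and byte figures come from the article; the tiny fetch model is a simplification for illustration, not a vendor-confirmed implementation.

```python
# Per-socket capacity with 256 GB MCRDIMMs, and the doubled per-cycle fetch
# described by AnandTech. The fetch model is a deliberate simplification.

MODULE_GB = 256

for slots in (12, 24):
    capacity_tb = slots * MODULE_GB / 1024
    print(f"{slots} slots x {MODULE_GB} GB = {capacity_tb:.0f} TB per socket")
# 12 slots x 256 GB = 3 TB per socket
# 24 slots x 256 GB = 6 TB per socket

BYTES_PER_RANK_ACCESS = 64  # a regular module returns 64 bytes per cycle from one rank

def bytes_fetched(ranks_in_parallel: int) -> int:
    """Bytes returned per cycle when `ranks_in_parallel` ranks are read at once."""
    return BYTES_PER_RANK_ACCESS * ranks_in_parallel

print(bytes_fetched(1))  # 64  -> conventional module, one rank served at a time
print(bytes_fetched(2))  # 128 -> MCRDIMM, both physical ranks read concurrently
```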