
Micron Updates Roadmap, Promises 32 Gbit DDR5 and GDDR7 for 2024

TheLostSwede

News Editor
During yesterday's HBM3 Gen2 memory announcement, Micron also shared an updated roadmap with select media and partners. The most interesting details on that roadmap were updates to DRAM and GDDR memory products, with capacity increases coming for both types of memory. Micron is aiming to launch 32 Gbit or 4 GB DDR5 memory ICs at the beginning of 2024, which means we can look forward to 32 GB single-sided DIMMs with a single DRAM die per memory IC. This should, in theory at least, enable cheaper 32 GB DIMMs, but as always, it's unlikely that the cost savings will be passed on to the end customer. As far as server customers go, Micron is planning 128 GB DIMMs for 2024, followed by 192 GB DIMMs in 2025 and 256 GB DIMMs in 2026.
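As a rough sanity check of the DIMM capacity math above, here is a quick sketch in Python; the eight-IC single-sided layout is a typical-configuration assumption, not a figure from Micron's roadmap.

die_gbit = 32                    # 32 Gbit DDR5 die planned for early 2024
gb_per_ic = die_gbit / 8         # one die per IC -> 4 GB per IC
ics_per_dimm = 8                 # assumption: typical single-sided, non-ECC DIMM
print(gb_per_ic * ics_per_dimm)  # -> 32.0 GB per DIMM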

When it comes to GDDR, Micron will be launching JEDEC-standard GDDR7 memory with 16 and 24 Gbit dies, or 2 and 3 GB capacities; the latter could be the highest-capacity GDDR7 memory IC on the market and could enable some interesting graphics card configurations. Micron is promising speeds of up to 32 Gbps per pin or 128 GB/s per chip, which is a big jump from its current best GDDR6X memory, which tops out at 24 Gbps per pin or 96 GB/s per chip. GDDR7 differs from Micron's proprietary GDDR6X by using PAM-3 rather than PAM-4 signalling, although this is simply something that the likes of AMD and NVIDIA will have to design their GPUs around. Micron doesn't appear to have any plans for GDDR7X at this point in time. The company is also working on several new iterations of HBM memory over the coming years and expects to hit 2 TB/s sometime in 2026 or later.



View at TechPowerUp Main Site | Source
 
12x4GB = 48GB video cards in 2024? I'd be thrilled with 32GB. The 1GB chips are obsolete, though they're an excuse to keep manufacturing cards with 12GB and 8GB capacities.

Here is a 7900XTX breakdown showing how many GDDR6 chips are on the PCB:
 
Wait...there are already 128GB DDR5 dimms for servers....what's so special about Micron announcing 128GB dimms?

Also, does anyone know enough about memory ICs to explain why 16Gb GDDR7 has a maximum capacity of 2GB?
 
12x4GB = 48GB video cards in 2024? I'd be thrilled with 32GB. The 1GB chips are obsolete, though they're an excuse to keep manufacturing cards with 12GB and 8GB capacities.
Where did you get 4GB from? The article mentions 2GB and 3GB. So assuming standard 4, 8, 12 and 24 chip configurations, this would mean capacities ranging from 8-12GB at the low end with four chips (like the 4060), 16-24GB at the high end with eight chips, or 24-36GB with twelve chips in the enthusiast class. I think 36GB will be the max.

72GB would be possible with a clamshell design and 12+12 chips, but aside from the 3090, no gaming GPU has used it in the high end lately; the 3090 Ti moved to 2GB chips on one side. So I think 72GB will be used on workstation models only.

Assuming Nvidia finally decides to stop sabotaging their cards we could see something like this in 2025:
3050 8GB
5060/Ti 12GB
5070/Ti 16GB
5080/Ti 20/24GB
5090 36GB
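A quick sketch of the chip-count and die-size combinations described above, in Python; the 4/8/12 chip counts and the 12+12 clamshell layout are the common configurations assumed in this post, not confirmed products.

# Board capacity for common GDDR7 chip counts with 2 GB and 3 GB dies.
for chips in (4, 8, 12, 24):             # 24 = 12+12 clamshell
    low, high = chips * 2, chips * 3
    print(f"{chips:2d} chips: {low}-{high} GB")
# -> 4 chips: 8-12 GB | 8 chips: 16-24 GB | 12 chips: 24-36 GB | 24 chips: 48-72 GB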
 
Wait...there are already 128GB DDR5 dimms for servers....what's so special about Micron announcing 128GB dimms?
The existing ones are extremely expensive. They cost 2 to 4 times as much per gigabyte compared to 64 GB or 96 GB RDIMMs with ECC. It must be due to the TSV stacking that's required to make larger modules.
See for yourself at Wiredzone (US) or a Supermicro reseller in the EU.
 
Not a word on the latencies on this one...
Aw man, did you really HAVE to drop the "L" word?

Hopefully, they won't send those guys with the really black suits & mini phat guns over to your house at 2:37:31:54.2916 am :D /s

And just out of curiosity, why would a reseller (Supermicro) need to "certify" anything, RAM or otherwise, for performance or reliability?
I would think that would be the mfgr's responsibility, yes?
 
"...32 Gbps or 128 GB/s which is a big jump up from its current best GDDR6X memory which tops out at 24 Gbps or 96 GB/s."
@TheLostSwede , are you sure the numbers are correct?
32 Gbps = 4 GB/s
24 Gbps = 3 GB/s
 
Aw man, did you really HAVE to drop the "L" word?

Hopefully, they won't send those guys with the really black suits & mini phat guns over to your house at 2:37:31:54.2916 am :D /s

And just out of curiosity, why would a reseller (Supermicro) need to "certify" anything, RAM or otherwise, for performance or reliability?
I would think that would be the mfgr's responsibility, yes?
A man needs his numbers!
 
And just out of curiosity, why would a reseller (Supermicro) need to "certify" anything, RAM or otherwise, for performance or reliability?
I would think that would be the mfgr's responsibility, yes?
Anyway, Supermicro is not a reseller but the largest independent maker of server mobos, so they keep their own QVL lists, just like Asus etc. do.

"...32 Gbps or 128 GB/s which is a big jump up from its current best GDDR6X memory which tops out at 24 Gbps or 96 GB/s."
@TheLostSwede , are you sure the numbers are correct?
32 Gbps = 4 GB/s
24 Gbps = 3 GB/s
32 Gbps is per pin.
 
Sure. How do we get to 128 GB/s from there?
Usually GDDR comes in 32-bit devices, so 32 Gbps × 32 / 8 comes to 128 GB/s per device. An RX 8600 or 5060 would probably use four devices for a 128-bit wide memory interface, with a total bandwidth of 512 GB/s.
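The same arithmetic as a quick Python snippet; the 32-bit device width is the usual assumption mentioned above, and the four-device card is just the hypothetical example from this post.

gbps_per_pin = 32          # GDDR7 per-pin data rate from the roadmap
device_width_bits = 32     # assumption: GDDR devices are normally 32 bits wide
per_device_gbs = gbps_per_pin * device_width_bits / 8   # -> 128.0 GB/s per device
total_gbs = 4 * per_device_gbs                          # hypothetical 128-bit card
print(per_device_gbs, total_gbs)                        # -> 128.0 512.0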
 
Not a word on the latencies on this one...
It's a roadmap, not a product announcement.

"...32 Gbps or 128 GB/s which is a big jump up from its current best GDDR6X memory which tops out at 24 Gbps or 96 GB/s."
@TheLostSwede , are you sure the numbers are correct?
32 Gbps = 4 GB/s
24 Gbps = 3 GB/s
Well, it's what Micron put on the roadmap. I was sort of scratching my head as well.

32 Gbps is per pin.
That makes more sense, updated the article to reflect that.
 
Wait...there are already 128GB DDR5 dimms for servers....what's so special about Micron announcing 128GB dimms?
Micron's modules will be faster and will bring much-needed competition to Samsung, which should drive prices down.
Also, does anyone know enough about memory ICs to explain why 16Gb GDDR7 has a maximum capacity of 2GB?
16 Gb divided by 8 bits per byte = 2 GB.
 
Assuming Nvidia finally decides to stop sabotaging their cards we could see something like this in 2025:

3 GB capacities, right?

128 bit - 8 or 12GB
192 bit - 12 or 18GB

5060 12GB 128 bit 512GB/s - a glorified 4070
5070 18GB 192 bit 768GB/s - roughly a 4080
5080 24GB 256 bit 1024GB/s - but it should be faster than a 4090, certainly 50% faster than a 4080, so that may end up being the 5070 Ti
going for $400, $600, $800 and $1,200
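Purely as a sketch of the math behind that speculative lineup, in Python; the card names, bus widths and 3 GB dies are the guesses above, not announced products, and 32 Gbps/pin is the roadmap's top GDDR7 speed.

# Bus width -> chip count, capacity (3 GB dies) and bandwidth at 32 Gbps/pin.
lineup = {"5060": 128, "5070": 192, "5080": 256}            # hypothetical bus widths
for name, bus_bits in lineup.items():
    chips = bus_bits // 32                                  # one 32-bit device per 32 bits of bus
    capacity_gb = chips * 3
    bandwidth_gbs = bus_bits * 32 / 8
    print(f"{name}: {capacity_gb} GB, {bandwidth_gbs:.0f} GB/s")
# -> 5060: 12 GB, 512 GB/s | 5070: 18 GB, 768 GB/s | 5080: 24 GB, 1024 GB/s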
 