
Samsung Sampling 24 Gbps GDDR6 Memory Chips

btarunr

Editor & Senior Moderator
Staff member
Samsung has started sampling high-speed 24 Gbps-rated GDDR6 memory chips. To be clear, these are standard GDDR6 chips built to JEDEC specifications, and not GDDR6X, a derivative standard co-developed by NVIDIA and Micron that leverages PAM4 signaling. Samsung's 24 Gbps chips can be used by both NVIDIA and AMD, provided their GPU designs can handle the data rates. The specific part number for the chip is "K4ZAF325BC-SC24." It has a density of 16 Gb (2 GB), so eight of these make up 16 GB across a 256-bit wide memory bus, and twelve make 24 GB across a 384-bit bus.
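For anyone who wants to verify the arithmetic, here is a minimal Python sketch. It assumes nothing Samsung-specific beyond what's stated above: a 16 Gb (2 GB) per-chip density and the standard 32-bit interface of every GDDR6 chip.

```python
# Quick check of the capacity and bus-width arithmetic above.
# Every GDDR6 chip exposes a 32-bit interface, so total bus width
# scales directly with the number of chips on the card.
CHIP_DENSITY_GB = 16 / 8   # 16 Gb per chip, expressed in gigabytes
CHIP_IO_WIDTH = 32         # interface width per GDDR6 chip, in bits

for chips in (8, 12):
    capacity_gb = chips * CHIP_DENSITY_GB
    bus_bits = chips * CHIP_IO_WIDTH
    print(f"{chips} chips: {capacity_gb:.0f} GB on a {bus_bits}-bit bus")

# Output:
# 8 chips: 16 GB on a 256-bit bus
# 12 chips: 24 GB on a 384-bit bus
```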

At 24 Gbps, these chips offer 50% more bandwidth than 16 Gbps chips, and 71% more than 14 Gbps chips. A hypothetical 6 nm refresh of the "Navi 21" GPU paired with these chips would hence have 768 GB/s of memory bandwidth on top of its Infinity Cache bandwidth, compared to 512 GB/s on the current Radeon RX 6800 XT. Since the chip is sampling, both AMD and NVIDIA likely have their hands on it. There's no word on when the chip hits mass production, but this could well happen within 2022.
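The bandwidth figures fall straight out of the standard formula: per-pin data rate times bus width, divided by eight bits per byte. A short sketch, assuming the 256-bit bus of "Navi 21":

```python
# Peak bandwidth of a memory subsystem:
# per-pin data rate (Gbps) x bus width (bits) / 8 bits per byte -> GB/s
def peak_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

print(peak_bandwidth_gbs(24, 256))  # 768.0 GB/s -- hypothetical 24 Gbps refresh
print(peak_bandwidth_gbs(16, 256))  # 512.0 GB/s -- Radeon RX 6800 XT today
print(f"{24 / 16 - 1:.0%} over 16 Gbps, {24 / 14 - 1:.0%} over 14 Gbps")
# -> 50% over 16 Gbps, 71% over 14 Gbps
```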



View at TechPowerUp Main Site
 
Is this finally going to drop the incredible power consumption of GDDR6X? The reuse of the same signaling as Micron's (just clocked higher) makes me fear the worst!
 
How do these compare to the latest GDDR6X?
 
Thank god... GDDR6X is shit lol
I'd say not optimal, but it allowed the GA102 products to get the bandwidth they needed at the time, and it's allowed me to mine and pay for a 3080 twice over, so I'm not at all upset about the inclusion.

Decision made at a point in time to bridge a gap.
 
Decision made at a point in time to bridge a gap.
The first card that used 16 Gbps GDDR6 was the RTX 2080 Super in July 2019, running it at 15.5 Gbps for some reason.
The first card running 16 Gbps GDDR6 proper was the RX 6800 XT in November 2020.
The RTX 3080 was released in September 2020 with 19 Gbps GDDR6X.
 
Nvidia really got REKT with this one. It spent untold millions for Micron to develop GDDR6X, got forced into buying it all up, put out cards with 350 W TDPs as a result, and now plain old GDDR6 is pushing far beyond the data rates possible with GDDR6X. Hilarious, really.

And for those skeptical: no, Ampere did not need GDDR6X. It could have benefited from it if not for the terrible power efficiency. The A6000 is consistently faster than the 3090 FE in games -- and while yes, it is a higher bin -- it also has a TDP of only 300 W and only 768 GB/s of bandwidth, yet still clocks higher and performs better. Further proof can be found in the joke that is the 3070 Ti.
 
Nvidia really got REKT with this one. It spent untold millions for Micron to develop GDDR6X, got forced into buying it all up, put out cards with 350 W TDPs as a result, and now plain old GDDR6 is pushing far beyond the data rates possible with GDDR6X. Hilarious, really.

And for those skeptical: no, Ampere did not need GDDR6X. It could have benefited from it if not for the terrible power efficiency. The A6000 is consistently faster than the 3090 FE in games -- and while yes, it is a higher bin -- it also has a TDP of only 300 W and only 768 GB/s of bandwidth, yet still clocks higher and performs better. Further proof can be found in the joke that is the 3070 Ti.
Nothing but truth... Nvidia got scammed by Micron lol. This Samsung GDDR6 absolutely destroys GDDR6X; Nvidia must be embarrassed.
 
Nvidia really got REKT with this one. It spent untold millions for Micron to develop GDDR6X, got forced into buying it all up, put out cards with 350 W TDPs as a result, and now plain old GDDR6 is pushing far beyond the data rates possible with GDDR6X. Hilarious, really.
I must have missed the part where these cards with GDDR6X flew off shelves and helped make them untold millions in profits. Hilarious really.

The A6000 is consistently faster than the 3090 FE in games
Got a source on that? Everything I've seen shows it trailing a 3090, or at best matching it, depending on the game.
 
Nvidia really got REKT with this one. It spent untold millions for Micron to develop GDDR6X, got forced into buying it all up, put out cards with 350 W TDPs as a result, and now plain old GDDR6 is pushing far beyond the data rates possible with GDDR6X. Hilarious, really.

And for those skeptical: no, Ampere did not need GDDR6X. It could have benefited from it if not for the terrible power efficiency. The A6000 is consistently faster than the 3090 FE in games -- and while yes, it is a higher bin -- it also has a TDP of only 300 W and only 768 GB/s of bandwidth, yet still clocks higher and performs better. Further proof can be found in the joke that is the 3070 Ti.
Excuse me for contacting you this way, Mister Ghazi, but are you the same "al-ghazi" from the WCCFtech Disqus??
 