
AMD Develops GDDR6 Controller for Next-generation Graphics Cards, Accelerators

Raevenlord

News Editor
This news may not really come as a surprise; it's more a statement of logical, albeit unconfirmed, facts than something unexpected. AMD is (naturally) working on a GDDR6 memory controller, which it's looking to leverage in its next generations of graphics cards. This is an expected move: AMD is expected to keep using more exotic HBM memory implementations on its top-tier products, but that leaves a lot of the product stack that still needs to be fed by high-speed memory solutions. With GDDR6 nearing widespread production and availability, it's only natural that AMD is looking to upgrade its controllers for the less expensive, easier-to-implement memory solution on its future products.

The confirmation is still worth mentioning, though, as it comes straight from a principal engineer on AMD's technical team, Daehyun Jun. A LinkedIn entry (since removed) stated that he has been working on a DRAM controller for GDDR6 memory since September 2016. GDDR6 brings the advantages of higher operating frequencies and lower power consumption compared to GDDR5, and should deliver higher potential top frequencies than GDDR5X, which is already employed in top-tier NVIDIA cards. GDDR6, when released, will start by delivering today's top GDDR5X speeds of roughly 14 Gbps, with a current maximum of 16 Gbps achievable on the technology. This means more bandwidth (up to double that of current 8 Gbps GDDR5) and higher-clocked memory. GDDR6 will be rated at 1.35 V, the same as GDDR5X.
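As a back-of-the-envelope check on those figures (a sketch, not vendor data): peak bandwidth is simply the bus width in bytes times the per-pin data rate, so doubling the per-pin rate from GDDR5's 8 Gbps to GDDR6's projected 16 Gbps doubles bandwidth on the same bus:

```python
# Peak memory bandwidth (GB/s) = (bus width in bits / 8) * per-pin data rate in Gbps.
def peak_bandwidth_gb_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    return bus_width_bits / 8 * pin_rate_gbps

# Same hypothetical 256-bit bus, different memory types:
print(peak_bandwidth_gb_s(256, 8))   # GDDR5 at 8 Gbps  -> 256.0 GB/s
print(peak_bandwidth_gb_s(256, 16))  # GDDR6 at 16 Gbps -> 512.0 GB/s
```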





SK Hynix, Samsung, and Micron have all announced their GDDR6 processes, so availability should be enough to cover NVIDIA's lineup as well as AMD's budget and mainstream graphics cards, should the company choose to go that route. Simpler packaging and PCB integration should also help keep yields from suffering the way they can with more complex memory subsystems.

 
Looks like next generation will be ditching HBM2 in a hope for better supply and less issues... I mean, GDDR5X turned out great for NVIDIA's high end...
 
Looks like next generation will be ditching HBM2 in a hope for better supply and less issues... I mean, GDDR5X turned out great for NVIDIA's high end...

I doubt AMD will be ditching the HBM2 implementation for its top tier cards. Availability, packaging and yields can only improve with time, and all the work for the controllers is already done. Seems counterproductive to send all that work to the gutter.

I reserve myself the right to be wrong, though :)
 
There is a reason for AMD and Nvidia to use GDDR6 given the shortcomings of the HBM2 supply.
And a few more reasons besides.

This is more about Computing, but it shows a direction in general.
http://images.nvidia.com/events/sc15/pdfs/SC_15_Keckler_distribute.pdf

[attached image: slide from the NVIDIA presentation linked above]
 
Does HBM even make sense outside computing? (where nVidia is also using it)
 
Does HBM even make sense outside computing? (where nVidia is also using it)

I'd say it does. That Intel+Vega thing has HBM and there it makes perfect sense.
 
There is a reason for AMD and Nvidia to use GDDR6 given the shortcomings of the HBM2 supply.
And a few more reasons besides.

This is more about Computing, but it shows a direction in general.

Well, that's Nvidia for you. They bash things they themselves don't use, then hype them to 11 when they start using them. Remember why HBM was created in the first place: because GDDR was consuming ever more power. Granted, G5X and, by the looks of it, G6 will keep power consumption at bay thanks to lower voltages, but HBM still has a massive advantage.

Plus the slide speculates about future HBM standards we have not even seen yet. I highly doubt future HBM standards up the operating voltage. At worst they keep it at the same level. HBM is also highly scalable in terms of density and die area.
 
There is a reason for AMD and Nvidia to use GDDR6 given the shortcomings of the HBM2 supply.
And a few more reasons besides.

This is more about Computing, but it shows a direction in general.
http://images.nvidia.com/events/sc15/pdfs/SC_15_Keckler_distribute.pdf


I wonder how they worked that out; HBM-type memory is inherently more power efficient and faster.

Does HBM even make sense outside computing? (where nVidia is also using it)

It sure as hell does; all those GFLOPS are useless if you can't access huge chunks of data fast enough. And this includes rendering: as far as current GPU architectures are concerned, there is no distinction between compute and classical graphics processing. They use the same resources and are bound by the same limitations.
 
It sure as hell does; all those GFLOPS are useless if you can't access huge chunks of data fast enough.

[attached image: memory bandwidth benchmark chart]


The main difference between the two that is very useful in compute is the lower latency of HBM.
Bandwidth alone was already there with GDDR5X. GDDR6 beats that.
 
GDDR6 beats that.

It doesn't, and when it gets close it needs a bus and memory chips that span the entire area of the card.

Maximum you can achieve on a 384bit bus (typical of today's cards) with GDDR6 is 768 GB/s according to the current predicted spec.

V100 with HBM2 gets close to 1 TB/s; I don't know how you figured out that it "beats that".
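Plugging the numbers from the posts above into the standard per-pin bandwidth formula (the V100's per-pin rate is approximate) shows where both figures come from:

```python
# Peak bandwidth (GB/s) = (bus width in bits / 8) * per-pin data rate (Gbps).
def peak_bandwidth_gb_s(bus_width_bits, pin_rate_gbps):
    return bus_width_bits / 8 * pin_rate_gbps

# GDDR6 at its predicted 16 Gbps maximum on a 384-bit bus:
print(peak_bandwidth_gb_s(384, 16))      # 768.0 GB/s

# V100's HBM2: 4096-bit bus at roughly 1.75 Gbps per pin:
print(peak_bandwidth_gb_s(4096, 1.75))   # 896.0 GB/s, close to the quoted ~1 TB/s
```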
 
It doesn't, and when it gets close it needs a bus and memory chips that span the entire area of the card.

Maximum you can achieve on a 384bit bus (typical of today's cards) with GDDR6 is 768 GB/s according to the current predicted spec.

V100 with HBM2 gets close to 1 TB/s; I don't know how you figured out that it "beats that".


Lower cost of implementation.
The interposer is expensive.
AMD has shown that even with HBM, the power-delivery hardware beside the die is still huge, so saving 20 W where cooling density isn't an issue...... isn't the issue.
 
Yes, it's costly and might make manufacturing more difficult, but it outperforms traditional on-board memory in every way.

But make no mistake, it's not going away.
 
Yes, it's costly and might make manufacturing more difficult, but it outperforms traditional on-board memory in every way.

But make no mistake, it's not going away.


It will eventually be on the same die.
 
Available and not a year and a half late would also help!
 
Available and not a year and a half late would also help!

There are different companies currently buying up the HBM2 chips. AMD has a contract that guarantees it a good percentage, but simply not enough to ensure a constant supply of cards.

You have Intel buying up HBM2. You have IBM buying up HBM2. And a lot more than just AMD or Nvidia. The fabs can only produce so many chips every month, and AMD gets a percentage of that.
 
That's all great, but the point still stands.

Nvidia's P100 HBM2 chip hit the market long ago, but I guess they were sensible (or as some would say... shady).
 
That's all great, but the point still stands.

Nvidia's P100 HBM2 chip hit the market long ago, but I guess they were sensible (or as some would say... shady).
P100 was also a low-volume, high-price part.
 
V100 with HBM2 gets close to 1 TB/s , I don't know how you figured out that it "beats that".
You were literally responding to a post with a memory bandwidth benchmark showing the Vega 64 (HBM2) is on par with the 1080 Ti (GDDR5X).
GDDR6 has more bandwidth than that.

The only metric where HBM2 in the Vega 64 beats GDDR5X in the 1080 Ti is latency. Which, you know, is not really that important in gaming (unlike compute, where Nvidia uses it too).
 
You were literally responding to a post with a memory bandwidth benchmark showing the Vega 64 (HBM2) is on par with the 1080 Ti (GDDR5X).
GDDR6 has more bandwidth than that.

The concepts of memory type and bus width seem alien to you, which is why you can't see that you are wrong. What you are describing has nothing to do with HBM2 vs GDDR5X vs GDDR6 vs whatever.

Come back when you have a better understanding of these things. Just some advice.
 
The concepts of memory type and bus width seem alien to you, which is why you can't see that you are wrong. What you are describing has nothing to do with HBM2 vs GDDR5X vs GDDR6 vs whatever.
Oh, please.
There surely are various configurations possible.
All other things the same, using GDDR6 instead of GDDR5X would lead to higher bandwidth.
 
All other things the same, using GDDR6 instead of GDDR5X would lead to higher bandwidth.

Good luck connecting GDDR6 chips with a 4096 bit wide bus.
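A rough illustration of the point: HBM trades per-pin speed for enormous width, so matching its bus with discrete chips takes an absurd part count. The device widths below are the standard JEDEC figures; the 4-stack configuration is the one used on cards like the V100:

```python
GDDR6_CHIP_WIDTH_BITS = 32    # one GDDR6 device provides a 32-bit interface
HBM2_STACK_WIDTH_BITS = 1024  # one HBM2 stack provides a 1024-bit interface

bus_bits = 4096  # total bus width of a 4-stack HBM2 card

# Devices needed to reach a 4096-bit bus:
print(bus_bits // GDDR6_CHIP_WIDTH_BITS)  # 128 GDDR6 chips
print(bus_bits // HBM2_STACK_WIDTH_BITS)  # 4 HBM2 stacks
```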
 
I feel like HBM is the Rambus of graphics card memory. The stats look great, but there's so little adoption.
 