Wednesday, December 6th 2017

AMD Develops GDDR6 Controller for Next-generation Graphics Cards, Accelerators

This news doesn't come as much of a surprise; it's more a statement of logical, albeit unconfirmed, facts than anything unexpected. AMD is (naturally) working on a GDDR6 memory controller, which it's looking to leverage in its next generation of graphics cards. This is an expected move: AMD is expected to continue using more exotic HBM memory implementations on its top-tier products, but that leaves a lot of GPUs in its product stack that need to be fed by high-speed memory solutions. With GDDR6 nearing widespread production and availability, it's only natural that AMD is looking to upgrade its controllers for the less expensive, easier-to-implement memory solution on its future products.

The confirmation is still worth mentioning, though, as it comes straight from a principal engineer on AMD's technical team, Daehyun Jun. A LinkedIn entry (since removed) stated that he had been working on a DRAM controller for GDDR6 memory since September 2016. GDDR6 memory brings the advantages of higher operating frequencies and lower power consumption over GDDR5 memory, and should deliver higher potential top frequencies than GDDR5X, which is already employed in top-tier NVIDIA cards. GDDR6, when released, will start by delivering today's GDDR5X top speeds of roughly 14 Gbps, with a current maximum of 16 Gbps being achievable on the technology. This means more bandwidth (up to double over current 8 Gbps GDDR5) and higher-clocked memory. GDDR6 will be rated at 1.35 V, the same as GDDR5X.
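As a quick sanity check on those per-pin rates, here is a back-of-the-envelope sketch of how they translate into total bandwidth. The 256-bit bus width is an assumed example for illustration, not a figure from the article:

```python
# Rough bandwidth math for the per-pin rates quoted above.
# Total bandwidth (GB/s) = per-pin rate (Gbps) * bus width (bits) / 8 bits-per-byte.
# The 256-bit bus width is an assumed example, not from the article.

def bandwidth_gbps(per_pin_gbit, bus_width_bits):
    """Total memory bandwidth in GB/s for a given per-pin rate and bus width."""
    return per_pin_gbit * bus_width_bits / 8

bus = 256  # assumed example bus width in bits
for name, rate in [("GDDR5 8 Gbps", 8), ("GDDR5X/GDDR6 14 Gbps", 14), ("GDDR6 16 Gbps", 16)]:
    print(f"{name:22s} on a {bus}-bit bus: {bandwidth_gbps(rate, bus):.0f} GB/s")
# 16 Gbps GDDR6 delivers exactly double the bandwidth of 8 Gbps GDDR5
# on the same bus, matching the "up to double" figure above.
```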
SK Hynix, Samsung, and Micron have all announced their GDDR6 processes, so availability should be enough to fill NVIDIA's lineup, and AMD's budget and mainstream graphics cards, should the company choose to go that route. Simpler packaging and PCB integration should also help keep yields up compared to more complex memory subsystems. Sources: Tweakers.net, Guru3D, Thanks @ P4-630!
Add your own comment

25 Comments on AMD Develops GDDR6 Controller for Next-generation Graphics Cards, Accelerators

#1
RejZoR
Looks like the next generation will be ditching HBM2 in the hope of better supply and fewer issues... I mean, GDDR5X turned out great for NVIDIA's high end...
Posted on Reply
#2
Raevenlord
News Editor
RejZoR said:
Looks like the next generation will be ditching HBM2 in the hope of better supply and fewer issues... I mean, GDDR5X turned out great for NVIDIA's high end...
I doubt AMD will be ditching the HBM2 implementation for its top tier cards. Availability, packaging and yields can only improve with time, and all the work for the controllers is already done. Seems counterproductive to send all that work to the gutter.

I reserve myself the right to be wrong, though :)
Posted on Reply
#4
medi01
Does HBM even make sense outside computing? (where nVidia is also using it)
Posted on Reply
#5
Frick
Fishfaced Nincompoop
medi01 said:
Does HBM even make sense outside computing? (where nVidia is also using it)
I'd say it does. That Intel+Vega thing has HBM and there it makes perfect sense.
Posted on Reply
#6
Tomorrow
_Flare said:
There is a reason for AMD and Nvidia to use GDDR6 over the shortcomings of the HBM2 supply.
And some further reasons.

This is more about Computing, but it shows a direction in general.
Well, that's Nvidia for you. They bash things they themselves do not use, but hype them to 11 when they start using them. Remember why HBM was created in the first place: because GDDR was consuming ever more power. Granted, G5X and, by the looks of it, G6 will keep power consumption at bay due to lower voltages, but HBM still has a massive advantage.

Plus the slide speculates about future HBM standards we have not even seen yet. I highly doubt future HBM standards up the operating voltage. At worst they keep it at the same level. HBM is also highly scalable in terms of density and die area.
Posted on Reply
#7
Vya Domus
_Flare said:
There is a reason for AMD and Nvidia to use GDDR6 over the shortcomings of the HBM2 supply.
And some further reasons.

This is more about Computing, but it shows a direction in general.
http://images.nvidia.com/events/sc15/pdfs/SC_15_Keckler_distribute.pdf


I wonder how they worked that out; HBM-type memory is inherently more power efficient and faster.

medi01 said:
Does HBM even make sense outside computing? (where nVidia is also using it)
It sure as hell does, all those GFLOPS are useless if you can't access huge chunks of data fast enough. And this includes rendering; as far as current GPU architectures are concerned, there is no distinction between compute and classical graphics processing. They use the same resources and are bound by the same limitations.
Posted on Reply
#8
medi01
Vya Domus said:
It sure as hell does, all those GFLOPS are useless if you can't access huge chunks of data fast enough.


The main difference between the two that is very useful in compute is the lower latency of HBM.
Bandwidth alone is already there with GDDR5X. GDDR6 beats that.
Posted on Reply
#9
Vya Domus
medi01 said:
GDDR6 beats that.
It doesn't, and when it gets close it needs a bus and memory chips that span the entire area of the card.

The maximum you can achieve on a 384-bit bus (typical of today's cards) with GDDR6 is 768 GB/s, according to the currently predicted spec.

V100 with HBM2 gets close to 1 TB/s; I don't know how you figured out that it "beats that".
Posted on Reply
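The arithmetic in the post above can be checked directly. The 16 Gbps GDDR6 per-pin rate comes from the predicted spec discussed in the article; the 1.75 Gbps HBM2 pin rate is an assumption chosen to match V100's published "close to 1 TB/s" bandwidth:

```python
# Checking the bandwidth figures from the post above.
# Bandwidth (GB/s) = per-pin rate (Gbps) * bus width (bits) / 8.

# GDDR6 on a 384-bit bus at the predicted 16 Gbps per pin:
gddr6_gbps = 16 * 384 / 8      # 768 GB/s, as stated

# V100's HBM2: a 4096-bit bus; 1.75 Gbps per pin is assumed here to
# reproduce the "close to 1 TB/s" figure cited for the card.
hbm2_gbps = 1.75 * 4096 / 8    # 896 GB/s

print(f"GDDR6, 384-bit: {gddr6_gbps:.0f} GB/s")
print(f"HBM2, 4096-bit: {hbm2_gbps:.0f} GB/s")
```

The wide 4096-bit bus is what lets HBM2 exceed GDDR6's ceiling despite a far lower per-pin rate.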
#10
Steevo
Vya Domus said:
It doesn't, and when it gets close it needs a bus and memory chips that span the entire area of the card.

The maximum you can achieve on a 384-bit bus (typical of today's cards) with GDDR6 is 768 GB/s, according to the currently predicted spec.

V100 with HBM2 gets close to 1 TB/s; I don't know how you figured out that it "beats that".
Lower cost of implementation.
The interposer is expensive.
AMD has shown that even with HBM, the die-side power delivery is still huge, so saving 20 W where cooling density isn't an issue... isn't the issue.
Posted on Reply
#11
Vya Domus
Yes, it's costly and might make manufacturing more difficult, but it outperforms traditional on-board memory in every way.

Make no mistake, it's not going away.
Posted on Reply
#12
Steevo
Vya Domus said:
Yes, it's costly and might make manufacturing more difficult, but it outperforms traditional on-board memory in every way.

Make no mistake, it's not going away.
It will eventually be on the same die.
Posted on Reply
#14
Fluffmeister
Available and not a year and a half late would also help!
Posted on Reply
#15
Jism
Fluffmeister said:
Available and not a year and a half late would also help!
There are different companies currently buying up the HBM2 chips. AMD has a contract where it's guaranteed a good percentage, but simply not enough to ensure the constant delivery of cards.

You have Intel buying up HBM2. You have IBM buying up HBM2. And a lot more than just AMD or Nvidia. The memory makers can only produce X amount of chips every month, and AMD gets a percentage of that.
Posted on Reply
#16
Fluffmeister
That's all great, but the point still stands.

Nvidia's P100 HBM2 chip hit the market long ago, but I guess they were sensible (or as some would say... shady).
Posted on Reply
#17
Tomorrow
Fluffmeister said:
That's all great, but the point still stands.

Nvidia's P100 HBM2 chip hit the market long ago, but I guess they were sensible (or as some would say... shady).
P100 was also a low-volume, high-price part.
Posted on Reply
#18
Fluffmeister
Tomorrow said:
P100 was also a low-volume, high-price part.
Which is exactly my point.
Posted on Reply
#19
medi01
Vya Domus said:
V100 with HBM2 gets close to 1 TB/s; I don't know how you figured out that it "beats that".
You were literally responding to a post with a memory bandwidth benchmark showing Vega 64 (HBM2) is on par with the 1080 Ti (GDDR5X).
GDDR6 has more bandwidth than that.

The only metric by which HBM2 in Vega 64 beats GDDR5X in the 1080 Ti is latency. Which, you know, is not really that important in gaming (unlike compute, where NVIDIA is using it too).
Posted on Reply
#20
Vya Domus
medi01 said:
You were literally responding to a post with a memory bandwidth benchmark showing Vega 64 (HBM2) is on par with the 1080 Ti (GDDR5X).
GDDR6 has more bandwidth than that.
The concepts of memory type and bus width seem alien to you, which is why you can't see that you are wrong. What you are describing has nothing to do with HBM2 vs GDDR5X vs GDDR6 vs whatever.

Come back when you have a better understanding of these things. Just some advice.
Posted on Reply
#21
medi01
Vya Domus said:
The concepts of memory type and bus width seem alien to you, which is why you can't see that you are wrong. What you are describing has nothing to do with HBM2 vs GDDR5X vs GDDR6 vs whatever.
Oh, please.
There surely are various configurations possible.
All other things being the same, using GDDR6 instead of GDDR5X would lead to higher bandwidth.
Posted on Reply
#22
R0H1T
medi01 said:
Oh, please.
There surely are various configurations possible.
All other things being the same, using GDDR6 instead of GDDR5X would lead to higher bandwidth.
Well, you're wrong on that one: GDDR5X and GDDR6 are both rated up to 16 Gbps (QDR), and in the end the memory bus (width) determines the winner. HBM is rated up to 2 TB/s IIRC, with HBM3.
Posted on Reply
#23
Vya Domus
medi01 said:

All other things being the same, using GDDR6 instead of GDDR5X would lead to higher bandwidth.
Good luck connecting GDDR6 chips with a 4096-bit-wide bus.
Posted on Reply
#24
mab1376
I feel like HBM is the Rambus of graphics card memory. The stats look great, but there's so little adoption.
Posted on Reply
#25
newtekie1
Semi-Retired Folder
medi01 said:
All other things being the same, using GDDR6 instead of GDDR5X would lead to higher bandwidth.
100% wrong. Bandwidth comes down to three things: data rate, clock speed, and bus width.

GDDR6 and GDDR5X both have the same data rate. So if clock speed and bus width are also both the same, meaning all other things are the same, GDDR6 and GDDR5X will produce the exact same bandwidth. Period.
Posted on Reply
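The three-factor breakdown in the post above can be sketched numerically. The 3.5 GHz interface clock and 256-bit bus below are assumed example values, not figures from the thread; the point is only that identical inputs give identical bandwidth regardless of the memory type's name:

```python
# Bandwidth from the three factors named above: data rate (transfers per
# clock), clock speed, and bus width. Example numbers are assumptions.

def bandwidth_gbps(transfers_per_clock, clock_ghz, bus_width_bits):
    """Total bandwidth in GB/s from the three factors."""
    return transfers_per_clock * clock_ghz * bus_width_bits / 8

# Both memory types modeled as QDR (4 transfers/clock) at an assumed
# 3.5 GHz effective interface clock on an assumed 256-bit bus:
gddr5x = bandwidth_gbps(4, 3.5, 256)
gddr6 = bandwidth_gbps(4, 3.5, 256)

# Same data rate, clock, and bus width -> same bandwidth, as the post argues.
print(f"GDDR5X: {gddr5x:.0f} GB/s, GDDR6: {gddr6:.0f} GB/s")
```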