Tuesday, August 23rd 2016

Samsung Bets on GDDR6 for 2018 Rollout

Even as its fellow Korean DRAM maker SK Hynix pushes for HBM3 to bring 2 TB/s memory bandwidths to graphics cards, Samsung is betting on relatively inexpensive standards that succeed existing ones. The company expects GDDR6, the memory standard that succeeds GDDR5X, to arrive by 2018.

GDDR6 will serve up per-pin data rates of up to 16 Gbps, up from the 10 Gbps currently offered by GDDR5X. This should enable memory bandwidths of 512 GB/s over a 256-bit wide memory interface, and 768 GB/s over a 384-bit one. The biggest innovation with GDDR6 that sets it apart from GDDR5X is LP4X, a technique that lets the memory controller keep voltages proportionate to clock speeds more responsively, reducing power draw by up to 20% over the previous standard.
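As a quick sanity check on those figures, the peak bandwidth of a GDDR-style interface is simply the per-pin data rate multiplied by the bus width, divided by eight to convert bits to bytes. A minimal sketch using the numbers quoted above:

```python
def peak_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s from per-pin data rate (Gbps) and bus width (bits)."""
    return data_rate_gbps * bus_width_bits / 8  # /8 converts bits to bytes

print(peak_bandwidth_gbs(16, 256))  # 512.0 GB/s -- 256-bit GDDR6 at 16 Gbps
print(peak_bandwidth_gbs(16, 384))  # 768.0 GB/s -- 384-bit GDDR6 at 16 Gbps
print(peak_bandwidth_gbs(10, 256))  # 320.0 GB/s -- 256-bit GDDR5X at 10 Gbps
```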
Source: ComputerBase.de

57 Comments on Samsung Bets on GDDR6 for 2018 Rollout

#26
dj-electric
TheGuruStud: Too bad it didn't really do anything compared to clocks.
Just allowed them to soar above 2 GHz.
Posted on Reply
#27
G33k2Fr34k
ZeppMan217: Why doesn't Nvidia use HBM?
Because of their highly aggressive delta color compression implementation. By aggressive I mean that the algorithms they use to compress pixel data probably use a lot of approximation. Meaning, if the delta between pixel (x, y) and a neighbouring pixel is "sufficiently small", both pixels are compressed into the same delta value.
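For illustration only, a toy sketch of the kind of lossy delta scheme speculated about above; the threshold, data layout, and function are assumptions, not Nvidia's actual algorithm:

```python
import numpy as np

def toy_delta_compress(row: np.ndarray, threshold: int = 2):
    """Encode one row of pixel values as a base pixel plus per-pixel deltas.
    Deltas with magnitude <= threshold are snapped to 0, i.e. the pixel is
    treated as identical to its left neighbour (the lossy 'approximation')."""
    base = int(row[0])
    prev = base
    deltas = []
    for value in row[1:]:
        delta = int(value) - prev
        if abs(delta) <= threshold:
            delta = 0          # "sufficiently small" -> same value as the neighbour
        deltas.append(delta)
        prev += delta          # track what a decoder would reconstruct
    return base, deltas

print(toy_delta_compress(np.array([100, 101, 101, 102, 140, 141], dtype=np.uint8)))
# (100, [0, 0, 0, 40, 0]) -- small differences vanish, large ones are kept
```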
Posted on Reply
#28
$ReaPeR$
Well.. this is pathetic.. I understand the usage scenario for this type of memory, and I also understand that companies will do anything to keep the status quo, but I still find it pathetic. After all, HBM in great quantities would be cheap, as is everything produced in great quantities.
Posted on Reply
#29
Captain_Tom
mroofie: :D
You mean like how the 1080 is 10% stronger than the old Fury X in the latest games? Yeah that is... "Impressive"...
Posted on Reply
#30
Captain_Tom
$ReaPeR$: Well.. this is pathetic.. I understand the usage scenario for this type of memory, and I also understand that companies will do anything to keep the status quo, but I still find it pathetic. After all, HBM in great quantities would be cheap, as is everything produced in great quantities.
I guess it just depends on how expensive it is. I think HBM1 was like $100 - $150 for 4GB. Cheaper HBM2 about to be produced is likely $75 for 4GB, and it will at least beat the FUTURE 384-bit GDDR6 in both performance and power usage. Unless they can sell GDDR6 for like a third of the price, I just don't see it being useful for anything but $150-and-lower cards.
Posted on Reply
#32
arbiter
Captain_Tom: I guess it just depends on how expensive it is. I think HBM1 was like $100 - $150 for 4GB. Cheaper HBM2 about to be produced is likely $75 for 4GB, and it will at least beat the FUTURE 384-bit GDDR6 in both performance and power usage. Unless they can sell GDDR6 for like a third of the price, I just don't see it being useful for anything but $150-and-lower cards.
To use HBM you have to mount both the GPU and the memory chips onto an interposer, which adds another step in the process where things can go wrong. So it's not just about the cost of the chips; it's also the interposer and the yields you get once the chips are mounted to it. Some chips will fail after that process. The other side of it is the bandwidth actually needed: as you could see with the Fury X, insane memory bandwidth is just a paper stat that doesn't help if the chip can't utilize it to its full ability.
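Purely as an illustration of that yield argument, a toy compound-yield calculation; the percentages below are made-up assumptions, but they show how every extra assembly step multiplies the chance of losing an otherwise-good part:

```python
# Toy yield model for an HBM package: GPU die, four memory stacks, interposer mounting.
# All yields below are hypothetical, purely to illustrate the argument.
gpu_yield = 0.90          # chance a GPU die is good
hbm_stack_yield = 0.95    # chance one HBM stack is good
stacks = 4                # Fury X style configuration uses four stacks
assembly_yield = 0.97     # chance the interposer mounting step succeeds

package_yield = gpu_yield * (hbm_stack_yield ** stacks) * assembly_yield
print(f"Good packages per 100 attempts: {package_yield * 100:.0f}")  # ~71
```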
Posted on Reply
#33
$ReaPeR$
Captain_Tom: I guess it just depends on how expensive it is. I think HBM1 was like $100 - $150 for 4GB. Cheaper HBM2 about to be produced is likely $75 for 4GB, and it will at least beat the FUTURE 384-bit GDDR6 in both performance and power usage. Unless they can sell GDDR6 for like a third of the price, I just don't see it being useful for anything but $150-and-lower cards.
arbiter: To use HBM you have to mount both the GPU and the memory chips onto an interposer, which adds another step in the process where things can go wrong. So it's not just about the cost of the chips; it's also the interposer and the yields you get once the chips are mounted to it. Some chips will fail after that process. The other side of it is the bandwidth actually needed: as you could see with the Fury X, insane memory bandwidth is just a paper stat that doesn't help if the chip can't utilize it to its full ability.
I think we all understand how the production process works: when something is done at large scale it can be done more efficiently and at reduced cost. If that process is used only for halo products, you cannot expect the costs or the efficiency to change drastically, or even at all.
Posted on Reply
#34
MxPhenom 216
ASIC Engineer
ZoneDymo: Yep, and sadly that's all they are after; not actually pushing the envelope or propelling humanity forward with cutting-edge tech, just tiny mouse steps, just enough to beat the competition for easy maximum profit.
I thought Nvidia was a corporation? What are corporations after? Oh yeah, money.
RejZoR: Pascal is also supposed to be using tile-based rendering, something not found on Maxwell 1/2.
If tiled resources is the same thing as tile-based rendering, even Kepler had that.
Posted on Reply
#35
Captain_Tom
arbiter: To use HBM you have to mount both the GPU and the memory chips onto an interposer, which adds another step in the process where things can go wrong. So it's not just about the cost of the chips; it's also the interposer and the yields you get once the chips are mounted to it. Some chips will fail after that process. The other side of it is the bandwidth actually needed: as you could see with the Fury X, insane memory bandwidth is just a paper stat that doesn't help if the chip can't utilize it to its full ability.
I'm not saying the Fury X doesn't have more bandwidth than "needed", but I will say the extra bandwidth did add to the performance. Every single card (both AMD and Nvidia) that I have overclocked gained massively from increased memory speeds. The fact is that cards have been relatively memory-starved for the past 5 years. Compare the bandwidth and TFLOP increases that have happened over recent years: the Fury X has 2.5x the computational power while having less than double the bandwidth. Even with memory compression, it really isn't enough.

Cards. Need. More. Bandwidth. NOW. Look at the pathetic gains received from overclocking the 1080's core by upwards of 20%, and the massive gains that come from overclocking just the memory on the RX 480.
Posted on Reply
#36
Naito
Caring1: Because they are cheapskates and don't care about the consumer, only profits!
What's the point of being in business if you're not in it to make profit?
TheGuruStud: Screw off, Sammy. HBM is clearly the future. I bet they're just mad they turned down AMD for HBM lolz (I'm assuming that AMD approached them b/c why wouldn't you).
There will be a need for alternatives to HBM for a long time to come yet. It has nothing to do with Nvidia refusing to use it in consumer products, but more to do with yields, costs, economics, technical limitations, form factors, and other variables. There is no point strapping HBM onto a low/mid-tier, power-sipping notebook GPU at this stage, is there? There is a clear market here, and Samsung is looking to fill the need. You don't want low/mid-tier GPUs to be stuck on GDDR5/X for the next 5 to 10 years because there was no viable alternative to HBM to keep costs down. You want the best performance you can get for your money, and GDDR6 should be just that.

Do people just cry their fan boy opinions without actually stopping and thinking? Never mind, the answer to this is obvious - it's the internet after all.
Posted on Reply
#37
yotano211
I run a full-time eBay selling business by myself; without profits I couldn't pay the rent every month or go on 8-10 weeks of vacation every year.

So yea, every business cares about profits, but they also have to care about the customer(s). I care about both.
Posted on Reply
#38
Fluffmeister
AMD are basically crowdfunded at this stage, so they will be fine.

With that said, I'm waiting for HBM6 before I jump.
Posted on Reply
#39
Captain_Tom
Naito: What's the point of being in business if you're not in it to make profit?



There will be a need for alternatives to HBM for a long time to come yet. It has nothing to do with Nvidia refusing to use it in consumer products, but more to do with yields, costs, economics, technical limitations, form factors, and other variables. There is no point strapping HBM onto a low/mid-tier, power-sipping notebook GPU at this stage, is there? There is a clear market here, and Samsung is looking to fill the need. You don't want low/mid-tier GPUs to be stuck on GDDR5/X for the next 5 to 10 years because there was no viable alternative to HBM to keep costs down. You want the best performance you can get for your money, and GDDR6 should be just that.

Do people just cry their fan boy opinions without actually stopping and thinking? Never mind, the answer to this is obvious - it's the internet after all.
You are right, but only if GDDR6 is priced accordingly (it probably will be). Let's say HBM3 costs $50 for 4GB; then IMO GDDR6 should cost $15. There is a place for something besides HBM, but only if it is much stronger or dirt cheap.
Posted on Reply
#40
arbiter
$ReaPeR$: I think we all understand how the production process works: when something is done at large scale it can be done more efficiently and at reduced cost. If that process is used only for halo products, you cannot expect the costs or the efficiency to change drastically, or even at all.
Most people don't really understand it. Most people don't realize that to use HBM you have to get a GPU that passes testing and HBM chips that pass, then you have to put them all on an interposer, which could ruin it all if it doesn't go perfectly. All of that adds to the cost. You say we all understand it, but I don't think as many understand it as you think.

If they can get that performance out of GDDR6, I can see Nvidia skipping HBM, as it would remove one possible step where a part could be wasted.
Posted on Reply
#41
BiggieShady
TheGuruStud: Too bad it didn't really do anything compared to clocks.
:confused:
Dj-ElectriC: Just allowed them to soar above 2 GHz.
... aaand the sudden realization it's all connected, imagine ... efficiency affects maximum achievable performance
Posted on Reply
#42
Xajel
ZeppMan217: Why doesn't Nvidia use HBM?
HBM1 was limited to only 4 GB; that's why Fury only had 4 GB.

HBM2 isn't limited to 4 GB, but it's still in limited supply and costs more, so NV has only used it in their pro computing products... HBM2 supply is supposed to improve by the end of this year or early next year, which is why both NV and AMD have postponed its use to next year.
Posted on Reply
#43
medi01
All in all, guys, I can't quite feel the same joy you seem to be feeling in this thread.

nVidia seems to be able to develop three different chips in parallel, and that with serious architectural changes from project to project.
AMD is limited to rolling out one segment at a time and is making rather small changes to its existing architecture.

Volta is expected in 2017 and it might be to Pascal what Maxwell was to Kepler.
Meanwhile AMD is merely competitive in the low range, and even that might evaporate in 2017.

We might end up with AMD not being able to compete in any segment in 2017, and if so, it will be game over.
ZeppMan217: Why doesn't Nvidia use HBM?
Except it DOES use HBM2 with the GP100 chip.

And if you were wondering about something else, namely:
Q: Why did AMD bother with HBM in Fury?
A: Because, being an underdog in a rather desperate positions, they need to gamble on new tech.

Q: Why do nVidia cards normally need less bandwidth than AMD cards?
A: Compression on nVidia cards is said to be more effective (although AMD should be closing the gap with Polaris). Architectural differences (yeah, a vague statement, I know) and more effective use of cache might also play a role.
64K: 36% to 42% faster in overall gaming than Maxwell
I guess you are comparing a $450 chip (1070) to a $330 chip (970), which makes a lot of sense.
RejZoR: Pascal is also supposed to be using tile-based rendering, something not found on Maxwell 1/2.
I doubt the "not found on Maxwell" part.
MxPhenom 216: I thought Nvidia was a corporation? What are corporations after? Oh yeah, money.
Yeah, "companies make money" = "all companies get as low as nVidia; after all, they also make money". Very logical statement.
Posted on Reply
#44
$ReaPeR$
arbiter: Most people don't really understand it. Most people don't realize that to use HBM you have to get a GPU that passes testing and HBM chips that pass, then you have to put them all on an interposer, which could ruin it all if it doesn't go perfectly. All of that adds to the cost. You say we all understand it, but I don't think as many understand it as you think.

If they can get that performance out of GDDR6, I can see Nvidia skipping HBM, as it would remove one possible step where a part could be wasted.
Obviously GDDR cannot match HBM performance-wise; its only advantage is cost, and that is only because HBM production is in its infancy.
Posted on Reply
#45
64K
medi01: I guess you are comparing a $450 chip (1070) to a $330 chip (970), which makes a lot of sense.
My statement was aimed at a quote that said Pascal only got higher clocks and no major improvement otherwise. Prices are a different story. I think the prices are too high, but that's just speaking for myself, and in reality where can you turn for something competitive with the 1070, 1080 and Pascal Titan X? AMD? No, at least not for a while. Markets need competition to function in a healthy way.
Posted on Reply
#46
medi01
64K: My statement was aimed at a quote that said Pascal only got higher clocks and no major improvement otherwise. Prices are a different story.
No, not really.
The bump was about 20%; you claimed twice that.

AIB 980 Ti's are more than competitive vs the 1070.
Posted on Reply
#47
64K
medi01: No, not really.
The bump was about 20%; you claimed twice that.

AIB 980 Ti's are more than competitive vs the 1070.
I was speaking about overall performance gain over a test suite of games.

GTX 1070 over GTX 970 gain
at 1440p 38% faster
at 4K 40% faster

www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1070/24.html

GTX 1080 over GTX 980 gain
at 1440p 40% faster
at 4K 41% faster

www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1080/26.html

Pascal Titan X over Maxwell Titan X gain
at 1440p 40% faster
at 4K 42% faster

www.techpowerup.com/reviews/NVIDIA/Titan_X_Pascal/24.html
Posted on Reply
#48
medi01
64K: I was speaking about overall performance gain over a test suite of games.
I was speaking about the similarly priced card, the 980.

Comparing a 450 Euro 1070 to a 330 Euro 970 is ridiculous.

The same goes for the 1080, which is even more expensive than the 980 Ti was.

Oh, and here is a site where they know how %'s work, hover at will:
www.computerbase.de/2016-08/msi-geforce-gtx-1060-oder-sapphire-radeon-rx-480/2/#diagramm-performancerating-1920-1080

AIB 1070 is 23-24% faster than the 980 (1440p/1080p)
AIB 1080 is 19-20% faster than the 980 Ti (1440p/1080p); stock is only 10% faster
Posted on Reply
#49
BiggieShady
medi01: Comparing a 450 Euro 1070 to a 330 Euro 970 is ridiculous.
Weren't you already told that
64K: Prices are a different story.
By your logic any comparable product cannot be compared if it falls outside of a price range :shadedshu:
RejZoR: Pascal is also supposed to be using tile-based rendering, something not found on Maxwell 1/2.
medi01: I doubt the "not found on Maxwell" part.
Kepler doesn't use tile-based rendering; Maxwell does. Pascal uses the same algorithm as Maxwell but with differently sized tiles (adjusted for Pascal's larger cache per core).
Posted on Reply
#50
medi01
BiggieShady: By your logic any comparable product cannot be compared if it falls outside of a price range :shadedshu:
Yay, how surprising, isn't it? News at 10!

Oh wait, we do have comparably priced products: 980 vs 1070, 980 Ti vs 1080.
But let's compare the 1070 vs the 970 and the 1080 vs the 980 to get bogus, higher "improvement" numbers, shall we?
Posted on Reply