Tuesday, January 13th 2015

Next AMD Flagship Single-GPU Card to Feature HBM

AMD's next flagship single-GPU graphics card, codenamed "Fiji," could feature High-Bandwidth Memory (HBM). The technology uses stacked DRAM to increase memory bandwidth while reducing the GPU pin-count needed to achieve that bandwidth, potentially shrinking die-size and TDP. Despite this, "Fiji" could feature a TDP hovering around the 300W mark, because AMD will cram in all the pixel-crunching muscle it can, spending the efficiency gained from components such as memory on raw performance. AMD is expected to launch new GPUs in 2015, despite slow progress from foundry partner TSMC to introduce newer silicon fabs, as the company's lineup is fast losing competitiveness to NVIDIA's GeForce "Maxwell" family.
Source: The Tech Report

119 Comments on Next AMD Flagship Single-GPU Card to Feature HBM

#51
64K
the54thvoidYes you can. The sales are based on what each company has on offer right now, not what generation or release date they are. 290x is AMD's best card right now. GTX 980 is Nvidia's best card (sort of). It's not an issue of which is newer.

It is AMD's problem they don't have a performance competitor (on perf/watt), not the market's. FWIW, I think their next card should hit the mark based on rumours so far. I think it may be as fast as GM200 but it will consume more power. But if it's a faster card and better at 4K, power draw be damned. All that being said, it's only my opinion.
Power draw is irrelevant to me as well. Even if my next card drew 300 watts, which is about 150 watts more than my present card, it wouldn't amount to anything. I game an average of about 15 hours a week and my electricity costs 10 cents per kWh, so the difference would be a little less than $1 a month on my bill. What can you buy with $1 these days? A pack of crackers at a convenience store, I guess.
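Rough math, just to show the working (a sketch using the numbers above: ~150 W extra, ~15 hours/week, $0.10 per kWh):

```python
# Rough monthly cost of an extra ~150 W of GPU draw while gaming.
extra_kw = 0.150            # extra draw in kilowatts
hours_per_week = 15         # average gaming time
price_per_kwh = 0.10        # $ per kWh

weeks_per_month = 52 / 12   # ~4.33 weeks in an average month
extra_kwh = extra_kw * hours_per_week * weeks_per_month
monthly_cost = extra_kwh * price_per_kwh

print(f"extra energy: {extra_kwh:.2f} kWh/month, cost: ${monthly_cost:.2f}/month")
# ~9.75 kWh/month, i.e. just under $1 a month
```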
Posted on Reply
#52
PLAfiller
64KPower draw is irrelevant to me as well. Even if my next card drew 300 watts, which is about 150 watts more than my present card, it wouldn't amount to anything. I game an average of about 15 hours a week and my electricity costs 10 cents per kWh, so the difference would be a little less than $1 a month on my bill. What can you buy with $1 these days? A pack of crackers at a convenience store, I guess.
+1, excellent argument. Love it when numbers speak.
Posted on Reply
#53
GhostRyder
rtwjunkieWell, you KNOW there is always an extreme fanboy who joins just to troll and dump on a thread, completely unaware that bta is not biased. Happens on both sides of the fence, sadly, depending whether the news is about the green side or red side.
Yeah, and it gets quite old, especially when you get people intentionally talking/joking to cook up an argument. It makes the thread cluttered and hard to pull good information from.
Vayra86Yes, but that's a dual-GPU card...

The issue is that if you put a single GPU past the 300w mark, many people will run into issues with power supplies for example. It hurts sales, many systems will be incompatible.
The TDP of an R9 290X is 290 W according to the TechPowerUp database, so 10 more is not really much of a difference. On top of that, TDP usually isn't representative of actual power usage anyway: the reference card draws less than 290 watts under load depending on fan speed, the fans themselves don't account for much of the power, and the GPGPU workloads where I see these figures climb toward ~290 W aren't representative of real-world use most of the time. Either way, I do not think we will be sweating too much over something a little bit higher.

I think focusing on performance is better in many ways, as long as you do not hit a point that is beyond crazy; otherwise you nullify part of your market by forcing people to invest in more expensive parts just to run already expensive parts. Power draw matters to me, but more for the mobile class than anything, as the higher class of cards is aimed at the extreme ranges of setups (higher resolution, surround/Eyefinity, 3D, etc.).
64KPower draw is irrelevant to me as well. Even if my next card drew 300 watts, which is about 150 watts more than my present card, it wouldn't amount to anything. I game an average of about 15 hours a week and my electricity costs 10 cents per kWh, so the difference would be a little less than $1 a month on my bill. What can you buy with $1 these days? A pack of crackers at a convenience store, I guess.
Bingo, though to be fair some places do have very high electricity costs compared to you or me, so I can see it somewhat. But even then, unless you're stressing your computer 24/7 it will not amount to much.
Posted on Reply
#54
Ionut B
All I want to say is, I think they should bring the temps down. I don't really care if they designed the chip to run at 100 degrees Celsius. Nvidia did the same with the 5xx series, which ran really hot. They forget I have other components in my system which may be affected by the high temperatures of the GPU.
So, high temps means more heat means less OC for my CPU. All on air btw.
80 is the maximum acceptable imo.
Posted on Reply
#55
TRWOV
LocutusHThese AMD hyper-space-nextgen-dxlevel13123-gcn technologies look so good on paper, but somehow always fail to show their strength when it comes to real-world games after release...
If history repeats itself, HBM will become the new GPU memory standard, just like GDDR3 and GDDR5 did in the past (GDDR4 was kind of a misstep for ATi, and only they used it). I would say those are more than proven successes for ATi's R&D. Unified shaders and tessellation also caught on, although ATi's tessellation engine (TruForm) didn't become the standard.
Posted on Reply
#56
Sasqui
Ionut BAll I want to say is, I think they should bring the temps down. I don't really care if they designed the chip to run at 100 degrees Celsius. Nvidia did the same with the 5xx series, which ran really hot. They forget I have other components in my system which may be affected by the high temperatures of the GPU.
So, high temps means more heat means less OC for my CPU. All on air btw.
80 is the maximum acceptable imo.
The 290x reference cooler is/was a complete piece of crap, both in design and manufacturing.
Posted on Reply
#57
Pehla
I'm beginning to believe in a conspiracy theory!! Maybe Intel and/or Nvidia is paying those shitlords not to give AMD a die shrink!! I mean, everyone does that..., but not AMD...
Samsung goes to freaking 14nm..., Intel shrinks as well, Nvidia..., but not AMD..., there is something really weird in that picture!!
But don't judge me..., it's just a theory..., something that crossed my mind :)
Posted on Reply
#58
xvi
Ionut BAll I want to say is, I think they should bring the temps down. I don't really care if they designed the chip to run at 100 degrees Celsius. Nvidia did the same with the 5xx series, which ran really hot. They forget I have other components in my system which may be affected by the high temperatures of the GPU.
So, high temps means more heat means less OC for my CPU. All on air btw.
80 is the maximum acceptable imo.
Eeehhh.. If I recall correctly, someone (I think Intel) has been doing research on high-temp computing, with the theory that it may be cost effective to design products that can run safely at rather high temperatures, the intended benefit being that the components become easier (or rather, cheaper) to cool. Imagine a CPU that throttled at, say, 150°C and could run quite happily at 120°C. The amount of thermal energy a heatsink dissipates increases with the thermal delta, so what if we increased that delta by making the hot side hotter? If AMD's FX chips and Intel's 46xx/47xx chips could run at those temps, we could probably use the stock cooler to achieve the same overclocks we see on high-end air, and high-end air in turn could push into new territory.

The problem with products that run hot isn't that they were designed to run hot, but more accurately that they were designed to run so close to thermal limits. If those nVidia cards could run at 150c, they'd just turn down the fan speed and most everyone would be happy.
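To put rough numbers on the thermal-delta idea (a sketch only; the 0.3 °C/W heatsink figure is a made-up, mid-range-air-cooler sort of value):

```python
# Heat a cooler can move scales with the temperature delta:
#   Q = (T_die - T_ambient) / R_theta
r_theta = 0.3        # °C per watt, hypothetical heatsink thermal resistance
t_ambient = 25.0     # °C room temperature

for t_die in (80.0, 100.0, 150.0):
    q = (t_die - t_ambient) / r_theta
    print(f"die allowed to reach {t_die:.0f} °C -> same cooler sheds ~{q:.0f} W")
# 80 °C  -> ~183 W
# 100 °C -> ~250 W
# 150 °C -> ~417 W
```
Same cooler, higher allowed die temperature, more watts dissipated - which is the whole appeal.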
RejZoRI was thinking of just ordering a GTX 970, but now I'm hesitating again. Argh.
Exact same situation for me. I've been considering a 970, but only because it does really well in the one game I want to play and it plays nicely with some backlighting hardware I have. I'd prefer AMD, but even if the new card performs below expectations, at the very least, it should bump the GTX 970's price down.
Posted on Reply
#59
W1zzard
If you have a 200W heat load, the heat output to your system/room is the same (200W), no matter if the card is running cool but with high fan speed or warm with low fan speed.
64Kmy electricity costs
You still have heat dumped into your room / high fan noise
Posted on Reply
#60
RejZoR
I have a ridiculously low custom fan speed profile on my HD7950, so it's absolutely silent. It runs hot, but it's silent. So I frankly don't really care what TDP it has, as long as the cooler can deal with it at low RPM. Which means my next card will be a WindForce 3X again for sure.
Posted on Reply
#61
FordGT90Concept
"I go fast!1!11!1!"
We need to know more about the performance before we can judge whether 300W is a bad thing or not. If it has 3-5 times the performance of a 290X, I'd argue that it isn't going to waste. When you can get one 300W card that replaces two 200W cards, I'd call that a win.
Posted on Reply
#62
Sasqui
FordGT90ConceptIf it has 3-5 times the performance of a 290X,
Very doubtful it's anywhere close to that magnitude (HBM only implies higher memory bandwidth)... unless they are talking about an architecture change or a seriously higher clock on the GPU, it'll probably be on the order of a 10%-25% improvement. Just guessin'
Posted on Reply
#63
HumanSmoke
RejZoRYou can't base the findings on the fact that GPUs are like 1 year apart...
They both compete in the same market at the same time, and both are current (non-EOL), therefore they can.

By your reasoning, Intel's latest 2-3 platform offerings shouldn't have reviews including AMD FX and 990X chipsets for comparison, since the AMD platform is over 2 (Vishera) and 3 (900 series chipset) years old.
RejZoR
hardcore_gamerYes, but can it play at 4K ?
Most likely yes. With such memory it will have tons of bandwidth to support it. It's just up to the GPU design to utilize it now...
Bandwidth is only half the equation. HBM is limited to 4GB of DRAM in its first generation. Are you confident that 4GB is enough to hold the textures in all scenarios for 4K gaming?
SasquiVery doubtful it's anywhere close to that magnitude (HBM only implies higher memory bandwidth)... unless they are talking about an architecture change or a seriously higher clock on the GPU, it'll probably be on the order of a 10%-25% improvement. Just guessin'
That is likely a fairly low estimate IMO. If the quoted numbers are right, Fiji has 4096 cores, which is a 45% increase over Hawaii. The wide memory I/O afforded by HBM, in addition to colour compression, should also add further, as would any refinement in the caching structure - as was the case between Kepler and Maxwell - assuming it was accorded the priority that Nvidia's architects gave their project.
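Quick sanity check on that core-count figure (assuming Hawaii's published 2,816 stream processors):

```python
# Shader count scaling from Hawaii (R9 290X) to the rumoured Fiji configuration.
hawaii_sps = 2816      # R9 290X stream processors (published spec)
fiji_sps = 4096        # rumoured Fiji stream processors

increase = fiji_sps / hawaii_sps - 1
print(f"{increase:.1%} more stream processors")   # ~45.5%
```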
Posted on Reply
#64
FordGT90Concept
"I go fast!1!11!1!"
SasquiVery doubtful it's anywhere close to that magnitude (HBM only implies higher memory bandwidth)... unless they are talking about an architecture change or a seriously higher clock on the GPU, it'll probably be on the order of a 10%-25% improvement. Just guessin'
HBM should mean a smaller die is required to connect the memory, which translates to lower TDP. The TDP growth is not coming from the HBM, it is coming from elsewhere.

If the leaked information is to be believed, it has double the stream processors of the 280X, a 17% higher clock speed, and more than double the memory bandwidth.
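Taking those rumoured ratios at face value (a rough sketch; the 280X baseline of 2048 SPs at ~1 GHz is the published reference spec, and real-world scaling would obviously be lower):

```python
# Theoretical shader-throughput scaling implied by the rumoured specs vs. an R9 280X.
baseline_sps, baseline_mhz = 2048, 1000      # R9 280X reference configuration
rumoured_sps = baseline_sps * 2              # "double the stream processors"
rumoured_mhz = baseline_mhz * 1.17           # "17% higher clock speed"

scaling = (rumoured_sps * rumoured_mhz) / (baseline_sps * baseline_mhz)
print(f"~{scaling:.2f}x the theoretical shader throughput of a 280X")   # ~2.34x
# How much of that shows up in games depends on memory bandwidth and the
# rest of the pipeline keeping the shaders fed.
```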
Posted on Reply
#65
Casecutter
Correct me if I'm wrong: the flagship graphics card, codenamed "Fiji," would vie with the GM200 as the 390/390X, then "Bermuda" is said to become the 380/380X and vie with the 970/980, correct?

First, how does btarunr come up with "Despite this, "Fiji" could feature a TDP hovering around the 300W mark..."? The article said "the world's first 300W 2.5D discrete GPU SOC using stacked die High Bandwidth Memory and silicon interposer." Honestly, that doesn't sound like anything more than an internal engineering project rather than any reference to "Fiji." It appears to be pure speculation/assumption, not grounded in any evidence of an imminent consumer product release.

I also would like btarunr to expound on "despite slow progress from foundry partner TSMC to introduce newer silicon fabs," as that doesn't seem to come from the linked article. It reads like a slam on AMD for not having something out in what is now four months since the 970/980.
We know that TSMC, "as normal," affected both companies' abilities; Nvidia basically had to hold to 28nm on mainstream, and possibly so will AMD for "Bermuda." Is saying what he does a hint that there's some use of 16nm FinFET for "Flagship graphics" cards from both or either? (I don't think that's going out on a limb.) With 16nm FinFET, it would be strange for either side to really need to push the 300W envelope. I believe AMD learned that approaching 300W is just too much for the thermal effectiveness of most reference rear-exhaust coolers (Hawaii).

Despite many rumors about that mocked-up housing, I don't see AMD releasing a single-card "reference water" type cooler for their initial "Fiji" release; reference air cooling will remain. I don't discount that they could provide a "Gaming Special" with a "reference water" cooler to gauge market reaction as things progress, but not as the primary option.
Posted on Reply
#66
HumanSmoke
FordGT90ConceptHBM should mean a smaller die is required to connect the memory, which translates to lower TDP.
I think you missed the point of HBM. The lower power comes about due to the lower speed of the I/O (which is more than offset by the increased width). GDDR5 presently operates at 5-7 Gbps/pin. HBM as shipped now by Hynix is operating at 1 Gbps/pin.


FordGT90ConceptThe TDP growth is not coming from the HBM, it is coming from elsewhere.
Maybe the 45% increase in core count over Hawaii?
CasecutterCorrect me if I'm wrong: the flagship graphics card, codenamed "Fiji," would vie with the GM200 as the 390/390X, then "Bermuda" is said to become the 380/380X and vie with the 970/980, correct?
There seem to be two schools of thought on that. Original roadmaps point to Bermuda being the second-tier GPU, but some sources are now saying that Bermuda is some future top-tier GPU on a smaller process. The latter raises the question: if this is so, what will be the second tier when Fiji arrives? Iceland is seen as entry level, and Tonga/Maui will barely be mainstream. There is a gap unless AMD are content to sell Hawaii in the $200 market.
CasecutterSo where does btarunr come up with "Despite this, "Fiji" could feature a TDP hovering around the 300W mark..."? It appears to be pure speculation/assumption, not grounded in any evidence?
I would have thought the answer was pretty obvious. btarunr's article is based on a Tech Report article (which is referenced as the source). The Tech Report article is based upon a 3DC article (which they linked to), which does reference the 300W number along with other salient pieces of information.
CasecutterI also would like btarunr to expound on "despite slow progress from foundry partner TSMC to introduce newer silicon fabs." With 16nm FinFET, would it be strange for either side to really need to push the 300W envelope?
TSMC aren't anywhere close to volume production of 16nmFF required for large GPUs (i.e. high wafer count per order). TSMC are on record themselves as saying that 16nmFF / 16nmFF+ will account for 1% of manufacturing by Q3 2015.
CasecutterI believe AMD learned that approaching 300W is just too much for the thermal effectiveness of most reference rear-exhaust coolers (Hawaii).
IDK about that. The HD 7990 was pilloried by review sites, the general public, and most importantly, OEMs for power/noise/heat issues. It didn't stop AMD from going one better with Hawaii/Vesuvius. If AMD cared anything for heat/noise, why saddle the reference 290/290X with a pig of a reference blower design that was destined to follow the HD 7970 as the biggest example of GPU marketing suicide in recent times?
Why would you release graphics cards with little or no inherent downsides from a performance perspective, with cheap-ass blowers that previously invited ridicule? Nvidia proved that a blower design doesn't have to be some Wal-Mart-looking, Pratt & Whitney-sounding abomination as far back as the GTX 690, yet AMD hamstrung their own otherwise excellent product with a cooler guaranteed to cause a negative impression.
CasecutterDespite many rumors about that mocked-up housing, I don't see AMD releasing a single-card "reference water" type cooler for their initial "Fiji" release; reference air cooling will remain. I don't discount that they could provide a "Gaming Special" with a "reference water" cooler to gauge market reaction as things progress, but not as the primary option.
That reference-design AIO contract Asetek recently signed was for $2-4m. That's a lot of AIOs for a run of "gaming special" boards, don't you think?
Posted on Reply
#67
RejZoR
How exactly was HD7970 a GPU marketing suicide? HD7970 was awesome and still is considering its age.
Posted on Reply
#68
FordGT90Concept
"I go fast!1!11!1!"
HumanSmokeI think you missed the point of HBM. The lower power comes about due to the lower speed of the I/O ( which is more than offset by the increased width). GDDR5 presently operates at 5-7Gbps/pin. HBM as shipped now by Hynix is operating at 1Gbps/pin
It's not pin. It's 128 GiB/s per HBM chip with up to 1 GiB density.
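Both framings describe the same thing; a quick sketch reconciling the per-pin rate with the per-stack figure (assuming first-gen HBM's 1024-bit interface per stack):

```python
# First-generation HBM: 1024-bit interface per stack.
pins_per_stack = 1024
pin_rate_gbps = 1.0      # Gbps per pin, as cited for current Hynix HBM

stack_bw_gbs = pin_rate_gbps * pins_per_stack / 8     # bits -> bytes
print(f"{stack_bw_gbs:.0f} GB/s per stack")           # 128 GB/s

stacks = 4               # first-gen limit of four 1 GB stacks
print(f"{stacks * stack_bw_gbs:.0f} GB/s total across {stacks} GB of VRAM")  # 512 GB/s, 4 GB
```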
Posted on Reply
#69
HumanSmoke
RejZoRHow exactly was HD7970 a GPU marketing suicide?
As I said:
HumanSmokeWhy would you release graphics cards with little or no inherent downsides from a performance perspective, with cheap-ass blowers that previously invited ridicule?
Of the "cons" outlined in the reference card review, price was what AMD could charge, perf/watt was a necessary trade off for compute functionality, and PowerTune/ZeroCore weren't a big influence which leaves...


Now, are you going to tell me that the largest negative gleaned from reviews, users, and tech site/forum feedback WASN'T due to the reference blower shroud?
Do you not think that if AMD had put more resources into putting together a better reference cooling solution, the overall impression of the reference board - THE ONLY OPTION AT LAUNCH - might have been better from a marketing and PR standpoint? How many people stated that they would only consider the HD 7970 once the card was available with non-reference cooling - whether air or water?
Posted on Reply
#70
HumanSmoke
FordGT90ConceptThat's backwards. 128 GiB/s per 1 Gb (128 MiB) chip. 4 of them stacked up gets 4 Gb (512 MiB) and 512 GiB/s effective rate. Stick 8 of those on the card and you still have 512 GiB/s and 4 GiB of RAM or be ridiculous and stick 16 of them on the card on two memory controllers for 1 TiB/s and 8 GiB of RAM.
FFS. First generation HBM is limited to four 1GB stacks (256MB * 4 layers)

FordGT90ConceptIt's not pin. It's 128 GiB/s per HBM chip with up to 1 GiB density.
I was referring to the effective data rate (also see the slide above). Lower effective memory speed = lower voltage = lower power envelope - as SK Hynix's own slides show


EDIT: Sorry about the double post. Thought I was editing the one above.
Posted on Reply
#71
Xzibit
HumanSmokeThat reference-design AIO contract Asetek recently signed was for $2-4m. That's a lot of AIOs for a run of "gaming special" boards, don't you think?
Maybe they plan on AIO everything from now on... It does look like Asetek with sleeves on the tubes.


EDIT:


The release of this card also lines up with the Asetek announcement. Not saying AMD won't have an AIO cooler, but at least with EVGA we have proof in a product.
Posted on Reply
#72
HumanSmoke
XzibitMaybe they plan on AIO everything from now on... It does look like Asetek with sleeves on the tubes.
Asetek cooling does seem like the new black. I think you're right in thinking that the EVGA card is using an Asetek cooler, judging by comments on the EVGA forum and views of the card without the shroud in place.
If Asetek's cooling becomes the de facto standard for AMD's reference cards, it stands to reason that others will follow suit. To my eyes it certainly looks cleaner than Arctic's hybrid solution - but then, I'm not a big fan of Transformers movies either.
XzibitThe release of this card also lines up with the Asetek announcement. Not saying AMD won't have an AIO cooler, but at least with EVGA we have proof in a product.
Well, the Asetek announcement for the $2-4m contract specifies an OEM (Nvidia or AMD), not an AIB/AIC, so chances are the EVGA contract isn't directly related, any more than Sycom or any of the other outfits adding Asetek units to their range. The fact that the card pictured is an Nvidia OEM reference GTX 980, rather than an EVGA-designed product, would also tend to work against the possibility.
Having said that, I'm sure EVGA would love to have sales that warrant committing to a seven-figure contract for cooling units for a single SKU.
Posted on Reply
#73
FordGT90Concept
"I go fast!1!11!1!"
HumanSmokeFFS. First generation HBM is limited to four 1GB stacks (256MB * 4 layers)




I was referring to the effective data rate. Lower effective memory speed = lower voltage = lower power envelope - as SK Hynix's own slide shows


EDIT: Sorry about the double post. Thought I was editing the one above.
Better document:
hpcuserforum.com/presentations/seattle2014/IDC_AMD_EmergingTech_Panel.pdf
819.2 Mb/s to 1,228.8 Mb/s

The fourth slide shows 30w for HBM vs 85w for GDDR5.

Edit: From what I gather, the power savings come from the logic board being on the chip rather than off chip. Everything doesn't have to go as far to get what it needs and that substantially cuts power requirements in addition to improving performance by way of reducing latency.
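Putting the quoted wattages next to the bandwidth figures discussed earlier in the thread gives a rough sense of the efficiency gap (a sketch; the GDDR5 side assumes a 290X-class 512-bit bus at 5 Gbps, the HBM side four first-gen stacks):

```python
# Rough GB/s-per-watt comparison using the 85 W (GDDR5) vs 30 W (HBM) figures above.
gddr5_bw_gbs, gddr5_watts = 512 * 5 / 8, 85    # 320 GB/s on a 512-bit bus at 5 Gbps
hbm_bw_gbs, hbm_watts = 4 * 128, 30            # 512 GB/s from four HBM stacks

print(f"GDDR5: {gddr5_bw_gbs / gddr5_watts:.1f} GB/s per watt")   # ~3.8
print(f"HBM:   {hbm_bw_gbs / hbm_watts:.1f} GB/s per watt")       # ~17.1
```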
Posted on Reply
#74
HumanSmoke
FordGT90ConceptEdit: From what I gather, the power savings come from the logic board being on the chip rather than off chip. Everything doesn't have to go as far to get what it needs and that substantially cuts power requirements in addition to improving performance by way of reducing latency.
The power savings are certainly helped by moving off-die to the interposer, as is latency (although the trace-distance latency gain is minor compared to the larger decrease due to the slower data rate). Latency in cycles increases with data rate - for example, CAS 3 or 4 is common for DDR2, while DDR3 (the basis for GDDR5) is closer to 8-10 cycles.
The large power savings are also data-rate related (as Hynix themselves highlight). It is no coincidence that vRAM started to take a significant portion of the total board power budget (~30%) with the advent of faster GDDR5 running in excess of 5 Gbps, or that LPDDR3 and 4 rose correspondingly for system RAM as data rates increased and the need to reduce voltage became more acute.
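A quick worked example of the cycles-versus-nanoseconds side of that CAS point (typical JEDEC speed grades, used here only as assumptions):

```python
# CAS latency in clock cycles rises with data rate, but the absolute latency in
# nanoseconds stays in the same ballpark. The I/O clock is half the DDR data rate.
examples = [
    ("DDR2-800,  CL4", 800, 4),
    ("DDR3-1600, CL9", 1600, 9),
]
for name, data_rate_mts, cas_cycles in examples:
    io_clock_mhz = data_rate_mts / 2
    latency_ns = cas_cycles / io_clock_mhz * 1000
    print(f"{name}: {latency_ns:.1f} ns")
# -> roughly 10 ns for DDR2-800 CL4, ~11 ns for DDR3-1600 CL9
```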
FordGT90ConceptBetter document:
hpcuserforum.com/presentations/seattle2014/IDC_AMD_EmergingTech_Panel.pdf
I think you'll find that SK Hynix's own presentation (PDF) is somewhat more comprehensive.
Posted on Reply
#75
RejZoR
HumanSmokeAs I said:

Of the "cons" outlined in the reference card review, price was what AMD could charge, perf/watt was a necessary trade off for compute functionality, and PowerTune/ZeroCore weren't a big influence which leaves...


Now, are you going to tell me that the largest negative gleaned from reviews, users, and tech site/forum feedback WASN'T due to the reference blower shroud?
Do you not think that if AMD had put more resources into putting together a better reference cooling solution, the overall impression of the reference board - THE ONLY OPTION AT LAUNCH - might have been better from a marketing and PR standpoint? How many people stated that they would only consider the HD 7970 once the card was available with non-reference cooling - whether air or water?
Then how come my HD7950 is the most silent card I've ever owned? I had it clocked at 1175/7000 and it was pretty much inaudible even during gaming. Pay that extra 20 bucks and get a card with a proper heatsink instead of that crap blower heatsink, and every card will be silent. Without exceptions.
Posted on Reply