
Next AMD Flagship Single-GPU Card to Feature HBM

btarunr

Editor & Senior Moderator
AMD's next flagship single-GPU graphics card, codenamed "Fiji," could feature High-Bandwidth Memory (HBM). The technology uses stacked DRAM to increase memory bandwidth while reducing the GPU pin-count needed to achieve that bandwidth, potentially shrinking die-size and TDP. Despite this, "Fiji" could see its TDP hover around the 300 W mark, because AMD will cram in all the pixel-crunching muscle it can, spending the efficiency gained from other components, such as memory. AMD is expected to launch new GPUs in 2015, despite slow progress by foundry partner TSMC in introducing newer silicon fabs, as the company's lineup is fast losing competitiveness to NVIDIA's GeForce "Maxwell" family.
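A rough illustration of why stacked DRAM cuts pin-count: peak memory bandwidth is bus width times per-pin data rate, so a very wide, slow HBM interface can match or beat a narrower, fast GDDR5 one. A minimal sketch using public figures for first-gen HBM (1024-bit per stack at 1 Gbps/pin) and the R9 290X's GDDR5 setup (512-bit at 5 Gbps/pin); the comparison is illustrative, not a claim about "Fiji" itself:

```python
def bandwidth_gb_s(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s = (bus width in bits / 8) * per-pin rate in Gbps."""
    return bus_width_bits / 8 * gbps_per_pin

# R9 290X-style GDDR5: 512-bit bus at 5 Gbps per pin
gddr5 = bandwidth_gb_s(512, 5.0)       # 320.0 GB/s

# One HBM1 stack: 1024-bit interface at only 1 Gbps per pin
hbm_stack = bandwidth_gb_s(1024, 1.0)  # 128.0 GB/s

# Four stacks beat the wide GDDR5 bus while clocking far lower
print(gddr5, 4 * hbm_stack)  # 320.0 512.0
```

The lower per-pin clock is where the power savings come from, even though the total interface is much wider.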



 
I believe HBM is all but confirmed at this point (I guess things could always change), but as far as this GPU goes, it's going to be interesting what choices they make and how much power it truly draws. Personally, as far as efficiency goes, I'm mixed on the high end of cards, because while I want more efficiency, it's not the primary concern. If they go that route, I hope this top-tier card is seriously powerful and beats their previous generation by a significant margin.

The naming rumors are the curious part for me, because most leaks seem to point to calling the next "top" card the R9 380X, which would mean a 390X is in the shadows somewhere, and if so, where do these specs fall? Most of this for me is just waiting for official confirmation, but I'm getting more and more intrigued by what I hear.
 
They did it with GDDR5, pulled a rabbit out of a hat for the most part, using new and unproven hardware.
 
With all the problems foundry partners have had shrinking nodes down to 20 nm, I just had to jump to my R9 290 from my previous 5850. The high end isn't where power consumption should be a worry; it's the mid/high, mid, and mid/lower segments that should prioritize getting that watt number down. If newly appointed CEO Lisa Su turns this company around, and the launch of the next wave of cards AND CPUs/APUs is even a little bit successful, AMD will be in a much better position than it's currently in.
 
These AMD hyper-space-nextgen-dxlevel13123-gcn technologies look so good on paper, but somehow always fail to show their strength in real-world games after release...
 
They are going all-in = maximum risk.

I guess they're betting on the process tech no matter what; it will all work out when FinFETs arrive, and the power question will fall away.

It may be an Intel-like Tick...
 
With all the problems foundry partners have had shrinking nodes down to 20 nm, I just had to jump to my R9 290 from my previous 5850. The high end isn't where power consumption should be a worry; it's the mid/high, mid, and mid/lower segments that should prioritize getting that watt number down. If newly appointed CEO Lisa Su turns this company around, and the launch of the next wave of cards AND CPUs/APUs is even a little bit successful, AMD will be in a much better position than it's currently in.

The high end also needs to worry about power consumption, unless you want a single GPU pulling 600+ watts and needing a 280mm rad with 4 fans just to keep it cool. Efficiency is just as important in the high-end, if not more so, because greater efficiency means more performance per watt.
 
The high end also needs to worry about power consumption, unless you want a single GPU pulling 600+ watts and needing a 280mm rad with 4 fans just to keep it cool. Efficiency is just as important in the high-end, if not more so, because greater efficiency means more performance per watt.

I'm not saying it shouldn't be. I'm saying no true enthusiast will look at power consumption as the main factor in whether or not to buy a high-end card for their system. EVERYONE likes lower power consumption, but when a company's offerings are pretty much all higher-wattage products versus the competition, it makes people wonder how efficient their design really is. I'm not knocking AMD, I'm just being realistic. At the same time, AMD is not really competing with Nvidia's Maxwell cards ATM; they're just lowering prices on their R9 series in the meantime.
 
Power draw itself is relative. If this card can perform 50% faster than the 290X but only consume 10% more power, that's okay. If it can just about drive a single 4K panel, it's a win.
The win/lose scenario kicks in against NV's part, but again, the mythical, much-touted GM200 is also suggested to be nowhere near as efficient as GM204. We'll all win if the new AMD card comes out on steroids.
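The arithmetic behind that trade-off is simple: relative efficiency is the performance scale factor divided by the power scale factor. A quick sketch using the hypothetical numbers from the post above:

```python
def perf_per_watt_gain(perf_scale: float, power_scale: float) -> float:
    """Relative performance-per-watt versus the baseline card."""
    return perf_scale / power_scale

# 50% faster at 10% more power, as posited above
gain = perf_per_watt_gain(1.50, 1.10)
print(round(gain, 2))  # 1.36 -> ~36% better performance per watt
```

So even a card with a higher absolute power draw can be a clear efficiency win.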
 
These AMD hyper-space-nextgen-dxlevel13123-gcn technologies look so good on paper, but somehow always fail to show their strength in real-world games after release...

I can't see that... my 290 still goes strong, and both of my friends, who own a 770/780 Ti and a 970, are surprised that my $190 second-hand card (under water... tho) is nearly toe to toe with the 780 Ti and the 970; the 770 is out of its league. (I also had a 770 before, alongside a 7950.)
 
300w? Cool, not bad.
New tech? Awesome
If price and performance are in line, my PCI-E slot is ready. :D
 
So there are two guys mentioned, both from AMD and working on stuff.

One page mentions the 380X and the other 300 W. Why then do people all of a sudden conclude they're both working on the 380X? Wouldn't it make a lot more sense if the second guy worked on the 390X?
 
These AMD hyper-space-nextgen-dxlevel13123-gcn technologies look so good on paper, but somehow always fail to show their strength in real-world games after release...

Whut?

AMD/ATI delivered some pretty good products, like the 9xxx, X800, 4xxx series, 5xxx series, 7xxx series and the R9 is still great at 4K output. The change from VLIW to GCN was just a few years ago and their first success was with the 7xxx series on it and they still make those chips as a very competitive offering today, which speaks volumes about how good it was and is.

Their most fatal flaw is, and remains, promising software advantages about two years before they are actually available, if they ever become available. That, and their CPU cache latency issues.
 
because greater efficiency means more performance per watt.

Guys, keep in mind that the R9 290 uses the first-gen 28 nm HPM process and the 980 uses HPP.

They are like apples and oranges, really.
 
Guys, keep in mind that the R9 290 uses the first-gen 28 nm HPM process and the 980 uses HPP.

They are like apples and oranges, really.
So that means AMD is holding its own on the performance side but losing on the consumption side with an older, nearly obsolete process??? OK, the 980 is a really good one, but not that far from a well-OC'ed 290X, and quite a bit pricier.

they're just lowering prices on their R9 series instead in the meantime.
Well, that's a good idea, though, since you can find a 290 at a 770 price where I am, and a 290X at a 970 price... barring a purely fanboy statement, it's wrong to say AMD can't compete (even if, as I wrote a bit above, the 970/980 are good products, of course).
 
So that means AMD is holding its own on the performance side but losing on the consumption side with an older, nearly obsolete process??? OK, the 980 is a really good one, but not that far from a well-OC'ed 290X, and quite a bit pricier.

Well, because I guess NVIDIA paid the lion's share to TSMC to be its lovely puppy. It has always been like that, actually... The R9 290 also has roughly 20% more transistors on board, which heat up, but it still closes the performance gap at that heat cost. But hey... ATI was a Canadian company... a heater during cold winters... two in one, actually :D

And they must sell their R9 290s no matter what; unsold silicon is a bigger loss for them than silicon sold at a bargain. I bet they calculated everything as well as they could.
 
So there are two guys mentioned, both from AMD and working on stuff.
One page mentions the 380X and the other 300 W. Why then do people all of a sudden conclude they're both working on the 380X? Wouldn't it make a lot more sense if the second guy worked on the 390X?
My thoughts as well. A 380X suggests a second-tier card, which doesn't gel with the initially high price of HBM. Another consideration: if the 380X is a 300 W card, then the 390X is either way outside the PCI-SIG spec, or it is some way off in the future on a smaller process node (if the 380X is a 300 W card on 20 nm, then AMD has some serious problems with BoM).
Guys keep in mind that R9 290 uses first gen 28nm HPM process and 980 uses HPP
TSMC doesn't have an HPP process. GM204 uses 28HPC (high-performance mobile computing), since Maxwell is a mobile-centric architecture. The difference in efficiency is more a product of how Nvidia prioritizes perf/watt at the expense of double precision - so more a difference in opinion at the ALU level, AFAIK.
From TSMC's own literature:
[Chart: TSMC's 28 nm process variants, from TSMC's own literature]
 
My thoughts as well. A 380X suggests a second-tier card, which doesn't gel with the initially high price of HBM. Another consideration: if the 380X is a 300 W card, then the 390X is either way outside the PCI-SIG spec, or it is some way off in the future on a smaller process node (if the 380X is a 300 W card on 20 nm, then AMD has some serious problems with BoM).

Well, Boney, don't you think this number is just the theoretical engineering envelope (summing up the power connectors' maximum theoretical delivery)? Do they actually have real mass-produced silicon from GloFo that shows real consumption numbers? I guess not... The best they have is still 28 nm alpha silicon, or even earlier.
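For context on why 300 W reads like an envelope rather than a measurement: it is exactly the sum of the PCI-SIG rated maximums for a common connector layout. A quick sanity check, assuming a slot plus one 8-pin plus one 6-pin configuration:

```python
# PCI-SIG rated maximums, in watts
PCIE_SLOT = 75   # power delivered through the x16 slot itself
PIN6 = 75        # 6-pin auxiliary connector
PIN8 = 150       # 8-pin auxiliary connector

# A slot + 8-pin + 6-pin board tops out at exactly 300 W
board_power = PCIE_SLOT + PIN8 + PIN6
print(board_power)  # 300
```

So a "300 W" figure on a spec sheet may describe the connector budget, not the measured draw of any silicon.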

Thanks for the correction, but that chart is still kind of useless.
 
Well, Boney, don't you think this number is just the theoretical engineering envelope (summing up the power connectors' maximum theoretical delivery)?
If AMD is working on a 300 W board power - even for an ES - what does that portend for a higher-tier card in the same series? When has a top-tier card ever used less power than the second-tier card in the same model series?
Do they actually have real mass-produced silicon from GloFo that shows real consumption numbers? I guess not... The best they have is still 28 nm alpha silicon, or even earlier.
AMD had a hot lot of silicon at least two months ago. By your reckoning, either AMD hasn't made any headway with the silicon in the interim (indicating a metal-layer revision), or is taking a leisurely approach to revising it.
Thanks for the correction, but that chart is still kind of useless.
The chart wasn't provided to supply information on the processes (that's what the individual product briefs are for); it was provided to show which 28 nm processes TSMC offers.
 
If AMD is working on a 300 W board power - even for an ES - what does that portend for a higher-tier card in the same series? When has a top-tier card ever used less power than the second-tier card in the same model series?

AMD had a hot lot of silicon at least two months ago. By your reckoning, either AMD hasn't made any headway with the silicon in the interim (indicating a metal-layer revision), or is taking a leisurely approach to revising it.

I am just trying to understand why the quote appeared on LinkedIn. Nobody says she didn't work on such a project, but nobody said what tech node it was on; that could be the catch and the blooper in this news.

The second thing is that they are using GloFo now; we have no hard info on them or their silicon leakage at this stage. There may be many variables.

And as for the speculation about the 380X, it's funny that it doesn't have the R9 class in front of it, ain't it?
 
One thing that tends to be overlooked in most comments (and even articles).

The silicon is cheap. The development is costly.

When an architecture manages to endure a long time with relatively small improvements (as GCN has), cards built on it make significant profit, even with price reductions. Yesterday's flagship becomes mid-high, mid-range becomes entry, etc.

AMD offerings still generate profit, despite lowered price - and probably a good deal of it.

NVIDIA has done the same multiple times in the past - remember all the re-branding?

Not taking sides, 970 and 980 are certainly excellent products, but they are enthusiast level only - we are yet to see mid-range products (and eagerly, if I may add).

These 'mysterious' AMD cards (I also suppose there are likely two of them) are eagerly awaited as well - HBM is a promising technology, but real-life tests should confirm to what extent.
 
Here's my super-not-hardware-geeky answer to this storm of comments:

I don't care if it features HBM, LLP, or WTF.
If I receive a card with low noise, high performance, and a good price - I'm in.
 
GTX 970 is indeed a prime example.
All I need is for it to get shrunk and pack about twice the performance, and I'll make the leap :)
 
GTX 970 is indeed a prime example.
All I need is for it to get shrunk and pack about twice the performance, and I'll make the leap :)
Would also be optimal if we had more than one supplier of such a card.
 