Friday, November 13th 2015

Next Gen AMD GPUs to Get a Massive Energy Efficiency Design Focus

AMD's upcoming generations of GPUs will get a massive design focus on energy efficiency and increases in performance-per-Watt, according to a WCCFTech report. The first of these chips, codenamed "Arctic Islands," will leverage cutting-edge 10 nm-class FinFET silicon fab technology, coupled with bare-metal and software optimization, to step up performance-per-Watt in a big way. The last time AMD achieved an energy-efficiency leap was with the Radeon HD 5000 series (helped in part by the abysmal energy efficiency of the rival GeForce GTX 400 series).
Source: WCCFTech

59 Comments on Next Gen AMD GPUs to Get a Massive Energy Efficiency Design Focus

#1
The Quim Reaper
I must say, AMD without doubt do the best... snazzy-looking logos for their GPUs & CPUs.



If only the reality matched the image.
Posted on Reply
#2
Ja.KooLit
People, just give AMD the chance.

I still want them to fight Nvidia and Intel. It's for us consumers.
Posted on Reply
#3
64K
The cited article says that AMD will be focusing on power savings a lot, and on 14nm I believe they will do just that. I do hope they still release a 250 W flagship GPU that will be crazy fast.
Posted on Reply
#4
lilhasselhoffer
...What?

By nature they'll have to decrease voltage, simply because the transistors are physically smaller and don't need as much potential to switch. That comes with the shrink in lithography, going from a 28 nm to a 14 nm process. Additionally, they're integrating HBM2, which touts decreased power consumption as one of its major features.
Have they just sold us on the idea that power consumption will be better because of things beyond their own design work, while claiming it's one of their focuses?


I ask because Fury isn't exactly a power sipper, but part of the reason it fits where it does is that the memory uses less power and generates less heat, allowing the GPU to be clocked higher, which makes up for its design-optimization shortcomings versus the Nvidia offerings.
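For what it's worth, the voltage point can be put in rough numbers: dynamic switching power scales as P ≈ C·V²·f, so a node shrink that lowers both capacitance and voltage cuts power disproportionately. A back-of-the-envelope sketch in Python (the 28 nm and 14 nm operating points below are made-up illustrative values, not real chip figures):

```python
# Rough illustration: dynamic switching power scales as P ~ C * V^2 * f.
# The numbers below are hypothetical, not actual 28 nm / 14 nm figures.

def dynamic_power(capacitance, voltage, frequency):
    """Dynamic switching power in watts (C in farads, V in volts, f in Hz)."""
    return capacitance * voltage ** 2 * frequency

p_28nm = dynamic_power(1.0e-9, 1.20, 1.0e9)  # hypothetical 28 nm operating point
p_14nm = dynamic_power(0.7e-9, 1.00, 1.0e9)  # smaller transistors: lower C and V

print(f"relative power at 14 nm: {p_14nm / p_28nm:.2f}x")  # roughly 0.49x
```

In this toy example the voltage drop (V² term) and the capacitance reduction each contribute a comparable share of the roughly 2x saving, which is the point: much of the gain arrives with the process itself, independent of the architecture.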
Posted on Reply
#5
TheinsanegamerN
night.fox: People, just give AMD the chance.

I still want them to fight Nvidia and Intel. It's for us consumers.
We gave them a chance with Bulldozer, with Piledriver, with Fury, etc. Occasionally they deliver (290X) but they just keep shooting themselves in the foot.

At this point, I say "I'll believe it when I see it". Until AMD can show a fully-working, retail part that matches their claims, I don't believe them. This is just the hype train getting started again, just like Fury.


The same goes for Intel. Everyone was clamoring about how good Skylake would be, and I said "I'll believe it when I see it". Sure enough, it was another incremental increase, nothing spectacular.

Just like with video games, nothing the hardware industry says can be taken with less than an entire salt shaker until they show it actually working. Until then, might as well be made of fairy dust.
Posted on Reply
#6
the54thvoid
Intoxicated Moderator
lilhasselhoffer: ...What?

By nature they'll have to decrease voltage, simply because the transistors are physically smaller and don't need as much potential to switch. That comes with the shrink in lithography, going from a 28 nm to a 14 nm process. Additionally, they're integrating HBM2, which touts decreased power consumption as one of its major features.
Have they just sold us on the idea that power consumption will be better because of things beyond their own design work, while claiming it's one of their focuses?


I ask because Fury isn't exactly a power sipper, but part of the reason it fits where it does is that the memory uses less power and generates less heat, allowing the GPU to be clocked higher, which makes up for its design-optimization shortcomings versus the Nvidia offerings.
Yeah, I think you got it.

This is AMD marketing to the analysts (see slide). Maxwell was a massive energy-efficiency improvement on the same node as Kepler (achieved by slashing compute). As you say, Fiji managed to look better than Hawaii by using HBM (in fact, that's why some say they had to use HBM on Fiji).

I don't like attacking AMD, but the slide is clearly selling ice as frozen water on the energy part. Still, all things considered, Fiji trades blows (stock for stock) with Maxwell, so I'm hopeful they'll be firing on all cylinders for 2016.
Posted on Reply
#7
Kaleid
The difference between the two brands is exaggerated. Make a blind test and nobody will see any difference.
Posted on Reply
#8
KainXS
Didn't Koduri say yesterday that 2 new GPUs are coming in 2016? That could be 4 cards.
Posted on Reply
#9
the54thvoid
Intoxicated Moderator
Kaleid: The difference between the two brands is exaggerated. Make a blind test and nobody will see any difference.
Absolutely true but I don't know any blind gamers...

But Lil's point is, advertising energy efficiency when it's down to the process node, not the architecture, is a little PR-ish. Quite sure Pascal from NV will do the same.
Posted on Reply
#10
FordGT90Concept
"I go fast!1!11!1!"
lilhasselhoffer: ...What?

By nature they'll have to decrease voltage, simply because the transistors are physically smaller and don't need as much potential to switch. That comes with the shrink in lithography, going from a 28 nm to a 14 nm process. Additionally, they're integrating HBM2, which touts decreased power consumption as one of its major features.
Have they just sold us on the idea that power consumption will be better because of things beyond their own design work, while claiming it's one of their focuses?


I ask because Fury isn't exactly a power sipper, but part of the reason it fits where it does is that the memory uses less power and generates less heat, allowing the GPU to be clocked higher, which makes up for its design-optimization shortcomings versus the Nvidia offerings.
^ This. Also, 10nm? Isn't that like 2018 at the absolute earliest? They haven't even put out a 14nm GPU yet. Aren't they getting a lot ahead of themselves? Seeing how much Intel is struggling to reach 10nm, I wouldn't be surprised if GloFo doesn't reach 10nm until 2020 or later.
Posted on Reply
#11
RejZoR
Well, the R9 Nano already proved they can cram a full-fledged core into a compact, low-power package. If they can shrink it down and further optimize the shader units (quite frankly, they can't recycle R9-285 Tonga cores for the 3rd time directly), it can be very interesting.
Posted on Reply
#12
FordGT90Concept
"I go fast!1!11!1!"
But how big is the market for Nano? How many people are buying it over the equally equipped but faster Fury X? The GPU in Nano demands a premium price, but everything else about it does not, unless you have a really, really small case.
Posted on Reply
#13
Ikaruga
Next gen something will be X times more efficient than previous gen something.
Posted on Reply
#14
Rowsol
When it comes to APUs, I'd rather see gains in performance to make them a suitable option for a budget gaming build. I mean, they already are, sorta, but still.
Posted on Reply
#15
Nkd
Energy efficiency could mean anything. It could mean twice the performance at the same wattage as Fury X, or the same performance as Fury X at half the wattage. So they can scale it like all other GPUs and decide how much power they want the GPU to draw. Efficiency doesn't mean it won't be powerful.
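To put hypothetical numbers on that (the baseline below is made up, not an actual Fury X measurement), a claimed 2x performance-per-Watt gain can indeed be spent either way:

```python
# Hypothetical illustration of a 2x performance-per-watt improvement.
# Baseline figures are invented, not real Fury X numbers.
base_perf, base_watts = 100.0, 275.0
base_eff = base_perf / base_watts           # performance per watt

new_eff = 2 * base_eff                      # the claimed 2x efficiency gain

# Option A: keep the power budget -> twice the performance (~200)
perf_at_same_power = new_eff * base_watts

# Option B: keep the performance -> half the power (~137.5 W)
watts_at_same_perf = base_perf / new_eff

print(perf_at_same_power, watts_at_same_perf)
```

Either point (or anything between them) sits on the same efficiency curve, which is why an efficiency claim alone says nothing about where the flagship will land.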
Posted on Reply
#16
HumanSmoke
KainXS: Didn't Koduri say yesterday that 2 new GPUs are coming in 2016? That could be 4 cards.
That's what he said. I would also expect one of those GPUs to be a replacement for Pitcairn/Curacao/Trinidad and Bonaire since that former GPU is almost old enough to vote. Wouldn't surprise me to see the other GPUs get architecture tweaks (as opposed to a new arch).
Posted on Reply
#17
dj-electric
*reads title


Well... do they wanna go bankrupt? No? Well then, they must have this kind of focus. There literally is no other way.
Posted on Reply
#18
Assimilator
Um, what? Where do you get 10nm from that article? There's zero indication that "Vega10" refers to the process node in any way.

With the completely incorrect Samsung 750 article and now this, I have to ask - when did TPU become a clickbait website more interested in headlines than accuracy? It's extremely disappointing.
Posted on Reply
#19
qubit
Overclocked quantum bit
TheinsanegamerN: We gave them a chance with Bulldozer, with Piledriver, with Fury, etc. Occasionally they deliver (290X) but they just keep shooting themselves in the foot.

At this point, I say "I'll believe it when I see it". Until AMD can show a fully-working, retail part that matches their claims, I don't believe them. This is just the hype train getting started again, just like Fury.


The same goes for Intel. Everyone was clamoring about how good Skylake would be, and I said "I'll believe it when I see it". Sure enough, it was another incremental increase, nothing spectacular.

Just like with video games, nothing the hardware industry says can be taken with less than an entire salt shaker until they show it actually working. Until then, might as well be made of fairy dust.
Couldn't agree more. It pays to be highly sceptical when it comes to AMD's performance claims nowadays. Shame it's come to this, but it's caused by their own mismanagement.
Posted on Reply
#20
AlwaysHope
"10 nm-class FinFET silicon fab technology" < that's gotta be a typo right? 10nm?? really?

If this be true, I'll hang out with my HD7870 a bit longer...
Posted on Reply
#21
Eagleye
I think AMD's engineers have been absolutely correct in all of their GPU performance claims going back to the 7970/290X/390X (all of those cards are as fast as or faster than Nvidia's), and Fury X vs. 980 Ti (stock vs. stock) holds up just as well. But I think the software engineers have been a bit slow hitting the mark, maybe from understaffing, R&D budget, etc. AMD have come out and said they will be attacking software a lot more next gen. We already see this with the Crimson software and the latest Linux open-source drivers recently released for Tonga and Fiji.

I wouldn't be surprised to see AMD having the all rounder Architecture yet again.
Posted on Reply
#22
truth teller
Eagleye: latest Linux open source drivers recently released for Tonga and Fiji
i have to agree on that, they said they would do it and they actually did. took some time, but i guess it was spent writing and testing all the patches. kudos on that

as for this news, only time will tell the impact, humor me ma... erm... amd
Posted on Reply
#23
lilhasselhoffer
AlwaysHope: "10 nm-class FinFET silicon fab technology" < that's gotta be a typo right? 10nm?? really?

If this be true, I'll hang out with my HD7870 a bit longer...
Poor quotation by our own @btarunr

The original article read:
"A source close to AMD recently offered some details to WCCFTech regarding the company’s future plans; plans which include the upcoming generation of GPUs as well as next gen consoles. The next generation of AMD graphic cards will be based on FinFETs (either 14nm or 16nm) and will offer roughly twice the performance over the 28nm generation. The information we received not only confirms previously revealed codenames but also sheds light on a brand new one: Vega10. It also states that AMD will be focusing on power saving alot during the next generation."

The 10nm discussion is about something entirely different.


This is why we read the links, and don't let others process information for us.
Posted on Reply
#24
Relayer
Kaleid: The difference between the two brands is exaggerated. Make a blind test and nobody will see any difference.
^^^This. The biggest difference between the two brands is the mainstream press that's being "supported" more by one brand than the other.
Posted on Reply
#25
daoson5
Well, AMD have to make their own way, not follow Nvidia or Intel. Make the GPU on their own terms, a unique one, like combining GPU and CPU, like CrossFire or something that appeals to customers to buy their product.
Posted on Reply