Friday, November 13th 2015

Next Gen AMD GPUs to Get a Massive Energy Efficiency Design Focus

AMD's upcoming generations of GPUs will get a massive design focus on energy efficiency and increased performance-per-Watt, according to a WCCFTech report. The first of these chips, codenamed "Arctic Islands," will leverage cutting-edge 10 nm-class FinFET silicon fabrication technology, coupled with bare-metal and software optimization, to step up performance-per-Watt in a big way. The last time AMD achieved an energy-efficiency leap was with the Radeon HD 5000 series (helped in part by the abysmal energy efficiency of the rival GeForce GTX 400 series).
Source: WCCFTech
Add your own comment

59 Comments on Next Gen AMD GPUs to Get a Massive Energy Efficiency Design Focus

#26
Space Lynx
Astronaut
I don't care about energy efficiency, use an extra 200 watts for all I care, just, you know, beat Pascal. Otherwise I am buying Pascal if it's faster, /shrug.
Posted on Reply
#27
ypsylon
Energy efficiency is all good and nice, but they shouldn't forget about performance. What good does energy efficiency bring if the card performs like [beep!]? :)

Already thinking about contingency plan... Possibly?
Posted on Reply
#28
TheoneandonlyMrK
Chasing efficiency is exactly what all chip arch companies are doing, loads more smaller transistors packed together (this tactic of smaller and more has been going on a while) need to be efficient or the heat coming off them will ruin the party.
We all like to party.

Sorry but some comments are mental, i.e. use more watts, just beat Nvidia. Beat NV, yeah, cos prices fall and people party more, but I don't wanna build a nitrogen-cooled rig either.
Posted on Reply
#29
happita
theoneandonlymrkChasing efficiency is exactly what all chip arch companies are doing, loads more smaller transistors packed together (this tactic of smaller and more has been going on a while) need to be efficient or the heat coming off them will ruin the party.
We all like to party.

Sorry but some comments are mental, i.e. use more watts, just beat Nvidia. Beat NV, yeah, cos prices fall and people party more, but I don't wanna build a nitrogen-cooled rig either.
I agree. It's a balancing act though, even for Nvidia. "Which part of the equation would you sacrifice Mr. CEO? Well...." You know what I'm saying? Making these kinds of good tough decisions is what keeps Nvidia on top.

But honestly, I think I would split it down the middle, 50% less power, 50% faster. Yea, I'll settle for that :D I want fluid gameplay...I'm lookin at you 4K
Posted on Reply
#30
vega22
I thought 14 nm was what they had taped out for '16?

If this is 10 nm, they must already be looking at refreshing the designs they have for their next cores, no?
Posted on Reply
#31
ShurikN
theoneandonlymrkSorry but some comments are mental, i.e. use more watts, just beat Nvidia. Beat NV, yeah, cos prices fall and people party more, but I don't wanna build a nitrogen-cooled rig either.
Or even better, my fave: they want AMD products to shine so they can buy NV cheaply...
and then they wonder why AMD is struggling.
Posted on Reply
#32
R-T-B
AssimilatorUm, what? Where do you get 10nm from that article? There's zero indication that "Vega10" refers to the process node in any way.

With the completely incorrect Samsung 750 article and now this, I have to ask - when did TPU become a clickbait website more interested in headlines than accuracy? It's extremely disappointing.
As much as it pains me to admit this, they have been slipping lately. Badly.
Posted on Reply
#33
BiggieShady
lilhasselhofferBy nature they'll have to decrease voltage, just because the transistors are physically smaller and don't take as much potential to open. That comes from a decrease in lithography, when going 28nm to 14nm process. Additionally, they're integrating HBM2, which touts decreased power consumption as one of its major features.
You forgot that a smaller process node has more transistors overall on the same surface. So the efficiency optimizations are very relevant here to keep heat-dissipation spikes in check. The die will have to be both smaller and carry more transistors.
Posted on Reply
#34
Constantine Yevseyev
Slightly off-topic: does anybody have any confirmed info on future AMD/NVIDIA GPUs? I'm planning on building a new desktop PC, but I'm not really interested in the current NVIDIA 900 Series or AMD Radeon R300/Fury.

I know that Maxwell is great and all, but I want something like GTX 970 with 6+ GiB VRAM. Or half of a Fury X... Or maybe even a budget-tier professional solution (like cheaper W4100 or K1200), mostly for 3D, but also occasional, "light" (1080p, Mid-High settings) gaming.

Any chance I can find something like that in stores by the end of Q1 2016?
Posted on Reply
#35
medi01
TheinsanegamerNWe gave them a chance with Bulldozer, with Piledriver, with Fury, etc. Occasionally they deliver (290X) but they just keep shooting themselves in the foot.
How on earth did Fury get into fail list?
On CPU front, Carrizo deserved a chance it never had.
the54thvoidAbsolutely true but I don't know any blind gamers...

But, Lil's point is, advertising energy efficiency when it's the process node, not the architecture, is a little PR-ish. Quite sure Pascal from NV will do the same.
Jaguar cores is a good example of focus on energy efficiency.
Apparently it's not only about process node.
Posted on Reply
#36
lilhasselhoffer
BiggieShadyYou forgot that a smaller process node has more transistors overall on the same surface. So the efficiency optimizations are very relevant here to keep heat-dissipation spikes in check. The die will have to be both smaller and carry more transistors.
No, I really didn't.

The mathematics behind it is an absolute beast, but as you decrease the input voltage, you consistently decrease the voltage swing that signals a 1 or 0. Your transistor count increases in an approximately squared fashion with a linear shrink, while power dissipation per transistor drops in two ways: the signal-switching voltage is lowered, and the associated minimum current at the gate is cut, because the gates themselves are smaller. With less switching power needed, and a smaller physical transistor for electrons to pass through, you have the net effect of a lithography shrink decreasing power consumption at a slightly greater rate than it increases heat by packing more transistors into the same space.

The engineering has to determine what the acceptable voltage levels are, but for the last decade we've managed to keep or increase switching frequencies, decrease transistor size, and increase transistor count in roughly the same silicon die area. Our chips today actually use less power under load than their predecessors (which is why TDP can drop). If you have a hard time taking my word for it, let's look at a 2600K versus a 4770K.
www.bit-tech.net/hardware/cpus/2011/01/03/intel-sandy-bridge-review/11
www.tomshardware.com/reviews/core-i7-4770k-haswell-review,3521-18.html

You're looking at 156 W consumed for the 2600K (loaded, 3.4 GHz), while the 4770K draws 95.5 W.



I'm using Intel as the benchmark here because they rarely have dramatic alterations where one generation is optimized for a huge improvement of subsequent generations. They've been focused on power management since Sandy Bridge, yet they still manage to give us more transistors, in roughly the same die space, at greater frequencies and even manage to decrease TDP.
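To put rough numbers on the scaling argument above, here's a quick sketch using the standard CMOS dynamic-power relation P ≈ α·C·V²·f. Every value below (activity factor, capacitance, voltages, clocks) is invented for illustration, not measured from any real chip:

```python
# Quick sketch of CMOS dynamic (switching) power: P ~ alpha * C * V^2 * f.
# alpha = activity factor, C = total switched capacitance, V = supply
# voltage, f = clock frequency. All numbers are illustrative, not measured.

def dynamic_power(alpha, capacitance_f, voltage_v, freq_hz):
    """Approximate switching power of a CMOS chip, in watts."""
    return alpha * capacitance_f * voltage_v ** 2 * freq_hz

# "Old" node: higher supply voltage.
old = dynamic_power(alpha=0.2, capacitance_f=100e-9, voltage_v=1.25, freq_hz=3.4e9)

# "New" node: ~2x the transistors, so total switched capacitance rises
# somewhat, but the lower voltage enters squared and wins overall.
new = dynamic_power(alpha=0.2, capacitance_f=130e-9, voltage_v=1.00, freq_hz=3.5e9)

print(f"old node: {old:.1f} W, new node: {new:.1f} W")
```

The point of the toy numbers: even with more switched capacitance and a slightly higher clock, the squared voltage term pulls total power down, which is the same direction the 2600K-to-4770K figures move.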
Posted on Reply
#37
BiggieShady
lilhasselhofferThey've been focused on power management since Sandy Bridge, yet they still manage to give us more transistors, in roughly the same die space, at greater frequencies and even manage to decrease TDP.
What do you mean, "yet"? They manage to do it because of it, not in spite of it. All benefits of lower voltages get negated with increased frequency, plus you get more transistors in roughly the same space, which increases thermal density. TDP does get lower, but even with lower TDP Haswell is still a bitch to cool because of higher thermal density.
What I'm saying is that it's even more necessary to focus on efficiency and power management with increased thermal densities to maximize frequency.
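To put the thermal-density point in rough numbers (die sizes and TDPs below are approximate public figures, so treat this as back-of-envelope only):

```python
# Back-of-envelope thermal density (watts per mm^2). Die sizes and TDPs
# are approximate public figures; treat the result as ballpark only.
chips = {
    "i7-2600K (Sandy Bridge)": {"tdp_w": 95, "die_mm2": 216},
    "i7-4770K (Haswell)":      {"tdp_w": 84, "die_mm2": 177},
}

for name, c in chips.items():
    density = c["tdp_w"] / c["die_mm2"]
    print(f"{name}: ~{density:.2f} W/mm^2")
# Lower TDP, but the smaller die concentrates it: density goes up.
```

Roughly 0.44 vs 0.47 W/mm² — the newer chip dissipates less total power but packs it into a smaller area, which is why it can be harder to cool despite the lower TDP.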
Posted on Reply
#38
TheoneandonlyMrK
lilhasselhofferNo, I really didn't.

The mathematics behind it is an absolute beast, but as you decrease the input voltage, you consistently decrease the voltage swing that signals a 1 or 0. Your transistor count increases in an approximately squared fashion with a linear shrink, while power dissipation per transistor drops in two ways: the signal-switching voltage is lowered, and the associated minimum current at the gate is cut, because the gates themselves are smaller. With less switching power needed, and a smaller physical transistor for electrons to pass through, you have the net effect of a lithography shrink decreasing power consumption at a slightly greater rate than it increases heat by packing more transistors into the same space.

The engineering has to determine what the acceptable voltage levels are, but for the last decade we've managed to keep or increase switching frequencies, decrease transistor size, and increase transistor count in roughly the same silicon die area. Our chips today actually use less power under load than their predecessors (which is why TDP can drop). If you have a hard time taking my word for it, let's look at a 2600K versus a 4770K.
www.bit-tech.net/hardware/cpus/2011/01/03/intel-sandy-bridge-review/11
www.tomshardware.com/reviews/core-i7-4770k-haswell-review,3521-18.html

You're looking at 156 W consumed for the 2600K (loaded, 3.4 GHz), while the 4770K draws 95.5 W.



I'm using Intel as the benchmark here because they rarely have dramatic alterations where one generation is optimized for a huge improvement of subsequent generations. They've been focused on power management since Sandy Bridge, yet they still manage to give us more transistors, in roughly the same die space, at greater frequencies and even manage to decrease TDP.
Intel have done most of that, but not all. The frequency increases have been abysmal, and now Intel's core counts are going up, watch those tiny frequency increases melt away m8. And I mean tiny, they aren't matching my FX-8350's stock 4 GHz in many SKUs these days.
Posted on Reply
#39
the54thvoid
Intoxicated Moderator
theoneandonlymrkIntel have done most of that, but not all. The frequency increases have been abysmal, and now Intel's core counts are going up, watch those tiny frequency increases melt away m8. And I mean tiny, they aren't matching my FX-8350's stock 4 GHz in many SKUs these days.
I'm no expert but frequency isn't the best metric. IPC is? It doesn't matter if Brand A is 'x' Hz if those Hz don't give the performance.
Posted on Reply
#40
BiggieShady
the54thvoidI'm no expert but frequency isn't the best metric. IPC is? It doesn't matter if Brand A is 'x' Hz if those Hz don't give the performance.
My oversimplified understanding: CPU architecture has a known Instruction Per Cycle number by design, power efficiency of the actual chip determines thermal dissipation which determines max frequency while still being inside chosen thermal envelope.
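As a toy model of that: sustained throughput scales roughly as IPC times frequency, so a lower-clocked chip with higher IPC can still win. The IPC figures below are invented for illustration, not measured values for any real CPU:

```python
# Toy model: sustained throughput ~ IPC * frequency. The IPC values are
# invented to make the point; they are not measured figures for any CPU.

def throughput_gips(ipc, freq_ghz):
    """Billions of instructions retired per second (very rough model)."""
    return ipc * freq_ghz

brand_a = throughput_gips(ipc=1.2, freq_ghz=4.0)  # higher clock, lower IPC
brand_b = throughput_gips(ipc=1.8, freq_ghz=3.5)  # lower clock, higher IPC
print(f"A: {brand_a:.1f} GIPS, B: {brand_b:.1f} GIPS")  # B wins despite fewer Hz
```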
Posted on Reply
#41
lilhasselhoffer
BiggieShadyWhat do you mean "yet". They manage to do it because of it, not in spite of it. All benefits of lower voltages get negated with increased frequency plus you get more transistors in roughly the same space which increases thermal density. TDP does get lower but even with lower TDP Haswell is still a bitch to cool because of higher thermal density.
What I'm saying is that it's even more necessary to focus on efficiency and power management with increased thermal densities to maximize frequency.
I think you're reading words that aren't there.

The point is this - I can shrink my transistors, array them into a core, and spread out a bunch of cores on a die in order to decrease average thermal loading dramatically. The cores themselves would produce less heat (they are smaller, and thus require less power), and it would be divided up over the same area. What Intel has constantly said is that their power management, i.e. shutting down inactive cores and decreasing clocks, is getting better. They haven't touted genuine refinements, for energy savings specifically, for years.

How do I prove this? The figures I gave you. The wattage consumed by the CPUs under load takes the idling benefits of the newer architectures out of the equation. We have to agree that thermal limits are relatively constant, because the materials aren't substantially different. If the thermal limits are constant, the die size is relatively constant, and the number of transistors on the die is increasing, then we've got a net decrease in thermal load per transistor. I've said all of this assuming a constant frequency, yet that isn't true either. The operational frequency continues to increase (even if only slightly).

So what we've got here is that despite increasing transistor count, despite keeping die size constant, despite not appreciably changing materials, despite increasing operational frequencies, and despite not having a heavy focus on energy consumption while under extreme load, the chips are actually running with less overall power draw and a smaller rated TDP. Please, explain how that makes any sense. If the shrink wasn't substantially decreasing power usage per transistor, as you seem to be implying with the argument that it's optimizations doing this, then why aren't all chips required to have a liquid cooler (lest they incinerate upon startup)?



Edit:
BiggieShadyMy oversimplified understanding: CPU architecture has a known Instruction Per Cycle number by design, power efficiency of the actual chip determines thermal dissipation which determines max frequency while still being inside chosen thermal envelope.
This is a problem which may explain why we differ.

Power efficiency and heat don't directly correspond to operational frequency. Operational frequency is determined by how quickly the semi-conductor materials in a transistor can go from a "high" to "low" voltage given a signal (why it's expressed in Hz, or 1/s). The differences between high and low influence frequency just as much as the transistor composition.

If you look at some data sheets, we can see this in action.

Power transistors generally take much longer to react, because the difference between on and off is large. Despite this, they do have some minor leakage; this is why "off" in a circuit still consumes power.

The reason we're able to constantly push frequencies higher, despite having the same material, is that the threshold for "on" is constantly decreasing. If the gap between on and off can be minimized, the corresponding frequency can be pushed up. This is why you test overclocks by calculating pi: if the thresholds aren't respected, the computed value of pi fluctuates because transistors didn't retain the appropriate state.
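A rough way to see why a smaller swing switches faster is the gate-delay approximation t ≈ C·ΔV/I: the time to move the gate between logic levels is capacitance times voltage swing over drive current. All values below are illustrative, not from any datasheet:

```python
# Rough gate-delay picture: the time to swing a gate between logic levels
# is about t ~ C * dV / I (capacitance times voltage swing over drive
# current). All values are illustrative, not from any datasheet.

def switch_time_s(cap_f, swing_v, drive_a):
    """Seconds to charge cap_f through drive_a across swing_v."""
    return cap_f * swing_v / drive_a

wide = switch_time_s(cap_f=1e-15, swing_v=1.2, drive_a=50e-6)    # ~24 ps
narrow = switch_time_s(cap_f=1e-15, swing_v=0.8, drive_a=50e-6)  # ~16 ps
print(f"wide swing: {wide*1e12:.0f} ps, narrow swing: {narrow*1e12:.0f} ps")
```

Shrinking the swing cuts the delay linearly in this model, which is the "smaller on/off gap lets you push frequency" trade, at the cost of less noise margin between the two states.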
Posted on Reply
#42
64K
Just agree with him @BiggieShady

For the love of god just agree with him.

The walls of text are more righter so acquiesce.
Posted on Reply
#43
lilhasselhoffer
64KJust agree with him @BiggieShady

For the love of god just agree with him.

The walls of text are more righter so acquiesce.
So ignore me.

Go to my name, right click, ignore. If you want to bitch, despite there being a way to fix it, then you're being an idiot.


Edit:
I say this not out of anger, but ease. If you don't care for me, then I should be ignored. That particular feature was new to me a while ago, but it's made life a lot easier since it was pointed out to me.
Posted on Reply
#44
64K
lilhasselhofferSo ignore me.

Go to my name, right click, ignore. If you want to bitch, despite there being a way to fix it, then you're being an idiot.
I don't ignore anyone and I never will. You are entertainment for me.
Posted on Reply
#45
lilhasselhoffer
64KI don't ignore anyone and I never will. You are entertainment for me.
And my edit didn't come through fast enough.

The point is this, I'm long winded. I haven't made claims otherwise. If that isn't acceptable please feel free to silence me. It isn't meant to be an insult, simply me acquiescing to a perceived flaw that I can't, or perhaps won't , rectify.
Posted on Reply
#46
64K
lilhasselhofferAnd my edit didn't come through fast enough.

The point is this, I'm long winded. I haven't made claims otherwise. If that isn't acceptable please feel free to silence me. It isn't meant to be an insult, simply me acquiescing to a perceived flaw that I can't, or perhaps won't , rectify.
You miss 50% of the feedback to your posts. Very few members dispute that you are very intelligent and have a good deal of tech knowledge to share with all of us, however, it's that you spin off into "people are stupid" regularly for some kind of extroverted egotistical self gratification thing that is mundane.
Posted on Reply
#47
lilhasselhoffer
64KYou miss 50% of the feedback to your posts. Very few members dispute that you are very intelligent and have a good deal of tech knowledge to share with all of us, however, it's that you spin off into "people are stupid" regularly for some kind of extroverted egotistical self gratification thing that is mundane.
My ego calls bullshit.

To that end, let me be like most of the people I argue against. I am right because I said so. I will allow you to put forward the large amount of effort to comb through all my conversations, provide links to why the conclusions you draw are correct, and whenever you've put forward all of that effort I'm not going to pay any attention to it and still argue the point.

I cannot possibly be reacting to laziness with anger, because after hundreds of discussions where I tried to be the better person I've learned that the better person isn't the victor. It couldn't possibly be that after providing nuance, admitting to mistakes, and trying to be better I've gotten into more arguments than the people who just say "AMD sucks," "Nvidia sucks," and "Intel sucks." I haven't ever tried to ask people questions, giving them the clear opportunity to answer why I am wrong. I have to be ignoring the people who cite me getting angry.



You know what, you're right. For the next week I'll just put forward the effort most other people do. Consider these three paragraphs superfluous. Just go with "I call bullshit. You're wrong." That gives you some insight into me, right? It gives you the opportunity to address me as something more than a child, incapable of saying what I mean.


Fuck it, we'll do it live!
Posted on Reply
#48
TheinsanegamerN
medi01How on earth did Fury get into fail list?
On CPU front, Carrizo deserved a chance it never had.



Jaguar cores is a good example of focus on energy efficiency.
Apparently it's not only about process node.
Fury over-hyped and under-delivered. It was supposed to be the fastest GPU in the world (it wasn't) and was supposed to overclock well (it didn't, and when OCed it drew tons of power). It was the same price as a 980 Ti, but was slower, more power hungry, and required the mounting of a water cooler. And it released so much later than the 900 series, allowing Nvidia to gain a massive portion of the market.

As for Carrizo, the CPU may have finally been fixed, but it still can't compete against Intel's latest CPUs, or even two-year-old Haswell designs. GPU-wise, it is now outclassed by both Intel's GT3 and GT4e GPUs. And AMD, once again, underwhelmed by allowing OEMs to build whatever junk they wanted, rather than having someone like Clevo make GOOD laptops using the APUs. Perhaps, if AMD had released Carrizo two years ago instead of rehashing Trinity again, and not let OEMs relegate them to red-headed-stepchild status, they would have an actual position in the market right now.
Posted on Reply
#49
HalfAHertz
lilhasselhoffer...The engineering has to determine what the acceptable voltage levels are, but for the last decade we've managed to keep or increase switching frequencies, decrease transistor size, and increase transistor count in roughly the same silicon die area...
I like that part, but unfortunately, for the last couple of generations, the CPU portion of the die in Intel's CPUs has been getting smaller and smaller while the price per CPU has remained roughly the same (or has even increased). And we'll keep getting screwed harder and harder without any competition for Intel.
Posted on Reply
#50
HumanSmoke
HalfAHertzI like that part but unfortunately, for the last couple of generations, the CPU portion of the die in Intel's CPUs has been getting smaller and smaller while the price per CPU has remained roughly the same (or has even increased) . And we'll keep getting screwed harder and harder without any competition for Intel.
But they aren't the only metrics in play are they? If they were, Intel would be turning over process nodes as fast as was technically possible and would have pushed to 450mm wafers ages ago. At least Samsung and TSMC and co. can gouge the hell out of their customers to offset the price of building/upgrading their fabs, clean room costs - litho tooling, test and validation tooling etc. Intel largely produces for its own product lines.
You also need to factor in the die space allocated for IGP (and eDRAM where applicable) for mainstream processors. And as you should be aware, die size for -E/-EP/-EN/-EX hasn't fallen for the most part.
Posted on Reply