Friday, November 13th 2015
Next Gen AMD GPUs to Get a Massive Energy Efficiency Design Focus
AMD's upcoming generations of GPUs will place a major design focus on energy efficiency and gains in performance-per-Watt, according to a WCCFTech report. The first of these chips, codenamed "Arctic Islands," will leverage cutting-edge 10 nm-class FinFET silicon fab technology, coupled with bare-metal and software optimization, to step up performance-per-Watt in a big way. The last time AMD achieved an energy-efficiency leap was with the Radeon HD 5000 series (helped in part by the abysmal energy efficiency of the rival GeForce GTX 400 series).
Source:
WCCFTech
59 Comments on Next Gen AMD GPUs to Get a Massive Energy Efficiency Design Focus
Already thinking about contingency plan... Possibly?
We all like to party.
Sorry, but some comments are mental, i.e. "use more watts, just beat Nvidia." Beat NV, yeah, because prices fall and people party more, but I don't wanna build a nitrogen-cooled rig either.
But honestly, I think I'd split it down the middle: 50% less power, 50% faster. Yeah, I'll settle for that :D I want fluid gameplay... I'm looking at you, 4K.
If this is 10 nm, they must already be looking at refreshing the designs they have for their next cores, no?
and then they wonder why AMD is struggling..
I know that Maxwell is great and all, but I want something like GTX 970 with 6+ GiB VRAM. Or half of a Fury X... Or maybe even a budget-tier professional solution (like cheaper W4100 or K1200), mostly for 3D, but also occasional, "light" (1080p, Mid-High settings) gaming.
Any chance I can find something like that in stores by the end of Q1 2016?
On the CPU front, Carrizo deserved a chance it never got. The Jaguar cores are a good example of a focus on energy efficiency.
Apparently it's not only about process node.
The mathematics behind it is an absolute beast, but the core idea is this: a process shrink lets you lower the supply voltage, which lowers the voltage swing needed to signal a 1 or a 0. Transistor count grows roughly with the square of the linear shrink, while power per transistor falls on two fronts: switching power scales with the square of the voltage (so a lower swing helps a lot), and smaller gates have less capacitance to charge and discharge. With less switching energy needed per transistor, and physically smaller gates for electrons to pass through, the net effect is that lithography decreases power consumption at a slightly greater rate than it increases heat by packing more transistors into the same space.
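The scaling argument above can be sketched with the textbook CMOS dynamic-power formula P = αCV²f. Every number below (activity factor, capacitance, voltages, the 0.7x shrink ratio) is a made-up illustrative value, not real process data:

```python
# Illustrative sketch of the dynamic-power argument: P = a * C * V^2 * f.
# All numbers here are invented for illustration, not real process figures.

def dynamic_power(activity, capacitance_f, voltage_v, freq_hz):
    """Dynamic switching power of one transistor, in watts."""
    return activity * capacitance_f * voltage_v**2 * freq_hz

# "Old" node: per-transistor power at 1.2 V.
old = dynamic_power(activity=0.1, capacitance_f=1e-15,
                    voltage_v=1.2, freq_hz=3.4e9)

# Shrink linearly by ~0.7x: roughly 2x the transistors fit in the same
# area, gate capacitance drops (~0.7x), and supply voltage drops to 1.0 V.
new_per_transistor = dynamic_power(activity=0.1, capacitance_f=0.7e-15,
                                   voltage_v=1.0, freq_hz=3.4e9)

# Power dissipated in the same die area: density doubled, but the
# per-device power dropped by more, so total area power still falls.
old_area_power = 1.0 * old
new_area_power = 2.0 * new_per_transistor

print(new_area_power < old_area_power)  # True: shrink wins, slightly
```

The V² term is doing most of the work here, which is why voltage scaling mattered so much historically.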
Engineering has to determine what the acceptable voltage levels are, but for the last decade we've managed to keep or increase switching frequencies, decrease transistor size, and increase transistor count in roughly the same silicon die area. Our chips today actually use less power under load than their predecessors (which is why TDP can drop). If you have a hard time believing that, let's look at a 2600K versus a 4770K.
www.bit-tech.net/hardware/cpus/2011/01/03/intel-sandy-bridge-review/11
www.tomshardware.com/reviews/core-i7-4770k-haswell-review,3521-18.html
You're looking at 156 watts consumed for the 2600K (loaded, 3.4 GHz), while the 4770K draws 95.5 watts.
I'm using Intel as the benchmark here because they rarely make dramatic alterations that optimize one generation at the expense of its predecessors. They've been focused on power management since Sandy Bridge, yet they still manage to give us more transistors, in roughly the same die space, at greater frequencies, and even manage to decrease TDP.
What I'm saying is that it's even more necessary to focus on efficiency and power management with increased thermal densities to maximize frequency.
The point is this: I can shrink my transistors, array them into a core, and spread a bunch of cores out on a die to decrease average thermal loading dramatically. The cores themselves would produce less heat (they are smaller, and thus require less power), and that heat would be spread over the same area. What Intel has constantly said is that their power management, i.e. shutting down inactive cores and decreasing clocks, is getting better. They haven't touted genuine refinements aimed specifically at energy savings for years.
How do I prove this? The figures I gave you. The wattage consumed by the CPUs under load shows the gains can't just be the idling benefits of the newer architectures. We have to agree that thermal limits are relatively constant, because the materials aren't substantially different. If the thermal limits are constant, the die size is relatively constant, and the number of transistors on the die is increasing, then we've got a net decrease in thermal load per transistor. I've said all of this assuming a constant frequency, yet that isn't true either: operational frequency continues to increase (even if only slightly).
So what we've got here is that despite increasing transistor count, despite keeping die size constant, despite not appreciably changing materials, despite increasing operational frequencies, and despite not having a heavy focus on energy consumption while under extreme load, the chips are actually running with less overall power draw and a smaller rated TDP. Please, explain how that makes any sense. If the shrink wasn't substantially decreasing power usage per transistor, as you seem to be implying with the argument that it's optimizations doing this, then why aren't all chips required to have a liquid cooler (lest they incinerate upon startup)?
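To put rough numbers on the per-transistor claim, we can combine the load-power figures quoted above with Intel's widely published approximate transistor counts and die sizes for these chips. Treat the counts and areas as ballpark figures, not exact measurements:

```python
# Back-of-the-envelope watts-per-transistor and watts-per-mm^2, using the
# review load figures quoted above (156 W and 95.5 W). Transistor counts
# and die sizes are approximate published figures; treat them as ballpark.

chips = {
    # name: (load watts, ~transistor count, ~die area in mm^2)
    "i7-2600K (Sandy Bridge)": (156.0, 0.995e9, 216),
    "i7-4770K (Haswell)":      (95.5, 1.4e9, 177),
}

for name, (watts, transistors, area_mm2) in chips.items():
    per_transistor_nw = watts / transistors * 1e9
    per_mm2 = watts / area_mm2
    print(f"{name}: {per_transistor_nw:.1f} nW/transistor, "
          f"{per_mm2:.2f} W/mm^2")
```

Both power per transistor and power per unit area fall from Sandy Bridge to Haswell, which is the net-decrease-in-thermal-load argument in numbers.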
Edit: This is a problem which may explain why we differ.
Power efficiency and heat don't correspond directly to operational frequency. Operational frequency is determined by how quickly the semiconductor materials in a transistor can go from a "high" to a "low" voltage given a signal (which is why it's expressed in Hz, or 1/s). The difference between high and low influences frequency just as much as the transistor composition does.
If you look at some data sheets, we can see this in action.
Power transistors generally take much longer to react, because the difference between on and off is large. Despite this, they still have some minor leakage, which is why "off" in a circuit still consumes power.
The reason we're able to keep pushing frequencies higher, despite having the same material, is that the "on" threshold is constantly decreasing. If the gap between on and off can be minimized, the corresponding frequency can be pushed up. This is also why you test overclocks with a program that calculates pi: if transistors don't retain the appropriate state, the computed value of pi comes out wrong.
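That threshold argument can be sketched with the textbook alpha-power-law delay approximation, where gate delay is roughly proportional to V / (V − Vth)^α. The constants below are arbitrary illustration, not real device data:

```python
# Toy model of why a smaller gap between "on" and "off" permits a higher
# switching frequency. Uses the alpha-power-law delay approximation:
# delay ~ V / (V - Vth)^alpha. All constants are illustrative only.

def gate_delay(vdd, vth, alpha=1.3, k=1e-10):
    """Approximate gate switching delay in seconds (toy constants)."""
    return k * vdd / (vdd - vth) ** alpha

# Same supply voltage, two different thresholds: a lower threshold means
# a smaller on/off gap to traverse, hence a shorter delay.
low_vth_delay = gate_delay(vdd=1.2, vth=0.3)
high_vth_delay = gate_delay(vdd=1.2, vth=0.6)

print(low_vth_delay < high_vth_delay)  # True: lower threshold -> faster
```

Maximum toggle rate is the reciprocal of the delay, so the lower-threshold device can be clocked higher, at the cost of more leakage when "off."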
For the love of god just agree with him.
The walls of text are more righter so acquiesce.
Go to my name, right click, ignore. If you want to bitch, despite there being a way to fix it, then you're being an idiot.
Edit:
I say this not out of anger, but ease. If you don't care for me, then I should be ignored. That particular feature was new to me a while ago, but it's made life a lot easier since it was pointed out to me.
The point is this: I'm long-winded. I haven't claimed otherwise. If that isn't acceptable, please feel free to silence me. It isn't meant as an insult, simply me acquiescing to a perceived flaw that I can't, or perhaps won't, rectify.
To that end, let me be like most of the people I argue against. I am right because I said so. I will allow you to put forward the large amount of effort to comb through all my conversations, provide links to why the conclusions you draw are correct, and whenever you've put forward all of that effort I'm not going to pay any attention to it and still argue the point.
I cannot possibly be reacting to laziness with anger, because after hundreds of discussions where I tried to be the better person I've learned that the better person isn't the victor. It couldn't possibly be that after providing nuance, admitting to mistakes, and trying to be better I've gotten into more arguments than the people who just say "AMD sucks," "Nvidia sucks," and "Intel sucks." I haven't ever tried to ask people questions, giving them the clear opportunity to answer why I am wrong. I have to be ignoring the people who cite me getting angry.
You know what, you're right. For the next week I'll just put forward the effort most other people do. Consider these three paragraphs superfluous. Just go with "I call bullshit. You're wrong." That gives you some insight into me, right? It gives you the opportunity to address me as something more than a child, incapable of saying what I mean.
Fuck it, we'll do it live!
As for Carrizo, the CPU may have finally been fixed, but it still can't compete against Intel's latest CPUs, or even two-year-old Haswell designs. GPU-wise, it is now outclassed by both Intel's GT3 and GT4e GPUs. And AMD, once again, underwhelmed by allowing OEMs to build whatever junk they wanted, rather than having someone like Clevo make GOOD laptops using the APUs. Perhaps, if AMD had released Carrizo two years ago instead of rehashing Trinity again, and hadn't let OEMs relegate them to redheaded-stepchild status, they would have an actual position in the market right now.
You also need to factor in the die space allocated for IGP (and eDRAM where applicable) for mainstream processors. And as you should be aware, die size for -E/-EP/-EN/-EX hasn't fallen for the most part.