Thursday, June 19th 2014

AMD Details Plans to Deliver 25x APU Energy Efficiency Gains by 2020

AMD today announced its goal to deliver a 25x improvement in the energy efficiency of its Accelerated Processing Units (APUs) by 2020. Details of the innovations expected to produce those efficiency gains were presented today by AMD's Chief Technology Officer Mark Papermaster during a keynote at the China International Software and Information Service Fair (CISIS) conference in Dalian, China. The "25X20" target is a substantial acceleration compared to the prior six years (2008 to 2014), during which AMD improved the typical-use energy efficiency of its products more than 10x.

Worldwide, three billion personal computers use more than one percent of all energy consumed annually, and 30 million computer servers use an additional 1.5 percent of all electricity consumed, at an annual cost of $14 billion to $18 billion USD. Expanded use of the Internet and mobile devices, along with growing interest in cloud-based video and audio content, is expected to push all of those numbers higher in the coming years.
"Creating differentiated low-power products is a key element of our business strategy, with an attending relentless focus on energy efficiency," said Papermaster. "Through APU architectural enhancements and intelligent power efficient techniques, our customers can expect to see us dramatically improve the energy efficiency of our processors during the next several years. Setting a goal to improve the energy efficiency of our processors 25 times by 2020 is a measure of our commitment and confidence in our approach."

"The energy efficiency of information technology has improved at a rapid pace since the beginning of the computer age, and innovations in semiconductor technologies continue to open up new possibilities for higher efficiency," said Dr. Jonathan Koomey, research fellow at the Steyer-Taylor Center for Energy Policy and Finance at Stanford University. "AMD has steadily improved the energy efficiency of its mobile processors, having achieved greater than a 10-fold improvement over the last six years in typical-use energy efficiency. AMD's focus on improving typical power efficiency will likely yield significant consumer benefits substantially improving real-world battery life and performance for mobile devices. AMD's technology plans show every promise of yielding about a 25-fold improvement in typical-use energy efficiency for mobile devices over the next six years, a pace that substantially exceeds historical rates of growth in peak output energy efficiency. This would be achieved through both performance gains and rapid reductions in the typical-use power of processors. In addition to the benefits of increased performance, the efficiency gains help to extend battery life, enable development of smaller and less material intensive devices, and limit the overall environmental impact of increased numbers of computing devices."

Moore's Law states that the number of transistors that can be built in a given area doubles roughly every two years. Dr. Koomey's research demonstrates that, historically, the energy efficiency of processors has closely tracked the rate of improvement predicted by Moore's Law. Through intelligent power management and APU architectural advances, in tandem with semiconductor manufacturing process technology improvements and a focus on typical-use power, AMD expects its energy efficiency achievements to outpace the historical efficiency trend predicted by Moore's Law by at least 70 percent between 2014 and 2020.
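For context, a quick back-of-the-envelope calculation (a sketch in Python, using only the 25x target and the 2014-2020 window stated above, with a doubling-every-two-years trend as the illustrative baseline; the 70 percent figure is AMD's own framing and is not derived here) shows what the goal implies in annual terms:

```python
# Back-of-the-envelope check of the "25x by 2020" target. Illustrative only:
# the 25x figure and the 2014-2020 window come from the article above; the
# doubling-every-two-years baseline is the Moore's-Law-style trend it cites.
years = 6
target_gain = 25.0

annual_rate = target_gain ** (1 / years)     # implied compound annual gain
moore_annual = 2 ** (1 / 2)                  # doubling every two years
moore_cumulative = 2 ** (years / 2)          # that trend compounded over six years

print(f"implied annual gain for 25x in {years} years: {annual_rate:.2f}x")            # ~1.71x
print(f"doubling every 2 years: {moore_annual:.2f}x/year, {moore_cumulative:.0f}x total")  # ~1.41x, 8x
print(f"25x target vs. that trend: {target_gain / moore_cumulative:.1f}x ahead")      # ~3.1x
```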

Architecting for Energy-Efficiency Leadership
Like advances in computing performance, advances in power efficiency have historically arrived with new generations of silicon process technology that shrink the size of each individual transistor. AMD expects to outpace the typical-use power-efficiency gains that process technology transitions alone would deliver through 2020 by successfully executing three central pillars of the company's energy-efficient design strategy:
  • Heterogeneous-computing and power optimization: Through Heterogeneous System Architecture (HSA), AMD combines CPU and GPU compute cores and special purpose accelerators such as digital signal processors and video encoders on the same chip in the form of APUs. This innovation from AMD saves energy by eliminating connections between discrete chips, reduces computing cycles by treating the CPU and GPU as peers, and enables the seamless shift of computing workloads to the optimal processing component. The result is improved energy efficiency and accelerated performance for common workloads, including standard office applications as well as emerging visually oriented and interactive workloads such as natural user interfaces and image and speech recognition. AMD provides APUs with HSA features to the embedded, server and client device markets, and its semi-custom APUs are inside the new generation of game consoles.
  • Intelligent, real-time power management: Most computing time is actually idle time, the intervals between keystrokes, touch inputs, or time spent reviewing displayed content. Executing tasks as quickly as possible to hasten a return to idle, and then minimizing the power used at idle, is extremely important for managing energy consumption. Most consumer-oriented tasks such as web browsing, office document editing, and photo editing benefit from this "race to idle" behavior (a toy numerical model of the trade-off is sketched after this list). The latest AMD APUs perform real-time analysis of the workload and applications, dynamically adjusting clock speed to achieve optimal throughput rates. Similarly, AMD offers platform-aware power management whereby the processor can overclock to get the job done quickly, then drop back into a low-power idle mode.
  • Future innovations in power efficiency: Improvements in efficiency require technology development that takes many years to complete. AMD recognized the need for energy efficiency years ago and made the research investments that have since led to high-impact features. Going forward, many differentiating capabilities such as inter-frame power gating, per-part adaptive voltage, voltage islands, further integration of system components, and other techniques still in development should yield accelerated gains.
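The race-to-idle trade-off referenced in the power-management bullet above can be illustrated with a toy model. The sketch below uses invented power and timing numbers (they are not AMD figures): because the platform burns a fixed power for as long as it is awake, finishing quickly and dropping everything into deep idle can cost less energy than stretching the work out at low clocks.

```python
# Toy model of "race to idle": the platform (memory, display, uncore, VRMs)
# draws a fixed power while awake, so finishing the work quickly and dropping
# the whole platform into deep idle can use less energy than running slowly
# for the entire interval. All numbers are invented for illustration.

PLATFORM_AWAKE_W = 4.0   # fixed power while anything is running
DEEP_IDLE_W = 0.5        # whole-platform power once the work is done
WINDOW_S = 20.0          # time between two user inputs
WORK_UNITS = 20.0        # amount of work due within the window

def total_energy(core_power_w, units_per_s):
    """Energy over the window: active phase at core+platform power, then deep idle."""
    active_s = WORK_UNITS / units_per_s
    assert active_s <= WINDOW_S, "work must fit inside the window"
    return ((core_power_w + PLATFORM_AWAKE_W) * active_s
            + DEEP_IDLE_W * (WINDOW_S - active_s))

race_to_idle = total_energy(core_power_w=8.0, units_per_s=4.0)   # fast, then sleep
slow_and_low = total_energy(core_power_w=2.0, units_per_s=1.0)   # busy the whole time

print(f"race to idle: {race_to_idle:.1f} J")   # ~67.5 J
print(f"slow and low: {slow_and_low:.1f} J")   # ~120.0 J
```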
Industry analyst firm TIRIAS Research recently reviewed AMD's methodology for measuring its energy efficiency and the plans to achieve a 25x improvement by 2020 and produced a publicly-available white paper detailing their analysis.

"The goal of an energy-efficient processor is to deliver more performance than the prior generation at the same or less power," said Kevin Krewell, analyst at TIRIAS Research. "AMD's plan to accelerate the energy-efficiency gains for its mobile-computing processors is impressive. We believe that AMD will achieve its energy efficiency goal, partially through process improvement but mostly by combining the savings from reducing idle power, the performance boost of heterogeneous system architecture, and through more intelligent power management. With this undertaking, AMD demonstrates leadership in the computing industry, driving innovations for a more energy-efficient future."

49 Comments on AMD Details Plans to Deliver 25x APU Energy Efficiency Gains by 2020

#26
dj-electric
This just in: AMD are planning to exist until at least 2020.


[humour]
#27
HumanSmoke
theoneandonlymrk: Nice trolling mroofie but they Are going to piss these economy figures whilst dancing on intels grave (put there by qualcom and others) ;p
AMD might be making plans for the future, but that doesn't mean that the competition stands still.
This also seems to reflect the classic mindset that dug a large hole for AMD in the first place. A single-minded focus on what would become K8 and Bulldozer whilst almost totally ignoring the competition and expecting Intel to persevere with Netburst and Core respectively. Intel might be wedded to x86, but that doesn't mean it's their sole focus - they do have an ARM architectural licence, and the IP deal (rare for Intel) with Rockchip tends to point to some diversification in processor strategy.

@GhostRyder
You keep dreaming those dreams son. The naïveté is refreshing. Last time I checked, Intel was built across quite a few product lines - and even taking CPUs in isolation, they basically own the x86 pro markets. HSA is all nice and dandy but at some stage it has to progress to actual implementation rather than a PPS decks and "The Future IS..." ™. For that to happen, AMD need to start delivering. They won't have IBM on board, and Dell, Cisco, and HP are all firmly entrenched in the Intel camp.
#28
Aquinus
Resident Wat-man
The problem is that regardless of what AMD does, Intel can always be one step ahead because of how much money Intel has available for things like R&D.

Seriously, how do you think AMD plans to contend with CPUs like the C2750? It's like an i5, without an iGPU, twice as many cores, and the PCH put onto the CPU. It's everything you could ever want from a low power CPU with the exception of half-decent graphics, but Intel already knows how to play that game with the Iris Pro and if the consumer market ever demanded it, I'm sure Intel would deliver and it's important to remember that Intel's iGPUs aren't as crappy as they used to be (most people don't game, keep that in mind too.)

AMD should take all this PR funding and put it into R&D because pandering to the masses isn't going to make their hardware any better than it already is. I don't see Intel making claims like this nearly as often as AMD does when it comes to PR.

With all of this said, I still love my AMD graphics cards but I'm glad I decided to get an i7.
#29
GhostRyder
@HumanSmoke and you keep blowing that ignorant smoke. Funny read as per usual. Gee wonder why IBM, Dell, and some of the other camps are stuck with intel, could be that whole business that just got another settlement recently or in the past, pick your poison. Did I ever mention HSA once?

But then again I expect nothing less from you hence why I rarely care anymore what you have to say. Keep posting I have nothing to say to you.
Aquinus: The problem is that regardless of what AMD does, Intel can always be one step ahead because of how much money Intel has available for things like R&D.

Seriously, how do you think AMD plans to contend with CPUs like the C2750? It's like an i5, without an iGPU, twice as many cores, and the PCH put onto the CPU. It's everything you could ever want from a low power CPU with the exception of half-decent graphics, but Intel already knows how to play that game with the Iris Pro and if the consumer market ever demanded it, I'm sure Intel would deliver and it's important to remember that Intel's iGPUs aren't as crappy as they used to be (most people don't game, keep that in mind too.)

AMD should take all this PR funding and put it into R&D because pandering to the masses isn't going to make their hardware any better than it already is. I don't see Intel making claims like this nearly as often as AMD does when it comes to PR.

With all of this said, I still love my AMD graphics cards but I'm glad I decided to get an i7.
Maybe, no one is saying an i7 is not as good as anything amd has on the table. They have the best performance right now and it's not going to change for awhile.
#30
Aquinus
Resident Wat-man
GhostRyder: Maybe, no one is saying an i7 is not as good as anything amd has on the table. They have the best performance right now and it's not going to change for awhile.
...or power efficiency for that matter and if Intel's iGPUs continue to improve, AMD is going to lose the iGPU advantage as well which leaves them with nothing but cost. I don't know about you, but that troubles me.
#31
GhostRyder
Aquinus: ...or power efficiency for that matter and if Intel's iGPUs continue to improve, AMD is going to lose the iGPU advantage as well which leaves them with nothing but cost. I don't know about you, but that troubles me.
True, but the chips that do have Iris Pro are expensive at this time. The mobile market is where these matter and where they shine.

Right now Iris Pro's main advantage is the RAM built into the chip. Depends on how far they take it, but I could dig it either way if offered at a decent price.
#32
HumanSmoke
GhostRyder: @HumanSmoke and you keep blowing that ignorant smoke. Gee wonder why IBM, Dell, and some of the other camps are stuck with intel
Well, talking of ignorance, IBM haven't been with Intel since the 5162......twenty-eight years ago
GhostRyder: Did I ever mention HSA once?
Nope, but then I'd be surprised if you did since you think the computing business revolves around APU gaming laptops.
Aquinus: ...or power efficiency for that matter and if Intel's iGPUs continue to improve, AMD is going to lose the iGPU advantage as well which leaves them with nothing but cost. I don't know about you, but that troubles me.
And so it should, as well as everyone else for that matter. AMD's business strategy seems to be based on razor-thin margins (console and consumer APUs, ARM cores for server), which means they need large sales volumes. OK when you have OEM confidence and a locked-down market. With the exception of the gaming consoles - which don't net a big return - that isn't the case.
GhostRyder: True, but the chips that do have iris pro are expensive at the time. The mobile market is where these matter and where they shine.
Right now iris pros main advantage is that ram built into the chip. Depends on how far they take it, but I could dig it either way if offered at a decent price.
How long do you think it would take Intel to jam an HD 5200 into any chip if they felt that their market dominance was threatened by not having it? This is a company with a huge fabrication overcapacity.
#33
GhostRyder
HumanSmoke: Well, talking of ignorance, IBM haven't been with Intel since the 5162......twenty-eight years ago
"Are stuck", they use their own processor and mostly intel in the server and desktop world which now or are about to belong to Lenovo. Want a picture of an IBM machine with an intel processor inside?
HumanSmoke: Nope, but then I'd be surprised if you did since you think the computing business revolves around APU gaming laptops.
It being a media center, since a 600 dollar APU laptop is highly unlikely to be a straight compute device. I'm not surprised you don't understand that there are casual users out there who use their laptops as media houses and do not intend to spend a fortune on a laptop.
HumanSmoke: And so it should, as well as everyone else for that matter. AMD's business strategy seems to be based on razor thin margins ( console and consumer APUs, ARM cores for server) which means they need large sales volumes. OK when you have OEM confidence and a locked down market. With the exception of the gaming consoles -which don't net big return, that isn't the case.
Yea because over ten million devices sold is minor...
HumanSmoke: How long do you think it would take Intel to jam an HD 5200 into any chip if they felt that their market dominance was threatened by not having it? This is a company with a huge fabrication overcapacity.
Or they could just stick with what they usually do...

Now then I'm done with you, and I'll leave on a nice Mark Twain quote which I should heed. I'm sure your response will be equally hilarious but I would rather not drag this thread any further off subject than this.
#34
Over_Lord
News Editor
mroofie: Reads AMD APU *
Stops reading *
Clicks on close tab*
And yet you are still here :P

Looks like somebody 'roofied' you.

See what I did there?
#35
HumanSmoke
GhostRyder: "Are stuck", they use their own processor and mostly intel in the server and desktop world which now or are about to belong to Lenovo. Want a picture of an IBM machine with an intel processor inside?
Oh, you suddenly want to bleat on about servers when it's IBM vs Intel, but when it's AMD vs Intel, server share isn't a subject for discussion? You seemed to want to focus entirely upon consumer products. I adapted.
GhostRyder: Did I ever mention HSA once?
You do realise that AMD's whole enterprise strategy is predicated upon HSA ?
You flip-flop faster than a politician caught red handed with a rent boy
GhostRyder: Now then I'm done with you...
Didn't you say that last time out? Oh, yes! You did
GhostRyder: Keep posting I have nothing to say to you.
:rolleyes: o_O
GhostRyder: Yea because over ten million devices sold is minor...
So what? Sales mean f___ all if it doesn't translate into revenue.
#36
R0H1T
HumanSmoke: AMD might be making plans for the future, but that doesn't mean that the competition stands still.
This also seems to reflect the classic mindset that dug a large hole for AMD in the first place. A single minded focus on what would become K8 and Bulldozer whilst almost totally ignoring the competition and expecting Intel to persevere with Netburst and Core respectively. Intel might be wedded to x86, but that doesn't mean it's their sole focus - they do have an ARM architectural licence, and the IP deal (rare for Intel) with Rockchiptends to point to some diversification in processor strategy.

@GhostRyder
You keep dreaming those dreams son. The naïveté is refreshing. Last time I checked, Intel was built across quite a few product lines - and even taking CPUs in isolation, they basically own the x86 pro markets. HSA is all nice and dandy but at some stage it has to progress to actual implementation rather than a PPS decks and "The Future IS..." ™. For that to happen, AMD need to start delivering. They won't have IBM on board, and Dell, Cisco, and HP are all firmly entrenched in the Intel camp.
Ahem you were saying ~
The graphics units will also add support for Shared Virtual Memory. As we understand, and this is not confirmed, this feature will allow the CPU and GPU to share system memory, which should boost performance of heterogeneous applications.
Some features of Skylake graphics architecture

The fact is Intel has benefited greatly from the innovations AMD has brought to the x86 & general computing realm, whilst the single biggest gift they've received from Intel in the last decade has been the bribes to OEM's circa 2006, in other words a stab in the back! Also Nvidia is embracing HSA with CUDA 6 (software only atm), so what I see from your post is ignorance for one, & secondly you (perhaps) think that Intel is pro-consumer when in fact they're virtually the exact opposite, & their actions over the last many years, like unfairly blocking overclocking on non-Z boards just recently, certainly prove this point!
#37
HumanSmoke
R0H1T: Ahem you were saying ~
Directly from your quote:
and this is not confirmed, this feature will allow the CPU and GPU to share system memory, which should boost performance of heterogeneous applications.
Which heterogeneous applications would they be ? Would these be future applications or applications actually available?
R0H1T: The fact is Intel has benefited greatly from the innovations AMD has brought to the x86 & general computing realm
And? What has that got to do with AMD's strategic planning?
R0H1T: whilst the single biggest gift they've received from Intel in the last decade has been the bribes to OEM's circa 2006, in other words a stab in the back !
Yep. That's Intel.
R0H1T: Also Nvidia is embracing HSA with CUDA 6 (software only atm)
Not strictly HSA, it's unified memory pooling and isn't Nvidia part of the OpenPOWER consortium rather than the HSA Foundation?
R0H1T: so what I see from your post is ignorance
If you think OpenPOWER, Intel's UMA, and HSA are all interchangeable on a software level I think you're going to have to show your working before you start bandying around terms like ignorance.
R0H1T: secondly you (perhaps) think that Intel is pro-consumer
Maybe you should stop ascribing conclusions based on comments that haven't been made.
I'm a realist, and I see what the vendors do, how they achieve it, and the outcomes. Noting the facts doesn't imply anything other than noting the facts.
R0H1T: when in fact they're virtually the exact opposite & their actions, like unfairly blocking overclocking on non Z boards just recently, over the last many years certainly proves this point !
Looks like you're just looking for an excuse to vent because this has absolutely no correlation to anything I've commented on.

Looking for an argument that Intel isn't an abuser of its position? You won't find one here. Intel's modus operandi is fairly well known. Intel's failings as a moral company don't excuse AMD's years of dithering, changing of focus depending upon what others are doing, saddling themselves with a massive debt burden by paying double what ATI was worth, selling off mobile IP for peanuts, dismissing the mobile market in toto, and a host of missteps.

You want to talk about ignorance? Blame Intel's bribery of OEM's (particularly Dell) to keep AMD out of the market? Know why the settlement wasn't bigger? AMD - thanks to Jerry "Real men have fabs" Sanders - were too proud to second-source foundry capacity. Bribes from 2006? Sure there were....AMD also couldn't supply the vendors it already had. Think that was a blip? Analysts were warning of AMD processor shortages years before this ever became acute. AMD complaining that Dell didn't want their processors was offset to a degree by OEM's complaining that AMD chips weren't available in quantity (so, 2002, 2006, and this from 2004 - see the trend), so AMD waited until vendors were publicly complaining* (and Jerry had been put out to pasture) before AMD struck a deal with Chartered Semi....and even then used less than half their outsourcing allocation allowed under the licence agreement with Intel.

Sometimes the truth isn't as cut-and-dried as good versus evil.

*Poor AMD planning causes CPU shortages: ....But European motherboard firms, talking to the INQ on conditions of anonymity, were rather more blunt about the problem. One described the shortages as due to "bad planning".
#38
R0H1T
HumanSmoke: Directly from your quote:
and this is not confirmed, this feature will allow the CPU and GPU to share system memory, which should boost performance of heterogeneous applications.
Which heterogeneous applications would they be ? Would these be future applications or applications actually available?
Well Intel is going to implement HSA now, whether they'll call it xSA or whatever remains to be seen; OpenCL 2.0 for instance brings SVM (shared virtual memory) support, & unless Intel somehow plans to delay implementation of an industry-wide open standard in their iGPU's, I don't see how that piece of info is speculation.
Not strictly HSA, it's unified memory pooling and isn't Nvidia part of the OpenPOWER consortium rather than the HSA Foundation?

If you think OpenPOWER, Intel's UMA, and HSA are all interchangeable on a software level I think you're going to have to show your working before you start bandying around terms like ignorance.
OPENPOWER is completely separate from HSA, hUMA & OpenCL because it's just something IBM's done to keep their POWER-based server line alive. As for Nvidia, now since they're going to add OpenCL 2.x support to their GPUs it means they'll be jumping on the HSA bandwagon themselves; again it doesn't have to be called HSA to be implemented as such, & I won't be surprised if MS brings OS-level support for HSA in win9.
Looks like you're just looking for an excuse to vent because this has absolutely no correlation to anything I've commented on.

Looking for an argument that Intel isn't an abuser of its position? You won't find one here. Intel's modus operandi is fairly well known. Intel's failings as a moral company don't excuse AMD's years of dithering, changing of focus depending upon what others are doing, saddling themselves with a massive debt burden by paying double what ATI was worth, selling off mobile IP for peanuts, dismissing the mobile market in toto, and a host of missteps.

You want to talk about ignorance? Blame Intel's bribery of OEM's (particularly Dell) to keep AMD out of the market? Know why the settlement wasn't bigger? AMD - thanks to Jerry "Real men have fabs" Sanders were too proud to second source foundry capacity. Bribes from 2006? Sure there were....AMD also couldn't supply the vendors it already had. Think that was a blip? Analysts were warning of AMD processor shortages yearsbefore this ever became acute. AMD complaining that Dell didn't want their processors was offset to a degree by OEM's complaining that AMD chips weren't available in quantity (so, 2002, 2006, and this from 2004 - see the trend), so AMD waited until vendors were publicly complaining (and Jerry had been put out to pasture) before AMD struck a deal with Chartered Semi....and even then used less than half their outsourcing allocation allowed under the licence agreement with Intel.

Sometimes the truth isn't as cut-and-dried as good versus evil.
Not really, I've heard this "HSA being vaporware" stuff more than once & it just irks me more every time I hear it. Lastly I'll add that it isn't AMD's fault that most software/game developers are fat ass lazy turds that need spoon feeding, I mean how long has it been since we've had multicore processors & the number of applications/games properly utilizing them is still in the low hundreds at best. It took the next gen consoles for game developers to add support for four or more cores in their game engines, it'll take something bigger for them to adopt HSA but I have very little doubt that those who don't or won't will become extinct, perhaps not in the next 5yrs but certainly in a decade or so. What we as consumers can do is support (software & game) developers that promote innovation & shun those who're dinosaurs in the making.
#39
Aquinus
Resident Wat-man
R0H1T: Not really, I've heard this "HSA being vaporware" stuff more than once & it just irks me more every time I hear it. Lastly I'll add that it isn't AMD's fault that most software/game developers are fat ass lazy turds that need spoon feeding, I mean how long has it been since we've had multicore processors & the number of applications/games properly utilizing them is still in the low hundreds at best. It took the next gen consoles for game developers to add support for four or more cores in their game engines, it'll take something bigger for them to adopt HSA but I have very little doubt that those who don't or won't will become extinct, perhaps not in the next 5yrs but certainly in a decade or so. What we as consumers can do is support (software & game) developers that promote innovation & shun those who're dinosaurs in the making.
Are you a software developer? Do you write concurrent code that is thread-safe and works all the time? Yeah, I didn't think so. Keep your assumptions about how people like me do my job to yourself. Don't presume to talk about something where you have absolutely no idea what kind of work needs to be done to accomplish what you suggest. There are a lot of considerations that need to be made when writing concurrent code, even more so when things like data order or the order that data is processed are important, because when you introduce a basic (and common) factor like that, the benefit of threading and multi-core systems goes out the window because you still have a bottleneck and the only difference is that you moved it from a single thread to a lock where only one thread can run at once, even if you spin up 10 of them.

With all of that said, it pisses me off when people like you think that writing concurrency code that scales is easy when it's not.

For it to scale, most of it needs to be parallel, not just a tiny bit of it and I can't even begin to describe to you how complex that can get.


So if only 50% of your workload is parallel, you top out at roughly a 2x speedup no matter how many cores you add. HALF of your work needs to be parallel just for a 2.0x speedup... and you want code to run on how many cores again?
Wikipedia: The speedup of a program using multiple processors in parallel computing is limited by the time needed for the sequential fraction of the program. For example, if a program needs 20 hours using a single processor core, and a particular portion of the program which takes one hour to execute cannot be parallelized, while the remaining 19 hours (95%) of execution time can be parallelized, then regardless of how many processors are devoted to a parallelized execution of this program, the minimum execution time cannot be less than that critical one hour. Hence the speedup is limited to at most 20×.
en.wikipedia.org/wiki/Amdahl's_law
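To make the arithmetic behind that excerpt concrete, here is a minimal sketch of Amdahl's law in Python; the 50% and 95% parallel fractions are the ones discussed above.

```python
# Amdahl's law: speedup with n processors when a fraction p of the work can run
# in parallel and the remaining (1 - p) stays serial.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# 50% parallel: even with an absurd number of cores the speedup approaches 2x.
for n in (2, 4, 16, 1_000_000):
    print(f"p=0.50, {n:>9} cores -> {amdahl_speedup(0.50, n):.2f}x")

# The Wikipedia example: 95% parallel caps out at 20x no matter how many cores.
print(f"p=0.95, effectively unlimited cores -> {amdahl_speedup(0.95, 1_000_000):.1f}x")
```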
#40
R0H1T
Aquinus: Are you a software developer? Do you write concurrent code that is thread-safe and works all the time? Yeah, I didn't think so. Keep your assumptions about how people like me do my job to yourself. Don't presume to talk about something where you have absolutely no idea what kind of work needs to be done accomplish what you suggest. There are a lot of considerations that need to be made when writing concurrent code, even more so when things like data order or the order that data is processed is important because when you introduce a basic (and common,) factor like that, the benefit of threading and multi-core systems goes out the window because you still have a bottleneck and the only difference is that you moved it from a single thread to a lock where only one thread can run at once, even if you spin up 10 of them.

With all of that said, it pisses me off when people like you think that writing concurrency code that scales is easy when it's not.

For it to scale, most of it needs to be parallel, not just a tiny bit of it and I can't even begin to describe to you how complex that can get.
I don't think there's any need to get offended by something that's posted regularly on this & many other forums, though in a more subtle & (somewhat) polite way. What do you think X game being a cr@ppy port means or Y application being slow as hell on my octa core signifies ?

Also people (including but not limited to developers) do need a push to get things done more efficiently, for instance how many browsers were using GPU acceleration before Google (chrome) pushed them into obsolescence ? How many browsers still don't use SSE 4x or other advanced instruction sets, this isn't just you I'm talking about but it also is not a blanket statement targeting every software/game developer out there, since I clearly put emphasis on most !

So if 50% of your workload is parallel, you'll benefit from two cores basically. HALF of your work needs to be parallel just for a 2.0 speedup... and you want code to run on how many cores again?
Irrelevant since I didn't mention the type of workload & thus you shouldn't try to sell Amdahl's law as an argument in such case.
#41
Aquinus
Resident Wat-man
R0H1T: I don't think there's any need to get offended by something that's posted regularly on this & many other forums, though in a more subtle & (somewhat) polite way. What do you think X game being a cr@ppy port means or Y application being slow as hell on my octa core signifies?
It signifies that maybe the developers had little time and/or little funding to make an already existent game run on a different platform, so it's realistic to assume that maybe the code can't easily be made to utilize more cores without investing a lot more time (which to businesses is money). They're only going to spend so much time on making it perform better than what it needs to.
R0H1T: Also people (including but not limited to developers) do need a push to get things done more efficiently, for instance how many browsers were using GPU acceleration before Google (chrome) pushed them into obsolescence ? How many browsers still don't use SSE 4x or other advanced instruction sets, this isn't just you I'm talking about but it also is not a blanket statement targeting every software/game developer out there, since I clearly put emphasis on most!
GPU acceleration started to become important in browsers because of the complexity of rendering pages now versus pages several years ago. Web applications are much richer and have much more client-side scripting going on that alters the page in ways that make it more intensive than it used to be. Now that's just rendering; it was becoming a bottleneck and, in Google's case with Chrome, they solved it. However that doesn't mean that Chrome uses any more threads to accomplish the same task.
R0H1T: Irrelevant since I didn't mention the type of workload & thus you shouldn't try to sell Amdahl's law as an argument in such case.
You don't need to mention the type of workload for it to be relevant, because performance and the ability to make any application performant on multi-core systems depend on the kind of workload. You can't talk about any level of parallelism without discussing the workload that is to be run in parallel. The point is that making applications multi-threaded highly depends on the application: not all applications can be made to run in parallel, and the impression you're giving me is that you don't believe that is the case, and that is the point I was trying to prove with Amdahl's law.
#42
R0H1T
Aquinus: It signifies that maybe either the developers had little time and/or little funding to make an already existent game run on a different platform so it's realistic to assume that maybe the code can't easily be made to utilize more cores with investing a lot more time (which to businesses is money). They're only going to spend so much time on making it perform better than what it needs to.
What would you say about the likes of EA (DICE) & their bug-filled launch of BF4, or is it that you're downplaying the fault of developers in such a mess ? I would put winrar/winzip in the same category, though they've done a lot, especially in the last couple of years, in implementing multi-core enhancements & hardware (OpenCL) acceleration respectively.
GPU acceleration started to become important on browsers because of the complexity of rendering pages now versus pages several years ago. Web applications are much more rich and have much more client-side scripting that goes on that alter the page in ways that make it more intensive then they used to be. Now that's just rendering, because it was becoming a bottleneck and in Google's case with Chrome, solved it. However that doesn't mean that chrome uses any more threads to accomplish the same task.
The GPU acceleration was just an example of how developers need to be aware of the demands of this ever-changing computing landscape before some of them become irrelevant; btw you still didn't answer why major browsers don't implement SSE 4x or other advanced instruction sets ? FYI firefox had GPU acceleration even before IE & chrome but they enabled it by default only after chrome forced them to, the same goes for IE & their implementation of it since version 9.
You don't need to mention the type of workload for it to be relevant because performance and the ability to make any application performant on multi-core systems depends on the kind of workload. You can't talk about any level of parallelism without discussing the workload that is to be run in parallel. It's a selling point that making applications multi-threaded highly depends on the application, not all applications can be made to run in parallel and the impression you're giving me is that you don't believe that is the case and that is the point I was trying to prove with Amdahl's law.
Again you're nitpicking on what I said, my basic point was that most developers (not all of'em) don't use the tools at their disposal as effectively as they could or rather as they should.
#43
Aquinus
Resident Wat-man
R0H1T: What would you say about the likes of EA (DICE) & their bug filled launch of BF4 OR is that you're downplaying the fault of developers in such a mess ? I would put winrar/winzip in the same category though they've done a lot especially in the last couple of years in implementing multi-core enhancements & hardware (OpenCL) acceleration respectively.
That depends? If it's the fault of them using poorly designed libraries that they wrote in the past, it could be a cost/time saving measure, definitely a bad one, but the company could have pushed them down that road. It could be the developer's fault, but that really depends on the timeline they had for doing the work they had to get done. Development doesn't always go the way you want it to. Sometimes that's the developer's fault and sometimes it isn't. It's hard to say without being inside the company and seeing what is going on but, one thing is certain, it's definitely EA (DICE)'s fault as a whole. :) I don't dispute that for a second.

I would put winrar/winzip and other compression utilities in the category of workloads that are more easily parallelized than others because of the nature of what they're doing. Once again, this comes down to the workload argument. Archival applications and games are two very different kinds of workloads; it's a lot easier to make something like LZMA2 run in parallel than something like a game, which is incredibly more stateful than an algorithm for compression or decompression. This isn't a matter of tools, you could have all the tools in the world but that won't change the nature of some applications and how they need to be implemented. OpenCL doesn't solve all programming issues and it doesn't mysteriously make things that couldn't be run in parallel suddenly able to be. These tools you talk about enable already parallel applications to scale a lot better and across more compute cores than they did before, it doesn't solve the problem of having to make your workload thread-safe without being detrimental to performance in the first place.
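As a rough illustration of why archivers parallelize comparatively easily, here is a minimal sketch using only Python's standard library (real archivers such as WinRAR or 7-Zip use their own chunking and container formats; the chunk size and the random test payload below are arbitrary): each chunk is independent, so the work maps cleanly onto a pool of worker processes.

```python
# Compressing independent chunks in parallel: each chunk carries no shared
# state, so the work maps cleanly onto a pool of worker processes. A game's
# single evolving world state offers no such clean split. Note that
# compressing chunks independently trades a little ratio for parallelism.
import lzma
import os
from concurrent.futures import ProcessPoolExecutor

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB per chunk (arbitrary)

def compress_chunk(chunk):
    return lzma.compress(chunk)

def parallel_compress(data):
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    with ProcessPoolExecutor() as pool:          # one worker per core by default
        return list(pool.map(compress_chunk, chunks))

if __name__ == "__main__":
    payload = os.urandom(16 * 1024 * 1024)       # stand-in for a real file
    print(f"{len(parallel_compress(payload))} chunks compressed independently")
```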

You complain about me nitpicking, but you're pointing out things that require that level of analysis and detail because problems like these aren't as easy to solve as you make them out to be.

One question, have you ever tried to write some OpenCL code and run it on a GPU? Try doing something useful in it if you haven't and you'll understand real quickly why only applications that are mostly parallel code in the first place use OpenCL. I get the impression that you haven't, so you shouldn't talk about something if you've never done it. I am, because I have... trust me, it's not intuitive, it's hard to use, and it's only helpful in very selective situations. I would never use it unless I was working with purely numerical data that was tens of gigabytes large or bigger and only if the algorithm I'm implementing is almost completely stateless (or functional if you will). Games (other than the rendering part, which GPUs are already doing) hardly fit any of those criteria. It's not that developers aren't using OpenCL, it's that they can't or it doesn't make sense to in most real world applications in the consumer market.
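For readers wondering what "some OpenCL code" looks like at its simplest, below is a hedged sketch of a vector add using the pyopencl bindings (it assumes pyopencl and a working OpenCL runtime are installed); even this trivial kernel needs the context/queue/buffer boilerplate, which only pays off for large, mostly stateless, data-parallel work.

```python
# A minimal OpenCL vector add via pyopencl: note how much setup (context,
# queue, buffers, kernel build) surrounds one line of actual math.
import numpy as np
import pyopencl as cl

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

program = cl.Program(ctx, """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];   // the entire "useful" part of the program
}
""").build()

program.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
print("max error:", float(np.max(np.abs(out - (a + b)))))
```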

I do enjoy listening to you try to say what developers are and are not doing right when you're not in their shoes. Even as a developer I wouldn't presume to think I knew more about another developer's project than they do without even seeing the code itself and having worked with it. So I find it both amusing and disturbing that you feel that you can voice your opinion in such an authoritative way when not even I would make those kinds of claims given my own experience in the subject, as I'm a developer professionally and I'm even working on a library that uses multiple threads.

Tell me more about why you're right.
#44
Steevo
W1zzard: TPU plans to hand out 25x more free graphics cards to readers by 2020
Comparative analysis shows this is plausible given the current trend of 0, I for one approve this message.

**Edit** There is a lot of butthurt in this thread, over PR, nothing more. I am glad they have this goal.

How about we debate the power usage and how perhaps an embedded capacitor (or several) that can provide the peak power required when firing up more cores or execution units could provide us with a 200 MHz CPU that clocks to 4 GHz instantly?

Decoupling capacitor built in anyone?
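A back-of-the-envelope check of that idea (all values below are assumptions for illustration, not measurements of any real package): the energy stored in on-package decoupling capacitance can bridge a load step for roughly microseconds, enough to cover a voltage-regulator ramp but not a sustained boost.

```python
# How long could embedded decoupling capacitance bridge a sudden power spike
# while extra cores spin up? All values are assumptions for illustration.
C_FARADS = 100e-6          # assumed on-package capacitance
V_HIGH, V_LOW = 1.2, 1.0   # assumed usable voltage swing before logic browns out
P_SPIKE_W = 20.0           # assumed extra power demanded during the ramp

usable_energy_j = 0.5 * C_FARADS * (V_HIGH**2 - V_LOW**2)
bridge_time_s = usable_energy_j / P_SPIKE_W
print(f"{usable_energy_j * 1e6:.0f} uJ usable -> about {bridge_time_s * 1e6:.1f} us at {P_SPIKE_W:.0f} W")
```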
#45
Fx
Hood: A few years ago they were bragging about how APUs would revolutionize the industry - and they did, by letting Intel get so far ahead, they don't even have to try anymore - basically grinding the industry to a halt as far as real innovations and improvements. Thanks AMD! You guys have a lot of nerve, spouting more nonsense about your crappy APUs, whose useless graphics core is too much for general use and not enough for gaming...
You sound like an uneducated fanboy. Useless APUs... really? You have some serious reading, comprehension, and contemplating to do.
#46
Prima.Vera
Seriously, I don't get AMD. They are a huge company, so really, can't they afford to hire 2 or 3 top design engineers to design a new top CPU that can compete with the latest i7 from Intel?? I mean, geez, even reverse engineer the stuff, or follow and try to improve Intel's design if they lack inspiration. The CPU design, architecture and even detailed charts and stuff are all over the internet.
Honestly, I don't get it...
mroofie: Reads AMD APU *
Stops reading *
Clicks on close tab*
That's a little childish I guess. The latest top i7 CPUs from Intel are also APUs.
#47
Franzen4Real
Aquinus: That depends? If it's the fault of them using poorly designed libraries that they wrote in the past, it could be a cost/time saving measure, definitely a bad one, but the company could have pushed them down that road. It could be the developer's fault, but that really depends on the timeline they had for doing the work they had to get done. Development doesn't always go the way you want it to. Sometimes that's the developer's fault and sometimes it isn't. It's hard to say without being inside the company and seeing what is going on but, one thing is certain, it's definitely EA (DICE)'s fault as a whole. :) I don't dispute that for a second.

I would put winrar/winzip and other compression utilities in the category of workloads that are more easily paralleled than others because of the nature of what they're doing. Once again, this comes down to the workload argument. Archival applications and games are two very different kinds of workloads, it's a lot easier to make something like LZMA2 to run in parallel than something like a game which is incredibly more stateful than something like an algorithm for compression or decompression. This isn't a matter of tools, you could have all the tools in the world but that won't change the nature of some applications and how they need to be implemented. OpenCL doesn't solve all programming issues and it doesn't mysteriously make things that couldn't be run in parallel to suddenly able to be. These tools you talk about enable already parallel applications to scale a lot better and across more compute cores than they did before, it doesn't solve the problem of having to make your workload thread-safe without being detrimental to performance in the first place.

You complain about me nitpicking, but you're pointing out things that require that level of analysis and detail because problems like these aren't as easy to solve as you make them out to be.

One question, have you ever tried to write some OpenCL code and running it on a GPU? Try doing something useful in it if you haven't and you'll understand real quickly why only applications that are mostly parallel code in the first place use OpenCL. I get the impression that you haven't so you shouldn't talk about something if you've never done it. I am, because I have... trust me, it's not intuitive, it's hard to use, and it's only helpful in very selective situations. I would never use it unless I was working with purely numerical data that was tens of gigabytes large or bigger and only if the algorithm I'm implementing is almost completely stateless (or functional if you will). Games (other than the rendering part, which GPUs are already doing,) hardly fit any of those criteria. It's not that developers aren't using OpenCL, it's that they can't or it doesn't make sense to in most real world applications in the consumer market.

I do enjoy listening to you try to say what developers are and are not doing right when you're not in their shoes. Even as a developer I wouldn't presume to think I knew more about another developer's project than they do without even seeing the code itself and having worked with it. So I find it both amusing and disturbing that you feel that you can voice you opinion in such an authoritative way when not even I would make those kinds of claims given my own experience in the subject as I'm a developer professionally and I'm even working on a library that uses multiple threads.

Tell me more about why you're right.
That was quite the amusing exchange lol! Gotta love the armchair developers. Anyways, if I may ask something somewhat on the topic of multi-threaded gaming.. I happen to not be a developer and I do not write code, so it was enlightening reading your thoughts on the whole workload dependencies for multi-threading. I too have wondered why gaming has taken a while to really embrace multi-core processors and your explanation helps to understand some of those reasons (though I can say I never thought it was because devs are fat and lazy). My question though, is with Mantle and DX12, it seems that we are looking for more and more ways to offload the CPU as much as possible, and to me that seems like it makes the need to try and heavily thread games (which as you say may not really be possible anyways) kind of irrelevant. It seems like the direction of making games is that the less and less the CPU is involved, the better. Certainly correct me if I'm wrong, but I don't understand why people like whats-his-name that was trying to argue programming with you are wanting more and more CPU utilization when, to me at least, it seems pretty clear that the road to more performance in games relies less on the CPU and more on offloading to the GPU. (Maybe they just want to justify their expensive purchase of many-core CPUs to play Battlefield? I dunno..)

I apologize, I didn't mean to completely ignore the topic of the article in the first place... But as a user of both AMD and Intel (actually an AMD user from Socket A up until my first Intel build at the release of Core 2) I certainly hope that they achieve these goals simply for the reason that any innovation from any team is always good for us. I can remember when AMD first mentioned Fusion, and adding the GPU to the CPU die... then lo and behold, here comes Intel taking that idea and running with it and beating AMD to market initially with a crappy solution (though AMD still has the better iGPU today) and look where we are now with integrated graphics. I for one do appreciate the strides made with iGPUs for systems such as my Surface Pro. (I would really like to see an AMD APU version of one!) Or when AMD put the memory controller on die with the Athlon 64, then here comes Intel with Nehalem doing the same thing. It really is too bad that they took such a step backwards with Bulldozer when they seemed to have good momentum going and hanging with Intel back in those days. I really hope that the day may come again when they are close and drive each other to really innovate. I've been an Intel user now since Core 2 and would love to feel like I have another option when building my desktops.... possibly by the time I'll be looking to replace my upcoming Haswell-E system?
#48
Aquinus
Resident Wat-man
Franzen4Real: That was quite the amusing exchange lol! Gotta love the armchair developers. Anyways, if I may ask something somewhat on the topic of multi threaded gaming.. I happen to not be a developer and I do not write code, so it was enlightening reading your thoughts on the whole workload dependencies for multi threading. I too have wondered why gaming has taken a while to really embrace multi core processors and your explanation helps to understand some of those reasons (though I can say I never thought it was because devs are fat and lazy). My question though, is with Mantle and DX12, it seems that we are looking for more and more ways to off load the CPU as much as possible, and to me that seems like it makes the need to try and heavily thread games (which as you say may not really be possible anyways) kind of irrelevant. It seems like direction of making games is that the less and less the CPU is involved, the better. Certainly correct me if I'm wrong, but I don't understand why people like whats-his-name that was trying to argue programming with you, are wanting more and more CPU utilization when, to me at least, it seems pretty clear that the road to more performance in games relies less on the CPU and off loading more to the GPU. (Maybe they just want to justify their expensive purchase of a many core CPU's to play Battlefield? I dunno..)

I apologize, I didn't mean to completely ignore the topic of the article in the first place... But as a user of both AMD and Intel (actually an AMD user from Socket A up until my first Intel build at the release of Core 2) I certainly hope that they achieve these goals simply for the reason that any innovation from any team is always good for us. I can remember when AMD first mentioned Fusion, and adding the GPU to the CPU die... then low and behold, here comes Intel taking that idea and running with it and beating AMD to market initially with a crappy solution (though AMD still has the better iGPU today) and look where we are now with integrated graphics. I for one do appreciate the strides made with iGPU's for systems such as my Surface Pro. (I would really like to see an AMD APU version of one!) Or when AMD puts the memory controller on die with the Athlon 64, then here comes Intel with Nehalem doing the same thing. It really is too bad that they took such a step backwards with Bulldozer when they seemed to have good momentum going and hanging with Intel back in those days. I really hope that the day may come again when they are close and drive each other to really innovate. I've been an Intel user now since Core 2 and would love to feel like I have another option when building my desktops.... possibly by the time ill be looking to replace my upcoming Haswell-E system?
No, those are the kinds of questions that people need to be asking. It's important to remember one basic thing: games are very complex, and Mantle and DX12 are only doing part of the task. More graphics and rendering related tasks are being offloaded to the GPU because that is where they belong. However this doesn't change anything for the game logic itself, and I'm sure if you've played Civilization 5, you'll have seen how, as the world gets bigger, the time it takes to end each turn gets longer and longer.

There are really maybe three situations that I feel are important for multi-threading:
A: When you know what you want and have everything you need to get it, but it's something that you don't need until later (a form of speculative execution).
B: When you have a task that needs to run multiple times on multiple items and doesn't produce side effects (e.g. graphics rendering or protein folding).
C: A task that occurs regularly (every x seconds, or x milliseconds) and requires very little coordination.

As soon as you have side effects or have tasks that rely on the output of several other tasks, the ability to make something usefully multi-threaded goes out the window. People might not realize it, but games are one of the most stateful kinds of applications you can have, and just "making it multi-threaded" doesn't solve problems. In fact, if the workload wasn't properly made to run in parallel, making an application multi-threaded can degrade performance, when the overhead is more costly than the speedup that's gained from it, or even make the code that's executing more confusing because of any locking or thread-coordination you may have to do.
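A small sketch of case A from the list above, using Python's concurrent.futures as a stand-in for whatever threading primitives a real engine would use (the level-loading helper is hypothetical): start work you know you will need, keep going, and only block when the result is actually required.

```python
# Case A: start side-effect-free work early, keep running the main loop, and
# block only when the result is actually needed. load_next_level is hypothetical.
import time
from concurrent.futures import ThreadPoolExecutor

def load_next_level(name):
    time.sleep(2.0)                     # stands in for disk I/O / decompression
    return f"level data for {name}"

with ThreadPoolExecutor(max_workers=1) as pool:
    pending = pool.submit(load_next_level, "level-02")   # kick it off early

    time.sleep(1.0)                     # ... the current level's game loop keeps running ...

    level = pending.result()            # block here, only when the data is required
    print(level)
```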

I currently develop with Clojure, which is a functional language on top of the JVM among other platforms which I don't typically use (except for ClojureScript which is interesting).
Wikipedia: Clojure (pronounced like "closure"[3]) is a dialect of the Lisp programming language created by Rich Hickey. Clojure is a general-purpose programming language with an emphasis on functional programming. It runs on the Java Virtual Machine, Common Language Runtime, and JavaScript engines. Like other Lisps, Clojure treats code as data and has a macro system.

Clojure's focus on programming with immutable values and explicit progression-of-time constructs are intended to facilitate the development of more robust programs, particularly multithreaded ones.
...and
Wikipedia: Hickey developed Clojure because he wanted a modern Lisp for functional programming, symbiotic with the established Java platform, and designed for concurrency.[5][6]

Clojure's approach to state is characterized by the concept of identities,[7] which represent it as a series of immutable states over time. Since states are immutable values, any number of workers can operate on them in parallel, and concurrency becomes a question of managing changes from one state to another. For this purpose, Clojure provides several mutable reference types, each having well-defined semantics for the transition between states.
To make a long story short, application state is what makes applications demand single-threaded performance and not managing it well is what reinforces that.
#49
m4gicfour
DEM DANG AMD TURK URR JERBS or something.
Steevo: Comparative analysis shows this is plausible given the current trend of 0, I for one approve this message.
I concur.

I, for one, would like one of these 0 new free cards. With a 25X improvement over the current 0 free cards, it should not be any issue for me to receive [RESULT UNDEFINED]
Steevo: **Edit** There is a lot of butthurt in this thread, over PR, nothing more. I am glad they have this goal.

How about we debate the power useage and how perhaps a embedded capacitor(s) that can provide the peak power required when firing up more cores or execute units could provide us with a 200Mhz CPU that clocks to 4Ghz instantly?

Decoupling capacitor built in anyone?
Your post is invalid. Minimum butthurt level not met. Ignoring.