
AMD to Unveil Next-Generation APUs on November 11

AMD needs to address the TDP of both their CPUs and GPUs. It's way too high, which is why they are losing to nVidia badly in the mobile market.

Got any recent data that shows that "they are losing to nVidia badly in the mobile market"?

Jon Peddie Research:
[attached chart: jpr-gpu-market-2q13.jpg]


(AMD)... APUs declined 9.6% from Q1 and increased an astounding 47.1% in notebooks. The company's overall PC graphics shipments increased 10.9%.
Nvidia's desktop discrete shipments were down 8.9% from last quarter, and the company's mobile discrete shipments decreased 7.1%.
 
AMD needs to address the TDP of both their CPUs and GPUs. It's way too high, which is why they are losing to nVidia badly in the mobile market.

I find it strange that members with little to say all year can be bothered to throw a little bait into AMD threads.
Cheers for the input though, Dwade. However, this being TPU, most readers here chucked efficiency right out the window on day one of their new build or old rebuild, when they turned EIST / Cool'n'Quiet and all the other eco features off, overclocked the snot out of it, and left it like that eternally, or until instability showed up, only to step it back a bit :confused:

MOOOOOAAAARR POWERSSSS, not less, pls :D
 
The funny thing is that, as bad as the FX-9590's reputation has been, Intel is no better when clocked to 5 GHz..

http://pctuning.tyden.cz/ilustrace3/obermaier/4770K/scaling_sandy.png
http://i.imgur.com/gzBNZwN.png

With Haswell things are not much better.. a +18% clock (3900 to 4600 MHz) costs over 60% more power, and that last 400 MHz to reach 5 GHz is going to cost another 50%+ in power consumption.
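The arithmetic behind those figures, as a quick Python sketch (the percentages are the ones above, not my own measurements, and the 5 GHz number is the extrapolation just described):

```python
# Rough overclocking power math from the figures above (illustrative only).
# Dynamic power scales roughly with f * V^2, and higher clocks also force
# higher voltage, which is why power grows much faster than frequency.

base_mhz, oc_mhz = 3900, 4600
clock_gain = oc_mhz / base_mhz - 1        # ~0.18, the "+18% clock"

power_4600 = 1.60                         # ">60% more power" vs 3900 MHz
power_5000 = power_4600 * 1.50            # "another 50%+" for the last 400 MHz

print(f"clock gain: {clock_gain:.0%}")                      # ~18%
print(f"implied power at 5 GHz: {power_5000:.1f}x stock")   # ~2.4x
```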

Fair enough.... gotta admit though, even if it was Intel or Nvidia in that picture, it still would've been hilarious :laugh: And anyone who buys that level of CPU doesn't care about power consumption and tends to water cool, so it's all good.
 
In the same boat.. X6 1100T @ 4.2 GHz, I see no reason to upgrade..

I can say the same thing with an Athlon II X4 @ 3.6 GHz.

AMD needs to address the TDP of both their CPUs and GPUs. It's way too high, which is why they are losing to nVidia badly in the mobile market.

Hence why AMD/ATI currently has a larger market share than Nvidia in the mobile market.
 
The funny thing is that, as bad as the FX-9590's reputation has been, Intel is no better when clocked to 5 GHz..

http://pctuning.tyden.cz/ilustrace3/obermaier/4770K/scaling_sandy.png
http://i.imgur.com/gzBNZwN.png

With Haswell things are not much better.. a +18% clock (3900 to 4600 MHz) costs over 60% more power, and that last 400 MHz to reach 5 GHz is going to cost another 50%+ in power consumption.

Not doubting the accuracy of this, but it's a little hard to take these charts seriously when they can't even spell what they're measuring.
 
I can say the same thing with an Athlon II X4 @ 3.6 GHz.


Didn't we (customers) want our CPUs to lastttt long? Funnily enough, some complain about AMD's sockets, too! :banghead:

I gotta say, my hat is off to AMD for long-lasting, good-enough CPUs! :respect:
 
I can say the same thing with an Athlon II X4 @ 3.6 GHz.


Didn't we (customers) want our CPUs to lastttt long? Funnily enough, some complain about AMD's sockets, too! :banghead:

I gotta say, my hat is off to AMD for long-lasting, good-enough CPUs! :respect:

100% agree. When AMD was churning out CPUs, people were saying "slow down, we just upgraded; damn corporate greed milking the consumer." Then AMD gives us hardware that lasts almost half a decade, and people complain that there's nothing new.

Not doubting the accuracy of this, but it's a little hard to take these charts seriously when they can't even spell what they're measuring.

Yeah, power "consuption". I think English is the author's second language.
 
100% agree. When AMD was churning out CPUs, people were saying "slow down, we just upgraded; damn corporate greed milking the consumer." Then AMD gives us hardware that lasts almost half a decade, and people complain that there's nothing new.



Yeah, power "consuption". I think English is the author's second language.

IIRC, the author is Czech/Slovak.. I assume that you communicate in Czechoslovakian as well as they do in English?!
 
IIRC, the author is Czech/Slovak.. I assume that you communicate in Czechoslovakian as well as they do in English?!

I assumed as much, but that is beside the point. If I were publishing data in Czechoslovakian and trying to be taken seriously, you bet your ass I'd make sure there weren't any spelling errors. Once again, I'm not saying the data is false or inaccurate, just that it looks a little unprofessional.

Anyways, back on topic. 832 GCN cores seem like a waste of space/power if they're just going to be held back by memory bandwidth anyway. I'm thinking it'll be between 384 and 512 cores.
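To put rough numbers on why more shaders would be wasted, here's a back-of-the-envelope sketch; the 720 MHz iGPU clock and DDR3-2133 speed grade are assumptions for illustration, not announced specs:

```python
# Back-of-the-envelope: shader throughput vs. memory bandwidth.
# The 720 MHz iGPU clock and DDR3-2133 grade are assumptions, not specs.

def gcn_gflops(shaders, mhz):
    return shaders * 2 * mhz / 1000        # 2 FLOPs/clock per shader (FMA)

def ddr3_gbps(bus_bits, mt_per_s):
    return bus_bits / 8 * mt_per_s / 1000  # bytes/transfer * transfers/s

for sp in (384, 512, 832):
    print(f"{sp} SPs: {gcn_gflops(sp, 720):.0f} GFLOPS peak")

print(f"128-bit DDR3-2133: {ddr3_gbps(128, 2133):.1f} GB/s")
# 832 SPs would have over 2x the compute of 384, all fighting over ~34 GB/s.
```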
 
IIRC, the author is Czech/Slovak.. I assume that you communicate in Czechoslovakian as well as they do in English?!

Nope. But I wasn't criticising the author's spelling; Ralfies was. I was just pointing out the mistake so everyone knew what Ralfies was talking about.
 
I'm not expecting much of anything regarding hardware announcements, even during the press conference. All the seminars involve software, and I expect that will be the theme of the conference.

Anyways, back on topic. 832 GCN cores seem like a waste of space/power if they're just going to be held back by memory bandwidth anyway. I'm thinking it'll be between 384 and 512 cores.

That bandwidth constraint is the issue. I don't understand why mobile processors haven't gotten wider memory buses to compensate. I can understand socketed desktop CPUs needing too many pins to support a wider memory bus, and DIMM placement is also an issue with a wide bus. But modern notebooks use BGA CPUs and soldered-down memory. Theoretically a 256-bit DDR3 bus shouldn't require much more space in a laptop than a 128-bit bus, and the only increase in cost might be a PCB with a few more layers. In exchange, graphics performance would scale immensely. Microsoft and Sony can do it for the APUs in their consoles, and AMD's graphics division does it all the time for its GPUs, so I don't see why it isn't done for mass-market APUs.
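The peak-bandwidth math behind that point, as a minimal sketch (DDR3-2133 is just an example speed grade):

```python
# Peak bandwidth = (bus width in bytes) * (transfer rate).
def peak_gbps(bus_bits, mt_per_s):
    return bus_bits / 8 * mt_per_s / 1000   # GB/s

print(f"128-bit DDR3-2133: {peak_gbps(128, 2133):.1f} GB/s")  # today's dual channel
print(f"256-bit DDR3-2133: {peak_gbps(256, 2133):.1f} GB/s")  # the proposed wider bus
```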
 
That bandwidth constraint is the issue. I don't understand why mobile processors haven't gotten wider memory buses to compensate. I can understand socketed desktop CPUs needing too many pins to support a wider memory bus, and DIMM placement is also an issue with a wide bus. But modern notebooks use BGA CPUs and soldered-down memory. Theoretically a 256-bit DDR3 bus shouldn't require much more space in a laptop than a 128-bit bus, and the only increase in cost might be a PCB with a few more layers. In exchange, graphics performance would scale immensely. Microsoft and Sony can do it for the APUs in their consoles, and AMD's graphics division does it all the time for its GPUs, so I don't see why it isn't done for mass-market APUs.

You're asking the questions that have boggled my mind since the conception of the APU, and it's certainly what I find the most interesting challenge moving forward.

There are many conceivable answers, a wider bus among them (256-bit DDR3 would be sufficient for a ~512sp design), although that's perhaps less probable as we move to DDR4 with its 1-DIMM-per-channel restriction and to larger, more demanding iGPUs that will quickly outpace a 128-bit DDR4 bus. Certainly there is BGA, but I wonder if AMD is really willing to take that leap with their larger designs (as a consumer platform, i.e. not the PS4 or iterations of Bobcat).

HyperTransport, if not a discrete (or optional) GDDR5 bus to a GPU cache (a la what used to be called Sideport memory in the discrete-IGP days), seemed like a realistic option even up to this generation. While only 32-bit, a link's maximum bandwidth rests somewhere near what GDDR5 is capable of on AMD's current GPU memory controllers, which meshes fairly nicely: around half of what a 32/28nm iGPU would need, and twice what a 128-bit DDR3 bus could deliver. That would have more or less made sense. Obviously, moving past this generation it would be less so, unless it were itself coupled with a DDR4 bus (i.e. DDR4 + GDDRx).
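For reference, the rough link math (6.4 GT/s is HT 3.1's spec rate; DDR3-1600 is my assumed comparison grade):

```python
# A 32-bit HyperTransport 3.1 link vs. a 128-bit DDR3 bus, peak figures.
ht_gtps, ht_link_bits = 6.4, 32           # HT 3.1: 6.4 GT/s per direction
ht_one_way = ht_link_bits / 8 * ht_gtps   # 25.6 GB/s each way
ht_aggregate = ht_one_way * 2             # 51.2 GB/s, full duplex

ddr3_128bit = 128 / 8 * 1.600             # 25.6 GB/s at DDR3-1600
print(f"HT 3.1 aggregate: {ht_aggregate:.1f} GB/s")
print(f"128-bit DDR3-1600: {ddr3_128bit:.1f} GB/s")   # HT ~2x, as claimed
```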

From there, we have the possibility of larger/faster caches (like the X1's on-die RAM) offsetting what is needed externally. There is also the possibility of on-package, off-die caches (not unlike Intel's Iris Pro) as well as stacked DRAM, like Volta's.

Whatever their solution, they need to do it yesterday. Their strength is (and has always been) the floating-point computation per mm² (per process/cost) that their designs deliver. While HSA capitalizes on this fact, as it should, with each passing node they lose that (realistic) advantage to Intel, who can ramp clocks higher until they reach parity in design (and then clock lower to save power), even though their priority lies in improving their CPU cores. With each passing GPU generation Nvidia grows closer to parity, as they are clearly moving from thinking of their designs purely as efficient GPUs toward treating them as, more or less, floating-point cores (which make sense as such with or without the shell of a CPU around them). The scary thing about all that is that Intel and Nvidia, currently the least dependent on memory bandwidth, have shown their plans going forward. AMD, which is already restricted on all fronts by this reality, has not (outside the PS4).

I find that sincerely troubling. No doubt they have an answer...I just hope it comes sooner rather than later.
 
You're asking the questions that have boggled my mind since the conception of the APU, and it's certainly what I find the most interesting challenge moving forward.

There are many conceivable answers, a wider bus among them (256-bit DDR3 would be sufficient for a ~512sp design), although that's perhaps less probable as we move to DDR4 with its 1-DIMM-per-channel restriction and to larger, more demanding iGPUs that will quickly outpace a 128-bit DDR4 bus. Certainly there is BGA, but I wonder if AMD is really willing to take that leap with their larger designs (as a consumer platform, i.e. not the PS4 or iterations of Bobcat).

HyperTransport, if not a discrete (or optional) GDDR5 bus to a GPU cache (a la what used to be called Sideport memory in the discrete-IGP days), seemed like a realistic option even up to this generation. While only 32-bit, a link's maximum bandwidth rests somewhere near what GDDR5 is capable of on AMD's current GPU memory controllers, which meshes fairly nicely: around half of what a 32/28nm iGPU would need, and twice what a 128-bit DDR3 bus could deliver. That would have more or less made sense. Obviously, moving past this generation it would be less so, unless it were itself coupled with a DDR4 bus (i.e. DDR4 + GDDRx).

From there, we have the possibility of larger/faster caches (like the X1's on-die RAM) offsetting what is needed externally. There is also the possibility of on-package, off-die caches (not unlike Intel's Iris Pro) as well as stacked DRAM, like Volta's.

Whatever their solution, they need to do it yesterday. Their strength is (and has always been) the floating-point computation per mm² (per process/cost) that their designs deliver. While HSA capitalizes on this fact, as it should, with each passing node they lose that (realistic) advantage to Intel, who can ramp clocks higher until they reach parity in design (and then clock lower to save power), even though their priority lies in improving their CPU cores. With each passing GPU generation Nvidia grows closer to parity, as they are clearly moving from thinking of their designs purely as efficient GPUs toward treating them as, more or less, floating-point cores (which make sense as such with or without the shell of a CPU around them). The scary thing about all that is that Intel and Nvidia, currently the least dependent on memory bandwidth, have shown their plans going forward. AMD, which is already restricted on all fronts by this reality, has not (outside the PS4).

I find that sincerely troubling. No doubt they have an answer...I just hope it comes sooner rather than later.

From what I've read about AMD's goals, they don't want a heterogeneous memory pool like the Xbox One's, where different memory addresses have different bandwidths and latencies. AMD is pushing to have all memory addresses be the same speed and latency, to avoid the need for software to shuffle data among different addresses to optimize bandwidth, sort of like what is done with a discrete GPU today. This doesn't preclude a hardware-managed algorithm handling more levels of cache (like what Intel does with Crystalwell), but AMD wants it to be transparent to the developer.

As far as DDR4 goes, the doubled bandwidth will stave off the bandwidth limitation for a while, but even without the need for more bandwidth, the 1-DIMM-per-channel limitation will encourage wider memory buses. People who want lots of memory on desktop or mobile will now need double the memory channels to reach the same capacity with DDR4 as with DDR3. The server market already moved in this direction with DDR3; the reason for the migration to 256-bit buses was more the sheer memory capacity of that many channels than the increased bandwidth.
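A quick sketch of that capacity arithmetic, assuming 8 GB DIMMs and a typical 2-DIMM-per-channel DDR3 board for the example:

```python
import math

# Channels needed to reach a target capacity at 1 vs. 2 DIMMs per channel.
def channels_needed(target_gb, dimm_gb, dimms_per_channel):
    return math.ceil(target_gb / (dimm_gb * dimms_per_channel))

print(channels_needed(32, 8, 2))  # DDR3, 2 DIMMs/channel -> 2 channels
print(channels_needed(32, 8, 1))  # DDR4, 1 DIMM/channel  -> 4 channels
```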
 
Again people with the same concerns and mentality... *sigh*

Let me put it simply... HSA > pure iGPU for games and crap.

HSA is meant as a revolution in x86... and possibly the only thing that can save it from a slow and painful death by ARM.

Seriously, while the iGPU part should be beastly, even with the new IMC and faster DDR3 support it'll still come up short of its potential... the great iGPU is far from the (only) point of Kaveri...

And I'm sure that, on paper at least, adding an extra 192-256 ALUs makes much more performance sense to AMD than adding 2 extra cores.
 
From what I've read about AMD's goals, they don't want a heterogeneous memory pool like the Xbox One's, where different memory addresses have different bandwidths and latencies. AMD is pushing to have all memory addresses be the same speed and latency, to avoid the need for software to shuffle data among different addresses to optimize bandwidth, sort of like what is done with a discrete GPU today. This doesn't preclude a hardware-managed algorithm handling more levels of cache (like what Intel does with Crystalwell), but AMD wants it to be transparent to the developer.

As far as DDR4 goes, the doubled bandwidth will stave off the bandwidth limitation for a while, but even without the need for more bandwidth, the 1-DIMM-per-channel limitation will encourage wider memory buses. People who want lots of memory on desktop or mobile will now need double the memory channels to reach the same capacity with DDR4 as with DDR3. The server market already moved in this direction with DDR3; the reason for the migration to 256-bit buses was more the sheer memory capacity of that many channels than the increased bandwidth.

No, they DO want it. It improves their performance in all facets.

http://arstechnica.com/information-...orm-memory-access-coming-this-year-in-kaveri/


Instead of having software decide where a process runs, the hardware decides in real time which is more efficient and runs it there. The addresses are the same, so there's no latency penalty for moving data around. That means hugely improved performance on DSP and other filtered data, while serial work still runs on the CPU cores.
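Not AMD's actual scheduler, obviously, but here's a toy sketch of the idea: with a unified address space, dispatch is just picking an executor over the same buffer, with nothing copied either way (the heuristic and names are hypothetical):

```python
# Toy model of hUMA-style dispatch (names and heuristic are hypothetical).
buffer = list(range(1_000_000))   # one allocation, visible to CPU and GPU

def dispatch(task_kind, data):
    # Data-parallel work (e.g. a DSP filter pass) suits the GPU's wide SIMD;
    # branchy/serial work stays on the CPU cores. Either way, the worker
    # sees the same address: nothing is staged or copied across a bus.
    executor = "GPU" if task_kind == "data_parallel" else "CPU"
    print(f"{executor} runs on buffer @ {id(data):#x} (zero-copy)")

dispatch("data_parallel", buffer)  # filter/transform -> GPU
dispatch("serial", buffer)         # pointer chasing  -> CPU
```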
 
Apparently AMD has indirectly confirmed the naming scheme for desktop Kaveri.

[attached image: A10-6790K-chipsets.jpg]


So, as I suspected: Ax-7x00x, like A10-7800K for the next top-tier model.

Edit: It also confirms the existence of the next Athlon CPUs... like an Athlon II X4 770K or 850K, I guess.
 
No, they DO want it. It improves their performance in all facets.

http://arstechnica.com/information-...orm-memory-access-coming-this-year-in-kaveri/

Instead of having software decide where a process runs, the hardware decides in real time which is more efficient and runs it there. The addresses are the same, so there's no latency penalty for moving data around. That means hugely improved performance on DSP and other filtered data, while serial work still runs on the CPU cores.

I don't understand how that article refutes what I said; I think you agree with me but misunderstood me. I wasn't referring to dedicated memory for the CPU and GPU, which is obviously going away. I was referring to AMD not wanting something NUMA-like, where different memory addresses have different bandwidths and latencies.

When programming for the Xbox One, programmers have to write their code so that the most latency- and bandwidth-sensitive data goes to the small SRAM while the rest is written to the larger but slower main memory. AMD doesn't want developers worrying about swapping data between the SRAM and main memory, so they want a unified memory architecture like the PS4's.
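As a toy illustration of that burden (the sizes and function are hypothetical, not the actual Xbox One API):

```python
# Developer-managed placement of the kind the PS4's unified pool avoids.
ESRAM_BYTES = 32 * 2**20      # the X1's small, fast on-die pool
esram_used = 0

def place(size_bytes, latency_sensitive):
    """Pick a pool for a buffer; the programmer must budget ESRAM by hand."""
    global esram_used
    if latency_sensitive and esram_used + size_bytes <= ESRAM_BYTES:
        esram_used += size_bytes
        return "ESRAM"        # fast, but only 32 MB to go around
    return "DDR3"             # large but slower main memory

print(place(16 * 2**20, latency_sensitive=True))   # render target -> ESRAM
print(place(64 * 2**20, latency_sensitive=True))   # too big -> spills to DDR3
```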

This is why I don't see something like what alwayssts described occurring, where there is a small, high-speed, on-chip cache managed by software. The whole point of AMD's heterogeneous computing initiative is to make it as easy as possible for programmers to use heterogeneous computing. If there is to be a large SRAM cache at all, AMD wants something more like Crystalwell, where the cache is managed by hardware and is transparent to the developer.
 
I don't understand how that article refutes what I said; I think you agree with me but misunderstood me. I wasn't referring to dedicated memory for the CPU and GPU, which is obviously going away. I was referring to AMD not wanting something NUMA-like, where different memory addresses have different bandwidths and latencies.

When programming for the Xbox One, programmers have to write their code so that the most latency- and bandwidth-sensitive data goes to the small SRAM while the rest is written to the larger but slower main memory. AMD doesn't want developers worrying about swapping data between the SRAM and main memory, so they want a unified memory architecture like the PS4's.

This is why I don't see something like what alwayssts described occurring, where there is a small, high-speed, on-chip cache managed by software. The whole point of AMD's heterogeneous computing initiative is to make it as easy as possible for programmers to use heterogeneous computing. If there is to be a large SRAM cache at all, AMD wants something more like Crystalwell, where the cache is managed by hardware and is transparent to the developer.

That's exactly it, and exactly where I think they are all going: stacked chips with multi-layered memory. And in centralizing the memory resource, it only makes sense to up the bandwidth of each route to it and remove some of the intermediary caches to win back some latency.
I'm thinking quad-module for AMD, but per layer: 16 logic cores from 8, with 4 DDR4 IMCs per layer x2, tied across an 8-channel DDR4 interface to 8 GB of TSV-connected DRAM. Drop the system RAM too at that point, and the year is... likely 2015 :cool:
 