
AMD to Drag Socket FM2+ On Till 2016

Discussion in 'News' started by btarunr, Jul 25, 2014.

  1. btarunr

    btarunr Editor & Senior Moderator Staff Member

    Joined:
    Oct 9, 2007
    Messages:
    28,847 (11.08/day)
    Thanks Received:
    13,714
    Location:
    Hyderabad, India
    AMD's desktop processor and APU platforms are not expected to see any major changes till 2016, according to a BitsnChips report. The delay is attributed to a number of factors, including DDR4 memory proliferation (i.e., waiting for DDR4 memory to become affordable enough for the target consumers of APUs), and AMD's so-called "project Fast-Forward," which aims to place high-bandwidth memory next to the APU die so that AMD's increasingly powerful integrated graphics solutions can overcome memory bottlenecks.

    The company's upcoming "Carrizo" APU is targeted at low-TDP devices such as ultra-slim notebooks and tablets, and is the first chip to integrate AMD's next-generation "Excavator" CPU micro-architecture. "Carrizo" chips continue to use DDR3 system memory, so it's possible that AMD will design a socket FM2+ chip based on "Excavator," probably leveraging newer silicon fab processes. But otherwise, socket FM2+ is here to stay.


    Source: BitsnChips, Image Courtesy: VR-Zone
     
  2. Mathragh

    Mathragh

    Joined:
    Dec 3, 2009
    Messages:
    1,103 (0.61/day)
    Thanks Received:
    305
    Location:
    The Netherlands
    I guess they want to wait for their new CPU arch before switching to a totally new socket.

    Looks like I'll need to wait till at least 2016 to replace my system if I want to stay with AMD!
     
  3. newtekie1

    newtekie1 Semi-Retired Folder

    Joined:
    Nov 22, 2005
    Messages:
    20,110 (6.12/day)
    Thanks Received:
    6,168
    Really, as long as Excavator will run in FM2+, I don't see a need for a new socket until DDR4 is mainstream.
     
    Chevalr1c says thanks.
    Crunching for Team TPU 50 Million points folded for TPU
  4. NC37

    NC37

    Joined:
    Oct 30, 2008
    Messages:
    1,203 (0.54/day)
    Thanks Received:
    268
    Not like Intel is doing anything special. They've totally stagnated, like I figured they would. No need to press performance when you've got no real competitor. About the only positive right now is that Intel's graphics are finally doing something, which AMD can easily counter with their own. Guess it's just a calm period before AMD gets its act together. It's an interesting guerrilla tactic to watch: distancing itself from "CPU" to embrace "APU," and getting the consumer to do the same, because Intel can't beat AMD on graphics. But when it comes down to it, it is still a CPU. So AMD is just biding time in the lower-end market till they can come out and say they're back in competition with Intel.

    I wonder if by then they'll drop the APU tag. Or maybe they'll pick up a new tagline. lol, they could go with TPU (Total Processing Unit); then this site could reap some benefits... or get sued for rights to the tag... heh.
     
    Lionheart and Chevalr1c say thanks.
  5. john_

    john_

    Joined:
    Sep 6, 2013
    Messages:
    264 (0.59/day)
    Thanks Received:
    69
    Location:
    Athens, Greece
    For 2015 they only need two things to be able to say they're giving their customers an upgrade path that justifies itself.

    In FM2+: Carrizo with Excavator and HBM. Without HBM, Excavator would have to perform much, much better on the CPU side to be considered an upgrade. Personally I don't see it happening. I don't see why the fourth version of the module architecture should be a bigger step than the last two (Bulldozer-->Piledriver, Piledriver-->Steamroller).
    Of course there is the possibility of new FX processors for FM2+ with more than 2 modules. But that could also mean new motherboards, because while 3 modules and a few stream processors could be a possibility within a 100 W ceiling, 4 or .... 6 modules, I think, are a "no go" with only a 100 W limit.

    [​IMG]

    On the AM1 platform: Beema models that will also be compatible with existing Kabini boards. 25 W is more than enough for a 2.8-3 GHz Beema quad core.

    I don't expect anything in AM3+ unfortunately.

    PS: That guy who thought that Bulldozer, or should I say AMD's version of the Pentium 4, was a good idea... well, I hope he/she works at a McDonald's today serving people hot potatoes. He/she knows much about hot potatoes.
     
  6. Shambles1980

    Joined:
    May 3, 2014
    Messages:
    540 (2.65/day)
    Thanks Received:
    102
    Bulldozer would have been a lot better if they hadn't cut corners on the manufacturing process, and if they didn't share the floating-point unit. It's a real shame too; it could have been something so much more than it was.
    As for no update to the socket till 2016, I don't really see that as an issue. Socket LGA 775 for Intel was one of the best, and that sucker went on forever (Pentium 4, Pentium D, Core 2 Duo, Core 2 Quad, and with a slight mod, Xeons).
    A longer lifespan for a socket isn't always a bad thing, provided you improve the components that go in and around it.
     
  7. john_

    john_

    Joined:
    Sep 6, 2013
    Messages:
    264 (0.59/day)
    Thanks Received:
    69
    Location:
    Athens, Greece
    The whole idea of the module architecture was to cut corners, but to cut only as many as they could while still being able to advertise a module as a full dual core.
     
  8. Aquinus

    Aquinus Resident Wat-man

    Joined:
    Jan 28, 2012
    Messages:
    6,665 (6.47/day)
    Thanks Received:
    2,324
    Location:
    Concord, NH
    No, it was to save die space so you can fit more compute cores in the same area. They removed hardware that wasn't needed and added more where it was needed (eventually). A module is more of a dual core than you think, because there are actually two full integer cores that run in parallel, unlike HyperThreading, which re-uses components that aren't busy to get some extra work done. The shared components are things like the opcode decoders, the cache, and a wide FPU (256-bit vs. 128-bit); using instructions like XOP and FMA, that single wide FPU can run as two individual 128-bit FPUs. It's not perfect, but it made one thing very clear: CPUs should be doing mostly integer math and some floating-point math, and if you need to do a ton of floating-point calculations, you should be doing it on a GPU/GPGPU setup. It's no different from NVIDIA gimping its double-precision performance to improve single precision, because that is what games typically use.

    So no, they didn't do it to "cut corners"; that's just how you feel about it, which is different from why they did it. They did it to save die space so they could cram more cores onto a single CPU.
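    A toy back-of-envelope model makes the trade-off concrete. This is purely illustrative (made-up issue rules, not AMD's actual scheduler): per cycle, the shared 256-bit FPU retires either one 256-bit op or up to two independent 128-bit ops, one from each integer core.

```python
def fpu_cycles(widths):
    """Cycles for a module's shared FPU to drain a list of FP ops.

    widths: op widths in bits, each 128 or 256. Per cycle the FPU
    retires one 256-bit op or up to two 128-bit ops (toy model).
    """
    n256 = widths.count(256)
    n128 = widths.count(128)
    # 256-bit ops occupy the whole FPU; 128-bit ops can pair up.
    return n256 + (n128 + 1) // 2

# Two 128-bit ops, one from each core, co-issue in a single cycle:
print(fpu_cycles([128, 128]))  # 1
# A 256-bit op blocks the whole FPU, so a 128-bit op must wait:
print(fpu_cycles([256, 128]))  # 2
```

    In this model a module really does behave like two FP cores for 128-bit work and like one for 256-bit work, which is the trade-off being argued over here.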
     
    eidairaman1 says thanks.
  9. Sempron Guy

    Sempron Guy

    Joined:
    Feb 2, 2011
    Messages:
    266 (0.19/day)
    Thanks Received:
    81
    Compute cores to be specific :) The module architecture was designed with APUs in mind.
     
    Aquinus says thanks.
  10. Aquinus

    Aquinus Resident Wat-man

    Joined:
    Jan 28, 2012
    Messages:
    6,665 (6.47/day)
    Thanks Received:
    2,324
    Location:
    Concord, NH
    It's part of their heterogeneous computing goal, as was HSA with more recent APUs. It's all about bridging the gap between strictly serial workloads and strictly parallel workloads on a single IC. While I think this has always been an ambitious goal, reducing power consumption could help them more than they know. I would be all for a CPU where the CPU and GPU cores used shared components; it might not be the fastest or the most efficient, but it would be the most flexible. I think that's what AMD's long-term goal is.
     
    eidairaman1 says thanks.
  11. Shambles1980

    Joined:
    May 3, 2014
    Messages:
    540 (2.65/day)
    Thanks Received:
    102
    The corner-cutting was done during the manufacturing process. The single FPU is an issue in my eyes: if you have a 256-bit instruction, it effectively forces the module to be a single core, and it can only function as two cores for 128-bit work. And with the way things are scheduled, a 256-bit instruction can go to core 1, then a 128-bit one gets sent to core 2 and has to sit and wait because the FPU is in full use. The scheduler should really move it to an unused core/module, and non-floating-point operations should really be moved to an unused core or one that isn't doing any floating-point work. But that doesn't happen, and AMD should have understood that this would be a big factor in performance when they chose to go with a single split FPU.

    But having one FPU is not corner-cutting; that's just a design flaw, IMO. The cost-cutting was due to simply not doing things by hand that should have been done by hand. That cost a lot of extra performance for some money savings.

    It's really annoying to me that they chose the path they did, as it could have been so much better.
     
  12. john_

    john_

    Joined:
    Sep 6, 2013
    Messages:
    264 (0.59/day)
    Thanks Received:
    69
    Location:
    Athens, Greece
    As I said, that was the whole idea of the design: to cut corners.

    FPUs were not needed???

    As I also said, a module is only as much hardware as is necessary for AMD to advertise it as a full dual core without the fear of lawsuits dropping like bombs on their headquarters for misleading their customers.

    If the integer units were much faster, if the 6 FPUs in the Phenom II X6 were not running circles around the 4 in the first 8-core Bulldozer chips in most cases, or if there were stream processors in the FX chips in the first place to take advantage of GPGPU and we also had plenty of software for GPGPU, I could agree with you. But we have a ton of "ifs" years after the first Bulldozer, and of course this isn't the same case as with Nvidia, because Nvidia's cards are top performers. So I can't agree with you.

    It is not a feeling. It is reality. They couldn't follow Intel in thread count (Intel had an unfair advantage there with HyperThreading), and they couldn't follow Intel in the manufacturing process, so they had to do something. And that something was to throw half the FPUs out and start counting integer units when advertising the chips. Now they have started talking about compute cores so they can advertise 4, 8, or 12 cores (I hope the truck I posted doesn't transfer compute cores but integer cores; very optimistic, but let's just hope).

    You want to justify a design that failed miserably and brought AMD to its knees. I can't stop you. I can only tell you that for the Jaguar design, where space is much more limited and power consumption much more important, they didn't choose the module design. Even considering that Kabinis, for example, do have stream processors in them for GPGPU, they still paired each integer unit with a full FPU. That should tell you something.
     
    Last edited: Jul 25, 2014
  13. Aquinus

    Aquinus Resident Wat-man

    Joined:
    Jan 28, 2012
    Messages:
    6,665 (6.47/day)
    Thanks Received:
    2,324
    Location:
    Concord, NH
    Cutting corners would imply that they skimped to save on cost, which they didn't. AMD's chips are plenty fast; the problem is power consumption. If your cores make too much heat, you can't add more or make them run faster. You're complaining about the wrong stuff.

    Integer work is what CPUs are doing most of the time, since memory addresses and strings are represented as integers. More often than not, 4 FPUs will be more than enough for typical floating-point use. Also, you're misunderstanding me if you think I'm saying the CPU doesn't need any FPUs. If you're running an application that has more than 4 FPU-intensive threads, you really should be considering GPGPU; but most of the time FPU instructions are spread throughout code and not all bunched up, so despite there being only 1 FPU per module, it doesn't matter that it's shared: a thread will just use whatever is free. You run out of FP performance only in unusual situations with FX chips, which are typically encountered in benchmarks and less in real-world applications.

    Loss in performance is much more likely to be caused by the long pipeline that FX CPUs have because of the module design. Failing to predict a branch properly causes a pipeline stall: the pipeline has to be flushed and the next instruction has to go all the way through it again, which is a much worse performance hit than fewer FPUs. That was one of the biggest flaws of the first version of Bulldozer and has been improved with every revision since; same deal with cache hit/miss ratios.
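
    The flush penalty can be put into a rough cost model (all numbers below are illustrative guesses, not measured FX figures):

```python
def avg_cpi(pipeline_depth, branch_fraction, mispredict_rate, base_cpi=1.0):
    """Average cycles per instruction under a simple flush model:
    every mispredicted branch wastes about pipeline_depth cycles.
    All parameters are hypothetical, for illustration only."""
    return base_cpi + branch_fraction * mispredict_rate * pipeline_depth

# With 20% branches and a 5% miss rate, a 20-stage pipeline loses
# 0.20 cycles/instruction to flushes; a 12-stage pipeline only 0.12.
print(round(avg_cpi(20, 0.20, 0.05), 2))  # 1.2
print(round(avg_cpi(12, 0.20, 0.05), 2))  # 1.12
```

    The point of the model: the deeper the pipeline, the more each misprediction costs, independent of how many FPUs you have.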

    Kabini is a different animal because it doesn't use modules, or even the Phenom II architecture for that matter. The pipeline is much shorter (shorter than Phenom II's, in fact) and is designed for low-power use cases, not performance. The cost of a shorter pipeline is that (initially at least) it can hinder clock speeds until the components on the pipeline are optimized, as Intel has done over the last 8 years with the Core architecture.

    I'm not saying that what AMD did was a good idea. I'm saying it was ambitious and probably more suitable for businesses than for the typical consumer. It was too early to do this, and they suffered because of it. However, the claims you're making are false: the things you don't like about FX aren't what hinders it. The shared FPU was probably one of the best decisions they made with the architecture. The worst was the length of the pipeline; it's the single biggest reason why AMD can't get as much done per clock cycle as Intel.

    Also, HyperThreading typically gives you a maximum improvement of 30%, and as little as nothing, depending on the workload, whereas AMD's modules scale almost linearly in comparison, as real cores do. So Intel might have better single-threaded performance, but AMD CPUs scale better per core and start showing their colors in multi-threaded workloads.

    Also, AMD's and Intel's philosophies with modules and HT are very similar: AMD adds components to run more stuff in parallel, where Intel just uses what isn't already in use to gain more performance. As a result, HT performance depends highly on the current CPU load and on which parts of the CPU aren't busy, whereas with a module you know you'll get roughly the same performance per integer compute core, instead of being highly dependent on what's already running.
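
    A back-of-envelope comparison, using the ~30% SMT figure above and assuming a perfectly parallel workload (so these are upper bounds, not benchmarks):

```python
def amdahl_speedup(threads, parallel_fraction):
    """Amdahl's law: ideal speedup when only part of the work is parallel."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / threads)

def smt_speedup(physical_cores, smt_gain=0.30):
    """Crude SMT model: each sibling thread adds at most ~30% of a core."""
    return physical_cores * (1.0 + smt_gain)

# Eight real integer cores (four modules) vs. four cores + HyperThreading
# on fully parallel work:
print(amdahl_speedup(8, 1.0))  # 8.0
print(smt_speedup(4))          # 5.2
```

    As soon as the workload is less than fully parallel, Amdahl's law pulls both numbers down, which is why single-threaded performance still matters.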

    I did some testing a while back on how much HT and extra cores impact 7-Zip performance and came up with this and this. You're overestimating the ability of HT.
     
    eidairaman1 and digibucc say thanks.
  14. Thefumigator

    Thefumigator

    Joined:
    Jun 11, 2008
    Messages:
    417 (0.18/day)
    Thanks Received:
    66
    I'm writing an application to do just that: it stresses and benchmarks 1 CPU, then 2, then 3, and so on, then measures the impact on performance. I don't have proper results yet, but when the app is finished I will post some.
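
    A minimal sketch of that kind of benchmark, with a placeholder CPU-bound workload (this is not the actual app described above):

```python
import time
from multiprocessing import Pool

def burn(n):
    """CPU-bound busywork: sum of squares up to n."""
    return sum(i * i for i in range(n))

def bench(workers, jobs=8, size=200_000):
    """Time the same batch of jobs on a given number of worker processes."""
    start = time.perf_counter()
    with Pool(workers) as pool:
        pool.map(burn, [size] * jobs)
    return time.perf_counter() - start

if __name__ == "__main__":
    base = bench(1)
    for w in (2, 3, 4):
        print(f"{w} workers: {base / bench(w):.2f}x speedup")
```

    For real measurements you'd also want to pin affinity or disable cores/HT in the BIOS, since the OS scheduler is otherwise free to move work around between runs.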
     
  15. Aquinus

    Aquinus Resident Wat-man

    Joined:
    Jan 28, 2012
    Messages:
    6,665 (6.47/day)
    Thanks Received:
    2,324
    Location:
    Concord, NH
    No, no, no. I actually disabled HT and cores when I did each of those benchmarks, so CPU-Z and the OS only saw that many threads. You can't do that without restarting the machine and changing the BIOS. That wasn't just testing with one thread, or two, or three by changing CPU affinity, which doesn't give you an accurate picture.

    Also, what kind of workload are you using to measure performance, and in what language?
     
  16. Jizzler

    Jizzler

    Joined:
    Aug 10, 2007
    Messages:
    3,454 (1.30/day)
    Thanks Received:
    645
    Location:
    Geneva, FL, USA
    No problem keeping it awhile longer, but maybe they could just give us dual-socket boards; that's where we'll get more modules as well as CrossFired APUs ;)
     
  17. GhostRyder

    GhostRyder

    Joined:
    Apr 29, 2014
    Messages:
    1,349 (6.49/day)
    Thanks Received:
    514
    Location:
    Texas
    Well, AMD is known for just updating existing motherboards and chipsets, so this does not surprise me. They can keep expanding the FM2+ socket platform for a while and add features to extend it as much as they see fit. I am surprised they are sticking with the FM2+ socket for another 2 years, but it's not the end of the world. I would be more interested in whether they decide to do something like move to an AM4 socket and restart that platform with the Excavator chips.

    But I guess we will just have to wait and see what's behind door number 2, lol.

    Personally, I think they should start looking to make DDR4 standard ASAP, because it will benefit APUs so much to have it (though I suppose they could also just start integrating high-performance DDR3 memory controllers).
     
    Last edited: Jul 25, 2014
  18. TheMailMan78

    TheMailMan78 Big Member

    Joined:
    Jun 3, 2007
    Messages:
    21,175 (7.75/day)
    Thanks Received:
    7,706
    I get what you are saying, and the market doesn't really demand much more than four-generation-old CPUs right now, BUT... I think AMD should start being a little more proactive in the desktop/server area rather than reactive to Intel's advancements.

    Granted, we have reached a plateau with desktops in terms of performance demands, but the server market is hungry for more speed, with all the cloud infrastructure going into industries. I think it's a tad short-sighted of AMD not to adopt DDR4 earlier rather than later, at least in the server market.
     
    Solaris17 and GhostRyder say thanks.
  19. Shambles1980

    Joined:
    May 3, 2014
    Messages:
    540 (2.65/day)
    Thanks Received:
    102
    I don't see why FM2+ boards couldn't use DDR4 with some updated hardware on the motherboard.
    LGA 775 managed to span DDR, DDR2, and DDR3. Obviously it would be per-board specific, but I don't see how the socket type is relevant to what memory can be used.
     
  20. RCoon

    RCoon Gaming Moderator Staff Member

    Joined:
    Apr 19, 2012
    Messages:
    7,734 (8.16/day)
    Thanks Received:
    3,846
    Location:
    Gypsyland, UK
    I'll just leave this addition to AMD's CG video portfolio here...
     
  21. john_

    john_

    Joined:
    Sep 6, 2013
    Messages:
    264 (0.59/day)
    Thanks Received:
    69
    Location:
    Athens, Greece
    Probably HSA promotion. But I think they could promote HSA better if they started selling more Kaveri APUs instead of continuing to sell Richland and Trinity.
     
  22. theoneandonlymrk

    theoneandonlymrk

    Joined:
    Mar 10, 2010
    Messages:
    3,413 (1.99/day)
    Thanks Received:
    572
    Location:
    Manchester uk
    DDR4 on what is essentially a budget socket won't make any sense until it's no longer at crazy prices, and it isn't likely to get that cheap any time soon.
    And the OP kind of implies that AMD are definitely bad for holding on to reality; Intel, by comparison, keep swapping sockets and chipsets merely to keep people from having more than a few years of upgrade path.
    IMHO, PCIe 3 is not utilised 100% by 99% of those who have it, and DDR4 is simply too expensive at this time, so I welcome the common-sense approach of "no, we won't swap sockets just to drum up chipset sales."
     
    GhostRyder says thanks.
  23. Assimilator

    Assimilator

    Joined:
    Feb 18, 2005
    Messages:
    623 (0.17/day)
    Thanks Received:
    105
    Location:
    South Africa
    In LGA 775 days, memory controllers were embedded in discrete north bridge chipsets. Nowadays, the north bridge functionality has moved onto the CPU itself and the north bridge no longer exists. Hence memory support is now coupled to the CPU you use, not the motherboard.

    Granted, there's no technical reason why AMD can't release CPUs that support both DDR3 and DDR4 at the same time... but there are plenty of good financial reasons why two memory controllers on a CPU don't make much sense, especially when you're in AMD's position of targeting your CPUs at the price-conscious.
     
    eidairaman1 and Aquinus say thanks.
  24. Assimilator

    Assimilator

    Joined:
    Feb 18, 2005
    Messages:
    623 (0.17/day)
    Thanks Received:
    105
    Location:
    South Africa
    Not to mention that integrating a DDR4 memory controller into current CPUs would require a re-spin and re-validation of those CPU designs, which isn't cheap. Plus then AMD would need to convince motherboard manufacturers to come up with DDR4 board designs.
     
    eidairaman1 says thanks.
  25. theoneandonlymrk

    theoneandonlymrk

    Joined:
    Mar 10, 2010
    Messages:
    3,413 (1.99/day)
    Thanks Received:
    572
    Location:
    Manchester uk
    That last bit's probably the easiest; motherboard makers love anything that can sell more boards. But an efficient, low-cost computing platform still needs low-cost parts to fit it, or your target market won't buy in.
     
    eidairaman1 says thanks.
