Discussion in 'News' started by btarunr, Jul 25, 2014.
FM2+ might be here to stay, but whether AMD sticks with it until 2016 is highly questionable.
But they also don't like having to redesign boards all the time, which could cause a bit of a stir if they had to build a newly designed motherboard supporting DDR4.
Yea, that would be an oddity in general, and too hard for them to do on a cost-effective basis. I believe DDR3 as it stands still has a bit of room left for some extra performance, so my guess is that they'll just integrate faster memory controllers into the design until they feel DDR4 is a stable enough platform, while keeping cost down.
I'll just leave this here:
Dual memory support is nothing new for AMD. Whether or not it makes sense to do so on a budget product is another matter entirely.
If they are sticking to the same socket, nothing says they won't just build a different CPU that supports both DDR4 and DDR3. But that depends on the server market rather than enthusiasts, and I don't know if it's worth it. Then it's just a matter of picking an older board with DDR3 versus a newer one with DDR4, with the same CPU working in both. So you could buy the CPU now, use it in your old rig, then get a new board later if you think you need to.
I do think it's a good idea to stick to the same socket type, though. It helps funnel people who are already on that socket toward upgrading with a new AMD processor rather than jumping ship to Intel, since they'd have to buy a new board anyway if the socket changed. It worked with Bulldozer for the most part, and APUs have a much better reputation, so it's a good strategy.
Personally, I don't see AMD going anywhere.
They have a lot of GPU sales.
Consoles use APUs.
Laptops with APUs sell really well. It really is only the enthusiast range of CPUs that sucks.
How well they manage the money is not something I can really comment on, but I doubt they don't have enough coming in to be a viable company.
Shambles, the problem is that DDR4 has a different pinout than DDR3, which may well call for a different number of pins dedicated to DRAM on the CPU socket, since most CPUs now have integrated memory controllers. Both DDR2 and DDR3 DIMMs used a 240-pin connector, so supporting either didn't require changes to socket pin count or location. DDR4, however, moves to a 288-pin connector. That requires changes to the IMC interface and, for CPUs with IMCs, demands a change in the socket; you simply can't get around it. There were also some rather extensive changes made in DDR4 such that a redesign of the DIMM traces should be considered anyway, regardless of pin count. DDR4 is a bigger leap than DDR3 was.
AMD rolled the dice back in 2007-08 and gambled that speed and a server-centric (high-profit-line) architecture was the way of the future. At the time, AMD's server market share was just beginning its slide into oblivion... and of course both Hector Ruiz and Dirk Meyer came from a server background.
I think the original "think big" stemmed from AMD's slippage of Barcelona and OEMs losing interest in AMD because of it. It's no coincidence that whenever Barcelona's time-to-market came up, AMD attempted to deflect scrutiny by talking up Bulldozer (this from 2007, for example; note the writing-cheques-the-silicon-couldn't-cash "the highest performing single and multi-threaded compute core in history" mantra).
The fact that Dirk Meyer and the AMD board marginalized mobile market development, combined with the very long lead-in times associated with an architecture, means that once the decision was made to go with Bulldozer, everyone was along for the ride for years to come.
True enough. Sane and rational seldom sells product as well as hype, a new (if dubious) feature list, and a new nameplate. Whether the product justifies itself to a handful of people with claims to being enthusiasts is largely immaterial - a hardware vendor's fortunes rise or fall with OEMs... and OEMs demand new product. Somehow I don't think putting out a fourth or fifth revision of a 900-series chipset board every year cuts it for Asus and Co., and there is precious little to recommend these later offerings over the ones that came before.
Having said that, AMD's R&D money men have short arms and deep pockets, and no amount of funding is going to claw back market share in x86 HEDT/server/workstation from Intel. The lead-in time is too great, and Intel's market penetration too encompassing. AMD's board are renowned for playing the short game, so I'd guess that they'd continue to look at markets where they can have an immediate impact.
There is little fundamental difference between DDR2 and DDR3. DDR4, on the other hand, has some distinct electrical changes from DDR3. One that comes to mind is voltage margining: VrefDQ is now generated on the memory die and trained directly by the memory controller. AFAIK DDR3 relies on external regulation in the board's memory subsystem - so you're looking at an overhaul of the memory controller in any case.
Not sure that would be possible TBH; two discrete memory controllers take up die space and require a certain amount of extra logic. DDR4's more point-to-point channel topology and more direct interface between the MC and the memory sound like a nightmare in waiting for anyone contemplating building an MC with DDR3 + DDR4 functionality.
But that was at a time when AMD was banking on the marketing that their new processors could work on the old motherboards. They pretty much had to do this to save face by giving the original Phenom owners an upgrade path to a processor that wasn't crap.
And we saw how well that worked for AMD. They lost more market share, their CPU performance and power consumption took a step BACKWARDS, and AMD lost more money and went into panic mode, shedding staff and their CEO. And now... Fusions are all one or two modules (CPU clusters don't count, no matter what AMD says). I'm pretty sure the "moar cores" philosophy didn't do AMD much good.
Meanwhile, Intel's fewer, more powerful cores continue to dominate the market, in both servers and workstations. Their more power-efficient designs with fewer cores are much preferred in laptops, so much so that finding an AMD laptop is quite difficult. Perhaps AMD should try to follow suit and produce a faster, more efficient architecture, instead of the "moar cores, more Gigahurts" design that is slowly killing them.
Since there is no tangible advantage to DDR4 for CPU-powered desktop use, it's foolish to even bother with it. Intel is using DDR4 as a gimmick for suckers. DDR4 is designed primarily for servers, and it will be more expensive than low-voltage DDR3. DDR3 running at 1600+ MHz on a discrete-CPU desktop system is not a system bottleneck, so faster RAM is just wasted money. On an APU, DDR3 up to 2133 MHz offers a modest performance gain. Maybe some day DDR4 will have merit for desktop use, but not for years to come.
There is also no reason to change from socket FM2+, as it works just fine. Changing sockets for the sake of exploiting consumers is what Intel does.
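The bandwidth claim above can be sanity-checked with some quick back-of-the-envelope arithmetic. This is only a sketch of theoretical peak figures (transfer rate x bus width x channel count), not measured numbers, and the function name is mine:

```python
# Theoretical peak DRAM bandwidth: MT/s * bytes per transfer * channels.
# Illustrative only - real-world sustained bandwidth is always lower.

def peak_bandwidth_gbs(megatransfers, bus_width_bits=64, channels=2):
    """Theoretical peak bandwidth in GB/s (decimal) for a given DRAM speed."""
    bytes_per_transfer = bus_width_bits // 8  # 64-bit channel -> 8 bytes
    return megatransfers * bytes_per_transfer * channels / 1000

for name, rate in [("DDR3-1600", 1600), ("DDR3-2133", 2133), ("DDR4-2133", 2133)]:
    print(f"{name}: {peak_bandwidth_gbs(rate):.1f} GB/s dual-channel")
```

Dual-channel DDR3-1600 already gives 25.6 GB/s of theoretical peak, which a typical 2014 desktop CPU workload rarely saturates; an APU's integrated GPU, by contrast, can chew through that, which is why faster DDR3 helps APUs but not discrete-GPU desktops. Note that at the same 2133 MT/s, DDR3 and DDR4 have identical peak bandwidth; DDR4's advantage only appears at the higher clocks it can reach.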
Amd to its knees? HARDLY!
They are making profits in the markets that do count. Enthusiast is such a small-margin segment.
They didn't have an IMC THEN.
Thinking like an end user. Advantage or no, new hardware is sold on the basis that it is new. Does it need to be practical? Of course not. Why do you think OEMs strongarm AMD and Nvidia into rebranding graphics every product cycle?
Considering Intel's desktop line is an offshoot of their pro CPU/chipsets, would you think Intel would make DDR4 available for Xeon and then devise a whole new platform for consumers using DDR3?
DDR3 was also pretty damn expensive when it hit primetime - more to the point, when DDR3-1066/-1333 became the mainstream DDR3 RAM, it was massively more expensive than good DDR2 running at the same bandwidth. My D9GMH/D9GKX DDR2 modules were well capable of DDR2-1150 to -1200 speeds, but I didn't for one minute think that DDR3 wasn't going to be the new standard.
Yeah, complete and utter bastards scamming people with that one-and-done FM1 socket... ah, wait.
Yeah, but you can't apply the "new" factor in the same way to a budget-aware platform, Smokey, so Jorge was right: DDR4 at this point is irrelevant to AMD APUs. Maybe 2016 is when AMD believes DDR4 will become mainstream-cheap.
Well, if AMD says/thinks 2016, you can usually add two years to that, so 2018 for DDR4 from AMD.
So, so cynical. IMHO they will have an APU or two with DDR4 support long before then, and long before DDR4 hits the mainstream, for the micro-server market.
You know, for those with the money to pay for the damn DDR4.
AMD's main issue ATM is the disparity in requirements between servers and low-price-point OEM PC systems.
An OEM is never, ever going to fit DDR4 into a 500-600 notes home PC or laptop, so why have support for DDR4 or PCIe 3.0 when no one wants or can afford them anyway?
Which is exactly what I said in my earlier post...except Jorge is talking the here-and-now, and I was talking about laying down designs and taking lead-in time into account. I'm also guessing that DDR4 introduction would mirror that of DDR2/DDR3, in that non-ECC desktop low-binned entry level DDR4(-2133CAS13) won't be overly expensive since many DDR3 sticks already exceed this.
There are TWO ways of looking at this:
1. What AMD will likely do (and the K12 ARM and Seattle server tend to confirm this): maximise their bang-for-buck, find a market where they don't butt heads with Intel, and introduce feature sets as R&D allows; and
2. What AMD would need to do to remain anything more than a mosquito on the Intel elephant's arse. This is an exercise in what AMD would need to do in the marketplace to avoid further slippage against their competitors. As an exercise in marketing, AMD themselves realize that they must hit the bullet points OEMs demand (see the link below). For some strange reason, Jorge seems to think the consumer DIY market (where AMD buyers pride themselves on their frugality and how long they go between upgrades!?) drives company sustainability. The only other explanation is that Jorge sees AMD as only a budget consumer option for years to come - in which case they'll never be on board with DDR4. If there is indeed a further (non-rebrand/re-release) FX enthusiast platform, then AMD would need to be working on DDR4 integration now, regardless of current considerations.
What AMD will do is option 1, because they have no choice; option 2 was taken away by hubris (W. Jerry Sanders III) and a level of strategic thinking commensurate with a game of tic-tac-toe rather than a semiconductor company (Hector Ruiz, Dirk Meyer). The difference between my view and Jorge's is that I recognize that AMD have been forced into a course of action that is less than ideal, whereas Jorge sees any AMD initiative as the correct one at any time, tends to look no further ahead than lunch, and paints any other strategy as the province of some evil empire catering to the gullible - which is naïve to say the least. DDR4 will come to AMD APUs, since they will need to support it in the server market (AMD's own slides confirm this - look at the monitor next to the guy doing a passable Godzilla impression)... Do you really think AMD have sufficient R&D to develop separate APU architectures for both server and desktop? If Intel with all their resources combine both, what makes you think AMD can do any different?
The only difference here is that Intel is ready at the introduction of the technology while AMD is not. If AMD had the R&D resources, you can bet the mortgage that they would be riding DDR4 for all it's worth.
AMD will have 3 independent CPU architecture types for servers.
APU: derived from, but not the same as, BD.
They don't do too badly with their measly R&D budget.
I think you mean Bulldozer derivatives including current FM2+ APUs.
Then Kabini and low-power APU.
Then future ARM CPUs.
Now ULV APUs use the module architecture like BD-based desktop CPUs do.
No, Opterons are 4-16 core BD-based (Piledriver arch).
APUs with and without GPU, on Steamroller.
And Seattle with its ARM cores.
Plus the other cores AMD has and continues to develop, like Jaguar (used in the consoles) and its Puma respin.
My point was that they don't do too badly with the R&D budget they have.
You're counting generational changes concurrently. Bulldozer (and Piledriver) is done and dead from an R&D point of view - no need to include an architecture that has been in the market for 2+ years.
The in-production Steamroller-based parts are Kaveri and Berlin; they will be replaced by the Excavator-based Carrizo (desktop) and Toronto (server). Like Kaveri/Berlin, those use the same base architecture. Seattle is of course ARM Cortex-A57 based.
Mullins and Beema SoCs use the already-in-production Puma+ core, which is just a respin of Kabini/Temash's Jaguar core - the follow-on will use a derivative of the same. So of those, you seem to be including past architectures (from an R&D perspective) as current. Going forward, AMD have Excavator (x86), ARM Cortex (SoC), and Puma+ (SoC), only one of which could be classified as a "new architecture"; the other two already exist and are being utilized in current product. Classifying Puma+ as anything more than incremental polishing of an older in-production Jaguar seems disingenuous, and actually makes for a worse comparison against the company they need to take market share from. Comparing to Intel (using your Bulldozer/Piledriver-based Opterons, Zurich/Delhi, as the baseline for "current/going forward"), you could say that Intel have Haswell, Ivy Bridge, Sandy Bridge, Saltwell, Silvermont, and Airmont (Goldmont will also debut well before Carrizo, as could Broadwell) architectures using the same parameters.
I'm saying they do OK, all things considered, IMHO.
Nonsense! Those chips were manufactured to the specifications given by AMD. They were designed the way they were in order to cut costs. FPUs take up a large amount of real estate, and that costs money.
The module’s cores, in addition to the shared FPU, share L1 instruction cache, fetch, decode, and L2 cache. Once again, a big reason for this was to minimize cost while the engineers anticipated minimal performance loss from this design. They were wrong. By sharing all those features in addition to the shared FPU, the performance was heavily reduced.
If what you say was accurate about only needing FP performance in synthetic benchmarks, then why do those chips perform so poorly in real benchmarks?
Bringing Kabini into this is just going to confuse people. Kabini is a low-power APU made for netbooks, tablets, some laptops, and "next-gen consoles", lol. (I love saying that to the console fanboy mouth-breathers.)
Considering how small AMD's market share currently is, I don't see software design changing that much just to make AMD chips run better. The market leader is typically who the software is designed for, and currently AMD has about 16.9% of the CPU market, which is up from 14.3% since last year. This was primarily due to the sale of roughly 10 million PS4 and Xbone consoles. They actually lost ground if you don't factor those in. The same goes for the graphics card segment. Down again.
Do you have hyperlinks for all of this info?
Sure, I'll just Google one of the quotes and find some.
Here is the one I pasted above.
But a few sites covered the story.
Thx dude. Yeah, I built my bro a Phenom II X2 555BE gearing up for BD. Luckily enough, the board supports an 8350 and possibly a 9550.