Friday, July 25th 2014

AMD to Drag Socket FM2+ On Till 2016

AMD's desktop processor and APU platforms are not expected to see any major changes till 2016, according to a BitsnChips report. The delay is attributed to a number of factors, including DDR4 memory proliferation (i.e., waiting for DDR4 to become affordable enough for the target consumers of APUs), and AMD's so-called "project Fast-Forward," which aims to place high-bandwidth memory next to the APU die, so that AMD's increasingly powerful integrated graphics solutions can overcome memory bottlenecks.

The company's upcoming "Carrizo" APU is targeted at low-TDP devices such as ultra-slim notebooks and tablets, and is the first chip to integrate AMD's next-generation "Excavator" CPU micro-architecture. "Carrizo" chips continue to use DDR3 system memory, so it's possible that AMD may design a socket FM2+ chip based on "Excavator," probably leveraging newer silicon fab processes. But otherwise, socket FM2+ is here to stay.
Source: BitsnChips, Image Courtesy: VR-Zone
Add your own comment

54 Comments on AMD to Drag Socket FM2+ On Till 2016

#1
GhostRyder
by: theoneandonlymrk
That last bit's probably the easiest, i.e. motherboard makers love anything that can sell more boards, but an efficient low-cost computer platform still needs low-cost parts to fit it, or your target market won't buy in
But they also do not like having to re-design a board all the time, which could cause a bit of a stir in general if they had to build a newly designed motherboard supporting DDR4.

by: Assimilator
In LGA775 days, memory controllers were embedded into discrete north bridge chipsets. Nowadays, the north bridge functionality has moved onto the CPU itself and the north bridge no longer exists. Hence memory support is now coupled to the CPU you use, not the motherboard.

Granted, there's no technical reason why AMD can't release CPUs that support both DDR3 and DDR4 at the same time... but there are plenty of good financial reasons why two memory controllers on a CPU don't make much sense. Especially when you're in AMD's position where they're targeting their CPUs at the price-conscious.
Yeah, that would be an oddity in general, and too hard for them to do on a cost-effective basis. I believe DDR3 as it stands still has a bit of room left for some extra performance, so my guess is that they will just integrate faster memory controllers into the design until they feel DDR4 is a stable enough platform, while keeping cost down.
Posted on Reply
#2
m4gicfour
I'll just leave this here:
by: wikipedia

Socket AM2+ versions of the Phenom II (920, 940) lack forward-compatibility with Socket AM3.[8] Socket AM3 versions of the Phenom II are backwards-compatible with Socket AM2+, though this is contingent on motherboard manufacturers supplying BIOS updates. In addition to the Phenom II's pin compatibility, the AM3 memory controller supports both DDR2 and DDR3 memory (up to DDR2-1066 and DDR3-1333), allowing existing AM2+ users to upgrade their CPU without changing the motherboard or memory. However, similar to the way the original Phenom handled DDR2-1066, current Phenom II platforms limit the usage of DDR3-1333 to one DIMM per channel; otherwise, the DIMMs are underclocked to DDR3-1066.[9] AMD claims that this behaviour is due to the BIOS, not the memory controller, and plans to address it with a BIOS update. The dual-spec memory controller also gives motherboard manufacturers and system builders the option of pairing AM3 with DDR2, as compared to competing chips from Intel which require DDR3.
Dual memory support is nothing new for AMD. Whether or not it makes sense to do so on a budget product is another matter entirely.
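Purely as an illustration, the one-DIMM-per-channel behaviour described in the quoted passage can be sketched in a few lines of Python (the function name is made up; the speeds are the ones quoted above):

```python
# Sketch of the early Phenom II DDR3 behaviour quoted above: DDR3-1333
# is only honoured with one DIMM per channel; with more, the modules
# are underclocked to DDR3-1066. Function name is hypothetical.
def effective_ddr3_speed(rated_mhz, dimms_per_channel):
    if rated_mhz >= 1333 and dimms_per_channel > 1:
        return 1066  # underclocked, per the quoted BIOS behaviour
    return rated_mhz

print(effective_ddr3_speed(1333, 1))  # 1333
print(effective_ddr3_speed(1333, 2))  # 1066
print(effective_ddr3_speed(1066, 2))  # 1066
```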
Posted on Reply
#3
Shambles1980
by: Assimilator
In LGA775 days, memory controllers were embedded into discrete north bridge chipsets. Nowadays, the north bridge functionality has moved onto the CPU itself and the north bridge no longer exists. Hence memory support is now coupled to the CPU you use, not the motherboard.

Granted, there's no technical reason why AMD can't release CPUs that support both DDR3 and DDR4 at the same time... but there are plenty of good financial reasons why two memory controllers on a CPU don't make much sense. Especially when you're in AMD's position where they're targeting their CPUs at the price-conscious.
If they are sticking to the same socket, nothing says they won't just build a different CPU that does DDR4 and DDR3 to put in it, but that depends on the server market rather than enthusiasts, and I don't know if it's worth it. Then it's just a matter of picking an older board with DDR3 vs. a newer one with DDR4, and being able to use the same CPU in both. You could buy the CPU now, use it in your old rig, then get a new board later if you think you need to.

I do think it's a good idea to stick to the same socket type though. It helps funnel people already using that socket into upgrading with a new AMD processor, rather than jumping ship to Intel, since they would have to buy a new board anyway if AMD changed socket. It worked with Bulldozers for the most part, and APUs have a much better reputation, so it's a good strategy.

by: pidgin
FM2+ might be here to stay but AMD staying till 2016 is highly questionable
Personally I don't see AMD going anywhere.
They have a lot of GPU sales.
Consoles use APUs.
Laptops with APUs sell really well; it really is only the enthusiast range of CPUs that sucks.
How well they manage the money is not something I could really comment on, but I doubt they don't have enough coming in to be a viable company.
Posted on Reply
#4
Aquinus
Resident Wat-man
Shambles, the problem is that DDR4 has a different pinout than DDR3, which very well may call for a different number of pins dedicated to DRAM on the CPU socket, since most CPUs now have integrated memory controllers. Both DDR2 and DDR3 have 240 pins on the DIMM connector, so no changes to pin count were needed for IMCs moving from DDR2 to DDR3. DDR4, however, has 288 pins, unlike DDR2/3's 240. This requires changes to the IMC interface and, for CPUs with IMCs, demands a change in the socket; you simply can't get around it. There were also some rather extensive changes made with DDR4 where a redesign of the DIMM traces should be considered anyway, regardless of pin count. DDR4 is a bigger leap than DDR3 was.
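For reference, the unbuffered desktop DIMM pin counts behind this argument can be tabulated in a couple of lines (an illustrative sketch only; note that desktop DDR4 UDIMMs carry 288 pins):

```python
# Unbuffered desktop DIMM connector pin counts.
DIMM_PINS = {"DDR2": 240, "DDR3": 240, "DDR4": 288}

# DDR2 -> DDR3 kept the pin count, so an IMC could change memory type
# without forcing a new socket; DDR4 changes the count, so a CPU with
# an integrated memory controller needs a new socket as well.
print(DIMM_PINS["DDR3"] == DIMM_PINS["DDR2"])  # True
print(DIMM_PINS["DDR4"] == DIMM_PINS["DDR3"])  # False
```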
Posted on Reply
#5
HumanSmoke
by: john_
As I said. That was the whole idea of the design, to cut corners.
AMD rolled the dice back in 2007-08 and gambled that speed and a server-centric (high-profit line) architecture was the way of the future. At the time AMD's server market share was just beginning its slide into oblivion... and of course both Hector Ruiz and Dirk Meyer came from a server background.
I think the original "think big" stemmed from AMD's slippage of Barcelona and OEMs losing interest in AMD because of it. It's no coincidence that whenever Barcelona's time-to-market came up, AMD attempted to deflect scrutiny by talking up Bulldozer (this from 2007 for example; note the writing-the-cheques-the-silicon-couldn't-cash "the highest performing single and multi-threaded compute core in history" mantra).
The fact that Dirk Meyer and the AMD board marginalized mobile market development, combined with the very long lead-in times associated with an architecture, means that once the decision was made to go with Bulldozer, everyone was along for the ride for years to come.
by: TheMailMan78
I get what you are saying, and the market doesn't really demand much more than 4-generation-old CPUs right now, BUT... I think AMD should start being a little more proactive in the desktop/server area, rather than reactive to Intel's advancements.
True enough. Sane and rational seldom sells product as well as hype, a new (if dubious) feature list, and a new nameplate. Whether the product justifies itself to a handful of people with claims to being enthusiasts is largely immaterial - a hardware vendor's fortunes rise or fall with OEMs... and OEMs demand new product. Somehow I don't think putting out a fourth or fifth revision of a 900-series chipset board every year cuts it for Asus and Co., and there is precious little to recommend these later offerings over the ones that came before.
Having said that, AMD's R&D money men have short arms and deep pockets, and no amount of funding is going to claw back market share in x86 HEDT/server/workstation from Intel. The lead-in time is too great, and Intel's market penetration too encompassing. AMD's board is renowned for playing the short game, so I'd guess they'll continue to look at markets where they can have an immediate impact.
by: m4gicfour
Dual memory support is nothing new for AMD. Whether or not it makes sense to do so on a budget product is another matter entirely.
There is little fundamental difference between DDR2 and DDR3. DDR4, on the other hand, has some distinct electrical changes from DDR3. One that comes to mind is voltage margining: vREF DQ is now on the memory die and is accessed by the memory controller directly. AFAIK DDR3 relies upon external regulation chips in the board's memory sub-system - so you're looking at an overhaul of the memory controller in any case.
by: Assimilator
Granted, there's no technical reason why AMD can't release CPUs that support both DDR3 and DDR4 at the same time...
Not sure that would be possible, TBH; two discrete memory controllers take up die space and require a certain amount of extra logic. DDR4's channel topology and the more direct interface required between the MC and the memory sound like a nightmare in waiting for anyone contemplating building an MC with DDR3 + DDR4 functionality.
Posted on Reply
#6
newtekie1
Semi-Retired Folder
by: m4gicfour
I'll just leave this here:


Dual memory support is nothing new for AMD. Whether or not it makes sense to do so on a budget product is another matter entirely.
But that was at a time when AMD was banking on the marketing claim that their new processors could work on old motherboards. They pretty much had to do it to save face, by giving original Phenom owners an upgrade path to a processor that wasn't crap.
Posted on Reply
#7
TheinsanegamerN
by: Aquinus
No, it was to save die space so they could fit more compute cores in the same area. They removed hardware that wasn't needed and added more where it was needed (eventually). A module is more of a dual core than you think, because there are actually two full integer cores that run in parallel, unlike HyperThreading, which re-uses components that aren't being used to get some extra work done. The shared components are things like the opcode decoders, cache, and a wide FPU (256-bit vs. 128-bit); using things like XOP and FMA, that single wide FPU can be run as two individual 128-bit FPUs for particular instructions. It's not perfect, but it made one thing very clear: CPUs should be doing mostly integer math and some floating-point math, and if you need to do a ton of floating-point calculations, you should be doing it on a GPU/GPGPU setup. It's no different than NVIDIA gimping its double-precision performance to improve single precision, because that is what games typically use.

So no, they didn't do it to "cut corners"; that's just how you feel about it, which is different from why they did it. They did it to save die space so they could cram more cores onto a single CPU.
And we saw how well that worked for AMD. They lost more market share, their CPU performance and power consumption took a step BACKWARDS, and AMD lost more money and went into panic mode, shedding staff and their CEO. And now... Fusions are all one or two modules (CPU clusters don't count, no matter what AMD says). I'm pretty sure the "moar cores" philosophy didn't do AMD much good.

Meanwhile, Intel's fewer, more powerful cores continue to dominate the market, in both servers and workstations. Its more power-efficient, fewer cores are much preferred in laptops, so much so that finding an AMD laptop is quite difficult. Perhaps AMD should try to follow suit and produce a faster, more efficient architecture, instead of the "moar cores, more Gigahurts" design that is slowly killing them.
Posted on Reply
#8
Jorge
Since there is no tangible advantage to DDR4 for CPU-powered desktop use, it's foolish to even bother with it. Intel is using DDR4 as a gimmick for suckers. DDR4 is designed primarily for servers, and it will be more expensive than DDR3 LV. DDR3 RAM running at 1600+ MHz on a discrete-CPU desktop system is not a system bottleneck, so faster RAM is just wasted money. On an APU, DDR3 up to 2133 MHz offers a modest performance gain. Maybe someday DDR4 will have merit for desktop use, but not for years to come.

There is also no reason to change from socket FM2+ as it works just fine. Changing sockets for the sake of exploiting consumers is what Intel does.
Posted on Reply
#9
eidairaman1
AMD to its knees? HARDLY!
They are making profits in markets that do count. Enthusiast is such a small market segment.

by: john_
As I said. That was the whole idea of the design, to cut corners.

FPUs were not needed???


As I also said, a module is only as much hardware as is necessary for AMD to advertise it as a full dual core without fear of lawsuits dropping like bombs on their headquarters for misleading customers.



If the integer units were much faster, and if the 6 FPUs in the Phenom II X6 weren't running circles around the 4 in the first 8-core Bulldozer chips in most cases, or if there were stream processors in the FX chips in the first place to take advantage of GPGPU, and we also had plenty of software for GPGPU, I could agree with you. But we have a ton of "ifs" years after the first Bulldozer, and of course this isn't the same case as with Nvidia, because Nvidia's cards are top performers. So I can't agree with you.


It is not a feeling. It is reality. They couldn't follow Intel in thread count (Intel had an unfair advantage there with HyperThreading), and they couldn't follow Intel in the manufacturing process, so they had to do something. And that something was to throw half the FPUs out and start counting integer units when advertising the chips. Now they have started talking about compute cores so they can advertise 4, 8, or 12 cores (I hope this truck I posted doesn't transfer compute cores but integer cores; very optimistic, but let's just hope).

You want to justify a design that failed miserably and brought AMD to its knees. I can't stop you. I can only say that for the Jaguar design, where space is much more limited and power consumption much more important, they didn't choose the module design. Even considering that Kabinis, for example, do have stream processors in them for GPGPU, they still paired each integer unit with a full FPU. That should tell you something.
by: Shambles1980
I don't see why FM2+ boards couldn't use DDR4 with some updated hardware on the motherboard.
LGA 775 managed to span DDR, DDR2 and DDR3. Obviously it would be per-board specific, but I don't see how the socket type is relevant to what memory can be used.
They didn't have an IMC THEN.
Posted on Reply
#10
HumanSmoke
by: Jorge
Since there is no tangible advantage to DDR4 for CPU powered desktop use, it's foolish to even bother with it.
Thinking like an end user. Advantage or no, new hardware is sold on the basis that it is new. Does it need to be practical? Of course not. Why do you think OEMs strong-arm AMD and NVIDIA into rebranding graphics every product cycle?
by: Jorge
Intel is using DDR4 as a gimmick for suckers. DDR4 is designed primarily for servers
Considering Intel's desktop platform is an offshoot of their pro CPU/chipsets, do you really think Intel would make DDR4 available for Xeon and then devise a whole new platform for consumers using DDR3?
by: Jorge
and it will be more expensive than DDR3 LV.
Duh.
DDR3 was also pretty damn expensive when it hit primetime - more to the point, when DDR3-1066/-1333 became the mainstream DDR3 RAM, it was massively more expensive than good DDR2 running at the same bandwidth. My D9GMH/D9GKX DDR2 modules were well capable of DDR2-1150 to -1200 speeds, but I didn't for one minute think that DDR3 wasn't going to be the new standard.
by: Jorge
Changing sockets for the sake of exploiting consumers is what Intel does.
Yeah, complete and utter bastards, scamming people with that one-and-done FM1 socket... ah, wait.
Posted on Reply
#11
theoneandonlymrk
Yeah, but you can't apply the newness factor in the same way to a budget-aware platform, Smokey, so Jorge was right: DDR4 at this point is irrelevant to AMD APUs. Maybe 2016 is when AMD believes DDR4 will become mainstream-cheap.
Posted on Reply
#12
Shambles1980
Well, if AMD say/think 2016, you can usually add 2 years to that, so 2018 for DDR4 from AMD.
Posted on Reply
#13
theoneandonlymrk
So, so cynical. IMHO they will have an APU or two ;) with DDR4 support long before then, and long before it hits mainstream, for the micro-server market.
You know, for those with the money to pay for the damn DDR4.
AMD's main issue ATM is the disparity in requirements between servers and low-price-point OEM PC systems.
An OEM is never, ever going to fit DDR4 into a 5-600 notes home PC or laptop, so why have support for DDR4 or PCIe 3.0 when no one wants or can afford them anyway?
Posted on Reply
#14
HumanSmoke
by: theoneandonlymrk
Yeah, but you can't apply the newness factor in the same way to a budget-aware platform, Smokey, so Jorge was right: DDR4 at this point is irrelevant to AMD APUs
Which is exactly what I said in my earlier post... except Jorge is talking about the here-and-now, and I was talking about laying down designs and taking lead-in time into account. I'm also guessing that DDR4's introduction will mirror that of DDR2/DDR3, in that non-ECC desktop low-binned entry-level DDR4 (-2133 CAS13) won't be overly expensive, since many DDR3 sticks already exceed this.
by: HumanSmoke
AMD's R&D money men have short arms and deep pockets, and no amount of funding is going to claw back market share in x86 HEDT/server/workstation from Intel. The lead-in time is too great, and Intel's market penetration too encompassing. AMD's board is renowned for playing the short game, so I'd guess they'll continue to look at markets where they can have an immediate impact.
There are TWO ways of looking at this:
1. What AMD will likely do (and the K12 ARM and Seattle server tend to confirm this), which is maximise their bang-for-buck, find a market where they don't butt heads with Intel, and introduce feature sets as R&D allows; and:
2. What AMD would need to do to remain anything more than a mosquito on the Intel elephant's arse. This is an exercise in what AMD would need to do in the marketplace to avoid further slippage against their competitors. As an exercise in marketing, AMD themselves realize that they must hit the bullet points OEMs demand (see the link below). For some strange reason, Jorge seems to think the consumer DIY market (where AMD buyers pride themselves on their frugality and how long they go between upgrades!?!!?) drives company sustainability. The only other explanation is that Jorge sees AMD as only a budget consumer option for years to come - in which case they'll never be on board with DDR4. If there is indeed a further (non-rebrand/re-release) FX enthusiast platform, then AMD would need to be working on DDR4 integration now, regardless of current considerations.

What AMD will do is option 1, because they have no choice; option 2 was taken away by hubris (W. J. Sanders III) and a level of strategic thinking commensurate with a game of tic-tac-toe rather than a semiconductor company (Hector Ruiz, Dirk Meyer). The difference between my view and Jorge's is that I recognize that AMD have been forced into a course of action that is less than ideal, whereas Jorge sees any AMD initiative as the correct one at any time, tends to look no further ahead than lunch, and treats any other course of strategy as the province of some evil empire catering to the gullible, which is naïve to say the least. DDR4 will come to AMD APUs, since they will need to support it in the server market (AMD's own slides confirm this - look at the monitor next to the guy doing a passable Godzilla impression)... do you really think that AMD has sufficient R&D to develop separate APU architectures for both server and desktop? If Intel with all their resources combine both, what makes you think AMD can do any different?
The only difference here is that Intel is ready at the introduction of the technology while AMD is not. If AMD had the R&D resources, you can bet the mortgage that they would be riding DDR4 for all it's worth.
Posted on Reply
#15
theoneandonlymrk
AMD will have 3 independent CPU architecture types for servers:
Bulldozer derivatives,
APUs derived from, but not the same as, BD,
and ARM.
They don't do too badly with their measly R&D budget.
Posted on Reply
#16
Aquinus
Resident Wat-man
by: theoneandonlymrk
AMD will have 3 independent CPU architecture types for servers:
Bulldozer derivatives,
APUs derived from, but not the same as, BD,
and ARM.
They don't do too badly with their measly R&D budget.
I think you mean Bulldozer derivatives including current FM2+ APUs.
Then Kabini and low-power APU.
Then future ARM CPUs.

No ULV APUs use the module architecture like BD-based desktop CPUs do.
Posted on Reply
#17
theoneandonlymrk
No, Opterons are 4-16 core BD-based (at Piledriver arch),
APUs with and without GPU are at Steamroller,
and Seattle has its ARM cores.
Plus the other cores AMD has and continues to develop, like Puma for consoles.
My point was that they don't do too badly with the R&D budget they have.
Posted on Reply
#18
HumanSmoke
by: theoneandonlymrk
AMD will have 3 independent CPU architecture types for servers:
Bulldozer derivatives,
APUs derived from, but not the same as, BD,
and ARM.
They don't do too badly with their measly R&D budget.
You're counting generational changes concurrently. Bulldozer (and Piledriver) is done and dead from an R&D point of view - no need to include an architecture that is 2+ years in the market.
In production, the Steamroller-based arch is Kaveri and Berlin; they will be replaced by the Excavator-based Carrizo (desktop) and Toronto (server). Like Kaveri/Berlin, they use the same base architecture. Seattle is of course an ARM Cortex-A57.
Mullins and Beema SoCs use the already-in-production Puma+ core, which is just a respin of Kabini/Temash's Jaguar core - the follow-on will use a derivative of the same. So you seem to be including past architectures (from an R&D perspective) as current. Going forward, AMD have Excavator (x86), ARM Cortex (SoC) and Puma+ (SoC), only one of which could be classified as a "new architecture"; the other two already exist and are being utilized in current product. Classifying Puma+ as anything more than incremental polishing of an older in-production Jaguar seems disingenuous, and actually makes for a worse comparison against the company they need to take market share from. Comparing to Intel (using your Bulldozer/Piledriver-based Opterons (Zurich/Delhi) as the baseline for "current/going forward"), you could say that Intel have Haswell, Ivy Bridge, Sandy Bridge, Saltwell, Silvermont and Airmont (Goldmont will also debut well before Carrizo, as could Broadwell) architectures using the same parameters.
Posted on Reply
#20
Andrew LB
by: Shambles1980
the corner cutting was done during the mfr process.
Nonsense! Those chips were manufactured to the specifications given by AMD. They were designed the way they were in order to cut cost. FPUs take up a large amount of real estate, and that costs money.
by: Aquinus
Cutting corners would imply that they skimped to save on cost, which they didn't. AMD's chips are plenty fast; the problem is power consumption. If your cores make too much heat, you can't add more or make them run faster. You're complaining about the wrong stuff.

When it comes to integer performance (what CPUs are doing most of the time, since memory addresses and strings are represented as integers), 4 FPUs will more often than not be more than enough for typical floating-point use. Also, you're misunderstanding me if you think I'm saying the CPU doesn't need any FPUs. If you're running an application that has more than 4 FPU-intensive threads, then you really should be considering GPGPU, but most of the time FPU instructions are spread throughout code rather than bunched up, so despite there being only 1 FPU per module, it doesn't matter that it's shared, as a core will just use whichever is free. You run out of FP performance in the unique situations with FX chips that are typically only encountered in benchmarks, less so in real-world applications.
The module's cores, in addition to the shared FPU, share the L1 instruction cache, fetch, decode, and L2 cache. Once again, a big reason for this was to minimize cost, and the engineers anticipated minimal performance loss from the design. They were wrong: by sharing all those resources in addition to the FPU, performance was heavily reduced.

If what you say were accurate, about FP performance only mattering in synthetic benchmarks, then why do those chips perform so poorly in real benchmarks?
by: Aquinus
Kabini is a different animal because it doesn't use modules, or even the Phenom II architecture for that matter. The pipeline is much shorter (shorter than the Phenom II's, in fact) and is designed for low-power use cases, not performance. The cost of a shorter pipeline is that (initially at least) it can hinder clock speeds, until the components on the pipeline are optimized as Intel has done over the last 8 years with the Core architecture.
Bringing Kabini into this is just going to confuse people. Kabini is a low-power APU made for netbooks, tablets, some laptops, and "Next-Gen Consoles". lol. (I love saying that to the console fanboy mouth-breathers.)

Considering how small AMD's market share currently is, I don't see software design changing much just to make AMD chips run better. The market leader is typically who the software is designed for, and AMD currently has about 16.9% of the CPU market, up from 14.3% a year ago. That rise was primarily due to the sale of roughly 10 million PS4 and Xbone consoles; they actually lost ground if you don't factor those in. The same goes for the graphics card segment: down again.
Posted on Reply
#21
Shambles1980
Ex-AMD Engineer Explains Bulldozer Fiasco: Lack of Fine Tuning.
Engineer: AMD Should Have Hand-Crafted Bulldozer to Ensure High Speed

[10/13/2011 11:21 PM]
by Anton Shilov
Performance that Advanced Micro Devices' eight-core processor demonstrated in real-world applications is far from impressive, as the chip barely outperforms competing quad-core central processing units from Intel. The reason the performance of the long-awaited Bulldozer was below expectations is not only that it was late, but that AMD had adopted design techniques that did not allow it to tweak performance, according to an ex-AMD engineer.

According to Cliff A. Maier, an AMD engineer who left the company several years ago, the chip designer decided to abandon the practice of hand-crafting various performance-critical parts of its chips and rely completely on automatic tools. While usage of tools that automatically implement certain technologies in silicon speeds up the design process, they cannot ensure maximum performance and efficiency.

Automated Design = 20% Bigger, 20% Slower
"The management decided there should be such cross-engineering [between AMD and ATI teams within the company], which meant we had to stop hand-crafting our CPU designs and switch to an SoC design style. This results in giving up a lot of performance, chip area, and efficiency. The reason DEC Alphas were always much faster than anything else is they designed each transistor by hand. Intel and AMD had always done so at least for the critical parts of the chip. That changed before I left - they started to rely on synthesis tools, automatic place and route tools, etc.," said Mr. Maier in a forum post noticed by the Insideris.com web-site.


A wafer with AMD Orochi dies, used for AMD Opteron "Interlagos"/"Valencia" and AMD FX "Zambezi" microprocessors

Apparently, automatically-generated designs are 20% bigger and 20% slower than hand-crafted designs, which results in increased transistor count, die area, cost and power consumption.

"I had been in charge of our design flow in the years before I left, and I had tested these tools by asking the companies who sold them to design blocks (adders, multipliers, etc.) using their tools. I let them take as long as they wanted. They always came back to me with designs that were 20% bigger, and 20% slower than our hand-crafted designs, and which suffered from electro-migration and other problems," the former AMD engineer said.

Inefficiencies in Design?
While it is unknown whether AMD used automatic design flow tools for everything, there are certain facts that point to some inefficient pieces of design within Bulldozer. Officially, AMD claims that the Zambezi/Orochi processor consists of around 2 billion transistors, which is a very large number.


AMD Orochi floorplan

AMD publicly said that each Bulldozer dual-core CPU module with 2MB unified L2 cache contains 213 million transistors and is 30.9mm2 large. By contrast, the die size of one processing engine of the Llano processor (11-layer 32nm SOI, K10.5+ micro-architecture) is 9.69mm2 (without L2 cache), which indicates that AMD has succeeded in minimizing elements of its new micro-architecture so as to maintain the small size and production cost of the new chip.

As a result, all four CPU modules with L2 cache within the Zambezi/Orochi processor consist of 852 million transistors and take up 123.6mm2 of die space. Assuming that the 8MB of L3 cache (6 transistors per cell) consists of around 405 million transistors, that leaves a whopping ~740 million transistors for the various input/output interfaces, the dual-channel DDR3 memory controller, and the various logic and routing inside the chip.

~740 million transistors - which take up a lot of die space - is an incredibly high number for I/O, memory control, logic, etc. For example, Intel's entire Core i-series "Sandy Bridge" quad-core chip with integrated graphics consists of 995 million.
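The article's transistor budget can be sanity-checked with a few lines of arithmetic, using only the figures quoted above (all values in millions of transistors):

```python
# Transistor budget for Zambezi/Orochi, per the figures in the article.
total = 2000                 # ~2 billion claimed for the whole die
module = 213                 # one dual-core module + 2MB L2
modules = 4 * module         # all four modules: 852
l3_cache = 405               # estimate for 8MB of L3
uncore = total - modules - l3_cache  # I/O, IMC, logic, routing
print(modules, uncore)       # 852 743
```

The exact remainder works out to 743 million transistors for the uncore, which is the figure the comparison against Sandy Bridge's 995-million-transistor total hinges on.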

While it cannot be confirmed, it looks like AMD Orochi/Zambezi has several hundred million transistors that are the result of heavy reliance on automated design tools.

The Result? Profit Drop!
As a consequence of the inefficient design and relatively low performance, AMD has to sell its eight-core FX-series processors (315mm2 die size) for up to $245 in 1000-unit quantities. By contrast, Intel sells hand-crafted Core i-series "Sandy Bridge" quad-core chips (216mm2 die size) for up to $317 in 1000-unit quantities. Given that both microprocessors are made using 32nm process technology [and thus have comparable per-transistor/per-square-mm die cost], the Intel chip carries a much better profit margin than AMD's.
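The margin claim is easy to put in per-area terms with the numbers above (a rough sketch only: tray prices divided by die sizes, ignoring yields, packaging and binning):

```python
# Rough price-per-mm^2 from the 1000-unit tray prices and die sizes above.
amd_fx = {"price_usd": 245, "die_mm2": 315}
intel_snb = {"price_usd": 317, "die_mm2": 216}

for name, chip in (("AMD FX", amd_fx), ("Intel SNB", intel_snb)):
    print(name, round(chip["price_usd"] / chip["die_mm2"], 2))
# AMD FX 0.78
# Intel SNB 1.47
```

On the same 32nm node, that is roughly twice the revenue per square millimetre of silicon for Intel, which is the margin gap the article is pointing at.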

AMD did not comment on the news-story.
cut corners...
Posted on Reply
#22
eidairaman1
Do you have hyperlinks for all of this info?
Posted on Reply
#25
anolesoul
What a mistake AMD is making! Intel is taking the lead in supporting DDR4, with motherboards that support DDR4 alongside their MOST expensive (top-end) CPU (to support that new chipset)... a "5-figure amount" (has been rumored, and probably right) for their eight-core!

NOW... what "choice" does "anyone" have - "if" they want to get DDR4 memory?!?!?!
Posted on Reply