Thursday, February 27th 2020

AMD Gives Itself Massive Cost-cutting Headroom with the Chiplet Design

At its 2020 IEEE ISSCC keynote, AMD presented two slides that detail the extent of cost savings yielded by its bold decision to embrace the MCM (multi-chip module) approach not just for its enterprise and HEDT processors, but also for its mainstream desktop ones. By confining to cutting-edge 7 nm silicon only the components that tangibly benefit from it, namely the CPU cores, while letting the other components sit on relatively inexpensive 12 nm, AMD maximizes its 7 nm foundry allocation, using it to produce small 8-core CCDs (CPU complex dies) that add up to AMD's target core counts. With this approach, AMD is able to cram up to 16 cores onto its AM4 desktop socket using two chiplets, and up to 64 cores using eight chiplets on its SP3r3 and sTRX4 sockets.

In the slides below, AMD compares the cost of its current 7 nm + 12 nm MCM approach to a hypothetical monolithic die it would have had to build on 7 nm (including the I/O components). The slides suggest that the cost of a single-chiplet "Matisse" MCM (e.g. Ryzen 7 3700X) is about 40% less than that of the double-chiplet "Matisse" (e.g. Ryzen 9 3950X). Had AMD opted to build a monolithic 7 nm die with 8 cores and all the I/O components of the I/O die, such a die would cost roughly 50% more than the current 1x CCD + IOD solution; a monolithic 7 nm die with 16 cores and I/O components would cost 125% more. AMD hence enjoys massive headroom for cost-cutting. The price of the flagship 3950X could be close to halved (from its current $749 MSRP), and AMD can turn up the heat on Intel's upcoming Core i9-10900K by significantly lowering the price of its 12-core 3900X from its current $499 MSRP. The company also enjoys more price-cutting headroom for its 6-core Ryzen 5 SKUs than it did with previous-generation Ryzen 5 parts based on monolithic dies.
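Taken together, those percentages pin down the relative cost of every configuration. A quick sketch, normalized to the 1x CCD + IOD package; the reading that each monolithic die is compared against its own MCM counterpart is an assumption, not something the slides state outright:

```python
# Relative manufacturing costs implied by the percentages in AMD's
# ISSCC 2020 slides, normalized to the single-chiplet MCM package.
# Assumed reading: each monolithic die is compared to its own MCM
# counterpart (8-core vs 1x CCD + IOD, 16-core vs 2x CCD + IOD).

cost_1ccd_mcm = 1.00                  # 1x CCD + IOD (e.g. Ryzen 7 3700X)
cost_2ccd_mcm = cost_1ccd_mcm / 0.60  # 1x CCD MCM is ~40% cheaper than 2x CCD MCM
cost_8c_mono  = cost_1ccd_mcm * 1.50  # monolithic 7 nm 8-core: ~50% more
cost_16c_mono = cost_2ccd_mcm * 2.25  # monolithic 7 nm 16-core: ~125% more

print(f"2x CCD MCM:         {cost_2ccd_mcm:.2f}x")
print(f"8-core monolithic:  {cost_8c_mono:.2f}x")
print(f"16-core monolithic: {cost_16c_mono:.2f}x")
```

On this reading, a hypothetical monolithic 16-core part would cost well over twice what the actual two-chiplet package does, which is the headroom the article describes.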
Source: Guru3D

89 Comments on AMD Gives Itself Massive Cost-cutting Headroom with the Chiplet Design

#51
ARF
bug
You don't have to tell me that, I used to rock a 4200+ ;)

If you're not talking mobile, I'm pretty sure a C2D@1.5GHz will beat an X2 @~2GHz. Core was about IPC, first and foremost.

Well, yeah, an ancient CPU will not give a first class experience. But throw in a script blocker (e.g. NoScript) so not everyone and their grandma will run scripts in your browser and the web becomes bearable again ;)
NoScript hides JavaScript-driven content, like the live thumbnails on this website.
It might offload the CPU, but you lose precious content to view.
#52
TheGuruStud
ARF
NoScript hides JavaScript-driven content, like the live thumbnails on this website.
It might offload the CPU, but you lose precious content to view.
That's a pro. I don't need to see anything extra. I only allow JS if it's necessary for the site to function (which the pricks do a lot).
#53
bug
ARF
NoScript hides JavaScript-driven content, like the live thumbnails on this website.
It might offload the CPU, but you lose precious content to view.
Not really. You can (and should) enable JS from the site you visit (it whitelists the visited domain by default, but sometimes that's not enough), while still blocking 3rd-party JS. Depending on the website, 3rd-party JS can be a crapload, to use the technical term.
#54
Mats
TheGuruStud
I did. It was stupid expensive. I waited for Phenom II and OCed it to 4 GHz. I used that CPU for several years and it ran every game flawlessly (it didn't do too badly in multimedia either).
We were talking about performance.

Besides, you're comparing Intel's 2006 pricing with AMD's 2009 pricing. Core 2 Quad was not expensive in 2009.
#55
TheGuruStud
Mats
We were talking about performance.

Besides, you're comparing Intel's 2006 pricing with AMD's 2009 pricing. Core 2 Quad was not expensive in 2009.
And? What's your point? C2Q were old news by 2009 and the same thing happened all over again with Nehalem. MB/CPU for Nehalem was astronomical. The choice was easy.
#56
bug
TheGuruStud
And? What's your point? C2Q were old news by 2009 and the same thing happened all over again with Nehalem. MB/CPU for Nehalem was astronomical. The choice was easy.
It was also wrong. Phenom II was never able to match C2Q. C2Q, in turn, overclocked really well.
But if that's what you needed, kudos to you.
#57
ARF
The chiplets on N7 are so cheap. It's around $17-18 for a chiplet.

[MEDIA=twitter]1146806732494168064[/MEDIA]
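That per-chiplet figure is easy to sanity-check with a dies-per-wafer estimate. Every input below is an outside assumption, not an AMD or TSMC number: a roughly 74 mm² Zen 2 CCD, a $12,000 7 nm wafer, and guessed utilization and yield factors:

```python
import math

# Back-of-the-envelope cost per chiplet. All inputs are assumptions:
wafer_cost  = 12_000.0  # assumed 7 nm wafer price in USD
wafer_diam  = 300.0     # mm, standard wafer diameter
die_area    = 74.0      # mm^2, approximate Zen 2 CCD size
utilization = 0.85      # assumed loss to edge dies and scribe lines
yield_rate  = 0.85      # assumed fraction of good dies

wafer_area   = math.pi * (wafer_diam / 2) ** 2      # ~70,686 mm^2
good_dies    = wafer_area * utilization / die_area * yield_rate
cost_per_die = wafer_cost / good_dies
print(f"~{good_dies:.0f} good dies, ~${cost_per_die:.2f} per chiplet")
```

With these assumed inputs the estimate lands in the same $17-18 ballpark as the tweet; nudging the wafer price or yield moves it by a few dollars either way.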

#58
Mats
TheGuruStud
And? What's your point? C2Q were old news by 2009 and the same thing happened all over again with Nehalem. MB/CPU for Nehalem was astronomical. The choice was easy.
I've already explained that, we started talking about AMD mocking Intel for using MCM, and I just pointed out that the MCM design was the least of the problem with PD. Then you started talking about something else.
I think we're done, and offtopic.
#59
notb
ARF
Not exactly. Intel sells for $20,000, AMD sells for $6,500.
No one prohibits AMD from selling at $10,000 instead of $6,500. On paper that would still be much better value, right?

Both companies ask as much as they can, which means AMD's product is worth $6,500. End of story.
Intel's 14 nm process is already more than five years old, so the manufacturing cost should be very low, especially with economies of scale.
Well, obviously. If Intel manages to spend less on developing new nodes and new architectures, they make more money.
Since AMD decided to build their lineup on a more advanced and modern process, they have to pay more.

That's the whole point, isn't it?
#60
ARF
notb
No one prohibits AMD from selling at $10,000 instead of $6,500. On paper that would still be much better value, right?

Both companies ask as much as they can, which means AMD's product is worth $6,500. End of story.
No, AMD has a 2-3% server market share and needs this pricing in order to get some attention from the big partners.

Intel is desperately milking everyone, and it's against the interests of society as a whole.
#61
notb
ARF
The chiplets on N7 are so cheap. It's around $17-18 for a chiplet.
LOL.
First of all: that's just the cost of the silicon wafer.

But more importantly - in case you've somehow missed it: AMD outsources production to TSMC. It's a separate company.

ARF
No, AMD has a 2-3% server market share and needs this pricing in order to get some attention from the big partners.
OMG you're like a semi-intelligent bot. :)

If we assume an OEM is rational and buys the cheaper product*, they would still buy a $10k EPYC over a $20k Xeon - if EPYC was worth $10k.
If AMD sells them for $6.5k, it means that's the actual value (on average).
Of course all of that is true when we assume both Intel and AMD actually ask the suggested price. There's a good chance OEMs pay Intel a lot less. Nevertheless, gross margin remains higher.
Intel is desperately milking everyone, and it's against the interests of society as a whole.
Yes, all companies should give products away for free. The society would benefit for sure. :)

*) and they do, because doing otherwise could be classified as misconduct - and most large OEMs are public companies.
#62
TheGuruStud
bug
It was also wrong. Phenom II was never able to match C2Q. C2Q, in turn, overclocked really well.
But if that's what you needed, kudos to you.
Buying Intel is always wrong unless it's a Celeron 300A.

Oh noes, C2Q had slightly higher IPC. Whatever shall I do? Meanwhile, you had to overclock over the FSB with a really good MB to get nice clocks on the C2Qs (or buy the expensive models with higher stock clocks).

Nope, I made the right choice.
#63
ARF
TheGuruStud
Buying Intel is always wrong unless it's a Celeron 300A.

Oh noes, C2Q had slightly higher IPC. Whatever shall I do? Meanwhile, you had to overclock over the FSB with a really good MB to get nice clocks on the C2Qs (or buy the expensive models with higher stock clocks).

Nope, I made the right choice.
You could upgrade a Phenom system to a six-core Phenom II X6.
You can't upgrade a Core 2 Quad system to anything.
#64
thesmokingman
r.h.p
I'm no brainiac, so can someone explain what is wrong with chiplets … or is monolithic a better option if it is more expensive :rolleyes:
The better option? That's not really the correct question. It really doesn't matter to the user whether their CPU is chiplet-based or monolithic. It matters most of all to the producer, as yields become the most important factor in cost and scaling. Intel's yields on large monolithic dies are abysmal, and their 10 nm is even worse. We are talking the low-30% range, whereas AMD is rumored to be in the high-90% range for yields with their chiplets. There are other factors, obviously, but the biggest effect of poor yields is high cost.
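That chiplet-vs-monolithic yield gap falls out of standard defect-density models. A minimal sketch using the simple Poisson yield model, where the defect density is an illustrative assumption rather than a published TSMC or Intel figure:

```python
import math

# Poisson yield model: Y = exp(-A * D0), where A is die area (mm^2)
# and D0 is the defect density (defects per mm^2). D0 below is an
# assumed illustrative value, not a published foundry number.
D0        = 0.0014  # defects/mm^2 (= 0.14 per cm^2)
ccd_area  = 74.0    # mm^2, approximate Zen 2 chiplet
mono_area = 700.0   # mm^2, a hypothetical large monolithic server die

yield_ccd  = math.exp(-ccd_area * D0)
yield_mono = math.exp(-mono_area * D0)
print(f"74 mm^2 chiplet yield:   {yield_ccd:.0%}")   # ~90%
print(f"700 mm^2 monolithic die: {yield_mono:.0%}")  # ~38%
```

The same defect density that leaves a small chiplet at roughly 90% yield drops a ten-times-larger monolithic die below 40%, which is why small dies dominate the cost equation.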
#65
ARF
thesmokingman
The better option? That's not really the correct question. It really doesn't matter to the user whether their CPU is chiplet-based or monolithic. It matters most of all to the producer, as yields become the most important factor in cost and scaling. Intel's yields on large monolithic dies are abysmal, and their 10 nm is even worse. We are talking the low-30% range, whereas AMD is rumored to be in the high-90% range for yields with their chiplets. There are other factors, obviously, but the biggest effect of poor yields is high cost.
Well, Intel's max is 2x28-core (56-core), while AMD's max is 8x8-core (64-core).

With chiplets, the user does get higher performance, lower power consumption and lower cost.
#66
thesmokingman
ARF
Well, Intel's max is 2x28-core (56-core), while AMD's max is 8x8-core (64-core).

With chiplets, the user does get higher performance, lower power consumption and lower cost.
Not quite. It's not just because of the chiplets. Those only make it easier to package and scale. Using chiplets doesn't automatically mean higher performance or lower power consumption, and saying that really dismisses the process lead and their efficient design!
#67
ARF
thesmokingman
Not quite. It's not just because of the chiplets. Those only make it easier to package and scale. Using chiplets doesn't automatically mean higher performance or lower power consumption, and saying that really dismisses the process lead and their efficient design!
Chiplets mean dies small enough to give you much earlier access to a new process. You get your intended dies at higher yield, and you always stay ahead of the curve.
#68
thesmokingman
ARF
Chiplets mean dies small enough to give you much earlier access to a new process. You get your intended dies at higher yield, and you always stay ahead of the curve.
What?
#69
gamefoo21
This is why I am personally excited for the desktop Zen 2 APUs. Then we also have to remember the Zen 2 APUs will be much larger because of the added die space required for the iGPU. I really hope AMD doesn't nerf them.

Ryzen 3000 series Zen 2 always felt like a compromise to me.

It does kind of suggest though that we'll see Zen 3 sporting a 12nm I/O die.
#70
ARF
thesmokingman
What?
This:
Intel's yields on large monolithic dies are abysmal, and their 10 nm is even worse. We are talking the low-30% range, whereas AMD is rumored to be in the high-90% range for yields with their chiplets.
#72
ARF
thesmokingman
Why the hell do you keep quoting me then?
Because you began to argue with "not quite", which of course is just "quite". ;)
#73
thesmokingman
ARF
Because you began to argue with "not quite", which of course is just "quite". ;)
You are implying all the perf gains AMD made are from chiplets. That's pure conjecture. Chiplets actually reduce performance, especially with regard to cache and memory, and this is where AMD's design has been steadily improving. Look at the single-CCD chips, 3600 to 3800X: they all have half the write speed of the dual-CCD chips. Duh! From Zen 1 to Zen 4, it has been about improving the way the chiplets access the I/O and memory.
#74
gamefoo21
thesmokingman
The better option? That's not really the correct question. It really doesn't matter to the user whether their CPU is chiplet-based or monolithic. It matters most of all to the producer, as yields become the most important factor in cost and scaling. Intel's yields on large monolithic dies are abysmal, and their 10 nm is even worse. We are talking the low-30% range, whereas AMD is rumored to be in the high-90% range for yields with their chiplets. There are other factors, obviously, but the biggest effect of poor yields is high cost.
Well, Intel seems to be struggling with the transition to cobalt interconnects. Intel generally welds the dies to the package, while AMD solders the dies on, which is what GPUs tended to use.

ARF
Well, Intel's max is 2x28-core (56-core), while AMD's max is 8x8-core (64-core).

With chiplets, the user does get higher performance, better power consumption and lower cost.
If you put them on similar node sizes, most of those benefits evaporate. The chiplet approach does take a big hit in latency; the trade-off is that scaling and cost are improved. You can use more silicon because faults don't scrap entire units.

Though the large caches needed to offset those latency penalties do present their own issues.
#75
Mats
gamefoo21
This is why I am personally excited for the desktop Zen 2 APUs. Then we also have to remember the Zen 2 APUs will be much larger because of the added die space required for the iGPU. I really hope AMD doesn't nerf them.
We already know this is wrong: the 3700X has a larger total die size than its 8C APU counterpart.
gamefoo21
Ryzen 3000 series Zen 2 always felt like a compromise to me.
The APUs have less cache at least, so they are in turn a compromise compared to the CPUs.