
Chiplets

I seem to recall a suggestion that chiplets were the way to make cheap CPU and GPU chips, and this might be true; but Apple's new M1 shows it is not the way to performance, as they even have the RAM on the CPU and the performance is stunning.
 
I mean the M1 is still a very special SoC, designed from the ground up to run one OS only (macOS); in my opinion it's not comparable to other CPUs...
The other CPUs have a totally different "mission" and are designed totally differently.
Apple's new M1 shows it is not the way to performance
I mean this is not 100% accurate; what does "performance" mean here? I know the synthetic scores look outstanding and great, but in real-world applications I don't think it manages to perform well versus other CPUs in the same price bracket; it actually sits far behind in performance...
It's a low-TDP chip for casual users and it's doing a great job at that, but no serious workload can be expected to run on this chip, and I don't expect it to, no matter what Apple says.
 
I mean, you're right. Monolithic performs better, mostly due to lower on-chip latencies. But it's a bang-for-the-buck equation. You can make a very fast chiplet chip much cheaper. A monolithic design's returns diminish versus the cost of making it that way, which is why, short of small chips, no one is looking at them for the future anymore.
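To put rough numbers on that bang-for-the-buck point, here's a quick back-of-the-envelope sketch in Python using the standard negative-binomial yield model. The wafer cost, defect density and clustering parameter are made-up illustrative figures, not anything quoted in this thread:

```python
# Rough cost-per-good-die sketch (illustrative numbers only, not from this thread).
# Classic negative-binomial yield model: yield = (1 + A*D0/alpha)^-alpha.
import math

WAFER_DIAMETER_MM = 300   # standard 300 mm wafer
WAFER_COST = 10_000       # assumed wafer cost in dollars (made-up figure)
D0 = 0.001                # assumed defect density, defects per mm^2
ALPHA = 3                 # assumed defect-clustering parameter

def dies_per_wafer(area_mm2: float) -> float:
    """Gross dies per wafer, with a simple correction for edge losses."""
    radius = WAFER_DIAMETER_MM / 2
    return (math.pi * radius**2 / area_mm2
            - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * area_mm2))

def yield_rate(area_mm2: float) -> float:
    """Fraction of dies that have no killer defect."""
    return (1 + area_mm2 * D0 / ALPHA) ** -ALPHA

for area in (100, 200, 400, 600, 800):
    good_dies = dies_per_wafer(area) * yield_rate(area)
    print(f"{area:>3} mm^2 die: yield {yield_rate(area):5.1%}, "
          f"~${WAFER_COST / good_dies:,.0f} per good die")
```

With those made-up figures the 800 mm^2 die ends up costing roughly 18x as much per good die as the 100 mm^2 one, despite being only 8x the area, which is the diminishing return in question.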
 
I seem to recall a suggestion that chiplets were the way to make cheap CPU and GPU chips, and this might be true; but Apple's new M1 shows it is not the way to performance, as they even have the RAM on the CPU and the performance is stunning.

I mean the M1 is OK. Keep in mind it still can't really game all that well. My ancient GTX 1070 laptop, for example, is about 100-120% faster in FPS in all games at 1080p across the board... The MacBook and M1 are still great for everyone else, but if I were a gamer I would still want something a little better.
 
I seem to recall a suggestion that chiplets were the way to make cheap CPU and GPU chips, and this might be true; but Apple's new M1 shows it is not the way to performance, as they even have the RAM on the CPU and the performance is stunning.

How did you come to this bizarre conclusion when the M1's lesson is about having an extremely wide core and very specific hardware acceleration? I thought you were all about scientific analyses?

If I use Snapdragon 888 and Exynos 2100 as the benchmark for "monolithic chips", then does that signify that Alder Lake is DOA and the future is all chiplets? :confused:

AMD's (and soon to be Intel's, with Sapphire Rapids) selling point is core count. On today's processes, the way to have a financially feasible x86 product with that many cores is to go with some form of chiplets/tiles. Not to mention the limitations of the ring bus. For smaller products like Alder Lake and Cezanne, monolithic will do just fine.
 
Chip design is a cost/die size/performance balance, but chiplets open the way to bypass the die-size problem with an interconnect, which in turn opens up ways to get better yields. This enables higher performance too.

Economy plays a major role here besides performance.
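On the yield part specifically, here's a back-of-the-envelope sketch comparing one big die against the same logic split into chiplets. The die sizes, defect density and clustering parameter are assumptions picked for illustration, and the interconnect's extra area and packaging cost are ignored:

```python
# Back-of-the-envelope yield comparison: one big monolithic die vs the same logic
# split across chiplets. All figures are assumptions for illustration only.
D0 = 0.001      # assumed defects per mm^2
ALPHA = 3       # assumed defect-clustering parameter (negative-binomial model)

def yield_rate(area_mm2: float) -> float:
    """Fraction of dies with no killer defect."""
    return (1 + area_mm2 * D0 / ALPHA) ** -ALPHA

mono_area = 600        # one 600 mm^2 monolithic die...
chiplet_area = 150     # ...or four 150 mm^2 chiplets doing the same job

print(f"600 mm^2 monolithic die yield: {yield_rate(mono_area):.1%}")
print(f"150 mm^2 chiplet yield:        {yield_rate(chiplet_area):.1%}")

# Good chiplets can always be grouped into sets of four, so the usable-silicon
# fraction tracks the per-chiplet yield rather than the big-die yield.
print(f"Usable silicon with chiplets:  {yield_rate(chiplet_area):.1%}")
```

Under those assumptions barely over half of the big dies come out fully good, while roughly 86% of the small ones do, and good chiplets can always be matched into sets; whether that yield headroom translates into higher performance is a separate question.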
 
I seem to recall a suggestion that chiplets were the way to make cheap CPU and GPU chips, and this might be true; but Apple's new M1 shows it is not the way to performance, as they even have the RAM on the CPU and the performance is stunning.
That "RAM on the CPU" looks like CPU and RAM on a common substrate, so, again - chiplets.
 
I mean the M1 is still a very special SoC, designed from the ground up to run one OS only (macOS).
In some ways macOS is UNIX.

But it's a bang-for-the-buck equation. You can make a very fast chiplet chip much cheaper. A monolithic design's returns diminish versus the cost of making it that way, which is why, short of small chips, no one is looking at them for the future anymore.
I'm all for bang for the buck, but power consumption is part of that equation.

That "RAM on the CPU" looks like CPU and RAM on a common substrate, so, again - chiplets.
By chiplet I meant not on the same substrate

Chip design is a cost/die size/performance balance, but chiplets open the way to bypass the die-size problem with an interconnect, which in turn opens up ways to get better yields. This enables higher performance too.
I agree with the first (better yields), but not the second (enables higher performance)
 
I seem to recall a suggestion that chiplets were the way to make cheap CPU and GPU chips, and this might be true; but Apple's new M1 shows it is not the way to performance, as they even have the RAM on the CPU and the performance is stunning.
The M1 isn't a general-purpose CPU; it's one single design for one single product. You can't make lower-power variants or higher-performance variants... there is just one.

Apple won't go using that one chip for 720p, 1080p, 1440p and 4K variants with different needs; they use it for one OS on one product, with crippled software support so that they only allow well-coded programs that perform well on that chip.
 
The M1 isn't a general-purpose CPU; it's one single design for one single product. You can't make lower-power variants or higher-performance variants... there is just one.

Didn't realize that; thought that one could (in principle) compile Windows for the M1; thought they were already working on the M2

I'm here to learn; this is all new to me.
 
I mean it's actually a true UNIX. It even pays the trademark fee to be called that.
Didn't know that; it's all about learning.
 
Didn't realize that; thought that one could (in principle) compile Windows for the M1; thought they were already working on the M2

I'm here to learn; this is all new to me.
They obviously are working on the M2 and likely more; there's a diverse range of use cases, and some need real CPU and GPU grunt, like video production, so they'll need higher-core-count variants as well.
And they'll need to make proper GPUs.
 
The M1 isn't a general-purpose CPU; it's one single design for one single product. You can't make lower-power variants or higher-performance variants... there is just one.

Apple won't go using that one chip for 720p, 1080p, 1440p and 4K variants with different needs; they use it for one OS on one product, with crippled software support so that they only allow well-coded programs that perform well on that chip.
Huh?

It's in the Mac Mini, MacBook Air, MacBook Pro, iMac, and iPad Pro.
 
Huh?

It's in the Mac Mini, MacBook Air, MacBook Pro, iMac, and iPad Pro.
Yeah, the same chip is; not variants with more or fewer cores, or 35 W to 250 W TDPs like desktop chips.

Same chip, same software/apps (with slightly differently skinned variants of the same OS)
Didn't realize that; thought that one could (in principle) compile Windows for the M1; thought they were already working on the M2

I'm here to learn; this is all new to me.


That's just it: they COULD make Windows work on it, but they can't make IT work on Windows.

Apple makes one bit of hardware, and then optimises software for it... and software that runs badly isn't allowed on the device.
 
Didn't realize that; thought that one could (in principle) compile Windows for the M1; thought they were already working on the M2

I'm here to learn; this is all new to me.
My belief is that Apple has multiple versions of each processor in various stages of readiness and that only one (or two) are released. It's not like they taped out the M1 and said to themselves "Gee, what shall we work on next?"

For sure Apple had generations of prototype A__ and M__ silicon running in their labs for years before they shipped in actual products. Today, there are likely various designs with different numbers of CPU cores, GPU cores, Neural Engine cores, with different L1 and L2 cache sizes, different clock speeds, different TDPs, etc. in their labs.

There's an M1X. There's an M2. There's an M10 or M100 (the names don't really matter). My guess is that Apple started serious work on the M-series silicon around the time they unveiled the A7 SoC (the first mainstream 64-bit Arm CPU). For sure Apple did not start working on the M-series silicon in 2019. This is probably close to a decade's worth of development.

Hell, I strongly believe that Apple has internal-only Arm SoCs with no GPU cores and a bunch of Neural Engine cores for their servers; these would never ship in a consumer product. Likewise Google, Amazon and now Facebook have custom chip designs for their cloud centers.
 
My belief is that Apple has multiple versions of each processor in various stages of readiness and that only one (or two) are released. It's not like they taped out the M1 and said to themselves "Gee, what shall we work on next?"

For sure Apple had generations of prototype A__ and M__ silicon running in their labs for years before they shipped in actual products. Today, there are likely various designs with different numbers of CPU cores, GPU cores, Neural Engine cores, with different L1 and L2 cache sizes, different clock speeds, different TDPs, etc. in their labs.

There's an M1X. There's an M2. There's an M10 or M100 (the names don't really matter). My guess is that Apple started serious work on the M-series silicon around the time they unveiled the A7 SoC (the first mainstream 64-bit Arm CPU). For sure Apple did not start working on the M-series silicon in 2019. This is probably close to a decade's worth of development.

Hell, I strongly believe that Apple has internal-only Arm SoCs with no GPU cores and a bunch of Neural Engine cores for their servers; these would never ship in a consumer product. Likewise Google, Amazon and now Facebook have custom chip designs for their cloud centers.
They're mostly using off-the-shelf Arm IP, just differently from how others do, though.
 
M1 shows it is not the way to performance as they even have the RAM on the CPU and the performance is stunning.
In fact, having RAM on the CPU is a very desktop thing to do; old PINM CPUs used to have it. It is actually the most mainframe thing you can do, a huge improvement to single-thread performance.
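For a toy illustration of why a shorter trip to DRAM helps single-thread performance, here's the classic CPI-plus-memory-stalls model. Every number below (base CPI, reference rate, miss rate, latencies) is an invented round figure, not a measurement of the M1 or anything else:

```python
# Toy model of single-thread performance vs memory latency. All figures are
# made-up round numbers just to show the shape of the effect, not M1 data.
BASE_CPI = 0.5            # assumed cycles per instruction if memory were free
MEM_REFS_PER_INSTR = 0.3  # assumed loads/stores per instruction
LLC_MISS_RATE = 0.02      # assumed fraction of memory refs that miss to DRAM
CLOCK_GHZ = 3.2           # assumed core clock

def effective_cpi(dram_latency_ns: float) -> float:
    """CPI once DRAM stalls are added on top of the base CPI."""
    miss_penalty_cycles = dram_latency_ns * CLOCK_GHZ
    return BASE_CPI + MEM_REFS_PER_INSTR * LLC_MISS_RATE * miss_penalty_cycles

for label, latency_ns in (("off-package DIMMs", 90), ("on-package DRAM", 60)):
    print(f"{label:>17}: ~{latency_ns} ns to DRAM -> effective CPI "
          f"{effective_cpi(latency_ns):.2f}")
```

Shaving 30 ns off the DRAM round trip cuts the effective CPI from about 2.2 to about 1.7 in this toy model, which is the flavor of the single-thread gain being described.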
 
It's actually pretty clear how Apple has targeted the M-series SoC development.

The M1's primary usage case is the MacBook Air. About 85% of Mac sales are notebook computers, and the entry-level MacBook Air is the best-selling model, even though Apple no longer breaks out individual model sales figures. Apple knows exactly what the TDP design limit for the existing MacBook Air design is: the M1 needed to come within that TDP limit.

The next obvious usage case would be the MacBook Pro which has a higher TDP than the Air. The MBP is the second best selling product family in the Mac portfolio. Whatever Apple calls their next M-series SoC (M1X, M2, M10, another designation), it will be primarily intended for the MacBook Pro.

The last obvious usage case would be a desktop SoC that would straddle the TDPs between a high-end iMac and the Mac Pro family. How Apple chooses to implement this (additional cores versus multiple SoCs) is unknown. Will Apple go for an all-in-one SoC? Multiple SoCs of the same specifications? Or split up the CPU, GPU and ML cores into different silicon? It is highly likely that Apple has been testing a variety of iterations in their labs for years.

Apple said at WWDC in June 2020 that Apple Silicon would be a two-year transition.
 
I agree with the first (better yields), but not the second (enables higher performance)

But it does. Did you not witness how AMD could be competitive with much higher core counts than Intel's quads, without additional yield risk? AMD was really making quad-cores too. Easy stuff. Link 'em together and poof, octa... and then some. They can re-utilize anything across the entire stack, from consumer junk to enterprise perfection. On top of that, when it comes to performance between chips, most of the frequency limitations in the stack are artificial or self-controlled, and the variability is very low.

The whole idea of Moore's Law is not just about transistors per square mm; it's about bringing those advances to people without going broke. Technology only works when it's, you know, actually working for us. Otherwise it's just a prestige project.
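To put the "re-utilize anything across the entire stack" point in rough numbers: a chiplet with a dead core or two can still ship as a cut-down SKU, so very little silicon gets thrown away. A sketch with an invented per-core defect probability and an assumed 8-core chiplet:

```python
# Sketch of how harvesting/binning boosts usable yield (invented figures).
# Assume an 8-core chiplet where each core independently has some chance of a
# killer defect, and a die with up to 2 bad cores can still ship as a 6-core SKU.
from math import comb

CORES = 8
P_CORE_BAD = 0.05          # assumed probability that any given core is defective
MAX_BAD_FOR_SALVAGE = 2    # dies with up to this many bad cores are salvageable

def prob_exactly_bad(k: int) -> float:
    """Binomial probability of exactly k defective cores on one die."""
    return comb(CORES, k) * P_CORE_BAD**k * (1 - P_CORE_BAD)**(CORES - k)

full_yield = prob_exactly_bad(0)
salvage_yield = sum(prob_exactly_bad(k) for k in range(MAX_BAD_FOR_SALVAGE + 1))

print(f"Dies good enough for the flagship 8-core SKU:   {full_yield:.1%}")
print(f"Dies sellable once a 6-core salvage SKU exists: {salvage_yield:.1%}")
```

With these made-up odds only about two thirds of dies qualify for the flagship part, but over 99% of them are sellable once a salvage SKU exists, which is a big part of why the chiplet approach pencils out economically.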
 
I seem to recall a suggestion that chiplets were the way to make cheap CPU and GPU chips, and this might be true; but Apple's new M1 shows it is not the way to performance, as they even have the RAM on the CPU and the performance is stunning.

Erm Vega...

The more things a CPU does, the hotter it gets.

Yeah, the same chip is; not variants with more or fewer cores, or 35 W to 250 W TDPs like desktop chips.

Same chip, same software/apps (with slightly differently skinned variants of the same OS)



That's just it: they COULD make Windows work on it, but they can't make IT work on Windows.

Apple makes one bit of hardware, and then optimises software for it... and software that runs badly isn't allowed on the device.

Yup, I don't like sandboxing in that way.
 