
Riding on the Success of the M1, Apple Readies 32-core Chip for High-end Macs

btarunr

Apple's M1 SoC is possibly the year's biggest semiconductor success story: the chip has helped Apple begin its transition away from Intel's x86 machine architecture and build its own silicon optimized for its software and devices, much like the A-series SoCs powering iOS devices. The company now plans to scale up this silicon with a new 32-core version designed for high-performance Macs, such as the fastest MacBook Pro models, and possibly even iMac Pros and Mac Pros. The new silicon could debut in a next-generation Mac Pro in 2022. Bloomberg reports that it will allow this workstation to be half the size of the current-gen Mac Pro while letting Apple keep its generational performance-growth trajectory.

In addition, Apple is reportedly developing a 16 "big" + 4 "small" core version of the M1, which could power more middle-of-the-market Macs, such as the iMac desktop and the bulk of the MacBook Pro lineup. The 16B+4s chip could debut as early as spring 2021. Elsewhere, the company is reportedly stepping up efforts to develop its own high-end professional-visualization GPU for its iMac Pro and Mac Pro workstations, replacing the AMD Radeon Pro solutions found in the current generation. This graphics architecture will be built from the ground up for the Metal 3D graphics API and double as a parallel compute accelerator. Perhaps the 2022 debut of the Arm-powered Mac Pro will feature this GPU.



 
They're likely gonna need a node beyond 5 nm. The big core cluster on their current chip is huge; 32 cores would mean a ridiculously large chip, not to mention that they would also probably need to increase the size of that system cache. Of course, some of those cores could be like the ones inside the small core cluster, so it would be "32 core" just in name, really.
 

As long as they can cool it efficiently (incoming pun :nutkick: ), the size doesn't matter.
 
Hi,
Apple saying high-end Mac, well, looking at $20k US here or what :-)
 
I am excited to see what performance they get.
 
As long as they can cool it efficiently (incoming pun :nutkick: ), the size doesn't matter.
It does; otherwise those dies become more expensive, because production will have much lower yields.

Splitting up into multiple dies makes production easier in many ways, like Epyc/Threadripper/Ryzen.
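To make the yield point concrete, here's a rough sketch using a simple Poisson defect model (yield ≈ e^(−defect density × die area)). The defect density and die areas are illustrative assumptions, not real TSMC 5 nm figures:

```swift
import Foundation

// Toy Poisson defect-yield model: yield ≈ exp(-defectDensity * dieArea).
// All numbers below are assumptions for illustration only.
let defectDensity = 0.1                     // defects per cm² (assumed)

func estimatedYield(dieAreaCm2: Double) -> Double {
    exp(-defectDensity * dieAreaCm2)
}

let monolithicArea = 4.8                    // hypothetical monolithic 32-core die, cm²
let chipletArea    = 1.2                    // hypothetical 8-core chiplet, cm²

// A defect in a big monolithic die scraps the whole chip; with chiplets,
// only the one small die containing the defect is discarded.
let bigDieYield  = estimatedYield(dieAreaCm2: monolithicArea)   // ≈ 0.62
let chipletYield = estimatedYield(dieAreaCm2: chipletArea)      // ≈ 0.89

print(String(format: "Monolithic die yield: %.0f%%", bigDieYield * 100))
print(String(format: "Per-chiplet yield:    %.0f%%", chipletYield * 100))
```

Under these assumed numbers, most small chiplets survive while a large share of monolithic dies would be thrown away, which is the economic argument for the Epyc/Threadripper/Ryzen approach.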
 
They're likely gonna need a node beyond 5 nm. The big core cluster on their current chip is huge; 32 cores would mean a ridiculously large chip...

Yeah. For the record: the M1 is 16 billion transistors for 4 big cores + 4 little cores + iGPU + Neural Engine + SoC. Renoir (Zen 2) is 8 cores + iGPU + SoC for 10 billion transistors.

big.LITTLE doesn't seem to do much for high-end compute: you pretty much always want more big cores if you have a difficult task (like CPU-based ray tracing) running. The big.LITTLE style does have advantages for long-running compute, however: maybe the LITTLE cores can feed a GPU in a GPU-heavy situation (GPU-based ray tracing?).
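For what it's worth, a minimal sketch of how that big/little split is exposed to software on Apple silicon, assuming Grand Central Dispatch: QoS classes hint whether work should land on the performance or efficiency cores (the workload functions below are made up for illustration).

```swift
import Dispatch

// Hypothetical workloads, named for illustration only.
func renderFrame() { /* latency-sensitive work, e.g. CPU ray tracing */ }
func indexFiles()  { /* long-running background work */ }

// High-QoS work is eligible for the performance ("big") cores...
DispatchQueue.global(qos: .userInteractive).async {
    renderFrame()
}

// ...while background QoS is typically kept to the efficiency ("little") cores,
// which is where the LITTLE cluster earns its keep on long-running jobs.
DispatchQueue.global(qos: .background).async {
    indexFiles()
}

dispatchMain() // keep the process alive so the async work can run
```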

-------

Hmmmm... I think I'm pretty open about my fanboyism towards SIMD compute. The M1 is very disappointing from the SIMD perspective: just 128 bits wide. Even if they extend up to 256 bits, those M1 cores are utterly huge compared to Zen, so it seems unlikely that they can offer as much parallelism in a SIMD situation. But I'm willing to be proven wrong on this front.
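As a point of reference for the width complaint, a small sketch using Swift's portable SIMD types: on the M1, each NEON vector register holds four 32-bit floats (128 bits), versus eight per AVX2 register on current x86 cores.

```swift
// Swift's SIMD types map onto the hardware vector registers; SIMD4<Float>
// is one 128-bit NEON register's worth of single-precision floats.
let a = SIMD4<Float>(1, 2, 3, 4)
let b = SIMD4<Float>(5, 6, 7, 8)

let sum = a + b                       // one 128-bit vector add
let dot = (a * b).sum()               // elementwise multiply, then horizontal add

print(sum)                            // SIMD4<Float>(6.0, 8.0, 10.0, 12.0)
print(dot)                            // 70.0
```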
 
First an in-house CPU, now an in-house GPU. Apple is looking to ditch the entire backbone of the current PC industry. Feels like going back in time, TBH, to when Apple closed off everything. If the performance is there, it can totally justify Apple doing so. It's weird that consoles from MS and Sony now look like PCs while the Mac is turning into something console-like and closed off.

Maybe that is the push Intel / AMD / Nvidia need to come out with ever better products. Apple has the advantage of positive consumer perception (among its fans).
 
First an in-house CPU, now an in-house GPU. Apple is looking to ditch the entire backbone of the current PC industry...
Hi,
Unfortunately Apple products are stupid expensive.
 
Unfortunately Apple products are stupid expensive.

And they are confined to a specific market, which is why it won't matter much to Intel/AMD/Nvidia/ARM.
 
Bloomberg reports that it will allow this workstation to be half the size of the current-gen Mac Pro...
How clueless must you be not to realize the CPU does not eat up half the space in the chassis? Remove it altogether, remove the cooling that goes with it, and you're still not looking at 50% space savings.
 
AMD Ryzen 5000 = Evolutionary
Apple M1 = Revolutionary

The M1 has made it obvious that ARM will dominate the desktop within the next 10 years. The average desktop user doesn't need an x86 CPU. x86 users are about to become a niche market. Efficiency has clearly won out over flexibility.

In terms of efficiency, the M1 is currently the best CPU and GPU on the market. Apple's more powerful chips will also be more efficient than any competing products.

A lot of PC users are already in denial over the M1's superiority, and they'll stay that way for a long time because they're stupid.
 
AMD Ryzen 5000 = Evolutionary
Apple M1 = Revolutionary
...
You have an interesting way of saying that Apple will price their tech to the point of refusal for any sane PC user.
 
AMD Ryzen 5000 = Evolutionary
Apple M1 = Revolutionary
...
Hello my friend, how was your day? Nothing like name-calling on a Monday, am I right? ;)

On a side note, Apple is currently the only one doing high-performance mainstream ARM CPUs; Qualcomm doesn't have the money to follow up on Apple, and Windows on ARM is still a dodgy OS. You should really avoid making a comparison between a closed, vertically integrated system and an open system where every actor needs to do its part for it to "work".

Right now, Windows on ARM doesn't look good because nobody seems to have a solid plan for a smooth transition, on both the software and hardware sides. Qualcomm, AMD, or whoever else would need to invest hard in ARM research. Don't forget that Apple is the most profitable company in the world, with a gigantic R&D budget.
 
Hi,
Yeah, it's not like Apple is going to open up the OS, so performance-wise, as long as you can do what you want through the Apple store you're all set lol
 
AMD Ryzen 5000 = Evolutionary
Apple M1 = Revolutionary
...

Something being revolutionary or not can only be determined in hindsight. The M1 may end up as that, but I have doubts, and its impact on the market won't be known for some time. It's impressive from an efficiency-versus-performance standpoint and could be argued to be the "best" ARM-based CPU + GPU silicon (it requires some convoluted categorization to make claims beyond that).

On the other hand, I think the larger chips (like the hypothetical 32-core) could be revolutionary if they scale linearly and best the existing HEDT CPUs at real tasks. In that situation I could see Apple workstations (and maybe servers too!) and ARM becoming mainstream instead of inhabiting relative niches in computing.

Anyway, exciting possibilities, and I'm looking forward to seeing what Apple delivers in the future. In the meantime, let's all remember that Apple's marketing is just that while we wait.
 
Are there any good reviews of those recent M1 laptops? Or are they not available yet? I saw a few benchmarks, but haven't really been searching lately.

I'm not sure my next laptop will be a MacBook, but my MBP from late 2013 was the best decision at the time for my use. I got fed up with Windows; I lost a lot of data when it crashed. And yes, I had backups, but not all folders had been marked manually. But that's another story...

Interesting to see where all this goes.
 
You have an interesting way of saying that Apple will price their tech to the point of refusal for any sane PC user.

The M1 Macs have made PCs look like overpriced junk, IMHO. You can't buy a PC that's as efficient as an M1 Mac, because they don't exist yet.

I think M-based Macs are going to be more efficient and better value than PCs for a long time.
 
Holy moly, Techisfun is copy + pasting comments from his last tirade.

Anyway, I wonder what the end cost will be. Macs are usually way overpriced, but I am enticed to buy a Mac Mini with one of these new M1 processors.
 
This SoC is useless for most of the computing world until someone licenses it from Apple for use in a PC, MS writes a full version of Windows for it, *and* includes a good Rosetta-like emulator for x86/x64. Apple's advantage is that they have experience with hardware and code-based transitions, having done three of them now. MS has sort of done one, with Win98 to XP.

It's pretty clear that the M1 outcompetes any similar-tier Intel and likely Ryzen CPUs, as many reviews online note that most Intel apps run as fast, or notably faster, in bloody emulation on these M1s. That's down to both the SoC design and good programming in Rosetta. This could also be done on the PC side, as MS has the deep pockets for it too, but it'll take a long time because they lack the experience.
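As an aside, and assuming macOS on Apple silicon: Apple documents a sysctl key, `sysctl.proc_translated`, that lets a process check whether it is running natively or being translated by Rosetta 2. A small sketch:

```swift
import Darwin

// Returns true when the current process is being translated by Rosetta 2.
// sysctlbyname fails (returns -1) on Intel Macs and older systems that
// don't know the key, which we treat as "not translated".
func isRunningUnderRosetta() -> Bool {
    var translated: Int32 = 0
    var size = MemoryLayout<Int32>.size
    guard sysctlbyname("sysctl.proc_translated", &translated, &size, nil, 0) == 0 else {
        return false
    }
    return translated == 1
}

print(isRunningUnderRosetta() ? "Translated by Rosetta 2" : "Running natively")
```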

And let's be honest, they also need the motivation to do so. I doubt Apple will win many new customers even with notably superior hardware: how many people will switch from a low-priced laptop or small PC to a more expensive Mac, even before considering the sunk-cost fallacy?
 
AMD Ryzen 5000 = Evolutionary
Apple M1 = Revolutionary
...

The "Revolution" factor is not the cpu architecture itself or the CPU performance that is excellent in single thread but behind in multi thread. The real change there is it's the First ARM cpu that was design for Desktop/Laptop use and not a Phone SOC that they try to upscale to be used in a laptop. This show to the industry and the public that an Arm cpu can compete with x86.

The performance, Single thread mostly is impressive, but it's the first 5nm CPU on the market. We will see how it compare to 5nm AMD and 7 nm Intel CPU if they get out. Apple had to suffer Intel Fabs issue and now they got their way out of these and are able to use the top process in the world

Still the revolution is the image that arm can be top cpu (some other arm architecture already show that it can really be performant like the Fujistu A64FX). Even rumors now have AMD to resurect K12 to benefits from the trend.

There are many things that are impactful with the M1, but the most interesting one is they added in the core front end hardware decode of many x86 instruction to improve the speed they run x86 code in Rosetta 2. This can have a very huge impact on the future as we might see other manufacturer doing similar things and even become at some point able to natively run different ISA.

But this is something that is still unclear if Intel will let that go or if Apple got the license in their Deals with Intel. Other vendors tried to do similar things and they got lawsuit from Intel so time will tell. But AMD or what left of Via, could do that without too much trouble.
 