Wednesday, January 15th 2025

Apple's Custom "Hidra" SoC Reportedly Exclusive to Next-gen Mac Pro

Apple's top-end M4 Ultra desktop-class chipset will allegedly feature in upcoming Mac Pro and Mac Studio refreshes—new product unveilings could be on the company's schedule for WWDC 2025. Bloomberg's Mark Gurman has divulged intriguing M4-series information in his latest newsletter. The M4 Ultra SoC—codenamed "Hidra"—was previously believed to be the most powerful processor option for both next-gen Mac Pro and Mac Studio platforms. Now, Gurman believes that Apple engineers have created a distinct custom chipset—designed exclusively for the Mac Pro workstation product stack—that sits above the M4 Ultra SoC.

Somewhat confusingly, he suggests that "Hidra" is the codename for this top-of-the-line processor. Rumors swirled last month about the cancellation of an alleged "Extreme" model, so there is a degree of uncertainty surrounding unannounced M4 SKUs. Potential customers could choose Apple's (potentially) more powerful "Hidra"-equipped Mac Pro workstation over the highest-end M4 Ultra-based Mac Studio model. Industry experts propose that "Hidra" will arrive with an increased number of CPU and GPU cores—exceeding the M4 Ultra's speculated makeup of a 32-core CPU and an 80-core GPU.
Sources: Mark Gurman/Bloomberg, Wccftech

24 Comments on Apple's Custom "Hidra" SoC Reportedly Exclusive to Next-gen Mac Pro

#1
Daven
These Mac 'Pro' workstations have a max of 192 GB of RAM with no means to upgrade after purchase. They lack dual-socket options. You cannot add Tegra, Instinct, or client-based GPUs. The Mx SoC is not suitable for high-end workstations. They are basically a Mac Studio with PCIe slots for an additional $3,000 and no real professional features.
Posted on Reply
#2
ZoneDymo
Daven: These Mac 'Pro' workstations have a max of 192 GB of RAM with no means to upgrade after purchase. They lack dual-socket options. You cannot add Tegra, Instinct, or client-based GPUs. The Mx SoC is not suitable for high-end workstations. They are basically a Mac Studio with PCIe slots for an additional $3,000 and no real professional features.
Yeah, but it's the church of Apple, so it will sell.
Posted on Reply
#3
kondamin
Daven: These Mac 'Pro' workstations have a max of 192 GB of RAM with no means to upgrade after purchase. They lack dual-socket options. You cannot add Tegra, Instinct, or client-based GPUs. The Mx SoC is not suitable for high-end workstations. They are basically a Mac Studio with PCIe slots for an additional $3,000 and no real professional features.
That is their first iteration of the product. Maybe they found out there are people willing to pay over $100k for a workstation, making dual- and quad-socket solutions with terabytes of RAM a thing, given the AI market potential.
Posted on Reply
#4
Daven
ZoneDymo: Yeah, but it's the church of Apple, so it will sell.
To be fair, the Mac Pro in its current form was originally designed for 300 W Intel Xeon W processors with dozens of cores. It supported up to 1.5 TB of RAM and you could add two dual-GPU boards for four total GPUs. Then Apple changed to their own silicon, scaled up from a smartphone SoC, and surprise, surprise, the MX Ultra was a pathetic workstation CPU/GPU combo versus what you can buy from the x86 and data-center GPU world.
kondamin: That is their first iteration of the product. Maybe they found out there are people willing to pay over $100k for a workstation, making dual- and quad-socket solutions with terabytes of RAM a thing, given the AI market potential.
It's only the first iteration using Apple silicon. They already had support for Xeons and Radeon Pros. If they had stayed on that path, they could have had 128-core EPYCs and Instinct/Tegra inside that box by now.
Posted on Reply
#5
efikkan
Daven: To be fair, the Mac Pro in its current form was originally designed for 300 W Intel Xeon W processors with dozens of cores. It supported up to 1.5 TB of RAM and you could add two dual-GPU boards for four total GPUs. Then Apple changed to their own silicon, scaled up from a smartphone SoC, and surprise, surprise, the MX Ultra was a pathetic workstation CPU/GPU combo versus what you can buy from the x86 and data-center GPU world.
Almost $10K for a "laptop" in a "cheese grater" (or perhaps more of an oversized phone if we're honest), and that's before adding multiple SSDs, HDDs, etc. Oh wait, there are only 2 SATA ports. Shared memory, and no ECC, right? Not much of a workstation platform.

Just imagine what kind of Threadripper or Xeon W systems you can build for that price… (Or a modest one that still beats this one, with money to spare.)

I wouldn't have high expectations for a new custom chip either; they simply can't compete without special acceleration.
Posted on Reply
#6
Darmok N Jalad
ZoneDymo: Yeah, but it's the church of Apple, so it will sell.
Actually from what I read, most “church going” Apple customers are disappointed with the current Mac Pro. The only reason to buy one right now is for very specific add-in cards. This chip might be a more legitimate attempt at appeasing the workstation crowd, but the current design strategy of Apple Silicon really sinks Apple’s workstation endeavors.
Posted on Reply
#7
SOAREVERSOR
Darmok N Jalad: Actually from what I read, most “church going” Apple customers are disappointed with the current Mac Pro. The only reason to buy one right now is for very specific add-in cards. This chip might be a more legitimate attempt at appeasing the workstation crowd, but the current design strategy of Apple Silicon really sinks Apple’s workstation endeavors.
Their workstations mostly go to commercial firms. For setups like this, macOS or Apple software is mandatory, hence the ask. At this level, cost is no object. Workstations at this scale, regardless of Windows or macOS or the hardware vendor, are generally leased. They are there for two years, and when the lease is up they are moved to the next thing and sent back to a vendor like CDW.

Workstations like this generally only do one task. Some will only edit video, some only CAD/CAM, others only AI/ML/DL. Buying one outright won't change that it will only do one thing, so the lack of flexibility is a non-issue.
Posted on Reply
#8
Daven
Darmok N Jalad: Actually from what I read, most “church going” Apple customers are disappointed with the current Mac Pro. The only reason to buy one right now is for very specific add-in cards. This chip might be a more legitimate attempt at appeasing the workstation crowd, but the current design strategy of Apple Silicon really sinks Apple’s workstation endeavors.
To meet workstation demands, Apple would need to…
  • Provide 1 TB of RAM or more
  • Provide 100 performance cores or more
  • Provide 5090-level GPU power or greater
  • Provide 16 TB or more of total SSD capacity
Posted on Reply
#9
dyonoctis
Daven: To be fair, the Mac Pro in its current form was originally designed for 300 W Intel Xeon W processors with dozens of cores. It supported up to 1.5 TB of RAM and you could add two dual-GPU boards for four total GPUs. Then Apple changed to their own silicon, scaled up from a smartphone SoC, and surprise, surprise, the MX Ultra was a pathetic workstation CPU/GPU combo versus what you can buy from the x86 and data-center GPU world.

It's only the first iteration using Apple silicon. They already had support for Xeons and Radeon Pros. If they had stayed on that path, they could have had 128-core EPYCs and Instinct/Tegra inside that box by now.
To be fair, Apple already knows 99% of their target customers: content creation. Apple gave up on anything related to HPC when they retired the Xserve, and they never provided a driver for AMD HPC hardware; you can't install an Instinct GPU even in the old Mac Pro. In things like music production, video production, graphic design, and photography, the Mac stays a favorite, which is in part due to the way macOS handles "exotic" image files compared to Windows. A Mac can show you a preview of AI/PSD/EXR/various RAW files natively in the Finder, which is something Microsoft can't be bothered to implement.


Nvidia is also a company that they will never, ever get involved with. There's nasty bad blood between them, and their business strategies would clash with one another. Apple doesn't want to see a single Mac running CUDA. They've preferred to let Nvidia take over the 3D market rather than get involved with them again.



[Images: "Then" and "Now"]
Posted on Reply
#10
SOAREVERSOR
dyonoctis: To be fair, Apple already knows 99% of their target customers: content creation. Apple gave up on anything related to HPC when they retired the Xserve, and they never provided a driver for AMD HPC hardware; you can't install an Instinct GPU even in the old Mac Pro. In things like music production, video production, graphic design, and photography, the Mac stays a favorite, which is in part due to the way macOS handles "exotic" image files compared to Windows. A Mac can show you a preview of AI/PSD/EXR/various RAW files natively in the Finder, which is something Microsoft can't be bothered to implement.

Nvidia is also a company that they will never, ever get involved with. There's nasty bad blood between them, and their business strategies would clash with one another. Apple doesn't want to see a single Mac running CUDA. They've preferred to let Nvidia take over the 3D market rather than get involved with them again.
At the corporate level, Apple is dominant in, yes, content creation, and that's not changing, but it's also really common with developers. It's dominant there with some of the biggest players as well, but those are Mac Pros. There are also some niche cases (electronic music creation and live performance) where they are really the entire game. The terminal plays a huge role in this.

They are also huge in education; especially in STEM fields, the Mac is the recommended computer.

They are largely out of CAD/CAM as well now, as Quadro drivers dominate that space and, as you said, they hate Nvidia.
Posted on Reply
#11
kondamin
efikkan: Almost $10K for a "laptop" in a "cheese grater" (or perhaps more of an oversized phone if we're honest), and that's before adding multiple SSDs, HDDs, etc. Oh wait, there are only 2 SATA ports. Shared memory, and no ECC, right? Not much of a workstation platform.

Just imagine what kind of Threadripper or Xeon W systems you can build for that price… (Or a modest one that still beats this one, with money to spare.)

I wouldn't have high expectations for a new custom chip either; they simply can't compete without special acceleration.
And they would have to keep maintaining their x86 code base, which is something they are pretty much done with right about now.
Their product stack is mostly in-house now and that’s making and saving them a heck of a lot of money.

Wouldn't surprise me to learn they are working on some sort of Apple Intelligence chip they can charge $20K a pop for to make use of those expansion ports.
Posted on Reply
#12
mechtech
so the cheese grater case is still a thing??
Posted on Reply
#13
AusWolf
Wait, that's a Mac Pro? I thought it was a cheese grater. :roll:
mechtech: so the cheese grater case is still a thing??
Damn, you were quicker. :D
Posted on Reply
#15
R0H1T
kondamin: Maybe they found out there are people willing to pay over $100k for a workstation, making dual- and quad-socket solutions with terabytes of RAM a thing, given the AI market potential
You mean they didn't find this out when they were selling 1GB RAM upgrades for $100 on their phones or at least another hundred bucks for 256GB(?) on their "Macs" even now :wtf:

Tim Apple must be slow!
Posted on Reply
#16
kondamin
R0H1T: You mean they didn't find this out when they were selling 1GB RAM upgrades for $100 on their phones or at least another hundred bucks for 256GB(?) on their "Macs" even now :wtf:

Tim Apple must be slow!
There wasn't much point in going above a certain amount of money for workstations.
Given what is now being wasted on "graphics" cards, that limit is now removed.
Posted on Reply
#17
R0H1T
Yeah, you could argue Intel, then AMD, and definitely Nvidia have made that upper limit obsolete, especially if so many companies are willing to pay millions to get their work done quicker!
Posted on Reply
#18
efikkan
kondamin: And they would have to keep maintaining their x86 code base, which is something they are pretty much done with right about now.
Their product stack is mostly in-house now and that’s making and saving them a heck of a lot of money.
Not sure whether having x86 for desktop/laptop and ARM for mobile vs. everything on ARM would make much of a difference in this regard, as their mobile software stack is quite different; even Safari on desktop vs. mobile is quite different under the hood.

But one thing is for sure: there are numerous benefits for Apple in having a walled garden, as they get to control a certain type of user experience and a tightly coordinated software suite tailored to the current generation's specific acceleration, which is great for energy efficiency and for getting a usable experience from otherwise underpowered hardware. But it's not so great once you go outside those accelerated features, or if you want to invest in "powerful" hardware that stays useful long-term, as you constantly have to upgrade to stay in that "sweet spot". Expensive gear from Apple is mostly used in certain corporate environments, which often have rapid upgrade cycles.

One thing Apple could have done on x86 to gain a competitive advantage over Windows and Linux is the adoption of higher ISA levels. This is something Windows and Linux are still struggling with to this day; while backwards compatibility is great, the OS, drivers, libraries, and the majority of applications are still compiled for x86-64/SSE2 (from 2003). If the entire software stack moved to e.g. the Haswell ISA level or better, it would probably unlock >10% performance on average, but probably 30% or more for many heavy workloads. Backwards compatibility for old software would be retained; it's just a matter of shipping new software compiled for recent hardware. In this sense, Apple has an advantage when they move to later ISA levels for ARM, but they could have had an even greater advantage on x86, as it does more work per instruction, especially with the additions after baseline x86-64 (faster memory operations, AVX, AMX, etc.).
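To make the ISA-level point concrete, here's a rough sketch of the idea (my own illustration, assuming GCC or Clang on x86-64, nothing Apple-specific). You either raise the whole binary's baseline, e.g. gcc -O2 -march=x86-64-v3 app.c, or keep the x86-64/SSE2 baseline and let the compiler emit an AVX2 clone that gets picked at load time via function multi-versioning:

#include <stddef.h>

/* Compiled twice: once for the SSE2 baseline, once for AVX2.
   A runtime resolver picks the fastest version for the host CPU. */
__attribute__((target_clones("avx2", "default")))
void scale(float *dst, const float *src, float k, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i] * k;
}

Old binaries keep running as before; anything recompiled or cloned this way gets the wider vector paths on capable CPUs, which is roughly where the >10% (and sometimes 30%+) I mentioned would come from.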

Another thing we lost with the abandonment of x86 was the plethora of ~3-year-old, dirt-cheap used Xeon workstation chips on eBay.
kondamin: Wouldn't surprise me to learn they are working on some sort of Apple Intelligence chip they can charge $20K a pop for to make use of those expansion ports.
It's only a matter of time before ASICs take over the "AI" market for specific use cases. I doubt they would create one themselves though; they'll probably wait and see which of the 1000+ companies developing such chips has a promising product. ;)
Posted on Reply
#19
kondamin
efikkan: Not sure whether having x86 for desktop/laptop and ARM for mobile vs. everything on ARM would make much of a difference in this regard, as their mobile software stack is quite different; even Safari on desktop vs. mobile is quite different under the hood.

But one thing is for sure: there are numerous benefits for Apple in having a walled garden, as they get to control a certain type of user experience and a tightly coordinated software suite tailored to the current generation's specific acceleration, which is great for energy efficiency and for getting a usable experience from otherwise underpowered hardware. But it's not so great once you go outside those accelerated features, or if you want to invest in "powerful" hardware that stays useful long-term, as you constantly have to upgrade to stay in that "sweet spot". Expensive gear from Apple is mostly used in certain corporate environments, which often have rapid upgrade cycles.

One thing Apple could have done on x86 to gain a competitive advantage over Windows and Linux is the adoption of higher ISA levels. This is something Windows and Linux are still struggling with to this day; while backwards compatibility is great, the OS, drivers, libraries, and the majority of applications are still compiled for x86-64/SSE2 (from 2003). If the entire software stack moved to e.g. the Haswell ISA level or better, it would probably unlock >10% performance on average, but probably 30% or more for many heavy workloads. Backwards compatibility for old software would be retained; it's just a matter of shipping new software compiled for recent hardware. In this sense, Apple has an advantage when they move to later ISA levels for ARM, but they could have had an even greater advantage on x86, as it does more work per instruction, especially with the additions after baseline x86-64 (faster memory operations, AVX, AMX, etc.).

Another thing we lost with the abandonment of x86 was the plethora of ~3-year-old, dirt-cheap used Xeon workstation chips on eBay.

It's only a matter of time before ASICs take over the "AI" market for specific use cases. I doubt they would create one themselves though; they'll probably wait and see which of the 1000+ companies developing such chips has a promising product. ;)
You can use an ASIC to run an existing model; it becomes useless when changes are made to the model.

So I can see Microsoft and OpenAI switching to having ChatGPT 5 run on some ASIC servers for production, as I have strong doubts we're going to see a version 6 (of considerable difference) anytime soon.

But I can't see that happening for anyone that doesn't require thousands of ASIC chips for their commercially viable AI product.
Posted on Reply
#20
Veseleil
efikkan: cheese grater
Exactly my thoughts when I saw the 2nd photo. I was a bit confused at first, seeing a kitchen tool beside an M4 logo.
Posted on Reply
#21
AusWolf
Veseleil: Exactly my thoughts when I saw the 2nd photo. I was a bit confused at first, seeing a kitchen tool beside an M4 logo.
An M4 branded cheese grater? Man, BMW has sunken low. :shadedshu: (joke)
Posted on Reply
#22
Veseleil
AusWolf: An M4 branded cheese grater?
Marketing doesn't know limits. :D
AusWolf: Man, BMW has sunken low. :shadedshu: (joke)
Kinda not far from the truth. The series I consider the last true bimmers are, imo, the E36, E38, and E39 in their respective classes.
Posted on Reply
#23
efikkan
kondamin: You can use an ASIC to run an existing model; it becomes useless when changes are made to the model.
"AI" ASICs are still in their infancy, but even Nvidia knows their dominance of using GPUs to "brute force" these models is going to end eventually (as I've been saying for years), and is preparing for the "post-GPU era" as several news outlets covered just days ago. And we're not necessarily talking of implementing the entire model in hardware, but rather specialized hardware doing most of the heavy lifting, but also specialized for the type of desired workload, whether it's video, economics, geology etc.
Veseleil: Exactly my thoughts when I saw the 2nd photo. I was a bit confused at first, seeing a kitchen tool beside an M4 logo.
I can't take credit for it though, as it's what "everyone" has been calling it for years, even Wikipedia refers to it by that nickname. But at the very least Apple have (unintentionally) taught me another English word that otherwise wouldn't be in my vocabulary…

Similarly the Mac Pro from 2013 is nicknamed the "trash can", for obvious reasons, but it also turned out to be very unreliable.
Veseleil: Kinda not far from the truth. The series I consider the last true bimmers are, imo, the E36, E38, and E39 in their respective classes.
Agree, some would say even E60 and E90 (despite not having that classic look), but the quality has gone downhill, and I'm not going to consider a new one. It's just plastic junk that's going to be on the dump in 6-8 years…
Posted on Reply
#24
trsttte
kondamin: You can use an ASIC to run an existing model; it becomes useless when changes are made to the model.
Depends on how specific the ASIC is designed to be, and what kind of changes are made to the model. A GPU is itself an ASIC with a broader set of applications, with Nvidia currently using and pushing their tensor core implementation. But that won't last forever; there's a ton of companies designing alternatives.
Posted on Reply