
8-Core AMD Ryzen AI Max Pro 385 Benchmark Appears As Cheaper Strix Halo APU Launch Nears

The fact they've downgraded the iGPU as well is underwhelming to say the least. It could be a great gaming machine if it retained the full 40-CU iGPU without the insane price tag of the top model. But this whole product lineup is obviously aimed at LLM hobbyists more than it is at gamers, so that's that.
iGPU is not "dowgraded". It's called segmentation. Eight cores and 32CUs on 385 SKU is still very good for iGPU gaming. That's twice as many CUs than on Strix Point. There's nothing to complain about here. Ask Intel, Nvidia or Apple whether they can offer any similar iGPU that is as good for gaming. The answer is no.

Besides, Framework is already taking pre-orders for their mini-PC with the 385 APU. It's much cheaper than the halo SKU, and this one will hit the mainstream beyond LLM hobbyists.

The top SKU 395 is new and its current price reflects its name - Halo. Come back during Black Friday and check deals if you need one.
 
its current price reflects its name - Halo
Also the halo effect, with people comparing vastly cheaper options to it and calling them shit because they’re not as strong. (Can’t people get their psychology in order for once!?)
 
There's no really useful use case for an NPU at the moment, so it's mostly marketing.
At the moment there's no such use case in the desktop world.
Incorrect. There's a growing number of apps that can use the NPU in laptops and desktop PCs, some of them combining NPU and GPU for different stages of AI workloads. Aside from LLMs processed locally, the NPU can contribute to various plug-ins and functions in Photoshop, Premiere Pro, Lightroom, DaVinci Resolve, OBS Studio, etc. Those who know what they are doing will appreciate it. Even Paint and Word can use the NPU on Copilot+ machines to generate images and work with text.

It is Microsoft's job to better inform and educate the public about the use of the NPU by PC applications on their systems. It is also the job of governments to invest more in Computer Science education, update the Computer Science curriculum, upgrade school hardware and train teachers to deliver AI-based content in classrooms, so that future generations are more literate about how the NPU benefits various applications. It's a process that will last a decade, even in OECD countries, until every child leaving elementary school is able to click a few things in apps that use the NPU to produce the outcome they want.
 
Eight cores and 32 CUs on the 385 SKU is still very good for iGPU gaming.
Not enough to become mainstream even over discrete laptop GPUs, let alone an alternative to a desktop PC.
That's twice as many CUs as on Strix Point. There's nothing to complain about here. Ask Intel, Nvidia or Apple whether they can offer any similar iGPU that is as good for gaming. The answer is no.
I'm not talking about whether anyone offers anything better, but about an APU finally becoming a good alternative to dGPUs. And the Max+ 395 APU is a move in the right direction, but its price is just too much, halo or not, technological breakthrough or mere demonstrator.
Its main features still lean heavily towards the LLM crowd, and that's a fact.
What if that die space had all been used for CPU and GPU cores instead of AI acceleration? Apparently they don't plan to kill off their dGPU business yet.
Besides, Framework
Which doesn't sell outside of the US, AFAIK.
Come back during Black Friday and check deals if you need one.
I'm not in the market for an APU that's overall weaker than my current main rig, but I hope they'll become a reality soon enough.
 
As others have pointed out, NPUs are mostly dead silicon for desktop use cases so far; one can basically always just use the GPU instead.

Having the NPU is definitely still nice for desktop users. For those doing heavy AI tasks like image or video generation, the NPU can be used to offload a portion of the work. For games it can enable AI workloads to run without consuming GPU resources. For everyday features that are being integrated into operating systems, like natural language recognition, automation, etc., the NPU handles them efficiently and, again, without impacting the resources of other components.

It's the difference between sucking down 400 W and sucking down 108 W when using AI features (and those will become increasingly common).

I think the plan was to have wake-on-voice and such, which people don't embrace. I've also seen presence detection being used, automatically suspending the system when you leave. Uhh, nice, but why can't the monkey just press the button!? :banghead:

Neither of those is an AI use case. The plan for AI is to change the way people compute and enable entirely new things. No longer are computers bound by algorithms wherein each potential outcome has to be individually programmed in. We can now tackle use cases that, for regular algorithms, would require explicit programming of billions, trillions, and even more potential outcomes. In other words, things that were previously infeasible are now feasible.
 
Not enough to become mainstream even over discrete laptop GPUs, let alone an alternative to a desktop PC.
What counts as "enough" and what counts as an "alternative" depend on market segment, user needs and device offerings. An endless topic...
I'm not talking about whether anyone offers anything better, but about an APU finally becoming a good alternative to dGPUs.
It is a good alternative, but the transition will take time, several years, as this is the first generation of high-performance mega-APUs with an NPU on x86. As you know, the laptop market has been locked down by Intel and Nvidia for years, so it will take a huge effort to convince more OEMs to build more machines around AMD's mega-APUs. But the process has already started. Even Dell and HP are finally keener than before. There is no doubt about it.
A dozen mini-PCs with Max SKUs are coming, plus more laptop designs in the pipeline. Things move slowly in this industry; it does not happen overnight or within one generation, ever.

The 8060S has gaming performance similar to the 4070M (see below), so the 8050S should be similar to or better than the 4060M, which is what most laptop buyers actually pick when they decide to have a discrete GPU for gaming.
[Screenshot: NVIDIA GeForce RTX 4070 Laptop GPU vs AMD Radeon RX 8060S comparison]

Which doesn't sell outside of the US, AFAIK.
It does ship outside of the USA. I can order it now to ship to Europe.
the Max+ 395 APU is a move in the right direction, but its price is just too much, halo or not, technological breakthrough or mere demonstrator.
That's why I said the 385 SKU is for more mainstream devices. I can already select it on the Framework website for shipping in Q3. The system costs almost half as much as the one with the Max+ 395.
 
Incorrect. There's a growing number of apps that can use the NPU in laptops and desktop PCs, some of them combining NPU and GPU for different stages of AI workloads. Aside from LLMs processed locally, the NPU can contribute to various plug-ins and functions in Photoshop, Premiere Pro, Lightroom, DaVinci Resolve, OBS Studio, etc. Those who know what they are doing will appreciate it. Even Paint and Word can use the NPU on Copilot+ machines to generate images and work with text.

It is Microsoft's job to better inform and educate the public about the use of the NPU by PC applications on their systems. It is also the job of governments to invest more in Computer Science education, update the Computer Science curriculum, upgrade school hardware and train teachers to deliver AI-based content in classrooms, so that future generations are more literate about how the NPU benefits various applications. It's a process that will last a decade, even in OECD countries, until every child leaving elementary school is able to click a few things in apps that use the NPU to produce the outcome they want.

I believe that he was talking about desktop PCs. The issue with NPUs on desktops is twofold: they perform exceptionally poorly compared to a GPU (the battery life/power management issue doesn't affect desktops with a fixed, "endless" power supply), and there are no desktop processors with an NPU present other than Arrow Lake, and even then it doesn't matter: the Ultra 9 285's NPU is rated at only 13 TOPS, making it ineligible for use with Copilot+. It's effectively dead weight.

Microsoft still believes the desktop died the day Windows 8 came out and has shown itself to be entirely willing to die on that hill. No effort has been made to support Copilot+ on a GPU; they require you to have an "AI processor" with a second-generation NPU (the ones that do 35+ TOPS), and as of today, these are exclusively laptop processors.
 
NPU can contribute to various plug-ins and functions in Photoshop, Premiere Pro, Lightroom, Da Vinci, OBS Studio, etc.
Cool, I was not aware DaVinci Resolve was making such widespread use of NPUs already. Do you know if it works in a hardware-agnostic manner, for example through DirectML?
When it comes to Adobe products, AFAIK it's only a handful of features that have it enabled for local processing, and some of those are device-dependent, such as working only on either Qualcomm or Intel.

OBS has no upstream support at all, and all I know of is a single plugin from Intel that uses OpenVINO, with no DirectML support atm.
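For what it's worth, the hardware-agnostic route on Windows usually means going through ONNX Runtime's DirectML execution provider, which targets whatever DirectX 12 compute device the driver exposes rather than a vendor-specific NPU SDK. A minimal sketch, assuming the onnxruntime-directml package is installed and "model.onnx" is just a placeholder for whatever model a plugin would ship (not any real app's file):

Code:
# Minimal sketch: hardware-agnostic inference via ONNX Runtime's DirectML
# execution provider (pip install onnxruntime-directml). The model path and
# input handling are placeholders, not taken from any particular application.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],  # CPU as fallback
)

inp = session.get_inputs()[0]
# Build a dummy tensor, treating any dynamic dimension as 1; a real plugin
# would feed actual frames/audio here, possibly in a different dtype.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
outputs = session.run(None, {inp.name: np.zeros(shape, dtype=np.float32)})
print("Active provider:", session.get_providers()[0])

Whether that actually lands on an NPU rather than the iGPU depends on the driver exposing the NPU as a DirectML-capable compute device, which is part of why app support is still so patchy.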

I see it as more of a chicken and egg problem. Desktop devices still have no NPU or a really poor one, and the laptops that do have one barely see it used, because there aren't many applications that make use of it in a significant way (albeit not zero, as per what you said).

It is Microsoft's job to better inform and educate the public about the use of the NPU by PC applications on their systems
I don't think it's MS's job to do marketing on that. Instead, they could just provide good use cases for it, such as audio noise removal, webcam cropping/enhancements, better file indexing/search and stuff like that built into the OS in a hardware-agnostic manner, things that the end user would actually notice and care about.
 
Microsoft still believes the desktop died the day Windows 8 came out and has shown itself to be entirely willing to die on that hill. No effort has been made to support Copilot+ on a GPU; they require you to have an "AI processor" with a second-generation NPU (the ones that do 35+ TOPS), and as of today, these are exclusively laptop processors.
Yes. Microsoft has little interest in many things beyond their immediate benefit...
I see it as more of a chicken and egg problem.
Yes, but not quite. It has to start with offering the hardware, so that software developers can test applications and write new ones that take advantage of the hardware. There is no other way.

"Chicken and egg" metaphor is very simplistic. It doesn't apply well in this context. Also, to be pedantic, egg evolved first in evolutionary tree of life... chicken came hundreds of millions of years later...

there are no desktop processors with an NPU present other than Arrow Lake, and even then it doesn't matter: the Ultra 9 285's NPU is rated at only 13 TOPS, making it ineligible for use with Copilot+. It's effectively dead weight.
This is understandable, as the focus has been on developing the NPU for mobile chips first, which is fine, as the laptop market is the priority for such a feature. Desktop CPUs will come next. Gorgon Point is coming to the AM5 platform later in the year, with 50 TOPS. AMD is developing a new I/O die for Zen 6 that will feature a stronger NPU. Intel will also offer something better next time. They can't deliver everything on all platforms at the same time; it does not work like that in the CPU industry. People need to be patient, too.
I don't think it's MS's job to do marketing on that. Instead, they could just provide good use cases for it, such as audio noise removal, webcam cropping/enhancements, better file indexing/search and stuff like that built into the OS in a hardware-agnostic manner, things that the end user would actually notice and care about.
It is Microsoft's job to tell us how and where we can use features and applications that benefit from the NPU. It is their sacred duty to do this if they want consumers to learn, while developing and widening the use cases. As I said, Paint and Office can use the NPU right now, but few know about it. Of course, there is a lot of work ahead, both hardware-wise and software-wise. Specific apps have plug-ins for image processing (Zoom) and noise suppression (AMD Adrenalin), plus aspects of Copilot work in Teams, albeit only in the Enterprise package right now. It will take time to get everything to work for the average user.
 
I'm wondering how much worse the 8050S is compared to the 8060S, yeah. I wonder what Medusa Point will be using, RDNA or UDNA? It would be crazy if AMD actually implemented CDNA in this one; we'd get all the nice features that aren't in RDNA.
I wouldn't hold my breath. IGPs usually don't get the latest version.
The fact they've downgraded the iGPU as well is underwhelming to say the least.
Sure, it would be great, but I'd never expect that. That's not how it works, if you ask me, and it's always been like that: 6-core APUs have fewer CUs than the 8-core ones.
 
It's the difference between sucking down 400 W and sucking down 108 W when using AI features (and those will become increasingly common).
The plan for AI is to change the way people compute and enable entirely new things. No longer are computers bound by algorithms wherein each potential outcome has to be individually programmed in. We can now tackle use cases that, for regular algorithms, would require explicit programming of billions, trillions, and even more potential outcomes. In other words, things that were previously infeasible are now feasible.
This. Well said.
 
Sure, it would be great, but I'd never expect that. That's not how it works, if you ask me, and it's always been like that: 6-core APUs have fewer CUs than the 8-core ones.
Maybe I'm too optimistic, but I believe 64-CU APUs are possible, and that's 9070 XT-level performance with the current hardware. And just 40 CUs with a more efficient arch that UDNA might bring in the next gen could give us even more performance than that.
 
This. Well said.

The thing is, sure, a 5090 has a 600 W TDP. But since it's ~80 times faster than a 50 TOPS NPU (assuming a median of 4,000 TOPS, which is what it's supposed to average with boost clocks factored in), it'll also complete the inference work that much faster. If a hypothetical task takes 100 seconds to complete at 100 W but 1.25 seconds at 600 W, it's not too difficult to figure out which one works best in a scenario where power supply and cooling are not constraints.
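To put rough numbers on that (using the same hypothetical figures as above, not measurements):

Code:
# Energy per task for the hypothetical workload above: the part that finishes
# ~80x faster also burns far less total energy, despite the higher power draw.
npu_joules = 100 * 100     # 100 s at 100 W  -> 10,000 J
gpu_joules = 1.25 * 600    # 1.25 s at 600 W ->    750 J
print(npu_joules, gpu_joules, npu_joules / gpu_joules)  # ~13x less energy on the GPU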
 
The thing is, sure, a 5090 has a 600 W TDP. But since it's ~80 times faster than a 50 TOPS NPU, it'll also complete the inference work that much faster. If a hypothetical task takes 100 seconds to complete at 100 W but 1.25 seconds at 600 W, it's not too difficult to figure out which one works best in a scenario where power supply and cooling are not constraints.
The average user does not give flipping monkeys about the 5090, a halo GPU that is neither a relevant piece of hardware nor available to the vast majority of everyday users. It's an irrelevant comparison in this context. The on-processor NPU will be available to the average Joe and will handle tasks within its own domain of capability. That's what it is designed for and meant to do.

In addition to developing the NPU utilization ecosystem, Microsoft has many other zombies to sort out, such as the USB-C port chaos.
 
The plan for AI is to change the way people compute and enable entirely new things. No longer are computers bound by algorithms wherein each potential outcome has to be individually programmed in. We can now tackle use cases that, for regular algorithms, would require explicit programming of billions, trillions, and even more potential outcomes. In other words, things that were previously infeasible are now feasible.
I agree with TekCheck, well said. However, we can only hope intentions are as altruistic as this. Sadly, this world is filled with jerks who don't understand the boundaries of decent civility, or worse, people who are so slimy that they fully count as evil. (Edit: this last bit was not aimed at anyone here on the forums; it was a general comment about the society we live in.)
 
Yes, but not quite. It has to start with offering the hardware, so that software developers can test applications and write new ones that take advantage of the hardware. There is no other way.
Yes, but at the same time they could have pushed DirectML adoption harder and started it off with the existing GPUs out there.
The transition to the NPU would've been much smoother that way.
It is Microsoft's job to tell us how and where we can use features and applications that benefit from the NPU.
Again, I don't think it should be on the end user, but rather that MS should be doing that for developers. Almost no user is aware of what kind of NPU they have in their mobile phone, for example.
It will take time to get everything to work for the average user.
That I do agree with. My opinion is that the strategy is not the most efficient one, but as long as they keep pushing it forward, it should eventually reach a good state anyway.
 
The average user does not give flipping monkeys about the 5090, a halo GPU that is neither a relevant piece of hardware nor available to the vast majority of everyday users. It's an irrelevant comparison in this context. The on-processor NPU will be available to the average Joe and will handle tasks within its own domain of capability. That's what it is designed for and meant to do.

In addition to developing the NPU utilization ecosystem, Microsoft has many other zombies to sort out, such as the USB-C port chaos.
The Intel options are coming out soon enough though. The B50 looks pretty good and sips power, only $300.
 
Maybe I'm too optimistic, but I believe 64-CU APUs are possible, and that's 9070 XT-level performance with the current hardware. And just 40 CUs with a more efficient arch that UDNA might bring in the next gen could give us even more performance than that.
The current Strix Halo die is already sizeable, ~442 mm²; another 24 CUs would take it beyond 600 mm². Tom from MLID received some preliminary information postulating 48 CUs on the top Medusa Halo SKU, unclear on which architecture. This is a highly specialized topic on which we don't have enough information at the moment. It boils down to trade-offs between yields, die size, thermal management, design complexity and cost. It's clear to me that AMD intends to push hard in this new mega-APU segment to compete with Apple, Nvidia and Intel. Apple is the only one that has a fully integrated solution in its closed ecosystem, so here AMD has a chance to offer an x86 alternative.
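To make the scaling assumption behind that estimate explicit (rough back-of-the-envelope numbers only, assuming each extra CU brings a proportional share of cache and interconnect with it):

Code:
# Back-of-the-envelope die-area scaling implied by the estimate above.
# All figures are rough assumptions, not measured die data.
current_die_mm2 = 442      # ~Strix Halo with 40 CUs
extra_cus = 24             # hypothetical 64-CU part
mm2_per_extra_cu = 6.6     # assumed area per added CU incl. supporting logic

projected = current_die_mm2 + extra_cus * mm2_per_extra_cu
print(f"Projected die size: ~{projected:.0f} mm^2")  # ~600 mm^2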

By the Zen 6 generation, they will be able to offer a wider line-up, from 24 big cores at the top with an even stronger iGPU and NPU, all the way down to one CCD with 12 cores and a smaller iGPU. I'd like them to dare to package such a lower SKU for the AM5 platform, as a farewell to the platform, before moving to AM6. This would require them to have two separate I/O dies, which is also postulated as a 'little Medusa Halo'. We will live to see how this pans out.

I wouldn't hold my breath. IGPs usually don't get the latest version.
Yes. This also depends on the cadence alignment of product lines. Phoenix APUs were released with RDNA3 just a few weeks after the discrete RDNA3 cards.
 
Please remember this news is about NPUs. Let's stay civil and remain on-topic, please.
 
I agree with TekCheck, well said. However, we can only hope intentions are as altruistic as this. Sadly, this world is filled with jerks who don't understand the boundaries of decent civility, or worse, people who are so slimy that they fully count as evil.

Unfortunately, any system that can be abused will be. I'm sure there will be positive and negative outcomes with this technology. We've already seen a bit of both.
 