
8-Core AMD Ryzen AI Max Pro 385 Benchmark Appears As Cheaper Strix Halo APU Launch Nears

Cpt.Jank

Staff
It looks as though AMD might finally be planning to commercialize the more affordable version of its Ryzen AI Max APUs, which have proven capable of powering impressively high-end gaming experiences. The first set of benchmarks for the new Strix Halo APU, dubbed the AMD Ryzen AI Max Pro 385, has appeared on Geekbench, and the new APU is putting up some impressive numbers. AMD originally said that the Strix Halo line-up would be available between Q1 and Q2 2025, so the timing makes sense.

One major difference between the Ryzen AI Max 395 and the 385 is the iGPU, which is downgraded from the Radeon 8060S to the 8050S. When AMD launched the Strix Halo line-up, it revealed that the AI Max Pro 385 would pair an eight-core CPU with 32 graphics cores, instead of the 16-core CPU and 40-core iGPU setup. While we don't yet have GPU benchmark results for the 8050S, the CPU results put up by the APU are impressive on their own, with 2,489 points in the single-core benchmark and 14,136 points in the multi-core benchmark. The laptop the new Ryzen silicon was tested in was an HP ZBook Ultra G1a with 32 GB of RAM. The results put the 385 only slightly behind the AI Max+ 395 in certain configurations, but in a similar HP ZBook Ultra G1a laptop, the Ryzen AI Max+ 395 comes out ahead of the 385 by as much as 45%. It's unclear just how much laptops with this new Ryzen AI Max Pro 385 APU will cost, but they will almost certainly be cheaper than the current crop of Ryzen AI Max+ laptops, which generally run well north of $2,000.


View at TechPowerUp Main Site | Source
 
What's "AI" about these that normal Ryzen CPUs/APUs aren't?

The NPU has 50 TOPS of performance just by itself. Compare that with the 8840HS, whose NPU only manages 16 TOPS.

So these new chips will be able to handle all your AI operations efficiently, without bogging down the CPU. Past AI products have included the CPU cores' contribution in the TOPS figure (the 8840HS, for example, hits 36 TOPS if you count the CPU cores), but you don't really want to use your CPU cores for AI tasks; it's less efficient.
 
What's "AI" about these that normal Ryzen CPUs/APUs aren't?
1. Marketing BS
2. It has an extremely powerful iGPU on a 256-bit memory bus and can assign something like up to 96 GB of RAM to the iGPU. If you are doing deep-learning tasks, the huge memory pool can pull it ahead of many discrete GPUs. Some benchmarks I saw put it at over twice the speed of an RTX 5090 once you run tasks larger than what fits in the RTX 5090's memory. That's really only because it has so much more memory than any discrete GPU, not because of the CPU/NPU/GPU parts inside (well, the quad-channel memory controller also helps).

But it's mostly just the marketing BS.
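To put that memory argument in rough numbers, here's some illustrative Python arithmetic (the helper function and the "weights only" simplification are mine; a real deployment also needs room for activations and the KV cache):

```python
def fits_in_memory(params_billions: float, bytes_per_weight: float, pool_gb: float) -> bool:
    """Rough check: do the model weights alone fit in the given memory pool?

    One billion parameters at 1 byte each is ~1 GB, so the math is simple.
    """
    return params_billions * bytes_per_weight <= pool_gb

# A 70B-parameter model quantized to 8-bit needs ~70 GB just for weights:
print(fits_in_memory(70, 1.0, 32))   # a 32 GB discrete card: doesn't fit
print(fits_in_memory(70, 1.0, 96))   # 96 GB assigned to the iGPU: fits
```

Once the model spills out of a discrete card's VRAM, layers have to stream over PCIe, which is why a big unified pool can win despite slower raw compute.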
 
The NPU has 50 TOPS of performance just by itself. Compare that with the 8840HS, whose NPU only manages 16 TOPS.

So these new chips will be able to handle all your AI operations efficiently, without bogging down the CPU. Past AI products have included the CPU cores' contribution in the TOPS figure (the 8840HS, for example, hits 36 TOPS if you count the CPU cores), but you don't really want to use your CPU cores for AI tasks; it's less efficient.

I still don't fully understand what this new stuff means. I know it means it can handle more AI stuff, but what makes it so much different from regular CPU or GPU processing power? It's all 0s and 1s at the end of the day, no?
 
It has an extremely powerful iGPU
The Ryzen AI 9 HX 370 doesn't though, so "extremely powerful iGPU" can't be the criterion for that "AI" moniker.

I'd say that it's the NPU with enough TOPS to satisfy Copilot+ branding that makes AMD want to add "AI" to the name.

But it's mostly just the marketing BS.
Interesting. If it's just marketing BS, does that mean that the 50 TOPS of the NPU are and will remain useless, or are not aimed at AI workloads?
 
What's "AI" about these that normal Ryzen CPUs/APUs aren't?
It's a specialized part: 256 GB/s of memory bandwidth, an APU that can make use of it, and an NPU as well.
Interesting. If it's just marketing BS, does that mean that the 50 TOPS of the NPU are and will remain useless, or are not aimed at AI workloads?
It's not actually marketing BS, he just doesn't know what he's talking about. They're aimed at AI workloads, you can use the NPU with AMD GAIA.
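That 256 GB/s figure falls straight out of the bus width and transfer rate. A quick Python sketch (the helper function is just for illustration; the 256-bit LPDDR5X-8000 configuration is what's commonly reported for Strix Halo):

```python
def bandwidth_gb_s(bus_width_bits: int, transfer_rate_mt_s: int) -> float:
    """Peak theoretical bandwidth: bytes per transfer times transfers per second.

    bus_width_bits / 8 gives bytes moved per transfer; MT/s divided by 1000
    converts the result from MB/s to GB/s.
    """
    return bus_width_bits / 8 * transfer_rate_mt_s / 1000

print(bandwidth_gb_s(256, 8000))  # 256-bit LPDDR5X-8000 -> 256.0 GB/s
print(bandwidth_gb_s(128, 7500))  # a typical 128-bit laptop APU, for comparison
```

That's double the bus width of a normal laptop APU, which is the main reason the iGPU can be fed at all.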
 
If it's just marketing BS, does that mean that the 50 TOPS of the NPU are and will remain useless, or are not aimed at AI workloads?
There's no really useful application for an NPU at the moment, so it's mostly marketing.
For anything interesting you'll be using the iGPU, which supports more data formats and is faster than the NPU anyway.

The main reason to have an NPU is local processing without using much energy. Think of text suggestions in your phone's keyboard, or gallery features such as searching for people or objects.
At the moment there's no such use case in the desktop world.
 
I still don't fully understand what this new stuff means. I know it means it can handle more AI stuff, but what makes it so much different from regular CPU or GPU processing power? It's all 0s and 1s at the end of the day, no?
A lot of it comes down to data types and operations that are mainly used in AI workloads. CPUs, for example, don't natively support things like FP4 or INT4 operations, which are heavily used for weights in AI models. You can still do those ops on the CPU, just without native support, and SIMD will only go down to INT8 or FP32 at the smallest (some have extensions for FP16), which means larger and more expensive data.
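To make the data-size point concrete, here's a rough Python sketch (pure bookkeeping, not tied to any real NPU API) of how much memory the same set of weights takes at different precisions:

```python
def weight_bytes(num_weights: int, bits_per_weight: int) -> int:
    """Total storage in bytes, assuming weights are packed tightly."""
    return num_weights * bits_per_weight // 8

n = 7_000_000_000  # a 7B-parameter model, for scale

for name, bits in [("FP32", 32), ("FP16", 16), ("INT8", 8), ("INT4", 4)]:
    gib = weight_bytes(n, bits) / 2**30
    print(f"{name}: {gib:.1f} GiB")
```

Going from FP32 to INT4 cuts the footprint 8x, and since AI inference is usually memory-bandwidth bound, hardware with native low-precision support moves proportionally less data per operation.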
 
What is "AI"?
When you say to your device:
- Good morning, Siri
And get the answer:
- F**k you, why did you wake me so early!
 
I still don't fully understand what this new stuff means, I know it means it can handle more AI stuff, but what makes it so much different than a regular CPU or GPU processing power... its all 0's and 1's at the end of the day no?

An NPU is an accelerator like a GPU: it's specialized in handling a certain workload and gains performance through that specialization. Specifically, that workload involves matrix multiplication and convolution. The majority of its hardware is dedicated to accelerating these operations, and it includes additional capabilities for pushing through more low-precision operations, as AI models will typically use those to increase performance.
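As a toy illustration of what that workload looks like, a single dense neural-network layer boils down to one matrix multiply (plain Python here, just to show the operation an NPU's hardware is built around; real hardware does this on tiles of thousands of values at once):

```python
def matmul(a, b):
    """Naive matrix multiply: the core operation NPUs are built to accelerate."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

# One "dense layer": a 1x3 activation vector times a 3x2 weight matrix
activations = [[1.0, 2.0, 3.0]]
weights = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
print(matmul(activations, weights))  # a 1x2 output vector
```

Every multiply-accumulate in that inner `sum` is one of the operations counted in a TOPS figure, which is why the metric maps so directly onto this kind of hardware.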
 
An NPU is an accelerator like a GPU: it's specialized in handling a certain workload and gains performance through that specialization. Specifically, that workload involves matrix multiplication and convolution. The majority of its hardware is dedicated to accelerating these operations, and it includes additional capabilities for pushing through more low-precision operations, as AI models will typically use those to increase performance.

Let me guess: future motherboards are going to have a dedicated NPU socket, a CPU socket, and our GPU will still be in the slots.

I really hope that is not the future, but I wouldn't be surprised if it is.
 
NPU logic is an integrated part of modern mobile and desktop APUs, and maybe soon of all CPUs.

I understand that, I am just saying maybe it will gain traction if companies see AI as a money maker, I could see dedicated NPU sockets becoming a thing in x amount of years.
 
I understand that, I am just saying maybe it will gain traction if companies see AI as a money maker, I could see dedicated NPU sockets becoming a thing in x amount of years.
Technically possible? Yes! Will it be implemented as a chip in a special socket on a PC motherboard? No! All graphics cards from the RTX 3000 series and newer have very good matrix-op performance. The same goes for the last few generations (RDNA 3 & 4) of AMD graphics cards, even though CDNA (Compute DNA) is better at it. If there were a win-win situation here, it would have already been realized in the way you imagine.
 
I understand that, I am just saying maybe it will gain traction if companies see AI as a money maker, I could see dedicated NPU sockets becoming a thing in x amount of years.
Not very likely; everything is moving onto the SoC.
Sadly, with tiling/chiplets it doesn't really matter how big they get.

On the server side, where they use HBM, equipping a card with loads of RAM isn't as big of an issue.
 
LOL absolutely no one writes about something they care about here, like IGP performance?

It's all NPU bashing, and yeah I'm with you there 100%, but it's kind of done by now, here and in other threads.
 
LOL absolutely no one writes about something they care about here, like IGP performance?

It's all NPU bashing, and yeah I'm with you there 100%, but it's kind of done by now, here and in other threads.
I'm wondering how much worse the 8050S is compared to the 8060S, yeah. I wonder what Medusa Point will be using, RDNA or UDNA? It would be crazy if AMD actually implemented CDNA into this one; it would get all the nice features that aren't in RDNA.
 
Let me guess: future motherboards are going to have a dedicated NPU socket, a CPU socket, and our GPU will still be in the slots.
PCIe just does it all. These are, for instance, used by people running older (non-AI) mini-PCs or SoMs (often called SBCs, which is basically a marketing term): https://coral.ai/products/m2-accelerator-dual-edgetpu/

I don't think it's likely they'll introduce a special slot; that thing, which I believe is a couple of years old by now, only does 2×4 TOPS, whereas even Hawk Point's and Meteor Lake's NPUs already do 16 or 11.5 TOPS.

As others have pointed out, NPUs are mostly dead silicon for desktop use cases so far; one can basically always just use the GPU, and even on laptops a lot of die space is spent for limited gains. I think the plan was to have wake-on-voice and the like, which people don't embrace. I've also seen presence detection being used, automatically putting the system to suspend when you leave. Uhh, nice, but why can't the monkey just press the button!? :banghead:

Of course, it's a bit of a chicken-and-egg situation, but who knows how using that space for more GPU (maybe clocked slower under partial load, which the larger GPU would allow) or maybe some cache could have influenced energy efficiency.
 
So much talk about AI, and yet AI enemies in games are still dumb as bricks. We won't reach that even when ALL CPUs have NPUs, just like physics is still shit in games despite GPUs being able to do insane computations in parallel with graphics, all done on the GPU with no transfers between CPU and GPU. And yet still nothing. The closest I've heard glimpses of this is in idTech 8, used in Doom: The Dark Ages. Physics there is still relatively basic, although what they do show is highly impressive (like grass affected by weapon blasts, and the cloud physics in the intro sequence). All of that is real-time physics, but in-game there isn't much of it that would be super obvious, which is a shame.
 
APUs are the future of PCs. No more large wafer dies spent on us peasants when you can sell them for more than gold to AI data centers. RIP DIY PC building. It was nice knowing you for 30 years.
 
LOL absolutely no one writes about something they care about here, like IGP performance?

It's all NPU bashing, and yeah I'm with you there 100%, but it's kind of done by now, here and in other threads.
The fact that they've downgraded the iGPU as well is underwhelming, to say the least. It could be a great gaming machine if it retained the full 40-core iGPU, without the insane price tag of the top model. But this whole product lineup is obviously aimed more at LLM hobbyists than at gamers, so that's that.
 