
Intel Skips NPU Upgrade for Arrow Lake Refresh, AMD Cancels Medusa Halo in Latest Rumors

The context you left out is important: the CEO wanted to step down, and according to all the talk when EVGA exited the market, there was likely a non-compete clause from Nvidia preventing EVGA from working with any competitors, and IMO that definitely sounds like something the leather jacket would force upon their partners. EVGA was forced out of the market because Nvidia was pushing all sorts of rules on its partners while undercutting them with FE cards, and didn't seem to care when its best business partner was forced to exit. Your argument makes no sense, especially when you choose to attack AMD instead just to make Nvidia look like a hero.

More likely, Nvidia wouldn't allow them to make any Nvidia GPUs; the same happened with Acer and ASRock, which is why they only make AMD and Intel GPUs.

Except we know where Nvidia's benchmark cheating back then got them: the near monopoly they hold today. Intel also cheated on benchmarks for decades, but chose to rest on its laurels for too long.

The point is that none of these GPU companies should be influencing game developers not to optimize for competitors. Just look at CP2077, for example: it didn't get FSR 3.1 until recently, and it's a five-year-old game that most people wouldn't care about replaying now, except enthusiast Nvidia customers.
So let me get this straight: whenever a company doesn't want to work with Nvidia anymore (like EVGA), it's Nvidia's fault. And whenever a company doesn't want to work with AMD anymore (like ASRock), it's Nvidia's fault. You seem like a very unbiased individual.
The point is that none of these GPU companies should be influencing game developers not to optimize for competitors. Just look at CP2077, for example: it didn't get FSR 3.1 until recently, and it's a five-year-old game that most people wouldn't care about replaying now, except enthusiast Nvidia customers.
As an owner of an AMD GPU, I totally want Nvidia to be sponsoring games, since they make sure those games are super optimized for my AMD GPU. Like Doom, Cyberpunk, and Alan Wake. As also an owner of an Nvidia GPU, I totally don't want AMD to be sponsoring games, because they work like crap on my Nvidia GPU. So no, sponsoring isn't inherently bad; when a consumer-first company like Nvidia does it, it's fine, great even.
 
Why are you posting performance figures from a 15-year-old synthetic benchmark and not the gaming numbers?

I mean who wants to play Doom at 60fps, right?


Oh, and have you heard of dark mode?
Because Strix Halo is not a gaming chip. AMD isn't even marketing it as such.

It can be used for gaming, sure, but people going out of their way to buy it are doing so to run LLMs primarily, and other compute workloads.
 
You shouldn't be happy about it, because AMD will turn into the next Intel.

With no competition AMD will have very little reason to innovate, and you'll be stuck with minor refreshes year after year and high prices because there's no one to challenge them.

As much as I dislike Intel, I'm hoping they get their act together and start competing again, which will continue to push AMD to be better.
AMD's revenue (~$25B) is still roughly half of Intel's (~$53B). AMD has a long way to go to be a more widely recognized brand than Intel to the general consumer, so we'll see them continue to push the envelope.
 
With no competition AMD will have very little reason to innovate, and you'll be stuck with minor refreshes year after year and high prices because there's no one to challenge them.
If you read the latest interview with Intel's CEO, you will discover that Intel has fallen out of the top 10 semiconductor companies. AMD has several other companies to compete with across diverse segments, not only Intel.
Cancellation of Medusa Halo is pretty logical.
We don't know anything about this. It's online gossip. It was also online gossip that Medusa Halo would launch; AMD has never confirmed such a line-up on its roadmaps. So, the gossip about the launch was followed by the gossip that it's cancelled... Do you see how the gossip-sphere works?
IMO, what Intel needs to do is start using their own fabs and come up with a completely new architecture.
Their problem with fabs is that they do not have confidence in their most recent nodes for the client market. The compute and graphics tiles for both Arrow Lake and Nova Lake are produced by TSMC. That tells you something.
I don't think anybody is denying that Intel's problems are of its own making... But without Intel, the DIY market becomes an AMD CPU monopoly, which is exactly the kind of position that led Intel astray in the first place; you are cheering for a repeat in the process.
Well said.

AMD's revenue (~$25B) is still roughly half of Intel's (~$53B). AMD has a long way to go to be a more widely recognized brand than Intel to the general consumer, so we'll see them continue to push the envelope.
It's not that long a way to go. The changing trend in revenues and margins is clearly visible; it's a matter of a few years before revenues could be equal. Intel has been in slow decline. With no AI or console gaming portfolio, and with increasing dependence on TSMC's more advanced and performant nodes, things will gradually get even more challenging for them in the coming years. Don't forget that Nova Lake needs to close a gaming gap of around 25% to X3D CPUs. Plus, Zen 6 X3D CPUs will bring more performance, so the gap to close will be even wider.
[Attached charts: Intel and AMD Q1 2025 revenues]
 
AMD's revenue (~$25B) is still roughly half of Intel's (~$53B). AMD has a long way to go to be a more widely recognized brand than Intel to the general consumer, so we'll see them continue to push the envelope.
Intel has a net loss of around $1B, while AMD has a profit of around that much. So that $53B? They're burning it all.

AMD's valuation is double Intel's today. So financially speaking, AMD is in a much better position.

As for brand recognition, the general consumer doesn't really care. Half of them don't know what Intel or AMD even are. They just want a PC that works. For those that are aware, be it consumer, enterprise, or data center, they know AMD has the better product.

What Intel has going for them is their OEM agreements. Somehow, they still have OEMs in their pockets. I'd wager this has more to do with bureaucracy than anything. Intel has used nasty business practices in the past, and I bet no OEM wants to open that can of worms by severing their relationship with Intel.
 
Serious question, who is actually going out of their way to buy Strix Halo? It’s overpriced and uses an older GPU architecture.
Many folks messing with LLMs. Head over to other, more technical forums (rather than consumer-focused ones like TPU) and you'll see quite a few people with Strix Halo laptops.
Lots of folks were also more interested in it as a mini-computer rather than a laptop.
A solution with soldered LPDDR memory and no option to upgrade components is NEVER going to be mainstream. Buyers don't want yet another Apple-like, locked-in ecosystem. It's too expensive and niche in mobile devices, so if it does not become available in DIY, it's not going to work in the long run. Mini-PCs will also need to evolve into socketed systems, beyond the current BGA lock-in.
Disagree. If soldered LPDDR is how you get a 256-bit+ bus at higher frequencies, so be it. Lots of people are willing to go this route; not everyone is dead set on upgradeability.
Heck, people are buying Mac minis and clustering those.
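For a rough sense of why that wide soldered bus matters, here's a quick back-of-the-envelope sketch; the LPDDR5X-8000 / 256-bit figures are the commonly cited Strix Halo specs, and the DDR5-6000 dual-channel line is just a typical DIY setup for contrast:

```python
# Peak theoretical memory bandwidth = transfer rate x bus width in bytes.
# Assumed figures: LPDDR5X-8000 (8000 MT/s) on a 256-bit bus, as commonly
# cited for Strix Halo; a typical socketed dual-channel DDR5 setup for contrast.
def peak_bw_gbs(mt_per_s: int, bus_bits: int) -> float:
    return mt_per_s * (bus_bits / 8) / 1000  # GB/s

print(peak_bw_gbs(8000, 256))  # ~256 GB/s (soldered LPDDR5X, 256-bit)
print(peak_bw_gbs(6000, 128))  # ~ 96 GB/s (socketed DDR5-6000, dual channel)
```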

We are not talking about regular mainstream gaming here anyway, there are better options for your casual consumer that just wants to play games.
This sits in a weird place between people looking for a quasi-HEDT system at a fraction of the price and consumers who just can't be bothered building their LEGO setup (and there are many of those; mini-PC sales are growing a lot).
 
The point is that none of these GPU companies should be influencing game developers not to optimize for competitors. Just look at CP2077, for example: it didn't get FSR 3.1 until recently, and it's a five-year-old game that most people wouldn't care about replaying now, except enthusiast Nvidia customers.
AMD seems to care, as it is on their featured games list. Have you played it? The AMD logo is on the game's intro screen.



Stuffing your foot in your mouth again.
 
Don't forget that Nova Lake needs to close a gaming gap of around 25% to X3D CPUs. Plus, Zen 6 X3D CPUs will bring more performance, so the gap to close will be even wider.
In 1440p, which is the most relevant for flagships as it's either native or upscaled to 2160p (DLSS Quality), we have the following:
9800X3D is
6% better than 7800X3D
11% better than 14900K
15% better than 285K

If the 10800X3D were 9% better than the 9800X3D, is that plausible? Too little? How much will the core increase from 8 to 12 contribute, besides IPC and possibly higher frequency?
So that would make the 10800X3D 25% better than the 285K.
But that isn't the maximum performance Intel has achieved up to now. The 14900K is a little higher; compared with that, the 10800X3D would be 21% better.
So your estimation is spot on.
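For anyone checking the math, those 25%/21% figures are simply the hypothetical 9% gen-on-gen gain compounded with the 1440p gaps listed above; a quick sketch:

```python
# Compounding the thread's 1440p figures with a hypothetical 9% 10800X3D uplift.
uplift_10800x3d = 1.09      # assumed gen-on-gen gain over the 9800X3D
gap_vs_285k     = 1.15      # 9800X3D vs 285K (from the list above)
gap_vs_14900k   = 1.11      # 9800X3D vs 14900K (from the list above)

print(f"vs 285K:   {uplift_10800x3d * gap_vs_285k - 1:.0%}")    # ~25%
print(f"vs 14900K: {uplift_10800x3d * gap_vs_14900k - 1:.0%}")  # ~21%
```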
I think Intel can definitely surpass the performance of the 14900K. But by how much?

The next question is does Intel need to achieve absolute parity? Or is a little below acceptable? How much, 5%, 10%? Is slightly less performance for less power draw acceptable? Maybe similar power draw but lower temps?
Is the FPS number the ONLY thing that matters? Or just the thing that matters most?

The problem with these single-CCD X3D chips is that people use them for gaming comparisons, but for other tasks they point to the higher-core-count chips instead; thus the Intel chips have to be great in all workloads, otherwise they get dismissed.

We'll see.
 
Disagree. If soldered LPDDR is how you get a 256-bit+ bus at higher frequencies, so be it. Lots of people are willing to go this route; not everyone is dead set on upgradeability.
Heck, people are buying Mac minis and clustering those.
They can cluster as many as they like, but you will agree that's probably not the best way to do hardware scaling. Apple could easily sell bare PCBs with all the chips on them, so that users could daisy-chain those in a single, simple rack, without wasting time, effort, and materials on all those individual retail boxes.

In addition, tech companies will need to hit more sustainability and modularity goals in the 21st century, and therefore educate the public that soldered systems are not the best the tech industry can do if it wants greener credentials and more user-friendly products.
We are not talking about regular mainstream gaming here anyway, there are better options for your casual consumer that just wants to play games.
This sits in a weird place between people looking for a quasi-HEDT system at a fraction of the price and consumers who just can't be bothered building their LEGO setup (and there are many of those; mini-PC sales are growing a lot).
I am OK with mini-PCs. As I said, they just need to evolve and start offering socketed options too, in a smaller package than mini-ITX. It's not rocket science to offer consumers more options.
 
The 8060S is certainly not as fast as an RTX 5060, not even close. But nice try.
 
The 8060S is faster than the 4060 in some games. It's not a gaming GPU, but I think it's impressive for what it is, especially in the Z13 Flow, which doesn't use the full 120 W TDP of the APU.
Hopefully AMD hasn't completely cancelled the idea of an APU with a powerful iGPU, as it's much more interesting than nerfed-down laptop dGPUs.

 
Many folks messing with LLMs. Head over to other, more technical forums (rather than consumer-focused ones like TPU) and you'll see quite a few people with Strix Halo laptops.
Lots of folks were also more interested in it as a mini-computer rather than a laptop.


Seems like those people would be better served by remotely connecting to either a cloud-hosted machine with much more processing power, or to a home desktop with as much as 512GB of RAM for $10k (like the new Mac Studio), as opposed to spending over $3k on a laptop with a measly 96GB of LLM memory. Every $1k only nets 32GB of LLM memory, as opposed to over 50GB on the Mac, for example. But even if you settled for Nvidia DIGITS, an SSH connection to a remote setup would take mere kilobytes per second of bandwidth and free up local resources on the laptop for basic tasks like web browsing or word processing. Seems less advantageous to buy the Strix Halo laptop.
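The GB-per-$1k comparison above works out like this (the prices and capacities are the ones quoted in this post, not official figures):

```python
# GB of LLM-usable memory per $1k, using the figures quoted in this post.
configs = {
    "Strix Halo laptop (96 GB)": (96, 3_000),
    "Mac Studio (512 GB)":       (512, 10_000),
}
for name, (gb, price_usd) in configs.items():
    print(f"{name}: ~{gb / (price_usd / 1_000):.0f} GB per $1k")
# Strix Halo laptop: ~32 GB per $1k; Mac Studio: ~51 GB per $1k
```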
 
They can cluster as many as they like, but you will agree that's probably not the best way to do hardware scaling. Apple could easily sell bare PCBs with all the chips on them, so that users could daisy-chain those in a single, simple rack, without wasting time, effort, and materials on all those individual retail boxes.
I do agree that's not ideal, but if people are doing it, it does show that there's a desire for such arrangements.

In addition, tech companies will need to hit more sustainability and modularity goals in the 21st century, and therefore educate the public that soldered systems are not the best the tech industry can do if it wants greener credentials and more user-friendly products.
I honestly don't think this is relevant in the bigger picture. The DIY desktop is a niche segment; laptops far outsell it, most people never upgrade them, and when they do, it's often just storage.
Thus it often makes no difference that the CPU is not upgradeable (as has been the case for most laptop CPUs for years), and the same will slowly apply to memory as well.
I am OK with mini-PCs. As I said, they just need to evolve and start offering socketed options too, in a smaller package than mini-ITX. It's not rocket science to offer consumers more options.
I think socketed CPUs aren't relevant in most cases. But I agree that offering both modular (SO)DIMM/CAMM and faster soldered memory can be good, depending on the target audience.
Seems like those people would be better served by remotely connecting to either a cloud-hosted machine with much more processing power, or to a home desktop with as much as 512GB of RAM for $10k (like the new Mac Studio), as opposed to spending over $3k on a laptop with a measly 96GB of LLM memory. Every $1k only nets 32GB of LLM memory, as opposed to over 50GB on the Mac, for example. But even if you settled for Nvidia DIGITS, an SSH connection to a remote setup would take mere kilobytes per second of bandwidth and free up local resources on the laptop for basic tasks like web browsing or word processing. Seems less advantageous to buy the Strix Halo laptop.
Those things are not mutually exclusive. And apparently lots of people disagree with your idea, since they have bought Strix Halo anyway.
 
In 1440p, which is the most relevant for flagships as it's either native or upscaled to 2160p
1440p is neither the most prevalent nor the most relevant for flagships. Those are two separate variables. High-frame-rate games, such as e-sports titles, are typically played on 1080p high-refresh-rate monitors, with both flagship and non-flagship CPUs and GPUs. It's always good to be aware of this and not make assumptions by tying resolutions to a specific tier of processors or graphics cards. Sure, better CPUs and GPUs will produce better results on higher-resolution displays, but that's not the point here. The point is the three ways in which a CPU can contribute to gaming and be better or worse than others:

1. 'Floor tests': A CPU's contribution to gaming is established in 720p/1080p tests, regardless of the actual resolution people play at. Testing at native 720p/1080p is fundamental, as it tells us the extent to which a CPU could maximally contribute. Also, over 60% of the global PC population still games on 1080p displays. Those floor metrics are still very much relevant and are not becoming obsolete any time soon.

2. Lower settings: The resolutions above, 1440p/4K, are more GPU-bound in general, so the CPU's impact will of course be lower. At 4K, the top 30 CPUs are within 5-6% of each other, which is not surprising. A CPU can still contribute more at higher resolutions if game settings are lowered from Ultra to High/Medium; lowering the demands on the GPU leaves more work for the CPU to do. That's another reason why CPUs are tested at lower resolutions: to see the extent of their possible contribution to gaming, no matter which resolution gamers use, with or without upscalers.

3. CPU-intensive games: In addition, some games are more CPU-intensive than others, which is another factor to consider during testing, and reviewers need to select a fairly balanced share of such games to show us the CPU's contribution.

Under those scenarios, 720p/1080p native, lower graphics settings at higher resolutions, and CPU-intensive games, X3D CPUs will definitely do the job better, on average. The extra L3 cache has been identified as the main factor driving better performance in games that benefit from it.
15% better than 285K
It's more than that, ~20% in 1080p and ~25% in 720p.
If the 10800X3D were 9% better than the 9800X3D, is that plausible? Too little? How much will the core increase from 8 to 12 contribute, besides IPC and possibly higher frequency? So that would make the 10800X3D 25% better than the 285K.
I expect the '10800X3D' to be way faster than 9% relative to the 9800X3D. Waaay more, ~30%. New node, more 3D cache, higher clocks, more cores, etc. The current gap between the 9800X3D and Arrow Lake is already ~20%, depending on measurements, and the gap between the 10800X3D and the 285K, or a '385K' refresh, will be higher than that. You can see now how difficult a task Nova Lake CPUs have ahead of them: they would need to lift gaming performance by ~50% compared to Arrow Lake in order to win in gaming against Zen 6 X3D CPUs.
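Compounding those two estimates (both projections from this post, not measured numbers) gives the rough size of the required uplift, landing a touch above the ~50% mentioned:

```python
# Compounding the post's two estimates: 9800X3D ~20% ahead of Arrow Lake at 1080p,
# and a projected ~30% gen-on-gen gain for a hypothetical 10800X3D.
current_gap = 1.20   # 9800X3D vs Arrow Lake (285K)
zen6_uplift = 1.30   # projected 10800X3D vs 9800X3D
required = current_gap * zen6_uplift - 1
print(f"Nova Lake would need roughly {required:.0%} over Arrow Lake")  # ~56%
```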

When the 5800X3D launched in 2022, it was on par in gaming with the 12900K (see the TPU review). From there, the next two generations of X3D CPUs increased the gap against Intel's CPUs, more and more each time: the 7800X3D was faster than Raptor Lake by high single digits, and the 9800X3D is faster than Arrow Lake by double digits. Intel will need to pull a rabbit out of a hat to close the ever-widening gap in gaming performance; they are falling further behind, gen after gen. They will need to sort out inter-tile latency to improve memory performance, fix SSD performance for the DirectStorage feature (currently Arrow Lake CPUs throttle the speed of Gen5 SSDs by 2GB/s, which was measured, published, and acknowledged by Intel), and offer another layer of cache or more L3 cache.

Four extra cores on the Zen 6 R7 will help in games that benefit from more than 8 cores, and this will lift the overall CPU contribution in a 40-50 game test suite by a few fps on average. The alleged increase of the 3D cache die to 96MB will further benefit games that can use more cache. Higher clocks too. The new, lower-latency Infinity Fabric too. All those features will add a few percentage points in fps to different games. Not all games will benefit from all those features at the same time, but each game will benefit from a few, forming an overall uplift.
But that isn't the maximum performance Intel has achieved up to now. The 14900K is a little higher; compared with that, the 10800X3D would be 21% better.
So your estimation is spot on.
My estimation was very conservative.
The problem with these single-CCD X3D chips is that people use them for gaming comparisons, but for other tasks they point to the higher-core-count chips instead; thus the Intel chips have to be great in all workloads, otherwise they get dismissed.
That's Intel's problem to deal with, as they are the ones offering only a generic CPU. AMD offers more specialised CPUs in the desktop segment: if you do mostly gaming, there are X3D CPUs; if you do mostly productivity workloads, there are vanilla CPUs; if you do both, there are higher-core-count X3D CPUs; if you need more graphics, there are G-series desktop APUs. Plenty of choice. Intel has not managed to diversify its desktop CPU line-up, so its current CPUs are jack of all trades, master of none. Some buyers still enjoy such CPUs, but an increasing number do not. This is reflected in the gradual loss of market share, which we can all see.
 
1440p is neither the most prevalent nor the most relevant for flagships.
It's relevant because if you're paying top dollar for expensive builds (in which the GPU has a massive contribution) you will play at the highest playable settings the build can achieve, otherwise what's the point?
You're using the CPU in a particular build, for which the GPU sets the performance level, so if you have a flagship GPU you will be using higher graphical settings. It's likely the CPU will be a flagship as well, therefore its performance at 720p/1080p is irrelevant in this context.
I doubt that people using a 4090 and 5090 buy CPUs to pair with these cards based on the 720p/1080p performance charts.
1. 'Floor tests': A CPU's contribution to gaming is established in 720p/1080p tests, regardless of the actual resolution people play at. Testing at native 720p/1080p is fundamental, as it tells us the extent to which a CPU could maximally contribute. Also, over 60% of the global PC population still games on 1080p displays. Those floor metrics are still very much relevant and are not becoming obsolete any time soon.
Yes, those are relevant; making the test CPU-bound reveals whether there is progress from one architecture to another, and if so, how much.
3. CPU-intensive games: In addition, some games are more CPU-intensive than others, which is another factor to consider during testing, and reviewers need to select a fairly balanced share of such games to show us the CPU's contribution.
Things can change within the same game: some scenes are more GPU-bound and others more CPU-bound. Daniel Owen made some videos on this subject.
Intel has not managed to diversify its desktop CPU line-up, so its current CPUs are jack of all trades, master of none.
That's exactly how AMD's non-X3D chips are, or would you argue otherwise?
So if AMD has exactly the same category of CPUs as Intel, why not keep the comparison apples to apples?
I expect the '10800X3D' to be way faster than 9% relative to the 9800X3D. Waaay more, ~30%. New node, more 3D cache, higher clocks, more cores, etc. The current gap between the 9800X3D and Arrow Lake is already ~20%, depending on measurements, and the gap between the 10800X3D and the 285K, or a '385K' refresh, will be higher than that. You can see now how difficult a task Nova Lake CPUs have ahead of them: they would need to lift gaming performance by ~50% compared to Arrow Lake in order to win in gaming against Zen 6 X3D CPUs.
The latest review (9950X3D) is missing a lot of CPUs, but I can make some extrapolations using past data.
If I compare the 9950X3D with the 5800X3D, there appears to be about a 12, maybe 15%, difference at 1440p with a 5090; the 9950X3D has more cores/threads (although by means of two CCDs), more regular cache, higher frequency, higher IPC, DDR5 vs DDR4, a two-generation advantage, a smaller node, etc.
It's a somewhat forced example, as the CPU configs are a bit different from a 9800X3D vs. 10800X3D comparison, but I'm using what's available.
So in that case the gains were definitely there but hardly 30%. That is after two generations.
And you're expecting a 30% uplift from one generation to the next.
Optimistic to say the least. If it happens then great.
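Put another way, if ~12-15% total came over two generations, the implied per-generation gain is small; a quick sketch using those same rough figures:

```python
# Implied per-generation gain if 5800X3D -> 9950X3D adds ~12-15% total over two
# generations (figures are the extrapolation above, not a single measured source).
for total in (1.12, 1.15):
    per_gen = total ** 0.5
    print(f"{total - 1:.0%} over two gens -> ~{per_gen - 1:.1%} per generation")
# ~5.8% and ~7.2% per generation, versus the ~30% single-gen jump projected earlier
```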
 
Good for Intel and AMD! They're listening to us when we tell them we don't want this AI crap on our personal PCs. AI needs to stay in the cloud where it belongs.
 
That's really not relevant and not where I was going.

I was pointing out that these NPUs are not powerful and are not providing any new capabilities or 'killer' app functionality.

If they were, we'd know about them on our much more powerful desktop systems.
NPUs were never about peak performance, but about efficiency for sustained AI tasks like real-time video/audio processing or small LLM/subject recognition. That's why AMD and Intel both prioritized laptops. The problem is that Intel's H and HX chips are largely based on the desktop SKUs, which will make their ARL refresh laptops weaker in that respect vs the competition.
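As a concrete illustration of how apps tap an NPU for that kind of sustained work, here's a minimal sketch using ONNX Runtime's execution providers; the model path is a placeholder, and the NPU-backed providers (VitisAI for AMD XDNA, OpenVINO for Intel) only show up if the matching runtime package and drivers are installed:

```python
import numpy as np
import onnxruntime as ort

# "model.onnx" is a placeholder; any small vision/audio model works the same way.
MODEL_PATH = "model.onnx"

# Ask for an NPU-backed execution provider first, then fall back to CPU.
# Which providers are actually available depends on the installed build/drivers.
preferred = ["VitisAIExecutionProvider",   # AMD XDNA NPUs
             "OpenVINOExecutionProvider",  # Intel NPUs/iGPUs
             "CPUExecutionProvider"]
available = ort.get_available_providers()
providers = [p for p in preferred if p in available]

session = ort.InferenceSession(MODEL_PATH, providers=providers)
print("Running on:", session.get_providers()[0])

# Dummy input shaped like a single 224x224 RGB frame, the kind of small,
# sustained workload (camera effects, subject recognition) NPUs are built for.
inp = session.get_inputs()[0]
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {inp.name: frame})
```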

Good for Intel and AMD! They're listening to us when we tell them we don't want this AI crap on our personal PCs. AI needs to stay in the cloud where it belongs.
Both of them are still planning to develop chips with powerful NPUs, though... that news is just about Intel not changing the chiplet configuration and AMD not making a new APU with a big GPU; Ryzen AI and Lunar Lake are not being discontinued.

Local AI is also not going anywhere; the NPU isn't only used by Copilot, and more and more apps are implementing tools that can be accelerated by an NPU. And God knows PC laptops need that, seeing how shitty Nvidia/AMD dGPUs are unplugged compared to a MacBook. PC laptops can compete when plugged in, but get rag-dolled by a MacBook when unplugged, often delivering half or even a third of the performance, without even having an advantage in battery life.

Cloud processing is also expensive; just ask OpenAI whether they're profitable yet rather than operating at a loss :D. Adobe introduced credits and other usage limits that get used up every time you use their cloud-only generative AI.

I don't want to see ads in my already kinda expensive subscription/paid-for desktop applications, so I do not wish for an over-reliance on the cloud. Companies aren't going to eat that additional cost for us. Ads in Windows are probably why there's a free version of Copilot in the first place, and also why they increased the price of OneDrive 365.
 
Those things are not mutually exclusive. And apparently lots of people disagree with your idea, since they have bought Strix Halo anyway.

How many is “lots”? Do we have sales figures? And how many of those sales were of large-RAM models? Last I checked, there were multiple configurations of Strix Halo available. In fact, the lowest MAX configuration, sporting 32GB of RAM, would be perfectly fine for gaming and even most professional workloads, but less desirable for specifically running large LLMs. The unfortunate fact, too, is that most OEMs don't give AMD the priority they deserve in the mobile space. Given the already high price, it's very likely that Strix Halo will remain a low-volume part, with the even more expensive high-capacity RAM models being an even rarer niche. And I'm sure the OEMs would agree.
 
The company won't learn. If it's not ready, delay the release until it is ready; that's far better than releasing a half-arsed product just to meet some kind of schedule.
 
It's relevant because if you're paying top dollar for expensive builds (in which the GPU has a massive contribution) you will play at the highest playable settings the build can achieve, otherwise what's the point?
"Highest playable settings" mean different things in different games for different people. 1440p/30Hz can often be tolerable in slow games, such as steady flights in Flight Simulator, but 1440p/60 is often not tolerable in faster games. In that case, lower settings allow CPU to bring more fps and help GPU that lost steam in native rendering and Ultra settings. The most capable CPU in 1080p will be able to do those things better than less capable ones in 1080p.
You're using the CPU in a particular build, for which the GPU sets the performance level, so if you have a flagship GPU you will be using higher graphical settings. It's likely the CPU will be a flagship as well, therefore its performance at 720p/1080p is irrelevant in this context.
It doesn't really matter what a CPU does at 1440p and 4K if we already know what it does at 1080p. If you know how much a CPU contributes at 1080p, that's all you need to know to be confident that such a CPU will help you more than others across varied scenarios at higher resolutions. The higher the resolution, the closer CPUs come to each other. As I said, at 4K the top 30 CPUs are literally within 5-6% of each other. So, which one do you choose as a 4K gamer if the top 30 CPUs are very similar? Again, look at which one is better at 1080p. Only 1080p will tell you which CPU will give you more juice at higher resolutions with any settings tweaked below Ultra.
I doubt that people using a 4090 and 5090 buy CPUs to pair with these cards based on the 720p/1080p performance charts.
And why do monitor vendors sell 4K/240Hz monitors with a dual-mode 1080p/480Hz option? Ever wondered why?
 
Why is the X3D talk trickling down here?? hahaha.
 