
Intel Plans "Arrow Lake Refresh" for H2 2025 with Higher Clocks and Upgraded NPU

Can't Intel just figure out that all-E-core CPUs for the Ultra 3 (and lower) plus all-P-core CPUs for the Ultra 5/7/9 are just fine?
That wouldn't be good. Intel's biggest market is businesses, and that's the exact opposite of what businesses need. If you want a little micro-PC with an Ultra 3, you want 1 or 2 P-cores for single-threaded tasks so it feels snappy. And for the bigger workstations, a 4-core E-core cluster provides far more multithreaded power than the 1 P-core that takes up the same space or power, so you actually want to load up on E-cores in the higher SKUs.
 
Which AMD is currently thriving in... without E-cores. Honestly, this looks like IA64 vs. x86-64 from the 2000s all over again.

When your old E-cores are getting closer and closer to P-cores (in transistor count/complexity), and you have to make a "new" E-core to reclaim the "magic energy efficiency" metric, you f***ed up, and should rethink your plans from this point onwards.

PS. I'm pretty sure most office workers would be just fine with quad/octa E-cores even today vs. one or two P-cores + an "LPE-core".
 
Maybe next gen, gaming will get some love from both sides. If it wasn't for X3D, gaming on Zen 5 would have been a letdown as well.
Unfortunately, the endless attention for this gimmick has become a hindrance to innovation, and I seriously hope the rumors of Intel adding more L3 cache are untrue.
Adding lots of L3 doesn't make the CPU faster at logic, which is what makes a CPU responsive in games and applications; instead it mostly helps bloated code, especially when you exaggerate the conditions by running artificially low GPU loads etc. No one runs a high-end GPU at 720p/1080p/low details in reality, so we shouldn't care about edge cases.
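To put a rough number on that, here's a minimal pointer-chase sketch of my own (the sizes are assumptions, not from any review): latency per load stays flat while the working set fits in L3, then jumps once it spills to DRAM, which is exactly the kind of bloated working set that extra L3 papers over:

```c
/* Minimal pointer-chase sketch: serial dependent loads over a random cycle,
 * so neither the prefetcher nor ILP can hide the latency. A working set
 * that fits in L3 (~4 MB here) stays fast; one that spills to DRAM
 * (~256 MB here) pays near-full memory latency on every load. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double ns_per_load(size_t bytes, size_t steps)
{
    size_t n = bytes / sizeof(size_t);
    size_t *next = malloc(n * sizeof(size_t));
    for (size_t i = 0; i < n; i++)
        next[i] = i;
    /* Sattolo's algorithm: one random cycle through all n slots
     * (assumes RAND_MAX is large enough, as on glibc). */
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t p = 0;
    for (size_t s = 0; s < steps; s++)
        p = next[p];                       /* each load depends on the last */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    volatile size_t sink = p; (void)sink;  /* keep the chase from being optimized out */
    free(next);
    return ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / (double)steps;
}

int main(void)
{
    printf("fits in L3:     %.1f ns/load\n", ns_per_load((size_t)4 << 20, 20000000));
    printf("spills to DRAM: %.1f ns/load\n", ns_per_load((size_t)256 << 20, 20000000));
    return 0;
}
```

If the hot data already fits, the giant cache buys you very little; that's the whole argument.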

I would much rather have ~5% higher IPC than a whole chiplet of L3 cache, especially if it meant meaningful front-end improvements, which for AMD are sorely needed.

Outside of gaming the 285K was super solid. I just think the i5 is too compromised to matter for gaming anymore, and that was their bread and butter for so long.
While I don't possess their binning data, it seems like Intel "screwed up" the clocks of the Ultra 5 245K, which is the counterpart of the Ryzen 5 9600X and would be an excellent choice for gaming if tuned properly. I don't know whether or not this was a marketing decision by Intel, but the end result is Intel losing out on a lot of sales instead of shifting volume up to the 285K.

It's the same thing in reverse from when Zen 2 released: the 3900X obliterated the 9900K at MT tasks and was like 5% slower in realistic gaming scenarios. I had both, so I am fully aware. You still had the Intel brigade saying Zen 2 was trash, etc.
I don't play team sports, and I strive to have a representative selection of both to validate software; I have had/used most except the very latest. It's certainly not as simple as one being better at everything, and Zen 2 and 3 are certainly good where they shine. Zen to this day scales very well with batch workloads, and now extremely well with heavy AVX-512 loads. Intel has consistently had a lead in snappiness/responsiveness, which is why many applications "feel" better, even though most productivity benchmarks only measure a big batch load and are therefore often not representative of the time spent in the application. This also applies to gaming, and even benchmarks with 0.1% low measurements don't tell the whole picture about smooth rendering. There is also variance in input latency, which can be just as annoying, if not worse.

For now I'm fairly "annoyed" at Intel for constantly changing their hybrid designs, so in the worst case I may have to get samples from Alder Lake/Arrow Lake/Nova Lake to validate proper scaling of my software... :(
 
Which AMD is currently thriving in... without E-cores. Honestly, this looks like IA64 vs. x86-64 from the 2000s all over again.
AMD is thriving in the heavy workstation market with Threadrippers, but that's not what I'm talking about. I'm talking about cheaper, maybe $2000-3000 workstations with consumer CPUs from Dell or Lenovo. You run professional software, but still on a consumer-grade CPU, since you can't afford a Threadripper or don't need that much multithreading power, memory bandwidth, or PCIe lanes. The E-cores are absolutely killer for these because they provide multithreading power. The 285K matched the 9950X in multithreading despite an 8-thread deficit.
When your old E-cores are getting closer and closer to P-cores (in transistor count/complexity), and you have to make a "new" E-core to reclaim the "magic energy efficiency" metric, you f***ed up, and should rethink your plans from this point onwards.
Huh? When is this happening? If this is about the Low Power Island, that's not at all accurate. That's Intel putting the iGPU, memory controller, and a couple of E-cores on a separate part of the CPU (like Ryzen's IO die), so the rest of the CPU can be turned off if you're not using it.
 
Intel's 18A-PT node with PowerVia (1.8 nm-class) brings backside power delivery, and up to 52 cores, including ultra-low-power cores, are supposed to come with Nova Lake and Razer Lake.

Cheers
 
Huh? When is this happening? If this is about the Low Power Island, that's not at all accurate. That's Intel putting the iGPU, memory controller, and a couple of E-cores on a separate part of the CPU (like Ryzen's IO die), so the rest of the CPU can be turned off if you're not using it.
It already happened; it's called Arrow Lake (P-cores and E-cores getting closer in performance).
The LPE-core (a third type of "x86 core" that's getting added) is the "new thing" in Nova Lake, because the "usual E-cores" are apparently just too high-power.
It's NOT going to work how Intel wants it to, because Microsoft's OS is too dumb to actually make it work right (Linux can, but that's beside the point).
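For what it's worth, on Linux you don't even have to wait for the scheduler to get it right. A quick sketch using sched_setaffinity(2); the core IDs here are hypothetical stand-ins, check lscpu or /sys/devices/system/cpu/ for the real P/E/LPE layout on a given chip:

```c
/* Sketch: pin the current process to a chosen set of cores on Linux.
 * Core IDs 16-17 are hypothetical stand-ins for E-cores; the real
 * P/E/LPE numbering differs per CPU (see lscpu --extended). */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(16, &set);   /* hypothetical E-core */
    CPU_SET(17, &set);   /* hypothetical E-core */

    /* 0 = the calling process; the kernel will now only run it on these cores. */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("pid %d pinned to cores 16-17\n", (int)getpid());
    /* ... background/batch work here stays off the P-cores ... */
    return 0;
}
```

(`taskset -c 16,17 <command>` does the same from a shell.)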
 
It was never really amazing. Today, and at the time of release, it was always cheaper to just buy a new mobo and a new, faster CPU than what the 5800X3D cost on its own. Not to mention that besides the gaming performance (which is pretty average by today's standards), the processing power is that of a calculator; even a $170 i5-13600K just walks all over it. I always thought it was more a CPU for people who can't be bothered to buy a new mobo and set up their PC (which is a valid reason to buy it, don't get me wrong) than a genuinely good, sensibly valued option.
I'm sorry, but please make sure you can back up these statements beyond "trust me, bro".


It was ~$150 cheaper than the "equivalent" 12900K/KS and not much more than the 12700K, both of which required DDR5 to really shine, and DDR5 was INSANELY expensive at the time compared to a performant DDR4 kit. Near release it was 2-3 times the cost for RAM alone, plus the motherboard replacement was another decent chunk of change, especially for boards focusing on PCIe 5.0 connectivity.
I can still pick up a 5700X3D for ~£200, which is the same processor at just a slightly lower clock bin. I cannot build any platform for that kind of money, no matter what you say.

Even now a 5700X3D is within 3-10% of equivalently priced options without needing the RAM/mobo upgrade, and you can either put those funds toward a GPU (which we all know are extortionate at the moment) or wait for Intel's and AMD's next-gen offerings, which should offer a decent performance upgrade versus the current ones.



So if you are on AM4 at the moment and have the ability/funds to upgrade to a 5700X3D/5800X3D but they don't stretch to a full rebuild, you are mad to think of any other option if you are fairly focused on gaming. You do not, however, build a new AM4 rig in the year of our Lord 2025, as there are better options out there from both Intel and AMD in everything else.

AMD has more longevity, as you should still have at least one more generational upgrade on current AM5, and their current options are among the best for gaming while not being a slouch in productivity either.

Intel has SOME interesting options, like the 14600/14900 depending on what your focus is, and the Core Ultra stuff if you are purely productivity-focused, but knowing that all their platforms are EOL with no real upgrade options puts me off advising investing massively in either.
 
A 12700F + a B660 MSI board (HUB used and reviewed that exact combo) was 430 euros. The 5800X3D on its own was 450. Full stop. Not going to argue about this; take it or leave it.
 

OK dude, thank you for confirming you'd need to pay 200% for an "equivalent" upgrade. Please read the complete post before spitting out the first pro-Intel thing you think of.
 

The 5700X3D did not exist back then, but sure, whatever makes you feel good.
 
If mobile Arrow Lake also gets a refreshed NPU, then all the Arrow Lake chips will have a redesigned SoC tile. That means the media engine, memory controller, and LPE cores could also be updated, perhaps even replacing the Crestmont LPE cores with Skymont ones. Although I imagine Intel would consider a newer node than N6 necessary for Skymont LPE cores.
 
Unfortunately, the endless attention for this gimmick has become a hindrance to innovation, and I seriously hope the rumors of Intel adding more L3 cache are untrue.
Adding lots of L3 doesn't make the CPU faster at logic, which is what makes a CPU responsive in games and applications; instead it mostly helps bloated code, especially when you exaggerate the conditions by running artificially low GPU loads etc. No one runs a high-end GPU at 720p/1080p/low details in reality, so we shouldn't care about edge cases.
...
+1 -- I wouldn't have agreed with this take until my first X3D processor experience.

It very often puts FPS in the places you want it least (average FPS and highs) but does much less when the system needs to go back to RAM and load things in (i.e. stutters), so you can end up with wild swings in performance, especially in badly coded UE5 games and console ports.

Nothing quite like looking at an FPS graph that says 180 while the game feels like G-Sync is off at 80 FPS at 4K when fully GPU-bound.
 
You took the words right out of my mouth. Amen! Well said!
 
TBF you also had a high-end CPU (a 13700K, right?) when you swapped to the 9800X3D, so yeah, it stands to reason that it feels the way it does. I had a similar experience; I 100% preferred my 12900K over the 3D. But my friend who upgraded from a 5800X3D was shocked at how fast the 9800X3D is, not just in games but in general usage as well, even for MT workloads.
 
Yes, I had a 13700KF at a flat 5.5 GHz with a ring OC and a DDR5-7600 CL34 kit tuned on subtimings. It was a great setup.
 
I do enjoy gaming on the ARL platform I have; it's more "consistent" in game feel than the AMD platform (the dips and input latency are worse at times).
Not a problem on my machine.
 
Single-CCD chips don't have "many" issues, or you're looking at the wrong stats. You can scour the whole Internet for those issues; YMMV (normal users won't really complain).
 
It very often puts FPS in the places you want it least (average FPS and highs) but does much less when the system needs to go back to RAM and load things in (i.e. stutters), so you can end up with wild swings in performance, especially in badly coded UE5 games and console ports.

Nothing quite like looking at an FPS graph that says 180 while the game feels like G-Sync is off at 80 FPS at 4K when fully GPU-bound.
I'm glad more people are starting to realize it, and hopefully the awareness will eventually affect the quality of games, applications, drivers, OSes, hardware, etc.

There are many types of stutter, and just as many causes. But one thing to be aware of is that some stutter may not be easy to measure in software, especially when it comes to "microstutter". I became aware of this many years ago while writing a rendering engine, when I noticed that running a latency-optimized Linux kernel resulted in a much smoother experience. Even running side by side next to a more powerful GPU, the difference was quite noticeable. (I hope to revisit this sometime on modern OSes and see how it behaves today, but I don't have high hopes considering how Win 11 "feels".)

I would argue that very frequent tiny stutter can be worse than an occasional slow frame, even if the slow frame is much more significant (this may be subjective). And some stutter may affect the kernel, and in such cases it may also affect the clock, in which case you simply can't trust the measured results. This is why the measurements may look "perfect" while you can certainly "feel" the difference. In such cases you should trust your observation more than the "measurement".

I wish we could have proper deep-dive "reviews" of gaming setups done by skilled engineers using debug tools, and possibly even customized tools, to single out the various sources of latency in the system. Nevertheless, I know what the findings would look like; especially in software there is more crap than you can ever fathom.
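To make the measurement point concrete, here's a small sketch (with made-up frame times) of how the usual 1%/0.1% lows are derived. Note that they are computed from timestamps; if a stall also stalls the clock source behind those timestamps, the bad samples never appear in the data, which is exactly the problem above:

```c
/* Sketch of what "1% / 0.1% lows" measure: average the worst tail of the
 * per-frame times. The frame times below are made up: mostly 5.5 ms with a
 * 40 ms spike every 200 frames -- the average looks great, the tail doesn't. */
#include <stdio.h>
#include <stdlib.h>

static int cmp_desc(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x < y) - (x > y);             /* sort frame times, worst first */
}

static double low_fps(const double *ms, size_t n, double pct)
{
    size_t k = (size_t)(n * pct / 100.0);
    if (k == 0) k = 1;
    double sum = 0;
    for (size_t i = 0; i < k; i++)
        sum += ms[i];                     /* worst k frame times */
    return 1000.0 / (sum / (double)k);    /* mean of the tail, as FPS */
}

int main(void)
{
    enum { N = 1000 };
    double ms[N], total = 0;
    for (int i = 0; i < N; i++) {
        ms[i] = (i % 200 == 0) ? 40.0 : 5.5;   /* hypothetical capture */
        total += ms[i];
    }
    qsort(ms, N, sizeof(double), cmp_desc);
    printf("average:  %.0f FPS\n", 1000.0 / (total / N));  /* ~176 */
    printf("1%% low:   %.0f FPS\n", low_fps(ms, N, 1.0));  /* ~44  */
    printf("0.1%% low: %.0f FPS\n", low_fps(ms, N, 0.1));  /* 25   */
    return 0;
}
```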
 
Consistent microstutter is far more aggravating than the odd dip in frames.
 
I'm looking to build an Intel game streaming server, and this situation is not very nice: buy Arrow Lake now, wait for the refresh, or wait a year for Nova Lake.

300 EUR for a Core Ultra 7 and 130 EUR for a B860 mobo is not a bad price, though.
 
Honestly, used Arrow Lake chips are super cheap right now; you can get the 7 for ~225 EUR.
 
Where? I just checked eBay and there are some used ones in the US from 244 EUR, and even new for 271 EUR, but I didn't find anything under 300 in Europe.
Maybe prices went back up; I was shopping around last week and was able to nab one in the States.


It dipped below $250 for a new one not long ago. It seems the EU is still very expensive for ARL.
 
Nothing quite like looking at an FPS graph that says 180 while the game feels like G-Sync is off at 80 FPS at 4K when fully GPU-bound.

This sums up my experience as a whole with AMD and Radeon in general, lol. I appreciate influencers for the way they promote competition between companies, but at the end of the day they do not use these CPUs day to day and just pump numbers for content... which is ironic, because they're all chasing 3DMark scores with Intel CPUs as we speak :laugh:. I hope Intel lowers pricing on Nova Lake given all the bad rep they got with the Arrow Lake launch. I may be in the minority saying it, but Intel still has the more sophisticated CPU and smoother platform based on my testing.
 