
Intel Core Ultra Arrow Lake Preview

It's an easy sell because Intel have conveniently ruined the last TWO generations by letting 13th and 14th gen CPUs degrade to the bone.

I doubt people who just had their Intel CPU ruined would rush to buy another one. And Intel downplays the issue, saying it affected only a small number of CPUs.
 
The entry-level model with 6 P-cores (245) will be plenty of what a gamer needs for years to come; the 8P version (265) is the more future-proof choice within this Ultra lineup. Although I doubt this will sway many consumers, as the LGA 1700 socket seems the stronger offering despite its discontinuation.
The generation after, the 16th, and especially the 17th, will probably drive stronger consumer sales, especially amongst gamers.
This presentation magnifies the improvements because the highest component is compared against the highest, but these deltas will shrink further down the lineup, making it all the more interesting to see what the first 6P and 8P parts (245, 265) can do.
 
Tell me one reason why gamers need those 6 cores instead of an X3D?
How much any future generation will bring is irrelevant at this point, because we're still waiting for the 15th generation.
This presentation magnifies (maybe) the improvement of the highest-end parts vs degrading components. And hopefully this is true, because otherwise it will be degradation vs degradation.
 
The commentary was only about Intel CPUs, not AMD. The 4-core era is now being abandoned but not yet 'closed' (from a gamer's perspective); I did some research on it, and it looks like around 2026, game titles on a 4-core will really start to stutter and barely sustain a 60 fps framerate.
A modern 6-core is therefore much faster thanks to its intrinsic design improvements; roughly, take an older 4-core like the i7-3770, divide its performance by two and multiply by three. That puts it close to, equal to, or above the benchmark scores of even 8-cores like my 12700K. But before this 6-core performance sinks to the level the 4-core is at now, I think we will be somewhere around the year 2030.

X3D is a good CPU design decision; interesting that you mention it, as I thought about it too when seeing the rather meagre amount of cache on e.g. the 14900. Forgive me if I'm wrong, but the cache levels in this 15th gen don't look that generous either.
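The halve-then-triple estimate above can be written out as a quick sketch; the baseline score is purely illustrative, not a measured result:

```python
# Back-of-envelope sketch of the estimate above: take a 4-core chip's
# multi-core score, halve it, then multiply by three to approximate a
# modern 6-core with stronger cores (a net 1.5x).
# The baseline below is an arbitrary placeholder, not benchmark data.

def six_core_estimate(four_core_score: float) -> float:
    """Halve the 4-core score, then triple it (net 1.5x)."""
    return four_core_score / 2 * 3

baseline = 100.0  # hypothetical i7-3770 multi-core score (arbitrary units)
print(six_core_estimate(baseline))  # 150.0, i.e. 1.5x the baseline
```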

In terms of degradation, I assume you mean the issue the 14th gen is affected by, and hence that the 15th gen may also have similar problems.
 


For more than 10 years the optimal number of cores for gaming has been 8, and I don't see any movement in that direction.
Eight cores either do or don't have enough performance to run games flawlessly, depending on the chip. Both companies ship more cores for every other reason, but not for games.

Yes, we hope the degradation problem will be fixed in the new generation.
 
Could you elaborate on this? In your opinion, were 8 cores already needed for PC gaming earlier than 2014?
And by "the 8 cores have or don't have enough performance to run games flawlessly", do you mean this implies e.g. older 8-cores vs newer ones?
 
Yeah, something like that.
Yes, older vs newer.

I'd be interested if Intel or AMD did something to increase the number of P cores in some workable way, but...
 
Considering these are Intel's slides, the new series is, as expected, slower than the previous gen in gaming even though they ditched HT. Add to that Zen 5's SMT-disabled review on TPU and the 24H2 uplift... the 9800X3D would be overkill against these. Still, it's good that Intel is ditching those 400 W, horrible-P/W monsters. With any new chip, whenever a company needs to resort to high power consumption and high clock speeds, it always ends badly for them, because those are last resorts... I'm optimistic about next-gen Intel processors. They just need to perfect these now.
 
Would be interesting to see E-core single-thread and multi-thread benchmarks.
12 E-cores @ 4.3 GHz ≈ a 7600X or 13600K in multi-threading. With an extra 30% performance for Arrow Lake, that means the equivalent of a 7700X.
In single-thread they are weak, but those tasks are taken over by the P-cores.
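The multi-thread estimate above is simple scaling arithmetic; as a minimal sketch, with normalised placeholder scores rather than measured data:

```python
# Rough sketch of the estimate above: if 12 E-cores at 4.3 GHz land
# around 7600X/13600K multi-thread territory, a ~30% uplift for Arrow
# Lake's E-cores would put the cluster near 7700X territory. The
# normalised score of 1.00 is an assumption, not a benchmark result.
ecore_cluster = 1.00   # assumed: ~7600X / 13600K multi-thread level
arrow_uplift = 1.30    # the ~30% generational gain cited above
estimate = ecore_cluster * arrow_uplift
print(estimate)        # 1.3, i.e. roughly 7700X-level on this scale
```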

ecores.jpg
 
Nice daisy-chain bottleneck design.
Let's have two PCIe 4.0 links go into a single PCIe 4.0 x4 link, with another PCIe 4.0 link added, back into a final PCIe 4.0 x4 link.
So you can have 12 PCIe lanes trying to jam themselves through a single PCIe 4.0 x4 link.
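As a rough sanity check of the oversubscription described above, here is a back-of-envelope sketch (assuming the usual ~1.97 GB/s per PCIe 4.0 lane; real chipset links also carry protocol overhead):

```python
# Sketch of the daisy-chain bottleneck: 12 PCIe 4.0 lanes' worth of
# downstream devices funnelled through a single PCIe 4.0 x4 uplink.
# PCIe 4.0: 16 GT/s per lane with 128b/130b encoding, in GB/s.
GBPS_PER_GEN4_LANE = 16 * (128 / 130) / 8   # ≈ 1.969 GB/s

downstream = 12 * GBPS_PER_GEN4_LANE   # aggregate device bandwidth
uplink = 4 * GBPS_PER_GEN4_LANE        # the single x4 link upstream
print(round(downstream, 1), round(uplink, 1))  # ~23.6 vs ~7.9 GB/s
print(round(downstream / uplink, 1))           # 3.0x oversubscribed
```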
 
It's a course correction, I agree, and some slightly slower performance is OK; indeed they need to improve upon this base design, so I think 17th gen will be it. But this doesn't change the fact that a PC builder has to pay top dollar for negligible fps improvement, for an LGA 1851 motherboard plus the CPU (hence my interest in what the entry/mid-range models do...). I also look at it differently because I have a mini-ITX build: how long will it take for an affordable, durable mid-spec motherboard to be released for such a build? I'm not optimistic about the commercial success of this lineup.
 
ECC is supported by the architecture, though not on the Z890 chipset, nor by the processor models being announced today.
Intel begs to differ. Alas, you still need a motherboard with an enabled chipset (probably W880).
 
Well, 550 W at the wall is pretty low for a 4090; it's probably underutilised. CPU total power is very suspiciously sitting at 125 W, and that's the least demanding game; to get a 165 W difference in Warhammer, the 14900K must be running at 250 W, and that's not possible for a game. This could mean the results have been doctored. And they say in their slides that at 125 W and 250 W the performance is exactly the same.
Did you see the Intel slide with 447 W? That was not total system draw. An MSI live stream showed that a 14900K with a 3090 or 4090 could pull over 1100 W from the wall.
 
WTaF. So you know the 9800X3D price? You'll actually know soon, and it will be a surprise. I suspect it's going to be $399, and leaks show it smashing the 7800X3D across the board, in games and productivity.

I'm a speculative man: AMD's playing hard to get with the 7800X3D, limiting stock to drive up prices, essentially to position the 9800X3D as a premium-performance lord-saviour, which IMO we'll be lucky to see launch at the 7800X3D's original MSRP. I bet higher! Can't blame AMD though; people are willing to pay ridiculous amounts of money for top-grade gaming magic beans. AMD deserves some of that loot, they've come a long way with Ryzen and the Beanstalk.
 
I am sure that the 2-3 percent, plus or minus, can turn into millions of dollars of loss or gain, for the shareholders of the two rivals, of course. For home use and most users, here's a small example:
1. Processor set to a maximum of 65 W (PL1/PL2), IccMax 170 A
2. Processor set to PL1 125 W / PL2 200 W (56 seconds), IccMax 250 A
The RTX 3070 Ti feels nothing. It returns the same results.

Cyberpunk 2077_CPU PL 65W.jpg
Cyberpunk 2077_CPU PL2 200W.jpg
 
Kinda; it only uses 1 chipset vs AMD's dual Promontory for the E variants.

Right, that is another consideration.

Besides Intel offering more useful PCIe lanes, Z890 needs only 1 chipset!

X870E is objectively the worst of the 3 options; it sacrificed 4x Gen5 for the ASM4242, for which 4x Gen4 would be sufficient.

Intel Z890 PCIe lanes
16X gen5 PEG
4X gen5 nvme
4X gen4 nvme or motherboard vendor choice
4X gen4 for integrated TB4
8X gen4 downstream to chipset

AMD AM5 PCIe lanes
16X gen5 PEG
4X gen5 nvme
4X gen5 nvme (X670E) OR integrated TB4 ASM4242 (X870E)
4X gen4 downstream to chipset.
 
Would you be so kind as to share how you realised this undervolt, and with which application? Do you also have a power limit applied to your GPU, and if not, why? Thanks
 
The CPU's power limits (PL1/PL2) and current limit (IccMax) should be configurable in the BIOS (UEFI) of every consumer motherboard.
Some motherboards refer to PL1/PL2 as the long/short term power limits.
 

As far as I'm aware, not every setting works for everyone even with identical components; could you provide safe parameters to apply for a 12700K?
 

Using lower power/current limits will not damage your CPU or make it unstable; it only limits how hard the CPU can be pushed thermally and electrically.

The i7-12700K has an IccMax of 240A and Intel-recommended power limits of PL1=125W, PL2=190W, Tau=56s.
The i7-12700 (non-k) uses IccMax=220A, PL1=65W, PL2=180W, Tau=28s, as well as lower frequencies.

Motherboard manufacturers rarely follow these limits, so configuring them could be a start. Configuring PL1=PL2=125W would also be an easy way to limit power without significant (or any) practical changes in most applications except benchmarks.
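As a minimal sketch of how those limits interact, using the Intel-recommended 12700K defaults quoted above (real hardware enforces PL1 via an exponentially weighted moving average, not this hard cutoff):

```python
# Illustrative model of PL1/PL2/Tau, with the Intel-recommended
# i7-12700K defaults quoted above. This is only a sketch of the idea:
# the actual turbo budget uses a moving average, not a hard cutoff.
PL1, PL2, TAU = 125, 190, 56  # watts, watts, seconds

def allowed_package_power(seconds_under_full_load: float) -> int:
    """Rough cap: PL2 for up to Tau seconds of sustained load, PL1 after."""
    return PL2 if seconds_under_full_load <= TAU else PL1

print(allowed_package_power(10))   # 190: short burst, PL2 applies
print(allowed_package_power(120))  # 125: sustained load, PL1 applies
```

Setting PL1 = PL2 = 125 W, as suggested above, simply makes this cap flat regardless of load duration.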

If you also lowered your frequencies to i7-12700 levels, the CPU would operate more efficiently in general, but that will require more tinkering/care.

People usually undervolt for increased efficiency, but that requires thorough testing so as not to make your CPU unstable; there's no single safe value that works for everybody or every CPU.
 
While we only have a rough idea of the performance yet, achieving this performance without SMT(HT) is a great achievement, and it greatly reduces the risk of crippling hardware bugs, like we've seen so many of in recent generations.

As we can see from Intel's P-core design goals, this generation is about future scalability. The removed constraints from dropping SMT, which will probably take a couple of iterations to fully leverage, at the very least indicate that Intel has great improvements coming down the line. (But if you're sitting on an Alder Lake or Raptor Lake, you might want to skip this one…)
I hope this added flexibility means they will be bolder in making their big Xeons more powerful using more execution units.

And in these slides, Intel didn't mention that its new CPUs will have AVX-512, like the AM5 CPUs do.
The seemingly missing AVX-512 is a grave mistake, so the performance advantage of Zen 4/5 here will only increase over time.

and it also didn't show any competitors to AMD's "3D cache" CPUs.
<snip>
Intel should have added more L2 and L3 cache memory to improve gaming performance, instead of adding this excessive amount of E-cores.
I'm glad Intel doesn't add piles of L3 cache, as this is mostly an "optimization" (or rather a compensation) to help bloated software, and I don't want to give game or application developers an incentive to stop optimizing their code. Large L3 only significantly helps select applications and games, and the rest of current games are largely not bottlenecked by faster CPUs from either vendor.

We can't fully predict what future games will look like, but if they are computationally dense, they'll scale better with CPUs that have more powerful cores. If they are even more bloated, then there is a chance large L3 caches will help.

But the E-cores are a gimmick though…

For these reasons, I still think the AM5 platform is more advantageous.
It depends on what your use case(s) are. If it's only gaming and "light" office use, then priorities will be quite different from those of a "power user"/prosumer, who typically runs a mix of "productive" applications and potentially gaming. At least compared with the Alder/Raptor Lake family, Zen 4/5 has offered a compelling alternative for gamers, especially for more mid-range builds, as the efficiency offers savings in terms of lower cooling requirements.

But as I was saying in another thread, Zen 5 looks very appealing at first glance to prosumers, with 12- and 16-core options, AVX-512, etc., but then, looking at the finer details, there are serious compromises on the IO side. With precious PCIe lanes tied up by USB4, only 4 lanes to the chipset, the 2nd and 3rd M.2 slots sharing lanes with the graphics card, and per-motherboard limitations in how chipset lanes and SATA ports are configured, prosumers are likely to end up in a situation where their choice of motherboard limits them somehow down the road.

Arrow Lake's Z890 motherboards seem to be better in this regard, but the last point is valid here too: very few Z890 motherboards come close to offering the full features of their chipset (which is absurd in my mind), and the well-featured motherboards for Arrow Lake will likely be limited mostly to W880.

But this is what you get with mainstream platforms these days: significant compromises. It seems to me that mainstream is increasingly becoming a platform only for either pure gaming or "light office use", and once a user needs a little more, they are either hamstrung or need to keep upgrading systems more often. If you want a system to last 5-8 years of heavy use, then Threadripper and Xeon-W quickly become more attractive, despite their more limited availability.

I wish tech sites like this one gave more attention to these platforms (high-end workstations), so the vendors get more feedback and the increased attention drives motherboard vendors to make better lineups. But hopefully not with primarily an OC focus (like the limited YouTube coverage typically does), but more of a real-world prosumer focus.
 