
Intel Core Ultra "Arrow Lake-S" Launches in October

btarunr

With AMD having launched its new processor generation, all eyes are now on the Core Ultra 200 series "Arrow Lake." We finally have some idea about its launch date, thanks to a leak by Benchlife.info. Apparently, Intel is launching the Core Ultra 200 "Arrow Lake-S" series processor family on October 10, 2024. At this point it's not known if this will be a low-key media event, or a completely online launch, since Intel has pulled the plug on the Innovation 2024 event, which would have been the launchpad for these processors. The October 10 date aligns with past rumors that pointed to a Q4-2024 launch for at least the -K and -KF desktop processor SKUs targeting PC enthusiasts and gamers; with the series putting on bulk in 2025. We've gone into the probable SKUs Intel will launch in its first wave in our older article, here.



View at TechPowerUp Main Site | Source
 
The 265K doesn't look bad. It only lost 100 MHz vs the 14700K.
8+12 looks good.
I wonder what 8 P-cores look like without HT.
 
Desktop CPU innovation has slowed. Nowadays, all eyes are on NPUs, SoCs and compute GPUs. Given the following:

13900K to 14900K - 2.5% avg application increase
7950X to 9950X - 3.5% avg application increase

I am not holding out hope that Arrow Lake avg application increase will be anything significant. Especially with the removal of HT, the decrease in clock speed and an emphasis on E-cores.
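
For reference, an "avg application increase" like the figures above is typically just the geometric mean of per-application speedups. A quick sketch with made-up ratios, not real review data:

Code:
# How an "average application increase" is typically computed.
# The per-app ratios below are made-up placeholders, not review numbers.
from math import prod

ratios = [1.02, 1.04, 1.01, 1.03, 1.05]  # new_score / old_score per application

geomean = prod(ratios) ** (1 / len(ratios))
print(f"average application increase: {(geomean - 1) * 100:.1f}%")  # ~3.0%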
 
Well, is it

[Baseball Referee GIF by StickerGiant]
?
 
The 265K doesn't look bad. It only lost 100 MHz vs the 14700K.
8+12 looks good.
I wonder what 8 P-cores look like without HT.

On a per-core comparison, it looks pretty good. Especially MT, which is a little surprising without HT. I guess it shouldn't be surprising, though: the E-cores purportedly have IPC similar to Raptor cores, just with less cache.

 
Honestly I can't trust a single CPU these guys release for the next 2-3 years now.

And neither should you. These products were on the drawing board before 13-14th gen production. Nuff said. Are you sure you're buying what they say it is? The nagging feeling 'it might have degraded already' just wouldn't let me go honestly.

Intel took a fatal arrow to the knee in my book
 
Desktop CPU innovation has slowed. Nowadays, all eyes are on NPUs, SoCs and compute GPUs
That’s a mish-mash of terms. NPUs are, generally, a part of an SoC. Or a specialized ASIC, I guess, but that’s irrelevant for consumer use. Every modern desktop CPU IS an SoC. GPUs are compute units kinda by default, ever since the advent of GPGPU. We don’t have plain “graphics cards” strictly speaking and we haven’t had those for years. None of the above indicates that desktop CPU innovation has “slowed”. Arrow Lake is a fundamentally new way of building a CPU for Intel. It’s about as innovative as it gets.

Unless “innovation” only means performance increases for desktop tasks, the argument doesn’t hold water.

Honestly I can't trust a single CPU these guys release for the next 2-3 years now.

And neither should you. These products were on the drawing board before 13-14th gen production. Nuff said. Are you sure you're buying what they say it is? The nagging feeling 'it might have degraded already' just wouldn't let me go honestly.
An understandable concern, but they are so wildly different from the previous Raptor chips in every single way that it’s very unlikely that those defects have carried over.
 
An understandable concern, but they are so wildly different from the previous Raptor chips in every single way that it’s very unlikely that those defects have carried over.
I'm sure Intel told the world this was another 'ground-up' architecture thing. Whatever, I've heard this bullshit too often. AMD told me that too, numerous times, and it was a lie, every single time. All development here is iterative. And what's stopping them from inventing a whole new set of issues? As long as they are struggling for power (and market dominance), the danger zone remains.
 
That’s a mish-mash of terms. NPUs are, generally, a part of an SoC. Or a specialized ASIC, I guess, but that’s irrelevant for consumer use. Every modern desktop CPU IS an SoC. GPUs are compute units kinda by default, ever since the advent of GPGPU. We don’t have plain “graphics cards” strictly speaking and we haven’t had those for years. None of the above indicates that desktop CPU innovation has “slowed”. Arrow Lake is a fundamentally new way of building a CPU for Intel. It’s about as innovative as it gets.
You know what I mean. Don't make me start talking about TOPS!
 
AMD told me that too, numerous times, and it was a lie, every single time.
Yes, clearly Zen was just an iteration of Bulldozer. Every single time, right?

All development here is iterative.
Arrow Lake is about as big of a change for Intel as Zen was for AMD coming from Bulldozer. Like, it’s a CPU built on entirely different principles. The only iterative parts, arguably, are the cores themselves on the compute chiplet. Was the Raptor issue in the cores themselves? We don’t know.

And what's stopping them from inventing a whole new set of issues? As long as they are struggling for power (and market dominance), the danger zone remains.
Sure, hypothetically. Same can be said for all current players. Intel had a blunder. That’s true. And, since AL is so new in its entirety, taking a wait and see approach is wise overall since the first gen of anything will likely have teething issues. Not like Zen really matured until Zen 2.

You know what I mean. Don't make me start talking about TOPS!
I have heavy doubts that the AI performance will actually turn out to be too relevant in the long run for consumer tasks. And where it does, it can and will be handled by the GPU. Mobile chips might have a need for an NPU, but desktop? Meh. AI is an overrated gimmick for most people. The whole AI PC thing is just a marketing fad.
 
Yes, clearly Zen was just an iteration of Bulldozer. Every single time, right?


Arrow Lake is about as big of a change for Intel as Zen was for AMD coming from Bulldozer. Like, it’s a CPU built on entirely different principles. The only iterative parts, arguably, are the cores themselves on the compute chiplet. Was the Raptor issue in the cores themselves? We don’t know.


Sure, hypothetically. Same can be said for all current players. Intel had a blunder. That’s true. And, since AL is so new in its entirety, taking a wait and see approach is wise overall since the first gen of anything will likely have teething issues. Not like Zen really matured until Zen 2.


I have heavy doubts that the AI performance will actually turn out to be too relevant in the long run for consumer tasks. And where it does, it can and will be handled by the GPU. Mobile chips might have a need for an NPU, but desktop? Meh. AI is an overrated gimmick for most people. The whole AI PC thing is just a marketing fad.
The exception makes the rule, right? :D

As always, I am in wait and see mode :)
 
Do TDP numbers even mean anything at this point?
 
Do TDP numbers even mean anything at this point?
Theoretical consumption at base clocks with turbo disabled? But yeah, I catch your drift, they could at least show a "realistic range under load +-%" of sorts instead of just one number.
 
I wonder what 8 P-cores look like without HT.


HT trades places with one more integer unit: six ALUs in Lion Cove, so per-core performance should be about the same. It should also look better in games, with less clutter and stutter.
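
Back-of-the-envelope on that trade-off; the SMT uplift and wider-core gain below are assumed round numbers, not Intel figures:

Code:
# Toy model: per-core MT throughput with SMT vs. a wider core without SMT.
# Both scaling factors are assumptions for illustration, not measured data.
base = 1.00            # one current P-core, single thread
smt_uplift = 1.25      # assumed ~25% extra MT throughput from Hyper-Threading
wider_core = 1.10      # assumed ~10% gain from the extra ALU/width in Lion Cove

with_ht    = base * smt_uplift   # 8 P-cores, 16 threads
without_ht = base * wider_core   # 8 P-cores, 8 threads, fatter core

print(f"per-core MT throughput, HT vs no HT: {with_ht:.2f} vs {without_ht:.2f}")
# Whether the wider core closes the gap depends entirely on the real numbers.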
 
Desktop CPU innovation has slowed. Nowadays, all eyes are on NPUs, SoCs and compute GPUs. Given the following:

13900K to 14900K - 2.5% avg application increase
7950X to 9950X - 3.5% avg application increase

I am not holding out hope that Arrow Lake avg application increase will be anything significant. Especially with the removal of HT, the decrease in clock speed and an emphasis on E-cores.
The new E-cores are expected to be much faster than their predecessors. I expect significantly higher performance in multithreaded workloads that aren't bound by memory bandwidth, and only a slight increase in single-threaded performance due to the lower clocks.
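
A crude way to see why faster E-cores matter for MT is to weight the core counts by an assumed per-core throughput. The weights below are placeholders, not benchmark results:

Code:
# Crude aggregate-throughput model for an 8P+12E configuration.
# Per-core weights are assumptions for illustration only.
def mt_throughput(p_cores, e_cores, p_weight, e_weight):
    return p_cores * p_weight + e_cores * e_weight

raptor = mt_throughput(8, 12, p_weight=1.25, e_weight=0.60)  # HT folded into p_weight
arrow  = mt_throughput(8, 12, p_weight=1.10, e_weight=0.75)  # no HT, faster E-cores

print(f"Raptor-style: {raptor:.1f}, Arrow-style: {arrow:.1f}")
# With these made-up weights, the E-core uplift roughly offsets the loss of HT.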
 
Do TDP numbers even mean anything at this point?

Long story short, no.

The Ultra 7/9 SKUs still show a “TDP” of 125 W, so until reviews are out I’d expect similar boost behavior in terms of power and temps: a short stretch of slamming the 250 W mark in MT, followed by clocks dropping 400-500 MHz to get temps/power in check at stock.
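
For anyone curious what that boost behavior looks like mechanically, here's a rough sketch of Intel's PL1/PL2/Tau budgeting, where an exponentially weighted average of package power is held at or below PL1. The values are typical "125 W TDP" desktop defaults, not confirmed Arrow Lake numbers:

Code:
# Rough sketch of Intel's PL1/PL2/Tau turbo power budgeting.
# Values are typical desktop defaults, not confirmed Arrow Lake specs.
PL1, PL2, TAU = 125.0, 253.0, 56.0   # watts, watts, seconds

ewma = 0.0   # exponentially weighted moving average of package power (from idle)
dt = 1.0     # time step in seconds
for t in range(120):
    # Draw PL2 while the moving average still has headroom, else fall back to PL1.
    power = PL2 if ewma < PL1 else PL1
    ewma += (power - ewma) * (dt / TAU)
    if t in (0, 30, 60, 90):
        print(f"t={t:3d}s  draw={power:.0f} W  avg={ewma:.0f} W")
# Result: full PL2 for a burst, then the chip settles back to the PL1 figure.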
 
Theoretical consumption at base clocks with turbo disabled? But yeah, I catch your drift, they could at least show a "realistic range under load +-%" of sorts instead of just one number.
Theoretical consumption at base clocks? Or "power consumption under the maximum theoretical load"? And "the TDP is the maximum power that one should be designing the system for"? It makes no sense. If I design a homelab PC with what Intel says in mind, I could easily end up with a power supply that is barely enough.


Intel should either spec its CPUs' TDP more appropriately or change its own definition.
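
To put a number on that homelab example; the component figures below are rough assumptions, not a spec:

Code:
# Rough PSU sizing sketch: budgeting for PL2 instead of the 125 W headline TDP.
# All component figures are rough assumptions for illustration.
cpu_tdp  = 125    # W, the spec-sheet headline
cpu_pl2  = 253    # W, what the chip can actually pull under sustained turbo
gpu      = 200    # W, assumed mid-range card
rest     = 60     # W, board, drives, fans (assumed)
headroom = 1.2    # 20% margin for transients and PSU efficiency

print("sized from TDP:", round((cpu_tdp + gpu + rest) * headroom), "W")
print("sized from PL2:", round((cpu_pl2 + gpu + rest) * headroom), "W")
# Sizing from the TDP figure alone under-provisions by roughly 150 W here.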
 
Desktop CPU innovation has slowed. Nowadays, all eyes are on NPUs, SoCs and compute GPUs. Given the following:

13900K to 14900K - 2.5% avg application increase
7950X to 9950X - 3.5% avg application increase

I am not holding out hope that Arrow Lake avg application increase will be anything significant. Especially with the removal of HT, the decrease in clock speed and an emphasis on E-cores.
Isn't it because Moore's Law and Dennard scaling no longer apply? That's why graphics cards are so big and expensive, processors eat more power for higher performance, and we're still stuck on 16 GB of RAM?
 
Isn't it because Moore's Law and Dennard scaling no longer apply? That's why graphics cards are so big and expensive, processors eat more power for higher performance, and we're still stuck on 16 GB of RAM?
Those have slowed, but there are still some perf/watt and perf-density gains to be had in both areas.

Clocks, cache, threads etc. don't care. If Intel puts out a new architecture and lithography with similar or slightly better performance that uses much less power than the current-gen hog, I'll be shopping for Intel next gen while I wait for the 9800X3D this Q4.
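
For the Dennard point, the arithmetic is simple: dynamic power is roughly C·V²·f, and classic Dennard scaling shrank capacitance and voltage along with the transistor, so power density stayed flat. A sketch with idealized textbook factors, not real process-node data:

Code:
# Idealized Dennard scaling vs. today: dynamic power ~ C * V^2 * f per unit area.
# Scaling factors are textbook idealizations, not real process-node data.
def power_density(cap, volt, freq, area):
    return cap * volt**2 * freq / area

s = 0.7  # idealized linear shrink per node

baseline = power_density(cap=1.0, volt=1.0, freq=1.0, area=1.0)
# Classic Dennard: C, V and area all shrink, f rises; density stays constant.
dennard  = power_density(cap=s, volt=s, freq=1/s, area=s**2)
# Post-Dennard: voltage barely scales, so the shrink pushes density up sharply.
today    = power_density(cap=s, volt=0.95, freq=1/s, area=s**2)

print(baseline, round(dennard, 2), round(today, 2))  # 1.0, 1.0, ~1.84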
 
Desktop CPU innovation has slowed. Nowadays, all eyes are on NPUs, SoCs and compute GPUs. Given the following:

13900K to 14900K - 2.5% avg application increase
7950X to 9950X - 3.5% avg application increase

I am not holding out hope that Arrow Lake avg application increase will be anything significant. Especially with the removal of HT, the decrease in clock speed and an emphasis on E-cores.

The problem with that comparison is that the 13900K and 14900K are the same silicon. No changes were made between those two, only a clock increase, so the application increase tracks the frequency increase. Zen 4 to Zen 5 was an architectural upheaval that yielded very little application performance improvement, mostly due to a clock decrease offsetting any core improvements, but also due to various architecture decisions that are not yet yielding any direct benefit.

Arrow Lake is in a similar league to Zen 5 being a somewhat substantial architecture shift, but it would appear Intel has not left a lot of frequency on the table. They have, however, reduced thread count by removing HT.
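
Quick sanity check on the "same silicon, just clocks" point, using the rated max turbo clocks of the two parts:

Code:
# 13900K -> 14900K: the application gain tracks the clock bump on identical silicon.
clock_13900k = 5.8   # GHz, rated max turbo
clock_14900k = 6.0   # GHz, rated max turbo

clock_gain = (clock_14900k / clock_13900k - 1) * 100
print(f"clock increase: {clock_gain:.1f}%")   # ~3.4%, vs the ~2.5% measured app gain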
 
Frequency was never on the table, not unless you accept degradation and melting things. 5.2 GHz in games and 4.6 GHz in Cinebench is more realistic to stay within 253 W. The loss of threads can't be a problem, as Windows runs hundreds or even thousands of threads and you can't have a dedicated core for each one anyway. What we need is IPC, and it doesn't get better than that for now.
 
When will we see the new boards for these chips? this could be interesting.
 
When will we see the new boards for these chips? this could be interesting.

Since it is an all new socket, only a simultaneous release makes any sense so October 10th for CPUs and boards (assuming 10/10 is a hard launch and not just the reveal/announcement).
 
Must admit I'm much more tempted by the Ultra 265(K) than the 9990X/9700X. Unless V-Cache is ungimped and we see almost zero regression in productivity workloads on the X3D models, I'd still prefer Arrow Lake (power and performance pending) to having to deal with core-parking BS, now even on non-X3D dual-CCD models. We are hearing 100 W+ peak power usage from high-end Arrow Lake models, and no need for stupid voltages as we have a clock-speed regression.
 