
Intel "Arrow Lake-S" to See a Rearrangement of P-cores and E-cores Along the Ringbus

Not having to change sockets as often as Intel has over the last decade (counting from your 2012 date onward) is a thing with AMD too, you know; it's something buyers factor in when purchasing a system with longevity and upgrade options in mind. How long is AM4 still going? How many socket changes has Intel made since AM4 came out? Multithreading solves the problem of limited cores; core limitation just isn't a thing like it used to be, with 12 threads at the bottom of the AM5 line. The later socket will once again prove its longevity like AM4 (maybe not eight years, but who knows at this point in time), but a heck of a lot better than Intel: three gens on LGA 1700 and then BUST :ohwell:
Not having to change your motherboard is a good thing if it saves you money. Sadly, that wasn't the case for the vast majority of AM4's lifespan, simply because the pricing on AM4 CPUs was freaking whack (I'm still laughing at the $450 5800X3D). I mean, with that kind of money you could just buy a locked i7 with a brand-new B660 and end up better off.

With that said, I agree with you: Intel should support sockets longer, but PROPERLY. Not the way AMD is doing it.
 
[Attached screenshots: 12900K, 12900KS, 13900K]

So in answer to your question, yes.

Overall, the 12th and 13th gens were massive IPC improvements over the previous Skylake-based parts, are still faster core for core than Zen, and didn't have the core-count/process issues of Rocket Lake.
My 13700K is a massive upgrade over my 9900K. Probably my biggest "feels like" improvement since I went from an i5-750 to Haswell.

I actually now regret not going DDR5, as I think it's probably hindering performance in some workloads, and it's crazy that the improvement I'm noticing would have been even higher. However, I'm not going to swap out the board and the RAM at this point; I'm not rebuilding again.

I think the two most sensible things Intel can do with Arrow Lake are to ditch HTT, which is outdated and inefficient now, and to ship CPUs lower on the V/F curve.
 
1 core, 1 thread; if you want more threads, get more cores. Intel has made a good choice ditching HT, and I bet AMD will follow.
 
1 core, 1 thread; if you want more threads, get more cores. Intel has made a good choice ditching HT, and I bet AMD will follow.
What system do you have?
 
Look at my specs.
So turn HT off then; you don't need it, and it only consumes more power and generates heat. It offers no performance improvements, at least so I've been hearing this last month or two. I've also been hearing many on this forum stating that you don't need P-cores either, because the E-cores are the same performance, so to save even more power/heat, you should disable them too.
 
So turn HT off then, you don't need it and it consumes power and generates heat.

Have you?

My comment was about Intel not having HT on Arrow Lake, or did you not get that?

My current specs have nothing to do with my comment
 
Have you?

My comment was about Intel not having HT on Arrow Lake, or did you not get that?

My current specs have nothing to do with my comment
But Intel CPUs don't need HT because they have so many cores. Don't backpedal on your statement. The sentiment you shared has been spoken by many here since well before this new CPU; in fact, ever since it leaked that Intel was going to abandon it.

I have a brain, so yes, I disabled HT. But I won't tell you what I found after trying this for 2 weeks... I'll let you put your money where your mouth is first. Please come back and share.
 
But Intel CPUs don't need HT because they have so many cores. Don't backpedal on your statement. The sentiment you shared has been spoken by many here since well before this new CPU; in fact, ever since it leaked that Intel was going to abandon it.

I have a brain, so yes, I disabled HT. But I won't tell you what I found after trying this for 2 weeks... I'll let you put your money where your mouth is first. Please come back and share.

Where does "1 core, 1 thread; if you want more threads, get more cores" say Intel? As I already said (which you are conveniently ignoring), my comment was about Arrow Lake ditching HT, NOT about my current setup. Go away and grow up. /ignored

How I run my rig has nothing to do with Arrow Lake or this thread.
 
But Intel CPUs don't need HT because they have so many cores. Don't backpedal on your statement. The sentiment you shared has been spoken by many here since well before this new CPU; in fact, ever since it leaked that Intel was going to abandon it.

I have a brain, so yes, I disabled HT. But I won't tell you what I found after trying this for 2 weeks... I'll let you put your money where your mouth is first. Please come back and share.
I have; I even made a whole thread about it. My 14900K ran with HT off the whole time. I got amazing performance and very low power draw and temperatures.
 
My 13700K is a massive upgrade over my 9900K. Probably my biggest "feels like" improvement since I went from an i5-750 to Haswell.

I actually now regret not going DDR5, as I think it's probably hindering performance in some workloads, and it's crazy that the improvement I'm noticing would have been even higher. However, I'm not going to swap out the board and the RAM at this point; I'm not rebuilding again.

I think the two most sensible things Intel can do with Arrow Lake are to ditch HTT, which is outdated and inefficient now, and to ship CPUs lower on the V/F curve.

Bro might skip DDR5 altogether at this stage
 
Performance per watt was already a strong point for Intel. That's the only thing they really don't need to improve.
Compared to Chinese Loongson CPUs maybe, you mean? Yeah, that could be... :roll:

Intel rocked 8086 to Pentium III; AMD was just a backup clone
Intel failed with Netburst; AMD rocked K7/K8 and derivatives
Intel rocked Core architectures; AMD failed with Bulldozer

Currently, AMD is doing better with Zen, and Intel is playing catch-up with respect to process nodes, core counts, and efficiency. Changes such as the ring bus interconnect topology might be steps in the right direction. However, I don't see much changing for gamers or the data center; Intel is losing ground there. People can say you can underclock Intel all you want, but you can do the same with AMD. I want out-of-the-box efficiency while staying close to the highest possible absolute performance across a variety of applications. That's AMD, hands down, right now. Zen 5 will only improve things by an estimated 15-20%. It remains to be seen whether Intel's new gamble with Arrow Lake will change matters much.

Both companies' desktop chips run much better when undervolted and tuned down in TDP.


A 7950X at around 100 W with an undervolt is just nuts; it's still going to come out ahead of a 13700K (same number of cores) with the same amount of tuning time dedicated to it, no matter the sample you take. TSMC rose making mobile chips, so their nodes are tuned for low power from the start.
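The undervolting point above comes down to CMOS dynamic power scaling roughly with C·V²·f: voltage enters quadratically, so trading a little frequency for a sizable voltage drop saves disproportionate power. A minimal sketch of that tradeoff, with purely illustrative numbers (not measured values for any real chip):

```python
# Dynamic-power sketch: P_dyn ~ C_eff * V^2 * f.
# All numbers below are illustrative, not measurements of any real CPU.

def dynamic_power(c_eff_farads, voltage, freq_hz):
    """Approximate CMOS dynamic power in watts."""
    return c_eff_farads * voltage ** 2 * freq_hz

stock = dynamic_power(3e-9, 1.25, 5.0e9)        # ~5.0 GHz at 1.25 V
undervolted = dynamic_power(3e-9, 1.10, 4.8e9)  # ~4.8 GHz at 1.10 V

print(f"stock: {stock:.1f} W, undervolted: {undervolted:.1f} W")
print(f"power saved: {1 - undervolted / stock:.0%} "
      f"for {1 - 4.8e9 / 5.0e9:.0%} frequency lost")
```

In this toy model, giving up 4% of the clock while dropping voltage about 12% cuts dynamic power by roughly a quarter, which is the shape of the tradeoff both posters are describing.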

TPU and nearly all other sites conclude that Intel's K lineup has out-of-the-box inefficiency problems. I completely agree. Out of the box, the K CPUs are almost (maybe not even almost) unusable.

Also, TPU and all other sites conclude that the non-K and T CPUs are the most efficient CPUs humanity has ever laid eyes upon.


That's simply not true, though, and I don't get why people keep repeating it. Take an R7 7700X, limit it to whatever power you want, and it will lose (by a lot) to an i7-13700K in both performance and efficiency. ComputerBase already tested this.


A 7700X at 142 W is slower than a 13700K at 88 W. At 125 W it ties a 13700K at 65 W. It needs almost twice the power for similar performance. I get that a lot of people don't really care about efficiency in desktop CPUs, but that's not a reason to spread falsehoods.
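Efficiency claims like this are just performance-per-watt arithmetic. The power limits below are the ones quoted in the post; the scores are hypothetical placeholders I've made up purely to show how the metric is computed, not measured results:

```python
# Efficiency = score / watts. The power limits are the ones quoted in
# the post above; the scores are HYPOTHETICAL, for illustration only.

def perf_per_watt(score, watts):
    """Performance-per-watt: higher is more efficient."""
    return score / watts

hypothetical = {
    "7700X @ 142 W": (20000, 142),   # placeholder multithreaded score
    "13700K @ 88 W": (20500, 88),    # placeholder multithreaded score
}

for name, (score, watts) in hypothetical.items():
    print(f"{name}: {perf_per_watt(score, watts):.0f} pts/W")
```

Even with the two chips nearly tied in absolute score, the efficiency gap at those power levels is large, which is the arithmetic behind "almost twice the power for similar performance."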


The only scientifically proper way.

Otherwise you are not testing efficiency but out-of-the-box settings. That's fine as well, of course, but your conclusion can't be about efficiency if you aren't testing for efficiency.

You are comparing 8 cores to 16; that is how multithreaded efficiency shines. Intel needs TWICE the cores to be more efficient. Now take a 12700K against a 7950X and wonder which one is more efficient at 100 W. You are clearly new to multicore processing, efficiency scaling, and V/F curves.

Definitely not true. Intel CPUs are notorious for sticking to their PL2 limits. You literally cannot exceed the PL2 limit; even spikes are capped by PL4.

So an Intel CPU with a 35 W PL2 limit will stick to 35 watts.

You never opened Intel Power Gadget on Intel laptops, I see.

My i9-13900KS has never disrespected the power limit configuration that I have set. These processors cannot and will not draw 400 W and/or heat up to 115°C unless they are manually configured to do so.

Yeah, on a desktop board where you manually input your limit. But they are talking about mobile CPUs; you know, those things you buy with a screen attached and a battery, which also have locked BIOSes where you can't adjust anything. Vendors are allowed to do whatever they want with the power limits, even when they are clearly set by Intel. Also, anyone can take ThrottleStop, tune it to their liking, and find out how rarely these limits are enforced on the vast majority of consumer products.

Speaking of unknown stuff, have you ever heard of Apple? They used to make laptops with Intel mobile CPUs; you know, with the i9-9880H, which is a 45 W CPU.

Now take a look at this video around 50 seconds in:

But yeah, Intel's nominal TDP. They never lie.

This is a tech forum, man. I get that you people hate Intel, but please, please, can we at least TRY to stick to facts? It's really not that hard. Stop spreading misinformation, please.

Intel does not provide any factual information; actual, proven testing does. Welcome to the real world, where I deal with refurbishing and upgrading machines on a daily basis. I look at power draw from the wall and at INTEL-MADE power reporting software.
 
Compared to Chinese Loongson CPUs maybe, you mean? Yeah, that could be... :roll:



Both companies' desktop chips run much better when undervolted and tuned down in TDP.


A 7950X at around 100 W with an undervolt is just nuts; it's still going to come out ahead of a 14900K with the same amount of tuning time dedicated to it, no matter the sample you take. TSMC rose making mobile chips, so their nodes are tuned for low power from the start.



You are comparing 8 cores to 16; that is how multithreaded efficiency shines. Intel needs TWICE the cores to be more efficient. Now take a 12700K against a 7950X and wonder which one is more efficient at 100 W. You are clearly new to multicore processing, efficiency scaling, and V/F curves.



You never opened Intel Power Gadget on Intel laptops, I see.



Yeah, on a desktop board where you manually input your limit. But they are talking about mobile CPUs; you know, those things you buy with a screen attached and a battery, which also have locked BIOSes where you can't adjust anything. Vendors are allowed to do whatever they want with the power limits, even when they are clearly set by Intel. Also, anyone can take ThrottleStop, tune it to their liking, and find out how rarely these limits are enforced on the vast majority of consumer products.

Speaking of unknown stuff, have you ever heard of Apple? They used to make laptops with Intel mobile CPUs; you know, with the i9-9880H, which is a 45 W CPU.

Now take a look at this video around 50 seconds in:

But yeah, Intel's nominal TDP. They never lie.

MacBooks have always been form over function. The problem isn't the TDP; the problem is the machine's exceptionally poor design. I have a mid-2012 here that's been decommissioned for some time. It has a 45 watt processor coupled with a 45 watt GPU, but a 90 watt MagSafe. So, an underdimensioned power supply, cooled by a single heatpipe that basically handles both the CPU and the GPU. That's one of the "fat" unibody models too, the last of them in fact, before Retina display MBPs took over.
 
Because of TVB, which pushes the CPU beyond the rated Turbo Boost speed and is exclusive to the i9. The i7 and below cannot exceed the set PL2. And you can even turn it off if you want to. You should not judge the behavior of every Intel CPU based on the i9; those SKUs have been using exclusive automatic overclocking tech for a while now.

AnandTech also said that the firmware on those boards seems problematic. The 14600K behaves in a radically different way, pulling less power than its rated PL2 (181 W).
[Attached screenshots]
Yeah, it's like: except for the i9, except for MacBook Pros, except for Asus high-end products... seems like Intel has no control over their own products at all, do they?

Yes, Intel has all the tools to make their CPUs respect the limits. They also allow them to be easily bypassed by anyone, from end users to board and laptop manufacturers.

MacBooks have always been form over function. The problem isn't the TDP; the problem is the machine's exceptionally poor design. I have a mid-2012 here that's been decommissioned for some time. It has a 45 watt processor coupled with a 45 watt GPU, but a 90 watt MagSafe. So, an underdimensioned power supply, cooled by a single heatpipe that basically handles both the CPU and the GPU. That's one of the "fat" unibody models too, the last of them in fact, before Retina display MBPs took over.
Everything you pointed out should lead to the CPU using LESS, not MORE power than the nominal TDP :roll:
And we are talking about more than DOUBLE the nominal value.
Now imagine: this thin-and-light MacBook does it. Sure, huge beefy gaming notebooks will comply with 45 W :roll:
Are you all really joking? I can't believe I'm reading this stuff on a tech forum. Have you ever actually tested the stuff you talk about?

Again, Intel's architectures are in last place behind all others (ARM, AMD, GPGPU) in IPC, efficiency, and value. AMD has gone from 2% data center market share to 33% in five years, and Intel supporters are talking about ISO comparisons. Lololololol!!!!!

Now, to be fair, they are working with different, less efficient nodes. But the fact that AMD can beat them on a mobile-oriented node and still be more efficient is rough. Intel had been almost sitting still all the way from Haswell to Rocket Lake, so much so that Apple had time to beat their absolute performance with mobile CPUs.
 
Everything you pointed out should lead to the CPU using LESS, not MORE power than the nominal TDP :roll:
And we are talking about more than DOUBLE the nominal value.
Now imagine: this thin-and-light MacBook does it. Sure, huge beefy gaming notebooks will comply with 45 W :roll:
Are you all really joking? I can't believe I'm reading this stuff on a tech forum. Have you ever actually tested the stuff you talk about?

Which is why I question: is that reported consumption value actually accurate? And that is a much newer model than the one I have; my Ivy Bridge Mac was very throttle-happy. Anyway, it's not relevant to a desktop computer with much loftier power envelopes.
 
Which is why I question: is that reported consumption value actually accurate? And that is a much newer model than the one I have; my Ivy Bridge Mac was very throttle-happy. Anyway, it's not relevant to a desktop computer with much loftier power envelopes.

You replied to a reply about a 35 W limit being respected; 35 W limits are mobile limits.

The reported consumption is OBVIOUSLY accurate, as that clock speed and performance would never be possible with 45 W enforced.


With cooling mods as in the video, that CPU will come close to the performance of a 9900 non-K at 95 W, which is another CPU that should be limited to 65 W but will default to unlimited and draw more than 100 W on many boards without any prompt, even without XMP enabled:
I have been building almost exclusively on Intel since I do Hackintoshes, and I have deep-tested maybe a hundred different boards in my life (using a wall meter for efficiency evaluation) because I have to choose which ones go into production in my machines. Every time I have to work with AMD, I am impressed. They are such a tiny company, yet they have so many strong points over Intel, like efficiency, being honest in their specs, and making stuff like PCIe bifurcation available across the whole lineup.
 
You are comparing 8 cores to 16; that is how multithreaded efficiency shines. Intel needs TWICE the cores to be more efficient. Now take a 12700K against a 7950X and wonder which one is more efficient at 100 W. You are clearly new to multicore processing, efficiency scaling, and V/F curves.
I'm comparing CPUs in the same category (i7 / R7) at the same MSRP. Yes, you described exactly what Intel does: it offers 16 cores instead of AMD's 8, which is why Intel is both faster and more efficient. So I should buy the 7700X over the 13700K, why? Because it has fewer cores? Is that a good thing? Because you make it sound like it's a good thing, lol.
You never opened Intel Power Gadget on Intel laptops, I see.
I opened it on my 6900HS. 35 W TDP; sure, it hits 80 W.

What about my 5600H? TDP 45, hits 65.

What about my 5800U? 25 W max TDP, hits 45.
 
I have been building almost exclusively on Intel since I do Hackintoshes, and I have deep-tested maybe a hundred different boards in my life (using a wall meter for efficiency evaluation) because I have to choose which ones go into production in my machines. Every time I have to work with AMD, I am impressed. They are such a tiny company, yet they have so many strong points over Intel, like efficiency, being honest in their specs, and making stuff like PCIe bifurcation available across the whole lineup.

Very honest indeed :wtf:

That's why they totally don't take anti-competitive measures the second their product takes a clear lead. By what metric is AMD tiny again? Their market cap is more than twice Intel's.

What about my 5600H? TDP 45, hits 65.

Mine does too. I think it's got something to do with that 1.35×TDP PPT rule (which effectively makes it a 60 W processor).
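The 1.35× rule mentioned here is the usual relationship between AMD's nominal TDP and the package power tracking (PPT) limit, which is the number the socket actually enforces. A quick sanity check, assuming the common 1.35 ratio (individual platforms can set their own limits):

```python
# AMD power-term sketch: PPT, the limit the package actually enforces,
# is commonly about 1.35x the nominal TDP. The ratio is an assumption
# here, not a universal guarantee across platforms.

def ppt_from_tdp(tdp_watts, ratio=1.35):
    """Estimated package power tracking limit from nominal TDP."""
    return tdp_watts * ratio

for tdp in (25, 45, 65, 105):
    print(f"TDP {tdp:>3} W -> PPT ~{ppt_from_tdp(tdp):.0f} W")
```

By this arithmetic a "45 W" 5600H sustaining around 60 W is within spec rather than a violation, which matches the observation above.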
 
Very honest indeed :wtf:

That's why they totally don't take anti-competitive measures the second their product takes a clear lead. By what metric is AMD tiny again? Their market cap is more than twice Intel's.



Mine does too. I think it's got something to do with that 1.35×TDP PPT rule (which effectively makes it a 60 W processor).
The efficiency part makes me chuckle every time.


The only efficient parts AMD has are their mobile chips and their desktop APUs: the monolithic ones. Those are freaking great. The actual desktop chips are just not; multiple dies cannot be and are not efficient, not for actual desktop usage. An almost-idle 7950X (running Syncthing, Steam, and Discord) hits 70 W; not a spike, an average. The same stuff running on an Intel part: 15 watts. But yeah, that AMD efficiency, man...

Sure, let's close all background apps. Now let's browse YouTube and watch a video with no background apps running. It spikes to 60+ watts and averages 45 just scrolling YouTube comments!

Whenever people say AMD is efficient, it's obvious to me they have never tried them, or they've never compared them to an equivalent Intel chip.
 
Yeah, it's like: except for the i9, except for MacBook Pros, except for Asus high-end products... seems like Intel has no control over their own products at all, do they?

Yes, Intel has all the tools to make their CPUs respect the limits. They also allow them to be easily bypassed by anyone, from end users to board and laptop manufacturers.
It seems that you are confusing the TDP with the PL2. The TDP is only a valid metric if you aren't using an unlimited PL2 and the CPU has dropped to PL1 (rated the same as the TDP) after the 56-second window. The catch is that a lot of motherboard "auto" settings went for an unlimited PL2 :D. Puget did a test with the i9 running at the real default spec; they didn't measure the power draw, but it ran 30-40°C cooler than with the motherboard settings. AFAIK there's no laptop out there that can handle an unlimited PL2.

But yeah, saying that Intel CPUs will always run way beyond the PL2, even when it's manually set by the user, is the take of someone who doesn't know what he's talking about. There are lots of people in the SFF community successfully tuning their i9s for low power and cooling them with low-profile air coolers.
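The PL1/PL2/Tau mechanism being argued about can be sketched as a moving average of package power: the CPU may draw up to PL2 while the averaged power stays under PL1 (the TDP), and Tau sets how quickly that average catches up. This is a simplified first-order model with example limits, not firmware-accurate behavior:

```python
# Toy model of Intel's turbo budget: draw PL2 while an exponentially
# weighted moving average of package power is below PL1, then settle
# at PL1. Limits are examples; the real weighting differs in detail.

PL1, PL2, TAU = 125.0, 253.0, 56.0  # watts, watts, seconds (examples)
DT = 1.0                            # simulation step, seconds

ewma = 0.0
draw = []
for _ in range(180):                     # simulate three minutes of load
    power = PL2 if ewma < PL1 else PL1   # boost only while budget allows
    ewma += (power - ewma) * (DT / TAU)  # first-order average update
    draw.append(power)

boost_seconds = sum(DT for p in draw if p > PL1)
print(f"held PL2 for ~{boost_seconds:.0f} s, then settled at PL1 = {PL1:.0f} W")
```

With these numbers the model boosts for well under a minute before dropping to PL1; an "unlimited PL2" board setting simply never performs that drop, which is exactly the disagreement in this thread.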



Power Consumption - The Intel 9th Gen Review: Core i9-9900K, Core i7-9700K and Core i5-9600K Tested (anandtech.com)

Intel Core i9 13900K: Impact of MultiCore Enhancement (MCE) and Long Power Duration Limits on Thermals and Content Creation Performance | Puget Systems
 
I'm comparing CPUs in the same category (i7 / R7) at the same MSRP. Yes, you described exactly what Intel does: it offers 16 cores instead of AMD's 8, which is why Intel is both faster and more efficient. So I should buy the 7700X over the 13700K, why? Because it has fewer cores? Is that a good thing? Because you make it sound like it's a good thing, lol.

I opened it on my 6900HS. 35 W TDP; sure, it hits 80 W.

What about my 5600H? TDP 45, hits 65.

What about my 5800U? 25 W max TDP, hits 45.
Efficiency is a performance-per-watt metric; MSRP has nothing to do with it. We are comparing the respective mainstream platforms, and the MSRP also has very little to do with real prices anyway, just like Intel's official TDP.

No one said vendors are not allowed to do the same with AMD chips, just that you cannot use TDP to measure efficiency in any shape or form, as most benchmark numbers on the internet are produced with unlocked power limits. Also, provide some evidence for your numbers: I have bought many AMD machines, and they all reached above-average performance (compared to online results in benchmarks like Cinebench and Geekbench) on platforms with enforced TDP (Chinese machines, Huawei laptops, and Minisforum mini PCs).

Now, to provide some actual evidence, I'll use a platform I trust: NotebookCheck. Minisforum's 5600H machine sticks to an apparent 42 W power limit and outperforms the 12900H one from the same vendor in all but synthetic, heavily multithreaded scenarios.

Again, you bring up stuff that doesn't matter to the initial discussion, and keep failing.

Very honest indeed :wtf:

That's why they totally don't take anti-competitive measures the second their product takes a clear lead. By what metric is AMD tiny again? Their market cap is more than twice Intel's.



Mine does too. I think it's got something to do with that 1.35×TDP PPT rule (which effectively makes it a 60 W processor).

Yeah, show me some evidence, like European lawsuits for anti-competitive behaviour. You are young and never heard of the Pentium 4, I guess.

Hey, take a look at wikipedia!

Number of employees: AMD 26,000 (https://en.wikipedia.org/wiki/AMD) vs. Intel 124,800 (https://en.wikipedia.org/wiki/Intel)

Total assets: AMD 68 billion, Intel 190 billion

Using market cap to compare company size? Like what, I'll say a Ferrari is bigger than a truck because it costs more? Yeah.

It seems that you are confusing the TDP with the PL2. The TDP is only a valid metric if you aren't using an unlimited PL2 and the CPU has dropped to PL1 (rated the same as the TDP) after the 56-second window. The catch is that a lot of motherboard "auto" settings went for an unlimited PL2 :D. Puget did a test with the i9 running at the real default spec; they didn't measure the power draw, but it ran 30-40°C cooler than with the motherboard settings. AFAIK there's no laptop out there that can handle an unlimited PL2.

But yeah, saying that Intel CPUs will always run way beyond the PL2, even when it's manually set by the user, is the take of someone who doesn't know what he's talking about. There are lots of people in the SFF community successfully tuning their i9s for low power and cooling them with low-profile air coolers.
Again, the initial point was made by the user who said Intel CPUs stick to their PL2 and TDP out of the box.

I never said anything like "never" or "always" about Intel not enforcing their TDP, as it is not Intel that controls it; I am talking about what actually happens in most consumer products, like very common Apple laptops, and I even stated that some business laptops will actually respect the Intel limits.

Most machines will not, either just ignoring the PL and running up to the thermal limit, or enforcing a higher-tier PL2 with no prompt. On running an i9-9900 non-K with unlimited PL2: I clearly recall it happening on a Gigabyte H310 from a client build, which resulted in awful efficiency for such a terrible VRM and hitting the VRM temperature limit. It was the default, out-of-the-box behavior for that combination; I enforced an 80 W limit, which resulted in much better benchmark scores.

I have built hundreds of ITX systems, thanks. I have run passive i9s and i5s; I have tested so much stuff in my life, don't worry. And I am very careful and precise with my statements, and I reply to actual quotes.

Here is one article I have written on a similar topic; as you might see, English is not my native language, but the points still stand: https://www.pizzaundervolt.com/choosing-a-computer-for-audio-production-laptop/
 
Efficiency is a performance-per-watt metric; MSRP has nothing to do with it. We are comparing the respective mainstream platforms, and the MSRP also has very little to do with real prices anyway, just like Intel's official TDP.

No one said vendors are not allowed to do the same with AMD chips, just that you cannot use TDP to measure efficiency in any shape or form, as most benchmark numbers on the internet are produced with unlocked power limits. Also, provide some evidence for your numbers: I have bought many AMD machines, and they all reached above-average performance (compared to online results in benchmarks like Cinebench and Geekbench) on platforms with enforced TDP (Chinese machines, Huawei laptops, and Minisforum mini PCs).

Now, to provide some actual evidence, I'll use a platform I trust: NotebookCheck. Minisforum's 5600H machine sticks to an apparent 42 W power limit and outperforms the 12900H one from the same vendor in all but synthetic, heavily multithreaded scenarios.

Again, you bring up stuff that doesn't matter to the initial discussion, and keep failing.
If MSRP has nothing to do with it, let's then compare a 14900K to a 7600X at iso wattages. That would be silly, no?


Regarding your "actual evidence": your own links prove you wrong. The 5600H, which you claimed never exceeds 42 watts, is hitting 70 W in your link. And of course it's slower than the 12900H, again according to the links you provided.
 
If MSRP has nothing to do with it, let's then compare a 14900K to a 7600X at iso wattages. That would be silly, no?

Regarding your "actual evidence": your own links prove you wrong. The 5600H, which you claimed never exceeds 42 watts, is hitting 70 W in your link. And of course it's slower than the 12900H, again according to the links you provided.
You can compare a 14900K to a Ryzen 6900HS at 35 W if you want to go with different core counts. That would be such a slaughter it wouldn't even be funny; the i9 would probably not even be able to keep some cores active. I'd compare at equal core counts and relatively similar multicore performance under load, which is why the 7950X comes into play; and it's priced closer to the 13700K than the 7700X is anyway.

Clearly you are not used to seeing that, but 75 is the temperature, not the wattage. Intel users are not used to these numbers, I understand. And no, the 12900H is only faster in Cinebench, which doesn't matter for 99% of users; the PCMark numbers are better for the 5600H, which is older and has a lot fewer cores.

I understand now how you can claim Intel has some advantages: you probably misread all the graphs you see and have never actually tested these machines.
[Attached: Cinebench R15 loop chart]


 
You can compare a 14900K to a Ryzen 6900HS at 35 W if you want to go with different core counts
Oh, I've done that; the 14900K is faster. Tested CB R23: close to 16k for the 14900K, around 13k for the 6900HS. Both CO'd / UV'ed.

Clearly you are not used to seeing that, but 75 is the temperature, not the wattage. Intel users are not used to these numbers, I understand
Oh really? So when it says "POWER CONSUMPTION - LOAD MAXIMUM = 73.4w", it's the temperature, not the wattage? Okay, my mistake.


Dude, are you troll-posting, or is this serious?

the 12900H is only faster in Cinebench, which doesn't matter for 99% of users; the PCMark numbers are better for the 5600H, which is older and has a lot fewer cores
Uhm, no, the 12900H was faster across the whole test suite they used. If you click the Performance Rating bar, it gives you the average across their whole testing suite. The 12900H was faster than the 5900HX, and the 5900HX is much faster than the 5600H. Now connect the dots and tell me how the hell the 5600H can be faster than the 12900H when even the 5900HX isn't???
 