
Intel Core i9-13900 (non-K) Spotted with 5.60 GHz Max Boost, Geekbenched

People who don't know how to set a power limit on their CPU shouldn't be buying high-end CPUs...

Here is how mobile ADL scales with power vs Ryzen 6000
[Attachment: Cinebench R23 score vs. power scaling, mobile Alder Lake vs. Ryzen 6000]

So just because Intel CPUs use more power for more performance, Intel must suck? :roll: Some people have a really skewed perspective.
You got lost in your own word gymnastics and ended up in the worst possible scenario: showing how bad ADL mobile is. Even with more cores it still loses to AMD's 5000-series octa-cores lol

I don't understand how anyone can brag about a mobile chip that doesn't achieve the best performance balance within a limited TDP. I won't even get into the fact that the U line now has only 2 performance cores.
 
So it looks like the improvements you mentioned only increase this specific benchmark's performance by 1-2% going from Alder Lake to Raptor Lake. If that is the case, there are only two takeaways from this generational transition:
  • 5-6% higher clocks
  • 8 more E-cores

It looks like on this specific benchmark, with the score you provided, it works out to around a 1% IPC increase. It looks like Intel really focused on improving multi-core performance this generation and not so much on single-thread performance. But Geekbench is Geekbench. I think you need a suite of many applications these days to really get an average IPC increase.

Just a recap: IPC = instructions per clock.

Instructions per clock × frequency = single-thread performance.
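
To make that concrete, here is a minimal sketch of the estimate, using made-up numbers since the actual Geekbench scores aren't quoted in this excerpt:

```python
# Hypothetical illustration of the IPC recap above (not real benchmark data).
def implied_ipc_change(score_new, clock_new_ghz, score_old, clock_old_ghz):
    """Relative IPC change implied by two single-thread scores and their clocks."""
    return (score_new / clock_new_ghz) / (score_old / clock_old_ghz) - 1

# Example: a ~10.9% higher single-thread score at a ~9.8% higher clock
# (5.6 GHz vs 5.1 GHz) works out to only about +1% IPC.
print(f"{implied_ipc_change(2218, 5.6, 2000, 5.1):+.1%}")
```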

Still want to see the score in games vs Zen4 but I now think it will probably be much closer than initially thought. Not sure if Zen 4 or Raptor Lake will win, but there will be a fight for sure. At least until Zen4 X3D.
 
You got lost in your own word gymnastics and ended up in the worst possible scenario: showing how bad ADL mobile is. Even with more cores it still loses to AMD's 5000-series octa-cores lol

I don't understand how anyone can brag about a mobile chip that doesn't achieve the best performance balance within a limited TDP. I won't even get into the fact that the U line now has only 2 performance cores.
Alder Lake mobile is both the most power-efficient and the highest-performing, and that is exactly what the graph shows!
 
The 13900 will be competing with the 7950X, and AMD has already said that Zen 4 will be 40% faster than Zen 3 in MT, so there's no reason for the 7900X not to match the 13700K in performance while being more efficient. Plus, buying Intel will leave you with a dead-end platform, as Raptor Lake will be the last CPU generation for the current socket, while AMD will keep AM5 for at least 3 generations.
AMD says a lot of things. Intel says a lot of things. If you believe them you're a fool.
 
You got lost in your own word gymnastics and ended up in the worst possible scenario: showing how bad ADL mobile is. Even with more cores it still loses to AMD's 5000-series octa-cores lol
According to the graph at 75 W:
1- 5800H = 11980HK = ~12000 CB R23
2- 6900HS = ~14000 CB R23
3- 12700H = 12900HK = ~16000 CB R23

The 5800H is about 25% slower than the 12700H/12900HK at 75 W.
AMD's best is about 12% slower than Intel's best at the same 75 W.
You clearly read the graph wrong :(
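
Spelling out the arithmetic behind those percentages, using the approximate scores read off the graph (a quick sketch, nothing more):

```python
# Approximate CB R23 multi-core scores at 75 W, as read off the graph above.
scores = {"5800H / 11980HK": 12000, "6900HS": 14000, "12700H / 12900HK": 16000}
best_intel = scores["12700H / 12900HK"]

for chip, score in scores.items():
    deficit = 1 - score / best_intel
    print(f"{chip}: {deficit:.1%} behind Intel's best")  # 25.0%, 12.5%, 0.0%
```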
 
Ignorant, unabashed AMD fanboy! Alder Lake mobile is both the most power-efficient and the highest-performing, and that is exactly what the graph shows! Reported for blatant lying!
The efficiency of Intel chips is so bad in TDP-limited scenarios that they sometimes end up losing to a hexa-core (5600U), not to mention the discrete GPU losing to an AMD iGPU.

Check the tests yourself in real-life scenarios like Blender, video encoding and gaming. On 90% of notebooks, ADL mobile efficiency is terrible.

IdeaPad-Flex-5i-14IAU7-convertible-review-Core-i5-1235U-done-right.642379.0.html
 
Nice gift from Intel for a socket that is on the verge of being phased out :)
 
According to the graph at 75 W:
1- 5800H = 11980HK = ~12000 CB R23
2- 6900HS = ~14000 CB R23
3- 12700H = 12900HK = ~16000 CB R23

The 5800H is about 25% slower than the 12700H/12900HK at 75 W.
AMD's best is about 12% slower than Intel's best at the same 75 W.
You clearly read the graph wrong :(
I don't want to get too deep into the discussion about what is best, but 75 W is more the realm of desktops than laptops. Laptops are sub-65 watt. Then yes, you have high-power laptops for gamers and such, but they are a pain to use because their fans are so loud.
 
Stay on topic.
Stop your arguing/bickering/insulting.
 
I don't want to get too deep into the discussion about what is best, but 75 W is more the realm of desktops than laptops. Laptops are sub-65 watt. Then yes, you have high-power laptops for gamers and such, but they are a pain to use because their fans are so loud.
Those percentages are about the same from 35 W to 75 W.
No need to be picky :)
 
Now I believe Geekbench is a noteworthy benchmark, especially when it shows a particular brand excelling in it :p
 
Let's see the full range of numbers before we jump to conclusions :)
 
A possible outcome is the following (the implied uplifts are worked out in the sketch after the list):

12900 5.1GHz -> 13900 5.6GHz
12700 4.9GHz -> 13700 5.2GHz/5.3GHz
12600 4.8GHz -> 13600 5.0GHz
12500 4.6GHz -> 13500 4.7GHz
12400 4.4GHz -> 13400 4.4GHz
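
Purely as arithmetic on those speculated clocks, the implied boost uplift per SKU would be:

```python
# Implied boost-clock uplift per SKU, using only the speculated figures above.
speculated = {
    "13900": (5.1, 5.6),
    "13700": (4.9, 5.3),   # taking the higher of the two guesses
    "13600": (4.8, 5.0),
    "13500": (4.6, 4.7),
    "13400": (4.4, 4.4),
}
for sku, (old_ghz, new_ghz) in speculated.items():
    print(f"{sku}: {new_ghz / old_ghz - 1:+.1%}")  # +9.8%, +8.2%, +4.2%, +2.2%, +0.0%
```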
 
According to the graph at 75 W:
1- 5800H = 11980HK = ~12000 CB R23
2- 6900HS = ~14000 CB R23
3- 12700H = 12900HK = ~16000 CB R23

The 5800H is about 25% slower than the 12700H/12900HK at 75 W.
AMD's best is about 12% slower than Intel's best at the same 75 W.
You clearly read the graph wrong :(

Any idea of the ballpark for the 6-core ADL mobile parts? My 5600H does a good ol' 9230, and it's a 45 W spec processor. I have no complaints about this level of performance on the go; it's basically a desktop 5600GE at the end of the day.

[Screenshot: Cinebench R23 multi-core result, 9230]
 
Any idea of the ballpark for the 6-core ADL mobile parts? My 5600H does a good ol' 9230, and it's a 45 W spec processor. I have no complaints about this level of performance on the go; it's basically a desktop 5600GE at the end of the day.

[Screenshot: Cinebench R23 multi-core result, 9230]
No idea, I was just stating what the graph said :)
 
Single-core or even 2-core performance should be at a power below the stated long-term/base TDP of 65W.

For MT performance, you should blame hardware reviewers and motherboard manufacturers. The latter in particular most often apply high or no power/current limits and tons of load voltage (leading to effectively overvolted operating conditions, i.e. voltages exceeding the values in the CPU-fused voltage-frequency curve), making default settings far from true Intel defaults. They are allowed to, since power limits are not a processor specification, and any current/voltage is allowed as long as it stays below the specified limits and temperatures do not exceed TjMax.

Hardware reviewers seem generally clueless about all of this.

If the Intel-recommended PL1 (65 W) and the Tau time for locked processors (recently usually 28 s) were actually respected, then due to how the algorithm works the CPU would go from 200 W down to 65 W (PL1) within about 10 seconds, making PL2's influence on long benchmarks like Cinebench limited.
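
For illustration, a simplified sketch of how that PL1/Tau averaging window is commonly described (this is not Intel's actual firmware logic, and the idle-power figure is an assumption):

```python
import math

# Illustrative model only: the turbo budget is commonly described as an
# exponentially weighted moving average (EWMA) of package power with time
# constant Tau; boosting to PL2 is allowed while that average stays below PL1.
PL1 = 65.0    # W, sustained limit
PL2 = 200.0   # W, burst limit
TAU = 28.0    # s, averaging time constant
IDLE = 10.0   # W, assumed pre-load power (hypothetical)

# Closed form: a constant PL2 draw pushes the EWMA up to PL1 after
#   t = TAU * ln((PL2 - IDLE) / (PL2 - PL1))
burst_seconds = TAU * math.log((PL2 - IDLE) / (PL2 - PL1))
print(f"Boost at PL2 lasts about {burst_seconds:.1f} s before dropping to PL1")
# ~9.6 s with these numbers, consistent with the "within about 10 seconds" above.
```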

People who want to efficiently use their 65W CPU at 65W no matter what, should tune their motherboard settings accordingly.
To me, when you advertise something as 65 W TDP while the actual power usage is higher, it is not an honest claim. This goes for both Intel and AMD, or any chip company. Intel is often called out for this practice simply because their claims are generally the most misleading when a 65 W TDP chip actually pulls substantially higher numbers. You can enforce that TDP for this i9 chip, but at a substantial loss of performance, because the truth is that it NEEDS that much power to provide high performance. In the case of the i7 and i9 non-K versions, I feel Intel needs to be realistic and bump that way outdated and misleading "65W TDP" marketing up to a more meaningful/accurate value.
 
Well said @Solid State Brain. People over-dramatize the meaning of TDP in modern CPUs, and frankly it is getting old.
If you really want 65 W or 95 W or whatever, you can set it as a limit in the BIOS in seconds. You will lose some performance, and that's all.

Modern methods of retaining performance without challenging a mid-range air cooler exist aplenty.
not for OEM or stock boards
That's why this annoys me

Users get locked to low power settings - and locked performance (look at all the pissed-off Intel laptop users we get in the ThrottleStop forum)

It's becoming:

1. CPUs are reviewed on high-end, unlocked, super-cooled platforms, and everyone bases performance off those values
2. Home users get boards that lock the power limits down, and those users never see that performance

So just because Intel CPUs use more power for more performance, Intel must suck?
As long as they actually get more performance for that power consumption...

Newer Intel advertising got more accurate, or more honest, but they still have some pretty shitty efficiency: the only time they aren't at the bottom of the charts is when the E-cores are used.
Intel's P-cores are not efficient by any metric.

Ironically, 11th gen was pretty good single-threaded, but pure garbage in MT.

[Chart: single-threaded efficiency comparison]

You can't discuss the performance of the P-cores as if they have the efficiency of the E-cores - very little can or will use both, other than a few specific workloads and synthetic tests.
The E-cores do nothing for gamers, for example.

To me, when you advertise something as 65 W TDP while the actual power usage is higher, it is not an honest claim. This goes for both Intel and AMD, or any chip company. Intel is often called out for this practice simply because their claims are generally the most misleading when a 65 W TDP chip actually pulls substantially higher numbers. You can enforce that TDP for this i9 chip, but at a substantial loss of performance, because the truth is that it NEEDS that much power to provide high performance. In the case of the i7 and i9 non-K versions, I feel Intel needs to be realistic and bump that way outdated and misleading "65W TDP" marketing up to a more meaningful/accurate value.
TDP is thermal design power, not "total wattage", so they do both have some leniency here.


Seeing a 65 W TDP become a 95 W peak or similar was fine as long as those peak values weren't constant - because short boosts wouldn't overwhelm a cooler designed for 65 W TDP.
Intel's 10700 broke that by making 65 W become 215 W, and the figure has been meaningless ever since.
 
As long as they actually get more performance for that power consumption...

Newer Intel advertising got more accurate, or more honest, but they still have some pretty shitty efficiency: the only time they aren't at the bottom of the charts is when the E-cores are used.
Intel's P-cores are not efficient by any metric.

You can't discuss the performance of the P-cores as if they have the efficiency of the E-cores - very little can or will use both, other than a few specific workloads and synthetic tests.
The E-cores do nothing for gamers, for example.

Efficiency in synthetic workloads doesn't correlate with actual gaming; maybe you should look more closely at power usage during gaming.
 
Efficiency in synthetic workloads doesn't correlate with actual gaming; maybe you should look more closely at power usage during gaming.
What about rendering (video) workloads?
 
Efficiency in synthetic workloads doesn't correlate with actual gaming; maybe you should look more closely at power usage during gaming.
which is erratic, and people get misled
I've already had arguments with people about that here, who will use 4K 60 FPS results and show temps and wattages, in which case you could run a 2500K and get the same performance

If you're going to claim they're efficient, don't test and claim that efficiency when the CPU is underclocked and undervolted because it has nothing to do

These mangled claims that mix up power efficiency, performance and wattages when the CPU isn't even boosting or using its higher clock speeds are entirely a problem of misunderstanding - because the moment you're not GPU-limited, you'll suddenly find your wattages and heat output shoot up massively.

Single-threaded efficiency charts don't lie. They don't go away. They don't suddenly become wrong or meaningless because you found a niche situation where the CPU doesn't show those behaviours - in those same situations, the more efficient CPUs get even better, too.
 
which is erratic, and people get misled
I've already had arguments with people about that here, who will use 4K 60 FPS results and show temps and wattages, in which case you could run a 2500K and get the same performance

If you're going to claim they're efficient, don't test and claim that efficiency when the CPU is underclocked and undervolted because it has nothing to do

These mangled claims that mix up power efficiency, performance and wattages when the CPU isn't even boosting or using its higher clock speeds are entirely a problem of misunderstanding - because the moment you're not GPU-limited, you'll suddenly find your wattages and heat output shoot up massively.

Single-threaded efficiency charts don't lie. They don't go away. They don't suddenly become wrong or meaningless because you found a niche situation where the CPU doesn't show those behaviours - in those same situations, the more efficient CPUs get even better, too.


See how the 12700K uses roughly the same power as the 5800X?

Using synthetic workloads is probably the stupidest thing I have seen for evaluating a gaming CPU

The 12600K demolishes the 5600X in terms of efficiency too, but no, everyone only cares about synthetic benchmarks LOL
 

See how the 12700K uses roughly the same power as the 5800X?

Using synthetic workloads is probably the stupidest thing I have seen for evaluating a gaming CPU
What about the other moments when it's not?
Like I've said all along, what about the moments when you're not limited and the CPU has to work harder?

In this shot, the Intel CPU has the higher performance. Zero argument that they can deliver higher performance.

FPS is a good 20 higher. Winner.
The CPU went from 102 W to 125 W (+22.5%) and from 80 to 106 FPS (+32.5%).
The higher your CPU usage gets, the less efficient it's going to be.
Over time, that's going to happen more and more often, and the moment you hit a title that maxes out your CPU, only one of those CPUs is going to hit 200 W+.

While I would agree that's acceptable for the higher FPS, it's not more efficient - that gain did not scale.
Compared to the plain 5800X, it did indeed do better.
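
Just to show how those percentages were derived, using only the numbers quoted above (rough arithmetic, nothing else read into it):

```python
# Derivation of the quoted deltas from the figures in this post.
watts_before, watts_after = 102, 125   # package power, W
fps_before, fps_after = 80, 106        # frames per second

print(f"Power: {watts_after / watts_before - 1:+.1%}")  # +22.5%
print(f"FPS:   {fps_after / fps_before - 1:+.1%}")      # +32.5%
```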



[Screenshot: in-game CPU power draw and FPS comparison]

But going back to what I'm bashing my head against the wall saying over and over:

That power consumption has to be worth it. We're seeing less efficiency here, but if a game ever wants more cores and more threads? That power consumption will go up and keep going up, because the CPU is less efficient overall. THAT is what synthetic testing shows you.

And before you argue "but no game ever uses 100%", go google it. There are constant complaints about it all over the web; currently most Intel users pre-9th gen with 4-core i7s are experiencing the joys of 100% usage in several games, most recently the Spider-Man port. It won't be too long until that's 6 cores maxing out, as the above screenshot shows, with 8 cores not long after.
84% usage on a 16-thread CPU? Yeah, that's a massive hint that you need to be prepared for when those 100% loads have to be sustained.
 
What about the other moments when it's not?
Like I've said all along, what about the moments when you're not limited and the CPU has to work harder?

In this shot, the Intel CPU has the higher performance. Zero argument that they can deliver higher performance.

FPS is a good 20 higher. Winner.
The CPU went from 102 W to 125 W (+22.5%) and from 80 to 106 FPS (+32.5%).
The higher your CPU usage gets, the less efficient it's going to be.
Over time, that's going to happen more and more often, and the moment you hit a title that maxes out your CPU, only one of those CPUs is going to hit 200 W+.

While I would agree that's acceptable for the higher FPS, it's not more efficient - that gain did not scale.
Compared to the plain 5800X, it did indeed do better.



[Screenshot: in-game CPU power draw and FPS comparison]



But going back to what I'm bashing my head against the wall saying over and over:

That power consumption has to be worth it. We're seeing less efficiency here, but if a game ever wants more cores and more threads? That power consumption will go up and keep going up, because the CPU is less efficient overall. THAT is what synthetic testing shows you.

And before you argue "but no game ever uses 100%", go google it. There are constant complaints about it all over the web; currently most Intel users pre-9th gen with 4-core i7s are experiencing the joys of 100% usage in several games, most recently the Spider-Man port. It won't be too long until that's 6 cores maxing out, as the above screenshot shows, with 8 cores not long after.
84% usage on a 16-thread CPU? Yeah, that's a massive hint that you need to be prepared for when those 100% loads have to be sustained.

Sorry, my edit came in late; check out the 12600K vs the 5600X.
Which CPU is more efficient and more future-proof? I would say the 12600K; well, it came out a year later, after all.
 
To me, when you advertise something as 65 W TDP while the actual power usage is higher, it is not an honest claim. This goes for both Intel and AMD, or any chip company. Intel is often called out for this practice simply because their claims are generally the most misleading when a 65 W TDP chip actually pulls substantially higher numbers. You can enforce that TDP for this i9 chip, but at a substantial loss of performance, because the truth is that it NEEDS that much power to provide high performance. In the case of the i7 and i9 non-K versions, I feel Intel needs to be realistic and bump that way outdated and misleading "65W TDP" marketing up to a more meaningful/accurate value.

TDP has not been a reliable indicator of power consumption for a very long time. For Intel, it's a sustained power level around which certain minimum (base) operating frequencies are configured and guaranteed to be maintained regardless of silicon quality, if certain parameters (having a rather low bar) regarding cooling and power delivery are met. It's also a general target point for OEMs to base their cooling on.

The basic idea behind turbo boost algorithms is to allow short-term bursts of processing power by taking advantage of the cooler's (and system's) thermal inertia before system limits are reached. This is also possible because CPUs are very good at not destroying themselves and can sit at their preset thermal limit without long-term issues, within reason.

To me, it's rather nice that CPUs are allowed to use much more power than the base level. In the past, when there was no AMD competition, frequencies were usually so low on locked models that they couldn't even reach the stated TDP under any realistic circumstance. This means that now even locked CPUs in a boosted state can be considered 'overclocked' relative to the past, and that using a better cooler will lead to better sustained performance if power limits are unlocked or increased.

Of course, you have to configure such limits according to your system needs and capabilities; OEMs will generally do this on locked-down systems (typically laptops). It's gaming motherboard makers who have started breaking the system by providing basically unlocked limits by default.

But then, since power limits are actually flexible and intended to be adapted to one's configuration, what to test in CPU reviews? Non-OEM motherboard manufacturers tend to use "gamer" settings, and Intel does not care about it, so it's up to hardware reviewers to make sensible choices here, not just blindly use "motherboard defaults".

I expect that the more cores are added in future processor generations, the greater the gains from increased power limits will be, but what's certain is that any CPU operating near its frequency limit will run inefficiently. For efficiency, there's no choice other than lower frequencies and thus lower voltages, and gamers have to come to terms with that. An unlocked CPU that runs efficiently when pushed to the maximum is a CPU with deliberately low operating frequencies.
 