
Intel Core i5-13600K

Best for gaming, and it also has good application performance. I'm not sure if I should switch from my 12600K, or whether it's more logical to wait for the 14th generation.
If you only need gaming performance, there is no reason to switch. But it is just a simple CPU swap.

14th gen will require a new motherboard. We do not know much about Meteor Lake right now, except that it will use multiple dies in a single package. But rumors say that it will not be able to clock as high as Alder/Raptor Lake, which are on a very mature process. 14th gen will be the first one using the new Intel 4 process. I would not be surprised if gaming performance was actually lower because of that.
I do not think it is a major architecture change for the actual CPU cores. I think the main purpose is to switch to the multi-chip approach. It will have much better efficiency, but not peak performance. Mobile is the main focus here.
 
ComputerBase did some testing on the 13600K at lower power limits, and it had some pretty good results: at 125 W it's only ~5% slower than at the stock 181 W limit.
Here's a simple chart using their results, also including the i9 and i7. The Y axis is their score based on 9 workloads (7-Zip, Agisoft PhotoScan Pro: Align Photos (84 JPEGs), Blender Benchmark: Quick Benchmark, Cinebench R15, Cinebench R20, Corona 1.3 Benchmark, DigiCortex Simulation: BenchLarge, HandBrake, POV-Ray):
[Attached chart: 1667612700686.png]
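As a rough sanity check on the numbers above (using only the ~5%-slower-at-125 W figure quoted here, not ComputerBase's raw scores), the performance-per-watt gain from the lower limit works out like this:

```python
# Rough perf-per-watt comparison from the figures quoted above:
# ~95% of stock performance at a 125 W limit vs. 100% at the stock 181 W.
def perf_per_watt(relative_perf: float, watts: float) -> float:
    return relative_perf / watts

stock  = perf_per_watt(1.00, 181)   # stock power limit
capped = perf_per_watt(0.95, 125)   # 125 W limit, ~5% slower

gain = capped / stock - 1
print(f"efficiency gain at 125 W: {gain:.0%}")  # roughly +38% perf/W
```

So trading ~5% of performance buys close to 40% better efficiency, which is why the power-limited results look so good.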
 
And why would I want efficiency cores for that? Simply put in more performance cores that handle the same thing with greater performance. On a laptop it makes sense; on a desktop? Nah, just give me more big cores.


Except they haven't, and there are lots of downsides: you need to run Windows 11, and even then the E-cores are problematic with older software (and even with some newer software). And Zen 4 feels smoother and more responsive.
Also, I'm not going to pay Intel for something I will have to disable on first boot (it was bad enough with integrated GPUs; at least they added the F-series CPUs for that). Give me a pure, homogeneous, ultra-high-performance classic CPU. That's what I pay for, not some laptop gimmick.
Gaming isn't the only thing I care about, but it's my main concern. I do all sorts of stuff, and I'm a very heavy multitasker (think games + browsers with 1,400 tabs + WhatsApp Desktop + PDFs + Illustrator/InDesign + Discord + assorted TSRs, all running at the same time), and I won't downgrade to Win11.
I have a 13600K on an Asus Z690 Strix A D4 @ 5.9 GHz peak and 5.6 GHz all-core, on a 2-year-old custom water loop with a Corsair 420 radiator (3 x 140 mm Arctic fans in push configuration) and Samsung B-Die 4000 CL16 (2 x 16 GB) @ 3871 MHz; BCLK is 100.1 and the auto ring clock is 4500. This processor came from Newegg's New Jersey warehouse. This is a golden CPU; it only hits 65C after 3-4 hours of gaming. This overclock is Linpack stable.
I have kept my E-cores enabled to handle background tasks. My E-cores run @ 4.2 GHz under heavy workloads and 4.4 GHz under medium-to-light workloads.
If you have the right cooling solution, you can really push a good 13600K. I used the Asus AI overclocking feature; it took my 13600K to a 68% overclock, with my cooling rated at 167 on Asus's cooler rating system. I did set the CPU voltage manually to 1.34 V, as auto was setting it to 1.41 V at idle but only 1.26 V under full load while benching @ 5.6 GHz all-core.
This is my first i5; I normally get the processor just below the king of the hill, like the Intel 12700K or AMD 5900X, both of which I've had as my watercooled main system. The 12700K is now in my wife's system and the 5900X in my daughter's.
This 13600K is the best performance for the price I have seen in over 20 years. I have been building PCs since the days of the 286. I can finally say I've gotten my golden processor that overclocks outstandingly.
I view running Windows 11 as a positive, as it has finally matured into a good OS after the latest update.
 
Would it be possible to retest the 13600K vs. the 7600X with a 4090? The latest 54-game results from HUB suggest the 7600X has an edge over the 13600K at both 1080p and 1440p, by 5% and 4% respectively. This is significantly different from the original TPU results with the 3080.
 
DDR4 versus DDR5, and 13600K versus 5800X3D, in 15 tests

 
I have a 13600K on an Asus Z690 Strix A D4 @ 5.9 GHz peak and 5.6 GHz all-core... This is a golden CPU.
Absolutely fantastic CPU. Congratulations on the purchase.
 
Why is the performance of the overclocked CPU so much worse in "Web Browsing" and "Microsoft Office"?
Those should be single-core workloads, so 5.1 GHz vs. 5.6 GHz should give it a nice boost in both.
Especially in Speedometer 2, it's 36% slower when overclocked... why?
[Attached chart: speedometer.png]
 
In fact, the 13600KF has even greater potential; it can beat the 12900K outright.
 
At about 1.100 V, the 13600K becomes more power-efficient than the 7700X or 7900X.

[Attached screenshot: H1bYZ0v.jpg]


25K points from 144 W.
If you want more energy efficiency, it's better to limit the current (A) than the power (PL). That way, better performance is obtained at the same power.
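For what it's worth, the power-limit side of this is scriptable on Linux through the RAPL powercap interface; the current (IccMax) limit usually has no OS-level knob and has to be set in the BIOS. A minimal sketch, assuming the common `intel-rapl:0` package-domain path (it varies by kernel and platform, and writing it needs root):

```python
# Sketch: capping the long-term package power limit (PL1) via Linux RAPL.
# The sysfs path below is an assumption -- check your own /sys/class/powercap.
from pathlib import Path

PL1_PATH = Path("/sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw")

def watts_to_uw(watts: int) -> int:
    # The powercap interface expects microwatts.
    return watts * 1_000_000

def set_pl1(watts: int) -> None:
    PL1_PATH.write_text(str(watts_to_uw(watts)))  # needs root on real hardware

if __name__ == "__main__":
    print(watts_to_uw(125))  # value a 125 W cap would write
```

On Windows the same limits are exposed through tools like Intel XTU or the motherboard BIOS rather than a file interface.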
 
You know you can limit the power of Zen 4 CPUs too, right? And who knows, maybe even the 13900K at a certain voltage can be more efficient than the 13600K on its stock V/F curve. As a matter of fact, I am sure of it.

AMD's new Zen 4 platform requires very expensive motherboards. Even the "cheapest" B650 chipset board costs well over $200, while Intel motherboards can be found for around $100. Sure, these might not be the latest and greatest Z790, but there won't be any big differences when opting for a cheaper B660 board, for example.

I am curious if the author actually verified this on a $100 board.
 
You know you can limit the power of Zen 4 CPUs too, right? And who knows, maybe even the 13900K at a certain voltage can be more efficient than the 13600K on its stock V/F curve. As a matter of fact, I am sure of it.

I am curious if the author actually verified this on a $100 board.
Who in their right mind runs a 13900K on a $100 board?
 
Who in their right mind runs a 13900K on a $100 board?

There's some multi-quoting going on; the question was (AFAICT) about 13900K efficiency at 13600K power limits, and whether the hypothesis that the i9 would be more efficient in that scenario holds on both higher-end and budget boards.
 
You know you can limit the power of Zen 4 CPUs too, right? And who knows, maybe even the 13900K at a certain voltage can be more efficient than the 13600K on its stock V/F curve. As a matter of fact, I am sure of it.



I am curious if the author actually verified this on a $100 board.
The 13600K there is not power-limited, just undervolted and slightly overclocked on the E-cores. CPUs with more cores are more efficient than ones with fewer cores at the same power, no question about it.
 
Can someone clue me in as to why there's such a difference in the performance gap between the 13600K and 13900K when comparing different reviewers?

TPU puts it at a measly 3% at 1080p, which makes the 13600K a fantastic buy. However, I just watched GN's 7900X3D video, and there the 13900K consistently pulls ahead by at least 10%, often more. The 13600K is still the sensible gamer's best buy, but getting the 13900K doesn't seem as ridiculous if you go by their charts.

Is it because they're testing with an RTX 4090, and that somehow makes the differences more pronounced?
 
I doubt the 3080 is ever a bottleneck at 1080p. Maybe it's the game selection. Some games are much more sensitive to clocks and cache.

I still don't think 10% more performance at almost double the price is "not ridiculous". The 13600K should be able to achieve clocks very close to the 13900K if you really want to overclock it. Then the difference will be even smaller.

The only person who buys an i9 for just gaming is someone who doesn't care about value. Money is of no concern to them and they can buy whatever is the fastest. And that's perfectly fine, but the word "sensible" is not in their dictionary. ;)
 
Are there any downsides to running the uncore at a much lower clock, compared to everything at a lower clock but synchronized?

For example:
- cores and uncore at 4 GHz
- cores at 5 GHz, uncore at 4 GHz

The VID is set based on the uncore multiplier, and from my early testing, both of these options are stable in Prime95 at the same voltage.

Obviously raising the uncore would increase performance in cache-sensitive apps, but it drastically increases power consumption.

What I'm wondering is whether the synced 4 GHz option can ever be faster than the other one.

I'm planning on running 4 GHz for most games, but I want the option to increase the clock speed for badly threaded games like some Unreal Engine garbage.
 
I'm planning on running 4 GHz for most games
Why? I got my games running @ 5GHz all P-cores. (i7 12700K)

Does it run too hot over 4GHz P-Cores?
 
Why? I got my games running @ 5GHz all P-cores. (i7 12700K)

Does it run too hot over 4GHz P-Cores?

Because it's a complete waste of power, especially when I play at 4K60 with Vsync.

At least when the uncore goes higher (because then the voltage is higher). 5 GHz P-cores with 4 GHz uncore is about 10 W more in Prime95 compared to 4 GHz synced. I'm just wondering if there's ever a disadvantage.

For example, when I play Destiny 2, I get like 20-30% CPU usage at 4 GHz (6C/12T, E-cores disabled). Power consumption is 25-35 W. Raising the clock (and voltage) would increase power consumption with no performance benefit.
 
I did some more testing and it seems Raptor Lake is pretty good at retaining efficiency if you only increase the core clock speeds while keeping the uncore clock the same. I did some comparisons to my old 9700KF.

All tests with E-cores disabled.

9700KF @ 4.2 GHz (uncore 3.9 GHz)
CPU-Z bench ST score - 489
CPU-Z bench MT score - 3784
Prime95 non-AVX power - ~102 W

13600KF @ 5.0 GHz (uncore 4.0 GHz)
CPU-Z bench ST score - 806
CPU-Z bench MT score - 6228
Prime95 non-AVX power - ~106 W

That's 65% faster at the same power draw (using the same DDR4 memory). I know it's a synthetic benchmark (not cache sensitive), but it's still impressive.

9700KF @ 4.5 GHz (uncore 4.2 GHz)
CPU-Z bench ST score - 531
CPU-Z bench MT score - 4134
Prime95 non-AVX power - ~140 W

13600KF @ 3.3 GHz (uncore 3.3 GHz)
CPU-Z bench ST score - 530
CPU-Z bench MT score - 4088
Prime95 non-AVX power - ~45 W

So this is the same performance at 1/3 the power.
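The two ratios claimed in this post can be double-checked in a couple of lines (numbers taken straight from the results quoted above):

```python
# Sanity-checking the quoted CPU-Z MT scores and Prime95 power figures.
mt_9700kf_42,  w_9700kf_42  = 3784, 102   # 9700KF @ 4.2 GHz
mt_13600kf_50, w_13600kf_50 = 6228, 106   # 13600KF @ 5.0 GHz
mt_9700kf_45,  w_9700kf_45  = 4134, 140   # 9700KF @ 4.5 GHz
mt_13600kf_33, w_13600kf_33 = 4088, 45    # 13600KF @ 3.3 GHz

speedup = mt_13600kf_50 / mt_9700kf_42 - 1   # at ~equal power (106 W vs. 102 W)
power_ratio = w_13600kf_33 / w_9700kf_45     # at ~equal MT score (4088 vs. 4134)

print(f"~{speedup:.0%} faster at similar power")          # ~65%
print(f"~{power_ratio:.0%} of the power at similar perf") # ~32%, about 1/3
```

Both claims hold up against the raw numbers.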

In Destiny 2, running 5.0/4.0 results in 5-10 W higher power draw compared to 4.0/4.0, so that's not significant in terms of heat output. I might just stick with 5.0/4.0 permanently.
 
I wanted to test the E-cores today, but it seems to be a pointless exercise on Windows 10.

The first issue I immediately noticed is that frequency scaling gets disabled and all P-cores and E-cores constantly run at max clocks and voltage. I couldn't get it to work even with ParkControl.

Also, ParkControl shows that all P-cores (the first 12 logical processors) are parked in idle, even with core parking disabled. But I don't know if that's true, since power draw is the same.

It's a mess. I'm not gonna bother anymore until I install Windows 11 on another drive. I just wanted to see how E-cores deal with background video encoding.
 