
Intel Core i5-13600K

When Zen 4 released, excepting the 7950X, all I saw was parity with Alder Lake but at a higher price.

I think a lot of folks intuitively knew this was coming. AMD needed to do a 2-gen type leapfrog of Intel, since AL was already demonstrably superior to Zen 3. They didn't do that, so naturally now they are clearly behind.

With AMD's 2 year release cycle, this is likely to just get worse. This time next year we'll have Meteor Lake, and AMD will still be on Zen 4. In 2024 Intel will release Arrow Lake, and that will be what Zen 5 goes up against.

I find it highly unlikely that AMD would be competitive with Zen 5, using the same socket and so on, against an Intel part two generations in the future, when they are effectively most of a generation behind right now. It's a total repeat of the late 2000s and early 2010s.
I can't guarantee any future predictions, but Intel have surged back to parity with AMD because their troubled 10nm process finally ironed out enough kinks to make a viable product, and that product is only viable because they've found a way to make it eat 300W+ without catching fire. Performance per watt matters immensely everywhere except the high-end, liquid-cooled, DIY-enthusiast niche, which is absolutely minuscule in terms of units sold. What matters for efficiency is the process node, since I believe Intel's chip design engineers to be of a high calibre. From a process node perspective, Intel's last 3 years have been a complete shit-show:

[Attached image: Intel process-node roadmap]


Their 10nm first failed to launch at 8th Gen, with only some badly defective mobile i3s actually making it out of the foundry. It took until Alder Lake to be viable for desktop and server, and rebranding it "Intel 7" isn't progress, it's just a new marketing name. Looking at that roadmap above, I see that right now Intel are transitioning to 4nm, EUV, and a new Foveros packaging ALL AT ONCE. Just bear in mind they've spent FOUR GENERATIONS floundering around trying to get a single jump from 14nm to 10nm (now "Intel 7") to work viably, and the result is only "good" if you pump insane amounts of power through it. I definitely do not have the confidence you do that Intel are just going to return to form as the leading global semiconductor foundry. Their foundry execution record over the last 5-6 years has been abysmal, and they are so far behind TSMC now that I doubt they will truly catch up before the end of this decade.

Additionally, Intel aren't ahead of AMD in IPC; it's just that they clock their CPUs to the moon at 2.5x the power consumption. Intel are playing catch-up with AMD in terms of cache, interconnect, IMC latency, chiplet technology, and MCM scaling from Ryzen 5 up to 8-die EPYC server CPUs. 14th Gen will have many of these things (for the first time) that AMD have had for nearly five years now. That's five years of field-tested experience AMD have that Intel don't, and a lot of that is down to manufacturing process, not design. Again, I don't think I need to reiterate how poor Intel's manufacturing and process node track record has been for the last half-decade, and I don't have any real reason to expect a massive reversal of competence out of the blue...
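To put that clocks-versus-power tradeoff in rough numbers: CMOS dynamic power scales roughly with V² × f, so the last few hundred MHz are disproportionately expensive. A toy sketch — all voltages, clocks, and the scaling constant are invented for illustration, not measured figures for any real CPU:

```python
# Toy model of the frequency/voltage/power tradeoff: CMOS dynamic power
# scales roughly with C * V^2 * f. All numbers here are invented for
# illustration, not measured figures for any real CPU.

def dynamic_power(voltage, freq_ghz, c=30.0):
    """Rough dynamic power, arbitrary units (c is an arbitrary constant)."""
    return c * voltage**2 * freq_ghz

# A chip run near its efficiency sweet spot...
modest = dynamic_power(voltage=1.05, freq_ghz=4.5)
# ...versus the same design pushed hard: higher clocks need higher
# voltage, so power grows much faster than performance.
pushed = dynamic_power(voltage=1.35, freq_ghz=5.5)

perf_gain = 5.5 / 4.5 - 1         # ~22% more clock
power_cost = pushed / modest - 1  # ~102% more power

print(f"clock gain: {perf_gain:.0%}, power cost: {power_cost:.0%}")
```

The arbitrary constant cancels out in the ratio; the point is only that a ~22% clock bump bought with a voltage bump can roughly double power draw.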

Going back to Ryzen 7000, sure - it may only be as good as Alder Lake, but it's coolable, efficient, and you know that a PCIe 5.0, DDR5 AM5 motherboard will be good for Ryzen 8000, 9000, 10000 and possibly beyond. Socket 1700 is already a dead end, to be replaced next generation.
 
MSFS seems like literally the only scenario where that upgrade would make any sense (a 17% uplift at 1080p according to Eurogamer, but crucially starting from pretty low fps to begin with), though I hope you can get a good price when selling your 12700K, as it's not exactly a cost-effective move.
Thanks both. With my 240mm AIO, unless I change it, heat is going to limit what I can overclock the 13th series to.

I'd be interested in how to overclock per core; I'll have to have a mess around in the BIOS settings. I've got the current setup running stable, and I've tried 5.1GHz, but above 1.32V on the core it gets too hot when stress testing.
 
Watch that power draw. :roll:
Nope, nopity NOPE.

What a clown world we live in.

I am so glad I upgraded to my current system. I was worried that I would be done in by both this generation from Intel and AMD's, but this platform will last a good while yet, and I have options to make my power draw even friendlier while keeping my performance and temps in check. :)
This CPU has a very high potential for undervolting. Stock 1.35V, UV 1.1V. Power draw minus 100W, temperatures down from 80-100 to ~60-67 degrees.
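For what it's worth, that kind of saving is consistent with dynamic power's roughly quadratic voltage dependence. A back-of-the-envelope sketch using the voltages above — the 250W stock package power is an assumed figure, and leakage (which also falls with voltage) is ignored:

```python
# Rough check of the undervolt numbers above. At a fixed frequency,
# dynamic power scales roughly with V^2; the 250W stock package power
# below is an assumption for illustration, and leakage (which also
# falls with voltage) is ignored.

v_stock, v_uv = 1.35, 1.10
dynamic_ratio = (v_uv / v_stock) ** 2   # ~0.66

stock_package_watts = 250               # assumed all-core load figure
saved = stock_package_watts * (1 - dynamic_ratio)

print(f"V^2 ratio: {dynamic_ratio:.2f}, roughly {saved:.0f}W saved from 250W")
```

That's ~84W from dynamic scaling alone; with the leakage reduction on top, a saving near 100W is plausible.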
 
One of the best review formats there is. Thanks for this. Instead of the constant flood of doom videos on YouTube about 90°C CPUs, you at least point out gaming temps and power consumption under gaming loads, where the vast majority use these. Wonderful graphs too. Always recommend your reviews!
 
Looks like poor W1zz has been putting in the OT with all these reviews!!


A 3.1% relative difference at 4K over the 5600 I just bit the bullet on... hard to go wrong with a $170 CAD sale price ($125 USD).

Probably wait until Black Friday to pick up a mobo, then upgrade for another 6 years (on the cheap) :)

Media encoding is nice though. Like most modern CPUs, it looks like OC'ing scores lower for most things vs auto/stock.

It wasn't long ago that 1440p saw smaller margins at the top of the table too, with 4K sitting at ~1% difference... now with RPL/Zen 4 vs the 5600/my 9700K, we're seeing a 15% shift at 1440p. 15% at this level is more than acceptable... but in my experience the 9700K's single-core performance is limiting performance or diminishing consistent visual eye candy in select pacier titles. I like my heavier multiplayer titles to run silky smooth, hence I'm compelled to upgrade.

Now we're hearing the 40-series/potentially RDNA3 is transferring the bottleneck to the CPU... hope that's just a smelly-wallet-pinch 4090/4K thing, with mid-segment cards at 1440p delivering a finer balance :p
 
Going back to Ryzen 7000, sure - it may only be as good as Alder Lake, but it's coolable, efficient, and you know that a PCIe 5.0, DDR5 AM5 motherboard will be good for Ryzen 8000, 9000, 10000 and possibly beyond. Socket 1700 is already a dead end, to be replaced next generation.

This is why I've just built my very first AMD system ever. Been building PCs since the 8088 and never felt the desire to assemble an AMD rig, not even in the early 2000s. My 7700X with a -30 undervolt boosts to 5.5GHz and runs in the high 30s / low 50s C whilst gaming at 26-28 ambient. It's also pulling barely 50W doing so. Sure, it's not as fast as the new 13th Gen Intel parts but it makes virtually no difference to me at 4K/120 with my 4090 (which I have also, coincidentally, undervolted since my monitor won't go over 120fps. This gives me a power draw of around 270W on the 4090 and it utterly demolishes my 3090's frame rates with that card pulling 350-400W).

Makes for a whisper silent system that runs cool and super efficient at a total system gaming power draw well under 350W and pretty much maxing out my 4K/120 screen! Looking forward to getting a PCIe Gen 5 SSD when those launch and then this rig will be next to perfect for my needs!

Edit: Just checked, the CPU runs at 39-53°C at 26°C ambient and draws 51W in gaming loads! AMD could easily have made this the default behaviour; they knew they were going to lose outright performance to 13th Gen!
 
This is why I've just built my very first AMD system ever. Been building PCs since the 8088 and never felt the desire to assemble an AMD rig, not even in the early 2000s. My 7700X with a -30 undervolt boosts to 5.5GHz and runs in the high 50s / low 60s C whilst gaming at 26-28 ambient. It's also pulling barely 56-60W doing so. Sure, it's not as fast as the new 13th Gen Intel parts but it makes virtually no difference to me at 4K/120 with my 4090 (which I have also, coincidentally, undervolted since my monitor won't go over 120fps. This gives me a power draw of around 270W on the 4090 and it utterly demolishes my 3090's frame rates with that card pulling 350-400W).

Makes for a whisper silent system that runs cool and super efficient at a total gaming power draw under 350W and pretty much maxing out my 4K/120 screen! Looking forward to getting a PCIe Gen 5 SSD when those launch and then this rig will be next to perfect for my needs!
Sounds like a sweet - and well tuned - setup! I personally wouldn't bother with that PCIe 5.0 SSD though - even when DirectStorage becomes a thing, you won't see any meaningful performance increase compared to a 4.0 drive (and likely even a 3.0 drive). Outside of massive sequential operations the bottleneck is the NAND, not the controller or bus, and NAND isn't getting much faster with time outside of pure sequential loads either. I would much rather have twice the capacity on a nominally slower drive, as real world performance (assuming you pick a good drive) won't be noticeably different.
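The NAND-versus-bus point can be put in rough numbers: at QD1, throughput is set by per-request latency rather than link speed. A quick sketch, where the latency and bandwidth figures are ballpark assumptions rather than measurements of any specific drive:

```python
# Why QD1 random reads don't care which PCIe generation the drive uses:
# at queue depth 1, throughput = block size / per-request latency, and
# that latency is dominated by the NAND itself. Figures below are
# ballpark assumptions, not measurements of any specific drive.

nand_read_latency_s = 60e-6   # ~60 us for a typical TLC 4K random read
block_bytes = 4096

qd1_throughput_mbs = block_bytes / nand_read_latency_s / 1e6   # ~68 MB/s

pcie3_x4_mbs = 3500    # ~usable PCIe 3.0 x4 bandwidth
pcie5_x4_mbs = 14000   # ~usable PCIe 5.0 x4 bandwidth

print(f"QD1 4K random read: ~{qd1_throughput_mbs:.0f} MB/s "
      f"vs link limits of {pcie3_x4_mbs} / {pcie5_x4_mbs} MB/s")
```

At these assumed figures the drive uses only a few percent of even a Gen 3 link at QD1, so a Gen 5 controller changes nothing for that workload; only big sequential transfers see the bus.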
 
It wasn't long ago that 1440p saw smaller margins at the top of the table too, with 4K sitting at ~1% difference... now with RPL/Zen 4 vs the 5600/my 9700K, we're seeing a 15% shift at 1440p. 15% at this level is more than acceptable... but in my experience the 9700K's single-core performance is limiting performance or diminishing consistent visual eye candy in select pacier titles. I like my heavier multiplayer titles to run silky smooth, hence I'm compelled to upgrade.

Now we're hearing the 40-series/potentially RDNA3 is transferring the bottleneck to the CPU... hope that's just a smelly-wallet-pinch 4090/4K thing, with mid-segment cards at 1440p delivering a finer balance :p
Ya, the end result comes down to how well it meets your expectations and how much budget you have. My current 1700 and old RX 480 were still meeting my wants and needs, so no point burning cash for no perceivable gains for my requirements. For anyone else... it's your money, do whatever you want with it.
 
I would much rather have twice the capacity on a nominally slower drive, as real world performance (assuming you pick a good drive) won't be noticeably different.
I couldn't agree more, which is why my boot drive is a FireCuda 530 2TB Gen 4 drive (which, by the way, is noticeably quicker than my previous 2TB Samsung 970 Evo Plus in system start-up and game load times) and my data drive is an 8TB Corsair MP400 Gen 3 unit. I do find the 2TB drive slightly on the small side, so I guess I'll hang on to this setup until decent 4TB Gen 5 drives are out, and then I'll hock this one and be done!

I did consider waiting for the 13700K reviews to come in before I made this rig but I didn't like the notion that 13th gen was on a dead platform that would require a whole new system if I wanted to upgrade in 3-4 years' time! All I can say is I'm glad AMD pushed Intel as hard as they have in the past 2-3 years because the products from the 2600K to the 8086K were marginal improvements, year-on-year, at best!

Good time to be a PC gamer, honestly, you can't really lose whichever way you go!
 
I'll still stick with AMD. I don't do "E-cores" at all; I don't want them, I never wanted them, and I won't overpay for unused silicon that I'll end up disabling on first power-up.

IF Intel had released an 8-core, P-core-only CPU, then my new PC would probably be that. As it is? I don't give a toss if it's more expensive; the software I use runs better on normal cores.
Well, come Zen 5 you will get E-cores, but they will be Zen 5c cores: far more powerful than Gracemont+++ and with SMT enabled. Come Arrow Lake, the i9 will have 48 cores, 8P + 40E!
 
Should I bother with Intel Optane? I am pretty sure my B660 board supports it, but it seems like I never hear anything about Optane. From what I understand it is relatively cheap for a 16gb stick of it, you plug it in a m.2 slot, install the drivers, reboot, and done... it just makes everything snappier and faster or something over time automatically after that?

(i just bought a 13600k, is why I am asking)
 
Why is the performance of the overclocked CPU so much worse in "Web Browsing" and "Microsoft Office"?
Those should be single-core workloads, so 5.1GHz vs 5.6GHz should give it a nice boost in both.
Especially in "Speedometer 2", it's 30% slower when overclocked... why?
 
AMD Ryzen 7 7700 non-X CPU allegedly features 8 cores and 65W TDP

I wonder at what price it makes sense, given the kind of performance that the 13600K/KF brings...

Should I bother with Intel Optane? I am pretty sure my B660 board supports it, but it seems like I never hear anything about Optane. From what I understand it is relatively cheap for a 16gb stick of it, you plug it in a m.2 slot, install the drivers, reboot, and done... it just makes everything snappier and faster or something over time automatically after that?

(i just bought a 13600k, is why I am asking)
Don't bother.
It has good QD1-8 4K random read performance but no real advantage in writes, and it's just rebadged 5-year-old tech.
I don't know what you can find it for, $30? Just put that towards a better SSD imo, or something else.
 
Incredible price/performance ratio. I think it will be the winner of this generation, waiting for the 13400F.
 
This just shows how close these CPUs can be...

TPU: The 13600K is a bit faster than the 7700X at 1080p & 1440p with an RTX 3080.

HUB: The 13700K is just as fast as the 7700X at 1080p & 1440p with an RTX 4090:
[Attached image: HUB benchmark chart]

$120 for 2 P-cores and a small frequency boost on unlocked chips that already have hefty cooling requirements and over-indulgent power draw out of the box. Keep in mind an Alder Lake Pentium is $75 for the same 2 P-cores; you could probably buy that plus a motherboard for about the same as, or less than, the price premium of the 13700K.
 
Holy shit, it gives even the 7700X a run for its money!!
Time to lower prices, AMD...
And the 3D versions won't solve the 7700X's deficit in applications; only a price decrease would be acceptable.

AMD should think very hard about increasing the core count of the 8-core part to at least 10, and the 6-core to 8.
 
I just watched Hardware Unboxed's review of the 13600K against the 7600X after reading this review. Not sure what is going on, but their results showed the 7600X being overall faster in gaming; even in the same games tested by both reviews, the results are completely opposite.
 
I just watched Hardware Unboxed's review of the 13600K against the 7600X after reading this review. Not sure what is going on, but their results showed the 7600X being overall faster in gaming; even in the same games tested by both reviews, the results are completely opposite.
This is interesting, because whichever site you look at, and whichever CPU wins, you won't notice it in real life; these reviews are more of an academic exercise, especially when the results are so close to each other.
A "win", at least for me, has to be a difference noticeable by the user, like 20-30% minimum.
 
I just watched Hardware Unboxed's review of the 13600K against the 7600X after reading this review. Not sure what is going on, but their results showed the 7600X being overall faster in gaming; even in the same games tested by both reviews, the results are completely opposite.

Has to do with game selection and platform setup. Look at the three motherboards he is using. He's probably not equalizing them at all. When you just pop a CPU into a motherboard and set XMP, you have no idea what it is going to do.

The difference between what a MEG ACE motherboard (Zen 4) defaults to and what a Tomahawk (DDR4 Intel) or Carbon (DDR5 Intel) defaults to is likely significant. These 3 boards are all at entirely different tiers.
 
I like this CPU: competitively priced, fairly decent power consumption in games, amazing gaming performance, and it crushes the 7600X in multithreaded applications! AMD's 7600X needs to cost something like $200 to make sense at this point, and their 7700X needs to cost $350! I don't see AMD doing well with their lower-end parts if prices stay the same!

Intel definitely has an advantage at the mid-range in terms of performance and value! Sure, power consumption and temperatures are quite bad overall, and in demanding applications this CPU turns into a heater as well, but for games and normal application work it's a solid CPU.
 
I just watched Hardware Unboxed's review of the 13600K against the 7600X after reading this review. Not sure what is going on, but their results showed the 7600X being overall faster in gaming; even in the same games tested by both reviews, the results are completely opposite.

Looking at multiple 13600K vs 7600X reviews, there seems to be an emerging pattern which "possibly" explains why some reviews show the 13600K coming out ahead whilst others back the 7600X.

It looks like it's down to the test bench's choice of graphics card. Based on several 10+ game averages, it appears:
  • Where a high-end RTX 30-series card is used, the 13600K takes the win with a nice 8% lead (taken from the TPU 12-game bench shown below/others).
  • Where the RTX 4090 is used, the 7600X takes the win with a 6% lead (taken from Jarrod's 25-game bench shown below). Other 4090 reviews with 8-12 game averages show a 1-4% lead.
If this is correct, I'd appreciate it if the experts could explain why the two generations of cards are showing variable results with RPL and Zen 4. I totally get that some of these results will vary based on memory configurations, game type, etc., but for some reason I strongly suspect the primary offender is the GPU(s).
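One mechanism that can produce this kind of swing is GPU-bound capping: the delivered frame rate is roughly the minimum of what the CPU and the GPU can each sustain, so a slower GPU hides part of any CPU gap. A toy model, with all frame-rate numbers invented for illustration:

```python
# Toy model: delivered fps is roughly min(CPU limit, GPU limit).
# All frame-rate numbers are invented for illustration.

def delivered_fps(cpu_fps, gpu_fps):
    return min(cpu_fps, gpu_fps)

cpu_a, cpu_b = 210, 195   # two CPUs with a real ~8% gap

# Slower GPU: both CPUs exceed the GPU limit, so the gap is hidden.
slow_gpu = 180
print(delivered_fps(cpu_a, slow_gpu), delivered_fps(cpu_b, slow_gpu))  # 180 180

# Faster GPU: the CPU limit is exposed and the full gap shows.
fast_gpu = 300
print(delivered_fps(cpu_a, fast_gpu), delivered_fps(cpu_b, fast_gpu))  # 210 195
```

This explains why the *size* of the gap changes with the GPU; an outright flip in which CPU wins would need something extra on top, such as game selection or memory configuration differences between test benches.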

TPU: 1080p / RTX 3090 / DDR5 / 13600K v 7600X


Individual 12-game performance: https://www.techpowerup.com/review/intel-core-i5-13600k/18.html


JARRODS: 1080P / RTX 4090 / DDR5 / 13600K v 7600X

 