
Intel Core i5-13600K

Yes, and you will lose some gaming performance.
When I run a game, I can see the 4 E-cores at nearly no load while gaming with my i7-12700K. Has that changed with Raptor Lake?
 
When I run a game, I can see the 4 E-cores at nearly no load while gaming with my i7-12700K. Has that changed with Raptor Lake?
Turn them off, how's your gaming now?
 
Turn them off, how's your gaming now?

So all the background processes and everything else the E-cores did will be shifted to the P-cores, ok..
 
I wonder if Raptor Lake can do lower voltages at the same clocks compared to Alder Lake, especially below 5 GHz. Looking forward to people testing this.

I will definitely be targeting efficiency when I upgrade next year, so a fixed clock speed and undervolting is what I will be looking at.

So all the background processes and everything else the E-cores did will be shifted to the P-cores, ok..

As long as you are not maxing out the P-cores, you will not see a difference after disabling the E-cores. I expect 6C/12T is not enough for some games at unlocked framerates, but I doubt there are many of those right now.
With a 12700K, I doubt you can ever utilize the E-cores in gaming, unless you are doing heavy background tasks like encoding.
For just gaming, I would still prefer 8 P-cores over any hybrid config.
 
Dammit, dammit, dammit, dammit, why didn't you use the 4090? Kill two birds with one stone. People want to know about CPU scaling at 4K. Again, thanks for all your hard work.

 
the test rig really needs a 4090...
 
the test rig really needs a 4090...

If you read the review, he says the 4090 review is incoming for the new chips.
 
Dammit, dammit, dammit, dammit, why didn't you use the 4090? Kill two birds with one stone. People want to know about CPU scaling at 4K. Again, thanks for all your hard work.

Because the workload involved in making changes to test procedures like that is massive. Most likely there'll be a separate article for that, possibly with a more limited scope of CPUs - there are 37 CPUs in the game test charts here after all. 12 games x 37 CPUs x even just 10 minutes per game test = 74 hours of testing. Most likely each game test takes more than that, and of course there's data collection, processing and analysis on top of this, plus all the time needed to build, tear down and re-build systems for testing, re-imaging OSes to avoid driver issues when swapping chips, and more. Even if you're able to run several tests in parallel some of the time, that's still a massive time expenditure to re-test everything for a new test suite.
 
Because the workload involved in making changes to test procedures like that is massive. Most likely there'll be a separate article for that, possibly with a more limited scope of CPUs - there are 37 CPUs in the game test charts here after all. 12 games x 37 CPUs x even just 10 minutes per game test = 74 hours of testing. Most likely each game test takes more than that, and of course there's data collection, processing and analysis on top of this, plus all the time needed to build, tear down and re-build systems for testing, re-imaging OSes to avoid driver issues when swapping chips, and more. Even if you're able to run several tests in parallel some of the time, that's still a massive time expenditure to re-test everything for a new test suite.
Very funny. Just put the 4090 in and update the drivers.
 
If you read the review, he says the 4090 review is incoming for the new chips.
I know that.
It doesn't change the fact that the gaming benchmarks are pretty much just GPU bound and nowhere near as accurate as others.
 
Can someone explain to me what
  • Some workloads get scheduled onto wrong cores
in the negative side of the review is talking about? I read 90% of the review but did not come across this being discussed directly. Anything I need to worry about as a casual gamer who just bought a 13600K?

I know that.
It doesn't change the fact that the gaming benchmarks are pretty much just GPU bound and nowhere near as accurate as others.

I game at both 1440p and 1080p, and I disagree with you here. We are seeing 30-50 FPS gains in some games, and a 3080 10 GB isn't exactly what I'd call super high end, not with next gen here. Personally, I am very happy to have the 13600K now; it will benefit me in raw frames gained.
 
Dammit, dammit, dammit, dammit, why didn't you use the 4090? Kill two birds with one stone. People want to know about CPU scaling at 4K. Again, thanks for all your hard work.

the test rig really needs a 4090...
Because I'd have to test 35 processors with the RTX 4090 for proper comparison data.

Can someone explain to me what
  • Some workloads get scheduled onto wrong cores
in the negative side of the review is talking about? I read 90% of the review but did not come across this being discussed directly. Anything I need to worry about as a casual gamer who just bought a 13600K?
Note how Virtualization has surprisingly low performance results? That's because it gets scheduled onto the E-cores. All other workloads seem fine.

It doesn't change the fact that the gaming benchmarks are pretty much just GPU bound and nowhere near as accurate as others.
If your system is not GPU bound while gaming, you wasted a lot of money on the GPU. You should always be GPU bound, because the GPU costs much more than the CPU

12 games x 37 CPUs x even just 10 minutes per game test = 74 hours of testing.
12 games x 4 resolutions x 37 CPUs x 10 minutes = 296 hours

I spent almost a whole month in summer testing these CPUs
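
Just to sanity-check that math, here is a minimal sketch using only the numbers quoted above (the 10 minutes per run is the same assumed lower bound, not a measured figure):

```python
# Quick sanity check of the testing-time math from the posts above.
games, cpus, minutes_per_run = 12, 37, 10   # 10 min/run is an assumed lower bound

one_resolution   = games * cpus * minutes_per_run / 60
four_resolutions = games * 4 * cpus * minutes_per_run / 60

print(f"1 resolution:  {one_resolution:.0f} hours")    # -> 74 hours
print(f"4 resolutions: {four_resolutions:.0f} hours")  # -> 296 hours
```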
 
but not in a CPU Benchmark...
Why not? Are we influencers who are benchmarking for the sake of showing how awesome the new tech is? So that people think they get more FPS, while they will gain almost nothing?
 
I don't use virtualization, so thank you for the clarification @W1zzard
Just a casual gamer, so this won't affect me. Really glad I am making this upgrade. Should give me some fun times over the next 5 years or so.
 
Because I'd have to test 35 processors with the RTX 4090 for proper comparison data.


Note how Virtualization has surprisingly low performance results? That's because it gets scheduled onto the E-cores. All other workloads seem fine.


If your system is not GPU bound while gaming, you wasted a lot of money on the GPU. You should always be GPU bound, because the GPU costs much more than the CPU


12 games x 4 resolutions x 37 CPUs x 10 minutes = 296 hours

I spent almost a whole month in summer testing these CPUs
Sorry, I didn't know you had another test incoming BEFORE I OPENED MY BIG MOUTH.
 
Mainstream champion for sure! Gaming performance beyond anything from previous gen or what AyyyyMD can muster.

On Techspot, the 13900K is 4% faster than the 7700X with a 4090 (not a 3080). $590 vs. $400. :)

(Techspot chart: average FPS at 1080p)
 
Just got my 13600K ordered. Thanks for the great review, mate. Never thought I'd be going back to Intel.

Now I need to figure out if I want to go with a budget last-gen board or a Z790 board...

Is the 13600K plug and play with older boards? Like, let's say I find a really good deal on a Z690 board, can I just pop the 13600K in it with no worry, or would it need a BIOS update first with an older Intel chip?
Check out the MSI Z690 EDGE DDR4. I was able to snag one for $209. It has a BIOS flash feature where you don't need a CPU or RAM installed. It even comes with a 16 GB flash drive (USB 2.0). You do need to format it to FAT32, though; it comes as NTFS. Just ordered a 13600K. Hope it works well for another 5+ years like my 6600K. Now time for RDNA3.
https://www.newegg.com/p/N82E16813144486
 
So all the background processes and everything else the E-cores did will be shifted to the P-cores, ok..
I've been wondering if you can use the Process Lasso program to cordon off all Windows processes to only the E-cores, then have all your main apps/games run on only the P-cores.

I don't own a 12th or 13th gen, but it would be an interesting experiment to try.

I wouldn't be surprised if it resulted in a slightly lower max FPS in games, but I'm more interested in whether it improves the 1% and 0.1% lows.
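
Not something I've tried, but here is a minimal sketch of that idea using Python's psutil library instead of Process Lasso. The core numbering is an assumption (on a 12700K the 8 P-cores with HT usually enumerate as logical CPUs 0-15 and the 4 E-cores as 16-19; check yours first), and game.exe is just a placeholder name:

```python
# Rough sketch: push background processes onto the E-cores and keep a game
# on the P-cores, similar to what Process Lasso rules would do.
# ASSUMPTION: 12700K-style enumeration - logical CPUs 0-15 = P-cores (8C/16T),
# 16-19 = E-cores. Verify on your own system before relying on this.
import psutil

P_CORES = list(range(0, 16))    # assumed P-core logical CPUs
E_CORES = list(range(16, 20))   # assumed E-core logical CPUs
GAME_EXES = {"game.exe"}        # placeholder: your game's process name(s)

for proc in psutil.process_iter(["name"]):
    try:
        name = (proc.info["name"] or "").lower()
        if name in GAME_EXES:
            proc.cpu_affinity(P_CORES)   # game gets the P-cores to itself
        else:
            proc.cpu_affinity(E_CORES)   # everything else goes to the E-cores
    except (psutil.AccessDenied, psutil.NoSuchProcess):
        pass  # protected/system processes need admin rights or can't be moved
```

Process Lasso does basically the same thing persistently and with nicer rules; this is just to illustrate the affinity idea, and note that pinning every background process to four E-cores is exactly the experiment described above, not necessarily a good default.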
 
Please help me understand the M.2 SSD situation. From what I understand, installing an M.2 SSD will cripple the GPU, which seems ridiculous. Is it only true for Gen5 SSDs, or does any PCIe drive have this effect? Say I install my 660p (Gen3) in the M.2 slot alongside the 3080 (Gen4), would it still castrate the GPU, or is it only a problem when mixing Gen5 GPUs and Gen5 SSDs?
If there's a simple answer in the review, I must have missed it - if that's the case I'd be much obliged for pointing me to the right place.
+1 here, can we get an answer for this?
 
+1 here, can we get an answer for this?
Alder Lake and Raptor Lake have 16 lanes of PCIe 5.0 from the CPU that usually go to the x16 slot for the GPU. Unlike Zen 4, the x4 dedicated to the M.2 SSD is 4.0 and not 5.0. In order to get a 5.0 M.2 slot, the motherboard has to take it from the GPU lanes, thus reducing the GPU to x8 (and potentially wasting x4 in the process). I doubt many motherboards will actually do this; the MSI MAG Z790 Tomahawk WiFi DDR4, which was reviewed today, does not.
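
To put rough numbers on that, here is a back-of-the-envelope sketch; the per-lane throughput figures are the usual approximate values (~1 GB/s for 3.0, ~2 GB/s for 4.0, ~4 GB/s for 5.0):

```python
# Back-of-the-envelope PCIe bandwidth per slot (GB/s, approximate figures).
GB_PER_LANE = {3.0: 0.985, 4.0: 1.969, 5.0: 3.938}

def slot_bandwidth(gen, lanes):
    return GB_PER_LANE[gen] * lanes

# The RTX 3080 is a PCIe 4.0 card, so it runs at 4.0 speeds either way:
print(f"GPU at 4.0 x16: {slot_bandwidth(4.0, 16):.1f} GB/s")   # ~31.5
print(f"GPU at 4.0 x8:  {slot_bandwidth(4.0, 8):.1f} GB/s")    # ~15.8

# The CPU-attached M.2 slot is 4.0 x4; a Gen3 660p can't even saturate that:
print(f"M.2 at 4.0 x4:     {slot_bandwidth(4.0, 4):.1f} GB/s") # ~7.9
print(f"660p max (3.0 x4): {slot_bandwidth(3.0, 4):.1f} GB/s") # ~3.9
```

So, for the question above: a Gen3 drive like the 660p in the regular CPU-attached 4.0 x4 M.2 slot doesn't touch the GPU's lanes at all; only boards that wire a Gen5 M.2 slot off the x16 slot drop the GPU to x8.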



 
Yes, and you will lose some gaming performance. Good idea for an article

Waiting for the article, and maybe add some benchmarks about the impact on temperatures/watts if you can.

:)
 