
Intel Core i5-13600K

Same here.
With a 4090, the 7600X is faster.

 
OK, there's a question that I've had ever since these new chips from both AMD and Intel came out.

How is the Ryzen 7 5800X3D doing so well against these new chips despite it being a two-year-old chip? What's going on here?
 
How is the Ryzen 7 5800X3D doing so well against these new chips despite it being a two-year-old chip? What's going on here?

 
OK, so a one-year-old processor, then.
 
Same here.
With a 4090, the 7600X is faster.


Yeah, I wonder if this is a common variable amongst different generations of GPUs... either way, it seems like both the 13600K and 7600X are at the top of their game (pun intended), with the 13600K taking the win with its powerhouse multi-threaded performance. I dunno, I'm not entirely excited about either of these options, but I look forward to more efficient non-K/X models.

OK, there's a question that I've had ever since these new chips from both AMD and Intel came out.

How is the Ryzen 7 5800X3D doing so well against these new chips despite it being a two-year-old chip? What's going on here?

The two-year-old AMD 5000 series somewhat recently saw the addition of the "X3D" model... it does things a little differently, with some massive gaming performance rewards. Essentially it's AMD's response to Intel's 12th-gen CPUs: taking the older 5800X and vertically stacking additional L3 cache on top (a much bigger cache), which helps it serve data internally instead of having to go out to system RAM. In return, huge performance benefits in gaming, especially in cache-hungry titles! Surprisingly, the 5800X3D even takes on DDR5 platforms housing the 7600X/13600K/etc. and beats them in a couple of titles.

Personally, I prefer the faster overall performance of the 13600K (or even the 7600X in a gaming-only scenario). Obviously, it's hard to justify the platform upgrade costs if you're already on AM4!
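To make the "bigger cache" point concrete, here's a deliberately naive toy model. The working-set sizes and the linear hit-rate assumption are mine, purely for illustration, not anything measured:

```python
# Toy model of why extra L3 helps cache-hungry games: once a game's hot
# working set fits in cache, trips to RAM mostly disappear. The linear
# hit-rate model and the working-set sizes are illustrative assumptions.

def hit_rate(cache_mb, working_set_mb):
    """Naive model: fraction of the hot working set that fits in cache."""
    return min(1.0, cache_mb / working_set_mb)

for ws in (24, 48, 96):  # hypothetical hot working sets, in MB
    plain = hit_rate(32, ws)  # 5800X:   32 MB of L3
    x3d = hit_rate(96, ws)    # 5800X3D: 96 MB with the stacked cache
    print(f"{ws} MB working set: 32 MB L3 covers {plain:.0%}, 96 MB covers {x3d:.0%}")
```

The model is crude, but it captures why some titles barely care while "cache-hungry" ones jump: it depends entirely on whether the hot data spills out of the smaller cache.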
 
Essentially it's AMD's response to Intel's 12th-gen CPUs: taking the older 5800X and vertically stacking additional L3 cache on top (a much bigger cache), which helps it serve data internally instead of having to go out to system RAM.
I've always thought of system RAM as being pretty damn fast.
Surprisingly, the 5800X3D even takes on DDR5 platforms housing the 7600X/13600K/etc. and beats them in a couple of titles.
This is the part that surprises me too. How is this even possible? Newer processors are supposed to be faster than the previous version. That's how it's supposed to be. Yet, that's not necessarily the case.
 
I've always thought of system RAM as being pretty damn fast.

This is the part that surprises me too. How is this even possible? Newer processors are supposed to be faster than the previous version. That's how it's supposed to be. Yet, that's not necessarily the case.

It boils down to the 5800X3D avoiding a lot of cache misses and fetches from system memory, which is an order of magnitude slower in relative terms. DDR5, however, is faster than DDR4, so if you overrun the cache and memory requirements are high, the newer platforms can start to pull ahead. A fast, smaller buffer versus a slower, wider one.
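That "fast smaller buffer vs slower wider buffer" trade-off can be sketched with the textbook average-memory-access-time formula. All of the latencies and hit rates below are round illustrative assumptions, not measurements from any of these chips:

```python
# AMAT sketch: average latency = hit_rate * cache_latency
#                              + miss_rate * memory_latency.
# All numbers are assumed, round figures for illustration only.

def amat(l3_hit_ns, dram_ns, l3_hit_rate):
    """Average latency for accesses that reach L3."""
    return l3_hit_rate * l3_hit_ns + (1 - l3_hit_rate) * dram_ns

small_cache = amat(10, 80, 0.80)  # ordinary L3 + DDR4: more misses
big_cache = amat(10, 80, 0.95)    # stacked L3 + DDR4: far fewer misses
ddr5_small = amat(10, 70, 0.80)   # ordinary L3 + faster DDR5

print(f"small L3 + DDR4: {small_cache:.1f} ns")
print(f"big L3   + DDR4: {big_cache:.1f} ns")
print(f"small L3 + DDR5: {ddr5_small:.1f} ns")
```

With these made-up numbers, cutting DRAM latency barely moves the average, while raising the hit rate nearly halves it, which is the gist of why an X3D part can beat newer DDR5 platforms in cache-bound games.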
 
I wonder, if one were to build a system with a 5800X3D today, how much life could one expect to get out of it, considering today's ever-growing software needs, where software seems to get more bloated by the year?
 
That's a complex question to answer; it depends on intended usage and on what you prioritize most when comparing two chips.
 
I wonder, if one were to build a system with a 5800X3D today, how much life could one expect to get out of it, considering today's ever-growing software needs, where software seems to get more bloated by the year?

I'm not sure why you are so enthralled by the 5800X3D. Yes, it's good at games, but it was never the fastest (the 12900K/KS with fast DDR5 was).

In productivity applications, the 5800X3D is demolished by double-digit percentages. The 12600K is 11.5% faster overall, and the 12700K, which is now in the same price category, is 29.4% faster.

Meanwhile the 13600K, which is cheaper, is 36.2% faster in applications while clocking in about 5% higher 1440p FPS, and the 7600X is 16.9% faster in applications while matching the 5800X3D in 1440p gaming. This is based on TPU's recent benchmarks, with all drivers and BIOSes updated, using just a 3080.

5800X3D is a good deal - if you already have a decent AM4 platform and you just want to play games.

Conversely, it's a bad choice for people who are building an entirely new platform (CPU/Motherboard/RAM).

I personally would opt for a 5900X if I were on AM4. It loses to the 5800X3D by 5% at 1440p, but blows it away in productivity by ~17%.
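A quick aside on reading those percentages: "X% faster" is a ratio, so it doesn't invert symmetrically. A trivial sketch with made-up index scores (not TPU's actual benchmark data):

```python
# "A is X% faster than B" expressed as score ratios. The scores here
# are hypothetical index values, not TPU's benchmark data.

def percent_faster(a, b):
    """How much faster score a is than score b, in percent."""
    return (a / b - 1) * 100

x3d = 100.0        # baseline index
i5_12600k = 111.5  # "11.5% faster overall" as an index

print(f"{percent_faster(i5_12600k, x3d):.1f}% faster")  # 11.5% faster
# The inverse is not -11.5%:
print(f"{percent_faster(x3d, i5_12600k):.1f}%")         # about -10.3%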
 
Should I bother with Intel Optane? I am pretty sure my B660 board supports it, but it seems like I never hear anything about Optane. From what I understand, a 16 GB stick is relatively cheap; you plug it into an M.2 slot, install the drivers, reboot, and done... it just automatically makes everything snappier and faster over time after that?

(i just bought a 13600k, is why I am asking)
Optane cache drives only really helped mechanical OS drives. They're discontinued, and the software you'd rely on is now woefully out of date. 16 GB is enough to act as an OS cache, but it's way too small to act as a read cache for a library drive.

The best-case scenario is that you won't have serious issues and you won't notice any real improvement.
The worst-case scenario is that the outdated software screws up your Windows install and forces a wipe and reinstall instead.
 
Optane cache drives only really helped mechanical OS drives.

I did not realize this. I would not have asked the question if I had understood that part of it. Thanks :toast:

I still am really happy I got a 13600k. I needed a big upgrade and for the price I can't complain.
 
Well, come Zen 5 you will get E-cores, but they will be Zen 5c cores, far more powerful than Gracemont+++ and with SMT enabled. Come Arrow Lake, the i9 will have 48 cores, 8P + 40E!
Nothing is known about Zen 5 (especially as Zen 4 just launched); all that talk about them bringing E-cores is unsubstantiated rumor.
And if they do that, great for them; I'm not going to buy those types of CPUs, ever.
 
@W1zzard will you be reviewing the 13700K? I'm thinking of buying this model.
 
I'm not sure why you are so enthralled by the 5800X3D.
Because just about every tech YouTuber seems to include this processor as a comparison as if it's some kind of god-like processor that puts both AMD and Intel's current crop of processors to shame.
And if they do that, great for them; I'm not going to buy those types of CPUs, ever.
But what if AMD does those kinds of cores the right way, as opposed to the wrong way Intel is doing it? It seems to me that AMD is doing things far more intelligently than Intel, so, at the risk of sounding like an AMD fanboy, I have no doubt that when they do come around with their own efficiency-core design, they'll do it the right way.
The 12600K is 11.5% faster overall, the 12700K which is in its same price category now, is 29.4% faster.
But at what cost? Higher energy consumption, higher heat output. Yeah, sure... it won the battle but it lost the war. One more victory like that and it's game over. A pyrrhic victory.
 
Because just about every tech YouTuber seems to include this processor as a comparison as if it's some kind of god-like processor that puts both AMD and Intel's current crop of processors to shame.

But what if AMD does those kinds of cores the right way, as opposed to the wrong way Intel is doing it? It seems to me that AMD is doing things far more intelligently than Intel, so, at the risk of sounding like an AMD fanboy, I have no doubt that when they do come around with their own efficiency-core design, they'll do it the right way.

But at what cost? Higher energy consumption, higher heat output. Yeah, sure... it won the battle but it lost the war. One more victory like that and it's game over. A pyrrhic victory.

The 13600K draws 74 W in gaming and beats almost everything across the board in max/min FPS, including the X3D in several games (not all, but the majority), for $309 if you opt for the KF version.

It's not high energy consumption at all for someone who only games. /shrug
 
Because just about every tech YouTuber seems to include this processor as a comparison as if it's some kind of god-like processor that puts both AMD and Intel's current crop of processors to shame.

But what if AMD does those kinds of cores the right way, as opposed to the wrong way Intel is doing it? It seems to me that AMD is doing things far more intelligently than Intel, so, at the risk of sounding like an AMD fanboy, I have no doubt that when they do come around with their own efficiency-core design, they'll do it the right way.

But at what cost? Higher energy consumption, higher heat output. Yeah, sure... it won the battle but it lost the war. One more victory like that and it's game over. A pyrrhic victory.
There's not really a good way to make an "efficiency" core without it being dogshit on the software side. I mean, what can you do? The same core, but with a lower max clock/power?
 
There's not really a good way to make an "efficiency" core without it being dogshit on the software side. I mean, what can you do? The same core, but with a lower max clock/power?
I don't know. However, I have no doubt that AMD will pull some kind of rabbit out of their hat.
 
The same core, but with a lower max clock/power?
With less cache on the side to make it smaller, and it starts to look like Zen 4 D(ense).
Well, I'm oversimplifying, of course, but yeah...
 
Has anyone seen any gaming tests using only 8 E-cores, with P-cores completely disabled?

With the increased clock speeds and cache, I am curious what the results would be. Maybe it would even be viable for 60 FPS gaming?
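One low-effort way to try this yourself, short of disabling the P-cores in the BIOS, is launching the game with a CPU affinity mask that only covers the E-cores. The logical-CPU layout below is an assumption for a stock 13600K (6 P-cores with HT enumerated as logical CPUs 0-11, 8 E-cores as 12-19); verify the layout on your own system before relying on it:

```python
# Build a CPU-affinity bitmask covering only the assumed E-core
# logical CPUs (12-19 on a stock 13600K; verify on your machine).

def affinity_mask(cpus):
    """Bitmask with one bit set per logical-CPU index."""
    mask = 0
    for c in cpus:
        mask |= 1 << c
    return mask

e_cores = range(12, 20)
print(hex(affinity_mask(e_cores)))  # 0xff000
```

On Windows, that mask can then be passed to cmd's `start /affinity ff000 game.exe`, or set per-process in Task Manager.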
 
Has anyone seen any gaming tests using only 8 E-cores, with P-cores completely disabled?

With the increased clock speeds and cache, I am curious what the results would be. Maybe it would even be viable for 60 FPS gaming?
TechteamGB posted a video about that just yesterday.
The results vary wildly, with some games being "playable" and others unplayable. That said, the performance uplift from the ADL E-cores to the RPL E-cores is astounding.

At this point, Intel could probably do a 20-core E-core-only CPU in the space the P-cores take, but then again, an E-core is essentially the same performance as a Skylake core, so you'd essentially have a 20-core Skylake-era i7.
 
TechteamGB posted a video about that just yesterday.
The results vary wildly, with some games being "playable" and others unplayable. That said, the performance uplift from the ADL E-cores to the RPL E-cores is astounding.

At this point, Intel could probably do a 20-core E-core-only CPU in the space the P-cores take, but then again, an E-core is essentially the same performance as a Skylake core, so you'd essentially have a 20-core Skylake-era i7.

Wow, gaming performance is really impressive.

But I am actually surprised by the productivity performance. In the 13600K, the E-cores deliver basically 50% of the P-cores' performance at slightly less than 50% of the power. Does that not confirm that it would be better to have 2 extra P-cores instead of 8 E-cores?
6 P-cores consume 118 W, so 8 would consume about 157 W, and the stock 13600K hybrid consumes 149 W in their testing. What is the point?

This really is a gimmick on desktop, where they do not care about efficiency anyway. Applications that can utilize 20+ threads will get slightly more performance at slightly less power, but that really seems irrelevant.
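That extrapolation is just linear scaling of the quoted wattages. Spelled out (the 8-P-core part is hypothetical, and real chips don't scale perfectly linearly with core count):

```python
# Naive linear scaling of P-core power draw, using the wattages quoted
# from the video. An 8-P-core 13600K does not exist; this is a what-if.

p_cores_6_watts = 118.0
per_core = p_cores_6_watts / 6
p_cores_8_watts = per_core * 8  # hypothetical 8P+0E configuration
hybrid_watts = 149.0            # stock 6P+8E in the same test

print(f"estimated 8 P-cores: {p_cores_8_watts:.0f} W")  # ~157 W
print(f"measured 6P+8E:      {hybrid_watts:.0f} W")
```

So under this naive model, 8 P-cores would indeed draw slightly more than the 6P+8E hybrid while offering fewer total threads.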
 
Wow, gaming performance is really impressive.

But I am actually surprised by the productivity performance. In the 13600K, the E-cores deliver basically 50% of the P-cores' performance at slightly less than 50% of the power. Does that not confirm that it would be better to have 2 extra P-cores instead of 8 E-cores?
6 P-cores consume 118 W, so 8 would consume about 157 W, and the stock 13600K hybrid consumes 149 W in their testing. What is the point?

This really is a gimmick on desktop, where they do not care about efficiency anyway. Applications that can utilize 20+ threads will get slightly more performance at slightly less power, but that really seems irrelevant.

Is it?

I don't think I have ever seen a reviewer talk about controlling for the many factors that influence power consumption results. It isn't just a simple matter of swapping out the CPU.

Unfortunately, most of these sites and YouTubers justify unequal platforms by saying something about the 'out of the box' experience. The problem with that logic is that they have just switched from analyzing the power characteristics of a *CPU* to evaluating a *motherboard's default settings*.

Even if you use the same motherboard, do we know what its default power limits and V/F curve look like with different CPUs? My Asus TUF, for example, came out of the box completely power-unlocked.

I mean, without knowing all those details, you really don't know anything about what you just saw or what the reviewer did. This is especially true when testing between different CPUs and different vendors (AMD/Intel).

To give an example of what I'm talking about, study these two charts. Yes, that's a 116 W difference for negative performance; even the MSI MAG B660 Tomahawk is drawing 67 W more for about a 1.2% performance loss in Cinebench MC:

[charts: power draw and Cinebench MC performance across different motherboards]
 