
Intel Core i9-13900K

This review has been updated with new performance numbers for the 13900K. Due to an OS issue, the 13900K ran at lower-than-normal performance in heavily multi-threaded workloads. All 13900K test runs have been rebenched.
 
Phoronix finally got around to posting their review of the 13900K, and perhaps most interesting to me was the discrepancy between their SVT-AV1 numbers and yours. I realize there are a lot of variables here between the two tests, e.g. a different OS, and Phoronix didn't test 4K at preset 10 (the default) as you did. However, the difference is so large that it got me wondering: was your SVT-AV1 built with AVX-512 support?

Note that this has to be explicitly enabled when compiling it, even if you're doing a standard release build.
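For reference, a minimal sketch of what "explicitly enabled" means, assuming a CMake build of the AOMediaCodec/SVT-AV1 tree and that the option is still named ENABLE_AVX512 (worth verifying against your checkout); it stays OFF even in a standard Release build unless you set it:

```python
# Hedged sketch, not from the review: SVT-AV1's AVX-512 kernels sit behind a
# CMake option (ENABLE_AVX512 in the AOMediaCodec/SVT-AV1 tree, assumed here)
# that defaults to OFF, so a plain Release build ships without them.
# Assumes cmake is on PATH and the source is checked out in ./SVT-AV1.
import subprocess

subprocess.run(
    [
        "cmake", "-S", "SVT-AV1", "-B", "build",
        "-DCMAKE_BUILD_TYPE=Release",
        "-DENABLE_AVX512=ON",  # the explicit switch a standard Release build omits
    ],
    check=True,
)
subprocess.run(["cmake", "--build", "build", "--config", "Release"], check=True)
```

Without that flag, the runtime dispatcher tops out at the AVX2 code paths, which could plausibly account for part of the gap between the two reviews.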

Edit: Just found this article which might also explain some or all of the discrepancy.
Yeah, I have seen those controversial articles reporting, once again, scheduler problems with AMD CPUs. To be fair, you can see those in the reviews of the 7950X and 7900X.
I only hope these get fixed, because I'm tired of reviews that are misleading due to Windows limitations or scheduling behavior that drastically hurts one product's performance and makes another look better than it really is.
 
This review has been updated with new performance numbers for the 13900K. Due to an OS issue, the 13900K ran at lower-than-normal performance in heavily multi-threaded workloads. All 13900K test runs have been rebenched.
Now the results make sense: 36K points in Cinebench was just weird.
 
This 13900K's power consumption is still nothing. The "power consumption fun" will start after they introduce the 6 GHz 13900KS (the 7800X3D "killer") in January. :)
 
For gaming, watch the following material carefully.
Almost like those E-cores are only useful to win in Cinebench, and are generally useless otherwise

Also, Nyoooooom
(Imagine how a tweaked 5800X3D like mine would go; I'm a good 15% more efficient than stock, depending on the settings I choose.)
[attached image: 1673743083136.png]


To the author of the deleted comment, whom I shall not call out, because I guess you deleted it after realising what I meant:
E-cores are not efficient. They're extremely, extremely NOT efficient with how Intel uses them.

The green bar is the 12900K's E-cores.
The E-cores are less efficient than the P-cores at multi-threaded workloads, despite being added purely to boost multi-threaded performance.
Yet they're fantastic at single-threaded efficiency.
[attached image: 1673744922381.png]


They aren't there for gaming, they aren't there for single-threaded performance, and they aren't there for efficiency. They're purely there to add MT performance, and they're the reason these CPUs use so much power and run so hot, as the video above proves. The 13900K could be an 8-core/16-thread 125 W gaming monster, but nooooooo, the E-cores had to win at Cinebench.
 
Strange comparison with the 5800X3D. Even stranger is the fact that you chose a game where it wins. There are others in which the 13900K beats it even with power limits, both in frame rate and in efficiency. There is not much difference in consumption between the 7950X and the 13900K in gaming. The i9-13900K must be compared with the 7950X, and here Intel wins (FPS/W) even without limits, and especially with a 90 W limit. The manual BIOS settings (UV/UC) are interesting: with them it loses very little in FPS but destroys everything in FPS per watt. You don't believe it?
The E-cores are set to 2200 MHz. When he makes the change in the BIOS, he also gives the explanation (at 13 min 30 sec). They are not set that low for gaming.

They aren't there for gaming, they aren't there for single-threaded performance, and they aren't there for efficiency. They're purely there to add MT performance, and they're the reason these CPUs use so much power and run so hot, as the video above proves. The 13900K could be an 8-core/16-thread 125 W gaming monster, but nooooooo, the E-cores had to win at Cinebench.
E-cores can be disabled in the BIOS. I don't see the reason to, but... it's possible. Without touching the BIOS, you can reduce them to the minimum (I think 400 MHz) with XTU directly from Windows.
I bet you that E-cores will also appear in AMD processors once they are able to implement them, as was the case with ray tracing on video cards. It is not just my impression that the performance difference between Zen 3 and Zen 4 comes primarily from Zen 4's higher frequency, at a cost of ~80 W more consumption on the flagship 7950X. How much longer can they go on this way?
 

Attachments

  • manual setting.jpg
  • Battlefield uv_uc.jpg
  • UV_UC fps per watt.jpg
Almost like those E-cores are only useful to win in Cinebench, and are generally useless otherwise
Yes, they are there for MT performance. But I mean, the same can be said about anything. Those extra 8 cores (the 2nd CCD) of the 7950X are there to win in Cinebench and are generally useless otherwise.
 

Core i9 13900K DDR5 7200 MHz (+memory scaling) review

The difference between DDR4 3600 and DDR5 7200 is marginal. If you already have DDR4 memory, switching to DDR5 has only the "future-proofing" excuse, with the price of a 32 GB 7000+ kit exceeding that of a decent "Z" motherboard with DDR4 support.
There are applications that react significantly to faster memory, but you have to decide whether you use them and whether they are worth the investment. For gaming, the money saved on memory and the motherboard is enough for a significant video card upgrade.
 
Those extra 8 cores (the 2nd CCD) of the 7950X are there to win in Cinebench and are generally useless otherwise.
Not true, when all of those cores have high single-threaded performance as well, and access to extra cache.
 
Not true, when all of those cores have high single-threaded performance as well, and access to extra cache.
But why do you need high single-threaded performance in the 2nd CCD? You already have the 1st CCD to take care of single-threaded tasks. The 2nd CCD only serves MT performance, which is exactly what the E-cores are there for.
 
But why do you need high single-threaded performance in the 2nd CCD? You already have the 1st CCD to take care of single-threaded tasks. The 2nd CCD only serves MT performance, which is exactly what the E-cores are there for.
Got him lol
 
Not true, when all of those cores have high single-threaded performance as well, and access to extra cache.
:cool:
 
:cool:
That's a gaming-related issue caused by a game having its threads spread across multiple CCXs, and an example of exactly what I'm talking about on the Intel side.

In that case it's because Windows hadn't been updated to work properly with the AM5 CPUs; on the Intel side it happens when they simply run out of P-cores...
 
Let's try it differently.
According to TPU reviews, the difference in 1080p-4K gaming between the 7700X (one CCD) and the 7950X (two CCDs) is zero, even though the 7950X runs at higher frequencies with 1-8 cores loaded. The 7950X is worse in terms of efficiency (FPS/W, FPS/MHz) and price in this segment. I think even an alien bacterium understands the purpose of the second CCD; only Mussels is Batman.

 

Attachments

  • Clipboard01.jpg
Let's try it differently.
According to TPU reviews, the difference in 1080p-4K gaming between the 7700X (one CCD) and the 7950X (two CCDs) is zero, even though the 7950X runs at higher frequencies. The 7950X is worse in terms of efficiency and price in this segment. I think even an alien bacterium understands the purpose of the second CCD; only Mussels is Batman.

The second CCD is for people who need the extra cores, not for gamers. When a lightly threaded program (e.g. a game) spreads across the two CCDs, the advantage of the extra cache gets diminished by the latency between the two CCDs. It's been like this since Zen 2. We need the Windows scheduler to leave the second CCD alone unless more than 8 cores are in use. I don't know why that hasn't been implemented yet.
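Until the scheduler does that, a manual workaround is to pin the game to the first CCD yourself. A rough sketch of the idea using the third-party psutil package; the process name and the "logical CPUs 0-15 = CCD0" mapping are assumptions (that layout matches a 7950X with SMT on, but check your own topology):

```python
# Hedged sketch, not anyone's shipping fix: restrict a game's threads to the
# first CCD so the scheduler can't bounce them across the CCD boundary.
# Assumes logical CPUs 0-15 (8 cores + SMT) are CCD0, as on a 7950X.
# Requires the third-party psutil package; may need admin rights on Windows.
import psutil

GAME_EXE = "game.exe"      # hypothetical process name -- replace with yours
CCD0 = list(range(16))     # assumed CCD0 logical CPUs; verify your topology

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == GAME_EXE:
        proc.cpu_affinity(CCD0)   # pin all of the process's threads to CCD0
        print(f"Pinned PID {proc.pid} to CCD0")
```

Task Manager's "Set affinity" dialog does the same thing by hand, but it doesn't survive a game restart, which is why people script it.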

Sorry if I'm re-posting past arguments (I don't have time to read back). :ohwell:
 
Only games are under discussion. Mussels claims that the second CCD helps and the Intel E-cores don't. Perfectly allocating tasks between two CCDs is impossible, because the game is not the only thing consuming resources. If the 7700X worked at the same frequencies as the 7950X with 1-8 cores loaded, it would certainly have been punchier in games like this one. This is the logic that follows from the reviews.
 
Let's try it differently.
According to TPU reviews, the difference in 1080p-4K gaming between the 7700X (one CCD) and the 7950X (two CCDs) is zero, even though the 7950X runs at higher frequencies with 1-8 cores loaded. The 7950X is worse in terms of efficiency (FPS/W, FPS/MHz) and price in this segment. I think even an alien bacterium understands the purpose of the second CCD; only Mussels is Batman.

The 7700X has the higher performance due to a known bug where game processes are not assigned to a single CCX like they are on AM4 and Intel.
This is even covered in those same reviews.
[attached image: 1674445618480.png]


You're somehow reading information that backs up what I'm saying and going the opposite direction with it. On these AMD CPUs, tasks are being split that shouldn't be, and performance suffers for it, even with cores that have the same ST performance, let alone the much slower E-cores.

That's fixable both in Windows and by AMD's chipset drivers, but a 6-core CPU that has no extra P-cores has no alternative.
 
Only games are under discussion. Mussels claims that the second CCD helps and the Intel E-cores don't. Perfectly allocating tasks between two CCDs is impossible, because the game is not the only thing consuming resources. If the 7700X worked at the same frequencies as the 7950X with 1-8 cores loaded, it would certainly have been punchier in games like this one. This is the logic that follows from the reviews.
It's not impossible. Both Intel and AMD use a core hierarchy where lightly threaded foreground work (e.g. a game) is always sent to the same "preferred" cores. On my 11700, they were cores 5 and 6, and on my 7700X, they're cores 8 and 2. Even Windows 10's scheduler works with this. We only need cores with priority #1 to #8 to be on the same CCD on AMD, and #1 to #X to be P-cores on Intel (although I have no experience with heterogeneous Intel architectures yet, so I'm not 100% sure).
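For what it's worth, that ranking is visible in software. On Linux it's exposed per core through the ACPI CPPC interface; here's a rough sketch that lists it (the sysfs path assumes a kernel that exposes acpi_cppc; Windows consumes the same ranking internally through its scheduler rather than a user-visible file):

```python
# Hedged sketch: print each logical CPU's "highest_perf" value from ACPI CPPC.
# On Ryzen, the cores with the largest values are the "preferred" cores the
# scheduler targets first. Linux-only; assumes /sys exposes acpi_cppc.
from pathlib import Path

cpus = sorted(
    Path("/sys/devices/system/cpu").glob("cpu[0-9]*"),
    key=lambda p: int(p.name[3:]),  # numeric sort: cpu0, cpu1, ..., cpu15
)
for cpu in cpus:
    perf = cpu / "acpi_cppc" / "highest_perf"
    if perf.exists():
        print(f"{cpu.name}: highest_perf = {perf.read_text().strip()}")
```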
 
The 7700X has the higher performance due to a known bug where game processes are not assigned to a single CCX like they are on AM4 and Intel.
This is even covered in those same reviews.
View attachment 280507

You're somehow reading information that backs up what I'm saying and going the opposite direction with it. On these AMD CPUs, tasks are being split that shouldn't be, and performance suffers for it, even with cores that have the same ST performance, let alone the much slower E-cores.

That's fixable both in Windows and by AMD's chipset drivers, but a 6-core CPU that has no extra P-cores has no alternative.
But why are you comparing a 6 P-core CPU to a dual-CCD one?
 
The 7700X has the higher performance due to a known bug where game processes are not assigned to a single CCX like they are on AM4 and Intel.
This is even covered in those same reviews.
According to Tom's new 2023 review, strictly on Zen 3 and the old AM4, the difference between the 5700X and 5950X is marginal in 1080p and zero in 1440p-4K, and the 5950X is still worse in terms of efficiency, price, FPS/W, and FPS/MHz. But you can clearly see the role of the additional CCD in multithreading.
What are we talking about? That drivers and other problems get invoked whenever something doesn't work well at AMD? It's been years since the launch of Zen 3 and the problem persists. And it will always persist: the second CCD is useless for gaming.
Edit: for AM4, the comparison between the 5800X (one CCD but higher frequency) and the 5950X is better. The difference between them is, I think, 1% in 1080p, not because the 5950X has two CCDs, but because it works at higher frequencies.
 

Attachments

  • 7700X gaming.jpg
  • 7700X multi.jpg
According to Tom's new 2023 review, strictly on Zen 3 and the old AM4, the difference between the 5700X and 5950X is marginal in 1080p and zero in 1440p-4K, and the 5950X is still worse in terms of efficiency, price, FPS/W, and FPS/MHz. But you can clearly see the role of the additional CCD in multithreading.
What are we talking about? That drivers and other problems get invoked whenever something doesn't work well at AMD? It's been years since the launch of Zen 3 and the problem persists. And it will always persist: the second CCD is useless for gaming.
Edit: for AM4, the comparison between the 5800X (one CCD but higher frequency) and the 5950X is better. The difference between them is, I think, 1% in 1080p, not because the 5950X has two CCDs, but because it works at higher frequencies.
The only problem that persists (imo) is people buying dual-CCD CPUs for gaming. Even if games got allocated to a single CCD beautifully, these CPUs would still be a waste of money for gaming.
 
But why are you comparing a 6 P-core CPU to a dual-CCD one?

Because that 6P + E-core CPU is in the exact same boat: you've got two sets of cores, and when the preferred cores are ignored, performance takes a hit.

People will happily show me that it happens on AMD, yet massively deny it could ever, ever, ever happen on Intel.
 
The second CCD is for people who need the extra cores, not for gamers. When a lightly threaded program (e.g. a game) spreads across the two CCDs, the advantage of the extra cache gets diminished by the latency between the two CCDs. It's been like this since Zen 2. We need the Windows scheduler to leave the second CCD alone unless more than 8 cores are in use. I don't know why that hasn't been implemented yet.

Sorry if I'm re-posting past arguments (I don't have time to read back). :ohwell:
I forgot to mention that having 2x 32 MB of L3 isn't the same as having 1x 64 MB.
 