
Intel Core i9-13900K



Can anyone point me towards undervolting data? He also has interesting OC data.

Only thing I could find.


Any hope for this processor? I was looking for something that would be efficient at long AV1 and MP4/5 video renders.

Thank you.
 
When you say that the Intel 13 series are more affordable than Ryzen 7000 because you can buy a cheaper motherboard with DDR4, wouldn't it be fair to also review them on one?
 
Yeah, I think I'm just going to stick with what I have for a bit longer. Those power figures are kind of eye-watering. I understand that running the upper-tier stuff is going to use more power as a general rule, and I accept that I will need to make the bill payment that comes as a result of it. So I'm willing to accept a bit of pain to have the rig I do have. That said, I'm also not a complete masochist. That's a lot of juice. Then slapping a 4090 on top of it? You're going to have your neighbors complaining about their lights dimming for a few seconds whenever you turn on your computer. Holy smokes.
 
Both camps need to be tested overclocked, not just one side.
 
Just when I thought CPUs couldn't get any hotter after the 7950X, now we have the 13900K! It's pushing 101°C in application workloads and 90°C in gaming. I love performance, but this is performance at all costs. I'd wait for a lower-power variant, like a 65 W 13700 (without the 'K'). I also like a cool and quiet PC.
 
Something's not adding up after comparing a bunch of 13th-gen reviews and reading this


One GN example of frametimes
View attachment 266429


TPU's results (ignoring the messy graph)
View attachment 266428


Same game, same resolution - but those 0.1% lows massively change how the results should be interpreted, and the two sets look wildly different
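(For context on why these numbers can diverge: below is a minimal sketch of two common ways "1% / 0.1% lows" get computed from a frametime log. Outlets don't all use the same convention, and that alone can shift the figures. The method names and numbers are purely my own illustration, not how GN or TPU actually calculate theirs.)

[CODE=python]
# Minimal sketch: two common ways "1% / 0.1% lows" are derived from a
# frametime log (milliseconds). The choice of method alone can shift results.
import numpy as np

def lows_percentile(frametimes_ms, pct):
    # Method A: take the (100 - pct)th percentile frametime and convert to FPS.
    worst = np.percentile(frametimes_ms, 100 - pct)  # pct=1 -> 99th percentile frametime
    return 1000.0 / worst

def lows_average_of_worst(frametimes_ms, pct):
    # Method B: average the slowest pct% of frames, then convert to FPS.
    ft = np.sort(np.asarray(frametimes_ms))[::-1]     # slowest first
    n = max(1, int(len(ft) * pct / 100.0))
    return 1000.0 / ft[:n].mean()

# Example: a mostly smooth run with a few hitches
ft = [8.3] * 990 + [25.0] * 10            # 990 smooth frames, 10 hitchy ones
print(lows_percentile(ft, 1))              # "1% low" via the percentile method
print(lows_average_of_worst(ft, 0.1))      # "0.1% low" via average-of-worst
[/CODE]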


Versus the 7700X it seems the same; I'm trying to wrap my head around what hardware difference could alter these results so much
View attachment 266430

Vs: (I cropped out some CPUs between them, didn't alter the order or anything) where the 13900K averages 11.9% higher (vs 27.3%), and the 99th percentile/1% lows swap places completely
View attachment 266431
Any chance he's using the integrated benchmark? The benchmark uses some kind of fast flyby on a tiny map, which isn't how actual gameplay works. I'm using an in-game scene that I picked, that I feel is representative of the typical gameplay experience
 
I find it hard to understand anyone dropping the money for a latest-gen chip and then gaming at 1080p. Maybe a tiny subset of gamers?

I struggle to see the use case for either of these power-hungry, hot, latest-gen monstrosities. Last-gen chips will do the same for less.

If you're gaming at a normal modern resolution like 4K, any modern CPU will do.
 
Any chance he's using the integrated benchmark? The benchmark uses some kind of fast flyby on a tiny map, which isn't how actual gameplay works. I'm using an in-game scene that I picked, that I feel is representative of the typical gameplay experience
Not to disparage Gamers Nexus, but in the past I have noticed that some of the values in GN reviews didn't add up and showed little to no statistical variability. I have noticed the same thing mentioned above, that some of the 0.1% lows on GN didn't make sense. If a processor is supposed to be exactly 1.0% faster, on Gamers Nexus it will most likely be exactly 1.0% faster. Some of the results on Gamers Nexus are a little too statistically consistent to be accurate, IMO. But that's why I'm here and not at Gamers Nexus: I have far more faith in W1zzard, truthfully speaking.
 
Ryzen is nice for MT performance, but I'll say it again as I've said in the past. ST is always relevant, MT is sometimes relevant.
According to this review and every other one, both CPUs are indistinguishable from each other in ST or MT. I don't know what all the fuss is about. Anyone doing a blind test wouldn't know which is which.
 
My guess would be 3090 Ti vs 3080, and probably a different test scene. I know the built-in benchmark for FC6 can stutter like a mutter.
Even so, something's really wrong for the frametime results to be so different

I'm not asking why one setup has more FPS than the other, but why one shows the X3D with the fantastic frametimes it's known for, and the other doesn't

Any chance he's using the integrated benchmark? The benchmark uses some kind of fast flyby on a tiny map, which isn't how actual gameplay works. I'm using an in-game scene that I picked, that I feel is representative of the typical gameplay experience
It's entirely possible; I couldn't even find the hardware listed for their AMD setup.
It's just that seeing the discrepancy is odd - I've just downloaded CP2077 so I can test on my own system for comparison.


I've just seen differing results all over the place, mostly boiling (hah) down to:

1. These CPUs run at 350 W for 120 seconds, then drop to 250 W. Performance drops when this happens, so they really shine in short tests where they get cooldown time (see the rough sketch after this list).
2. The insane temps and power draw have resulted in erratic performance for some reviewers, where the chips boost high, throttle down, and repeat, causing the stutters
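
(Rough sketch of how that boost window works, as I understand Intel's PL1/PL2/Tau budgeting - every number below is an assumption for illustration, not necessarily what any given board ships with:)

[CODE=python]
# Rough sketch of Intel's PL1/PL2/Tau turbo power budgeting as I understand it.
# The CPU may draw up to PL2 until an exponentially weighted moving average of
# package power reaches PL1, after which it is clamped to PL1. Short benchmarks
# finish inside the boost window; long renders settle at PL1, so run length
# changes the result. All numbers are assumed.
PL1, PL2, TAU = 253.0, 350.0, 56.0   # watts, watts, seconds (assumed)

def simulate(seconds, dt=1.0):
    avg = 0.0                 # EWMA of package power; assume the CPU was idle
    trace = []
    for _ in range(int(seconds / dt)):
        power = PL2 if avg < PL1 else PL1   # boost while budget remains
        avg += (power - avg) * (dt / TAU)   # update the moving average
        trace.append(power)
    return trace

trace = simulate(300)
print("seconds spent at PL2:", trace.count(PL2))   # length of the boost window
print("long-run power:", trace[-1])                # settles at PL1
[/CODE]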
 
Not to disparage Gamers Nexus, but in the past I have noticed that some of the values in GN reviews didn't add up and showed little to no statistical variability. I have noticed the same thing mentioned above, that some of the 0.1% lows on GN didn't make sense. If a processor is supposed to be exactly 1.0% faster, on Gamers Nexus it will most likely be exactly 1.0% faster. Some of the results on Gamers Nexus are a little too statistically consistent to be accurate, IMO. But that's why I'm here and not at Gamers Nexus: I have far more faith in W1zzard, truthfully speaking.
Same here. Also, GN tends to emphasize 0.1% lows a bit too much, when most of the time, it's just a small hitch while the game loads an asset and you barely even notice it - if at all.
 
We need to stop talking about top-tier CPUs (13900, 7950) as gaming CPUs: they offer nothing extra there, and can even be slower than same-gen, lower-tier CPUs (13600, 7700), while running hotter, costing more, and drawing more power.

Those highly threaded CPUs (32 threads) are meant for application use; gamers should avoid them and invest the difference in the GPU.
 
The KS is coming next year with some more fuel for the haters :D, so not quite a dead platform. The perk of buying that $600 CPU today instead of two $300 CPUs two years apart is that you get to enjoy the performance (not just in gaming) of that $600 CPU for the next two years - and it also happens to be more than 10% faster in gaming than the more expensive competition ;).

Ryzen is nice for MT performance, but I'll say it again as I've said in the past. ST is always relevant, MT is sometimes relevant.
LOL. It's not a dead platform because there is a... KS... coming out next year?

Now it's clear you suffer from some bias, lmao. And you repeat the argument about a $600 CPU giving you the performance today, but that doesn't match the results we see in reality, or the discussion in which you yourself admitted you can't even max it out properly today - in gaming. Similarly... ST is always relevant, yes, so we're applying that argument to a 32-thread CPU :D

Sorry man, but this is just strange.
 
Same here. Also, GN tends to emphasize 0.1% lows a bit too much, when most of the time, it's just a small hitch while the game loads an asset and you barely even notice it - if at all.
Personally, those hitches are a big deal.

If I'm in VR, that's nausea - and in any other type of game, it's exactly what I build gaming PCs to avoid
 
LOL. It's not a dead platform because there is a... KS... coming out next year?

Now it's clear you suffer from some bias, lmao. And you repeat the argument about a $600 CPU giving you the performance today, but that doesn't match the results we see in reality, or the discussion in which you yourself admitted you can't even max it out properly today.

Sorry man, but this is just strange.
Honestly, that dead-platform argument is stupid no matter which way you look at it. Going by current and past prices, AMD will CHARGE you the price of a motherboard as part of their CPU prices. Think about it this way: I had an R5 1600 + a B350. I could upgrade to a 5800X3D for 450€ and keep my motherboard, or buy a 12700F + B660 Bazooka for 450€ combined. Personally, I think the second option is better; yes, I might lose up to 10% gaming performance in some extreme scenarios, but I get a brand-new motherboard with full warranty and all the latest features - and a much faster CPU in everything besides gaming.

And you can see the same pattern this gen. The 13600K annihilates both the 7600X and the 7700X, and it's priced in between. Going by current numbers, it would take Zen 6 or Zen 7 (which is like what, 4 to 6 years?) for an R5 CPU to match the performance of the 13600K.
 
Personally, those hitches are a big deal.

If I'm in VR, that's nausea - and in any other type of game, it's exactly what I build gaming PCs to avoid
VR is a fair point.
 
According to this review and every other one, both CPUs are indistinguishable from each other in ST or MT. I don't know what all the fuss is about. Anyone doing a blind test wouldn't know which is which.
But you do see the cost of the platform.
If both perform the same and one is cheaper - by some hundreds of dollars if you go DDR4 and Z690 - then the choice is easier.
 
I definitely saw some reviews that explicitly mentioned 253 W PL2 (PL1=PL2=253 W) and ~38,000 Cinebench R23 scores, but with Windows 22H2; I even saw a 35,500 result with 22H1.
The same publications had results above 40,000 for unlimited power in the same charts, so no confusion there!
I also watched der8auer's video review, which paints an extremely positive picture of 13900K power efficiency (with his optimizations it matches the 7950X in eco mode).
I really don't remember results being so vastly different from publication to publication in recent years. Is it the Windows edition, motherboard peculiarities, memory clocks/timings/gear, or the cooler used? I really don't know what to think at this point, so I'll reserve my judgement for the time being.
 
@ModEl4
A few possible reasons:
  1. Not all CPUs are equal and some may even run at significantly higher or lower voltages than the average.
  2. As I mentioned earlier, when the CPU runs power-limited, it is important that the DC Loadline be correctly configured, or results may end up higher or lower than they should be for the same reported Package Power - but almost nobody checks this (a rough numeric sketch follows below this list).
  3. The AC Loadline used also influences the results. This is a motherboard setting that regulates load voltage (with Adaptive voltages) and helps keep the CPU on its built-in voltage-frequency curve. MSI notably uses a very high AC Loadline by default; other manufacturers (e.g. ASUS) tend to use a lower one.
  4. Incompetence/sloppiness (wrong BIOS settings or testing methodologies).
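
On point 2, here is a simplified numeric sketch of how a mismatched DC Loadline can skew reported package power - every value below is made up purely for illustration:

[CODE=python]
# Simplified, made-up numbers illustrating point 2: the CPU reports package
# power as V_reported * I, where V_reported is derived using the DC Loadline
# value configured in the BIOS. If that value doesn't match the board's real
# impedance, reported power drifts away from real power even though nothing
# physical changed - so "253 W" on one board isn't "253 W" on another.
I_LOAD    = 250.0    # amps under an all-core load (assumed)
V_VRM_OUT = 1.25     # volts at the VRM sense point (assumed)
REAL_LL   = 1.1e-3   # ohms, actual impedance from VRM to die (assumed)
CONFIG_LL = 0.6e-3   # ohms, DC Loadline set in the BIOS (mismatched on purpose)

v_die      = V_VRM_OUT - I_LOAD * REAL_LL     # what the silicon really gets
v_reported = V_VRM_OUT - I_LOAD * CONFIG_LL   # what the CPU believes it gets

print(f"real die power:         {v_die * I_LOAD:6.1f} W")
print(f"reported package power: {v_reported * I_LOAD:6.1f} W")
[/CODE]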

EDIT: on a related note, HardwareUnboxed had testing issues:

[attached screenshot]
 
I wonder if there is any possibility, a year from now when Meteor Lake launches (or even earlier), that Intel upgrades all the CPUs below the 13600KF to Raptor Lake dies (the supposedly upcoming 65 W i5s are Alder Lake based).
I can't remember - has Intel ever done this in the last decade on the same socket?
This would allow people who bought 12th-gen i3s to upgrade to 6P+8E Raptor Lake CPUs with a good gaming performance increase and double the MT performance (while also staying at low PL2 values relative to the 65 W i7/i9 Raptor Lake models).

@ModEl4
A few possible reasons:
  1. Not all CPUs are equal and some may even run at significantly higher or lower voltages than the average.
  2. As I mentioned earlier, when the CPU runs power-limited, it is important that the DC Loadline be correctly configured, or results may end up higher or lower than they should be for the same reported Package Power - but almost nobody checks this.
  3. The AC Loadline used also influences the results. This is a motherboard setting that regulates load voltage (with Adaptive voltages) and helps keep the CPU on its built-in voltage-frequency curve. MSI notably uses a very high AC Loadline by default; other manufacturers (e.g. ASUS) tend to use a lower one.
  4. Incompetence/sloppiness (wrong BIOS settings or testing methodologies).

EDIT: on a related note, HardwareUnboxed had testing issues:

View attachment 266469
Regarding HUB, I watched 1-2 of their reviews a couple of years ago, but they were obviously too one-sided (AMD-favouring) and it showed that he badly wanted to promote the competing brand, so I never watched again! (I never posted anything about it here; it was just my impression, and most of their results were valid anyway - it's just that some were not, and the message was pushed too hard based on those.)
 
It's probably getting time to upgrade my 8700K. The new AMD CPUs are very good, and Intel still holds a small advantage in gaming performance. But the elephant in the room is the potential 7800X3D. For strictly gaming (which is me), I think I'll wait until it comes out (I thought it was supposed to be November) before I make a choice on an upgrade.

Yeah, I think I'm just going to stick with what I have for a bit longer. Those power figures are kind of eye-watering. I understand that running the upper-tier stuff is going to use more power as a general rule, and I accept that I will need to make the bill payment that comes as a result of it. So I'm willing to accept a bit of pain to have the rig I do have. That said, I'm also not a complete masochist. That's a lot of juice. Then slapping a 4090 on top of it? You're going to have your neighbors complaining about their lights dimming for a few seconds whenever you turn on your computer. Holy smokes.
Do you run your rig at 100%, 24/7? Unless you're a power user doing a lot of content creation, where the system runs at full load all the time, any difference on your electric bill will be negligible compared to whatever you currently run.

Gaming is not even close to a full load on the system, and how much do you game - 12-15 hours a day? Most gamers (especially adults who can afford this equipment) won't be spending that much time gaming. So your complaints about power bills and lights dimming are silly.

If you can afford the 4090 or the cost of a platform upgrade, you can afford a few extra kWh on your electric bill.
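
To put a rough number on it (every figure below is an assumption - plug in your own hours, wattage delta and rate):

[CODE=python]
# Back-of-envelope check on the power-bill argument. Every number below is an
# assumption; substitute your own gaming hours, wattage delta and electricity rate.
EXTRA_WATTS   = 150    # extra draw vs. an older rig while gaming (assumed)
HOURS_PER_DAY = 3      # gaming time per day (assumed)
RATE_PER_KWH  = 0.30   # $/kWh (assumed)

kwh_per_month = EXTRA_WATTS / 1000 * HOURS_PER_DAY * 30
print(f"{kwh_per_month:.1f} kWh/month, roughly ${kwh_per_month * RATE_PER_KWH:.2f} extra")
# -> 13.5 kWh/month, roughly $4.05 extra
[/CODE]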
 