
Intel Core i7-13700K

Wow, the frametime consistency of the 13700K is certainly amazing: 95th and 99th percentile beating the 5800X3D by 10% while the avg FPS is not much faster.


[Frametime graphs, 1920x1080, 13700K vs 5800X3D: Battlefield V, Elden Ring, Red Dead Redemption 2, Forza Horizon 5, Watch Dogs: Legion]
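For anyone curious how the percentile figures behind graphs like these are derived, a minimal sketch (the frametime values here are invented for illustration, not taken from the review):

```python
# Minimal sketch: average FPS plus nearest-rank percentile frametimes
# from a per-frame log. These sample values are made up.
frametimes_ms = [6.9, 7.1, 7.0, 7.3, 9.8, 7.2, 7.0, 12.4, 7.1, 7.0]

avg_fps = 1000 / (sum(frametimes_ms) / len(frametimes_ms))

def percentile(samples, pct):
    """Nearest-rank percentile; higher values mean worse frametime spikes."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

p95 = percentile(frametimes_ms, 95)
p99 = percentile(frametimes_ms, 99)
print(f"avg {avg_fps:.0f} FPS, 95th pct {p95} ms, 99th pct {p99} ms")
```

A lower 99th-percentile frametime means fewer stutters even when the average FPS is similar, which is exactly what the graphs above are showing.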
 
I really wish the MMOs and older games that are still super popular and have large player bases would be tested in the games section too, to see the effect of L3 cache. Like Final Fantasy, WoW, etc. They are not niche use cases for gamers. I see plenty of reports from people upgrading to a 5800X3D and seeing large fps jumps in scenarios where lots of data from other players in the same zone is being processed. At the moment there's no info on how the new AMD and Intel processors fare against the 5800X3D in those types of games.
They're almost impossible to benchmark because the thing you want to test for has massive minute-to-minute variance that is entirely influenced by external factors. There's no way to generate meaningful results unless you happened to have 20 different CPUs with otherwise identical hardware, all logged into the MMO in question simultaneously and viewing the same character and scene at the same time.

Even if that was practical (it isn't) it's not possible because of the client/server nature of those MMOs in the first place.
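To make that variance argument concrete, here's a small sketch comparing run-to-run spread (coefficient of variation) for a controlled benchmark scene versus a live MMO hub; all the FPS numbers are invented for illustration:

```python
# Sketch: run-to-run spread of repeated benchmark passes, expressed as
# coefficient of variation (stdev as % of mean). Figures are made up.
from statistics import mean, stdev

def cv_percent(runs):
    """Run-to-run spread as a percentage of the mean FPS."""
    return 100 * stdev(runs) / mean(runs)

builtin_bench = [142, 141, 143, 142, 140]  # controlled, repeatable scene
mmo_city_hub = [118, 97, 134, 88, 126]     # crowd changes every pass

print(f"built-in benchmark: {cv_percent(builtin_bench):.1f}% spread")
print(f"MMO city hub:       {cv_percent(mmo_city_hub):.1f}% spread")
```

With spread on the scale of the second line, a few percent of genuine CPU difference disappears into the noise, which is the point being made above.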
 
Even if that was practical (it isn't) it's not possible because of the client/server nature of those MMOs in the first place.
It's doable, depending on whether the game can provide test conditions that can be controlled.
 
OK, but there is not a lot of difference in performance between the two. So where is the improvement, if you don't count the frequency bump as an RL improvement over AL? Is that the only improvement?
It's different silicon altogether with twice the number of E-cores, more L2 cache per core cluster and more L3 cache.

Most of the performance increase comes from the frequency bump, but like the 5800X3D, it doesn't often manifest as a huge improvement, so the actual night-and-day uplifts in IPC are lost in the average of many other tests that don't see much benefit.
 
It's different silicon altogether with twice the number of E-cores, more L2 cache per core cluster and more L3 cache.

Most of the performance increase comes from the frequency bump, but like the 5800X3D, it doesn't often manifest as a huge improvement, so the actual night-and-day uplifts in IPC are lost in the average of many other tests that don't see much benefit.
Excuse me, but the 12900K and 13700K have exactly the same core count. The cache is slightly different (higher on the 13700K). Sure, it's new silicon since it has more cache and slightly higher frequency, but in general the difference is not that obvious. The 13700K has a higher allowed power consumption. To be fair, the overall performance is not that much different between the two; you can attribute the difference to higher clock speed and more cache.
I've been wondering what the difference in instruction sets is between the two, for instance. The 13700K doesn't have AVX-512. Is there anything the 13700K has in its instruction set that the 12900K does not?
 
With 6xx motherboards and DDR4 memory support being mentioned as a pro for budget friendliness, I wonder what performance hit you take going that route? It'd be nice to see some numbers with that setup to compare.
Techyescity did a test recently with the new i5 and a B660 mobo:

There is a difference in 1080p gaming. How relevant that is depends on the needs of the individual user.
 
The 13700K without E-cores would still perform as well as the 13900K in gaming, and at 95% of the 13900K in most applications (anything but rendering and encryption).
 
I like having the option to upgrade my cpu to the 13700k, but right now my 12700k is awesome. I think I will upgrade my rtx2070 before I upgrade my cpu.
 
I like having the option to upgrade my cpu to the 13700k, but right now my 12700k is awesome. I think I will upgrade my rtx2070 before I upgrade my cpu.
Same here, only I have a 2070 Super. I don't think I'll be upgrading anything soon; I got a lot more performance from my GPU with this 12700K. :rockout:
My previous i7-6700K was a bottleneck.
 
Techyescity did a test recently with the new i5 and a B660 mobo:

There is a difference in 1080p gaming. How relevant that is depends on the needs of the individual user.

He's using DDR4-3600 C18, which is frankly crap DDR4, and comparing it to DDR5-6800 C34 which is very high end stuff. He's also only testing a few games.

Most of the tests I've seen with high end DDR4 vs high end DDR5, it's pretty close to a tie in games.

I would never suggest someone go out and buy high-end DDR4 right now, though. He is just showing what happens when you build a really cheap Raptor Lake rig and how it compares to the expensive build.

It would probably be more relevant if he compared his low end Raptor Lake to a Zen 3 5800X or 5900X, since that is the only thing AMD has in the price range of his DDR4 13600K build and is still very popular right now.
 
Same here, only I have a 2070 Super. I don't think I'll be upgrading anything soon; I got a lot more performance from my GPU with this 12700K. :rockout:
My previous i7-6700K was a bottleneck.
Yeah, I'm in no rush; the 12700K gave me a huge boost over my Ryzen 5 2600. It was like night and day.
 
Why is 5.4 GHz... a thumbs down?
 
  • 5.4 GHz boost on only two cores

For an enthusiast this doesn't matter much.

If you can get a 5.4 GHz 2-core boost from the factory, 99% of the time you can get 5.4 all-core.

It all depends on how much heat your particular sample will generate and how much cooling you have.
 
Excuse me, but the 12900K and 13700K have exactly the same core count. The cache is slightly different (higher on the 13700K). Sure, it's new silicon since it has more cache and slightly higher frequency, but in general the difference is not that obvious. The 13700K has a higher allowed power consumption. To be fair, the overall performance is not that much different between the two; you can attribute the difference to higher clock speed and more cache.
I've been wondering what the difference in instruction sets is between the two, for instance. The 13700K doesn't have AVX-512. Is there anything the 13700K has in its instruction set that the 12900K does not?
There is no instruction set difference. Read the primer article on Raptor Lake.

This is reworked silicon with more cache and more E-cores on a tweaked Intel 7 process that allows higher peak clocks (at the cost of higher power consumption).

Actual IPC improvements are very small; did you somehow skip the 13900K review?
 
He's using DDR4-3600 C18, which is frankly crap DDR4, and comparing it to DDR5-6800 C34 which is very high end stuff. He's also only testing a few games.
Most of the tests I've seen with high end DDR4 vs high end DDR5, it's pretty close to a tie in games.
I would never suggest someone go out and buy high-end DDR4 right now, though. He is just showing what happens when you build a really cheap Raptor Lake rig and how it compares to the expensive build.
It would probably be more relevant if he compared his low end Raptor Lake to a Zen 3 5800X or 5900X, since that is the only thing AMD has in the price range of his DDR4 13600K build and is still very popular right now.
I think he's just showing what the result of upgrading only the CPU on an old-gen platform could be. No one will buy a new B660 for 13th gen Intel, I think.
But there are also a lot of people who have hardware that is 6 or more years old and can't afford to buy the new stuff anyway. For them, a used 12th gen motherboard and CPU, with the possibility to upgrade to a 13th gen CPU later, is still a great option and will be a huge jump in performance. That system could last someone another 6 years, if not more.
 
So should I buy this to upgrade from my aging 9900KS or should I hold out another year for a more efficient and less power hungry 14900K/14700K?
 
So should I buy this to upgrade from my aging 9900KS or should I hold out another year for a more efficient and less power hungry 14900K/14700K?

I would wait for next gen in your situation; a new socket is coming too.
 
Another hot, power-hungry monstrosity. Same as AMD's 7000 series.

Pretty much every use case is better off on the prior gen.

If you're buying the latest kit and gaming at 1080p, OK, go for it I guess.
It's really only hot and power-hungry if you want it to be, depending on what you are using it for. You could probably set a power limit and get slightly better or the same performance as last gen at much lower power/temps.
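The power-limit tradeoff described above is easy to sketch as arithmetic; the scores and wattages here are hypothetical placeholders, not measured values:

```python
# Sketch: what capping the power limit costs in performance versus what
# it gains in efficiency. All numbers below are hypothetical.
stock = {"score": 30000, "watts": 253}   # unlimited, stock behaviour
capped = {"score": 28500, "watts": 150}  # manually set power limit

perf_lost = 1 - capped["score"] / stock["score"]
ppw_gain = (capped["score"] / capped["watts"]) \
    / (stock["score"] / stock["watts"]) - 1

print(f"{perf_lost:.0%} performance lost, {ppw_gain:.0%} better perf/W")
```

The general pattern with these chips is that the last few hundred MHz cost a disproportionate amount of power, so a modest cap trades single-digit performance for a large efficiency gain.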
 
So should I buy this to upgrade from my aging 9900KS or should I hold out another year for a more efficient and less power hungry 14900K/14700K?
Don't assume the 14th gen will be less power hungry. It may be more efficient due to better performance but...

As many people have said, 12th/13th gen is pretty efficient in single and lightly threaded workloads; it's when you use synthetic benchmarks or software that fully loads every core that you see the extreme power draw. Are you planning on rendering on the CPU 24/7?

I'm interested to see if Intel loses its latency advantage that games love when it moves to a tile-based architecture.

9900KS is fine for your use case, you don't have anything higher than 1440/144 Hz, so it's not a particularly CPU demanding resolution or refresh rate.

If you are going to upgrade, look at the options that are currently on the table; there'll always be something better in the future. 13th Gen Intel is, in my opinion (backed by our performance reviews), the best current option, but Zen 4 X3D is probably less than six months away, and at that point you'll be close to the release of Meteor Lake too.
 
Even my i7 12700K is still a king with 1440p gaming. :cool:
and risk having your 1% low FPS dip below your monitor's refresh rate in a game you'll never play and would never notice without benchmarking (assuming you're not GPU-bound)?

 
So should I buy this to upgrade from my aging 9900KS or should I hold out another year for a more efficient and less power hungry 14900K/14700K?
Slightly off topic, but if you do end up upgrading to another platform, I'll buy your 9900KS, lol.
 
Don't assume the 14th gen will be less power hungry. It may be more efficient due to better performance but...
As many people have said, 12th/13th gen is pretty efficient in single and lightly threaded workloads; it's when you use synthetic benchmarks or software that fully loads every core that you see the extreme power draw. Are you planning on rendering on the CPU 24/7?
...
Even if the CPU rendering lasts several hours and not two days, the power consumption and the high temps that come with it are a problem for everyone who cares about the energy bill and the room temperature. We live in a time where summers last longer and get hotter: depending on what one does with their PC, that is something to consider before buying the CPU. The Ryzen chips with Eco mode seem a better choice for people who will keep their CPU at its limit most of the time. That, or they move to Alaska. :D
 
So should I buy this to upgrade from my aging 9900KS or should I hold out another year for a more efficient and less power hungry 14900K/14700K?

Well, based on TPU's review with a 3080, and assuming your profile is right and you still have a 3080, you're losing 10% FPS in games at 1440p.

In "productivity" (mostly media encoding and rendering/simulation), it's more like a +67% bump (100/60).

On the IO bus, the Z690/Z790 has 4X the bandwidth of the Z390/Z490 to the chipset.

In the real world, unless you do a lot of encoding / image editing / simulation / VM stuff or have a big need for more IO bandwidth, I doubt you'd notice it.

I think the benefits would be much more apparent for someone on Zen 2 or earlier, or Intel 8th Gen and earlier, who would see a +20% fps bump in games with a 3080 + 13700K.
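For reference, the uplift math used in this post is just a ratio of relative scores; the sketch below uses the 100/60 figure quoted above:

```python
# Sketch: turning two relative performance scores into an uplift percentage.
old_score = 60   # older CPU, as a % of the newer one (per the post above)
new_score = 100

uplift = new_score / old_score - 1
print(f"+{uplift:.0%} uplift going from old to new")
```

Which is why a chip scoring 60% of another corresponds to roughly a +67% jump for the upgrader, not +40%: the gap is measured against the smaller number.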
 