
14900k - Tuned for efficiency - Gaming power draw

I think you didn't get the idea.
~2150 pts ST and 25000 pts MT were obtained at a peak power draw of 76.2W.
I don't know exactly how much it draws in the single-core tests, but HWiNFO reported that peak after running both the single and multi tests in Cinebench and CPU-Z. At stock, the 14700K(F) gets over 35000 pts at 253W, and over 36000 fully unlocked.
So I can make it draw as much as an X3D and still get much better performance than this 7800X3D, or I can let it run free and get double the 7800X3D's score. I repeat: at a lower price.

If someone proves to me that the 7800X3D helps the 3070 Ti achieve better performance than the 14700K does, we have a topic for discussion. Until then, the comparisons from the reviews regarding their potential beyond gaming still stand.
And one more thing: if the 3070 Ti doesn't gain anything by replacing the i5-13500 with the i7-14700KF (something I noticed myself, I don't need help), then it's clear to me that I have enough reserve for the next series of video cards. Maybe not the future 5090, but I bet it won't have any problems with a 5070.
The 4000 series is skipped because I paid a lot for the 3070 Ti in the madness of 2021.
I came here to discuss i9 settings and HT information,
but this thread has become another AMD/Intel battle. If you want the sad details everyone is spamming here, here they are:

From the tests I've seen, the 7800X3D can outperform my tuned i9 (and an i7) in all gaming situations by a small bit. It will almost always be better IN GAMING than your 14700K, and you will see slight frame increases even with a beastly or decent GPU (I had a 14700 and tested it): around +5 frames average at 2K settings. Your 14700 will almost always outperform a 7800X3D in CPU benchmarks. I don't know why anyone cares about benchmarking outside of stability tests if you're never replicating that kind of data usage, though. The only valuable benchmark is the games you play and the tasks you do most often. The 7800X3D would have been a really good and cheaper option for me (and you), and it also performs better out of the box. I was fully aware of this; my last build was AMD and I hated it, so this time around I wanted to go Intel despite the obvious price/performance discrepancy IN GAMING that everyone is freaking out about. Guess what, the AMD people are right, the CPU friggin rips. Also the new Intel shit friggin rips. Game on, Gamers.

Also, the 14700 I tested was tuned and a bit faster than OOTB (about 5.9GHz), tested with an OC'd 4080.


Anyone been able to get their i9 past 6.3GHz with HT OFF?
 
OK, I think we got this clear:
  • I don't care about superior application performance with the i7 and i9 because I don't work with my CPU - I only need it for gaming. I'm not saying that your score is not great, just that I don't give a damn about Cinebench points. Out of the box, the X3D is better at gaming and much more conservative with power. That's all I care about.
  • You don't care about superior gaming performance with the X3D because it doesn't help the 3070 Ti, and by the time you upgrade your GPU, you'll probably upgrade your CPU as well.
Shall we move on? :)
 
Sounds fun, let me try to get some numbers for comparison...
Ryzen 7 5800X3D (with -30 CO) + Thermalright Burst Assassin 120
View attachment 326235
(had to run R24 twice [I forgot to disable 10min throttle mode...], so Avg will be higher)
Works for me.

I was shocked, but Raptor Lake has some impressive IPC at just about any frequency.
 
I think you should also be interested in the other scores, because they reflect the computing power of your system. I doubt you use that PC only as a console.
I have just demonstrated that, at approximately the same power consumption, the 14700K(F) outperforms the 7800X3D by ~25% in the single and multi-core tests, with the option of getting double the multithreaded score at maximum power. I repeat: at a lower cost.
I think I will save this preset under the name "X3D mode", because it offers double what the 3070 Ti needs, and AMD supporters can only fall back on the line: "hey, I paid 30% more for the X3D, but I saved 1.02 watts :clap:".
No, I will not need another processor for a future generation of video cards, you know that very well. Even with the king of video cards (this superb 4090), the differences between the two processors are negligible at 1440p and zero at 4K. After the pitiful way Intel topics are attacked by AMD fans, I wouldn't be surprised if the new argument were that I can notice the difference between 394 fps and 381 fps while playing, and that this is the reason to pay 30% more for the X3D.
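If anyone wants to check the efficiency math, here is a quick back-of-the-envelope sketch using only the Cinebench multi numbers quoted earlier (25000 pts at 76.2W tuned, 35000+ pts at 253W stock); the 7800X3D is left out since its exact figures aren't posted here.

```python
# Quick efficiency check using the Cinebench R23 multi numbers quoted above.
# These are the figures from the earlier post; treat them as ballpark values.
tuned_pts, tuned_watts = 25000, 76.2   # 14700K(F) power-limited preset
stock_pts, stock_watts = 35000, 253.0  # 14700K(F) at default limits

tuned_eff = tuned_pts / tuned_watts    # ~328 pts per watt
stock_eff = stock_pts / stock_watts    # ~138 pts per watt

print(f"Tuned: {tuned_eff:.0f} pts/W")
print(f"Stock: {stock_eff:.0f} pts/W")
print(f"Tuned preset is ~{tuned_eff / stock_eff:.1f}x the points per watt")
```

Roughly 2.4 times the multi-core points per watt just from the power limit, which is the whole point of the "X3D mode" preset.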

Don't get me wrong. I'm not saying that the 7800X3D is a weak processor. It's not, but it's too expensive for what it offers, and I find it embarrassing to argue that a processor is only useful in games. The absolute embarrassment comes especially from those who waved flags with the 5800X3D and have now switched them to the 7800X3D. And I ask: hey, what happened to the 5800X3D? It was only released last year. Where did "future proof" go?

I'm posting a new Cinebench R23 test set, with the same settings but with "Instant 6 GHz" enabled in the BIOS. It only has an impact on the single-core result.

14700K 140A Ciner23multi.jpg

14700K 140A Ciner23single.jpg
 
I think you should also be interested in the other scores because they reflect the computing power of your system. I doubt you use that PC only as a console.
You're right - I also use it for web browsing and video playback, which is something that even a Celeron can do, especially with GPU hardware acceleration.

So you're planning to keep the i7 for several generations. That's great! :) I might also keep the X3D for a while as it's way more powerful than what I need right now. The extra cache may become more useful in the long run, just as much as your extra cores might be. I don't think either of our approaches is wrong.

I don't know about people upgrading from a 5800X3D. I guess they just wanted the best, the same as people upgrading from a 12900K to a 13900K or 14900K. We're PC enthusiasts; a lot of upgrades don't make monetary sense, we just do it for fun. As for me, I upgraded from an i7-11700 to a 7700X, and while the uplift in application performance was massive, I didn't notice much, if anything at all, in games and during everyday use. Then I got the X3D just to see what all the hype was about, and I probably would have sold it and switched back to the 7700X if not for the much lower power consumption and much more stable boost clocks out of the box. Whether it was worth it performance-wise, we'll see in the coming years, I guess. Like I said, if I were a sensible person, I'd be just about to start thinking about upgrading from a 7700K right now. But I'm not. I'm a hobby PC builder, and I'm not pretending that it's not an expensive hobby by any means. :)
 
There is life beyond gaming, believe me.
I can say the same about gaming: I won't notice any difference if I swap the 14700K for the 7800X3D, but I have a real reserve of computing power, it didn't cost me anything extra, and it's welcome for the future.

As for the future of your processor in gaming, you can take a look at the fate of the 5800X3D. Last year it was king.
 
Sorry to cut in here.

77W, and the score is 10k low. I demonstrated an all-core 4000MHz at 120W with the same score.

Can you fill me in on what happened? Was that a tuned all-core run or something?
 
I seriously don't get all the Intel vs AMD love/hate... I like both and I own both.

Also, let's take a moment and realize how nice it is that these conversations are even happening. It wasn't very long ago that AMD was so far behind this wasn't even a discussion; they have definitely redeemed themselves.
 
I seriously don't get all the Intel vs AMD love/hate... I like both and I own both.
There are no bad CPUs, only bad prices and PITA RMA processes.
Also, let's take a moment and realize how nice it is that these conversations are even happening. It wasn't very long ago that AMD was so far behind this wasn't even a discussion; they have definitely redeemed themselves.
AMD almost got snuffed out of existence. If they hadn't Ryzen to the occasion, we would still be on another +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ quad-core CPU, but with 28 E-cores today.
 
There is life beyond gaming, believe me.
I can say the same about gaming: I won't notice any difference if I swap the 14700K for the 7800X3D, but I have a real reserve of computing power, it didn't cost me anything extra, and it's welcome for the future.
And I have a reserve of extra gaming power. If you can show me what else I could ever need in a gaming PC, I'll be happy to consider it.

As for the future of your processor in gaming, you can take a look at the fate of the 5800X3D. Last year it was king.
It's next to the 12700K, sometimes 12900K, right where it's expected to be. And your point is...?

Edit: If you mean that Zen 4 and Raptor Lake perform better, of course they do! It's called progress. But the 5800X3D is still way faster than other Zen 3 CPUs and most of the Alder Lake lineup.

I seriously don't get all the Intel vs AMD love/hate... I like both and I own both.
Same here, and I couldn't agree more. :)
 
So I did that little 4GHz all-core (P-cores + E-cores) run and ran a little Time Spy.

The CPU seems efficient enough to reach near my average scores from multiple runs.

4000MHz 24T vs 5700MHz 16T CPU Time Spy score difference: minimal impact to Time Spy.

Time Spy scales linearly with the GPU, in this case an RX 6800.

I suppose getting all the cores to a higher frequency to catch the 5.7GHz score would really increase the wattage. I estimate about 148-152W at load while hopefully only running 4.2GHz, though I prefer the lower wattage at the 4GHz it's at now: roughly 123W peak vs 235W. 5.7GHz on 16 threads is closer to 260W peaks. :)
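That 148-152W guess is just the usual dynamic power scaling, roughly P ∝ f × V². A tiny sketch of the estimate; the voltages below are made-up placeholders to illustrate the scaling, not measured values.

```python
# Rough dynamic-power scaling estimate: P is roughly proportional to f * V^2.
# Baseline is the measured ~123W peak at 4.0GHz; the voltages below are
# hypothetical placeholders just to show the scaling, not readings.
base_power = 123.0   # W, all-core 4.0GHz
base_freq = 4.0      # GHz
base_volt = 1.10     # V (placeholder)

target_freq = 4.2    # GHz
target_volt = 1.18   # V (placeholder; higher clocks need more vcore)

est_power = base_power * (target_freq / base_freq) * (target_volt / base_volt) ** 2
print(f"Estimated power at {target_freq}GHz: ~{est_power:.0f}W")  # ~149W
```

A small frequency bump plus the extra vcore it needs is how you get from ~123W to ~150W that quickly.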

Efficient enough?? Perhaps this is when the E-cores really shine, when the P-cores are at the same speed.
(Usually I overclock, this is like an all new weirded out adventure and I don't know why I'm doing it.)
13700K go burrrrrrrr

EDIT: Note, this is on air cooling. I'll nab a pic next time I post another benchmark comparison.

TimeSpy CPU score comparison.png
 
...
All I do know is that my E-cores want an outstanding amount of vcore to run 4.7GHz, even with P-core and frequency reduction. Horrible design mess.
Do you know that 4 E-cores fit in the area of 1 P-core, that an E-core has approx. 70% of the performance of a P-core at the same frequency, and that they have similar energy efficiency?

E-cores running at 4.4 GHz produce approx. DOUBLE the heat output per silicon area compared to P-cores at 5.7 GHz. You have to be very careful about what frequency you make the E-cores run at.

E-cores at 4700 MHz have approx. 2.4x higher heat output per silicon area compared to P-cores running at 5.7 GHz.

E-cores got their name from SILICON AREA EFFICIENCY, not energy efficiency. You get a lot more performance from a given area of silicon with E-cores than with P-cores.
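To make the heat-density point concrete, here is the back-of-the-envelope version. The per-core wattages are hypothetical placeholders chosen only to illustrate the ratios above; the area assumption (4 E-cores per P-core footprint) is the one stated.

```python
# Heat output per silicon area: 4 E-cores occupy roughly the area of 1 P-core,
# so the E-core cluster's power is concentrated into one P-core's footprint.
# Per-core wattages below are hypothetical placeholders, not measurements.
p_core_watts_57 = 14.0   # one P-core at 5.7 GHz (placeholder)
e_core_watts_44 = 7.0    # one E-core at 4.4 GHz (placeholder)
e_core_watts_47 = 8.4    # one E-core at 4.7 GHz (placeholder)

area_ratio = 4  # E-cores per P-core-sized area

density_44 = area_ratio * e_core_watts_44 / p_core_watts_57
density_47 = area_ratio * e_core_watts_47 / p_core_watts_57

print(f"E-cores @ 4.4 GHz: ~{density_44:.1f}x the heat per area of a P-core @ 5.7 GHz")
print(f"E-cores @ 4.7 GHz: ~{density_47:.1f}x the heat per area of a P-core @ 5.7 GHz")
```

That concentration of heat in a small footprint is why pushing the E-core ratio gets thermally ugly so fast.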
 
From what I've been seeing with the CPU-Z ST/MT benchmark, you want the E-core and P-core ratios moderately close together, with a slight offset toward a higher ratio on the P-cores, but there is a point where jacking up the P-core ratio seriously hurts the overall ST/MT results. At least on the 14700K. The E-cores provide far more uplift overall: the MT uplift is enormous, but ST not so much. However, it seems a well-sustained all-P-core frequency is preferable to an occasional boost on 1-2 P-cores while the others run at lower frequencies. Setting the E-core ratio higher is nice, but it seems tricky to get stable beyond a certain point.

Still trying to gauge and figure out instabilities on this system. There is a lot of complexity to the design, which is both good and bad: it's harder to figure out, but once optimized well it should be better, given the amount of fine-tuning control that's offered.
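If it helps anyone doing the same kind of sweep, a trivial way to keep the results straight is to log every P/E ratio combo with its CPU-Z scores and rank them by whatever weighting matches your use. A rough sketch; the ratios and scores below are made-up placeholders, not my actual results.

```python
# Bookkeeping for a P-core / E-core ratio sweep: log each combo's CPU-Z
# ST/MT scores and rank them. All numbers below are placeholder values.
runs = [
    # (p_ratio, e_ratio, st_score, mt_score)
    (55, 42, 880, 15200),
    (56, 43, 890, 15500),
    (57, 43, 895, 15450),
    (58, 44, 893, 15100),  # higher P ratio, but the overall result drops off
]

# Weight MT and ST however you actually use the machine.
def score(run, st_weight=0.3, mt_weight=0.7):
    _, _, st, mt = run
    return st_weight * st + mt_weight * (mt / 16)  # scale MT into ST's ballpark

best = max(runs, key=score)
print(f"Best combo so far: P x{best[0]} / E x{best[1]} (ST {best[2]}, MT {best[3]})")
```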
 
"My 14900K will draw less power if I disable hyper-threading" is about as smart as "my body will require less food if I cut off both of my legs". If you buy a $600 CPU only to immediately disable half of its features just to get acceptable power draw, you're not being smart - you are, in fact, being the exact opposite.

I'm getting really tired of seeing these "Intel CPUs can be power efficient too" threads/posts. Nobody cares that they can be, the point is that, at stock, they are not. The fact that it's possible to make these CPUs consume sane amounts of power is not the saving grace that everyone who uses them seems to think it is. If it's not good out of the box, i.e. how the vast majority of users will experience it because most users don't tweak CPU power consumption, it's not good period.
-
If you're going to turn off hyperthreading in an i7/i9, just buy an i5 instead.
-
Exactly my thoughts.
-
If you can afford to buy a $600 CPU, IT IS EXACTLY ONLY YOUR CHOICE WHAT YOU WANT TO DO WITH IT, PERIOD!!!
 
If you can afford to buy a $600 CPU, IT IS EXACTLY ONLY YOUR CHOICE WHAT YOU WANT TO DO WITH IT, PERIOD!!!
It's a dumb choice. Let's be real.
 
I'll probably eventually look into how disabling HT on just some individual P-cores works out versus disabling or enabling it on all of them. Someday, after dialing in stability across 20 cores with individual ratios and piles of voltage offsets. There are some scenarios where having HT enabled or disabled helps or hinders. It's worth looking at if you want to get the most out of your system for how you use it.

It's no different from any of the other numerous settings that need attention for stability or performance optimization. It's not too different from the AVX offset being multifaceted and useful, but not always.

If you can get away with keeping HT enabled on just a few individual cores, half of them, or even three quarters, that might still be worth doing if there is enough of a qualitative difference between the different setups.
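Not the BIOS route I'm using, but for anyone who wants to experiment with per-core HT without rebooting: on Linux you can park the HT sibling of individual cores through sysfs (needs root). A rough sketch, assuming the usual sysfs topology layout:

```python
# Take the hyper-threading sibling of selected cores offline on Linux (run as root).
# Uses the standard sysfs topology files; adjust target_cores for your CPU layout.
from pathlib import Path

CPU_DIR = Path("/sys/devices/system/cpu")
target_cores = [0, 1]  # primary logical CPUs of the cores whose HT sibling to park (example)

for cpu in sorted(CPU_DIR.glob("cpu[0-9]*")):
    siblings_file = cpu / "topology" / "thread_siblings_list"
    if not siblings_file.exists():
        continue
    # e.g. "0,1" or "0-1": first entry is the primary thread, last is the HT sibling
    siblings = siblings_file.read_text().strip().replace("-", ",").split(",")
    if len(siblings) < 2:
        continue  # no HT sibling (E-core, or HT already off)
    primary, sibling = int(siblings[0]), int(siblings[-1])
    if primary in target_cores and cpu.name == f"cpu{sibling}":
        (cpu / "online").write_text("0")  # park this logical CPU
        print(f"Took cpu{sibling} (HT sibling of cpu{primary}) offline")
```

Writing 1 back to the same online file brings the sibling back, so it's easy to A/B test with and without HT on specific cores.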
 
I still think that if you need to disable cores and/or HT, then you either chose the wrong CPU, or you seriously need to upgrade your cooling.

I'm not denying that there's a lot of efficiency to be gained on 14th gen, but perhaps the highest of the highest end isn't always necessary. I'm sure that at least half of the i9 owners would be just as happy with an i5.
 
Reposting my findings from running a frequency-limited 14900K with an RTX 4070 and a 1440p monitor (HT off) to this more appropriate thread, and adding one more data point:

Cyberpunk's built-in benchmark with RT off, DLSS off, settings on high. The low end of the reported power draw is while the camera pans the bar and is reliable and repeatable; the maximum power number is not.

P cores MHz/E cores MHz - power - avg/min/max fps

5700/4400 - 122-160W - 90/73/122 *
5000/4000 - 66-83W - 94/73/126
4800/4000 - 60-75W - 93/73/126
4500/3600 - 48-63W - 95/58/126 **

* The game drew more than 160W before I got to the benchmark, and then the CPU overheated during the benchmark under my air cooler; I had removed the power limit for this test

** It seems the dip in min FPS was caused by the E-cores running at 3600MHz; the 4500/4000 combination was fine

The insane stock frequencies more than doubled power draw for no benefit compared to 4800/4000 settings (in my case even worsening performance due to overheating).
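Putting the same runs on a per-watt basis makes the point clearer. A quick sketch using the numbers above; I'm taking the midpoint of each reported power range as a rough stand-in for average draw, which is an assumption, not a measurement.

```python
# FPS per watt for the Cyberpunk runs above.
# Power is taken as the midpoint of the reported min-max range (rough assumption).
runs = {
    # "P/E MHz": (power_low_W, power_high_W, avg_fps)
    "5700/4400": (122, 160, 90),
    "5000/4000": (66, 83, 94),
    "4800/4000": (60, 75, 93),
    "4500/3600": (48, 63, 95),
}

for label, (lo, hi, fps) in runs.items():
    midpoint = (lo + hi) / 2
    print(f"{label}: {fps / midpoint:.2f} avg fps per watt (~{midpoint:.0f}W)")
```

The stock-ish 5700/4400 profile ends up at about half the fps per watt of even the 5000/4000 profile, while being GPU-bound anyway.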
 
No one is saying you need to, though. Some people are just inherently into tweaking and optimizing to make things more optimal. Also, regardless of how happy they would be, it's their money, and they might reach a point where they aren't just as happy with an i5. I mean, I could run my i7 just as happily as any weaker-performing CPU, and at lower power draw than it would use at stock. It's entirely capable of operating however I wish. In fact, Asus even lets you create profile setups for specific programs, which seems pretty useful.

That is, until they're not happy with the i5 and wish they had an i7's or i9's MT capabilities and/or a bit better boost frequency binning, though realistically it's probably the MT that matters, for a given task that actually benefits from it.
 
Maybe the results with higher clocks are similar to those with lower clocks due to overheating, a GPU limit, or a combination of both?
 
Maybe the results with higher clocks are similar to those with lower clocks due to overheating, a GPU limit, or a combination of both?
Sure, I am not testing a CPU, I am testing my CPU-GPU combination, with a moderately powerful GPU which is GPU-limited at the resolution I want to use; however, I lowered the game settings to make the task as easy as possible for the GPU. The results are comparable even when the CPU was not overheating, so the constant performance level indicated (94/73/126) is due to the GPU limitation.

It feels like I could/should be using a more powerful GPU with this CPU; however, the 4070 costs about as much as the CPU (in my case I paid 25% more for the GPU, as I got one with a nice cooler on it and I got the 14900K quite cheap). So it's not like I'm using a GPU which costs a third of what the CPU does.
 
A hard GPU limit like yours just proves what was said above: you don't need an expensive CPU for gaming, especially with a mid-range GPU and reasonable expectations.
 
I had a similar experience with the 12900KS. The extra frequency is mostly marketing. Possibly with a massive radiator (1080mm) it would be worthwhile, as one could avoid throttling.
 
If you step back for a second... you will see that 14th gen is now a two-year-old architecture on a 10nm-class (7nm-ish) node competing with a newer architecture on a newer 4nm node, and it really only loses badly to the model with 3D cache, in gaming efficiency.

If you compare it to the non-3D-cache parts, or to the 3D-cache parts in anything but gaming, it's less efficient, but it's also faster...

The fact that this chip is in contention for anything is nothing short of a miracle. There should be no way team blue is in the top 10 slots of any FPS chart, yet there they are. The only way to really do that is to yeet power until you hit 6GHz.
 
It's a dumb choice. Let's be real.
It's not. For goodness' sake: you turn on the PC, hit Del after 1 second when the UEFI shows up, shortcut to the "HT off preset", save, restart, and with a PCIe NVMe drive you're good to go 8 seconds later with HT off and 5-20% more fps (min/max/avg), according to your own personal preference. Why? Because you paid the $$$ for it.

It's not dumb at all, it's just an individual use case / showcase.
 