
Any 10-12 core CPUs with Zen 3 or better yet Golden Cove IPC that can clock all core 5GHz or higher

I think you are correct about raw performance, though the power consumption needed for good clocks across all cores may have been too high without lowering those clocks too much. Even though you mentioned that I can use the e-cores for background tasks, would you have preferred Intel make a 10 P-core Alder Lake chip as an option, even if it coexisted alongside the 8P + X e-core chips?

It seems Intel has really had a tough time with its process nodes. Even the success Alder Lake has had is on 10nm, and Intel has admitted that is a dead-end process node that they are trying to replace with something new.

I am not sure why Intel did not just use TSMC, GlobalFoundries, or Samsung to build their CPUs if their own process node was having such a rough time that they had to backport Rocket Lake to 14nm. Even 10nm runs extremely hot and is not what they want, which is why they are trying to replace it.
What is the point of 10 or more P cores? The apps that use that many cores would perform better with that die area spent on e-cores instead. I'd rather they used the die area to make the P cores bigger, so they have more IPC, rather than just throwing in a bunch of extra cores.
 
This is just wrong. Do you think SMT can just be "turned on" without being designed for? It's an intrinsic architectural trait of the cores; it's included in the design work from the beginning. It obviously takes time to implement and make work properly.

Yeah, that's true. But these have been designed with HT/SMT in mind for two decades. SMT/HT has existed since the late-2002 release of the 3.06 GHz Pentium 4. I am sure Intel reused those ideas, and it was easy to carry into the designs of all future CPUs.

And all the CPUs that did not have it could, I am sure, easily have had it, like the Core i5s prior to Comet Lake. They just disabled it so they could sell chips without it. I highly doubt those chips were specially designed without it; it was a switch. Why else did Intel suddenly enable HT on all CPUs, even the low-end i3 series, starting with Comet Lake? Because AMD did it too, so Intel could not get away with it anymore.
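
For what it's worth, at the OS level SMT really is exposed as a switch. A minimal sketch, assuming Linux with a kernel that exposes the sysfs SMT control interface (standard on recent kernels, but not guaranteed everywhere):

```python
# Minimal sketch (Linux only, assumes a kernel that exposes the sysfs
# SMT control interface): check whether SMT/HT is currently enabled.
from pathlib import Path

control = Path("/sys/devices/system/cpu/smt/control")
active = Path("/sys/devices/system/cpu/smt/active")

if control.exists():
    # control reads on/off/forceoff/notsupported; active reads 1 or 0
    print("SMT control:", control.read_text().strip())
    print("SMT active: ", active.read_text().strip())
else:
    print("This kernel does not expose the SMT control interface.")

# As root, SMT can even be toggled at runtime without touching the BIOS:
# control.write_text("off")
```

Whether the silicon was designed for SMT in the first place is, of course, the separate question raised above.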
 
I had a 5950X for a few months until I sold it and bought an 11700 (non-K) instead. There is absolutely no difference between the two in gaming. Maybe you could feel the difference between 300 and 320 FPS at 720p low, but 1. I doubt it, 2. It's pointless.

You won't future-proof with a 5 GHz CPU. Future generation CPUs will be a lot faster with the same clock speeds.

And no, 5 GHz is not "nice to have". Good performance in general IS nice to have.
 
What is the point of 10 or more P cores? The apps that use that many cores would perform better with that die area spent on e-cores instead. I'd rather they used the die area to make the P cores bigger, so they have more IPC, rather than just throwing in a bunch of extra cores.

Well, you were saying Intel could have easily made them, and that they are more efficient than e-cores in power use? So now you are saying e-cores are more efficient?

Well, how about spending the die area on neither more P cores nor e-cores? How about 8 P cores with much better IPC? How would you have felt about that, assuming they could do it?

It does seem IPC has stagnated a bit. We have had some good uplifts, but nothing groundbreaking from an already good arch, really, since Bloomfield to Sandy Bridge, which was enormous; I think at least 40-50%.

Well, AMD did have just as big a jump with the original Ryzen, but that was from the already weak Bulldozer, not from a good arch the way Intel went from Bloomfield/Nehalem to Sandy Bridge.

Since then, some solid ones but nothing earth-shattering. Sandy to Ivy was like 10-15%, I think. Ivy to Haswell was like 20-25%. Haswell to Skylake was like 15-25%.

Then we were on Skylake IPC for a while. The original Zen and Zen+ were at Haswell IPC. Zen 2 caught up to Skylake IPC, or was 5% better. Then Zen 3 added 19% on top of that. Then Intel caught up with Cypress Cove or Tiger Lake, but the backport to Rocket Lake made it worse. Then Intel finally got Golden Cove, which had a 19% uplift over Cypress Cove (or was it Tiger Lake's Willow Cove, or Ice Lake?). And it seems to be about 11-13% over Zen 3; some say 20%, some say less than 10%.

Am I correct about all those IPC gains generation over generation? Does that sound about right from memory?

Will we have a breakthrough moment where AMD or Intel releases something with an insane IPC gain, like Intel going from NetBurst to Conroe (though that was almost 100%) or to Sandy Bridge, or AMD going from K7 to K8, which was like 40% or better?
 
Intel and AMD are going down different paths.

Intel can't compete with AMD on core count, so they are going back to their strength of per-core performance, though of course their e-core system is there to stay as competitive as possible in creative workloads.

Personally, if the priority is gaming, especially games that are not high-budget AAAs optimised for high core counts (which is 99% of the games released), and of course emulators such as RPCS3, then I would prefer Intel. In RPCS3, Intel destroys AMD; it's the absolutely perfect workload for them. But if the priority is creative work, and especially if I only played high-budget shooters, then I would go AMD.

With the gains both companies are making every generation now, longevity is shot either way. If I was going Intel, the product that would interest me is the one with no e-cores that got reviewed here on TPU.

For creative work I have mostly moved over to GPU encoding now, forced on me by energy costs.

Also, Intel HEDT and Threadripper might make a comeback if it becomes standard for mainstream PC chipsets to have only one or two PCIe slots, as HEDT, with all its extra lanes, would then be the way to get more traditional I/O options.
 
I don't know what you are running right now, but be it Intel or AMD, I would just run it as fast as you can until you can't cool it anymore.

That's pretty much what I do with all of my CPUs.
 


Yeah, that makes some sense. Though are 8 strong cores still easily going to be more than enough for gaming, like dgianstefani and some others insist? Because then I could get a 12700K or 12900K and shut down the e-cores, and later a Raptor Cove chip and shut down its e-cores in the BIOS, and have a super 8-core/16-thread chip that would trade blows with the 5800X3D in gaming and be far superior at everything else. Though I would still be stuck at 8 strong cores. But that may be more than enough for even the most intensive games for a long time yet?

But the Cyberpunk example in this thread, where someone said it really does use more than 8 cores, has me worried.

And you say if the priority is gaming, especially games that are not high-budget AAAs. Well, my priority is both, including high-budget AAA games. Are those optimized for core counts above 8, like Watch Dogs 2 and Cyberpunk? Or is it only Battlefield-style heavy multiplayer that can scale meaningfully above 8 cores?

And you say that if you were going Intel, a product reviewed here with no e-cores would be of interest. Is there such a thing, or are you referring to a future product? The 12400 has no e-cores, but no overclocking either, as it is locked, and it only has 6 P cores.

Perhaps Raptor Lake may fulfil your requirement this time.

Raptor Lake? From what I have seen, it only has 8 strong cores and they are just adding more e-cores. Have you read or heard any breaking news that this will change, or that there is a separate SKU with 10-12 P cores on Raptor Lake?
 
Comparing consecutive Intel generations in the Sandy Bridge - Skylake era is pointless. If you compare Sandy Bridge to Skylake directly, on the other hand... the small gains through the years did add up. ;)
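
To illustrate the compounding, a quick sketch (the uplifts below are rough ballpark figures often quoted for these generations, not measured data):

```python
# Illustrative sketch of how modest per-generation IPC gains compound.
# The uplifts are rough ballpark assumptions; swap in your own numbers.
gains = {"Ivy Bridge": 0.05, "Haswell": 0.10, "Skylake": 0.10}

ipc = 1.0  # normalize Sandy Bridge to 1.0
for gen, uplift in gains.items():
    ipc *= 1.0 + uplift
    print(f"{gen}: {ipc:.2f}x Sandy Bridge IPC")

# Three single-digit-to-10% steps still end up ~1.27x overall, which is
# why the endpoint-to-endpoint comparison looks much better than any
# individual generation did.
```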

More than 8 cores for gaming is pointless. You might need 12-16 cores a few years down the line, but present-day 16-core CPUs will be worthless by then. Just look at how the 3950X gets destroyed by the 5800X (not to mention the 5800X3D) in pretty much everything.

Future-proofing is a myth. Just accept it.
 
You might need 12-16 cores a few years down the line, but present-day 16-core CPUs will be worthless by then.
The one good thing about the 12/16-core CPUs is their potential for higher clocks. Mine will boost to 5150 MHz, and the 5950X will boost higher, whereas the 5600X will only boost to 4850 MHz and the 5800X to 5050 MHz.

(attachment: SuperPi32M each core.png)
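
What a per-core test like that screenshot does can be roughly approximated in software. A rough sketch, assuming Linux with the sysfs cpufreq interface (core numbering, SMT siblings, thermals, and scheduler behavior will all affect the readings):

```python
# Rough sketch (Linux only, assumes the sysfs cpufreq interface): pin a
# busy loop to each logical core in turn and sample its reported clock,
# to see which cores are binned for the highest single-core boost.
import os
import time

def current_khz(cpu: int) -> int:
    path = f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_cur_freq"
    with open(path) as f:
        return int(f.read())

def probe(cpu: int, seconds: float = 2.0) -> int:
    os.sched_setaffinity(0, {cpu})        # pin this process to one core
    peak, deadline = 0, time.time() + seconds
    while time.time() < deadline:         # the loop itself is the load
        peak = max(peak, current_khz(cpu))
    return peak

for cpu in sorted(os.sched_getaffinity(0)):
    print(f"cpu{cpu}: peaked at {probe(cpu) / 1e6:.2f} GHz")
# Caveat: cpu numbers are logical, so SMT siblings share a physical
# core, and background load can skew individual readings.
```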
 
Comparing consecutive Intel generations in the Sandy Bridge - Skylake era is pointless. If you compare Sandy Bridge to Skylake directly, on the other hand... the small gains through the years did add up. ;)


More than 8 cores for gaming is pointless. You might need 12-16 cores a few years down the line, but present-day 16-core CPUs will be worthless by then. Just look at how the 3950X gets destroyed by the 5800X (not to mention the 5800X3D) in pretty much everything.

Future-proofing is a myth. Just accept it.


Is more than 8 cores pointless even for Cyberpunk? Are games that look like they use more cores just a mirage, pegging every core they can even though they get zero benefit from doing so? I have heard some games like Watch Dogs 2 and Cyberpunk do that but show no performance improvement beyond 8, or even 6, cores anyway.

The one good thing about the 12/16-core CPUs is their potential for higher clocks. Mine will boost to 5150 MHz, and the 5950X will boost higher, whereas the 5600X will only boost to 4850 MHz and the 5800X to 5050 MHz.


Is that because they are better binned than their lower-core-count Zen 3 counterparts?
 
Sorry, but I want control over my PC, not it controlling me. This is especially important with regard to Windows and disabling the spyware and bloat crap, and it's why I do not like Win11, where it is harder to do, whereas in 10 it is well known and easy to disable the Big Brother stuff. I am never going to just trust my computer to do everything for me.
We lost that battle long ago... word of advice: never look into the chipset security processors. Your PC has not been completely under your control for some time.
 
Is more than 8 cores pointless even for Cyberpunk? Are games that look like they use more cores just a mirage, pegging every core they can even though they get zero benefit from doing so? I have heard some games like Watch Dogs 2 and Cyberpunk do that but show no performance improvement beyond 8, or even 6, cores anyway.
I have played Cyberpunk on my 4-core Ryzen 3 just fine, and it never pegged my 11700 at 100% usage, so I would say so, yes.
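
If anyone wants to check this themselves rather than guess, here is a minimal sketch using the third-party psutil package; the 50% threshold and sample count are arbitrary choices for the sketch, not anything standard:

```python
# Minimal sketch (pip install psutil): log per-core utilization while a
# game runs, to see how many logical cores it actually loads instead of
# guessing from total CPU usage.
import psutil

SAMPLES = 30           # ~30 seconds of in-game sampling
INTERVAL = 1.0         # seconds per sample
BUSY_THRESHOLD = 50.0  # percent; arbitrary cutoff for "in use"

busy_counts = []
for _ in range(SAMPLES):
    per_core = psutil.cpu_percent(interval=INTERVAL, percpu=True)
    busy_counts.append(sum(1 for pct in per_core if pct > BUSY_THRESHOLD))

print("cores above threshold per sample:", busy_counts)
print("median:", sorted(busy_counts)[len(busy_counts) // 2])
# Caveats: logical cores double-count SMT siblings, background tasks add
# noise, and a core showing load is not proof extra cores improve FPS.
```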
 
I think it will be rare for games to need more than 8 cores. Console ports will be based on console hardware (which isn't even a full 8 cores, as they reserve resources for recording etc.), and PC-focused game developers won't want to restrict their market by making a game require a CPU that only a few percent of people have. Up until as recently as a few years ago, the majority of PC games released didn't use more than 4 threads, with many using only 1-2. The problem, of course, is that we don't see these games used in hardware reviews.
 
Well, you were saying Intel could have easily made them, and that they are more efficient than e-cores in power use? So now you are saying e-cores are more efficient?

Well, how about spending the die area on neither more P cores nor e-cores? How about 8 P cores with much better IPC? How would you have felt about that, assuming they could do it?
P cores are more efficient in performance per watt. E-cores are more efficient in performance per unit of die area. The thing is, when an application uses more than 8 P cores, it will most likely scale to n cores, in which case having the e-cores is the better option.
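
As a back-of-the-envelope illustration of that trade-off (the area and throughput ratios below are assumptions made up for the sketch, not Intel's actual figures):

```python
# Back-of-the-envelope sketch of the perf/watt vs perf/area trade-off.
# All numbers are illustrative assumptions: say one P core occupies the
# die area of ~4 e-cores, and an e-core delivers ~50% of a P core's
# throughput on a workload that scales to n cores.
P_AREA, E_AREA = 4.0, 1.0   # relative die area per core
P_PERF, E_PERF = 1.0, 0.5   # relative throughput per core

area_budget = 8.0           # spare area worth two extra P cores

as_p_cores = (area_budget / P_AREA) * P_PERF  # +2 P cores
as_e_cores = (area_budget / E_AREA) * E_PERF  # +8 e-cores

print(f"extra throughput spent on P cores: {as_p_cores:.1f}")  # 2.0
print(f"extra throughput spent on e-cores: {as_e_cores:.1f}")  # 4.0
# Under these assumptions e-cores win per unit of area on parallel work,
# while per-watt efficiency and per-thread speed still favor P cores.
```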

But the Cyberpunk example in this thread, where someone said it really does use more than 8 cores, has me worried.
What difference does it make if it can use more than 500 cores? The fact is, Alder Lake performs the best in Cyberpunk, so it doesn't matter whether the game can use 90 cores or not. The lowest FPS the 12900K drops to in this game is 115 with RT on at ultra settings. A 5950X is 30-40 FPS behind in that same scene. Usually it sits between 150 and 230 FPS. That's at 720p with DLSS and RT on, trying to make the game as CPU-bound as possible.

Yes. Utterly and completely:

(benchmark chart: cyberpunk-2077-1920-1080.png)
That test doesn't have RT on. RT loads the CPU a lot. There is no way the Zen 2 chips are that high on the list with RT on. They can barely hit 60 FPS.
 
Sorry but I want control over my PC, not it controlling me. This is especially more important with regards to WIndows and me disabling the spyware and bloat crap and why I do not like WIN11 as it is harder to do where as in 10 it is well known and easy to disable the Big Brother crap. Never am I going to just trust my computer to do everything for me.
Well, thank you for confirming that my interpretation of your approach was correct, if nothing else. But, here's a challenge: Read that first sentence back to yourself, then ask yourself how that relates to letting the CPU control its own clock speeds.

You see how utterly and completely nutty that is, right? You see how even placing clock regulation on the same planet as corporate data harvesting is completely and utterly absurd, right? This is getting really, really close to tinfoil hat territory.

Seriously. If you feel like your PC is controlling you if you can't have absolute and utter control over its clock speeds, you should stop using computers. This is not a healthy way of engaging with the world. The kind and degree of a need for control that you're expressing here is deeply unhealthy. If letting the automated clock regulation circuits in the CPU do their thing makes you feel icky because you're not in control, the need for that level of control is what needs changing.

Do you also refuse to use ... thermostats? Fan controllers? The power adjustment on your microwave? Do you refuse to use dimmer switches, instead changing brightness by changing the lightbulb? Do you refuse to have any kind of automatic backlight adjustment on your TV, no matter how advanced (including OLEDs where each pixel is self-adjusting)? Do you mod the firmware on your phone so that its transmit power is always constant, rather than being modulated? Do you see how the lines you are drawing for computers are entirely arbitrary and have no logical relation to anything beyond your own beliefs?

Back then every overclocking and enthusiast guide said to shut off these features.
Yes, but ... so what? Does it being commonly offered advice mean it's good advice? Have you ever seen any kind of proof that it ever made a difference to anything, perhaps outside of extremely small margin competitive benchmarking?
I remember seeing SpeedStep in the BIOS and thinking, wow, this is a laptop or SFF thing, and overclocking guides said to disable it. If I could run all-core clocks all the time, I would, as it was easy, CPU temps were as cool at idle, and responsiveness was better than on laptops. So yes, it made sense back then for an enthusiast high-end build, especially since it was so easy to do on air cooling and there were no boost clocks; when they were introduced, running them all the time all-core on a quad was a piece of cake on a decent tower air cooler with fine thermals. So of course I wanted to do it.
I still don't see how that's an "of course", outside of just bowing to the pressure of common beliefs and not thinking critically. Why would you "of course" want to run peak clocks at all times if it didn't actually provide any kind of benefit? It's entirely possible that disabling SpeedStep was the best approach back then, but ... well, things change. CPUs today are vastly more advanced than even ten years ago, let alone seventeen. Current CPUs boost to peak clocks within a few milliseconds when hit with a suitable load. AMD's latest mobile chips boost to peak clocks in less than one millisecond, and their desktop chips are in the same range. Intel is slower, but not enough to matter, at around 16 ms. If you're claiming that you can feel the difference between a CPU staying at peak clocks all the time vs. one boosting to peak in single-digit or low-double-digit ms, then you're deluding yourself.
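
Anyone skeptical of those ramp times can approximate the measurement themselves. A hedged sketch, assuming Linux with the sysfs cpufreq interface; sampling sysfs adds overhead, so treat the result as an upper bound rather than a precise figure:

```python
# Hedged sketch (Linux only, sysfs cpufreq assumed): start from idle,
# slam one core with work, and time how long the reported clock takes
# to reach ~95% of its maximum.
import os
import time

BASE = "/sys/devices/system/cpu/cpu0/cpufreq"

def read_khz(name: str) -> int:
    with open(f"{BASE}/{name}") as f:
        return int(f.read())

os.sched_setaffinity(0, {0})       # keep the busy work on core 0
max_khz = read_khz("cpuinfo_max_freq")
time.sleep(2)                      # let the core settle to idle clocks

start = time.perf_counter()
while read_khz("scaling_cur_freq") < 0.95 * max_khz:  # loop is the load
    if time.perf_counter() - start > 1.0:
        break                      # give up after one second
print(f"ramp to ~95% of max clock: {(time.perf_counter() - start) * 1e3:.1f} ms")
```
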
It was like a free overclock.
By your own description, it was no such thing. You see that, right? If your CPU isn't being loaded, then what does it matter if it downclocks? How is it anything at all like an overclock, free or not, if it doesn't actually increase performance in any meaningful way? And isn't the core part of an overclock that it increases your clocks to improve performance? I don't see how lowering clocks when not in use is anything even remotely resembling a free overclock.

The more you write, the clearer it is that you've spent these two decades constructing an intricate set of hyper-conservative and anxiety-ridden beliefs for yourself around how a PC should operate. Ridding yourself of those beliefs will be far more beneficial to you than any CPU upgrade, so that's where I'd start if I were you.
Things have drastically changed though as you said.
Yet you are still actively and vehemently refusing to accept that your beliefs and preferences are fundamentally unsuited to modern computing. You see the problem there, right?
Now things are changing even more. As you said, sticking with a manual all-core overclock leaves performance on the table, since boost clocks only apply to a core or two and almost never all-core; the power and heat are just too much unless you switch to good liquid cooling. On air it is very difficult now, and at certain speeds even water cooling flat out will not hold stability no matter how good the cooling is.
The thing is, manual clock adjustments are easier than ever, including fixed clocks. The problem is that factory configurations are now dynamic and aggressive in their boost clock scaling in a way that was unthinkable a decade ago. And when you have a massively powerful CPU monitoring and adjusting clock speeds millions of times a second, no fixed setting done by a human can hope to keep up.
So yes, I probably have to change, as manual overclocking is kind of going away now that they max these CPUs out of the box. Though at the same time, even a slight all-core underclock can be OK if temps are good, because sometimes the algorithms push the CPU too far and temps get too hot.
That's true to some extent. Consumer SKUs are IMO tuned far too hard for performance, and there's a lot of good to be done in changing that tuning for efficiency. Still, there are few situations in which using the dynamic tuning options isn't better for that as well - with the obvious exception being significant underclocks dropping power very low, which can cause trouble for performance-oriented boost algorithms.
We shall see what Zen 4 brings to the table, but Ryzen in general is much harder to overclock manually all-core, and it's not designed for it the way Intel CPUs are right now. Intel P cores are boosting to 5.6 GHz, which is great, whereas Ryzen 5000 rarely boosts higher than 4.8 or 4.9 GHz even on one or two cores. So with my manual 4700 MHz overclock on CCD1 I am only losing 100-200 MHz (officially 100, as max boost is 4.8 GHz on the box of the 5900X). But Zen 4, if the demos are to be believed, is rumored to boost to 5.6 GHz or higher, and an all-core manual overclock at that speed is unlikely, so manual overclocking probably dies for me if I get that and the rumors hold.
That's just how things are developing. There's no longer room in the competitive landscape for leaving near 50% of performance on the table like with Sandy Bridge, and acceptance for higher TDPs and power draws is much higher. At the same time boost algorithms and the options for tuning those are getting more and more advanced. There's no realistic route back to static clocks, as you will never get as good overall performance with a one-size-fits-all approach as you will with a dynamic approach.
Yeah, that's true. But these have been designed with HT/SMT in mind for two decades. SMT/HT has existed since the late-2002 release of the 3.06 GHz Pentium 4. I am sure Intel reused those ideas, and it was easy to carry into the designs of all future CPUs.
But SMT still needs to be explicitly designed into every new architecture - having done it before doesn't mean it's automatically included in any major revision.

Also, you know that Intel has been developing Atom cores, which the E cores are the newest revision of, for more than a decade, right?
And all the CPUs that did not have it could, I am sure, easily have had it, like the Core i5s prior to Comet Lake. They just disabled it so they could sell chips without it. I highly doubt those chips were specially designed without it; it was a switch. Why else did Intel suddenly enable HT on all CPUs, even the low-end i3 series, starting with Comet Lake? Because AMD did it too, so Intel could not get away with it anymore.
That's literally the opposite of what I was saying. When you have SMT, you can disable it - as with essentially any other feature. But to have SMT, you need to design the core for it. Whether it's included or not in various SKUs is irrelevant to the point I was making.

And I still don't see why you insist on disabling it. You wanted the """free overclock""" of your CPU not clocking down when not being used, but you don't want the actual, real-world performance increase of SMT? o_O Sorry, but your brand of logic here is making my head spin.
It does seem IPC has stagnated a bit. We have had some good uplifts, but nothing groundbreaking from an already good arch, really, since Bloomfield to Sandy Bridge, which was enormous; I think at least 40-50%.
This is the opposite of true. SB was absolutely a major increase (though not as high as 40-50% AFAIK), but the era of IPC stagnation was Sandy Bridge-Comet Lake, where per-generation IPC increases were often in the mid single digits. Or, heck, Intel's four-generation Skylake(++++) run? Zen was a 50%+ IPC increase over previous AMD offerings; Zen 2 was a ~15% increase (over Zen+); Zen 3 was a 19% increase over Zen 2. Rocket Lake was highly variable, delivering anywhere from a 6% to a 22% IPC increase depending on the workload. Alder Lake delivers an 18-20% increase for the P cores. Gains in the past few years have been bigger and more regular than anything seen for the preceding 5+ years.


As I've said before: please, pretty please, make an effort to try and let go of your anxieties and need for absolute control over these things. You're getting in your own way, and it's making you frustrated over not getting things that simply can't exist. You would be a lot happier if you just took a step back, accepted that CPUs are far better at frequency and voltage control than you could ever be, and let them do their thing - with some tuning and input if you wanted that. That way you could actually use and enjoy your PC. To adopt some of your own wording: it's your current approach that's seeing your PC controlling you and not the opposite - you're fooling yourself into thinking you're improving something when instead you're making things worse, hurting both performance and efficiency for no benefit other than a misbegotten peace of mind.
 
Here's what needs to happen in this thread. We need to stop pandering to the inane mutterings of this deluded guy. No, it's not a compromise to choose a processor that doesn't fit his requirements, because those requirements are nonsensical and not based in reality. It has been explained at length, by multiple people including moderators, how and why he is wrong in his assumptions, but he continues to march down his self-imposed strict specification. A 12900K is an excellent choice, and it doesn't have the shortcomings this guy believes it does. Same for the 5800X3D, despite it not being "oooooo 5 GHz" or "wowooowoow 10 cores".

All that happens if this thread continues is more time wasted throwing pearls to swine.
You're wasting your time bro
 
You're wasting your time bro
Maybe, but there's always a chance of changing their mind, and if so, it's worth it. Snapping even one person out of an ingrown, hyper-conservative, anxiety-ridden mindset would be more than worth the effort, even if chances of that are tiny overall.
 
Maybe, but there's always a chance of changing their mind, and if so, it's worth it. Snapping even one person out of an ingrown, hyper-conservative, anxiety-ridden mindset would be more than worth the effort, even if chances of that are tiny overall.
I would usually agree, but he's being told the same thing by at least six different people, so I wouldn't hold your breath.

After a certain point, I don't feel that giving people more attention helps with their delusions; it just validates their concerns.

Like, replying to an issue as if it's an issue reinforces it in people's minds. The best thing to do is to refute it and move on.

Ask yourself: does he actually acknowledge and accept any of the detailed explanations already written? Or does he ignore them and press on? He's seeking attention and we're giving it to him. Pearls to swine.
 
You're not wrong, but I'm seeing things that might indicate some movement. With someone so utterly bereft of self-critical thinking and so stuck in their own privilege, even the first little step is going to be hard fought and difficult. Getting them to explicitly state their motivations and beliefs - most of which are often subsumed into behavioural patterns and never even made explicit otherwise - is the first step towards productively questioning and changing these motivations and beliefs. But oh boy does it take time and work, and does it have a million chances to fail. Absolutely. Still, worth a try.
 
That test doesn't have RT on. RT loads the CPU a lot. There is no way the Zen 2 chips are that high on the list with RT on. They can barely hit 60 FPS.

I think the "RT ON" issue is not a CPU issue, it's a GPU issue. You'll get GPU limited way before you get any CPU limitation.
 
I think the "RT ON" issue is not a CPU issue, it's a GPU issue. You'll get GPU limited way before you get any CPU limitation.
Nope, rt is very taxing on the cpu. A 3090 can get around 90 to 100 fps at 1440p dlss quality RT on. A 3600 cant even break 60 with those settings.

My 3700x with tuned ram ping pongs between 40 and 70 constantly
 
P cores are more efficient in performance per watt. E-cores are more efficient in performance per unit of die area. The thing is, when an application uses more than 8 P cores, it will most likely scale to n cores, in which case having the e-cores is the better option.


I honestly do not believe that Intel could have done it without the power consumption blowing up. I think you are saying they could have done it easily as a way to rub it in, because you think the e-cores are so good there was no need, which is just wrong.

If they could have easily done it, they would have. The e-cores and the hybrid arch are a nuisance and a problem for many people, despite many thinking they are good. So if Intel could have come up with a high-clocked 10 P-core Alder Lake without the thermal/power budget blowing up and without manufacturing costs getting out of hand, they would have done so in addition to the hybrid e-core variants, especially given, once again, that there are lots of people who hate the e-cores, would like a little more than 8 P cores, and go AMD instead. I think Intel is unable to, at least without severely crippling the clock speed, which would make it pointless even for those who hate e-cores and want more than 8 big cores, and that is quite a lot of people.

Therefore I think that if Intel could have easily made a 10 P-core Alder Lake part, they would have, in addition to the hybrid 8 P-core + 8 e-core part, to cater to both markets - if it could have been done cost-effectively and with reasonable thermals/power without crippling the 10 P-core part's clock speeds.
 