But we are also in an era where the highest-core-count SKU on the mainstream platform is also the fastest in ST. I don't understand why I'm still seeing so many people acting as if we're still in the X99/X299 era, where clock speeds tanked hard starting from six cores... But I also don't understand why the top SKU having more cores than most people need seems to be a bother. Just buy a Core i5 with 8 cores for cheaper. Or is there some kind of social status stigma about not having a Core i9?
The value proposition of the "i5"/"i7"/"i9" tiers has varied from generation to generation, but my recommendation is usually for demanding users to go for the highest tier of CPU core performance "within reason", as what makes the CPU "long lived" for normal use is actual core speed, whether it's gaming, productive applications or just heavy web browsing. When the CPU is 3-5+ years old, you'd rather have a slightly faster CPU than more cores for general use. If more cores could offset slower cores, we would all buy used 60+ core Xeons for "future proofing".
Now, if the Arrow Lake refresh has an Ultra 5 245K with more decent clocks, it should be a top seller. But at the moment, the recommendation would probably be the 265K.
No, it's because of the diminishing returns of just adding more cores without additional bandwidth. Especially with the marketing tactic of throwing more E-cores on, it seems like more of a stopgap than any real performance increase.
Most of the focus in the media is on either special benchmarks or synthetics. Like, how many have chosen a CPU based on a Cinebench score without having the faintest idea what it's actually for? (It's actually a very niche application.)
As you point out, most loads that properly scale across many cores also need memory bandwidth, which is why I've often called the 12/16 core Ryzen chips "benchmark chips", as they make little sense in real world use. Proper workstation chips also have lots of thermal headroom, and can sustain (mixed) load on many cores without throttling as much. Most real workstation use is a mix of loads: some threads with high, some with medium and some with low load, and such workloads are hard to benchmark fairly. Even most productivity benchmarks only capture the batch part of the job, not the interactive part you use 99% of the time.
Normies won't have a use for that, unless they're 7-Zipping all day long...
At least Linux users get to enjoy it more and more; both encryption and now CRC are accelerated with AVX-512, which at the very least gives a bit of extra free performance for many workloads.
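For anyone curious whether their own chip even exposes those instructions, here's a rough little C sketch using GCC/Clang's __builtin_cpu_supports(). Which exact AVX-512 subsets the kernel's CRC/crypto paths actually key off is my assumption here, not something stated above; this just reports what the CPU advertises.

```c
/* Minimal sketch (not from the posts above): report a few AVX-512 feature
 * flags as seen by the compiler's CPU detection. Build with gcc or clang:
 *   cc -O2 avx512_check.c -o avx512_check && ./avx512_check
 */
#include <stdio.h>

int main(void)
{
    /* Ensure the CPU feature model is initialized before querying it. */
    __builtin_cpu_init();

    /* These are common AVX-512 subsets; whether a given kernel code path
     * requires exactly these is an assumption for illustration. */
    printf("avx512f : %s\n", __builtin_cpu_supports("avx512f")  ? "yes" : "no");
    printf("avx512vl: %s\n", __builtin_cpu_supports("avx512vl") ? "yes" : "no");
    printf("avx512bw: %s\n", __builtin_cpu_supports("avx512bw") ? "yes" : "no");

    return 0;
}
```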