
Eight-Core CPUs Become the Most Popular Choice of PC Users, CPU-Z Stats Show

Well, whatever it is... it's just the natural course of events, isn't it? After the quad-core era, we had the hexa-core blip, and now it's an 8-core normalized market. It was really just long overdue... 4C8T had been around for a while.
I guess, although with E-cores, P-cores, cores with X3D and without, cores with multi-threading and without, it seems like a "core" is anything but normalized.
 
I guess, although with E-cores, P-cores, cores with X3D and without, cores with multi-threading and without, it seems like a "core" is anything but normalized.
Yeah, was it ever... the CPU core was always a wild mix of elements, instruction sets, and overall capability. That's not really new, either.
 
Two different issues (and not even comparable).
Some people use computers for things other than video games.

Sure, and those most likely need even more VRAM than gamers.
 
Sure, and those most likely need even more VRAM than gamers.
That's not necessarily the case either. Memory-intensive applications are only one class of programs, of which VRAM-heavy ones are a subset. The majority of applications, even in non-entertainment domains (or perhaps especially in those domains; gotta love the libs/techniques/algorithms still around from the Fortran days, lol!), don't even use GPUs beyond drawing UIs and perhaps some viewport rendering.
 
That's not necessarily the case either. Memory-intensive applications are only one class of programs, of which VRAM-heavy ones are a subset. The majority of applications, even in non-entertainment domains (or perhaps especially in those domains; gotta love the libs/techniques/algorithms still around from the Fortran days, lol!), don't even use GPUs beyond drawing UIs and perhaps some viewport rendering.
Okay, but anybody doing AI is screaming for VRAM, and the production people have been screaming for even longer. If you don't need VRAM, you're already well served by the market and have been for a long time; and if you don't use a GPU beyond "drawing UIs" and "viewport rendering", then what are we even talking about? You don't need a GPU. The point is that VRAM hasn't kept up compared to every other component in a PC.
 
Okay, but anybody doing AI is screaming for VRAM, and the production people have been screaming for even longer. If you don't need VRAM, you're already well served by the market and have been for a long time; and if you don't use a GPU beyond "drawing UIs" and "viewport rendering", then what are we even talking about? You don't need a GPU. The point is that VRAM hasn't kept up compared to every other component in a PC.
What I was talking about is that demand for better processors with higher core counts far outweighed (and still does, albeit not as much) the need for more VRAM, with the subtext that the limitations themselves aren't similar.

Some/many AI models may fall under the memory-intensive umbrella, but we are talking about a niche that is highly cloud-centric. People "doing AI" as in who? The rando trying to run some plagiarism-engine genAI or SAM locally? I'm with you. The LLM end user? Those run on their own accelerators (or connect to the cloud), AFAIK. The researcher/developer? They're mostly on Colab. The people who do training? Yeah, I think they are looking at things at a different scale...

Meanwhile, look at any random, locally-running, number-crunching, production/engineering/scientific/whatever application, and you'll very likely find it recommending as many cores as you can throw at it, as has been the case for decades.
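To make that concrete, here is a minimal, generic sketch (not taken from any particular application; the crunch() function is just a placeholder kernel) of why those recommendations scale with core count: a CPU-bound job split across a process pool finishes roughly N times faster on N cores, memory bandwidth permitting.

    # Illustrative only: a CPU-bound workload split across a process pool
    # so it can use every available core.
    from multiprocessing import Pool, cpu_count

    def crunch(n):
        # stand-in for a number-crunching kernel (simulation step, solver, etc.)
        total = 0
        for i in range(n):
            total += i * i
        return total

    if __name__ == "__main__":
        work = [5_000_000] * 16                      # 16 independent work units
        with Pool(processes=cpu_count()) as pool:    # one worker per logical core
            results = pool.map(crunch, work)
        print(sum(results))

More cores simply means more of those independent units running at once, which is exactly what the "use as many cores as you can" recommendations are counting on.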
 
Some/many AI models may fall under the memory-intensive umbrella, but we are talking about a niche that is highly cloud-centric. People "doing AI" as in who? The rando trying to run some plagiarism-engine genAI or SAM locally? I'm with you. The LLM end user? Those run on their own accelerators (or connect to the cloud), AFAIK. The researcher/developer? They're mostly on Colab. The people who do training? Yeah, I think they are looking at things at a different scale...
More people than you'd think. When I say "doing AI", I generally mean users buying consumer GPUs. The freedom and privacy of a local setup can far outweigh the benefits of any cloud solution for a normal user; otherwise you would have seen small creators in the production space move over to render farms ages ago. Training SD LoRAs and finetuning LLMs is absolutely possible on consumer hardware. As for researchers, I thought they were training tiny models like 1.5Bs or so; I personally don't really get how Google Colab works or why I'd ever use it.
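For a rough sense of why consumer cards can handle this, some back-of-envelope VRAM math (the parameter count and overheads below are illustrative assumptions, not vendor figures):

    # Rough, illustrative VRAM estimate for model weights only (no KV cache,
    # activations, or optimizer state): params * bytes-per-param.
    def weight_vram_gb(params_billions, bytes_per_param):
        return params_billions * bytes_per_param     # 1e9 params * bytes ~= GB

    for label, bpp in [("FP16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
        print(f"7B weights @ {label}: ~{weight_vram_gb(7, bpp):.1f} GB")
    # ~14 GB, ~7 GB and ~3.5 GB respectively; add overhead for the KV cache and
    # (Q)LoRA adapters, which is why 4-bit finetunes of ~7B models are commonly
    # reported to fit on 12-16 GB consumer cards.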
Meanwhile, look at any random, locally-running, number-crunching, production/engineering/scientific/whatever application, and you'll very likely find it recommending as many cores as you can throw at it, as has been the case for decades.
FreeCAD says hello.
 
Okay, cool.

Now let's get 16 cores at the same price points so we can progress, or will we be stuck for 10 years like with Intel? I don't see a third player shaking up the market.
 
Core count is a lot of things, though. There are a lot of applications that could make better use of your 5800X versus your 7600X.
Tech people talk about performance; amateur fanboys talk about cores.
 
More people than you'd think. When I say "doing AI", I generally mean users buying consumer GPUs. The freedom and privacy of a local setup can far outweigh the benefits of any cloud solution for a normal user,
I highly doubt something that has both a high operational cost and a relatively high barrier to entry (i.e. technical knowledge) would be that widespread, especially when it has little beyond entertainment value (as far as LLMs and GenAI in general go, at least).

And cloud rendering services are a dime a dozen these days. But rendering, unlike pip installing some genAI crap, often has economic value, and no "free" cloud service exists for it.

FreeCAD says hello.
FreeCAD needs an entire separate computer to handle all the googling of how to fix its issues and do even basic CAD ops. :roll:

Now let's get 16 cores at the same price points so we can progress, or will we be stuck for 10 years like with Intel? I don't see a third player shaking up the market.
That should have been Qualcomm (and Apple, if you were to look past their exclusivity), but they themselves serve as an example that we are not heading in the direction you (and I) are hoping for. Marketing and development are shifting from actual processing performance to those "NPU" gimmicks.
 
Core count is a lot of things, though. There are a lot of applications that could make better use of your 5800X versus your 7600X.

I highly doubt it. I always check my previous 2x 5800X vs. my current 7600X; I think I know most of the benchmarks by now.
I'm too lazy. I could even get a Ryzen 7700 tray and sell my 7600X for a price difference of 40€, but I do not need any more performance as of now; the 2x 32 GiB DRAM and the Radeon 7800X are what limit my gaming experience. For GNU Gentoo Linux, the two months in 2023 showed me that a Ryzen 3 3100 is enough with 1x 8 GiB 3200 MT/s DDR4 on B550. I highly doubt I will find any games or any cheaper graphics cards in the next 2-3 years that motivate me to upgrade (KCD 2, maybe).

Gentoo compiles everything from source code, so I have seen the differences from hardware changes, upgrades, or sidegrades since 2006 (see the make.conf sketch below).
5800X = sidegrade = nearly equal = 7600X
7500F / 7600 / 7600X = buy the cheapest and be happy (if you do not care about the integrated graphics)
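For context, the core-count benefit on Gentoo mostly comes down to how many parallel build jobs Portage is allowed to run; an illustrative /etc/portage/make.conf snippet (values are assumptions, not an actual config):

    # /etc/portage/make.conf -- illustrative values, not an actual config
    MAKEOPTS="-j8"                                     # parallel compile jobs, roughly one per core
    EMERGE_DEFAULT_OPTS="--jobs=2 --load-average=8"    # build packages in parallel too

A higher core count mainly buys you a higher -j value and shorter emerge times.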

I always read about Microsoft Flight Simulator and maybe those big city-builder games, which benefit from high core counts and large 3D V-Cache.
 
As someone who has built many gaming-focused PCs for myself or with friends, 8 cores is the obvious choice. Consoles have had 8 cores for generations now too, and fewer cores tend to introduce micro-stutters and such in demanding games, especially multiplayer. It's also better to slightly over-spec the CPU, especially 8c over 6c, since it's far easier to upgrade and sell your used GPU (a 2-minute swap) than to upgrade a CPU, with repasting or a new mobo/RAM.

For example, a 9700X is $280 at MC; hard to argue with that, and other than Zen 5's weirdly high idle power consumption, it's an excellent option.
 