
Is more than 8 cores still overkill for high-end gaming at 1440p with an RTX 4090?

Hey fellas, maybe just step back and take a breath, things are heating up!

Only thing left to do is to drop the gloves :D
 
There is no sweet spot; it's a meaningless term since each person's needs are unique. I can easily make an argument for $150, $250, or $350 CPUs (or any other price point) as the "sweet spot" and won't be wrong, as long as the CPU's performance meets the needs in the sweet-spot example. Notice I talk about price and performance yet never mention cores.
Quite specifically for this topic, where the subject is 1440p high-end gaming (which I read as 'high refresh'), yes, there certainly is a sweet spot. And it's where most gaming happens: if you're down at 1080p your CPU needs probably won't be lower, and if you're on 4K they likely won't be higher.

And what's left beyond core count is IPC and frequency for gaming scenarios, where most CPUs are remarkably close to one another, and gen-to-gen increases aren't staggering enough to matter much over the course of at least 5 years if you stick to midrange.

Jumps in CPU performance requirements do happen on the gaming front, but they coincide with big console releases, and even then consoles hardly chase the cutting edge.

I get the quote you made here, but it still doesn't match the situation in practice.

All CPUs are iterative designs; each improves upon the last, so they're largely the same. With current APIs and the games that use them, core count goes a long way toward explaining the performance differences, and in practice that plays out between dual core and octa core. Those gaps are almost impossible to cover with other aspects of a CPU, like clocks or architecture (within reason). The CPUs within a generation's stack also don't deviate that wildly on their other specs: single-core frequency peaks at a very good level even for the lower parts in the stack, and the core design is the same. At the same time, games still want a fast single core for their baseline performance, and some even want several; core requirements are definitely not a misconception.

Core requirements become a misconception when you already have headroom on a lower core count. It's similar to RAM/VRAM: enough is enough, more is a waste, and too little tanks the performance hard.
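To make the "enough is enough" point concrete, here's a minimal C++ sketch of how a game-style job system typically sizes its worker pool. The cap of 8 and the "reserve one thread for the OS" rule are purely illustrative assumptions, not any particular engine's policy; once the machine has more logical CPUs than the pool will ever spawn, the extra cores simply sit idle.

```cpp
#include <algorithm>
#include <cstdio>
#include <thread>
#include <vector>

// Hypothetical sizing rule: derive the worker count from the detected core
// count, but clamp it to what the engine's job graph can actually feed.
unsigned pick_worker_count(unsigned hw_threads, unsigned engine_cap = 8) {
    unsigned usable = hw_threads > 1 ? hw_threads - 1 : 1; // leave a thread for the OS/driver
    return std::min(usable, engine_cap);
}

int main() {
    unsigned hw = std::thread::hardware_concurrency(); // logical CPUs reported by the OS
    unsigned workers = pick_worker_count(hw);
    std::printf("%u logical CPUs -> %u worker threads\n", hw, workers);

    // Spawning more workers than the cap would only add scheduling overhead.
    std::vector<std::thread> pool;
    for (unsigned i = 0; i < workers; ++i)
        pool.emplace_back([] { /* pull jobs from a shared queue here */ });
    for (auto& t : pool) t.join();
}
```

On a 32-thread CPU this sketch still spawns only 8 workers, which is exactly the "headroom already covered" situation described above.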
 
Ideally you'd want a 12 P-core, 12-thread chip with 2 cores always maintaining a 6 GHz boost, yes, nothing existing has those qualities. You don't want E-cores or CCDs or even a mesh interconnect. The rest of the cores could run at 4.5 GHz to save power.
 
I have not said one thing about cores, only performance. You also failed to post the very next line from the article:

It's also easier to dumb down system requirements to core count, because it's a quick way to dismiss a wide range of CPUs. For example, games no longer run properly, or at all, on dual-core CPUs, so in that sense you require at minimum a quad-core to game. Having said that, most modern and demanding games don't run well on quad-cores, even if they support SMT (simultaneous multi-threading). That sounds like I'm contradicting my own argument right off the bat, but once again, it's first and foremost about overall CPU performance.
Yes, because a minimum core count is the minimum prerequisite before you can even talk about overall performance nowadays; without it, games won't run well at all. You could theoretically condense the power of a quad core into a single-core chip, but it would still fail for a very basic reason: games today are written for multicore processors and expect at least a certain number of independent cores. If you tried to run multicore-optimized code on a single core, its pipeline would stall a lot, because much of the code depends on the results of other code, so you could never fill that theoretical chip's pipeline. That matters a lot, because a full pipeline means more code executed per cycle; feed it multicore-optimized code and you basically starve the pipeline. Or you end up trying to do four cores' worth of work in one pipeline, which simply doesn't fit; even if that code could enter the pipeline quickly, you'd need the ALUs, FPUs and other instruction logic quadrupled, and in the end you'd only have built one core with many internal bottlenecks. Your 1% lows will be terrible and the framerate will gyrate a lot during the game; that's why we benefited so much from more than one core. It's not just about overall processing power, it's also about a more efficient chip design overall.

Obviously, some compute tasks don't scale well across cores, but most code can and does, just not to infinity (mostly due to the nature of MIMD code, i.e. multiple instructions, multiple data). That's also why Intel switched to P-cores and E-cores: it works well for poorly scaling code and for well-scaling code, for poorly scaling code that needs many complex instructions, and for code that is light on instructions but simply depends on previous results. Again, not to infinity, because then you end up with a GPU, in other words a powerful SIMD processor that scales to many cores very easily but processes only one or very few different instructions, which is basically what many 3D workloads in games are: mostly instruction-light vector data, and mostly single-precision floating point at that.
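A tiny C++ sketch of the dependency point above: the first loop is a chain where every step waits on the previous result (extra cores or pipeline width can't help), while the second splits the same total amount of work into independent chunks that can actually spread across cores. The constants are arbitrary; it's only meant to illustrate the difference in structure, not to benchmark anything.

```cpp
#include <cstdint>
#include <cstdio>
#include <future>
#include <vector>

// Dependent chain: every step needs the previous result, so neither extra
// cores nor a wider pipeline can overlap the work.
uint64_t serial_chain(uint64_t x, int steps) {
    for (int i = 0; i < steps; ++i)
        x = x * 6364136223846793005ULL + 1442695040888963407ULL; // each multiply waits on the last
    return x;
}

// Independent chunks: no chunk depends on another, so the same total work can
// be spread across cores (or overlapped inside one core's pipeline).
uint64_t parallel_chunks(int chunks, int steps_per_chunk) {
    std::vector<std::future<uint64_t>> parts;
    for (int c = 0; c < chunks; ++c)
        parts.push_back(std::async(std::launch::async, serial_chain,
                                   static_cast<uint64_t>(c + 1), steps_per_chunk));
    uint64_t sum = 0;
    for (auto& p : parts) sum += p.get();
    return sum;
}

int main() {
    std::printf("chain : %llu\n", (unsigned long long)serial_chain(1, 100000000));
    std::printf("chunks: %llu\n", (unsigned long long)parallel_chunks(8, 12500000));
}
```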

Anyway, it's very simple: you can only talk about overall performance once you have enough cores to begin with, and don't expect that performance to scale with core count either (some games can utilize a lot of cores, others can't). More correctly stated: as long as you have enough cores, what you care about is the performance of each core the game can actually use, not of every core on the chip.

So why does TechSpot talk about cache? The reason is simple: the CPU core processes data and asks for data from RAM, but what decides which data gets fetched ahead of time is the branch predictor and the cache. Since the cache is very small (mostly for performance reasons, though cost plays a part too) and can't hold the whole game's code and data, which live in RAM, the branch predictor has to guess what will most likely be needed next. Most of the time it's right, but not always, and because it's right most of the time, the cache stays filled with what the CPU actually needs. Some working sets are simply too big to fit into cache, which costs CPU, cache and RAM cycles and therefore hurts performance, but most code fits and executes reasonably fast. Obviously a bigger cache raises the chances of the working set fitting, so you get better performance, but a big cache is often simply impractical: the extra die area increases chip cost, can mean lower performance, and can crowd out other, more important parts of the CPU. Intel and AMD don't scale the branch predictors, every core gets the same ones, but the cache (more precisely the L3 cache) is resized according to core count. So if you have more cores than a task like running a game can use, you still get more free L3 cache per active core, which can reduce the penalty of large working sets (or compensate for a bad branch predictor that makes the CPU fetch more data than needed). That's why you see somewhat better performance (in some instances a lot better, especially outside of gaming) between different core-count chips in the same game, even when not all cores are utilized.
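If you want to see the "working set falls out of L3" effect for yourself, a rough pointer-chasing sketch like the one below will do it; the sizes and step count are picked arbitrarily, and the exact break point depends on your CPU's cache sizes. The nanoseconds per access jump sharply once the working set outgrows L3.

```cpp
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

// Chase pointers through a working set in random order and time each hop.
// Once the set no longer fits in L3, most hops miss cache and pay a trip to RAM.
double ns_per_access(size_t bytes) {
    const size_t n = bytes / sizeof(uint32_t);
    std::vector<uint32_t> order(n), next(n);
    std::iota(order.begin(), order.end(), 0u);
    std::shuffle(order.begin(), order.end(), std::mt19937{42});
    for (size_t i = 0; i < n; ++i)                      // link the elements into one big random cycle
        next[order[i]] = order[(i + 1) % n];

    uint32_t idx = 0;
    const size_t steps = 20000000;
    auto t0 = std::chrono::steady_clock::now();
    for (size_t i = 0; i < steps; ++i) idx = next[idx]; // each load depends on the previous one
    auto t1 = std::chrono::steady_clock::now();
    volatile uint32_t sink = idx; (void)sink;           // keep the loop from being optimized away
    return std::chrono::duration<double, std::nano>(t1 - t0).count() / steps;
}

int main() {
    for (size_t mib : {1, 4, 16, 64, 256})
        std::printf("%3zu MiB working set: %.1f ns per access\n", mib, ns_per_access(mib << 20));
}
```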

As the article shows, you can compensate for a lack of L3 cache by running the chip at a higher clock speed. It works, but it doesn't eliminate the bottleneck of having too little cache; you still waste CPU cycles on empty or underutilized pipelines. So if you raise the clock speed, you end up burning more electricity to do a task that a bigger cache could have handled at a lower power level, not to mention that raising the clock isn't always possible for various reasons.
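A back-of-the-envelope way to see why clocks can't fully buy back cache: average memory access time only partly scales with frequency, because a DRAM miss costs roughly the same wall-clock time no matter how fast the core runs. The cycle counts, miss rates and DRAM latency below are made-up illustrative numbers, not measurements of any specific CPU.

```cpp
#include <cstdio>

// Rough average memory access time (AMAT) in nanoseconds.
double amat_ns(double ghz, double hit_cycles, double miss_rate, double dram_ns) {
    double hit_ns = hit_cycles / ghz;    // the L3 hit cost shrinks with clock speed...
    return hit_ns + miss_rate * dram_ns; // ...but a DRAM miss costs the same wall-clock time
}

int main() {
    // Smaller cache at a higher clock vs. bigger cache at a lower clock (assumed miss rates).
    std::printf("5.5 GHz, 10%% miss rate: %.1f ns per access\n", amat_ns(5.5, 40, 0.10, 80.0));
    std::printf("4.5 GHz,  4%% miss rate: %.1f ns per access\n", amat_ns(4.5, 40, 0.04, 80.0));
}
```

With these (assumed) numbers the slower chip with the bigger cache still comes out ahead on memory latency, which is the article's point about X3D-style parts.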

And so we end up with the TL;DR: you need a certain minimum core count (ignoring SMT, which merely helps fill gaps in the pipelines; it doesn't behave like a real core and doesn't let your CPU process more data), and you'd better have enough L3 cache for the intended task.

BTW, there's even more nuance to performance than this, but it's a reasonably short yet advanced description of what happens in a CPU while gaming.
 
Are there any games that actually meaningfully benefit from more than 8 cores? From my research most say no, or only an extremely rare few (like simulations where extra cores shorten turn times), or only if you're doing lots of streaming in the background. Some say games are starting to scale to as many cores as you can throw at them, though many disagree and say there's no proof of that: games are more threaded now, but only up to a certain number of cores and threads, and 8 is easily enough and will be for many years, because game code just can't be parallelized across lots more cores, which means games will stay limited to a handful of threads for years to come?
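For a rough sense of why returns diminish, Amdahl's law is the usual back-of-the-envelope: speedup = 1 / ((1 - p) + p / n), where p is the fraction of the frame that can run in parallel and n is the core count. With an assumed, purely illustrative 85% parallel fraction, going from 8 to 16 cores only adds about 25%, nowhere near 2x.

```cpp
#include <cstdio>

// Amdahl's law: speedup = 1 / ((1 - p) + p / n).
double amdahl(double p, int n) { return 1.0 / ((1.0 - p) + p / n); }

int main() {
    const double p = 0.85;                  // assumed parallel fraction, purely illustrative
    for (int n : {4, 6, 8, 12, 16, 24})
        std::printf("%2d cores: %.2fx speedup\n", n, amdahl(p, n));
}
```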

So are even 6 cores and 12 threads enough for high-end gaming with an RTX 4090 at 1440p, with 8 cores and 16 threads providing a little headroom? Or do any games actually start to benefit meaningfully from more than 8 cores? This would of course be with no streaming and no background tasks other than NOD32 AV, HWiNFO64, MSI Afterburner and the usual Windows services, on a Win10 install with the spyware shut down.

I do like the idea of future-proofing a bit, but I'm not at all a fan of Intel's E-cores. To me, the Intel Alder Lake and Raptor Lake parts are 8-core/16-thread CPUs with excellent P-cores, and with the E-cores shut off they become monster 8-core/16-thread gaming powerhouses.

Then you have AMD with the new Zen 4 Ryzen 7000 CPUs, which have made some good gains, but they run so hot. Plus, well tuned, they still seem a bit behind even Alder Lake in gaming, let alone Raptor Lake. Though you can get more than 8 strong cores? However, there are only 8 strong cores per CCD/ring, and I hear games are very latency sensitive. So for a game that scales beyond 8 cores (or 6 cores in the case of the 7900X, with its two 6-core CCDs), would there be a latency penalty from threads communicating across CCDs, causing a big dip in 1% and 0.1% lows? I hear it's an issue on the Ryzen 9 7900X and 7950X, but that it was fixed with the Ryzen 9 5000 series? Or was it only fixed by ensuring game threads stay on one CCD, so there's still a big hit if a thread has to hop over or talk to a core on the other CCD? Or is that not an issue at all? Obviously it isn't for productivity work, but for games it's a different animal, I hear.
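On the cross-CCD worry: you don't have to rely solely on the Windows scheduler; you can pin a game to one CCD yourself. Below is a rough Win32 sketch that confines the current process (and anything it launches, since children inherit the mask) to the first 12 logical CPUs. That mask is an assumption about a 7900X-style layout with SMT on and CCD0 enumerated first; check your own topology (Task Manager, CoreInfo, etc.) before using it.

```cpp
#include <windows.h>
#include <cstdio>

int main() {
    // Assumed layout: logical CPUs 0-11 = CCD0 (6 cores x 2 SMT threads).
    DWORD_PTR ccd0_mask = 0x0FFF;
    if (!SetProcessAffinityMask(GetCurrentProcess(), ccd0_mask)) {
        std::printf("SetProcessAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }
    std::printf("Pinned to CCD0; launch the game from this process tree.\n");
    // Task Manager's "Set affinity" or Ryzen Master's game mode get you the
    // same effect without writing any code.
    return 0;
}
```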

Then there are the Ryzen 7000 X3D chips coming out. Do you think those will hammer even a well-tuned 13900K with E-cores off and fast DDR5 in gaming, or will they trade blows?


Your thoughts.
Well, taking a brief glance at Tom's gaming CPU hierarchy (https://www.tomshardware.com/reviews/cpu-hierarchy,4312.html), the top 4 slots are occupied by 13th-gen Intel chips, though three of those entries are the result of overclocking some flavor of 13th-gen chip, and there is less than a 10% performance difference between the 5800X3D and the 13900K.

But while AMD chips have a max power draw of 240 W, Intel's limit is 350 W, and while AMD chips might see power usage jump over 100 W from time to time while gaming, Intel will pull around 200 W or more as a matter of course. A 10% performance gain, in some titles, in some circumstances, doesn't justify that kind of power consumption. If you want something right now, the X3D is the most cost-effective option.

As for the 8-core question: when it comes to gaming, clock speed matters a lot more than cores. The 7600X/7700X/5800X3D are the best options for gaming CPUs right now, but you would do well to hold off upgrading until the 7000X3D chips launch if you want more longevity for your money.
 
Modern games make heavy use of multithreading, so the more the better, especially if something is running in the background (updates, streaming, screen recording, etc.).
 
"Overkill" is entirely software dependent. There are some games that are completely insatiable.
 
I don't think hybrid architecture is really a key point. Reviewers have disabled P-cores or E-cores for benchmarking purposes and still run the same software.

P-cores/E-cores really became a thing because efficiency is valued by some.

Apple lives by the performance-per-watt mantra because most of their business is iPhone; over 85% of the Mac sales are from notebooks. That's why they were really the first to widely market a device with differentiated CPU cores (yes, in the A-series SoC for iPhone).

Many people here at TPU (and other PC sites for that matter) ignore the fact that enterprise computing is a major influencer in how PC hardware develops.

CPU core differentiation (performance and efficiency) is being driven largely by organizations who also value performance-per-watt. The US federal government has power efficiency mandates that extend to computer equipment. It's not Joe Gamer who wants E-cores, it's the General Accounting Office purchasing agent who needs 5,000 desktop PCs from Dell, HP, etc.

When the operating system supports it and the task scheduler is properly configured, workloads will be directed to the more appropriate silicon. Apple does this pretty well with iOS/iPadOS/macOS. I think I read somewhere that Apple claimed that their Blizzard (efficiency) cores provide something like 80% of the performance of the Avalanche (performance) cores at a fraction of the power. Maybe my figures aren't exact but that's the point. Most mundane workloads can be handled by efficiency cores; the performance cores are waiting for those rarer instances when the system needs as much performance as it can get.
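For what it's worth, the way this usually surfaces to software is just a hint: a thread declares how urgent its work is, and the scheduler decides which silicon to run it on. Here's a minimal macOS-flavoured sketch using Apple's pthread QoS API; it's illustrative of the idea, not a recipe, and the Windows analogue (EcoQoS via thread power throttling) works on the same principle.

```cpp
#include <pthread.h>
#include <pthread/qos.h>   // macOS-specific QoS API
#include <cstdio>

// A thread tags itself with a quality-of-service class; the scheduler is then
// free to park it on efficiency cores and clock it down.
void* background_work(void*) {
    pthread_set_qos_class_self_np(QOS_CLASS_BACKGROUND, 0); // "latency doesn't matter here"
    std::puts("indexing / syncing / housekeeping runs here");
    return nullptr;
}

int main() {
    pthread_t t;
    pthread_create(&t, nullptr, background_work, nullptr);
    pthread_join(t, nullptr);
    // Windows exposes a similar hint (EcoQoS via SetThreadInformation /
    // ThreadPowerThrottling) that the scheduler and Thread Director act on.
    return 0;
}
```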

Because of Intel's botched migration from their 14nm process node, their power consumption skyrocketed, which probably forced them to adopt P-cores and E-cores sooner. But don't worry, AMD will likely have to implement them at some point if they want to keep being competitive for enterprise sales.

And remember Datacenter customers are all about performance-per-watt.
I seriously doubt that AMD is going to hop on the big.LITTLE bandwagon, simply because they don't have to. The 13900K has a mix of 24 cores, and while Intel claims a 125 W TDP, during boost periods that chase that 5.8/6 GHz figure the things can siphon off 350 W. Now look at Threadripper: 64 cores/128 threads (with 96 cores/192 threads incoming) maintaining a power draw of 280 W, or 4.375 W per core (2.1875 W per thread). Intel had to add 50% more cores and double the power limit to gain 10% performance, in some situations, sometimes.


Looking back through the past 20 years, AMD has usually been content to release lower-performing parts to maintain efficiency, while Intel (and Nvidia for that matter) are more than happy to jack power consumption up simply to maintain their crown.
 
It's not really the number of cores that matters, it's the IPC/clocks etc., and you roll that into total absolute CPU performance.

Would you rather have a Ryzen 1700X or 7600X for gaming, despite one having 8 cores and the other having 6? Clearly the 7600X has a higher outright CPU performance for productivity, but also the IPC/clocks etc are vastly improved for gaming.

To give the 4090 room to actually stretch its legs, currently I'd want to be on at least a:

5800X3D
7600X or better if you desire
12400F or better if you desire

Ideally, a 13600K+ or wait for Zen 4 X3D imo
 
Obviously the iteration of architecture matters, but all other things being equal, if you have the choice between a 7950X and a 7700X that you intend to use only for gaming, right now you're better off going with the 7700X, because higher clocks on a single CCD are more valuable than more cores running at a lower speed with the added latency of the interconnect. There's already decent coverage of people seeing significant performance uplifts by disabling a CCD on their 79xx chips.
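If you want to gauge the interconnect penalty yourself, the classic trick is a core-to-core "ping-pong": two threads bounce an atomic flag and you time the round trip, once with both threads pinned to the same CCD and once across CCDs (the pinning is left to Task Manager or your OS's affinity tools here, to keep the sketch short).

```cpp
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

// Two threads pass a "turn" token back and forth through a shared atomic.
std::atomic<int> turn{0};

void ping(int me, int other, int rounds) {
    for (int i = 0; i < rounds; ++i) {
        while (turn.load(std::memory_order_acquire) != me) { /* spin until it's our turn */ }
        turn.store(other, std::memory_order_release);        // hand the token back
    }
}

int main() {
    const int rounds = 1000000;
    auto t0 = std::chrono::steady_clock::now();
    std::thread a(ping, 0, 1, rounds), b(ping, 1, 0, rounds);
    a.join(); b.join();
    auto t1 = std::chrono::steady_clock::now();
    double ns = std::chrono::duration<double, std::nano>(t1 - t0).count() / rounds;
    std::printf("~%.0f ns per round trip between the two threads\n", ns);
}
```

Same-CCD pairs typically show a noticeably shorter round trip than cross-CCD pairs, which is the latency the 1%-lows worry is about.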

While I'm sure there are exceptions (RTS titles, for example), unless you have a legitimate use outside gaming it's not worth spending extra for multi-CCD chips that push the core count past 8, at least on the AMD side.
 
What I've noticed too, mostly through Hardware Unboxed testing multiple generations of Ryzen, is that essentially from the bottom SKU to the top of a given generation, there's bugger all in it - with a few exceptions along the way, I suppose.

I had some more production-type work to do at the time, so I opted for a 5900X, but really I should have just bought a 5600X. For the useful lifespan of those products, from a gaming perspective, the extra cores just don't net you a boost anywhere near worth the extra $$ paid in games.

What held me back, and what I still don't like, is AMD charging what they charge at launch for their 6-core products and pricing them to make the upper SKUs feel like better value, but I s'pose that's life for companies driven by profit and shareholder contentment. AMD certainly haven't passed up the opportunity to be AM-gree-D.
 
Hi,
Consoles prove time and time again that you can game on a potato :laugh:

Shut up! We here like to throw money at problems, the more the better.
 
My uncle was one of the first people in the US to get a degree in microprocessor engineering. He worked at or with almost every big name in the industry over a span of nearly 50 years, sat on a bunch of boards that set standards for interfaces, and I was the only person in our family with any understanding of, or interest in, electronics and hardware (it was a real fun time when he was a VP at AMD and had access to their labs). But the most useful advice he imparted when it came to hardware was to wait a minimum of 6 months after a launch before buying anything. By that time the early adopters have done the legwork figuring out the binning and the idiosyncrasies in overclocking, and then all you have to do is look on eBay for the "golden" stepping of a given core, snap up the lowest-priced part that has the core type and count you're looking for... and then overclock the hell out of it until it's running faster than the expensive thing you really want.


When I went shopping for my new build last year, I was really tempted by the 5900X/5950X. I've done CAD work as a hobby for 25 years, and render time is by far my least favorite part of it, but unfortunately I haven't had as much time (or interest, tbh) to pursue it as I used to. When I have free time I tend to play games, and no other aspect of my work benefits from higher core counts, so I opted for a 5800X, specifically because the X3D wasn't going to have overclocking support. After toying with the PBO curves, my gaming clocks stay pretty steady between 4.75 and 4.9 GHz, and by the time I get around to upgrading in another year or two it will still be fairly snappy when it's passed down to my wife. Had pandemic prices not been a thing, I'm sure I would have made the same choice you did.

As for their pricing, yeah, it's a hard pill to swallow, hence my uncle's advice, but I've always looked at the overall cost of the platform and upgrade path to temper it. Both my wife and I were on AM3 for 8 and 6 years respectively; while the CPUs were upgraded a few times throughout those years, the motherboards were not. I don't even want to know how many socket "upgrades" Intel went through in that period. But I wouldn't call AMD "greedy" so much as say they're trying to capitalize while they can as they compete against a company with effectively infinite resources.

The difference I've always seen between Intel and AMD is this: Intel is great at iteration and marketing, refining a given technology to the absolute limit to push marginal performance upgrades and convincing consumers that "actually, it's the best possible outcome for our new chips to draw three times the power of the competition, because we've eked out UP TO a 10% performance uptick, sometimes!", while AMD is forced to innovate. The Socket A chips were heavily based on DEC Alpha's 64-bit RISC designs; they cranked those out to keep themselves afloat while working on x86-64. Phenom II and Bulldozer were the foundation of Ryzen/Threadripper and EPYC, which they began developing in 2007/08, around the same time they started work on MCM consumer GPUs. They don't have the luxury of churning out iterative crap paired with a new socket every 18 months; they're more worried about where the industry is going to be in 10 years' time.


None of that makes things more affordable, but it's at least a bit easier to swallow than the crap that Intel has been pulling for the past 25 years.
 