
Zen 5 is only 16 cores.

Absolute nonsense. That might be true for some games, or for games from 7 or 8 years ago, but not for many modern titles. Granted, a 16-core CPU is a bit overkill for gaming, but a 6-core should be considered the starting point if 1440p is the target resolution, and 8-core CPUs should be the target at 2160p.
I haven't seen any evidence that resolution factors into CPU calculations. If this were true, we would see low-core-count CPUs struggle by more than 15% going from an i3 to top tier. The most logical reason for that discrepancy is that the core frequency isn't the same and the CPU generation differs (IPC, clocks). I'd wager that with the same parameters and just fewer cores, the FPS would be the same until you hit that magical 4-core count or below.

Now, I haven't tested this with a billion games, so outliers may exist, or it may vary per game engine.
 
I haven't seen any evidence that resolution factors into CPU calculations. If this were true, we would see low-core-count CPUs struggle by more than 15% going from an i3 to top tier. The most logical reason for that discrepancy is that the core frequency isn't the same and the CPU generation differs (IPC, clocks). I'd wager that with the same parameters and just fewer cores, the FPS would be the same until you hit that magical 4-core count or below.

Now, I haven't tested this with a billion games, so outliers may exist, or it may vary per game engine.
A lot of games do scale with core count though: TLOU, Cyberpunk (especially with RT/PT), Spider-Man, Warzone, etc.

Just an example: 8 cores/16 threads plus 8 E-cores vs. a normal 6-core/12-thread CPU. Massive differences, right? Around 40 FPS in the averages and 80-90 FPS at the 0.2% lows.


 
I haven't seen any evidence that resolution factors into CPU calculations. If this were true, we would see low-core-count CPUs struggle by more than 15% going from an i3 to top tier. The most logical reason for that discrepancy is that the core frequency isn't the same and the CPU generation differs (IPC, clocks). I'd wager that with the same parameters and just fewer cores, the FPS would be the same until you hit that magical 4-core count or below.

Now, I haven't tested this with a billion games, so outliers may exist, or it may vary per game engine.
This is true, but it is also true that reviews tend to have the game, and only the game, running, whereas most people will have other programs running concurrently on a likely less-than-tuned OS.

AFAIK the tests are air-gapped too, so there's no network traffic or variability in game/software versions etc., but also no additional load from that.

The point I'm making is that single-player, offline, clean-OS testing is a little less representative of the average gamer (online, background tasks, less-than-clean OS). But it can provide insight into what resources a game needs in a vacuum.

In my experience, eight cores without SMT/HT is enough: six for the game, two for everything else.

The only way to use a dual-CCD chip for gaming is to pin the game to one CCD and everything else to the other. Using both for the game will result in worse performance. It's a similar story for P+E cores, though there's no latency penalty for moving off-die in that situation.
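If you'd rather script that pinning than click through a GUI, here's a rough sketch with Python's psutil; the process name and core list are placeholders (it assumes the first CCD maps to logical CPUs 0-7, so check your own chip's topology and SMT layout first):

    # Hypothetical sketch: pin the game to the first CCD and leave everything else alone.
    import psutil

    GAME_EXE = "game.exe"        # placeholder process name
    CCD0_CPUS = list(range(8))   # assumed logical CPUs of CCD0; adjust for SMT/your chip

    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] and proc.info["name"].lower() == GAME_EXE:
            proc.cpu_affinity(CCD0_CPUS)   # restrict the game to one CCD
            print(f"Pinned PID {proc.pid} to {CCD0_CPUS}")

Unlike a Process Lasso rule, this is a one-shot change: it won't reapply if the game restarts.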

The final thing I'd like to point out is that a huge number of people have a 60/120/165 Hz monitor and are almost never CPU-limited in the first place. Try 240/360 Hz with a non-entry-level GPU, and they might start appreciating the actual differences in performance between CPUs.
 
This is true, but it is also true that reviews tend to have the game, and only the game, running, whereas most people will have other programs running concurrently on a likely less-than-tuned OS.

AFAIK the tests are air-gapped too, so there's no network traffic or variability in game/software versions etc., but also no additional load from that.

The point I'm making is that single-player, offline, clean-OS testing is a little less representative of the average gamer (online, background tasks, less-than-clean OS). But it can provide insight into what resources a game needs in a vacuum.

In my experience, eight cores without SMT/HT is enough: six for the game, two for everything else.

The only way to use a dual-CCD chip for gaming is to pin the game to one CCD and everything else to the other. Using both for the game will result in worse performance. It's a similar story for P+E cores, though there's no latency penalty for moving off-die in that situation.

The final thing I'd like to point out is that a huge number of people have a 60/120/165 Hz monitor and are almost never CPU-limited in the first place. Try 240/360 Hz with a non-entry-level GPU, and they might start appreciating the actual differences in performance between CPUs.
I'm running a very barebones, stripped Windows install, and 6c/12t is still pretty bad in, e.g., TLOU. I'm sure the same will happen in CP2077 with PT on; in the heavy areas of the game, performance will drop like a rock.
 
I'm running a very barebones, stripped Windows install, and 6c/12t is still pretty bad in, e.g., TLOU. I'm sure the same will happen in CP2077 with PT on; in the heavy areas of the game, performance will drop like a rock.
I had no issues running TLOU above 90 FPS for its entire playthrough with maxed settings. I use Process Lasso, so the game only saw six threads.

You need a 4090 to get 120 FPS anyway.
 
I had no issues running TLOU above 120 FPS for its entire playthrough with maxed settings. I use Process Lasso, so the game only saw six threads.
I'm sure it can; mine also averaged above 120, but the fully enabled chip was hitting 180+. And especially in the 1% and 0.2% lows there was a huge discrepancy. You can see the videos above: the 6c/12t dropped to the 40s in terms of lows, while the full chip was at 140.

You can try Tom's Diner in CP2077 with RT enabled; 6 cores will just choke.
 
I'm sure it can; mine also averaged above 120, but the fully enabled chip was hitting 180+. And especially in the 1% and 0.2% lows there was a huge discrepancy. You can see the videos above: the 6c/12t dropped to the 40s in terms of lows, while the full chip was at 140.
This is contrary to TPU testing.


The game is GPU limited at every point.
 
This is contrary to TPU testing.

This is a GPU test?

The game is GPU limited at every point.
Yes, at 1080p it is with a high-end CPU. I'm using DLSS to make it CPU-bound.
 
Right, you don't know because you only have 6 cores.
Seeing usage on more than 6 cores doesn't mean the game benefits from, or even uses, more than 6 cores. Threads can be swapped between cores.
We also see all cores/threads used on high-core-count CPUs in other games; that doesn't mean it makes a sliver of a difference. It could, but unless you have truly tested this, you can safely count on it being irrelevant.
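If anyone wants to at least watch what's happening, here's a quick sketch that samples per-core load with psutil (sample count, interval and the 50% threshold are arbitrary); keep in mind it only shows where load lands, not whether the extra cores actually buy FPS:

    # Sample per-core utilization for ~30 seconds while the game runs.
    import psutil

    for _ in range(30):
        per_core = psutil.cpu_percent(interval=1, percpu=True)  # blocks ~1 s per sample
        busy = sum(1 for p in per_core if p > 50)
        print(f"{busy} cores >50% | " + " ".join(f"{p:5.1f}" for p in per_core))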

As an outsider looking in, I dislike the division it causes. A CPU specifically made just for games, but it can do other stuff ok. I would prefer a CPU that can do it all, just like what we had before :D

As an insider who owns one, I dislike how locked down the CPU is, as I prefer to tune manually. If I wanted a Dell I would have bought one :)

Outside of that, it isn't a bad CPU, and it's pretty easy to cool. And the performance at lower resolutions is pretty good.
But there is nothing an X3D can't do. It's just not as fast as a higher-core-count chip at highly parallelized work. So what? The vast majority of consumers never even get close to that kind of workload, and if they do, it's run in data centers / the cloud. I don't see how the X3D is any different from a lower-core-count CPU in any company's lineup. It just has the added advantage of higher gaming performance at superb efficiency. Also, the non-X3Ds haven't become worse at those workloads either.
 
Seeing usage on more than 6 cores doesn't mean the game benefits from, or even uses, more than 6 cores. Threads can be swapped between cores.
We also see all cores/threads used on high-core-count CPUs in other games; that doesn't mean it makes a sliver of a difference. It could, but unless you have truly tested this, you can safely count on it being irrelevant.
FWIW, Linux etc. have a load average number that would put this question to rest very quickly: if it grows above your core count, you are core-count bound, among other things. I don't know of anything directly comparable in the Windows ecosystem. Someone could benchmark this with a suitable setup.
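On the Linux side that check is basically a one-liner; a minimal sketch (Unix-only, since os.getloadavg() isn't available on Windows):

    # Compare the 1-minute load average to the core count.
    import os

    load1, load5, load15 = os.getloadavg()   # Unix-only
    cores = os.cpu_count()
    verdict = "likely core-count bound" if load1 > cores else "has headroom"
    print(f"load1={load1:.2f} across {cores} cores -> {verdict}")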
 
We know that the X3D chips are in fact more efficient because when you set the scheduler to prefer cache their efficiency increases further:

(Attachment: efficiency comparison with the scheduler set to prefer cache)

Therefore the cache itself does improve efficiency and not just the frequency limits imposed on X3D processors.

I'm going to make the guess that the CPU simply can't clock as high, lowering the power consumption in the process.
Even though it's more efficient, that is likely due to the probable lower frequency. The difference between the 7700X and the non-X kinda shows this too: a 5% performance loss for an 8 W (20%) power reduction.
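Quick back-of-the-envelope on those quoted deltas (the ~40 W figure is just derived from "8 W = 20%", not measured): losing 5% performance while cutting 20% of the power still works out to roughly 19% better perf/W, so a lower power/frequency cap alone can explain a decent chunk of the efficiency gain.

    # Rough perf-per-watt comparison using the quoted deltas (illustrative, not measured data).
    perf_x,  power_x  = 1.00, 40.0   # 7700X baseline
    perf_nx, power_nx = 0.95, 32.0   # ~5% slower, ~8 W (20%) lower power
    gain = (perf_nx / power_nx) / (perf_x / power_x) - 1
    print(f"~{gain:.0%} better perf/W")   # ~19%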
No, it was just never a part of the conversation. It's illogical to imply that everything not discussed is purposefully being ignored.
$100 more expensive; whether that's significant is subjective.
Which is irrelevant, because my argument was about the objective merits of the product. Subjective, ever-shifting metrics aren't really interesting, particularly on the internet, where those metrics will change based on whatever argument the person wants to support.
Mentioning good points of the CPU to imply it's better than what someone thinks, while not mentioning the most obvious downside, is ignoring it, regardless of whether it's on purpose or not.
Realistically it was more than $100, as the 7950X wasn't being sold for MSRP by the time the X3D version came to market. But anyway, how significant the difference is, is subjective; it is in fact more expensive.
While some merits can be objective, any conclusion about a product as a whole is subjective; after all, performance greatly varies depending on the tests done and how they are done, on top of market conditions and available options, all of which greatly change the conclusions.

I think the X3D CPUs can be great for many people, particularly in their own niche, but they have their own set of drawbacks, and as someone who doesn't care about any tasks that benefit from the extra cache, I would never buy one and would just ignore it. Different from the opinion of the user that started the discussion, I like having more options and think overall it's a good thing, even if, like in this case, it's irrelevant to me.
 
what game(s) makes use of 16 cores? :rolleyes:
Compiling shaders, plus installing/decompressing games.

As the magnanimous Lisa said, "Software still lags behind hardware. We offer more cores according to what they can keep up with."
 
what game(s) makes use of 16 cores? :rolleyes:
Ashes of the Singularity is the only one I can think of, and it isn't even designed for 16-core use, but it could put them to use because it was designed with heavy multi-threading in mind.
I guess point taken; 16 cores is overkill for today's game market.
 
This is a GPU test?
So is this:
This is what an actual CPU-limited game looks like. Most high-end cards are bunched up at the top.
TLOU is not like that. In every practical scenario the game will be GPU-limited. I suppose a bottleneck can be induced via DLSS; not sure why you would want to, though.
 
So is this:
This is what an actual CPU-limited game looks like. Most high-end cards are bunched up at the top.
TLOU is not like that. In every practical scenario the game will be GPU-limited. I suppose a bottleneck can be induced via DLSS; not sure why you would want to, though.
Every game can become CPU-limited by dropping the resolution. I showed some video results of what happens in TLOU with 6-core CPUs.
 
I haven't heard of GTA V scaling past 6 cores, although I'm running off what info I can find on the internet. So if it does, I haven't read that it does anywhere.

Here, I made a screenshot while running GTA V; as you can see, even the 4 E-cores are helping.

(Screenshot: GTA V per-core usage, including the E-cores)


how would anyone know if it's yet to be released on PC?

GTA V PC was released in April 2015.

GTA 6 for PC will probably be released sometime in 2026.


Years ago, @CAPSLOCKSTUCK already proved that GTA V was using all his Xeon cores.

So , there you have it, once again...
 
Every game can become CPU-limited by dropping the resolution. I showed some video results of what happens in TLOU with 6-core CPUs.
Congratulations, you've proven that when a game is made artificially CPU-bound it performs better with more CPU cores.

SHOCKER.

Do you have any other pearls of wisdom you'd like to share? Like, say, the colour of the sky; or what bears do in woods?
 
I haven't seen any evidence that resolution factors into CPU calculations.
It's a known factor; ignoring it will not make it any less factual.
Now, I haven't tested this with a billion games, so outliers may exist, or it may vary per game engine.
We don't need to test a billion games; that's already been done. All we need to do is understand what the numbers tell us. And the numbers for more games than not show very clearly that many games are utilizing more than 2 or 4 cores. Some can take advantage of as many as 12, though most current titles are happy with 8 cores as long as the CPU is modern (2018 onward). Older CPUs suffer a bit. So a Xeon W3680, for example, a 6-core, is not going to cut it for modern games, and even a Ryzen 1600X is going to be a bit dated for some newer titles, while a 3600X would be enough for most.

Years ago, @CAPSLOCKSTUCK already proved that GTA V was using all his xeon cores.
I can't remember what thread that was, but this sounds familiar. And yeah, this was one of those games that use more than 4 cores to good effect, especially when you turn the settings up.

Every game can become CPU-limited by dropping the resolution. I showed some video results of what happens in TLOU with 6-core CPUs.
This is part of what I was talking about.
 
Congratulations, you've proven that when a game is made artificially CPU-bound it performs better with more CPU cores.

SHOCKER.

Do you have any other pearls of wisdom you'd like to share? Like, say, the colour of the sky; or what bears do in woods?
Huh? How is that obvious? When people say more cores don't make a difference, they don't usually mean "because you are GPU-bound"; what they mean is that the game doesn't scale with more cores. I showed otherwise. The point is that some games can't utilize more than 4 or 6 cores and are ST-bound, while some games (like TLOU, which I used) are actually bound by MT performance, i.e. the number of cores.

But even if I had just used 1080p or even 1440p resolution, the 6-core would suffer greatly in the 0.1% lows, as already shown.
 
maybe a 4c/8t runs it just fine.

Also depends on the GPU of course.

In my case it didn't run fine previously: I had an i7 6700K @ 4.5 GHz and it was bottlenecking my 2070 Super in GTA V.

Since I've been running an i7 12700K @ 5 GHz, there's no more bottlenecking.

@1440p high-refresh that is.
 
Realistically it was more than $100, as the 7950X wasn't being sold for MSRP by the time the X3D version came to market. But anyway, how significant the difference is, is subjective; it is in fact more expensive.

No, this is more you cherry-picking the SKU and time frame you want to represent X3D. First off, what relevance does temporary launch-scarcity pricing have to do with the pricing of a product overall? It'd be one thing if that pricing stuck for the majority of the product's lifetime; at least then you could claim that was effectively the price. That isn't the case here, though: for the majority of its life the 7950X3D has been at its MSRP. In essence you are trying to take a small period of a product's aftermarket pricing history and implying that it somehow represents all X3D CPUs. That's nonsense. Second, why out of all the X3D CPUs would you choose the 7950X3D? You are trying to represent a class of products with the less common SKU, so I do believe you have some explaining to do as to why that wasn't just a blatant attempt to cherry-pick.

Mentioning good points of the CPU to imply it's better than what someone thinks, while not mentioning the most obvious downside, is ignoring it, regardless of whether it's on purpose or not.

You mean like intentionally ignoring other X3Ds, or excluding a large period of a product's life, as you did above?

Valve was never part of the original conversation. You are jumping past so many other potential reasons to come to this conclusion. Again, simply because something isn't mentioned doesn't mean it's being ignored. By saying something is being ignored, you are implying intentional exclusion; in this case you are assuming my intent, and I've already explicitly told you that assumption was wrong. Just because you do something, don't automatically assume someone else does.

While some merits can be objective, any conclusion about a product as a whole is subjective; after all, performance greatly varies depending on the tests done and how they are done, on top of market conditions and available options, all of which greatly change the conclusions.

Conclusions are subjective; the degree to which depends on the exact conclusion. Performance data is not. You should not see large performance differences if you replicate the exact test setup and settings used in reviews. The use of said data could be considered somewhat subjective, though, as in how applicable it is at large or how useful said data is (its comparability to real-life scenarios, for example).

I think the X3D CPUs can be great for many people, particularly in their own niche, but they have their own set of drawbacks, and as someone who doesn't care about any tasks that benefit from the extra cache, I would never buy one and would just ignore it. Different from the opinion of the user that started the discussion, I like having more options and think overall it's a good thing, even if, like in this case, it's irrelevant to me.

"drawbacks"?

Thus far you've only pointed out one thing you think is a drawback. There seems to be a mythos here where higher-than-normal prices during a full moon on the summer solstice turn into a three-headed hydra before our very eyes.
 