In 1440p, which is the most relevant for flagships as it's either native or upscaled to 2160p
1440p is neither the most prevalent nor the most relevant resolution for flagships; those are two separate variables. High frame rate games, such as e-sports titles, are typically played on 1080p high refresh rate monitors, with flagship and non-flagship CPUs and GPUs alike. It's always good to be aware of this and not assume that specific resolutions map onto specific tiers of processors or graphics cards. Sure, better CPUs and GPUs will produce better results on higher resolution displays, but that's not the point here. The point is the three ways in which a CPU can contribute to gaming and be better or worse than others:
1.
'Floor tests': A CPU's contribution to gaming is established in 720p/1080p tests, regardless of the resolution people actually play at. Testing at native 720p/1080p is fundamental, as it tells us the extent to which a CPU could maximally contribute. Also, over 60% of the global PC population still games on 1080p displays. Those floor metrics are still very much relevant and are not becoming obsolete any time soon.
2.
Lower settings: Higher resolutions, 1440p/4K, are more GPU-bound in general, so the CPU's impact will naturally be lower. In 4K, the top 30 CPUs are within 5-6% of each other, which is not surprising. A CPU can still contribute more at higher resolutions if game settings are lowered from Ultra to High/Medium: reducing the demands on the GPU leaves more work for the CPU. That's another reason why CPUs are tested at lower resolutions, to see the extent of their possible contribution to gaming, no matter which resolution gamers use, with or without upscalers.
3.
CPU-intensive games: In addition, some games are more CPU-intensive than others, which is another factor to consider during testing; reviewers need to select a fairly balanced share of such games to show us a CPU's contribution.
Under those scenarios, 720p/1080p native, lower graphics settings at higher resolutions, and CPU-intensive games, X3D CPUs will definitely do the job better, on average. The extra L3 cache has been identified as the driver of better performance in games that benefit from it.
It's more than that: ~20% in 1080p and ~25% in 720p.
The new AMD Ryzen 9 9950X3D brings Zen 5 with 3D V-Cache to the high-end. This new $700 flagship offers the best application performance, beating even the 9950X, and at the same time you get a fantastic gaming experience that's better than any other non-X3D processor on the market.
www.techpowerup.com
If the 10800X3D were 9% better than the 9800X3D, would that be plausible? Too little? How much will the core increase from 8 to 12 contribute, besides IPC and possibly higher frequency? That would make the 10800X3D 25% better than the 285K.
I expect the '10800X3D' to be much faster than 9% over the 9800X3D. Waaay more, ~30%: new node, more 3D cache, higher clocks, more cores, etc. The current gap between the 9800X3D and Arrow Lake is already ~20%, depending on measurements. The gap between the 10800X3D and the 285K, or a '385K' refresh, will be larger than that. You can see now what a difficult task Nova Lake CPUs have ahead of them: they would need to lift gaming performance by ~50% compared to Arrow Lake in order to beat Zen 6 X3D CPUs in gaming.
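The ~50% figure comes from compounding the two gaps multiplicatively rather than adding them. A minimal sketch of that arithmetic (the function name is my own, and the input figures are the speculative percentages from this thread, not measurements):

```python
def compound_gap(*gaps):
    """Combine relative performance gaps multiplicatively.

    Each gap is a fraction, e.g. 0.20 for "20% faster".
    compound_gap(a, b) = (1 + a) * (1 + b) - 1
    """
    total = 1.0
    for g in gaps:
        total *= 1.0 + g
    return total - 1.0

# Speculative inputs: ~30% uplift of a '10800X3D' over the 9800X3D,
# stacked on the ~20% gap the 9800X3D already holds over Arrow Lake.
gap = compound_gap(0.30, 0.20)
print(f"{gap:.0%}")  # prints "56%"
```

Note that simply adding 30% + 20% would understate the combined gap; the multiplicative form is why the thread rounds it to "~50% or more".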
When the 5800X3D launched in 2022, it was on par in gaming with the 12900K (see the TPU review). Since then, the next two generations of X3D CPUs have widened the gap over Intel's CPUs, more each time: the 7800X3D was faster than Raptor Lake by high single digits, and the 9800X3D is faster than Arrow Lake by double digits. Intel will need to pull a rabbit out of a hat to close the ever-widening gap in gaming performance; they are falling further behind, generation after generation. They will need to sort out inter-tile latency to improve memory performance, fix SSD performance for the DirectStorage feature (currently Arrow Lake CPUs throttle Gen5 SSD speeds by 2 GB/s, which was measured, published, and acknowledged by Intel), and offer another layer of cache or more L3 cache.
The 4 extra cores on the Zen 6 R7 will contribute in games that benefit from more than 8 cores, lifting the overall CPU contribution in a 40-50 game test suite by a few fps on average. The alleged uplift of the 3D cache die to 96 MB will further benefit games that can use more cache. Higher clocks too, and the new lower-latency Infinity Fabric. All those features will add a few percentage points to fps numbers in different games. Not every game will benefit from all of those features at once, but each game will benefit from a few, forming an overall uplift.
But that isn't the maximum performance Intel has achieved so far. The 14900K is a little faster; compared with it, the 10800X3D would be 21% better.
So your estimate is spot on.
My estimate was very conservative.
The problem with these single-CCD X3D chips is that people use them for gaming comparisons, but for other tasks they compare against the higher core count chips, so the Intel chips have to be great in all workloads or they get dismissed.
That's Intel's problem to deal with, as they are the ones offering only a generic CPU. AMD offers more specialised CPUs in the desktop segment. If you do mostly gaming, there are X3D CPUs on offer; if you do mostly productivity workloads, there are the vanilla CPUs; if you do both, there are the higher core count X3D CPUs; and if you need more graphics, there are the G-series desktop APUs. Plenty of choice. Intel has not managed to diversify its desktop CPU lineup, hence its current CPUs are jack of all trades, master of none. Some buyers still enjoy such CPUs, but an increasing number do not, which is reflected in the gradual loss of market share we can all see.