This is an idea that hadn't occurred to me until I heard it mentioned in passing in a video recently and gave it some thought. Most often, CPU reviews consist of taking the fastest graphics card available and moving it across different CPU test benches to assess CPU performance when graphics horsepower isn't a concern. With GPU reviews it's the other way around - the fastest CPU test system plays host to the different graphics cards. This is all good, but people more often build systems around mid-range parts, not the fastest (and most expensive) ones.
Reviews on the big tech sites that illustrate bottlenecks across different tiers of both CPUs and GPUs are quite rare, and they include fewer game titles than typical CPU-only or GPU-only testing. This is mostly understandable in my opinion, given the time it takes to juggle multiple configurations and do the benchmark runs. So how would you (as an average user) know where to draw the line and not overspend on, say, your CPU purchase when you have a midrange GPU, but reviews of that GPU only pair it with top-tier CPU models?
This is where I think TPU's recent addition of absolute average/minimum FPS values to both CPU and GPU reviews may prove very handy for getting a general idea of which parts make a good pairing. Since top-tier CPUs from one generation are usually fast enough to saturate the same generation's midrange GPUs (and vice versa), and since the top-tier CPU and GPU (as the main performance drivers), settings, game selection etc. are almost universally shared between TPU's own contemporary reviews, the performance numbers should be almost directly comparable and can serve as a good point of reference. I say almost because I know differences in driver and OS versions, game selection, storage etc. may exist, but unless one of those has some catastrophic bug that needs patching, they add little variability.
Enough theory, let's take my friend's recent upgrade as an example. His old setup included a Ryzen 5 1600 AF and a GTX 960 carried over from his previous build, and the new parts are a Ryzen 5 5600 and an RX 6600 XT. For practical purposes, and because this particular part is absent from recent reviews, let's treat his 1600 AF as an R5 2600 (the 1600 AF is essentially a slightly lower-clocked 2600).
The RX 6600 XT is already installed and set in stone; the R5 5600 is still waiting. While waiting for his call to help out with the installation, I got curious about which CPU would extract all (or most of) the card's performance, compared to what he has now. The avg/min numbers for the TPU bench suite at 1080p for the three relevant parts are as follows:
RX 6600 XT - 97/78 FPS (with a Core i9-13900K)
R5 2600 - 96/69 FPS (with an RTX 4090)
R5 5600 - 176/123 FPS (with an RTX 4090)
I understand that these numbers are the parts running "unrestricted" - the best possible performance at this resolution with high settings that these parts can deliver right now. When the data is mixed together, I see that the R5 5600 is capable of delivering much higher average and minimum FPS than the RX 6600 XT can provide at these settings. In fact, the R5 2600 already looks like quite a close match for the RX 6600 XT, as their avg/min numbers are quite similar, and the R5 5600 apparently has enough horsepower to saturate an RX 6800 XT, for example.
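To make the back-of-the-envelope logic explicit, here's a minimal Python sketch of how I'm combining the review numbers. The "slower part sets the cap" min() rule is my own simplification, not anything TPU publishes, and real scaling is messier (see the caveats below); the figures are just the 1080p avg/min values quoted above:

```python
# Back-of-the-envelope bottleneck estimate: assume a CPU+GPU pairing is
# capped by whichever part posts the lower "unrestricted" FPS in its own
# review. Figures are the TPU 1080p avg/min numbers quoted above.
parts = {
    "R5 2600":    (96, 69),    # CPU result, measured with an RTX 4090
    "R5 5600":    (176, 123),  # CPU result, measured with an RTX 4090
    "RX 6600 XT": (97, 78),    # GPU result, measured with a Core i9-13900K
}

def estimate_pair(cpu: str, gpu: str) -> tuple[int, int]:
    """Estimated avg/min FPS for a pairing: the slower part sets the cap."""
    (c_avg, c_min), (g_avg, g_min) = parts[cpu], parts[gpu]
    return min(c_avg, g_avg), min(c_min, g_min)

for cpu in ("R5 2600", "R5 5600"):
    avg, low = estimate_pair(cpu, "RX 6600 XT")
    print(f"{cpu} + RX 6600 XT -> ~{avg}/{low} FPS")
```

By this crude rule, the current pairing comes out CPU-capped at roughly 96/69 FPS, while the upgraded one comes out GPU-capped at roughly 97/78 FPS, which matches the eyeball reading above.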
A couple of other thoughts: I know that using low settings shifts the bottleneck more towards the CPU, so the RX 6600 XT will probably need a CPU such as the R5 5600 to stretch its legs and achieve higher FPS than this test suite's averages at this resolution. Also, different games scale differently with CPU clock speed, architecture, cache etc., but without per-game results covering every CPU/GPU combination ever tested, direct and concrete comparisons aren't easy without having the hardware at hand.
Still, even with all these limitations, combining CPU and GPU review info may provide a ballpark estimate of what bottleneck you can expect, and where, when combining different hardware, if I'm not mistaken. What do you guys think?
P.S. Sorry if this has been previously discussed somewhere. I don't follow every thread.