- Stats from monitoring tools are 'an estimate over time' tied to a polling rate, so you won't always see the exact same numbers. Small differences are 'margin of error' - ignore them.
- Different architectures interact differently. An Intel CPU might be better optimized for the task, so it can feed the GPU faster. The GPU might still show maxed-out utilization, but that number is ALSO an estimate.
The % you see in on-screen monitoring are 'global' numbers, but you are really performance capped if even ONE subsystem of your rig is maxed out. For example, if you are already moving the maximum number of bytes between system memory and VRAM to feed the GPU, that link sits at 100% utilization, the GPU does all it can, and you can still lose some FPS. Also keep in mind that we're talking 'Frames PER SECOND' here. An 8 FPS drop from 100 FPS looks like an 8% loss, but per frame it's less than a millisecond of extra frame time (10.0 ms vs ~10.9 ms). We're talking about milliseconds here, so percentages are a very coarse way to display that.
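If you want to double-check that math yourself, here's a quick back-of-the-envelope sketch in Python. The 100 vs 92 FPS numbers are just the example from above, nothing measured:

```python
# Convert FPS to per-frame time in milliseconds and compare.

def frame_time_ms(fps: float) -> float:
    """Time spent on one frame, in milliseconds."""
    return 1000.0 / fps

baseline = frame_time_ms(100)  # 10.00 ms per frame
slower = frame_time_ms(92)     # ~10.87 ms per frame

print(f"100 FPS -> {baseline:.2f} ms/frame")
print(f" 92 FPS -> {slower:.2f} ms/frame")
print(f"Difference: {slower - baseline:.2f} ms per frame "
      f"({(slower - baseline) / baseline * 100:.1f}% longer frame time)")
```

So the 'lost' 8 FPS works out to well under one millisecond per frame, which is why frame times tell you more than utilization percentages.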
Things you CAN check:
RTSS/Afterburner can do per-core CPU monitoring, including in-game:
What you will probably see is that 1 or 2 cores are always sitting at 99% usage - even today many games still run one or two fat game-logic threads with lighter worker threads alongside them. Performance is capped by the heaviest thread, so in that case improving single-threaded CPU clocks will directly benefit gaming performance. That's a given, and even DX12 with its better threading doesn't completely remove that fact.
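To illustrate the idea (not how RTSS itself reads the counters), here's a rough Python sketch using the third-party psutil package that polls per-core utilization and flags pegged cores. The 95% threshold and the loop count are arbitrary choices for the example:

```python
# Sketch: poll per-core CPU utilization and flag cores that are maxed out.
# Requires: pip install psutil
import psutil

PEGGED = 95.0  # % threshold for calling a core "maxed out" (arbitrary)

def sample_cores(interval: float = 1.0) -> None:
    # percpu=True returns one utilization value per logical core
    per_core = psutil.cpu_percent(interval=interval, percpu=True)
    average = sum(per_core) / len(per_core)
    pegged = [i for i, load in enumerate(per_core) if load >= PEGGED]
    print(f"average: {average:.0f}%  per-core: {per_core}")
    if pegged:
        print(f"cores {pegged} are maxed out - likely the main game thread(s)")

if __name__ == "__main__":
    for _ in range(5):
        sample_cores()
```

Run it while a game is loading the CPU and you'll typically see the same thing the overlay shows: a low 'global' average while one or two cores are pinned near 99%.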