
can someone explain this graph of cpu/gpu?

What you said makes no sense. On the 5700 XT and 5600 XT, the FPS are the same there. Why is there no variation between the CPUs if it's "faster CPU = higher FPS"? There should be variation, and with the 3070 and 3090 there is.

I have no idea what you're saying about .5/1.5.
 
The 5600 XT and 5700 XT are completely GPU-limited and all CPUs are waiting for something to do because the GPUs are too slow. Frankly, the 3070 is GPU-limited as well, since the differences are very small.

The 3090 is also GPU-limited with the 3600X and 5600X, but is finally CPU-limited by the slower 1600X and 2600X. So the 3090 ends up waiting for the 1600X and 2600X to finish their calculations, as they're too slow for the very fast 3090.

Remember that this behavior will be different in every game and even different sections of the same game and also at different graphics quality settings. It's likely in Cyberpunk 2077 at Ultra that all GPUs will be the limiting factor as CP2077 is much more demanding on the GPU.
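If it helps to see the "limited by the slowest part" idea in numbers, here's a minimal back-of-the-envelope sketch. The FPS values below are invented for illustration, not read off the chart:

```python
# Toy model: the FPS you see is roughly min(what the CPU alone could deliver,
# what the GPU alone could deliver). All numbers are invented for illustration.
cpu_fps = {"1600X": 90, "2600X": 105, "3600X": 140, "5600X": 170}
gpu_fps = {"5600 XT": 60, "5700 XT": 75, "3070": 130, "3090": 160}

for gpu_name, g in gpu_fps.items():
    row = {cpu_name: min(c, g) for cpu_name, c in cpu_fps.items()}
    print(gpu_name, row)

# The 5600 XT / 5700 XT rows come out identical for every CPU (the GPU is the cap),
# while the 3090 row finally separates the slower CPUs from the faster ones.
```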
 
A faster graphics card needs a faster processor to show its true power.

The 5600 XT and 5700 XT aren't fast enough to keep the CPU busy; the graphics card is therefore the limiting factor in game performance.

With the 3090, however, every single drop is extracted from the CPU, so the differences show.

This is why it is important to match your CPU with your GPU properly when building a gaming PC. A 4090 with a Core i3 would be just as bad a match as a Core i9 with a 3050.
 
What you said makes no sense. On the 5700 XT and 5600 XT, the FPS are the same there. Why is there no variation between the CPUs if it's "faster CPU = higher FPS"? There should be variation, and with the 3070 and 3090 there is.

I have no idea what you're saying about .5/1.5.
Because any single component in a PC can be the limiter for the achieved frame rates.
Why are these all the same? Because the weakest CPU in each group is still faster than those GPUs.

[attached chart]



This is why CPU reviews always use the fastest GPU out at the time of testing, so you can see actual differences without all the results being the same.
 
This is why it is important to match your CPU with your GPU properly when building a gaming PC. A 4090 with a Core i3 would be just as bad a match as a Core i9 with a 3050.
Thanks for the clarification. All helpful replies from everyone. Is there a good pairing site to look at? I was looking at the 5600X and 5700 XT/6600 XT, but I don't plan to play any shooter games, and if I hit 50-60 FPS, I'm ecstatic, as that's more than enough for me. Hell, I've played games at 35-50 FPS. It's not bad and still enjoyable.
 
Thanks for the clarification. All helpful replies from everyone. Is there a good pairing site to look at? I was looking at the 5600X and 5700 XT/6600 XT, but I don't plan to play any shooter games, and if I hit 50-60 FPS, I'm ecstatic, as that's more than enough for me. Hell, I've played games at 35-50 FPS. It's not bad and still enjoyable.
You are on one. :) Just ask, and we'll try our best to help you. The 5600X and 6600 XT are a good pair, by the way.
 
The 5600X and 6600 XT are a good pair, by the way.
But it's shown that the 6600 and 5700 are quite similar: only a small boost in performance and fewer watts used. I saw that it was between 3-5 FPS on average. I would take the 5700 XT if it weren't for the power draw, as many people don't take into consideration the investment over the time of use, and GPUs are getting crazy high TDPs. It's a huge factor for my purchases.

Still curious why, in the graph, the 5700 XT doesn't give more FPS to the 5600X vs. the 3600X. The 3600X may be a weak link, but in comes the 5600X, and it should be a bit more. Something is off there. Is it that all of those CPUs are stronger than the 5700 XT and they all max it out? Seems weird that a gen-1 Ryzen 1600X maxes out a 5700 XT, which is much newer, no?

Thanks man, much appreciated. Someone should consider making a site that does pairing with charts and stuff. There are too many reviewers as is. Weird there isn't one.
 
Whenever you hear someone say "GPU bottleneck" this is what they mean.

The reason the CPU doesn't matter for the 5700 XT results is that the GPU is the bottleneck: all the CPUs are idle some of the time. The 1600X might be running at 95% and idle 5% of the time, while the 5600X might be running at 50% and idle 50% of the time. Both CPUs are waiting for the GPU to finish rendering the last frame before they send it the next one to work on....

Think of it like cars on a road with traffic signals: the 1600X is a slow car, the 5600X is a fast car. At every stop, when the traffic light turns green, the 5600X races ahead to the next red light and waits there for the green light. The slow 1600X eventually gets to the same stop light, and they both wait there a little longer until the light goes green. So the 5600X is a faster car, but both cars are covering ground at the exact same average speed, because it's not the speed or acceleration of the car that matters; it's how fast the signal lights change.
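Putting rough numbers on the analogy (the per-frame times below are invented, not measured from any of these systems):

```python
# Per-frame version of the traffic-light analogy. cpu_ms / gpu_ms are invented
# per-frame workloads, not measurements.
def frame_stats(cpu_ms, gpu_ms):
    frame_ms = max(cpu_ms, gpu_ms)      # the slower part sets the pace
    fps = 1000 / frame_ms
    cpu_idle = 1 - cpu_ms / frame_ms    # fraction of each frame the CPU spends waiting
    return fps, cpu_idle

for name, cpu_ms in [("1600X", 12.5), ("5600X", 6.0)]:
    fps, idle = frame_stats(cpu_ms, gpu_ms=16.7)   # a GPU needing ~16.7 ms per frame
    print(f"{name}: ~{fps:.0f} FPS, CPU idle ~{idle:.0%}")

# Both land on ~60 FPS because the 16.7 ms GPU is the red light;
# the faster CPU simply spends more of each frame waiting.
```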
 
But it's shown that the 6600 and 5700 are quite similar: only a small boost in performance and fewer watts used. I saw that it was between 3-5 FPS on average. I would take the 5700 XT if it weren't for the power draw, as many people don't take into consideration the investment over the time of use, and GPUs are getting crazy high TDPs. It's a huge factor for my purchases.
I would take the 6600 XT because of its lower power draw and more modern features. The 5000 series is already showing its age in games where mesh shaders are required, like Alan Wake 2.

Still curious why, in the graph, the 5700 XT doesn't give more FPS to the 5600X vs. the 3600X. The 3600X may be a weak link, but in comes the 5600X, and it should be a bit more. Something is off there. Is it that all of those CPUs are stronger than the 5700 XT and they all max it out? Seems weird that a gen-1 Ryzen 1600X maxes out a 5700 XT, which is much newer, no?
It depends on the game. In that particular game, the CPU isn't used enough to be maxed out. In some other games, it might be. Generally, you want a gaming PC where the GPU is the bottleneck. It usually gives smoother gameplay, not to mention, the GPU is easier to upgrade.

Thanks man, much appreciated. Someone should consider making a site that does pairing with charts and stuff. There are too many reviewers as is. Weird there isn't one.
No worries. :)

The problem is that there are so many variables (which game, which CPU architecture, which CPU or GPU model, etc.) that you can't just set up a chart to recommend something definite. You need a bit of experience to figure this out.
 
But it's shown that the 6600 and 5700 are quite similar: only a small boost in performance and fewer watts used. I saw that it was between 3-5 FPS on average. I would take the 5700 XT if it weren't for the power draw, as many people don't take into consideration the investment over the time of use, and GPUs are getting crazy high TDPs. It's a huge factor for my purchases.

Still curious why, in the graph, the 5700 XT doesn't give more FPS to the 5600X vs. the 3600X. The 3600X may be a weak link, but in comes the 5600X, and it should be a bit more. Something is off there. Is it that all of those CPUs are stronger than the 5700 XT and they all max it out? Seems weird that a gen-1 Ryzen 1600X maxes out a 5700 XT, which is much newer, no?

Thanks man, much appreciated. Someone should consider making a site that does pairing with charts and stuff. There are too many reviewers as is. Weird there isn't one.
They exist, but this isn't a cut-and-dried 'use this CPU with that GPU' question. And as a result, these simple sites just suck.

The balance you need is time-dependent, and on top of that it depends on your needs and wants.

Example: say you play on a 1080p monitor while I play on a 4K monitor. Our games will run fine on, say, a 5800X3D with any GPU. But for my 4K rig I definitely want a much faster GPU: for each frame the CPU is preparing, my GPU has to calculate 4x more pixels to show the image.
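The "4x more pixels" part is just arithmetic; a quick sketch with the standard resolutions:

```python
# Why rendering at 4K is ~4x the per-frame pixel work of 1080p.
resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}
pixels = {name: w * h for name, (w, h) in resolutions.items()}

for name, p in pixels.items():
    print(f"{name}: {p:,} pixels, {p / pixels['1080p']:.2f}x 1080p")
# 1080p: 2,073,600 pixels, 1.00x
# 1440p: 3,686,400 pixels, 1.78x
# 4K:    8,294,400 pixels, 4.00x  -> same CPU work per frame, ~4x the GPU work.
```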
 
I would take the 6600 XT because of its lower power draw and more modern features. The 5000 series is already showing its age in games where mesh shaders are required, like Alan Wake 2.
Yes and no. You're not wrong. I'd also choose the 6600XT over a 5700XT but....

Right now 99.9% of games don't use or need mesh shaders. In 3-5 years' time, 97% of games probably won't need or use mesh shaders.

If you're the sort of person who only plays the latest AAA games then mesh shaders are probably more important, but at the same time both the 5700XT and 6600XT are already starting to struggle with last year's AAA games - 1080p60 usually requires a few compromises to medium settings for 2022 and 2023 titles. Mesh shader support is almost irrelevant because the new 2024 and 2025 AAA games that NEED them are also fairly likely to be beyond the reasonable scope of either card without seriously compromised image quality or framerate.
 
Thanks for the clarification. All helpful replies from everyone. Is there a good pairing site to look at? I was looking at the 5600X and 5700 XT/6600 XT, but I don't plan to play any shooter games, and if I hit 50-60 FPS, I'm ecstatic, as that's more than enough for me. Hell, I've played games at 35-50 FPS. It's not bad and still enjoyable.
You're on that website.

You start with the performance of your CPU, because you can't "turn down" CPU-intensive settings in a game - that'd be kicking half the players out or killing half the units, and isn't possible.
You can lower GPU settings: a 4090 on ultra is no different to a 3090 on medium, or a 5700 XT on lower settings again.

Find the latest CPU review and look for minimum FPS. The rest is up to you: pair with any GPU you want and lower settings until you're happy with the FPS. Preferably run an FPS cap to keep things within the 1% low range of the CPU, and you'll get stutter-free happiness.
Notice how nothing can really reach 144FPS, despite high refresh displays being a big deal? This is why they don't matter yet.
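A rough way to picture that FPS-cap advice (the 1% low figures here are placeholders, not numbers from the review):

```python
# Sketch of the "cap your FPS near the CPU's 1% lows" advice.
# The 1% low values below are placeholders -- read real ones off the review chart.
cpu_one_percent_low = {"Ryzen 5 5600X": 110, "Ryzen 5 2600": 70}
monitor_refresh = 144

for cpu, low in cpu_one_percent_low.items():
    cap = min(low, monitor_refresh)   # don't ask the system for more than its worst case
    print(f"{cpu}: cap around {cap} FPS for consistent frame pacing")
```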


Intel Core i9-14900K Review - Reaching for the Performance Crown - Minimum FPS / RTX 4090 | TechPowerUp
4K results are what matter the most, IMO, because they show what performance will look like under a more demanding load, which you can use as a preview of how next-gen games will run in the coming years, even at lowered settings.

[attached chart]



60Hz display? Even a Ryzen 2600 is fine.

For models not listed, compare them to the closest one that is. Look how the 5600 and 5600G are far apart (they're very different designs), while the similar Zen 3 CPUs (5600 through 5950X) are all very close together.

Higher models tend to add more cores, which doesn't matter at all for gaming. They only show up as faster because each step is also 100-200 MHz faster, to artificially make them the 'best', while real-world results are basically identical.

Spot the nearly identical CPUs, for both AMD and Intel:
[attached chart]


The 5800X3D added more cache to break away from the pack here, and is an example of why you should investigate anything different in the naming:
[attached chart]
 
Yes and no. You're not wrong. I'd also choose the 6600XT over a 5700XT but....

Right now 99.9% of games don't use or need mesh shaders. In 3-5 years' time, 97% of games probably won't need or use mesh shaders.

If you're the sort of person who only plays the latest AAA games then mesh shaders are probably more important, but at the same time both the 5700XT and 6600XT are already starting to struggle with last year's AAA games - 1080p60 usually requires a few compromises to medium settings for 2022 and 2023 titles. Mesh shader support is almost irrelevant because the new 2024 and 2025 AAA games that NEED them are also fairly likely to be beyond the reasonable scope of either card without seriously compromised image quality or framerate.
You're not wrong. But I'd still much rather get the 6600 XT. A more modern architecture usually means better efficiency and longer driver support.
 
making a site that does pairing with charts and stuff
TPU has it, but it's still of limited use, for these reasons:

0. People have different displays or even sets of displays. Greater resolution = bigger impact on GPU performance, more CPU idle time (that's why at 4K, it's not as bad to have an i3: your GPU is tortured regardless).
1. People play different games and have different graphics preferences. This leads to inconsistent data on what PC component is used the most. Average results do exist but they should be looked at with a little grain of salt because your mileage WILL vary.
2. Different motherboards have different layouts, thus having different latencies which results in faster RAM access in one PC and quicker GPU access in another one. It'll take a billion eternities to make an elaborate chart on motherboards based on one chipset, let alone all non-outdated motherboards.
3. Silicon lottery means you don't have equally tuned RAM sticks on two different systems. Bandwidths and latencies differ which causes shifts in results.
...and so on.

Best-case scenario, you'll get about 90 percent accurate information about your GPU before actually using it and seeing how it performs "in person."

That said, the 5700 XT is vastly inferior to the 6600 XT in terms of efficiency, support, hardware condition, warranty, and feature set. I'd rather get a 6800 non-XT, though... But I know, an additional $120-180 could be too much for you. The 6600/7600 is a no-brainer compared to the 5700 XT anyway.

If you're below 1080p (why?) then you'll be more often limited by CPU+RAM.
If you're at 1080p or UW 1080p then your system is balanced.
At 1440p, 6600 XT is usually tortured a bit more than the Ryzen 5600X. But that's still playable.
At UW 1440p and beyond, 6600 XT is usually insufficient.
 
4K results are what matter the most, IMO, because they show what performance will look like under a more demanding load, which you can use as a preview of how next-gen games will run in the coming years, even at lowered settings.
This is a pretty dangerous statement. Look at Alan Wake 2.

All bets are off with regard to next-gen engines and games. The best approach is 'get a bit of headroom in performance' and 'get sufficient VRAM'. Beyond that, you can't really say a thing about what your performance will look like based on a card's performance at a different resolution. More often than not, resolution upgrades aren't that painful if you can already get good frames; it's a simple relative hit: 25% more pixels? Roughly 75% of your performance left, or more.
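Spelling out that back-of-the-envelope hit (assuming, naively, that frame time scales with pixel count, which is only a ballpark):

```python
# Naive resolution-scaling estimate: if frame time scales with pixel count,
# relative_fps = old_pixels / new_pixels. Real games deviate from this.
pixel_increase = 0.25                      # e.g. 25% more pixels
relative_fps = 1 / (1 + pixel_increase)
print(f"~{relative_fps:.0%} of the original FPS left")   # ~80%
# "75% or more" is the same ballpark; the exact hit varies per game,
# since not all GPU work scales with resolution.
```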

Engine upgrades, on the other hand... they can simply put whole generations of cards out of the game just like that. RDNA1 on Alan Wake 2 is a good recent example. Required technologies and feature sets have the biggest impact on a GPU's longevity. Consider, for example, also things like having DLSS/FSR support on your card.
 
RDNA1 on Alan Wake 2 is a good recent example.
Pascal is even more super dead. 5700 XT is at least capable of hitting low 40s with FSR...
 
Pascal is even more super dead. 5700 XT is at least capable of hitting low 40s with FSR...
Yes but RDNA1 launched in 2019, Pascal in 2016.

That's 1-1.5 generations' worth of a gap. And if you put that in perspective... if you bought RDNA1 in 2019, you've had a mere 4 years of good performance out of it before it just dropped dead. For Pascal, the counter is at 7 years. Almost double the lifespan.
 
Yes but RDNA1 launched in 2019, Pascal in 2016.

That's 1-1.5 generations' worth of a gap. And if you put that in perspective... if you bought RDNA1 in 2019, you've had a mere 4 years of good performance out of it before it just dropped dead. For Pascal, the counter is at 7 years. Almost double the lifespan.
True, but the 5700 XT is still 60 percent ahead, and that's significant. Not to mention the difference in positioning and MSRP: the 1080 Ti was a borderline halo GPU at around $700, while the 5700 XT launched at almost half that. The 1070 launched for the same ~$400 as the 5700 XT and... yeah, I mean, there is no way the 1070 can compete. RDNA1 has fallen faster, true, but Pascal has it much harder.

Pascal is still the only Nvidia architecture to have proven genuinely future-proof so far. It's only now that it's becoming unplayable. Turing didn't achieve much due to poor performance per dollar and the pre-alpha state of its RT cores. Ampere... suboptimal Samsung node, questionable VRAM capacity, instant abandonware status (in terms of DLSS Frame Generation and similar features). And with Ada Lovelace, it feels like the only GPU capable of ageing well is the 4090. With a grain of salt, because $1600 is way beyond the average Joe's GPU budget.

RDNA1, though, wasn't a success by any means. Just a regular case of "AMD doing their AMD thing," nothing extra. RDNA2 is much better despite exceptionally greedy MSRPs, even greedier than those of Ampere GPUs.
 
True, but the 5700 XT is still 60 percent ahead, and that's significant. Not to mention the difference in positioning and MSRP: the 1080 Ti was a borderline halo GPU at around $700, while the 5700 XT launched at almost half that. The 1070 launched for the same ~$400 as the 5700 XT and... yeah, I mean, there is no way the 1070 can compete. RDNA1 has fallen faster, true, but Pascal has it much harder.

Pascal is still the only Nvidia architecture to have proven genuinely future-proof so far. It's only now that it's becoming unplayable. Turing didn't achieve much due to poor performance per dollar and the pre-alpha state of its RT cores. Ampere... suboptimal Samsung node, questionable VRAM capacity, instant abandonware status (in terms of DLSS Frame Generation and similar features). And with Ada Lovelace, it feels like the only GPU capable of ageing well is the 4090. With a grain of salt, because $1600 is way beyond the average Joe's GPU budget.

RDNA1, though, wasn't a success by any means. Just a regular case of "AMD doing their AMD thing," nothing extra. RDNA2 is much better despite exceptionally greedy MSRPs, even greedier than those of Ampere GPUs.
We are in complete agreement here. To relate this to what Mussels said about performance over time... you have to be pretty deep into the world of GPUs and gaming to know what's what. Even just reviews won't provide the frame of reference you really need to make sound choices.
 
Pascal is still the only Nvidia architecture to have proven genuinely future-proof so far. It's only now that it's becoming unplayable. Turing didn't achieve much due to poor performance per dollar and the pre-alpha state of its RT cores. Ampere... suboptimal Samsung node, questionable VRAM capacity, instant abandonware status (in terms of DLSS Frame Generation and similar features). And with Ada Lovelace, it feels like the only GPU capable of ageing well is the 4090. With a grain of salt, because $1600 is way beyond the average Joe's GPU budget.
I don't think Ada will age well. It lacks VRAM just like Ampere, and it relies way too heavily on software augmentation (e.g. DLSS FG) to run games properly. Performance in games of the next 2-4 years will drop like a rock, and all Ada prices will follow that drop when the 50-series is out. The 4090 is only held up by its halo status, which will also change. Vaporware, the whole lot.

Pascal, on the other hand, did very well by pushing through 7 years of gaming. It's probably the best series Nvidia has ever had.
 

So in the picture, in all his tests of games, the ones with the 5700 XT and the 5600 XT all have the same FPS, and the ones with the 3070 or 3090 have varying scores. Why is this?
That's called a bottleneck. In case you need a more detailed explanation, you can always ask ChatGPT.
Me said:
Describe a bottleneck in terms of a computer system.
ChatGPT said:
In the context of a computer system, a bottleneck refers to a point of congestion or restriction in the flow of data or operations that hinders the system's overall performance. This bottleneck prevents the system from operating at its maximum potential, and it can occur at various points within a computer system. Here are some common examples of bottlenecks in a computer system:

1. CPU Bottleneck: This occurs when the central processing unit (CPU) is unable to handle the incoming data and instructions as quickly as other components can generate them. In such cases, the CPU becomes the limiting factor, causing slowdowns in processing and overall system performance.

2. Memory Bottleneck: A memory bottleneck happens when the computer's RAM (Random Access Memory) or other types of memory cannot keep up with the data transfer demands of the CPU and other components. This can lead to slower performance as the system needs to fetch data from slower storage devices.

3. Storage Bottleneck: When the storage subsystem, which includes hard drives or solid-state drives, is unable to read or write data as fast as other components need it, a storage bottleneck occurs. Slow storage can lead to slower boot times, application loading, and file access.

4. Network Bottleneck: In networked computer systems, a bottleneck can occur in the network infrastructure, such as routers, switches, or network cables. Slow network speeds can cause delays in data transfer and communication between devices.

5. Input/Output (I/O) Bottleneck: This type of bottleneck occurs when the input/output operations, like reading from or writing to storage devices, are slower than the processing speed of the CPU. This can impact tasks that involve frequent data access, like database operations or file transfers.

6. Graphics Processing Unit (GPU) Bottleneck: In systems that rely on GPUs for tasks like gaming or graphics-intensive applications, a GPU bottleneck can occur when the GPU's processing power is not sufficient to handle the workload, leading to reduced frame rates and graphics performance.

7. Software Bottleneck: Sometimes, the bottleneck isn't hardware-related but rather a result of inefficient software or poorly optimized code. Software bottlenecks can manifest as slow program execution or resource-intensive applications that strain the hardware components.

Identifying and addressing bottlenecks is essential for optimizing a computer system's performance. This often involves upgrading the bottlenecked component or improving system configuration, such as adding more memory, upgrading the CPU, or using faster storage devices. Performance monitoring tools can help diagnose and pinpoint the source of bottlenecks in a computer system, allowing for targeted improvements to enhance overall efficiency and speed.
The two you likely care about are #1 and #6. Maybe #7 to some degree as well.
 
That's called a bottleneck. In case you need a more detailed explanation, you can always ask ChatGPT.
That's as stupid as ChatGPT can get.

A bottleneck doesn't mean that one component "can't keep up" with the work of the others. It means that one component is loaded to 100% of its capacity, while other components spend time waiting for that first component to finish. Every computer has some kind of bottleneck, which can vary task by task. It's not something to avoid; it's completely natural.

"Identifying and addressing bottlenecks" (as ChatGPT said) is only essential if the bottleneck is causing performance problems in the desired applications. That is, if your CPU is too slow for your work, or causes your GPU to wait within a game, or if your FPS is too low, or fluctuates too heavily. Other than that, bottlenecks are part of every system.
 