
new gpu = less FPS

This makes no sense. Most if not all "competitive gamers" (the term always makes me cringe) run games without any form of v-sync, which means the input lag is as low as possible at all times: no matter when the monitor refreshes, the image will simply tear. There is no connection whatsoever between the monitor refresh rate and input lag if no form of synchronization is involved.
Argue about it with the reputable lab that did the testing, not me.
Higher frame rates, regardless of monitor Hz, make the frame more likely to land closer to the exact moment the monitor refreshes and decrease input lag. There's a good reason why every competitive gamer runs FPS much higher than their monitor's refresh rate.
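To put rough numbers on why higher FPS lowers latency even without any sync, here is a back-of-the-envelope sketch (a simplified model of my own, not from either poster's source):

```python
# Back-of-the-envelope: with v-sync off, the image below a tear line comes from
# the most recently completed frame, which is on average about half a frame
# time old. Higher FPS shrinks that staleness regardless of the monitor's Hz.
for fps in (60, 144, 300, 500):
    frame_time_ms = 1000 / fps
    avg_staleness_ms = frame_time_ms / 2
    print(f"{fps:>3} FPS -> frame time {frame_time_ms:5.1f} ms, "
          f"avg frame age at scanout ~{avg_staleness_ms:4.1f} ms")
```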

He probably got confused and thought it was a 1080 Ti instead of a 1080. Simple mistake to make.
Yep. Most people I know avoided the 1080 and got the ti.

Even so, it's about a 40-45% difference. So you shouldn't be seeing a regression.

[Image: TPU relative performance charts]
 
Argue about it with the reputable lab that did the testing, not me.
Literally nothing in there says that you get less input lag because the frame is more likely to land closer to the exact moment the monitor refreshes; that's something you came up with, which, as I explained, has nothing to do with input lag.

Higher framerate = less input lag; it has nothing to do with when a frame hits the monitor's refresh rate.
 
Even so, it's about a 40% difference. So you shouldn't be seeing a regression.
The 6750 XT is the GPU the OP has (not the 5700 XT you marked), and it is about the same as the 2080 Ti in the left graph. From the right graph you posted, 135 (2080 Ti) divided by 85 (1080) is about a 58% perf improvement.
 
The 6750 XT is the GPU the OP has (not the 5700 XT you marked), and it is about the same as the 2080 Ti in the left graph. From the right graph you posted, 135 (2080 Ti) divided by 85 (1080) is about a 58% perf improvement.
5700XT is 26% slower than the 6750XT
1080 is 15% slower than the 5700XT
TPU doesn't have a graph with both the 6750XT and the 1080.

I didn't do the math with 2080ti as a substitute, but I'll take your word for it.

At the end of the day it doesn't really matter whether it's 45% or 58%; he shouldn't be seeing a regression.
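For what it's worth, the two TPU percentages above can be chained to bridge the missing graph; a quick sketch (my own arithmetic, using only the numbers quoted in this thread):

```python
# "X is N% slower than Y" means X = (1 - N/100) * Y, so the two TPU numbers
# can be chained through the 5700 XT to relate the 6750 XT and the GTX 1080.
r_5700xt = 1 - 0.26               # 5700 XT relative to the 6750 XT
r_1080 = r_5700xt * (1 - 0.15)    # GTX 1080 relative to the 6750 XT
speedup = 1 / r_1080 - 1          # how much faster the 6750 XT comes out
print(f"6750 XT is ~{speedup:.0%} faster than the GTX 1080")  # ~59%
```

Both routes land in the same ballpark, ~58-59%.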

Higher frame rates, regardless of monitor Hz, make the frame more likely to land closer to the exact moment the monitor refreshes and decrease input lag. There's a good reason why every competitive gamer runs FPS much higher than their monitor's refresh rate.
Literally nothing in there says that you get less input lag because the frame is more likely to land closer to the exact moment the monitor refreshes; that's something you came up with, which, as I explained, has nothing to do with input lag.

Higher framerate = less input lag; it has nothing to do with when a frame hits the monitor's refresh rate.
I've put it in bold and italics for your convenience.
 
Seeing so many Nvidia shills in this thread is just remarkable... Even comparing the 1080 to the 6750 XT is laughable... And there are people saying the latter is slower... Holy sh*t... And all of you saying how good the Nvidia drivers are... HardwareUnboxed made a great video about driver overhead on Nvidia cards. In a CPU-limited scenario (which the OP has), a 6950 XT has no problem blowing the 4090 out of the water (even though it's technically a card that is twice as fast).
[Images: HardwareUnboxed driver-overhead benchmark charts]
Source:
Older source:
Instead of fighting over who is better, maybe try helping the OP with his problem.
The 3DMark results the OP posted show DOUBLE the GRAPHICS SCORE for the 6750 XT compared to the 1080. The problem is somewhere else; the GPU seems to be working just fine. Let's chill and wait for a reply from the OP.

Sorry for the rant.
 
Seeing so many Nvidia shills in this thread is just remarkable... Even comparing the 1080 to the 6750 XT is laughable... And there are people saying the latter is slower... Holy sh*t... And all of you saying how good the Nvidia drivers are... HardwareUnboxed made a great video about driver overhead on Nvidia cards. In a CPU-limited scenario (which the OP has), a 6950 XT has no problem blowing the 4090 out of the water (even though it's technically a card that is twice as fast).
[Images: HardwareUnboxed driver-overhead benchmark charts]
Source:
Older source:
Instead of fighting over who is better, maybe try helping the OP with his problem.
The 3DMark results the OP posted show DOUBLE the GRAPHICS SCORE for the 6750 XT compared to the 1080. The problem is somewhere else; the GPU seems to be working just fine. Let's chill and wait for a reply from the OP.
The only person comparing the 6750XT to the 1080 and saying it's slower is the OP.

We know it shouldn't be, calm down.
 
I think that if it's possible to enable Smart Access Memory on your motherboard, it could be a way to boost performance. I also still think the older chipset on your motherboard doesn't allow the 6750 to be used to its full potential. I had a Radeon VII on a Z170 setup, and the change to AM4/X570 gave a noticeable difference.
 
I doubt this has anything to do with Smart Access Memory; that usually makes a 5-10% difference, not 100%.
 
A lot of total bullshit nvidia supremacy in here. Is this what they call mindshare?

To the OP: I'd suspect an issue with your software. Changing hardware on a stable system can sometimes unveil instability, especially on Windows where, frankly, the OS can corrupt itself. The first thing I'd do is an SFC and CHKDSK run, to make sure Windows didn't try to commit seppuku when installing the new drivers.

Second thing: open up your AMD control panel and make sure you are running 22.11. AMD did have a notorious issue with MS where Windows Update would auto-install a 2020-era driver right after a fresh install. If your control panel refuses to open or gives an error, this has likely happened. Assuming it does open, load up a game and see if the 6750 is clocking appropriately. If it's clocking too low at full GPU usage, then it may be a corrupt driver, it may not be seeing its power cables properly, etc.

Assuming everything is connected and the GPU is not fully loaded yet the framerate still suffers, I'd look at reinstalling Windows. If it's an older install, any number of things could be running in the background and disrupting performance.
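For reference, the two integrity checks mentioned above are plain built-in Windows commands; wrapping them in Python is just my illustration, and both need an elevated (administrator) prompt:

```python
import subprocess

# sfc /scannow verifies and repairs protected Windows system files.
# chkdsk C: /scan runs an online file-system scan of the C: volume.
# Both are standard Windows tools and must be run with admin rights.
subprocess.run(["sfc", "/scannow"], check=False)
subprocess.run(["chkdsk", "C:", "/scan"], check=False)
```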
I was highly confident you were wrong... Edited this... you might be right. This is a mere 6c CPU, no HT, so it'll juggle threads here and there with non-gaming tasks, and the examples given are all high-FPS situations.

@OP try to confirm you are CPU limited by running a graphically intensive game instead, preferably one where you get 100% GPU utilization at sub-60 FPS, and that also ran on your 1080 so you still have an idea of how it performed.
He still shouldn't be getting worse performance than he did before. At worst it should be the same, if it was a CPU bottleneck.
We live in a world where, over the past twenty years, 95% of games have been optimized to run on NVIDIA hardware.
Simple as that.


The GTX 1080 has better scalability and performs better on the same motherboard + CPU.
This is the fact that you should memorize.
total nonsense.
 
Even if you choose the worst-optimized AMD game that is simultaneously the best-optimized NVIDIA game, the FPS numbers make no sense.
The RX 6750 XT is by far the better card compared to the GTX 1080, and I wouldn't use TechPowerUp's approximate % performance figures as a hard rule.

If it was a CPU bottleneck, it should produce at least the same frame rates as the GTX 1080.
With the redacted-to-death Firestrike screenshots as the only criterion, the only thing I can think of is GPU thermal throttling while gaming.
To get more help, you should enable the AMD overlay metrics and take some screenshots while running the PC games in question [CPU/GPU clocks, thermals, utilization, etc.].
 
Side note: I'm currently an nVidia GPU user (a choice made simply because it was the best deal at the time: best bang for the buck). And even though I've owned mostly AMD GPUs (for the same reason: best bang for the buck at the given time), I can agree that GPU drivers were a common issue for AMD. Still, these fanboy-biased responses, both illogical and unrealistic, are pure cringe. Definitely not helpful (they can even be discouraging for users looking for an answer or some help with an issue). It's 2023 FFS... every tech fan should be able to understand by now that these GPU giants (be it AMD or Nvidia, even Intel) do all they can for higher profit, not for the sake of their fan base. That being said, it's not AMD + fans vs nVidia + fans, but more like... US (the users) vs THEM (the corporations). Fanboyism was always irrational, but we've reached a point where it doesn't make any sense anymore (there's simply way too much evidence proving that these big corporations are all about themselves, even adopting shady practices for the sake of selling more products).

On topic: I'm curious about the stats of the card under heavy load. Try GPU-Z and run a demanding game, and log the results too; let's see if it performs as it should (at its design specs). Even though PCIe 3.0 is a limiting factor (it should perform better under PCIe 4.0, as others mentioned above), for a bottleneck the CPU (the only suspect in this case) should be running close to 100%. Separately (exit GPU-Z while doing this next step), you could also use https://gpuopen.com/ocat/ to monitor the GPU and CPU; it's the easiest way to spot a bottleneck (again, the CPU is the main suspect here). I think the i5 8600K was just good enough to run in tandem with a GTX 1080 without either of the two bottlenecking (thus stable and higher FPS).

As for AMD drivers & software, you could also try these basic tweaks/changes meant to fine-tune settings for performance or better visuals:


Sometimes it helps.
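If it helps to interpret a GPU-Z or OCAT log afterwards, here's a rough heuristic sketch (entirely my own illustration; the thresholds are arbitrary assumptions, and you'd feed it per-core CPU data from whatever tool you log with):

```python
# Toy heuristic for reading utilization logs: a pegged GPU suggests a GPU
# bottleneck, a pegged busiest-CPU-core with a starved GPU suggests a CPU
# bottleneck, and neither pegged points at caps, thermals, power, or drivers.
def classify_bottleneck(gpu_util, busiest_core_util):
    avg_gpu = sum(gpu_util) / len(gpu_util)
    avg_core = sum(busiest_core_util) / len(busiest_core_util)
    if avg_gpu > 95:
        return "GPU-bound: GPU near 100%"
    if avg_core > 90:
        return "CPU-bound: busiest core near 100%, GPU starved"
    return "Neither pegged: suspect frame caps, thermals, power, or drivers"

# Example: GPU hovering ~60% while one CPU core is pegged -> CPU-bound.
print(classify_bottleneck([62, 58, 65, 60], [97, 95, 99, 96]))
```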
 
I don't know if someone said it before, but did you check the connections on your video card? Get the manual, follow it, and then make sure the cables are fully plugged in.

Also, do you run any kind of apps in the background? Can you list them?
Try testing the graphics card with another computer...
 
He still shouldn't be getting worse performance than he did before. At worst it should be the same, if it was a CPU bottleneck.
Wha... no? It's a different GPU and a different driver, and these games run on DX11 and even DX9.

Nvidia supremacy, that's exactly what this is. The CPU hit on DX11 is much lower on the green side, and if you pick exclusively very-high-FPS titles, you're magnifying that gap. When you run games north of 300 FPS, every millisecond counts; lose a few CPU cycles and you're momentarily down 50-150 FPS, no problem.

Still, your software advice I can only agree with. Always run a gaming rig as lean/light as possible.

Side note: I'm currently an nVidia GPU user (a choice made simply because it was the best deal at the time: best bang for the buck). And even though I've owned mostly AMD GPUs (for the same reason: best bang for the buck at the given time), I can agree that GPU drivers were a common issue for AMD. Still, these fanboy-biased responses, both illogical and unrealistic, are pure cringe. Definitely not helpful (they can even be discouraging for users looking for an answer or some help with an issue). It's 2023 FFS... every tech fan should be able to understand by now that these GPU giants (be it AMD or Nvidia, even Intel) do all they can for higher profit, not for the sake of their fan base. That being said, it's not AMD + fans vs nVidia + fans, but more like... US (the users) vs THEM (the corporations). Fanboyism was always irrational, but we've reached a point where it doesn't make any sense anymore (there's simply way too much evidence proving that these big corporations are all about themselves, even adopting shady practices for the sake of selling more products).

On topic: I'm curious about the stats of the card under heavy load. Try GPU-Z and run a demanding game, and log the results too; let's see if it performs as it should (at its design specs). Even though PCIe 3.0 is a limiting factor (it should perform better under PCIe 4.0, as others mentioned above), for a bottleneck the CPU (the only suspect in this case) should be running close to 100%. Separately (exit GPU-Z while doing this next step), you could also use https://gpuopen.com/ocat/ to monitor the GPU and CPU; it's the easiest way to spot a bottleneck (again, the CPU is the main suspect here). I think the i5 8600K was just good enough to run in tandem with a GTX 1080 without either of the two bottlenecking (thus stable and higher FPS).

As for AMD drivers & software, you could also try these basic tweaks/changes meant to fine-tune settings for performance or better visuals:


Sometimes it helps.
The 8600 is more than fine for a 6750XT as well, but the OP shouldn't be running games with uncapped FPS on it. It'll harm frametimes more than it helps.

The real solution here is an FPS cap or any semblance of it: Fast Sync, VRR, whatever. That CPU is fine for these games. The next step is limiting background tasks while gaming, but honestly, just limiting the FPS is sufficient.

@OP you should check out DX12 performance as well and compare; the 6750XT should pull ahead in native DX12 content.
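For anyone curious what a frame cap actually buys you, here's a toy sketch of the idea (real limiters such as RTSS or in-game caps do this far more precisely; the numbers are illustrative assumptions of mine):

```python
import time

# Toy frame limiter: sleep away leftover frame time so frames arrive at a
# steady cadence instead of as fast as the CPU can push them, which evens
# out frametimes and frees CPU headroom for the rest of the system.
TARGET_FPS = 141            # a common cap just under a 144 Hz refresh
FRAME_TIME = 1.0 / TARGET_FPS

def render_frame():
    pass                    # stand-in for the game's simulation + render work

for _ in range(1000):
    start = time.perf_counter()
    render_frame()
    elapsed = time.perf_counter() - start
    if elapsed < FRAME_TIME:
        time.sleep(FRAME_TIME - elapsed)
```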

Seeing so many Nvidia shills in this thread is just remarkable... Even comparing the 1080 to the 6750 XT is laughable... And there are people saying the latter is slower... Holy sh*t... And all of you saying how good the Nvidia drivers are... HardwareUnboxed made a great video about driver overhead on Nvidia cards. In a CPU-limited scenario (which the OP has), a 6950 XT has no problem blowing the 4090 out of the water (even though it's technically a card that is twice as fast).
[Images: HardwareUnboxed driver-overhead benchmark charts]
Source:
Older source:
Instead of fighting over who is better, maybe try helping the OP with his problem.
The 3DMark results the OP posted show DOUBLE the GRAPHICS SCORE for the 6750 XT compared to the 1080. The problem is somewhere else; the GPU seems to be working just fine. Let's chill and wait for a reply from the OP.

Sorry for the rant.
Context. It's a thing.


This is a documented fact. Now re-read the OP.

Another fact:
[Image: benchmark chart]


Therefore, the OP should test different games to get a better understanding of relative performance here and of what limits his rig. Still, he was already CPU limited on his 1080 in the examples given, so the 6750XT isn't going to do anything there. Now run that stuff at 4K max detail and watch the 6750XT pay off.
 
@VTTX just get a refund as @WASH DOGS said, buy a graphics card (whichever) that is 100% faster or more in reviews done on a CPU similar to yours, and be happy.
 
At this point I wouldn't be surprised if OP never responds again considering the cesspool of comments
 
Based on the number of processing units the cards are identical; Radeon only has a few hundred MHz of clock advantage and incompetence(TM), sorry, Infinity Cache.
Looks like Nvidia's HSR (hidden surface removal, i.e. not drawing what isn't seen) is still superior even on Pascal cards. Not to mention well-polished drivers.

I have only one question: why would you go and buy a graphics card that is so similar (equal) in the number of work units to your existing graphics card? o_O

OK, so it has all these new-fangled features, but when it comes to pure rendering, the GTX 1080 is still mighty powerful for its age, size, and consumption.
At best it can only be 42% better than your existing GTX 1080.
I have a rule about buying a graphics card or a CPU: it has to be 100% faster or more than its predecessor.
That's not how any of this works. You're basically comparing a 4790K to some 10th-gen i3.

There are improvements to the GPU core, clocks, etc. Comparing GPU core counts and thinking the cards are the same is just, wow.
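For what it's worth, even the naive shader-count × clock math the quoted post leans on doesn't make the cards "identical"; a quick sketch with the public boost-clock specs (my own illustration, and paper FLOPS ignore IPC, Infinity Cache, and driver differences entirely):

```python
# Naive paper-spec math: FP32 TFLOPS = shader units * 2 ops/clock * clock.
# Treat the result as a floor, not a verdict: it ignores IPC, caches, and API
# overhead, which is exactly the point the reply above is making.
cards = {
    "GTX 1080":   (2560, 1733e6),   # CUDA cores, boost clock in Hz
    "RX 6750 XT": (2560, 2600e6),   # stream processors, boost clock in Hz
}
tflops = {name: units * 2 * clock / 1e12 for name, (units, clock) in cards.items()}
for name, tf in tflops.items():
    print(f"{name}: ~{tf:.1f} TFLOPS FP32")
print(f"Paper ratio: ~{tflops['RX 6750 XT'] / tflops['GTX 1080']:.2f}x")  # ~1.50x
```

So the clock advantage alone already gives roughly +50% on paper, in line with the ~58% measured gap discussed earlier.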
 
At this point I wouldn't be surprised if OP never responds again considering the cesspool of comments
Sadly, one of the reasons I replied was to bump his post to get him more help, but many of the replies were totally unhelpful to the OP.
 
This totally sounds like a driver problem on that AMD card. I just don't see crap like this on NVIDIA so much; they just work. Of course, we mustn't mention the melting power connector on the 4090, as it breaks the image of NVIDIA perfection. Oh well, at least the card will work so much better until it catches fire. :laugh:
 
Look, it's simple mathematics...

Cheese has holes.
More cheese = more holes.
More holes = less cheese.
Therefore, more cheese = less cheese.
 
The 6750XT is at best 40-45% faster than the 1080.
It's a tier higher than that. 40% faster would be going from a GTX 1080 to a 6650 XT.

The 6750 XT is almost as fast as an RTX 3070 Ti; it's a solid card and a gigantic jump from a 1080.

Where does it show a 60% difference? I think that sounds incorrect but would like to see the source.
1080 is exactly on par with the original RTX 2060

The MSI 6750 XT review at 1080p has the 6750 XT at 1.69x the frames of a 2060 (that's almost 70% faster).
5700XT is 26% slower than the 6750XT
1080 is 15% slower than the 5700XT
TPU doesn't have a graph with both the 6750XT and the 1080.

I didn't do the math with 2080ti as a substitute, but I'll take your word for it.
You don't even need to cross-reference multiple cards; the original 2060 is basically within +/- 3% of the GTX 1080, so that gives you a solid reference point right there.
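Putting the two quoted figures together (a quick sketch of my own arithmetic, using only the numbers stated above):

```python
# If the GTX 1080 sits within +/- 3% of an RTX 2060 and the 6750 XT delivers
# ~1.69x a 2060's frames, the implied 6750 XT vs 1080 gap is roughly 64-74%.
ratio_6750_vs_2060 = 1.69
for offset in (-0.03, 0.0, 0.03):          # 1080 within +/- 3% of the 2060
    ratio_1080_vs_2060 = 1 + offset
    gain = ratio_6750_vs_2060 / ratio_1080_vs_2060 - 1
    print(f"1080 at {ratio_1080_vs_2060:.2f}x a 2060 -> 6750 XT ~{gain:.0%} faster")
```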
 
I can't help but think that your platform is holding the new card back, with the 8600 and a PCIe 3 motherboard.
PCIe 3 makes only a minimal difference versus PCIe 4; the platform itself may just not be strong enough.

If it were me, a fresh OS install would also remove any leftover greencrap.
 
@VTTX, I apologize for not having read the whole thread since the day before yesterday; I've been heavy-headed due to COVID-19/flu/a really bad cold.

Have you tried restoring the default graphics settings in-game?
 

Not sure if you are aware, but AMD completely rewrote its DX11 driver with the 22.5.2 update (which was three quarters ago).

AMD is close to Nvidia nowadays when it comes to DX11. RDNA in general is much better in DX11 than GCN or Polaris was; that driver update just further closed the gap.

Mind you, COD Vanguard is a DX12 game, so the fact that he's having the same issue with that game kind of disproves the idea that AMD's DX11 performance is the problem here. It couldn't hurt to try some of his other games in DX12 as you say, at least for the ones that support both APIs.

On top of the above, AMD has lower CPU overhead and is the superior choice for those with older or lower end CPUs:

A good chunk of the replies in this thread make a mockery of a tech forum that is supposed to provide help to those who need it. Many of you avoided even answering the OP's question, instead throwing around assumptions without so much as a rudimentary Google search to validate the information before posting.


To the OP: what kind of CPU and GPU utilization are you seeing? The best way to gather this kind of information is to have HWiNFO running with logging enabled; that way it can collect info while you game.

Given that you are seeing normal performance in a benchmark but lower-than-normal performance in games, this could be a case of the GPU core, VRAM, or GPU VRM overheating. Just a preliminary guess though, as no variables have been isolated yet.

EDIT **

It may also be beneficial to lower the GPU power limit incrementally and see if it has any effect on performance; this is just to determine whether the PSU is failing to provide enough power.
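If it helps, HWiNFO's logging writes a CSV you can summarize afterwards; a rough sketch (the column names below are placeholders of mine, since the exact headers vary per system, so copy them from your own log):

```python
import csv

# Summarize sensor columns from a HWiNFO CSV log: min/avg/max per column.
# Header names differ per machine; the two below are placeholder examples.
def summarize(path, columns):
    with open(path, newline="", encoding="utf-8", errors="ignore") as f:
        rows = list(csv.DictReader(f))
    for col in columns:
        vals = []
        for row in rows:
            try:
                vals.append(float(row[col]))
            except (KeyError, TypeError, ValueError):
                continue  # skip blanks and non-numeric cells
        if vals:
            print(f"{col}: min {min(vals):.1f}  "
                  f"avg {sum(vals)/len(vals):.1f}  max {max(vals):.1f}")

summarize("hwinfo_log.csv", ["GPU Temperature [°C]", "Total CPU Usage [%]"])
```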
 
It's funny that the OP only posted a pic of a benchmark and not of a game, and that he isn't posting in his own thread anymore; it makes me laugh! Seems the NV bots hit TPU XD
 