
4080 Super - Nvidia framerate limiter = performance problems?

Joined
May 11, 2025
Messages
100 (2.70/day)
System Name ReactorOne
Processor AMD 9800X3D
Motherboard MSI MAG X870E TOMAHAWK WIFI
Cooling ARCTIC Liquid Freezer III Pro 360
Memory G.SKILL Flare X5 32GB DDR5-6000 CL28
Video Card(s) PNY RTX 4080 Super 16GB Verto OC
Storage 2TB Samsung 990 Pro / 2TB WD SN850 / 8TB WD Red Plus
Display(s) LG 32" 1440p 16:9 165hz
Case Fractal S
Audio Device(s) Aune X1S Anniversary / Edifier R1700BT
Power Supply Dark Power 13 1000W
Mouse Evoluent VerticalMouse 4
Keyboard Corsair K65 Plus Wireless 75% Mechanical, Hotswappable Switches
Software Windows 11 Pro
I am in the habit of limiting my framerate through the Nvidia Control Panel to control the heat and noise in my system. In the past this has worked well and performance was rock solid using this method.

When I lock the framerate, the system suddenly struggles to hit the frame cap, even though it was previously producing far more frames than the cap I choose. I initially thought this was a problem specific to one game (Helldivers 2), so I just waited for a patch. Recently I noticed the same symptoms in theHunter: Call of the Wild, so apparently it's more than one game; possibly there's a graphics API feature that triggers the behavior, I don't know. In that game, my system hits 130-150 fps unlocked, but if I cap it at 80, it struggles to hit that and often hangs out below the cap in the 60s and 70s.
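A frame cap is easier to reason about in frame-time terms: an 80 fps cap gives each frame a 12.5 ms budget, so dips into the 60s mean individual frames are taking 15 ms or more. A quick illustrative converter (plain Python, nothing game-specific):

```python
def fps_to_frametime_ms(fps: float) -> float:
    """Convert a framerate target to its per-frame time budget in milliseconds."""
    return 1000.0 / fps

def frametime_ms_to_fps(ms: float) -> float:
    """Convert an observed frame time back to an effective framerate."""
    return 1000.0 / ms

# An 80 fps cap allows 12.5 ms per frame; frames taking ~15.4 ms land at ~65 fps.
print(fps_to_frametime_ms(80))               # 12.5
print(round(frametime_ms_to_fps(15.4), 1))   # 64.9
```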

My specs:
Intel 12700k cooled by NH-D15
Corsair Vengeance LPX 16 GB (2 x 8 GB) DDR4-3600 CL14 Memory
MPG Z690 EDGE WIFI DDR4 Motherboard
4080 Super
WD SN850 NVME SSD
be quiet! Dark Power 13 1000 W 80+ Titanium

Also, I don't believe it's directly related to this problem, but I'm getting high hotspot temps from my GPU. I'm thinking about repasting, but maybe I should try to RMA? It's a PNY card. The hotspot temps are in the low 90s, and it makes me nervous about the card's longevity. The card doesn't seem to throttle based on the hotspot.

Thanks for looking!
 
What Nvidia driver version are you on? The 57x.xx releases have been having loads of issues. Wouldn't surprise me if this was another one. The latest that's considered stable is 566.36. I'd try downgrading to that if you haven't yet.
 
Thanks for the info.

I was up to date on my drivers at the time you asked. Since then, I've rebooted into safe mode and used DDU to clean out the Nvidia drivers, then installed version 566.36 while still offline.

Unfortunately, the behavior is the same. I noticed something else, though: turning on V-Sync in the game options has the same effect as capping the framerate in the Nvidia driver (it reduces the overall framerate below a cap the game was easily exceeding before).
 
If the game you're trying to play has no anti-cheat and the driver limiter doesn't work, try Special K. It has an excellent frame rate limiter.

Also - unrelated to the frame limiter issue - you must have upgraded recently, I take it? Bear in mind that you'll be routinely running into a similar problem I am currently facing with my machine... but twice as bad. You're on 16 GB of RAM, using a 16 GB GPU. Expect funny behavior and inconsistent performance on newer games as Windows will overcommit memory beyond your system's capabilities, and that will lead to crashes and lower than expected framerates. Running into the same with some games on my 32 GB machine ever since I upgraded to the RTX 5090.

566.36 is a solid driver, no need to upgrade from that on Ada yet. The latest Release 575 drivers work well, but I'd stick to 566 for now, until Nvidia works out this mess with the current release branch.
 
Very interesting app, Special K. That's new to me. Seems to work alright in Call of the Wild. I wish I could also use it for Helldivers 2.

What does the amount of VRAM have to do with the amount of DRAM? The GPU is newish; I got it when it came out. I didn't notice any issues at first and was pretty happy with the performance I got when capping frames at 80 in really taxing games like Helldivers.

I'm kicking around the idea of grabbing a 9800X3D tomorrow from Micro Center, though I'm not sure if that will actually help me. On paper my CPU shouldn't be a big bottleneck at 1440p, but Special K showed it was bottlenecking me in Call of the Wild, which I found surprising.

I can't shake the feeling that something is wrong with my setup that wasn't before. I don't think using the framerate limiter in the Nvidia Control Panel should cause this kind of performance instability; it never did before.
 

Yeah, Special K is a pretty neat toolkit. It lets you configure API-level stuff behind the scenes and has targeted fixes for some games, like NieR: Automata. It also supports HDR enhancements, on-the-fly texture replacements, automatic DLSS version upgrades and preset configuration, and it can inject Reflex support into games that don't have it. Lots of stuff, really; it's pretty awesome. Unfortunately, since it's an injectable DLL tool that directly changes API functions, it's pretty much indistinguishable from a cheat in the way it works. It can, will, and to be honest should get you banned from competitive games.

As for the RAM and VRAM thing, it's mostly down to the Windows memory manager and how it requests resources; there's a (really technical, developer-facing) article on it:


The tl;dr is basically that to get the best out of your GPU today, I guess you need between 1.5x and 2x the VRAM capacity in system RAM, or at the very least a lot of page file space. A lot of people jumping on the recent VRAM hype train are about to get hit with this; I personally didn't have weird performance drops or many out-of-memory crashes on the same system back when I had the RTX 4080. I'm waiting for the 8000 MT/s+ 2x64 GB kits to release (G.Skill should have them soon); I need those since I run a ROG Apex board that only has two memory slots. Failing that, I'll probably grab a 2x48 GB kit at a good price sometime, which should do me well enough too.
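That 1.5-2x rule of thumb is easy to put into numbers; the helper below is purely illustrative (the function name and factors are made up for this sketch, not from any official guidance):

```python
def recommended_system_ram_gb(vram_gb: float, factor: float = 1.5) -> float:
    """Rule-of-thumb system RAM for a given VRAM capacity (factor = 1.5 to 2.0)."""
    return vram_gb * factor

# By this guideline a 16 GB card wants 24-32 GB of system RAM,
# so a 16 GB system falls short of even the low end.
print(recommended_system_ram_gb(16, 1.5))  # 24.0
print(recommended_system_ram_gb(16, 2.0))  # 32.0
```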
 
I use MSI Afterburner/RivaTuner to limit my framerates. It was working fine on my RTX 3090 up until I sold it two months ago, and it seems to be working fine on the RTX 4080 and 4090 with the latest Nvidia drivers; at least I didn't notice anything weird. You can give that a try.

I love Special K! Especially the one with almonds. Not the healthiest of breakfasts, but it tastes good with milk.


I don't know; I have yet to see any of my 24 GB graphics cards use more than 20 GB of VRAM, except maybe in Diablo IV. That game just uses whatever amount of VRAM is available; maybe it was a bug, but usage went up to 22 GB.
 

Newer games are super VRAM hungry; expect this to change in the near future. Oblivion Remastered is using ~11 GB at what's effectively 1080p (4K with Performance DLSS).

 
I must hang my head in shame. :oops: Part of my problem was self-inflicted: I forgot I had my PC set to a Power Saving profile, which I'd customized to limit the CPU to 50% power.

It's not a total waste, though; I learned about Special K, which is pretty cool, and the conversation about RAM is interesting. I do have a sneaking suspicion that HD2 doesn't like my 16 GB of RAM, even though it doesn't use it all.

Maybe someone can give me advice on whether my GPU temps are safe while we're here. I just finished a game of HD2 at 1440p, 80 fps (mostly), at native scaling, and according to HWiNFO64 the overall GPU temperature maxed at 79.6°C while the hotspot hit 102.1°C. I can see in real time (thanks to the OSD) that the delta is sometimes 20°C or a bit more. The average overall GPU temp was 68.9°C and the average hotspot temp was 84.4°C.
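For what it's worth, the core-to-hotspot delta is straightforward to compute from logged sensor readings; here's a minimal sketch with hypothetical sample pairs standing in for an HWiNFO CSV export:

```python
# Pairs of (gpu_temp_c, hotspot_temp_c) samples, e.g. parsed from an HWiNFO log.
# These values are made up for illustration.
samples = [(68.0, 84.0), (72.5, 95.0), (79.6, 102.1), (66.1, 83.0)]

deltas = [hot - core for core, hot in samples]
max_delta = max(deltas)
avg_delta = sum(deltas) / len(deltas)

print(round(max_delta, 1))  # 22.5 (worst-case core-to-hotspot spread)
print(round(avg_delta, 1))  # 19.5
```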
 
The RTX 4080/4080S can run up to 100°C, with the hotspot into the 115°C range. You're good. 80°C is around where it starts to take its foot off the pedal and reduce clocks a bit; it'll run a bit faster if you increase fan speed. BTW, it'll never use all your RAM, ever, because if it does, Windows will crash ;)
 
That's reassuring to hear about the temps, but it still seems a bit sketchy on a conceptual level. My last GPU was a GTX 1070, and I nursed that along for something like seven years. I'm wondering if the 40 series can survive as long at such temps.

Regarding the RAM usage, I don't see it sitting at 15 or 15.5 GB or anything like that, but I'm open to the idea that the OS might be conservative. BTW, because I chose DDR4 at the time to save money, it's hard to find nice RAM kits for my system now. If I wanted to go to 32 GB, I'd have to settle for CL16 DDR4-3600, while I currently have CL14 DDR4-3600. Do you think it would be worth it?

This is part of the reason I was considering the 9800X3D: I figured I could fix my RAM mistake and "future proof" the CPU at the same time. I might gain frame-time consistency benefits, even though I'm realizing that's way too much CPU for my GPU.
 

The GP104 used in the 1070 has about the same temperature rating, and my GTX 480s from back in 2010 still worked by the time I passed them on in 2022. Should be fine.
 
I wonder if this is the same issue I discovered.

To summarise: my issue is that I've noticed high memory commit on the system, which is especially apparent in GPU-accelerated programs, with games being the most affected.

I did notice, however, that the commit wasn't actively being used, so to compensate I set a large page file. This provides backing for the commit without actually adding paging I/O, as it's only used to map unutilised commit; utilised memory still goes in RAM. I was convinced I had bugged out my system and that a clean OS reinstall would fix it, but your post makes me curious.

The problem is not as bad when setting the global Nvidia setting to not prefer sysmem fallback.

However, I'm not convinced we have the same issue, at least not at the same intensity, as I've had the problem on both my 10 GB 3080 and 16 GB 4080 Super, whereas you only started getting it with the 5090 on your 32 GB machine.

I also don't recall it happening when I briefly had the 3080 plugged into my old 9900K board on the same build of Windows, although I might check again to be sure.
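The commit-versus-resident distinction described above can be illustrated with a toy model (a deliberately simplified sketch, not how the Windows memory manager is actually implemented): allocations count against the commit limit (RAM plus page file) immediately, but only pages that are actually touched consume physical RAM.

```python
class ToyMemoryManager:
    """Toy model: commit charge vs. resident memory (illustrative only)."""

    def __init__(self, ram_pages: int, pagefile_pages: int):
        self.commit_limit = ram_pages + pagefile_pages
        self.committed = 0   # pages reserved (backed by RAM or page file)
        self.resident = 0    # pages actually touched and living in RAM

    def alloc(self, pages: int) -> bool:
        """Reserve pages; fails once the commit limit would be exceeded."""
        if self.committed + pages > self.commit_limit:
            return False     # out of memory: commit exceeds RAM + page file
        self.committed += pages
        return True

    def touch(self, pages: int):
        """Fault pages in; only now do they consume physical RAM."""
        self.resident += pages

mm = ToyMemoryManager(ram_pages=16, pagefile_pages=32)
mm.alloc(30)                       # large commit succeeds: the page file backs it...
mm.touch(10)                       # ...but only 10 pages are actually resident
print(mm.committed, mm.resident)   # 30 10
assert not mm.alloc(20)            # 30 + 20 > 48: commit limit reached
```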
 
I've ordered some PTM7950 for my GPU. The hotspot temps were getting really worrying: on a fairly cool night with an ambient temp around 20°C and the side panel off, I was getting hotspot spikes up to 107°C, and I was seeing regular temperature deltas (hotspot to package temp) of 20°C+, sometimes as high as 30°C. There's no way I wouldn't have had thermal throttling later this summer. This is the PNY 4080 Super, btw; it's supposed to be one of the cooler 4080S cards, and it *was* cooler when I first got it than it is now.

The TPU review for my card: https://www.techpowerup.com/review/pny-geforce-rtx-4080-super-verto/

BTW, I've also heard about a phenomenon where power monitoring in RivaTuner can actually cause stutter for some people. I've been using HWiNFO, but I wouldn't be surprised if the same thing were happening. One guy who had the issue said he had multiple machines and it didn't affect all of them, but he was also working with a fresh BIOS and a new Windows install, so there was no indication the cause was user error from overclocking or something.
 
Can you please check and tighten the screws on the back side of the GPU that hold the backplate to the card? I had the same hotspot problem on my previous GPU, an RTX 3090, and it was solved by tightening the screws.
 
Were the screws noticeably loose?
 
I wanted to report back. I just applied PTM7950 to my PNY 4080 Super. Mind flipping BLOWN.

I ran 3DMark Steel Nomad after installing it and whereas before I was getting hotspot spikes of about 107°C, this time my hotspot was 80.1°C.

Before re-pasting, the delta between my hotspot and average temp was 20°C-30°C under load. Now, the maximum temps reached in this test had an 11.6°C difference!

What's more, my GPU fans were constantly hitting max RPM before in order to achieve those awful temps, and this time it looks like they maxed out at less than half their capacity: 1375 RPM instead of 3000.

Needless to say, I'm very pleased. The kit I bought was enough for two applications, and I'm now considering putting the other portion on my CPU, which reports that while all cores idle at < 30°C, the Tctl/Tdie somehow sits at 43°C.

BTW, I found a scratch on the heatsink of my GPU slightly overlapping the area that contacts the main die. I can't imagine how it happened, but it was definitely done at the factory, since I only used alcohol wipes to remove the old paste. Maybe the original thermal paste couldn't properly fill the extra gap created by this scratch. I'm attaching a pic of the scratch.

Also, I forgot to mention that I did try tightening the screws a bit before I resorted to repasting; no change.
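For the curious, the before/after figures from this repaste work out to sizable drops; the helper below is just illustrative arithmetic using the numbers reported above:

```python
def percent_change(before: float, after: float) -> float:
    """Signed percent change from a 'before' value to an 'after' value."""
    return (after - before) / before * 100.0

# Peak hotspot: 107°C -> 80.1°C; max fan speed: 3000 RPM -> 1375 RPM.
print(round(percent_change(107.0, 80.1), 1))   # -25.1 (% drop in peak hotspot)
print(round(percent_change(3000, 1375), 1))    # -54.2 (% drop in max fan RPM)
```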
 

Attachments

  • IMG20250526001517.jpg (1.4 MB)
Yeah, PTM is good stuff. Your temps should improve a little further still after a few more thermal cycles.
 
I've heard about that, and it would be pretty cool (heh, pun). Right now my idle temps are actually about 5°C below the review sample's, although the hotspot is 2.1°C higher under load. Differences in ambient temps and fan placement could account for some of that, but I guess we'll see.
 