
HAGS (hardware-accelerated GPU scheduling): enable or disable?

I know this is an MS technology, but it only works with GPUs. My 9700K @ 5.18 GHz is almost never pegged at 100% usage on all cores by any game, but my 4090 is regularly hammered at nearly 100% usage, so why would I want to put more load on the GPU?

Here's a brief write-up on it:

The Windows Display Driver Model (WDDM) GPU scheduler is responsible for coordinating and managing multiple different applications submitting work to the GPU.
This relies on “a high-priority thread running on the CPU that coordinates, prioritizes, and schedules the work submitted by various applications.”
The GPU may be responsible for rendering, but the CPU bears the load of preparing and submitting commands to the GPU. Doing this one frame at a time is inefficient,
so a technique called frame buffering has become common, where the CPU submits commands in batches. This increases overall performance,
which could manifest as an increased framerate, but it also increases input latency. When a user hits a button,
nothing will happen until the GPU gets to the next batch of submitted frames. The larger the batch, the longer the potential wait.
The Microsoft blog post describes frame buffering as practically universal, but some games allow adjusting the size of the buffer or disabling it entirely.

Hardware-accelerated GPU scheduling offloads the work from that high-priority CPU thread and instead gives it to “a dedicated GPU-based scheduling processor.”
The fact that cards as far back as Turing have the hardware to support this feature implies that it’s been in the works for some time now.
Microsoft describes the handover as “akin to rebuilding the foundation of a house while still living in it,” in the sense that this is a huge change that
will ideally be invisible to the end user. The most explicit description offered in the post is this: “Windows continues to control prioritization and
decide which applications have priority among contexts. We offload high frequency tasks to the GPU scheduling processor, handling quanta management and
context switching of various GPU engines.” Nowhere in the post does Microsoft directly claim that applications will run faster; instead,
they go out of their way to say that users shouldn’t notice any change.

That hasn’t stopped anyone from looking for magical performance improvements, though. NVIDIA and AMD have encouraged this with vague-but-positive wording.

From NVIDIA: “This new feature can potentially improve performance and reduce latency by allowing the video card to directly manage its own memory.”
From AMD: “By moving scheduling responsibilities from software into hardware, this feature has the potential to improve GPU responsiveness and to allow
additional innovation in GPU workload management in the future.” Both of these descriptions allude to latency, as does the Microsoft post.
This opens up two areas of improvement for testing, summarized by the description in the graphics menu, which reads “reduce latency and improve performance.”
The first is input latency, which we can and have tested for during our coverage of Google Stadia, but we don’t think this is as big a deal as some people
expect it to be. Microsoft’s blog post describes hardware-accelerated GPU scheduling as eliminating the need for frame buffering, which is a known source of input latency.

Based on that description, hardware-accelerated GPU scheduling shouldn’t reduce input latency any more than simply disabling frame buffering,
which is already a built-in option in many games, and is further often an option in the GPU driver control panels. The second area of potential improvement is in the
framerate of CPU-bound games, since some amount of work is offloaded from the CPU. The logical assumption is that this effect would be most noticeable on low-end CPUs
that hit 100% load in games, more so than in GPU-bound scenarios or when GPU VRAM is the constraint.
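
To put numbers on the frame-buffering trade-off the quoted write-up describes, here's a toy Python model. This is my own illustration, not anything from Microsoft's post; the 16.7 ms frame time and the queue depths are assumptions:

```python
# Toy model of pre-rendered frame queues (illustrative numbers only):
# an input registered now isn't shown until the frames already queued
# ahead of it, plus the frame currently being drawn, have been presented.

FRAME_TIME_MS = 16.7  # one frame at ~60 FPS (assumed)

def worst_case_input_latency_ms(queued_frames: int) -> float:
    """Worst-case added latency for a given pre-rendered queue depth."""
    return (queued_frames + 1) * FRAME_TIME_MS

for depth in (0, 1, 2, 3):
    print(f"{depth} pre-rendered frame(s): "
          f"~{worst_case_input_latency_ms(depth):.1f} ms worst-case input lag")
```

That's the write-up's "the larger the batch, the longer the potential wait" in numbers: the queue buys throughput at the cost of latency.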

So is it worth enabling? Or is it a hit-and-miss proposition like SAM/Resizable BAR?
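
For reference, if you want to verify what your machine is actually set to, here's a small read-only Python sketch. It reads the HwSchMode registry value (2 = enabled, 1 = disabled) that the Settings toggle is commonly reported to write; treat the path and value as the community-documented location rather than an official API, and note that changes only take effect after a reboot:

```python
# Read-only check of the HAGS state on Windows (Python 3, stdlib only).
# HwSchMode under this key is the commonly documented switch the
# Settings toggle writes: 2 = enabled, 1 = disabled.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\GraphicsDrivers"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        value, _ = winreg.QueryValueEx(key, "HwSchMode")
        state = "enabled" if value == 2 else "disabled"
        print(f"HAGS is {state} (HwSchMode={value})")
except FileNotFoundError:
    print("HwSchMode not set; HAGS follows the OS/driver default")
```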
 
In 2023, on Windows 11 22H2:

I'd say HAGS is worth trying on the RTX 40 series. It's not quite a guaranteed uplift like SAM, but it seems more consistently favourable than ReBAR on Nvidia (ReBAR is placebo more often than not, since most titles aren't whitelisted). At least from seat-of-the-pants feel and a few occasional benchmarks, HAGS seems closer to SAM for Nvidia, though obviously not as significant as SAM is for RDNA3.

Earlier accounts of HAGS from past years on Win 10 are not quite so relevant, as it seems things have changed quite a bit since then. I kept HAGS off on Win 10 due to some performance issues.

Last I checked HAGS is not available for RDNA on Win 11, so it's more an Nvidia thing now.
 
DLSS 3 requires HAGS to be enabled, so on GeForce RTX 40 cards, always turn it on.
 
I've always had it on since it was a thing. I think it helped, and I've had no problems with it.
 
I have it enabled. I didn't try it on/off in games, but I do know I finally have a stutter-free Tales of Zestiria. I did a bunch of changes at once, though, and one of them was enabling HAGS.

After reading your description I am definitely keeping it on. If you think games have a tendency to have one heavily loaded thread (which is why per-core performance remains important for gaming), this should alleviate that, even if only a little bit. Also, CPU bottlenecking is nearly always much more painful than GPU bottlenecking. The only reason to turn it off would perhaps be if one is regularly saturating their GPU and has spare CPU cycles on the main game thread (my GPU is rarely saturated).
 
For AAA games like Cyberpunk 2077 or even Metro Exodus Enhanced Edition at max. settings, where the GPU is already being heavily used, it would seem like HAGS would be a bad idea?
 
For AAA games like Cyberpunk 2077 or even Metro Exodus Enhanced Edition at max. settings, where the GPU is already being heavily used, it would seem like HAGS would be a bad idea?
No. Leave it on. Your GPU contains more hardware than just what drives your framerate.
 
For AAA games like Cyberpunk 2077 or even Metro Exodus Enhanced Edition at max. settings, where the GPU is already being heavily used, it would seem like HAGS would be a bad idea?
On the contrary: if you deactivate it, you can't use DLSS 3 on your 4090 anymore. I've already used it there; it helps a lot with FPS at 4K with everything on Ultra plus path tracing.
 
I'd like to know what the hell Nvidia is doing to fool Windows in the drivers. Having to enable it just to be able to use DLSS 3, which we know already induces latency, contradicts all three of their statements about it lowering latency. Unless it's part of Nvidia Reflex too. Again, it seems like they're fooling Windows.
Technically speaking, Nvidia does not have a true hardware scheduler inside their GPUs anymore. The GigaThread Engine was the last true hardware scheduler Nvidia ever used. Nvidia's own documents show that they have a reorder/scheduler in the CPU driver; it's usually listed along with the DLSS 2 updates. I can also tell you hardware-accelerated scheduling doesn't work correctly with SLI or mGPU on my RTX 2080 Tis so far. It caused weird stutter even just sitting in Windows.
 
I'd like to know what the hell Nvidia is doing to fool Windows in the drivers. Having to enable it just to be able to use DLSS 3, which we know already induces latency, contradicts all three of their statements about it lowering latency. Unless it's part of Nvidia Reflex too. Again, it seems like they're fooling Windows.
Technically speaking, Nvidia does not have a true hardware scheduler inside their GPUs anymore. The GigaThread Engine was the last true hardware scheduler Nvidia ever used. Nvidia's own documents show that they have a reorder/scheduler in the CPU driver; it's usually listed along with the DLSS 2 updates. I can also tell you hardware-accelerated scheduling doesn't work correctly with SLI or mGPU on my RTX 2080 Tis so far. It caused weird stutter even just sitting in Windows.
The other things you said aside, I think it's a technical prerequisite for Frame Generation, i.e. DLSS 3. It's also a fact that DLSS 3 forces Nvidia Reflex on, so there's that about "latency".

I also never had any downside from activating "HAGS". I played competitive games - I would've noticed.
 
The last time I enabled HAGS in Win10 with an NVIDIA GPU, the only thing it did was prevent YouTube videos on my secondary monitor from playing while I was running a fullscreen game on my primary. So I leave it off.
 
The majority of the time, the GPU is the bottleneck, not the CPU. So it looks like your CPU is fine.
 
The majority of the time, the GPU is the bottleneck, not the CPU. So it looks like your CPU is fine.
Can't remember the last time my GPU was bottlenecking me, in all honesty. The 1080 Ti allowed me to play with SGSSAA in almost every game, and the 3080 made it every single game; my only GPU bottleneck now is VRAM capacity (soon to be solved, a friend offered me his old 3090). I play 60 FPS capped (sometimes even 30, as it's more cinematic), and usually any performance issues, stutters and such, are CPU bottlenecks.

I have now tested all of the following games on my new platform (CPU and board upgrade). Most of these use less than 30% GPU utilisation, with HAGS enabled, although I haven't tested without HAGS on the new system.

FF7, modded via the FFNx driver and various mods: considerable improvement.
FF13-2: considerable improvement, now almost as good as Series S; on the 9900K it really struggled.
Tales of Zestiria: all stutters gone.
FF7 Remake: all micro stutters gone; the semi-freezes are now small stutters or gone.
Lightning Returns: all frame drops and judders gone (likely it was cache starvation, as on consoles it had something akin to a large L3 cache to compensate for the lack of RAM).
FF7 Remake: issues with delayed texture loading significantly mitigated; still not as good as PS5, but a big improvement. The game no longer runs worse on NVMe vs SATA (I think the 9900K was bottlenecking NVMe).
StarCraft 2: now maintains the 60 FPS cap; it couldn't on the 9900K.
All my emulators: considerable improvement, especially RPCS3.
FF15: loading times almost halved, and texture pop-in gone, also likely due to removal of the CPU/NVMe bottleneck.

Then we have all the reviews of new games on the market: more and more evidence that platform updates can significantly affect performance, particularly in mitigating stutters (lows).

I got all my improvements even though I am still on DDR4; it potentially gets even better with DDR5 or an X3D chip.

Now, of course, for those who like to play at insane frame rates and constantly run the GPU at 99%, what you said applies. But bear in mind that when a GPU is saturated, the bottlenecking tends to be much more graceful; as an example, G-Sync won't prevent CPU-caused stutters.

I am glad there has been more testing of CPU's on games lately, and it is interesting how much of an impact they can have. :)

I think some of my improvements come not just from horsepower but also from scheduling improvements: before I had 8 real cores, now I have 16. Putting svchost and most other background stuff on E-cores has, I think, also helped by removing scheduling bottlenecks on my P-cores.
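
For what it's worth, here's a hedged Python sketch of the "push background stuff onto E-cores" idea using psutil (pip install psutil). The process name and the logical-CPU indices for the E-cores are hypothetical placeholders; on many hybrid Intel chips the P-core hyperthreads occupy the low indices and E-cores follow, but check your own topology (e.g. in Task Manager) before using real values:

```python
# Sketch: restrict a background process to E-cores via CPU affinity.
# E-core indices and the process name below are hypothetical examples.
import psutil

E_CORES = list(range(16, 24))  # hypothetical: logical CPUs 16-23 are E-cores

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == "some_background_task.exe":  # hypothetical name
        try:
            proc.cpu_affinity(E_CORES)  # scheduler may now only use E-cores
            print(f"Pinned PID {proc.pid} to E-cores")
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            print(f"Could not change PID {proc.pid} (needs elevation, or it exited)")
```

Note that Windows 11's own hybrid scheduling (Thread Director) usually handles this automatically; manual affinity is only worth trying when the scheduler demonstrably gets it wrong.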
 