
TechPowerUp Hosts NVIDIA DLSS Client Libraries

btarunr

Editor & Senior Moderator
TechPowerUp is hosting a repository of NVIDIA DLSS client libraries that game developers bundle with their DLSS-compatible games. This library goes by the file name "nvngx_dlss.dll." Recent discussions on tech forums and Reddit revealed that the DLSS libraries, which are usually located in the game's installation folder, are user-swappable, meaning that one can replace their DLSS library file with a different version that enables a different set of features, or potentially even image-quality or performance improvements. So far, people have shared these files through sites like Mega, which of course introduces a malware risk. That's why we decided to host these files ourselves in our Downloads section. We currently have 23 versions, all hand-verified to be unmodified originals, ranging from 1.0.0 to 2.2.10. Click the "show older versions" link below the "Download" button to list the other versions. We will keep adding to the collection as we come across more of these. Have something we don't? Comment below.
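The swap itself amounts to backing up the game's bundled DLL and copying the downloaded one over it. Here is a minimal, hypothetical Python sketch of that process; the function name and paths are examples, not an official tool:

```python
# Hypothetical helper: back up a game's bundled nvngx_dlss.dll, then drop in
# a different version. Paths here are illustrative, not real install locations.
import shutil
from pathlib import Path

def swap_dlss_dll(game_dir: str, new_dll: str) -> Path:
    """Back up the game's nvngx_dlss.dll, copy new_dll in its place,
    and return the path of the backup."""
    target = Path(game_dir) / "nvngx_dlss.dll"
    if not target.exists():
        raise FileNotFoundError(f"No DLSS DLL found in {game_dir}")
    backup = target.with_suffix(".dll.bak")
    if not backup.exists():  # never overwrite an existing backup of the original
        shutil.copy2(target, backup)
    shutil.copy2(new_dll, target)
    return backup
```

Keeping the `.bak` copy means the original can always be restored if the game rejects the newer DLL.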

DOWNLOAD: NVIDIA DLSS DLL



View at TechPowerUp Main Site
 
Been wondering if 2.2.10 brings any improvement vs 2.2.6.
 
For those wondering, this doesn't work on DLSS 1.0 games (BFV, for example, just greys out DLSS).
 
I don't get why Nvidia doesn't push devs to update their games with the new DLSS version.
 
If amd releases FSR dlls etc will TPU host them aswell?
 
If amd releases FSR dlls etc will TPU host them aswell?
Absolutely, but from what I understand (and have seen in the FSR titles I tested), FSR doesn't use a separate DLL.

The FSR shaders just get integrated into the game's files like its own shaders.
 
Absolutely, but from what I understand (and have seen in the FSR titles I tested), FSR doesn't use a separate DLL.

The FSR shaders just get integrated into the game's files like its own shaders.
He just provides d3d11.dll and gta5_fsr.ini, that's it. Yes, he modded those to fit GTA V.
 
Nóooooo, waaait... yes please.
And thank you,
this made testing later versions easier.
 
TPU is just trying to machine-learn how to let users apply their own inference as to which DLSS configuration is best. Personally, I felt the R6S configuration was the best of the 3 I saw recently.
 
Is there a record of any game dev actually updating the DLSS version post-release?

No one would even need to go update the DLL if the developers did this via the normal patching process. Obviously, time is needed to determine whether a newer version brings a benefit (which has already been proven in multiple examples), and then to validate that nothing broke due to API changes. It just seems that Nvidia needs to push developers to keep their DLSS versions up to date.
 
Is there a record of any game dev actually updating the DLSS version post-release?
Control went from 1.0 to '1.9' to 2.1, and Metro Exodus shipped with 1.0, while the new Enhanced Edition (which anyone who bought the original also gets) is on 2.1.

Although I agree: if you're shipping a DLSS 2.0 game, those devs should 100% spend the minimal amount of time needed to drop in the new DLL, test it, and patch the game with it, IF this is allowed at all.
 
Big round of applause here. Once I saw that people could just update the .dll, I knew it'd end up hosted somewhere.

aquinus must be having a bad day, this got weird
 
If DLSS's machine-learning AI weren't somewhat lacking in its inference, it would do this automatically anyway, so if we're to blame anyone, perhaps it should be Nvidia, for marketing that is a bit misleading: overhyping while under-delivering.

This begs the question of why this isn't also part of DLSS's inference process across games, where it checks several DLSS DLL versions and picks the most optimal one for the game. If anything, it's a hands-off opportunity for Nvidia to do better with DLSS until they can start doing what I mentioned, because they should be anyway.

Nvidia would be playing with fire by going after TPU over this matter, and I'm pretty certain they're fully aware of it. Worst case, what is Nvidia going to do? Send them a cease and desist, after which TPU writes an article about it and maybe holds a bit of a grudge? Even if TPU doesn't, many in the tech world certainly might harbor a fair degree of resentment towards Nvidia over the matter.

I can see where it could be considered a grey area, but is it something Nvidia wants to concern itself with at this point in time? Instead of making DLSS more compelling, they'd effectively be doing the exact opposite, retroactively. It might stir developer backlash towards FSR and gamers switching over to AMD; it would almost be seppuku for Nvidia to do so. I don't think the tech community as a whole would stand by Nvidia doing so.
 
I moved all the "this is piracy" comments to a separate thread, where you can continue the discussion.

This thread is for what difference it actually makes, technical details, etc.
 
I moved all the "this is piracy" comments to a separate thread, where you can continue the discussion.

This thread is for what difference it actually makes, technical details, etc.
you get it, respect
 
I've not tried 2.2.10, but 2.2.6 in Metro EE and Doom Eternal seems fine; can't say I saw much, if any, ghosting before, especially in DOOM.. ripping and tearing at a locked 140 fps on Nightmare difficulty, not a lot of time to nitpick!
 
I wonder if something like this would trigger anti-cheat such as BattlEye in Rainbow Six: Siege, or in Call of Duty?
 
I wonder if something like this would trigger anti-cheat such as BattlEye in Rainbow Six: Siege, or in Call of Duty?
It depends. If the anti-cheat checks whether the file is digitally signed by NVIDIA, or is a "known" DLSS DLL version, then it should pass. If it looks for that one specific file, then it won't.
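The "known version" check could be as simple as hashing the DLL and comparing against an allow-list. This is a hypothetical sketch of that idea in Python; the digest values are made up, and real anti-cheat would ship actual digests (or verify the Authenticode signature instead):

```python
# Hypothetical sketch: whitelist DLSS DLLs by SHA-256 digest rather than by
# exact file. The allow-list values below are placeholders, not real hashes.
import hashlib
from pathlib import Path

def dlss_dll_allowed(path: str, known_hashes: dict) -> bool:
    """Return True if the file's SHA-256 digest is on the allow-list."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest in known_hashes

# Example allow-list mapping digest -> DLSS version (illustrative only).
KNOWN_DLSS_HASHES = {
    "0000placeholder0000": "2.2.6",
    "1111placeholder1111": "2.2.10",
}
```

Under this scheme, any unmodified DLL on the list passes regardless of which game originally shipped it, while a tampered or unknown build fails.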
 
I moved all the "this is piracy" comments to a separate thread, where you can continue the discussion.

This thread is for what different it actually makes, technical details, etc.

It's not piracy, but the binaries are signed NVIDIA modules provided by NVIDIA, so you'll get a nice threatening letter from them if they actually care.
Doubt it though.

The main issue is going to be that a lot of the improvements people see are simply placebo. The DLSS files are just runtimes, with the game itself setting the values; newer versions don't carry any new inferenced data for a game to use, just new features that games can't use because they don't know they exist, or bug fixes (or new bugs).
 
DLSS 2.0 doesn't use per-game data

That's incorrect.

What 2.0 doesn't require is for the inferencing model to be retrained per game: all 2.0 games are inferenced through the same generalized model with the same retained datasets, so every new game that goes through the model contributes to progressively improving the technology for games down the line.

Nvidia said:
DLSS 2.0 has two primary inputs into the AI network:

  1. Low resolution, aliased images rendered by the game engine
  2. Low resolution, motion vectors from the same images -- also generated by the game engine

The NVIDIA DLSS 2.0 Architecture

A special type of AI network, called a convolutional autoencoder, takes the low resolution current frame, and the high resolution previous frame, to determine on a pixel-by-pixel basis how to generate a higher quality current frame.

During the training process, the output image is compared to an offline rendered, ultra-high quality 16K reference image, and the difference is communicated back into the network so that it can continue to learn and improve its results. This process is repeated tens of thousands of times on the supercomputer until the network reliably outputs high quality, high resolution images.

Once the network is trained, NGX delivers the AI model to your GeForce RTX PC or laptop via Game Ready Drivers and OTA updates. With Turing's Tensor Cores delivering up to 110 teraflops of dedicated AI horsepower, the DLSS network can be run in real-time simultaneously with an intensive 3D game. This simply wasn't possible before Turing and Tensor Cores.

Changing the DLL bundled with the game doesn't increase the known inferencing data; that is only possible via NGX binary updates, which can be published through GFE or as a driver update.
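The temporal combination Nvidia describes (previous high-res frame + current low-res frame + motion vectors) depends on re-projecting the previous output along the game's motion vectors before the network blends it in. This toy numpy sketch illustrates just that re-projection step, with integer vectors and wrap-around borders for simplicity; it is not NVIDIA's actual implementation:

```python
# Toy sketch (hypothetical): re-project the previous frame along per-pixel
# motion vectors, as a DLSS-style temporal upscaler must do before blending.
import numpy as np

def reproject(prev_frame: np.ndarray, motion: np.ndarray) -> np.ndarray:
    """Shift each pixel of prev_frame by its integer motion vector (dy, dx).

    motion[y, x] says where the content at (y, x) came FROM, so we sample
    prev_frame at (y - dy, x - dx). Borders wrap to keep the demo simple.
    """
    h, w = prev_frame.shape
    out = np.zeros_like(prev_frame)
    for y in range(h):
        for x in range(w):
            sy = (y - int(motion[y, x, 0])) % h
            sx = (x - int(motion[y, x, 1])) % w
            out[y, x] = prev_frame[sy, sx]
    return out
```

With all-zero motion vectors the previous frame passes through unchanged; a uniform (1, 0) field shifts the whole image down one pixel, which is why games must feed accurate motion vectors for DLSS 2.0 to avoid ghosting.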
 
are inferenced through the same generalized model with the same retained datasets, so every new game that goes through the model further contributes to games down the line progressively improving the technology.
Source? I'm not aware that NVIDIA trains DLSS 2.0 with "every game that supports DLSS"
 
Source? I'm not aware that NVIDIA trains DLSS 2.0 with "every game that supports DLSS"
It was in the announcement slides:
  • One Network for All Games and Applications - DLSS offers a generalized AI network that removes the need to train for each specific game or application.
The DLSS 1.0 model started each instance of each game at each resolution completely fresh, without any prior inferencing information at hand.
Everything that 2.0 sees, it remembers and stores for potential future use; low- and high-resolution stills from a DLSS 2.0 game are still introduced into the autoencoder, so it's still training on specific game data.
This removes and replaces the wrongly assumed deterministic model that 1.0 had, which caused significant motion artifacts and blur.

This is also a good part of why 2.0 training is faster than 1.0, and why it's capable of introducing detail that was never in the original image.
 
Everything that 2.0 sees, it remembers and stores for potential future use; low- and high-resolution stills from a DLSS 2.0 game are still introduced into the autoencoder, so it's still training on specific game data.
Are you claiming DLSS learns while running on your or my computer? That is completely wrong.

NVIDIA trains (creates) the network in their labs and ships the network to us in the DLSS DLL, plus evolutionary workarounds, tweaks, and fixes.
 