
Microsoft DirectSR Runtime Based on AMD FSR 2.2

btarunr

Editor & Senior Moderator
The DirectSR (Direct Super Resolution) API, which seeks to standardize super resolution-based performance enhancement technologies in games, has a hardware-independent default code path that is essentially based on AMD FSR 2.2, a Microsoft Dev Manager speaking at GDC has revealed. DirectSR provides a common set of APIs for game developers to integrate super resolution, so that developers don't have to separately implement DLSS, FSR, and XeSS. Rather, these upscalers, and others, can register themselves with the DirectSR API and then get fed around a dozen input parameters that they may (or may not) use to improve upscaling quality. Since AMD has open-sourced the FSR 2.2 code on GPUOpen, and it is entirely shader-based and doesn't rely on exotic technologies such as AI, Microsoft decided to use FSR 2.2 as the base algorithm for DirectSR. If other algorithms like DLSS are available on the user's system, these can of course be activated by the user as well, and supporting them requires no extra work on the developer's side.
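For illustration only, here is a minimal sketch of the kind of per-frame call such a common API implies. Every type and method name below is hypothetical (this is not the actual DirectSR header; only the D3D12 types are real), but it shows the basic idea: the game fills one shared parameter block, and whichever registered variant is active, be it the FSR 2.2-based default or a vendor upscaler like DLSS or XeSS, consumes the inputs it needs.

```cpp
#include <d3d12.h>  // real D3D12 resource/command-list types; everything else below is made up

// Hypothetical common parameter block a game would fill once per frame.
struct SuperResExecuteParams
{
    ID3D12Resource* inputColor;     // low-resolution scene color
    ID3D12Resource* inputDepth;     // depth buffer
    ID3D12Resource* motionVectors;  // per-pixel motion vectors
    ID3D12Resource* outputColor;    // full-resolution output target
    float jitterX, jitterY;         // sub-pixel camera jitter for this frame
    float exposure;                 // scene exposure (some variants ignore it)
    bool  resetHistory;             // true on camera cuts / scene changes
};

// Hypothetical interface every registered upscaler variant would implement.
struct ISuperResUpscaler
{
    virtual void Execute(ID3D12GraphicsCommandList* cmdList,
                         const SuperResExecuteParams& params) = 0;
    virtual ~ISuperResUpscaler() = default;
};

// Per-frame: one call into the runtime, regardless of which upscaler is active underneath.
void UpscaleFrame(ISuperResUpscaler* upscaler,
                  ID3D12GraphicsCommandList* cmdList,
                  const SuperResExecuteParams& params)
{
    upscaler->Execute(cmdList, params);
}
```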



Update 18:15 UTC: Updated the news post to make it clear that the FSR 2.2 code path is merely a default, and other upscalers are free to hook into DirectSR to provide upscaling.

View at TechPowerUp Main Site | Source
 
Oh dear.
It makes sense, I guess, because FSR is relatively mature and open, but as the lowest common denominator, it's also the lowest quality of the three options.
 
My god. What an uninformed article. The decision by Microsoft is to use an API interface similar to what is used in FSR 2.2, not FSR itself. Hence, not the upscaling technology.

Once this interface is established, you can plug whichever upscaling tech you want into it, and it magically works with the games. That is it.

Quite similar to how DLL swaps work today to bring one tech in lieu of another into a game that doesn’t support both.

I expected better from TPU.
 
My god. What an uninformed article. The decision by Microsoft is to use an API interface similar to what is used in FSR 2.2, not FSR itself. Hence, not the upscaling technology.

Once this interface is established, you can plug whichever upscaling tech you want into it, and it magically works with the games. That is it.

Quite similar to how DLL swaps work today to bring one tech in lieu of another into a game that doesn’t support both.

I expected better from TPU.
yeah... from the source article itself:
First, Hargreaves stated, "We are not trying to unify the super-resolution upscaler to a one-size-fits-all specification, but the goal is to make it easier for developers and users to work with." [...] However, although each technology has its own advantages and disadvantages, the functions of converting the resolution of game images while restoring and enhancing the sense of resolution are the same, so the processing systems are very similar to each other.
In this case, there is no need to implement all of the DLSS/FSR/XeSS shader code in the entire game. Microsoft thought it would be a good idea to put these core processing parts under DirectSR.
With this mechanism, the game can handle all DLSS/FSR/XeSS by simply calling DirectSR by performing only the preliminary processing necessary for the super-resolution upscaler and preparing the parameters.
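For illustration, a rough sketch of what that flow could look like from the game side: enumerate whatever variants the runtime exposes (the FSR 2.2-based default is always there, vendor upscalers register themselves if available), create an upscaler from the one the user picked, and fall back to the default otherwise. The interface and type names are hypothetical, not the real DirectSR API.

```cpp
#include <d3d12.h>
#include <string>
#include <vector>

struct ISuperResUpscaler;  // hypothetical upscaler interface, as in the earlier sketch

// Hypothetical descriptor for one registered upscaler variant.
struct SuperResVariantDesc
{
    std::string name;          // e.g. "Default (FSR 2.2-based)", "DLSS", "XeSS"
    bool needsMatrixHardware;  // tensor/XMX cores required?
};

// Hypothetical device-level interface exposed by the runtime.
struct ISuperResDevice
{
    virtual std::vector<SuperResVariantDesc> EnumerateVariants() const = 0;
    virtual ISuperResUpscaler* CreateUpscaler(size_t variantIndex,
                                              UINT outputWidth,
                                              UINT outputHeight) = 0;
    virtual ~ISuperResDevice() = default;
};

// Create the upscaler the user chose in the options menu; fall back to
// variant 0, the hardware-independent FSR 2.2-based default code path.
ISuperResUpscaler* SelectUpscaler(ISuperResDevice* dsr,
                                  const std::string& userChoice,
                                  UINT width, UINT height)
{
    const auto variants = dsr->EnumerateVariants();
    for (size_t i = 0; i < variants.size(); ++i)
        if (variants[i].name == userChoice)
            return dsr->CreateUpscaler(i, width, height);
    return dsr->CreateUpscaler(0, width, height);
}
```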
 
So long as the underlying tech/upscaling algos can eventually be updated to ones that have been trained, it's not a huge issue at the moment.

I figure FSR will be on its way to being an AI-trained algo before long, and a good way to cross-market the MI300, etc.

The thing about DLSS is that its feature gating, etc., is entirely a management decision by NV. There's no reason you couldn't run DLSS on shaders. It might run slower than using tensor cores, but it's more of a tail-wagging-the-dog situation here, where NV needed to find a reason to use the tensor cores, than the cores really being required to run DLSS.
 
So long as the underlying tech/upscaling algos can eventually be updated to ones that have been trained, it's not a huge issue at the moment.

I figure FSR will be on its way to being an AI-trained algo before long, and a good way to cross-market the MI300, etc.

The thing about DLSS is that its feature gating, etc., is entirely a management decision by NV. There's no reason you couldn't run DLSS on shaders. It might run slower than using tensor cores, but it's more of a tail-wagging-the-dog situation here, where NV needed to find a reason to use the tensor cores, than the cores really being required to run DLSS.
You could make the same argument about Intel. XeSS doesn't need the XMX cores but runs better on them. The difference is that Nvidia is in a position where it could afford to lock DLSS, while Intel isn't.
 
My god. What an uninformed article. The decision by Microsoft is to use an API interface similar to what is used in FSR 2.2, not FSR itself. Hence, not the upscaling technology.

Once this interface is established, you can plug whichever upscaling tech you want into it, and it magically works with the games. That is it.

Quite similar to how DLL swaps work today to bring one tech in lieu of another into a game that doesn’t support both.

I expected better from TPU.
A better example would be how Vulkan got integrated into DX12.
 
Oh dear.
It makes sense, I guess, because FSR is relatively mature and open, but as the lowest common denominator, it's also the lowest quality of the three options.
It's the fallback; it's the best option for GPUs that can't support anything better, like Pascal. XeSS would murder them.

FSR 2.2? Lol pass.
Reading comprehension is necessary in life.
 
The thing about DLSS is that its feature gating, etc., is entirely a management decision by NV. There's no reason you couldn't run DLSS on shaders. It might run slower than using tensor cores, but it's more of a tail-wagging-the-dog situation here, where NV needed to find a reason to use the tensor cores, than the cores really being required to run DLSS.
I really wish we could see a test of that somehow. The same argument was brought to bear for DXR, and Nvidia did release drivers with its implementation on Pascal. That did not turn out too well. Not saying DLSS would run into problems to the same degree, but depending on what exactly DLSS does with tensor operations, it could be anywhere between negligible and quite a large problem to run on shaders.
 
GPUOpen, and it is entirely shader-based,

in short, it's ass
the only reason DLSS 2 was a smash hit was because the image quality is so good
FSR 2 is not, nor will it ever be; it's a glorified Lanczos resize
AI training is a must for this kind of upscaling to look good
 