
AMD FidelityFX FSR Source Code Released & Updates Posted, Uses Lanczos under the Hood

This will be awesome for the new Steam Deck.

FSR can really give an fps boost on that handheld and improve its longevity...
 
Here is 4K FSR Quality vs 1440p with Sharpen+

At least FSR looks better than a sharpening filter :D, barely...
 
At least FSR looks better than a sharpening filter :D, barely...
There are scientific papers showing that Lanczos-3 does improve visual quality. You have to understand there are few filters this simple and this effective.
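For anyone who hasn't met it, the Lanczos-3 kernel those papers discuss is just a sinc windowed by a wider sinc. Here's a minimal NumPy sketch, purely as illustration (the textbook kernel, not FSR's actual shader code):

Code:
import numpy as np

def lanczos3(x):
    """Lanczos-3 kernel: sinc(x) windowed by sinc(x/3), zero for |x| >= 3."""
    x = np.asarray(x, dtype=float)
    # np.sinc(x) computes sin(pi*x) / (pi*x), exactly the sinc Lanczos needs
    return np.where(np.abs(x) < 3.0, np.sinc(x) * np.sinc(x / 3.0), 0.0)

# Sample between the integer taps to see the negative lobes -- the
# source of both the perceived sharpness and the ringing.
print(np.round(lanczos3(np.linspace(-3, 3, 13)), 3))

Those negative lobes are what make a Lanczos resize look crisper than bilinear, and also what overshoots at hard edges.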

PS: since we are discussing sharpening on top of a ringing filter, which is not industry practice, maybe change tune and advocate the bilateral filter like I do. It is non-ringing and sharpening-invariant, since it is an a²+b² resolve of the coordinate axes. It is more advanced, and it's what I'm rooting for.
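For anyone wondering what the bilateral filter does differently: it weights each neighbor by intensity difference as well as distance, so it never averages (or rings) across an edge. A naive grayscale sketch in Python/NumPy, using the textbook Gaussian spatial/range formulation rather than anyone's shipping shader:

Code:
import numpy as np

def bilateral(img, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Naive O(n*k^2) bilateral filter; img is 2-D grayscale in [0, 1]."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    norm = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy:radius + dy + h,
                          radius + dx:radius + dx + w]
            # spatial weight: falls off with distance from the center tap
            w_s = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma_s ** 2))
            # range weight: falls off with intensity difference -- the
            # term that stops blurring (and ringing) across edges
            w_r = np.exp(-((shifted - img) ** 2) / (2.0 * sigma_r ** 2))
            out += w_s * w_r * shifted
            norm += w_s * w_r
    return out / norm

Because every weight is non-negative, the output can never overshoot its own neighborhood; that's the non-ringing property.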
 
That's presumptuous of you, but I guess it depends on where you are. In my neck of the woods, an RTX 3080 goes for a minimum of ~2100 local dollars, but an RX 6800 XT can be had for about 1600. That's about a 370 USD difference when converted, so in what way are they 'for the same money'?

BTW, give this nVidia promotion a rest already, why doncha? :rolleyes:
Sorry to hear that, I got my 3080 from the EVGA queue for MSRP.
 
Yawn... people are so easily tricked: when images appear slightly sharper, they will call it higher image quality, despite the little trick called CAS destroying color detail in favor of higher contrast (which makes images have mold-like white spots... urghhhh).

Oversharpening is indeed a concern. That being said, from my testing, FSR doesn't suffer from the awful image persistence in motion and the artifacting that even DLSS 2.2 (RDR2) still suffers from. There's a strong reason why DLSS 1.0 was based on a spatial algorithm; maybe Nvidia can implement FSR and get DLSS working properly :P.

Games by their nature are a motion medium, but I guess if you like standing still and closely analysing still scenes, DLSS is a great product.
 
That's presumptuous of you, but I guess it depends on where you are. In my neck of the woods, an RTX 3080 goes for a minimum of ~2100 local dollars, but an RX 6800 XT can be had for about 1600. That's about a 370 USD difference when converted, so in what way are they 'for the same money'?

BTW, give this nVidia promotion a rest already, why doncha? :rolleyes:

A comparison vid of FSR at Ultra Quality vs DLSS 2.2 (I think) on Marvel's Avengers... looks good enough unless you pixel peep.

And here's a comparison between FSR and DLSS 2.2 in Necromunda: Hired Gun. Without pixel peeping, can you see the difference, especially when playing the game?
In that Marvel's Avengers comparison it appears FSR has better color 100%, looks better in motion, has higher fine-texture detail and better lighting/shading, and you notice the reflections better as well.

CAS destroys color details? I beg to differ... that said, I do use a touch of DPX as well. I literally didn't even customize this at all for RiftBreaker; I just took the universal CAS/DPX configuration I've been working on in ReShade and copied it over. It took me a whole minute to copy and paste, and lighting, shading, reflections, color, and texture detail are all improved a bit in a very non-obstructive way. It even improves the UI elements; they're crisper and easier to read.

Default [screenshot: DEFAULT 1.png]

CAS + DPX (to add a touch of luminescence) [screenshot: RESHADE 2.png]
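For anyone curious what CAS actually does under the hood: it's an unsharp-mask-style cross filter whose strength adapts to local contrast. A simplified grayscale sketch in Python/NumPy; this is CAS-inspired, with constants that only echo the shape of AMD's shader, not a port of the real FidelityFX code:

Code:
import numpy as np

def cas_like_sharpen(img, sharpness=0.5):
    """CAS-inspired adaptive sharpen for a 2-D grayscale image in [0, 1]."""
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    # 3x3 neighborhood min/max around every pixel
    stack = np.stack([p[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)])
    lo, hi = stack.min(axis=0), stack.max(axis=0)
    # Adaptive amount: high in flat areas, low across strong edges,
    # so edges don't blow out the way a fixed unsharp mask would.
    amount = np.sqrt(np.clip(np.minimum(lo, 1.0 - hi)
                             / np.maximum(hi - lo, 1e-5), 0.0, 1.0))
    # Negative cross-tap weight; the 8..5 range loosely mirrors AMD's
    # shader constants but is my own simplification.
    peak = -1.0 / (8.0 - 3.0 * np.clip(sharpness, 0.0, 1.0))
    wgt = amount * peak
    cross = (p[0:h, 1:w + 1] + p[2:h + 2, 1:w + 1]
             + p[1:h + 1, 0:w] + p[1:h + 1, 2:w + 2])
    out = (img + wgt * cross) / (1.0 + 4.0 * wgt)
    return np.clip(out, 0.0, 1.0)

The adaptive term is the whole argument here: because it backs off where local contrast is already high, CAS clips to white far less readily than a flat sharpen would.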
 
Why limit yourself to just FSR? And you get better RT with Nvidia too, for the same money.
Lol, after all this you still don't understand.

FSR is FREE; you don't have to pay AMD to use it.
DLSS is a paid service, AFTER you purchase Nvidia RTX series hardware.

One solution requires specialized hardware and the other one doesn't.

There is no "for the same money" argument here when your competition demands $0 for implementation.
 
Lol, after all this you still don't understand.

FSR is FREE; you don't have to pay AMD to use it.
DLSS is a paid service, AFTER you purchase Nvidia RTX series hardware.

One solution requires specialized hardware and the other one doesn't.

There is no "for the same money" argument here when your competition demands $0 for implementation.
Actually, you don't understand.
If you buy an RTX card, you get both:
FSR and DLSS.

Now if you don't want to buy any card at all, obviously FSR is free.
Never claimed it wasn't.
 
Oversharpening is indeed a concern. That being said, from my testing, FSR doesn't suffer from the awful image persistence in motion and the artifacting that even DLSS 2.2 (RDR2) still suffers from. There's a strong reason why DLSS 1.0 was based on a spatial algorithm; maybe Nvidia can implement FSR and get DLSS working properly :P.

Games by their nature are a motion medium, but I guess if you like standing still and closely analysing still scenes, DLSS is a great product.

I played through Doom Eternal with RT+DLSS and no ghosting there; perfect 4K 120 Hz experience :p. But yeah, DLSS doesn't work well in RDR2; the game's performance just doesn't scale with lower res anyway.

As for ghosting, any thin objects that usually ghost with DLSS will just disappear with FSR, so I wouldn't call FSR any better there, LOL, unless you like missing details.
Notice that once I take one step back, the thin chainmail on the girl's chest disappears with FSR, and this is at 4K FSR UQ; at 1440p FSR UQ, the chainmail becomes a flickering mess.
Let's be honest here, you are not gonna notice the ghosting with DLSS until you zoom in and slow down the footage, which is not normal gameplay either.

IMO I would not use any mode lower than 4K FSR UQ, as 4K FSR Quality produces noticeable ringing artifacts.
 
Actually, you don't understand.
If you buy an RTX card, you get both:
FSR and DLSS.

Now if you don't want to buy any card at all, obviously FSR is free.
Never claimed it wasn't.
You agreed FSR is free on any terms, and people don't have to purchase new hardware for it.
You specifically stated one has to buy an RTX card to get DLSS.

You just repeated what I've said.

What are you arguing?
 
I played through Doom Eternal with RT+DLSS and no ghosting there; perfect 4K 120 Hz experience :p. But yeah, DLSS doesn't work well in RDR2; the game's performance just doesn't scale with lower res anyway.

Ewww. Definitely not; it's like rubbing Vaseline over the screen in a game as fast-paced as D:E. I will admit not everyone is sensitive to this, however, and camps tend to split into 'vomit-inducing' and 'what are you talking about'. Btw, upscaling to 3840x1600; it's not 4K, but it's pretty bloody high.

Let's be honest here, you are not gonna notice the ghosting with DLSS until you zoom in and slow down the footage, which is not normal gameplay either.

See previous statement. Ghosting is one of the primary drivers of borderline motion sickness for me whilst using DLSS in anything fast-paced. Being fair, TAA as a whole is pretty gross for anyone sensitive to this; it's just that the one-frame penalty in DLSS is always going to mess with anyone sensitive to temporal issues.
 
FSR isn't one type of post-process; it's different post-process techniques forming a library of options. FSR isn't done; it's going to continue to evolve, I'm sure.
 
Ewww. Definitely not; it's like rubbing Vaseline over the screen in a game as fast-paced as D:E. I will admit not everyone is sensitive to this, however, and camps tend to split into 'vomit-inducing' and 'what are you talking about'. Btw, upscaling to 3840x1600; it's not 4K, but it's pretty bloody high.

See previous statement. Ghosting is one of the primary drivers of borderline motion sickness for me whilst using DLSS in anything fast-paced. Being fair, TAA as a whole is pretty gross for anyone sensitive to this; it's just that the one-frame penalty in DLSS is always going to mess with anyone sensitive to temporal issues.

DLSS Quality looks sharper than native in Doom Eternal for me; might be because it's 4K vs 1440p, but I played comfortably with 4K DLSS Balanced (RT modded to the max). The OLED screen might also have helped mask the ghosting issue :p. Another thing is that what you are seeing is not what you are recording (120 fps in-game vs 60 fps recording); you'd need a high-speed camera to capture perceived images (like testing a monitor with the UFO test).

Well, FSR desperately needs TAA, so you are gonna have to deal with TAA either way.
FSR + SMAA vs FSR + TAA
 
DLSS Quality looks sharper than native in Doom Eternal for me; might be because it's 4K vs 1440p, but I played comfortably with 4K DLSS Balanced (RT modded to the max). The OLED screen might also have helped mask the ghosting issue :p.

Well, FSR desperately needs TAA, so you are gonna have to deal with TAA either way.

Different type of ghosting; I know what you are talking about (I have a C9 OLED and a 5800X + 6800 XT media center PC, lol). It's somewhat the problem when terms like 'ghosting' are used interchangeably for things like overshoot and TAA errors.

You already can mix FSR & TAA in some titles, but this is a shit way forward. IMO, FSR stage 2 will be doing some form of AI reconstruction a la DLSS and tied into RDNA3 next year, and AMD's patents on it suggest that's exactly what will be occurring.
 
You agreed FSR is free on any terms, and people don't have to purchase new hardware for it.
You specifically stated one has to buy an RTX card to get DLSS.

You just repeated what I've said.

What are you arguing?
Not really arguing, just pointing out that Nvidia cards are a better value when you buy new.
 
DLSS Quality looks sharper than native in Doom Eternal for me,
This is not a new phenomenon. Games tend to miss the correct LOD level.

IMO I would not use any mode lower than 4K FSR UQ, as 4K FSR Quality produces noticeable ringing artifacts.
Water is wet and Lanczos 3 rings... If you don't want a ringing filter, you can shop for shaders from MadVR, since we have ushered in the age of MadVR filters in gaming. I already have my picks ready (superxbr 100/75 luma, bilateral chroma, maybe with supersampling).
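To see the ringing directly instead of arguing over screenshots, upsample a hard step edge with Lanczos-3; a self-contained sketch (kernel repeated from the earlier post so this runs on its own):

Code:
import numpy as np

def lanczos3(x):
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < 3.0, np.sinc(x) * np.sinc(x / 3.0), 0.0)

# 1-D step edge (black -> white), resampled at 4x by direct convolution
src = np.array([0.0] * 8 + [1.0] * 8)
taps = np.arange(len(src))
xs = np.arange(0, len(src) - 1, 0.25)
up = np.array([np.sum(src * lanczos3(x - taps)) for x in xs])
# min dips below 0 and max overshoots 1: that under/overshoot next to
# the edge is the halo/ringing being described above
print(up.min(), up.max())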
 
This is not a new phenomenon. Games tend to miss the correct LOD level.


Water is wet and Lanczos 3 rings... If you don't want a ringing filter, you can shop for shaders from MadVR, since we have ushered in the age of MadVR filters in gaming. I already have my picks ready (superxbr 100/75 luma, bilateral chroma, maybe with supersampling).

So FSR is just a dude trying out different post-process filters and applying them to different games :D
Kinda hard to imagine that would actually compete with an AI algorithm that systematically improves itself.
 
So FSR is just a dude trying out different post-process filters and applying them to different games :D
Kinda hard to imagine that would actually compete with an AI algorithm that systematically improves itself.
MadVR has neural filters too. They are crap and make up artifacts, especially in blades of grass.

Don't underestimate 'stupid' filters when compared to their neural counterparts.
 
Actually, you don't understand.
If you buy an RTX card, you get both:
FSR and DLSS.

Now if you don't want to buy any card at all, obviously FSR is free.
Never claimed it wasn't.
Free, arguably better-looking, and it will even unofficially work on a GTX 980 Maxwell card, versus locked-down, arguably worse-looking, and harder to implement. FSR isn't perfect and certainly has room to improve, but it's free, and it fixes my biggest issue with Nvidia cards since the 8800 GT onward, when things changed and negative LOD bias only allowed -3.0 rather than the -15.0 possible on earlier GPUs with RivaTuner.

I hate overly blurry and smeary scenes, and Nvidia has had entirely too much of that in the last decade. I do like MFAA; of nearly all Nvidia's AA techniques it's about the least obtrusive in terms of scene blur, though it also mutes color a bit. Less is more: the better a job you do with vertex/pixel-shader post-processing, the better a scene will look as a whole in the end, which is where AMD is killing it relative to Nvidia.

In the Necromunda video GamerGuy posted, and in Marvel's Avengers, FSR had better lighting, shadows, reflections, color, and texture detail, plus the scene, despite slightly lower average FPS, held up better in motion. If you play the video back at 0.25x speed and compare the Necromunda side-by-side of FSR/DLSS in the scene with the fans spinning, it's like night and day; DLSS looks terrible by contrast. You get better light shafts, and better motion of them, with FSR. Believe it or not, DLSS's temporal AA kills a lot of scene detail outright with its post-process.

I'm not going to defend Nvidia on DLSS for the sake of it when FSR objectively looks superior side by side in damn near every way from what I can tell. They both are acceptable enough, I suppose, but one is easy to implement and free, while the other is a ringing example of corporate behavior at its worst rather than competitive and at its best. DLSS or something similar could've easily been free, and AMD is proof of it.

Different type of ghosting; I know what you are talking about (I have a C9 OLED and a 5800X + 6800 XT media center PC, lol). It's somewhat the problem when terms like 'ghosting' are used interchangeably for things like overshoot and TAA errors.

You already can mix FSR & TAA in some titles, but this is a shit way forward. IMO, FSR stage 2 will be doing some form of AI reconstruction a la DLSS and tied into RDNA3 next year, and AMD's patents on it suggest that's exactly what will be occurring.
You can mix CAS & TAA in ReShade just fine, but I stopped using TAA in ReShade in favor of DPX alongside CAS, because it does a better job in practice with lower overhead. When I do need to upscale the rendered image with a texture-filter technique, contrast-adaptive LUT has become my favorite, followed by Clarity, for which I have three configurations I made more lightweight (soft light, linear light, and hard light) that I mix together when using them. I really don't care much for TAA; its smearing is nauseating to me.

I feel you get way more upside from good tone mapping and controlled, selective sharpening. The trick is good layering and blending. Why I like the adaptive LUT is that it's designed with color-gradient blending in mind, and it's lighter weight than TAA, or Clarity for that matter. As a texture-filter technique, it's probably the most reasonable and lightweight compromise you'll get for the IQ upside it can provide in certain areas, without at the same time introducing in-motion temporal artifacts; those come from texture-filtering effects that blend two or more render frames together and don't play nicely with the bit of pixel-level scene sharpening every scene has.

That last part is also why I don't like AF much; I keep it either off, at x2, or at x4, and minimize negative LOD bias in relation to it. I've actually started to lean away from using negative LOD bias at all once I learned how the sharpening from it impacts AF and causes superellipse artifact issues in the scene. Negative LOD bias also adds a minor bit of GPU overhead, but more importantly I feel that by not utilizing it I leave more overhead room to better utilize CAS in its place, which does a more controlled and better job at scene sharpening than LOD bias would do itself. I haven't tried it yet, but I've contemplated trying an exaggerated CAS configuration with a bit of positive LOD bias and seeing what happens from a comparison standpoint.

Perhaps I'll investigate with a three-way comparison of negative, neutral, and positive LOD bias CAS configurations, get an idea of what happens, and play with the strength values on the sharpening-control aspect of CAS itself. It might have some benefit one way versus the other, and I'd like to look into that a bit.
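For anyone following the LOD bias tangent, mip selection boils down to lod = log2(texel footprint) + bias, clamped to the mip chain; a toy sketch (the function and numbers are my own illustration, not any driver's actual code):

Code:
import math

def mip_level(texel_footprint, lod_bias=0.0, num_mips=12):
    """Rough sketch of GPU mip selection: lod = log2(footprint) + bias."""
    lod = math.log2(max(texel_footprint, 1e-6)) + lod_bias
    return min(max(lod, 0.0), num_mips - 1)

# A pixel covering 4 texels normally lands on mip 2; a -3.0 bias forces
# mip 0 (full res), which is where the extra shimmer/aliasing that a
# sharpener like CAS then has to fight comes from.
for bias in (-3.0, 0.0, 1.0):
    print(bias, mip_level(4.0, bias))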
 
Not really arguing, just pointing out that Nvidia cards are a better value when you buy new.
While this topic is about the FSR tech itself.

If you wanna talk about "better value",
I bet many of us are willing to share how ridiculous MSRP vs "actual street price" is on Nvidia cards.
 
Nvidia is always pumping that MSRP value, but in the real world it's like twice as much. That applies a bit to both companies, but it tends to be worse with Nvidia because they are literally selling their hardware aimed at miners. I mean, Nvidia goes as far as making specialized cards for mining and devoting stockpiles of cards to miners for that purpose. The hash-rate halving is a joke and not aggressive enough; 50% less profitable is still plenty profitable. Nvidia didn't go far enough, and they already cut right into the supply chain at the same time. They are just trying to save face, really.

I'm getting an AMD GPU next time around. I was impressed with their GPU software on a cheap HP laptop, so I know what to expect. It's not perfect, but it gives pretty good options. I honestly like Intel's software in some areas better than AMD's or Nvidia's, for that matter; the options in the video section of their software are solid, with some nice CMYK control. The worst is actually Nvidia, and what's crazy is that you can't even adjust brightness and contrast values by 1%; it's either 2% or 3%, and it just skips between them. I'm guessing even the archaic but wonderful-in-its-day RivaTuner allows better calibration, which is kind of sad and shows how little Nvidia gives a damn about even bog-standard basic levels of image quality.
 
maybe Nvidia can implement FSR and get DLSS working properly
Almost spat out my coffee laughing when I read that, good one! :roll:

You might not like it, but it's doing its job tremendously well at this point.
arguably looks better
This is where people will always come unstuck; no matter which you pick, you could always say "well, I prefer XYZ to ABC, so to me it's better".

The best arguments are that it's free and easy to implement; nobody can deny those. The rest is personal preference about what you like in an image.
 
Almost spat out my coffee laughing when I read that, good one! :roll:

You might not like it, but it's doing its job tremendously well at this point.

This is true; the quality upscale in DLSS is excellent. It's just a shame that it magnifies deficiencies inherent to temporal solutions. I may have been being facetious, but I do believe DLSS 1.0 would have been successful if Nvidia had taken AMD's approach with FSR.
 
Almost spat out my coffee laughing when I read that, good one! :roll:

You might not like it, but it's doing its job tremendously well at this point.

This is where people will always come unstuck; no matter which you pick, you could always say "well, I prefer XYZ to ABC, so to me it's better".

The best arguments are that it's free and easy to implement; nobody can deny those. The rest is personal preference about what you like in an image.
To which I ask: how can an AI know the personal preference of a random individual!? ;) I provided plenty of examples; the only areas where DLSS appeared better I could just as easily simulate with CAS, if you want blurry and washed-out color.
 