
The Medium: DLSS vs. FSR Comparison

So, the screenshots look like one was smeared with freaking Vaseline, and the other as if someone overused the Photoshop sharpen filter...
 
For the "purists" on the forum that's same peepz that don't mind spending scalper prices for GPUs probable don't care about either even though a lot of times they'll need to take screenshots and zoom the shit out it just to notice a difference will dismiss either technology.
But for me any tech that will give me extra FPS with a very small drop in quality is a win.
 
I'd say AMD puts the emphasis on starting its image scaling with sharpening, while Nvidia does the opposite, with the emphasis on starting by blurring from the original image. Basically tap-filter AA techniques, eroding and dilating an image in essence. There are advantages and disadvantages to each. That's also why not going too heavy on one, or offsetting it with the other and alternating the emphasis progressively, can help. Try not to stray too far from the original image, though, or it'll look jarring if you overuse either.
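If you want to play with the idea, here's a minimal sketch of that blur-vs-sharpen trade-off using plain OpenCV filters. To be clear, this is just the general concept, not what FSR or DLSS actually do internally, and the file names are placeholders:

```python
import cv2
import numpy as np

# Load any test frame ("frame.png" is just a placeholder path).
img = cv2.imread("frame.png").astype(np.float32)

# "Blur" direction: a Gaussian tap filter smooths jaggies but eats detail.
blurred = cv2.GaussianBlur(img, (5, 5), 1.0)

# "Sharpen" direction: unsharp masking boosts the difference between the
# image and its blurred copy, which exaggerates edges (and halos).
amount = 0.6  # keep this low; overdoing it looks jarring
sharpened = img + amount * (img - blurred)

# A middle ground: offset one emphasis with the other instead of leaning
# hard on either, staying close to the original image.
balanced = 0.5 * blurred + 0.5 * sharpened
cv2.imwrite("balanced.png", np.clip(balanced, 0, 255).astype(np.uint8))
```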
 
For the "purists" on the forum that's same peepz that don't mind spending scalper prices for GPUs probable don't care about either even though a lot of times they'll need to take screenshots and zoom the shit out it just to notice a difference will dismiss either technology.
But for me any tech that will give me extra FPS with a very small drop in quality is a win.

Totally agree. While in-game, you aren't zooming in on plant leaves to determine if there's a difference. You want the game to run fluid and smooth at max settings. I care more about the FPS numbers than about native resolution and whether there's a discernible difference at macro-level views.

At the macro-level views you actually use to play the game, the difference is very minimal. The FPS increases are great.
 
FSR is basically a successor to console checkerboarding. Not very advanced and rushed out to stall superior tech from gaining ground. DLSS easy win
 
I haven't tried FSR so I can't make a direct comparison, but DLSS isn't perfect either; you can see shimmering and crazy stuff from time to time in certain games (Watch Dogs: Legion, for example).
 
FSR is basically a successor to console checkerboarding. Not very advanced and rushed out to stall superior tech from gaining ground. DLSS easy win
Checkerboarding has something over FSR: a temporal component, in the form of the previous frame. It still has shortcomings, but checkerboarding gets a better result more easily.
 
It reminds me of people who turn their contrast to 11 with crushed blacks to make the colors "pop" even if it's super inaccurate.
LCDs are inaccurate. They cannot strobe color swatches since they have no linear LUT curves. They are awful compared to CRTs. They are fine when graphics are cartoony; don't dish out hate for saying it, it's the truth.
 
Checkerboarding has something over FSR: a temporal component, in the form of the previous frame. It still has shortcomings, but checkerboarding gets a better result more easily.
Kind of the reason DLSS 1.0 died so quickly: the lack of a temporal component. DLSS 2+ really cut down on the shimmering and stair-stepping that doesn't show up in pictures but is plainly obvious when you're playing the game. The temporal component in current DLSS is why it's not so easy to slap into a game, but it's also why DLSS can reconstruct fine detail: it's doing more than just upscaling a static picture.
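For anyone wondering what a temporal component actually does, here's a rough numpy sketch of the TAA-style idea: reproject the previous frame along the engine's motion vectors, then blend it with the current frame. Purely illustrative; the real DLSS network is far more involved:

```python
import numpy as np

def temporal_accumulate(curr, prev, motion, alpha=0.1):
    """Blend the current frame with the previous frame, reprojected
    along per-pixel motion vectors (the general TAA-style principle)."""
    h, w = curr.shape[:2]
    ys, xs = np.indices((h, w))
    # Where each pixel was last frame, according to the motion vectors.
    src_x = np.clip(xs - motion[..., 0], 0, w - 1).astype(int)
    src_y = np.clip(ys - motion[..., 1], 0, h - 1).astype(int)
    reprojected = prev[src_y, src_x]
    # Exponential blend: the history supplies detail one frame lacks.
    return alpha * curr + (1 - alpha) * reprojected
```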

I'm just waiting for AMD and Nvidia to sell GPUs that can actually handle 4K at 140+ FPS, so this kind of stopgap shortcut goes away.
 
I'm just waiting for AMD and Nvidia to sell GPUs that can actually handle 4K at 140+ FPS, so this kind of stopgap shortcut goes away.

DLSS and FSR aren't taking 4K and running it up to 140 FPS with a native look-alike image, i.e., at the Quality or Ultra Quality settings.

For example, with Call of Duty: Black Ops on a 6900 XT (XTXH) or a 3090, the game can be run at 100-120 FPS native 4K until you turn on ray tracing. At that point, DLSS is needed to reach around 65-75 FPS with full ray tracing.

The only game I've encountered that can't run 4K native at 60+ FPS without ray tracing is Cyberpunk 2077. In that case, image upscaling technology is great for getting the frames you desire, even with ray tracing.
 
Is it my imagination, or is there a circular region of the DLSS shot that looks great, especially close up, while on the edges of that circle it looks smeared out?
FSR is a little sharper on those edges, but not so great on fencing and hanging wires. Fencing and wires are about the only things DLSS looks really great at fixing to me; otherwise DLSS just looks like a depth-of-field increase. Which is weird, because everyone has a different limit to how far they can see in the real world, so a given draw distance can look realistic to some people and unrealistic to others.
 
4K Ultra FSR gets stuck loading here, and the 1080p drop-downs are missing the FPS numbers.


From what I'm seeing, DLSS looks similar to adding AA on top: smoother, fewer jaggies.

Considering FSR is so much easier to implement in a game, I think we can tolerate a less perfect version that works for everyone; a gamer on a 1050 Ti (or my dad with his RX 580 at 1080p 144 Hz) is gonna take what they can get.
 
Both look pretty good. As usual, fine detail and antialiasing quality go to DLSS, which gets comparatively better as the resolution drops. Really quite a good showing for FSR here too, though; oversharpened for my liking, but still very usable, it seems.
 
"Considering FSR is so much easier to get implemented into a game, i think we can tolerate a less perfect version that works for everyone, and a gamer on a 1050ti (or my dad with his RX580 at 1080p 144hz) is gunna take what they can get"

I would agree here completely. I run an RTX 2070 Super in my laptop and a 5700 XT in my main rig. While I do see a bit of an edge in quality with DLSS, I really appreciate FSR for its ability to run in just about any game, on pretty much any hardware, and for the very real gains in framerate it produces. For someone running a GPU from a few generations ago, which is a heck of a lot of people considering the market chaos right now, being able to run it on their older hardware may make all the difference in the world. So yeah, I definitely accept a minor loss in quality for an almost universal increase in playability.
 
Tried FSR on The Medium with a 6900 XT. Looks pretty good to me, and there was a healthy bump in frame rates too.

Looking at the static 4K images on the web page, those flowers in the bottom-right corner with DLSS gave me that insta-visually-impaired feeling due to being so blurry. Other than that, I'd be happy to play through using either.
 
those flowers in the bottom-right corner with DLSS gave me that insta-visually-impaired feeling
Looks like the depth of field is causing this, as it's present to varying extents in the native image too. The flowers in the very corner are about as blurry, but as you move inward the effect lingers a bit more with DLSS, whereas the FSR image doesn't seem to respect that DOF effect and goes a bit heavy-handed trying to sharpen them.
 
Looks like the depth of field is causing this, as it's present to varying extents in the native image too. The flowers in the very corner are about as blurry, but as you move inward the effect lingers a bit more with DLSS, whereas the FSR image doesn't seem to respect that DOF effect and goes a bit heavy-handed trying to sharpen them.
One of those instances where static images don't translate so well. I suspect in motion, your brain will tell you things are 'OK'.
 
One of those instances where static images don't translate so well. I suspect in motion, your brain will tell you things are 'OK'.
I suspect so, and I'd wager that for a lot of people, FSR or DLSS would get that same OK message from the brain. I know that DOF, among other effects, can be quite hated, but like other hated effects, I like it; I find it very natural to my eye. It would be better if it came with a performance boost rather than a penalty, though, given what it does.
 
I must say, I'm really impressed with how FSR looks at 1440p and above. I honestly didn't think they would come this close to Nvidia.

While true, I have to admit you can see the sharpening and overly contrasted edges, and I've noticed in Cyberpunk as well that this gives the whole image a bit of a 'cartoony' (cel-shaded graphics) look. You get a slight impression of bad Photoshop, or JPEG compression artifacts, kind of. It's slight. But it's there.

I always turn any sharpening effects off. Sharpening tends to make aliasing even more noticeable, rather than actually making things 'sharp'. But surely they can make the slider available under FSR; it looks like a bug that it's not.

The DLSS pic with the person behind the net is impressive, though. At the same time... having a universally applicable tech as good as FSR, with that performance boost, is much more impressive to me, and much more useful too. It makes paying the proprietary price sort of unnecessary, tbh. I don't spend my days staring at zoomed-in pixels.

Great article & comparison!

As for FSR, it's always oversharpened in every game I've seen; it looks like a canvas painting, or like Vaseline spread all over the screen.
Depends how much scaling you apply; at the more aggressive levels it does indeed look like that, but at the lighter levels there are good middle grounds to be had.
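For reference on what "how much scaling" means, these are FSR 1.0's per-axis scale factors as I recall AMD documenting them, with a tiny sketch of the internal render resolution each mode implies:

```python
# FSR 1.0 per-axis scale factors (Ultra Quality 1.3x, Quality 1.5x,
# Balanced 1.7x, Performance 2.0x) -- from memory, double-check AMD's docs.
FSR_MODES = {
    "Ultra Quality": 1.3,
    "Quality": 1.5,
    "Balanced": 1.7,
    "Performance": 2.0,
}

def render_resolution(out_w, out_h, mode):
    """Internal resolution the game renders at before FSR upscales it."""
    s = FSR_MODES[mode]
    return round(out_w / s), round(out_h / s)

for mode in FSR_MODES:
    print(mode, render_resolution(3840, 2160, mode))
# e.g. Quality mode at 4K output renders internally at 2560x1440
```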
 
Both are good enough for what they do. Neither of them is really 100% native; neither of them ever will be. On another note, it's amusing to see these types of comparisons always turn into muffled fanboi wars :laugh:

Even more amusing is the fact that this gen both AMD and NVIDIA committed highway robbery on our collective wallets and people still defend them with passion :D
 
One of those instances where static images don't translate so well. I suspect in motion, your brain will tell you things are 'OK'.

Well... in motion you tend to notice just as well; any artifacts are always going to pop up sooner or later. We're too nitpicky not to, or selectively blind.

Even a slight bit of sharpening will stand out eventually, as per the FSR example. Motion artifacts stand out too: monitor land has been fighting to reduce ghosting artifacts on LCDs for decades, and even today at 144 Hz/144 FPS you're not completely saved. And that's just a slow color-shifting pixel we're talking about.

Static image analysis only makes it clearer that what you were noticing in motion as 'unnatural' is in fact pixels not being what they should be. So, 'OK', perhaps, but as good as native? Nope.
 
Well... in motion you tend to notice just as well; any artifacts are always going to pop up sooner or later. We're too nitpicky not to, or selectively blind.

Even a slight bit of sharpening will stand out eventually, as per the FSR example. Motion artifacts stand out too: monitor land has been fighting to reduce ghosting artifacts on LCDs for decades, and even today at 144 Hz/144 FPS you're not completely saved. And that's just a slow color-shifting pixel we're talking about.

Static image analysis only makes it clearer that what you were noticing in motion as 'unnatural' is in fact pixels not being what they should be. So, 'OK', perhaps, but as good as native? Nope.
Couldn't have said it better; in the end it's upscaling, however fancy it might be :)
 
A reconstructed and upscaled image has more detail than the original. It is either a piss-poor job by the developer or the reviewer drinking the Kool-Aid.
 
A reconstructed and upscaled image has more detail than the original
I suppose if you consider native to be the goal, people tend to balk at the idea of a reconstructed image having more detail: how is it possible? But why should the native-resolution image be where it ends? Is that the gold standard? Clearly we can push beyond it. IIRC, DLSS is attempting to achieve a supersampled look (by way of training against ultra-high-resolution 16K images), and these images, plus other great implementations, demonstrate that. I'm not saying it's without flaw, but it's being demonstrated time and time again that beyond-native detail is achievable.
-The neural network is trained by Nvidia using "ideal" images of video games of ultra-high resolution on supercomputers and low resolution images of the same games. The result is stored on the video card driver. It is said that Nvidia uses DGX-1 servers to perform the training of the network.
-The neural network stored on the driver compares the actual low resolution image with the reference and produces a full high resolution result. The inputs used by the trained neural network are the low resolution aliased images rendered by the game engine, and the low resolution motion vectors from the same images, also generated by the game engine. The motion vectors tell the network which direction objects in the scene are moving from frame to frame, in order to estimate what the next frame will look like
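Just to make that two-stage recipe concrete, here's a toy sketch: offline, fit weights so that an upscaled low-res frame matches a pristine high-res reference; at run time, only the learned weights are applied. A single 3x3 filter stands in for the whole network, so treat it as illustration only, not anything resembling Nvidia's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
ref = rng.random((32, 32))             # stand-in for the "ideal" high-res image
low = ref[::2, ::2]                    # the aliased low-res render
naive = np.kron(low, np.ones((2, 2)))  # nearest-neighbour upscale back to size

def conv3(img, w):
    """Apply a 3x3 filter w to img (zero-padded)."""
    p, h, v = np.pad(img, 1), img.shape[0], img.shape[1]
    return sum(w[dy, dx] * p[dy:dy+h, dx:dx+v]
               for dy in range(3) for dx in range(3))

w = np.zeros((3, 3)); w[1, 1] = 1.0    # learnable taps, start as identity
pad = np.pad(naive, 1)
for _ in range(500):                   # "training": nudge taps to cut the error
    err = conv3(naive, w) - ref        #   against the high-res reference
    grad = np.array([[np.mean(err * pad[dy:dy+32, dx:dx+32])
                      for dx in range(3)] for dy in range(3)])
    w -= 0.5 * grad
# At run time the game would only apply w, much as the driver ships weights.
```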
 
DLSS has access to the raw imagery earlier than FSR, so it can 'reconstruct' data that would have been thrown out by the time FSR sees it

Like uhhh.... seeing the data for a fence *before* a motion blur effect is added, instead of after: more to work with, so a slightly better image
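Roughly, the ordering being described, as pseudocode (every function here is a made-up stand-in, not a real engine or API call):

```python
import numpy as np

# Made-up stand-ins for engine stages, just to show the ordering.
def render_geometry():       return np.zeros((1080, 1920, 3))  # low-res, aliased
def motion_vectors():        return np.zeros((1080, 1920, 2))
def motion_blur(img):        return img                        # post effect
def temporal_upscale(i, mv): return np.repeat(np.repeat(i, 2, 0), 2, 1)
def spatial_upscale(img):    return np.repeat(np.repeat(img, 2, 0), 2, 1)

# DLSS-style: the upscaler runs early, on the raw frame plus motion
# vectors, before post effects like motion blur smear the fine detail.
frame = motion_blur(temporal_upscale(render_geometry(), motion_vectors()))

# FSR-style: post effects run first, so the upscaler only ever sees
# the finished frame, with the fence detail already blurred away.
frame = spatial_upscale(motion_blur(render_geometry()))
```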
 