Thursday, May 6th 2021
AMD's Elusive FidelityFX Super Resolution Coming This June?
AMD FidelityFX Super Resolution (FSR), the company's elusive rival to the NVIDIA DLSS technology, could be arriving next month (June 2021), according to a Coreteks report. The report claims that the technology is already at an advanced stage of development, in the hands of game developers, and offers certain advantages over DLSS. For starters, it doesn't require training from a generative adversarial network (GAN), and doesn't rely on ground truth data. It is a low-overhead algorithmic upscaler (not unlike the various MadVR upscalers you're used to). It is implemented early in the pipeline. Here's the real kicker—the technology is compatible with NVIDIA GPUs, so studios developing with FSR would be serving both AMD Radeon and NVIDIA GeForce gamers.
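For context, here is a minimal sketch of what a purely algorithmic spatial upscaler looks like, in the spirit of the MadVR-style filters mentioned above. FSR's actual filter had not been published at the time, so the bilinear kernel below is only an illustrative stand-in, not AMD's algorithm:

```python
import numpy as np

def bilinear_upscale(img: np.ndarray, scale: float) -> np.ndarray:
    """Purely algorithmic upscaler: no network training, no ground-truth data.
    img is an (H, W, C) float array; returns an (H*scale, W*scale, C) array."""
    h, w, _ = img.shape
    out_h, out_w = int(h * scale), int(w * scale)
    # Map each output pixel back to a fractional source coordinate.
    ys = (np.arange(out_h) + 0.5) / scale - 0.5
    xs = (np.arange(out_w) + 0.5) / scale - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    fy = np.clip(ys - y0, 0.0, 1.0)[:, None, None]
    fx = np.clip(xs - x0, 0.0, 1.0)[None, :, None]
    # Blend the four nearest source pixels.
    top = img[y0][:, x0] * (1 - fx) + img[y0][:, x0 + 1] * fx
    bot = img[y0 + 1][:, x0] * (1 - fx) + img[y0 + 1][:, x0 + 1] * fx
    return top * (1 - fy) + bot * fy
```

A real-time implementation would do the same math in a shader, presumably with a sharper kernel; the per-pixel cost is a handful of taps, which is what would keep the overhead low and the technique GPU-agnostic.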
Sources:
Coreteks (YouTube), VideoCardz
You said you are a software engineer, but that doesn't mean anything in particular. Did you work with ML frameworks and write software similar to DLSS?
Please, check this:
No sense? Better definition and 90 fps instead of 45 fps... and it gets better over time. Yesterday Metro Exodus was released with DLSS 2.0; on low-end cards like the RTX 2060, it's 49 fps instead of 11.3 fps... c'mon, DLSS gets a lot more juice out of GPUs.
The tech is awesome.
P.S.
As for the "marketing" clips that show the DLSS 2 4K image looking better than the native 4K one, I only have one word for you: SHARPNESS.
They just add extra sharpness to fake the crispness of the textures, but if you look closer at the images you can clearly distinguish the blurred jaggies left by the low-res upscale.
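To illustrate the point, here is a minimal unsharp-mask sketch. This is the generic sharpening technique; whether DLSS applies exactly this filter is the commenter's claim, not something confirmed. The idea is to boost the difference between the image and a blurred copy, which raises edge contrast without adding real detail:

```python
import numpy as np

def unsharp_mask(img: np.ndarray, amount: float = 1.0, radius: int = 1) -> np.ndarray:
    """Sharpen a grayscale float image in [0, 1] by boosting (img - blurred).
    Edges gain overshoot that reads as 'crispness', but no detail is added:
    upscaled jaggies stay jaggies, just with more contrast."""
    k = 2 * radius + 1
    pad = np.pad(img, radius, mode="edge")
    # Separable box blur: average k shifted rows, then k shifted columns.
    blur = np.stack([pad[i:i + img.shape[0]] for i in range(k)]).mean(axis=0)
    blur = np.stack([blur[:, j:j + img.shape[1]] for j in range(k)]).mean(axis=0)
    return np.clip(img + amount * (img - blur), 0.0, 1.0)
```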
Plus, if that's how you feel about DLSS... I don't think you'll be impressed with FSR IQ. I hope I'm wrong. All that really matters is the output image on your screen; marketing is always going to be marketing.
To me the output is all that matters: if it's comparable to native, who cares one bit what the input resolution is?
I'm not bothered at all about how the magic pixels make it to my screen; the image I get at the end is the important part. Rendering is all ridiculous computer magic to most people anyway, so judge the final image as the final image.
But the point is that the tech is fantastic, and it's great news that AMD users can also enjoy a similar solution, I hope.
1) Improved lines (long grass, hair, eyebrows, etc.)
2) Not that noticeable with blurry textures, which is why people keep reposting that face from that weird game that looks like it's from 2003: it barely has any texture detail, but it has hair
3) Wiping out fine details
4) Adding blur when things move fast (the entire screen can be blurred in Death Stranding just by quickly moving the mouse; see the Ars Technica review)
5) Particularly bad with small, quickly moving objects
They know that saying it's "1080p upscaled" sounds less good, so instead they pretend it's magic and their cards can do "4K" with ray tracing, when in reality they are rendering at less than 4K to be able to do that.
My problem is the marketing, and I can only repeat myself: if they compared 1080p performance to 1080p performance, ya know, apples to apples, then said "yeah, OK, the performance is the same between the two products, BUT ours can do this with our version of anti-aliasing so it looks better," then that would be fine.
But instead they knowingly say "our card with 4K DLSS does better than the competition at 4K"... well yeah, because you are not running it at 4K now, are ya?
What would work, and where you would hardly see a difference, is taking a picture at 20 megapixels and downscaling it to 5 megapixels; looking at the two on a 2K monitor, you can hardly tell them apart unless you start pixel peeping.
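A tiny sketch of the downscale described there, assuming a 4:1 area reduction (20 MP to 5 MP) done by 2x2 box averaging; the example resolutions are illustrative:

```python
import numpy as np

def box_downscale_2x(img: np.ndarray) -> np.ndarray:
    """Average each 2x2 block: every output pixel pools four source pixels,
    which is why single-pixel flaws become almost invisible after downscaling."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return img.reshape(h // 2, 2, w // 2, 2, -1).mean(axis=(1, 3))

# e.g. a 5472x3648 (~20 MP) photo becomes 2736x1824 (~5 MP):
photo = np.random.rand(3648, 5472, 3)
small = box_downscale_2x(photo)  # shape (1824, 2736, 3)
```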
So we know DLSS only works well when you play at very high resolutions, but the question is why the game doesn't scale properly with the textures and all that.
I think DLSS has to start from a place of wasting resources, and then comes the miracle technology that helps with the waste.
I'm not holding my breath for AMD doing much in this area, or for getting this on consoles in the future; if the game is properly made in the first place, then there is no need for DLSS and other tricks.
I'd compare it to using too much sharpening: when you sharpen a blurry image, it can sort of appear as if the image quality is better, so in some instances DLSS can appear to some people to have better image quality, but in reality it doesn't.
In fact, in Cyberpunk 2077, universally cited as the best game for ray tracing and DLSS, DLSS Quality is much worse than native. Gamers Nexus had a great video where he shows a dozen pictures at the beginning and asks you to figure out which one is which, from native to DLSS Performance to the highest DLSS Quality. Literally 99% of people could tell which was native and which was DLSS.
Now, that video did show small areas where DLSS had slightly better text legibility, so there are some small, very specific image-quality improvements, but overall, for 95% or more of the frame, it's easily distinguishable and much WORSE quality than native. We are also talking about rendering at 4K, which gives DLSS the best input; don't forget that the lower you go, the worse the image quality gets. So if you play at 1080p and use DLSS at that resolution, the DLSS output is going to be much worse than at 4K.
Even standard upscaling + sharpening on a 4K monitor is fine and usable.
Same thing on a 1440p monitor: not that great, but not that bad either. Somewhat usable.
On a 1080p monitor it's just bad.
That is true for all upscaling technology, including DLSS. Digital Foundry (who have made many paid, Nvidia-sponsored presentations with very doubtful claims) aren't telling you the full story; they only look at what makes things shine. An outlet with a truly neutral and objective view wouldn't have made that video claiming, for example, that a 3080 was twice as fast as a 2080 Ti. They say a lot of things that are true, but you can mislead people by steering the narrative in a specific direction and omitting all the negatives.
It's clear that internally DLSS adds a sharpening filter. Apply it to the native image and that will look even better; you don't need upscaling for that. Also, I think Nvidia is doing a lot to trick people into thinking there is a huge AI part to it, because they want to convince people they need the silicon space that was added for their pro/business AI accelerators, but many people doubt that part. Some bugs have shown that it probably uses a form of TAA, and it also offsets the rendering of each frame slightly so that the lower resolution captures a bit more detail every frame.
This is why fast-moving objects or scenes are so blurry even while it picks up some tiny details better. It doesn't seem to reconstruct much; it just reuses previous-frame data, and if things move too quickly, there is no data to use.
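A toy sketch of the jitter-plus-history mechanism described above, with illustrative names; this is the generic TAA idea, not NVIDIA's actual code. The camera is offset by a sub-pixel amount each frame, and frames are blended into an exponential history buffer:

```python
import numpy as np

# Sub-pixel camera offsets (base-2/base-3 Halton), applied to the projection
# matrix at render time so each frame samples a different spot in the pixel.
JITTER = [(0.5, 1 / 3), (0.25, 2 / 3), (0.75, 1 / 9), (0.125, 4 / 9)]

def taa_accumulate(frames, alpha=0.1):
    """Blend jittered frames into an exponential history buffer.
    A static scene converges toward a supersampled result (tiny details
    appear); fast motion leaves stale history in the blend, which is
    exactly the ghosting/blur described above."""
    history = frames[0].astype(float)
    for frame in frames[1:]:
        history = alpha * frame + (1.0 - alpha) * history
    return history
```

Real implementations also reproject the history with motion vectors and reject samples that moved too far, which is why small, fast-moving objects are the hardest case.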
Nothing there that couldn't be implemented with a more open solution, and this is really where DLSS fails. It is a good technology; anything that helps improve performance is good, even with drawbacks. The main problem for me isn't whether it's a good or bad technology: it's a good tech with some drawbacks. The problem is that it's a closed technology.
Closed technologies are bad for PC gamers, end of story.
The results I've seen for DLSS when upscaling from, say, 1440p-1800p to 4K are more than good enough to pass muster, and given that in a game you are not usually standing still looking for tiny flaws, I'll take image quality that looks almost as good but with 50% higher frame rates any day. I would never try to upscale 1080p to 4K, though: 1440p minimum.
I do hope old-school computer graphics see a revival; everything used to look unique to itself. This AI optimisation can do its thing, but I still hold out hope for new filtering modalities. Let's make MadVR the benchmark. There is much to be gained if it at least overcomes this overshading 'bug' in the pipeline: much ado about nothing, sales pitch, graphics relegator, quad helper pixels, planned-obsolescence artifact!
In the videography world there is a plugin called Twixtor; it's like a 20-year-old plugin that converts a video at, let's say, 25 fps to 50 fps. It analyzes the video and creates new in-between pixels; pray you don't shoot trees and that the scene is not too complex. It does basically the same thing as modern plugins that call themselves AI-powered.
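For the curious, here is a heavily simplified sketch of what a Twixtor-style interpolator does, assuming a per-pixel flow field has already been estimated (the hard part, and the part that fails on trees and complex scenes); the function name and nearest-neighbor warping are illustrative simplifications:

```python
import numpy as np

def interpolate_midframe(f0: np.ndarray, f1: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """Synthesize the frame halfway between f0 and f1 (both (H, W) arrays),
    given a per-pixel motion field flow (H, W, 2) from f0 to f1.
    Where the flow estimate is wrong, the two warped images disagree
    and the blend smears, hence the classic artifacts on foliage."""
    h, w = f0.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # A pixel in the mid frame came from p - v/2 in f0 and lands at p + v/2 in f1.
    y0 = np.clip(ys - 0.5 * flow[..., 1], 0, h - 1).astype(int)
    x0 = np.clip(xs - 0.5 * flow[..., 0], 0, w - 1).astype(int)
    y1 = np.clip(ys + 0.5 * flow[..., 1], 0, h - 1).astype(int)
    x1 = np.clip(xs + 0.5 * flow[..., 0], 0, w - 1).astype(int)
    return 0.5 * f0[y0, x0] + 0.5 * f1[y1, x1]
```

Doubling 25 fps to 50 fps is then just inserting one such mid frame between every pair of source frames.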
I'm telling you, DLSS-sponsored titles need to waste a lot of hardware resources so that when you enable the miracle technology it "works" as intended.
Edit: Does it really need ray tracing to be on, as some old rumors said?