Wednesday, March 16th 2022
AMD Set to Announce FSR 2.0 Featuring Temporal Upscaling on March 17th
AMD is preparing to announce the successor to FidelityFX Super Resolution (FSR) tomorrow, March 17th, ahead of a showcase of the technology at GDC 2022, as we previously reported, according to leaked slides obtained by VideoCardz. AMD FSR 2.0 will use temporal data and optimized anti-aliasing to improve image quality at all presets and resolutions compared to its predecessor, making it a worthy competitor to NVIDIA DLSS 2.0. The slides also confirm that FSR 2.0 doesn't require dedicated machine-learning hardware acceleration and will be compatible with a "wide range of products and platforms, both AMD and competitors".
The technology has already been implemented in Deathloop, where FSR 2.0 "Performance" mode with ray tracing increased frame rates from 53 FPS at native 4K with ray tracing to 101 FPS. The slides do not reveal whether AMD will open-source FSR 2.0, as it has done for FSR and as Intel plans to do with XeSS. AMD is also expected to release Radeon Super Resolution, a driver-level implementation of FSR available in all games, on March 17th.
Source:
VideoCardz
Nothing was stopping them; they are literally a duopoly.
Just saying "native" as an all-encompassing, never-to-be-surpassed level of quality is misleading at best. What resolution? What pixel density? What display technology? What AA method? What game? In motion? The list goes on.
That sentiment needs to die already. There are multiple ways to improve IQ over rendering the exact number of pixels of your display in the 'traditional' way (and, more often than not, with an AA pass).
Another factor might be something like using a slow VA panel: people might not notice any ghosting because their panel produces it anyway, so they're used to it all the time, and DLSS doesn't seem to make it any worse.
I'm not blind to how DLSS can negatively affect an image, but it can certainly be said that aspects of the image are positively improved. One of my top bugbears is shimmering, and the AA in DLSS often greatly reduces or even completely eliminates it. So between a native presentation with distracting shimmer and a DLSS implementation that solves the shimmer but perhaps introduces some pixel crawl or minor ghosting, I'll almost certainly prefer the DLSS image, before even counting the extra FPS.
And then there's the implementation. I remember in The Elder Scrolls Online, because the assets themselves are so low-detail/low texture quality, the whole thing kinda falls apart and becomes an even blurrier mess. That's probably a game/engine/level of detail where native sharpness adds fidelity, even if it also leaves some aliased edges. Sharpening passes in that game (I used to screw around with ReShade and the like way before DLSS) were horrible too even back then; the best they managed was adding some mild juice to the bland color palette.