Monday, June 7th 2021

AMD FidelityFX Super Resolution Coming To Xbox Series X/S

Microsoft has confirmed that AMD's FidelityFX Super Resolution (FSR) technology will be coming to the Xbox Series X/S. The feature is AMD's answer to NVIDIA's DLSS 2.0, the AI-based upscaling found in RTX-series graphics cards. Both technologies aim to increase frame rates in supported titles through upscaling, without a significant reduction in visual quality. AMD boasts compatibility with a wider set of graphics cards, including older NVIDIA GTX 10-series models, and is making the technology open source. AMD will launch FidelityFX Super Resolution for select PC games on June 22nd, which will show whether it can hope to compete with the well-established DLSS 2.0. Microsoft has confirmed that it will bring the technology to its Xbox Series X/S consoles, which run custom AMD processors.
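In spirit, all of these techniques render internally at a lower resolution and then scale up to the display resolution. As a rough illustration only (FSR's actual algorithm is far more sophisticated than this), a naive nearest-neighbour upscale in Python looks like this:

```python
# Illustration only: render at a low internal resolution, then scale the
# result up to the display resolution. Real FSR uses an edge-adaptive
# upscale plus sharpening; this naive nearest-neighbour version just
# shows where the performance win comes from (fewer pixels are shaded).

def upscale_nearest(image, scale):
    """Upscale a 2D grid of pixel values by an integer factor."""
    out = []
    for row in image:
        wide = [px for px in row for _ in range(scale)]
        for _ in range(scale):
            out.append(list(wide))
    return out

low = [[10, 20],
       [30, 40]]                    # 2x2 "internal render"
high = upscale_nearest(low, 2)      # 4x4 frame sent to the display
```

AMD's announced approach layers an edge-adaptive upscale and a sharpening pass on top of this basic idea, which is why it can run as ordinary shader code on a wide range of GPUs.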
Microsoft
At Xbox, we're excited by the potential of AMD's FidelityFX Super Resolution technology as another great method for developers to increase framerates and resolution. We will have more to share on this soon.
Source: IGN

22 Comments on AMD FidelityFX Super Resolution Coming To Xbox Series X/S

#1
Crackong
On June 22nd we will see if history (G-Sync vs. FreeSync) really repeats itself.
#2
ZoneDymo
I mean, yeah... obviously. Here is another reveal: it will also come to the PS5!

Also, I've said this 100 times now, but I wish they made an FSR or DLSS or whatever specifically for the ray-tracing effects, to lower the resolution of just the RT reflections and bounce lighting etc., and then reconstruct that up to a nicer-looking final image.
I think that would be far more worthwhile: keep the image nice and sharp at native res and leave all those effects you only look at at a glance to be upscaled.
#3
TumbleGeorge
This is for consoles, but how will FSR work on the next version of Microsoft Windows, which is coming soon? It will clearly have a new File Explorer, and who knows what other new code?
#4
Shihabyooo
ZoneDymo
I mean, yeah... obviously. Here is another reveal: it will also come to the PS5!

Also, I've said this 100 times now, but I wish they made an FSR or DLSS or whatever specifically for the ray-tracing effects, to lower the resolution of just the RT reflections and bounce lighting etc., and then reconstruct that up to a nicer-looking final image.
I think that would be far more worthwhile: keep the image nice and sharp at native res and leave all those effects you only look at at a glance to be upscaled.
For RT effects, you reduce the per-pixel sampling rate, not the pixel count itself, then find a way to denoise the mess you have. Not technically the same thing, but in some reductionist, conceptual view it is what you're looking for, and it is how the current tech works.

Can't say whether rendering the effect to a smaller frame is an accepted approach, though, but I'd expect it would have some horrible bleeding or motion artefacts.
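A toy sketch of that idea in Python (all the numbers are made up, and the box filter is a crude stand-in for a real denoiser):

```python
import random

# Toy sketch: ray-traced effects cut the number of samples *per pixel*
# rather than the pixel count, then denoise the noisy result. Each pixel's
# true value here is 0.5; we estimate it from a few random samples, then
# smooth with a 1D box filter standing in for a real denoiser.

def noisy_estimate(true_value, samples, rng):
    """Monte Carlo estimate: average of `samples` noisy draws around true_value."""
    return sum(true_value + rng.uniform(-0.5, 0.5) for _ in range(samples)) / samples

def box_denoise(row):
    """Average each pixel with its neighbours (clamped at the edges)."""
    out = []
    for i in range(len(row)):
        lo, hi = max(0, i - 1), min(len(row), i + 2)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

rng = random.Random(42)
noisy = [noisy_estimate(0.5, 4, rng) for _ in range(64)]  # only 4 samples/pixel
clean = box_denoise(noisy)                                # denoised estimate
```

Fewer samples per pixel means more noise per pixel; the denoiser trades that noise for a little blur, which is exactly the bleeding/smearing trade-off mentioned above.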
#5
ZoneDymo
Shihabyooo
For RT effects, you reduce the per-pixel sampling rate, not the pixel count itself, then find a way to denoise the mess you have. Not technically the same thing, but in some reductionist, conceptual view it is what you're looking for, and it is how the current tech works.

Can't say whether rendering the effect to a smaller frame is an accepted approach, though, but I'd expect it would have some horrible bleeding or motion artefacts.
I don't know the details of it; I know about the denoising needed, but I'm going more off Digital Foundry reviews, where they tested, for example, the console version of Watch Dogs: Legion vs. the PC version and found the console version running the reflections at a lower resolution, obviously to reduce the performance hit.

So if reducing the resolution of RT effects is an option to enhance performance, then I would think that is where FSR or DLSS could/should shine.
#6
lemoncarbonate
Haven't caught up on the details yet, but I heard it will support GTX-series cards as well?
My GTX 1080 Ti will be happy for a few more years.

I'm curious about how AMD will deliver it to NVIDIA cards... with a separate driver, maybe?
#7
ZoneDymo
lemoncarbonate
Haven't caught up on the details yet, but I heard it will support GTX-series cards as well?
My GTX 1080 Ti will be happy for a few more years.

I'm curious about how AMD will deliver it to NVIDIA cards... with a separate driver, maybe?
I think it's probably just a setting in the games; I doubt they had access to modify an NVIDIA driver to add support for it for their GTX 1060 demo.
#8
Kohl Baas
Crackong
On June 22nd we will see if history (G-Sync vs. FreeSync) really repeats itself.
Do you mean whether DLSS will cost $200+ more? :laugh:
#9
Vya Domus
lemoncarbonate
I'm curious about how AMD will deliver it to NVIDIA cards... with a separate driver, maybe?
That's not needed; it's all compute-based, running on DirectML. This means that any GPU that supports DirectML can run this without any other intervention.

Of course, on the 2000 series and up they could technically optimize it by writing a back end that uses the tensor cores, but knowing NVIDIA they probably won't do it. Not that it's actually needed; inferencing is pretty light even without dedicated hardware.
#10
Imsochobo
lemoncarbonate
Haven't caught up to the detail yet, but I heard it will also support GTX series card as well?
My GTX 1080 Ti will be happy for a few more years.

I'm curious about how AMD will deliver it to Nvidia card.. with a separate driver maybe?
Works on the standard NVIDIA driver.
ZoneDymo
I think it's probably just a setting in the games; I doubt they had access to modify an NVIDIA driver to add support for it for their GTX 1060 demo.
AMD has no way of modifying NVIDIA drivers; they're completely closed source. NVIDIA could, however, make custom AMD and Intel drivers, because those are open source (not the mainstream Windows ones, but the comparable Linux ones, as well as the shared Vulkan driver; both are open source).
Vya Domus
That's not needed; it's all compute-based, running on DirectML. This means that any GPU that supports DirectML can run this without any other intervention.

Of course, on the 2000 series and up they could technically optimize it by writing a back end that uses the tensor cores, but knowing NVIDIA they probably won't do it. Not that it's actually needed; inferencing is pretty light even without dedicated hardware.
I wonder if it's not based on DirectML but rather API-agnostic, so it could target Vulkan on Linux as well as DirectML.
Vulkan and DX12 are so similar anyway that a near 1:1 mapping is possible; the devs opting for Vulkan are usually a bit of a different breed, hence the perceived difference.
#11
lemoncarbonate
Vya Domus
That's not needed; it's all compute-based, running on DirectML. This means that any GPU that supports DirectML can run this without any other intervention.

Of course, on the 2000 series and up they could technically optimize it by writing a back end that uses the tensor cores, but knowing NVIDIA they probably won't do it. Not that it's actually needed; inferencing is pretty light even without dedicated hardware.
Wait... not a stupid question, I hope: how can GTX users get it without a driver from AMD? I doubt NVIDIA will update their GTX drivers just to adopt AMD FidelityFX. NVIDIA is not that generous toward their customers.
#12
Vya Domus
lemoncarbonate
Wait... not a stupid question, I hope: how can GTX users get it without a driver from AMD? I doubt NVIDIA will update their GTX drivers just to adopt AMD FidelityFX. NVIDIA is not that generous toward their customers.
You don't need anything; that's what I'm saying. If your card supports DirectML, then it will support this as well.
#13
pcminirace
Greetings. This comment is for Uskompuf, the author of the article. Greetings from Spain. In my country there is a technology site called Geektopia that cites your article as a source.
"AMD has announced FidelityFX Super Resolution (FSR), which some continue to call a competitor to NVIDIA's Deep Learning Super Sampling (DLSS), even though the two are totally different technologies. The advantage of FSR is that it is computationally undemanding, as it is mere rescaling, and it is therefore compatible with a greater number of devices, ranging from AMD's Radeon RX 470 to the current RX 6000 series, through to the competition and its GTX 10 series. But Microsoft is not going to miss the FSR party and will also bring it to the Xbox.

“We at Xbox are delighted by the potential of AMD's FidelityFX Super Resolution technology as another great way for developers to increase frame rates and resolution. We will share more about it shortly.”

AMD has not explicitly positioned FSR as a competitor to DLSS but as an alternative, and that is because they are different technologies. FSR is super-resolution, that is, a rescaling that starts from one resolution and takes it to another; for example, from FHD to 4K. It involves some processing, but it introduces blurriness, trailing artifacts, and other problems.

DLSS, as the name suggests, is supersampling. This means that it has a target resolution (e.g. 4K), and to achieve it, it first generates the frames at a much higher resolution (e.g. 36K) and then reduces them to the target resolution. Supersampling is an edge-smoothing technique, and there is no loss of graphic quality; quite the opposite.

However, the 'deep learning' part of DLSS has to do with the graphics actually being generated at less than 4K (e.g. FHD); artificial intelligence brings the frame up to 36K resolution (a rescaling by artificial intelligence), which is then reduced to 4K (a supersampling). Supersampling is computationally very intense (normally, 36K graphics would have to be generated to be reduced to 4K), but since DLSS offloads the rescaling to 36K onto the tensor cores, the result is a net performance gain. DLSS 1 had some visual problems in its rescaling, but the introduction of temporal information in DLSS 2 and a single super-trained neural network for rescaling eliminated them. In fact, with DLSS 2 you even gain detail, as in Death Stranding and other games that implement it.

In short: FSR is never going to be a competitor to DLSS. They will put it on the Xbox, it will be seen to produce visual problems, and developers will flee from the technology until AMD improves it, as happened with DLSS 1.

Via: TechPowerUp. "
I first read the article on TPU; then, reading the one from Geektopia, it did not seem at all to reflect the concept you wanted to express in yours.
I certainly trust your article more than Geektopia's. I think the Geektopia piece is biased and seems to put words in your mouth that you never said.
Maybe my words are not very clear, because I'm using Google Translate since I don't speak English, but I have been following your publication for a long time because of its quality, and I do not like what they do with your words.
Keep it up. Greetings from Spain.
#14
Vya Domus
pcminirace
DLSS, as the name suggests, is supersampling. This means that it has a target resolution (e.g. 4K), and to achieve it, it first generates the frames at a much higher resolution (e.g. 36K) and then reduces them to the target resolution. Supersampling is an edge-smoothing technique, and there is no loss of graphic quality; quite the opposite.

However, the 'deep learning' part of DLSS has to do with the graphics actually being generated at less than 4K (e.g. FHD); artificial intelligence brings the frame up to 36K resolution (a rescaling by artificial intelligence), which is then reduced to 4K (a supersampling). Supersampling is computationally very intense (normally, 36K graphics would have to be generated to be reduced to 4K), but since DLSS offloads the rescaling to 36K onto the tensor cores, the result is a net performance gain. DLSS 1 had some visual problems in its rescaling, but the introduction of temporal information in DLSS 2 and a single super-trained neural network for rescaling eliminated them. In fact, with DLSS 2 you even gain detail, as in Death Stranding and other games that implement it.
You're mixing things up very badly. NVIDIA supersamples images on their servers by running the game at very high resolutions, which they then use to train a machine-learning model. There is no 36K supersampling involved, or anything like that, happening on the GPU that you use as an end user; all the GPU does is infer an output image from the input image according to the aforementioned model.

AMD is doing the same thing: on the GPU you get input image -> model -> output image. The only difference is that AMD is not training the model per game, which will likely make it much easier to adopt.
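In toy form, that split looks like this in Python (the one-parameter "model" is obviously made up; real upscalers are deep networks, and none of this is AMD's or NVIDIA's actual code):

```python
# Toy sketch of the offline/online split: the expensive part (learning from
# high-resolution ground truth) happens once, offline; the GPU at runtime
# only runs cheap inference: input image -> model -> output image.

def train_offline(pairs):
    """'Training': least-squares gain g minimising error of g*upscaled vs truth."""
    num = sum(u * t for up, truth in pairs for u, t in zip(up, truth))
    den = sum(u * u for up, _ in pairs for u in up)
    return num / den  # a single number is our entire "model"

def infer(model_gain, low_res):
    """Runtime inference: one multiply per pixel, cheap on any GPU."""
    return [model_gain * px for px in low_res]

# Offline: pairs of (upscaled low-res frame, high-res ground truth)
pairs = [([1.0, 2.0, 3.0], [1.1, 2.2, 3.3])]
gain = train_offline(pairs)        # learned once, shipped with the game/driver
frame = infer(gain, [10.0, 20.0])  # runtime: model applied to a new input
```

The point is the asymmetry: training touches huge supersampled datasets, while the shipped model is small and fast enough that no tensor cores are strictly required to run it.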
#15
shk021051
Guys, this is not DirectML or machine learning, right?
#16
TumbleGeorge
shk021051
Guys, this is not DirectML or machine learning, right?
On these weak chips? I think the ML is done on supercomputers, and the results of the already-completed training are used, for now, in external software like FSR, but in the future in programmable cores inside CPUs and GPUs. If, that is, CPUs and GPUs continue to be different chips in the future.
#17
Vya Domus
shk021051
Guys, this is not DirectML or machine learning, right?
It is machine learning. Or rather, inferencing.
#18
Mysteoa
Vya Domus
You don't need anything; that's what I'm saying. If your card supports DirectML, then it will support this as well.
From what I have heard, it's not using DirectML; that would make it exclusive to Windows, and that would not be good.
#19
Vya Domus
Mysteoa
From what I have heard, it's not using DirectML; that would make it exclusive to Windows, and that would not be good.
There is no reason for it not to be based on DirectML, which is why this is coming to Xbox, since it shares the same APIs with Windows. Otherwise this would have been announced for the PS5 as well, but it wasn't.

Since Microsoft themselves talked about this, you can be certain it's using their API. That being said, FSR is free for Sony to use; there is no reason for this to be exclusive to Windows, since FSR can be implemented on top of anything. On Xbox and Windows it will be done through DirectML, that's all, and to be honest it's not AMD's job to write a back end for every platform out there.
#20
GamerGuy
TumbleGeorge
On these weak chips?
What chips are you referring to, the RX 6000 series?
#21
Mussels
Moderprator
lemoncarbonate
Wait.. no stupid question, so how GTX users can get it without a driver from AMD? I doubt Nvidia will update their GTX driver just to adopt AMD Fidelity. Nvidia is not that generous toward their customers.
It's being done inside the game engine; it's not a driver-level function.

It's no different from the 'resolution scale' option lots of games have had for years, just with some better programming behind it.
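In engine terms it's little more than this Python sketch (the function name is made up for illustration; the scale factors match the quality-mode ratios AMD has published for FSR):

```python
# Toy sketch of an engine-side 'resolution scale': the game picks an
# internal render resolution from a scale setting, renders there, then
# upscales to the output resolution. No driver involvement at all.

def internal_resolution(output_w, output_h, scale):
    """Internal render-target size for a given resolution-scale setting."""
    if not 0.25 <= scale <= 1.0:
        raise ValueError("scale must be between 0.25 and 1.0")
    return (max(1, round(output_w * scale)),
            max(1, round(output_h * scale)))

# FSR 1.0's published quality modes correspond to per-axis factors like these:
presets = {"Ultra Quality": 1 / 1.3, "Quality": 1 / 1.5,
           "Balanced": 1 / 1.7, "Performance": 1 / 2.0}

w, h = internal_resolution(3840, 2160, presets["Quality"])  # 2560x1440 internal
```

Since the engine owns both the render target and the upscale pass, any GPU the game runs on gets the feature, which is why no NVIDIA driver change is needed.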
#22
TumbleGeorge
GamerGuy
What chips are you referring to, the RX 6000 series?
All consumer hardware that uses one chip, or a very small number of chips. Maybe after 4-5 years we'll reach the end of the evolution of consumer hardware, which would mean no more than three generations: 2022, 2024, and possibly 2026. Even with eventual new architectures, there is a likelihood that consumer hardware will reach performance levels satisfactory enough that we come to terms with the fact that it will probably never achieve truly high performance. Although, who knows... Maybe in 10-30 years there will be home quantum computers, even portable ones, and my thesis will be refuted.