
AMD Confirms FidelityFX Super Resolution 4.0 Will Be AI-Powered, Focused on Efficiency

Turns out that Tensor cores and RT cores are important, eh? So much so that Sony is charging $200 extra to have those cores in the PS5 Pro :D
 
Looked at this video too, as it was linked in the thread. It's quickly obvious to me that the compression significantly muddies the ability to truly discern between the two (given that I test this myself regularly on my own 4K120 OLED, including some of the same titles shown here). Plus, it uses what is inarguably FSR's best-case scenario, 4K Quality mode, where we already know the gap is closest.

I called attention to this in the comments and elaborated on my own findings: why I find his 'challenge' methodology flawed and the conclusions drawn interesting but far from definitive, given that the compression obfuscates much if not all of the (admittedly reduced) differences between the techniques when compared to real-time rendering viewed on one's own monitor. I also detailed how and why DF's comparison methodology leads to significantly less flawed comparisons and better conclusions, which happen to be in line with my own and those of other reputable tech press.

Well, my replies got deleted, my initial comment was hidden from public view, and other comments I saw pointing out the same are gone or hidden too. Methinks this channel doesn't like constructive criticism... :roll: Just another channel to ignore now, I guess. If they're not willing to respectfully listen to constructive criticism and allow it to be visible, they'd rather block their ears and keep believing in the... uh... point? they're attempting to make.
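For reference on why 4K Quality mode is the friendliest comparison: every one of these upscalers renders internally at a fraction of the output resolution. A quick sketch using FSR 2's published per-axis scale ratios (DLSS and XeSS use broadly similar values, so treat the exact numbers as approximate):

```python
# Internal render resolution for each upscaler quality mode at 4K output.
# Scale factors are FSR 2's documented per-axis ratios; other upscalers
# use broadly similar values.
MODES = {
    "Quality": 1.5,
    "Balanced": 1.7,
    "Performance": 2.0,
    "Ultra Performance": 3.0,
}

def internal_resolution(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    scale = MODES[mode]
    return round(out_w / scale), round(out_h / scale)

for mode in MODES:
    print(mode, internal_resolution(3840, 2160, mode))
# Quality at 4K renders at 2560x1440 -- the most input data any mode gets,
# which is exactly where the FSR/DLSS gap is smallest.
```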

 


Sony must have hired that same guy to do their PS5 vs PS5 Pro comparison :roll:.
 
I don't want to upscale anything.
Give me GPUs that are good enough to run at native resolution, and stop making shitty games that are barely functional.
 
Totally agree. Upscaling is a big step back in the visual experience for me as well.

I'd rather run native and just lower details. I did some tests with the Wukong benchmark, and that seems like a much better approach.

Upscaling is mainly about the manufacturers selling lower-spec cards and increasing margins, just as RT is largely a few better shadows for a 30%-or-more performance hit.

Seemingly no one is calling these "features" out for what they really are.
 
Wow. What a brilliant reply. :kookoo:
Well, you would get what you want. Just play games at least five years after they've been released. Suddenly, instead of "shitty games that are barely functional", you'd get perfectly functional games.

I mean, why are you rushing to play "shitty games" in the first place?
 
Turns out that Tensor cores and RT cores are important, eh? So much so that Sony is charging $200 extra to have those cores in the PS5 Pro :D

Lol... except you get 18000 MT/s graphics-speed RAM, a 2 TB PS5-class NVMe drive, a "fancy racing stripe on the case", and almost double the graphics "cylinders", lol... for $200 over a PS5.

No "Blu-ray drive", because "cars" don't have cassette or CD players anymore (how long have PCs NOT had Blu-ray/CD drives?).
Finally, "the" (PS5 Pro) console is the "same" as a PC, as in digital-only...
The console does have a bolt-on, snap-on solution for those stuck in the "past".
 
Even if FSR 4 is available to a similar scope of hardware as previous iterations, we already have XeSS (DP4a), which is also available on a similar range of hardware. FSR 4 wouldn't be worth using unless it had miraculously lower overhead than XeSS (on GPUs besides RDNA 4) while giving similar image quality.
 
...I thought all upscaling (AMD, Nvidia, Intel) used some sort of AI...
 
Nah. Idk about Intel, but AMD initially went with some (very) basic interpolation algorithms.
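To illustrate what "basic interpolation" means here: FSR 1's upscaling pass (EASU) is, as I understand it, built around a Lanczos-style kernel plus a sharpening pass. A toy one-dimensional, single-channel version of that idea, just to show there's no AI and no temporal history involved:

```python
import numpy as np

def lanczos(x, a=2):
    # Lanczos kernel: sinc windowed by a wider sinc, zero outside |x| < a.
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

def upscale_1d(row, factor, a=2):
    # Spatial-only resample of one scanline: each output pixel is a weighted
    # sum of nearby input pixels. No motion vectors, no history, no network.
    n_out = int(len(row) * factor)
    out = np.empty(n_out)
    for i in range(n_out):
        src = i / factor                              # position in input space
        taps = np.arange(int(src) - a + 1, int(src) + a + 1)
        w = lanczos(src - taps, a)
        idx = np.clip(taps, 0, len(row) - 1)          # clamp at the borders
        out[i] = np.dot(w, row[idx]) / w.sum()
    return out

print(upscale_1d(np.array([0.0, 1.0, 0.0, 1.0]), factor=2))
```

Because it only ever sees one frame, any aliasing in the input survives into the output, which is where the shimmering complaints about FSR 1 come from.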
 
Even if FSR 4 is available to a similar scope of hardware as previous iterations, we already have XeSS (DP4a), which is also available on a similar range of hardware. FSR 4 wouldn't be worth using unless it had miraculously lower overhead than XeSS (on GPUs besides RDNA 4) while giving similar image quality.
It's always better to have more competition. XeSS also might not be fully open source, so there's that too.
 
Nah. Idk about Intel, but AMD initially went with some (very) basic interpolation algorithms.
Intel uses AI on their GPUs for XeSS; it runs on those XMX cores. On non-Intel cards, XeSS uses DP4a.
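For anyone wondering what DP4a actually is: it's a single GPU instruction that dot-products four packed 8-bit integers from each operand and accumulates into a 32-bit integer, which is the workhorse of low-precision inference on cards without dedicated matrix units. A scalar model of it (the function name and shape here are illustrative, not any real API):

```python
# Scalar model of one DP4a instruction: a dot product of four signed 8-bit
# values from each operand, accumulated into a 32-bit integer.
def dp4a(a: list[int], b: list[int], acc: int = 0) -> int:
    assert len(a) == len(b) == 4
    assert all(-128 <= v <= 127 for v in a + b)   # int8 operand range
    return acc + sum(x * y for x, y in zip(a, b))

# One instruction performs all four multiplies plus the adds; XMX units go
# further and process whole small matrix tiles per instruction, which is why
# the XMX path of XeSS is faster than the DP4a fallback.
print(dp4a([1, -2, 3, 4], [5, 6, -7, 8]))  # -> 4
```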
 
IMO, if the shimmerfest is the only thing fixed in FSR 4, it will be a resounding success.
That was the issue for me when I tried FSR in RPCS3.

RPCS3 can do a kind of super-resolution mode where the game is rendered high and then downscaled, and it's amazing, but it doesn't work in every game. FSR 1 is also available; I tried it, and it was an improvement over native, but its obvious flaw was nasty shimmering. That was v1 of FSR, though, so I don't know if FSR 2 has the same issue. I remember DLSS 1 shimmered in FF15.
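FSR 2 largely did fix that, because it's temporal rather than spatial: it reprojects the previous frame's output using motion vectors and blends it with the new frame, which averages out the subpixel flicker that reads as shimmer. A heavily simplified sketch of the accumulation idea (real implementations add disocclusion detection, history clamping, and so on):

```python
import numpy as np

def temporal_accumulate(history, current, motion, alpha=0.1):
    # Reproject last frame's output to where things moved, then exponentially
    # blend in the new (aliased) frame. Over time each pixel averages many
    # jittered samples, which suppresses shimmer.
    h, w = current.shape
    ys, xs = np.indices((h, w))
    src_y = np.clip(ys - motion[..., 0], 0, h - 1).astype(int)
    src_x = np.clip(xs - motion[..., 1], 0, w - 1).astype(int)
    reprojected = history[src_y, src_x]
    return (1 - alpha) * reprojected + alpha * current

# Static scene: new detail fades in over several frames instead of flickering.
hist = np.zeros((4, 4)); cur = np.ones((4, 4)); still = np.zeros((4, 4, 2))
print(temporal_accumulate(hist, cur, still)[0, 0])  # 0.1 after one frame
```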
 
Wow. What a brilliant reply. :kookoo:

I mean, technically he is correct. It's not the GPU manufacturers' fault that game devs make new games so demanding that the hardware can barely run them at native resolution, and at the same time we cheer that on as gamers because it gives an early glimpse of the future, Crysis for example.

Older games, meanwhile, get maxed out natively by newer hardware.
 
I'm really wondering how good FSR 4 is going to be, because FSR 3.1 already does a good job without AI, so with AI it should definitely be much better!
Maybe AMD is finally going to catch up with DLSS! But I'm sure both are going to push even further with more generated frames (frame generation), like 2x, 3x, or 4x.
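On the multi-frame-generation point, it's worth remembering that generated frames multiply the presented framerate, not responsiveness, since input is still sampled at the rendered rate. A back-of-envelope sketch, assuming an interpolation-style generator like current FSR 3 / DLSS 3 frame generation:

```python
def frame_generation(rendered_fps, generated_per_rendered):
    # Presented framerate scales with the number of inserted frames, but an
    # interpolator must hold back the newest rendered frame until the
    # in-between frames are shown, so input latency tracks the rendered rate.
    presented_fps = rendered_fps * (1 + generated_per_rendered)
    latency_floor_ms = 1000 / rendered_fps
    return presented_fps, latency_floor_ms

for n in (1, 2, 3):  # the hypothetical "2x", "3x", "4x" modes
    fps, ms = frame_generation(60, n)
    print(f"{n + 1}x: {fps} fps presented, >= {ms:.1f} ms per real frame")
```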
 
I don't want to upscale anything.
Give me GPUs that are good enough to run at native resolution, and stop making shitty games that are barely functional.
Can you be more specific? Maxing out a game's settings and then complaining it won't run at native resolution is a bad way to complain.

Games have settings, and usually high settings look almost the same and run much better. No upscaling needed.
I have also seen examples where upscaling greatly improves Temporal AA issues. There is also upscaling at native resolution for better image clarity.

The best thing is, at least mostly, that you can choose whether you want to run your game natively or with upscaling :)

I also like that some game devs just go nuts with the graphical settings. That way you can test how well future cards will run them.

That was the issue for me when I tried FSR in RPCS3.

RPCS3 can do a kind of super-resolution mode where the game is rendered high and then downscaled, and it's amazing, but it doesn't work in every game. FSR 1 is also available; I tried it, and it was an improvement over native, but its obvious flaw was nasty shimmering. That was v1 of FSR, though, so I don't know if FSR 2 has the same issue. I remember DLSS 1 shimmered in FF15.
DLSS 1, which actually used AI training, did not do well.
Current DLSS and FSR do not use AI at any point. DLSS might use the Tensor cores, but despite having Tensor cores, the Titan V does not support DLSS.

NPU stuff in the CPU won't help FSR 4, since you would have to run the image through the CPU, and transferring that amount of data would have a huge performance impact (rough numbers sketched below).
In my opinion, "AI" is just marketing. FSR 4 may use the special compute units, which are also used for AI, in current and upcoming AMD GPUs.

Space Marine 2 has a very good FSR 2.1 implementation, and it will get FSR 3.1. FSR in that game also works on a GTX 980 and R9 Nano :)
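To put rough numbers on the NPU point above, here is a back-of-envelope calculation. The frame format and the assumption that every frame makes a full round trip over PCIe are mine, purely for illustration:

```python
# Back-of-envelope: PCIe traffic if every frame visited a CPU-side NPU.
# Frame format (RGBA, FP16 per channel) and the full round trip per frame
# are assumptions for illustration only.
W, H = 3840, 2160
BYTES_PER_PIXEL = 8                          # 4 channels x 2 bytes
frame_mb = W * H * BYTES_PER_PIXEL / 1e6     # ~66 MB per frame

for fps in (60, 120):
    round_trip_gb_s = 2 * frame_mb * fps / 1e3
    print(f"{fps} fps: ~{round_trip_gb_s:.0f} GB/s of PCIe traffic")
# ~8 GB/s at 60 fps and ~16 GB/s at 120 fps, before motion vectors and depth,
# out of roughly 32 GB/s for a PCIe 4.0 x16 link -- plus the added latency.
```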
 
Good job AMD, telling your fans on PC that their hardware is worthless already
How is providing versions of these technologies to a huge range of AMD cards, and even Nvidia cards below the 4000 series, making anyone's hardware worthless? The only company that has done that until now is Nvidia.

Obviously, if FSR 4 is using AI cores, then of course it will need a GPU with said AI cores. I'm confused about what you're trying to say. You don't have to use upscaling anyway, and you probably won't be if you're using an older card at lower resolutions. You know the most popular GPU is still the 3060 at 1080p, right? No one is using upscaling with that configuration.
 

Just thinking about the suitability of the 3060 for 1080p today, since you brought it up.

I had a quick look at the 3060's 1080p numbers in the 3050 6 GB review (they're individually listed there): there are two games where it's in the 30s, three where it's in the 40s, a couple in the high 50s, and the rest above 60 (I always ignore Cities II as it's broken). The 30s games need DLSS/upscaling, as you probably can't drop graphics settings enough to compensate before DLSS itself becomes the smaller reduction in image quality. The 40s games you might get away with by dropping settings only, though you'll probably still get drops below 60 fps in parts of those games; no big deal. The 50s should be easy to get to 60 fps with well-chosen settings.

The 7600 and 4060 each do better than this, but it's only 0-10 more fps in these sub-60 fps games, so they'll need to drop settings less often but will still need to do so. And that's right now. It's a reasonable assumption that someone buys a GPU to use for three years or longer, so there will be increasing numbers of games needing help from upscaling to maintain good fps on these cards.
 
Let's not forget that those are maximum settings. Dropping a thing or two can result in huge performance gains with minimal to no visual loss in many games.
 

Certainly; I mentioned doing that (and I use DF, HUB, or BK optimized settings in all games), but some games don't have as much headroom for reducing settings to recover framerate as others, FFXVI being a recent example, and there are others. And that's only today's games. In 2-3 years, which is a reasonable lifetime for a GPU, it's only going to get worse.

However, I think that's an overly pessimistic way to look at things, as a dedicated gamer will save longer and buy higher in the range. I and others in the family used my 1060 6G until it died earlier this year, and the 4060 and 7600 are better 1080p cards for their release-era games than the 1060 was for its, based on fps. It lasted seven years for us because we don't play all the newest games, for a few reasons:

too expensive, $20-30 next year is fine
gameplay not interesting for us
friends play other games
overabundance of "older" games with great gameplay with still good to great visuals

Same goes for the current 60-class cards. They will need upscaling help with some new releases, likely more each year, but they will also continue to perform very well for hundreds of great games released over the past 10 years. And to veer back on topic, I see no problem with AMD releasing a new series of GPUs that require dedicated hardware to achieve the best upscaling options, especially for the low-end cards, which IMO benefit the most. It looks like they tried with FSR 2 and 3, with OK to very good results, and if more dedicated compute is needed to keep up with DLSS, well, they'd better do that, hadn't they?
 