AMD CTO Mark Papermaster confirmed that AMD is working on a new upscaling technology that leverages AI. A key technological difference between AMD FSR and its competing solutions, NVIDIA DLSS and Intel XeSS, has been AMD's remarkable restraint in implementing AI in any part of the upscaler's pipeline. Unlike FSR, both DLSS and XeSS utilize AI DNNs to overcome temporal artifacts in their upscalers. AMD's Radeon RX 7000 series GPUs and Ryzen 7000 CPUs are the company's first with accelerators or ISA extensions that speed up AI workloads; and with the RX 7000 series capturing a sizable install base, AMD is finally turning to AI for the next generation of its FSR upscaling tech. Papermaster highlighted his company's plans for AI in upscaling technologies in an interview with No Priors.
To a question from No Priors on exploring AI for upscaling, Papermaster responded: "2024 is a giant year for us because we spent so many years in our hardware and software capabilities for AI. We have just completed AI-enabling our entire portfolio, so you know cloud, edge, PCs, and our embedded devices, and gaming devices. We are enabling gaming devices to upscale using AI and 2024 is a really huge deployment year." In short, Papermaster walked the interviewer through the two-step process by which AMD is getting into AI, with a hardware-first approach.
AMD spent 2022-23 introducing ISA-level AI enablement for Ryzen 7000 desktop processors and EPYC "Genoa" server processors. For notebooks, it introduced Ryzen 7040 series and 8040 series mobile processors with NPUs (dedicated AI acceleration), and gave its Radeon RX 7000 series RDNA 3 GPUs AI accelerators. Around this time, AMD also introduced the Ryzen AI stack for Windows PC applications leveraging AI for certain client productivity experiences. 2024 will see the company implement AI into its technologies, and Papermaster couldn't be more clear that a new-generation FSR that leverages AI is in the works.
70 Comments on AMD Working on an AI-powered FSR Upscaling Algorithm
But DirectSR should simplify integrating those different standards into games. All three use the same input data: motion vectors, depth info, etc., so developers should only need to expose those once to the DirectSR API, and then the vendor's solution takes over.
Should help with implementation, quality, and updates. But maybe I'm wrong, we'll see.
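The idea sketched above (the game exposes the common inputs once, and the vendor backend does the rest) can be illustrated with a toy interface. All names here (FrameInputs, Upscaler, PassthroughUpscaler) are hypothetical illustrations; the actual DirectSR interface is a D3D12 C++ API and looks nothing like this Python sketch.

```python
from dataclasses import dataclass

@dataclass
class FrameInputs:
    """The inputs all three upscalers consume, per the comment above."""
    color: bytes                  # low-resolution rendered frame
    motion_vectors: bytes         # per-pixel motion from the engine
    depth: bytes                  # depth buffer
    jitter: tuple[float, float]   # sub-pixel camera jitter for this frame

class Upscaler:
    """Vendor-agnostic interface; FSR/DLSS/XeSS would each implement upscale()."""
    def upscale(self, inputs: FrameInputs, out_w: int, out_h: int) -> bytes:
        raise NotImplementedError

class PassthroughUpscaler(Upscaler):
    # Stand-in "vendor" backend: simply returns the color buffer unchanged.
    def upscale(self, inputs: FrameInputs, out_w: int, out_h: int) -> bytes:
        return inputs.color

# The game fills FrameInputs once; the runtime selects whichever backend
# the vendor provides, and the game code never changes.
backend: Upscaler = PassthroughUpscaler()
frame = FrameInputs(color=b"frame", motion_vectors=b"mv", depth=b"z",
                    jitter=(0.25, -0.25))
result = backend.upscale(frame, 3840, 2160)
```

The design point is that swapping FSR for DLSS becomes a backend choice rather than three separate game-side integrations.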
F off with that crap, nobody wants this nonsense.
Edit: We've got game devs cranking system requirements up to 11, and we've got gamers who want smooth 4K gaming with ultra graphics. Gaming has become a very needy and expensive hobby for many. Very first world-ish, so to speak. Something has got to give.
Thankfully they'll probably all fold into DirectSR anyway, so in the end it's us gamers who will win as a whole over time.
This is all pretty simple: don't like it? Don't use it. About the expected level of saltiness and disingenuous arguments about upscaling that I've come to expect here. We want 4K ultra cards at bargain-bin prices and will only play games at "native," don't ya know!
One wonders if RDNA4 is getting tensor-type cores or an NPU?
And yes, more raw power will always be the solution, whether you use upscaling or not; it's just that upscaling gives you a tool to achieve your performance goal.
I totally understand why people might not want to use them, but they can still be beneficial. If they had very broad support, one could buy a low/mid-range GPU and still get a 4K monitor.
I'm a bit less keen on cases where the game is just poorly optimized. But for those poorly optimized games, we have to vote with our wallets, upscaling or not. We had them before upscaling, and we'd continue to have them even if upscaling went away.
With sharpening dragged to the max, it looks fairly acceptable, too.
I'm tempted to get back to some testing on my 6500 XT. I've kind of forgotten how much fun budget gaming can be. :ohwell:
But that is great. In the end, this is what you want from an upscaler. Like I said earlier, those upscalers should be standard in games, and you should always be able to select a render resolution lower than your output resolution.
Then you can get the benefits of having a 4K monitor for all the other stuff while still owning a low/mid range GPU or just an older one.
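The render-below-output idea the comments above describe boils down to a simple per-axis scale factor. As a minimal sketch, assuming the published FSR 2 quality-mode ratios (Quality 1.5x, Balanced 1.7x, Performance 2.0x, Ultra Performance 3.0x; DLSS and XeSS use broadly similar ratios), the internal resolution works out like this:

```python
# Per-axis scale factors as published for FSR 2's quality modes.
FSR2_SCALE = {
    "Quality": 1.5,
    "Balanced": 1.7,
    "Performance": 2.0,
    "Ultra Performance": 3.0,
}

def render_resolution(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    """Internal render resolution for a given output size and quality mode."""
    s = FSR2_SCALE[mode]
    return round(out_w / s), round(out_h / s)

# A 4K output in Quality mode renders internally at 1440p-class resolution,
# which is why a mid-range GPU can still feed a 4K monitor.
print(render_resolution(3840, 2160, "Quality"))      # (2560, 1440)
print(render_resolution(3840, 2160, "Performance"))  # (1920, 1080)
```

So "4K Quality" is roughly the GPU cost of native 1440p plus the upscaler's own overhead, which is the trade-off these comments are weighing.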
I wonder if Sony will have their own AI upscaler in the PS5 Pro.
Zero regrets with a 4K monitor too; the other benefits are immense, and upscaling is in its prime in this territory. There's a positive and negative spin for everything. I've been told I've overbought on the monitor relative to my GPU and am now making sacrifices, but I don't see it that way at all. I'd rather upscale 1440p to 4K than still be stuck on a 1440p display, and as we've seen, at 4K output upscaling can and often does produce better-than-native+mehTAA results. Also, OLED :)
It's the reason I stayed at 1440 UW with a 7900 XTX. If I were looking at 4K for the games I play and the settings I like to use, it would have pushed me to a 4090, and the extra $1200 I would have needed for that GPU would have bitten the wallet hard. So I'm happy where I'm at currently.
If you play older games, you can get away with 4K and something mid-range; many factors are in play when making these choices.
Comments à la "bring us more power, these upscalers are donkey dong" are nonsensical because, if we don't count some specific outliers like the 4060/6500 XT/etc., every gen is faster than the previous one by a very significant margin regardless of use case. We're getting both more speed and more upscaling, so 1080p gamers are happy because their 400-dollar GPUs are thriving at 100+ FPS in most games, and 4K gamers are also happy because they can use DLSS/FSR/XeSS and get their 60+ FPS with relative ease despite having a mid-three-figure USD GPU. And the image quality of 4K + DLSS at Quality is almost never noticeably worse than native. Only rarely is it worse (= eyes hurt) than native in the case of FSR and XeSS.
Badly optimised games? VRAM hogs? 20 FPS on 1500-dollar GPUs? Always has been a thing. Nobody thought of 50 FPS as a bad experience when I was a kid (late 90s, early 00s). Standards shifted.
Objectively, the main problems are:
• Artificial scarcity (= wares are more expensive than they would be under normal circumstances).
• Lack of SFF/ITX friendly GPUs as well as ridiculously growing PSU requirements.
• AMD is very far behind NVIDIA in everything that's not pure raster performance, and they don't seem to be trying to change that, ultimately leading to value stagnation.
Upscalers were thoroughly underwhelming 5 years ago. Today, they are fine. Tomorrow, I bet they will shine. Especially if AMD succeeds with AI in upcoming FSR versions (only possible in parallel universes, I'm afraid...).
As for the rest of your post, wholeheartedly agree :)
AMD has never been successful at either of these.