Tuesday, March 7th 2023

AMD Could Tease DLSS 3-rivaling FSR 3.0 at GDC 2023

AMD could tease its next-generation graphics performance enhancement, a rival to NVIDIA's DLSS 3, at the 2023 Game Developers Conference (GDC 2023), slated for March 23. While the company didn't name it, its GDC 2023 session brief references an "exciting sneak peek of new FidelityFX technologies" that will be "available soon," meaning it isn't the recently released FSR 2.2. We expect this to be the very first look at FSR 3.0.

AMD hastily dropped the first mention of FSR 3.0 into its Radeon RX 7900 series RDNA3 announcement presentation (slide below). The company revealed precious few details about the new technology beyond the claim that it offers double the frame rate of FSR 2 (at comparable image quality). Does this involve a frame-rate doubling technology similar to DLSS 3? We don't know yet. It could just be a more advanced upscaling algorithm that doubles performance at a given quality target compared to FSR 2. We'll know for sure later this month. It would be a coup of sorts for AMD if FSR 3.0 doesn't require RX 7000-series GPUs and can run on older Radeon GPUs, whereas DLSS 3 requires the latest GeForce RTX 40-series GPUs.
Sources: Lance Lee (Twitter), VideoCardz

70 Comments on AMD Could Tease DLSS 3-rivaling FSR 3.0 at GDC 2023

#26
AusWolf
evernessinceYep, the technology is kind of pointless unless they figure out a way to generate the next frame and get rid of the latency hit.
That is impossible as long as you generate the extra frames without taking user input into account, which is what the whole idea of frame generation is about.
#27
duladrop
mb194dcFSR and DLSS are just fancy ways of reducing image quality or making other sacrifices to get performance?

You can probably just tweak game graphics settings slightly for similar results in most titles.
I'm only basing this opinion on one of Moore's Law Is Dead's guests, a game developer. He put it roughly like this: it will probably make devs lazy, but it also makes it easy for them to develop games that are optimized from the get-go, and if they have to adopt it and do it natively, they're fine with that too. But he pointed out that we're now at a stage where players' preference for graphics quality is what matters, and that requires an atrocious amount of geometry processing; doing it natively demands a very expensive GPU like the 4090.

And if their game only runs on that GPU, how the hell can they sell it when only a handful of gamers can afford one? That's where the beauty of upscaling like FSR and DLSS comes in handy. In a sense, these technologies save us money, letting us buy just a mid-tier card instead of stretching our purses for a 4080 or 4090. At the same time they help the gaming industry, letting developers ship upscaled games with FSR/DLSS while still capitalizing on raw hardware specs like DisplayPort 2.1 bandwidth and the sharpness of 4K/8K resolution to reach that quality at upscaled settings.

The point here: if you have a 1080p or 2K monitor, run games at native resolution; if you have a 4K monitor, then for the love of games run them with FSR/DLSS, so you can enjoy the sharpness of your 4K resolution while playing at high FPS. I don't have a 4K monitor, but with that much sharpness it might be hard to pinpoint the rough edges you can see clearly on a 1080p monitor.

I'm not fluent in English; my apologies if my grammar is bad here.
#28
Space Lynx
Astronaut
evernessinceThe bar is not high to beat DLSS 3.0.

Yep, the technology is kind of pointless unless they figure out a way to generate the next frame and get rid of the latency hit.
I was thinking more about frame generation when I made that comment.
#29
EatingDirt
duladropNot necessarily accurate; you're only taking DLSS 3's implementation of frame generation into account. In some ways AMD admits it is somewhat like frame generation, but the way they implement their AI processing is different. It will only become clear when they truly unveil their FSR 3.
It is accurate. Frame generation will never decrease input latency because you're not actually interacting with the frames the frame generation is producing.

Latency is increased slightly, though typically not by a noticeable amount (around 5-10 ms), because it's an extra step, in this case inserting frames, that the GPU has to do.
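To put rough numbers on that, here's a minimal sketch of the mechanism. It assumes an interpolation-style generator that has to hold each real frame until the next one is rendered, and the 3 ms interpolation cost is a made-up figure; the real DLSS 3/FSR 3 pipelines aren't public in this detail.

```python
# Simplified model of interpolation-style frame generation (an
# assumption about how such pipelines work in general, not a
# description of NVIDIA's or AMD's actual implementation).
# The generator buffers real frame N until frame N+1 is rendered,
# then shows an interpolated frame between them, so added latency
# is roughly one real frame-time plus the interpolation cost.

def frame_gen_latency(base_fps: float, interp_cost_ms: float = 3.0):
    """Return (displayed FPS, added latency in ms); the 3 ms cost is assumed."""
    real_frame_ms = 1000.0 / base_fps
    added_latency_ms = real_frame_ms + interp_cost_ms
    displayed_fps = base_fps * 2  # one generated frame per real frame
    return displayed_fps, added_latency_ms

for fps in (30, 60, 120):
    shown, lat = frame_gen_latency(fps)
    print(f"{fps:>3} FPS base -> {shown:>3.0f} FPS shown, ~{lat:.1f} ms extra latency")
```

Under this model the penalty shrinks as the base frame rate rises (roughly 36 ms at 30 FPS, 11 ms at 120 FPS), which lines up with both the 5-10 ms figure above and the larger hits reported at low frame rates later in the thread.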
#30
JAB Creations
ChomiqIf it's not limited to AMD only cards I'm all in.
Yeah, you should buy Nvidia instead because DLSS works great on my RX 6800. :roll:
#31
evernessince
AusWolfThat is impossible as long as you generate the extra frames without taking user input into account, which is what the whole idea of frame generation is about.
Yes, impossible with the way game engines are currently designed. You'd need access to a bunch of data in addition to user inputs that would only be available once the CPU completes another main game loop.
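As a toy illustration of why that is (a generic sketch, not any real engine's loop): a real frame reflects player input only after a full simulate-then-render pass on the CPU, while a generated frame is blended purely from two frames that already exist.

```python
# Toy game loop (a generic sketch, not any real engine): a "real"
# frame reflects player input only after a full simulate -> render
# pass on the CPU; a generated frame is a pure blend of two frames
# that already exist, so it can never contain newer input.

def simulate(position: float, input_velocity: float, dt: float) -> float:
    return position + input_velocity * dt  # player input applied here

def render(position: float) -> dict:
    return {"player_x": position}

def interpolate(frame_a: dict, frame_b: dict, t: float = 0.5) -> dict:
    # No input polling and no simulation happen here; whatever the
    # player pressed between frame_a and frame_b cannot show up.
    return {"player_x": (1 - t) * frame_a["player_x"] + t * frame_b["player_x"]}

pos = 0.0
frame_a = render(pos)
pos = simulate(pos, input_velocity=5.0, dt=1 / 60)  # one CPU game-loop tick
frame_b = render(pos)
print(interpolate(frame_a, frame_b))  # halfway between two old states
```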

It's likely a better approach to reduce the CPU overhead associated with the GPU; that way you can generate real frames instead of fake, latent frames.
EatingDirtIt is accurate. Frame generation will never decrease input latency because you're not actually interacting with the frames the frame generation is producing.

Latency is increased slightly, though typically not by a noticeable amount (around 5-10 ms), because it's an extra step, in this case inserting frames, that the GPU has to do.
The latency hit has been in excess of 30 ms in some reviews. As HardwareUnboxed points out, the latency hit is noticeable if your initial FPS is too low, and it doesn't make sense to enable it at all if your FPS is already high. I don't remember the exact sweet-spot numbers, but I believe you want to be between 70 FPS and 120 FPS for the benefits to outweigh the cons.
#32
matar
Let's hope this move by AMD will push Nvidia to let the RTX 3000 series use DLSS 3.0.
#33
duladrop
EatingDirtIt is accurate. Frame generation will never decrease input latency because you're not actually interacting with the frames the frame generation is producing.

Latency is increased slightly, though typically not by a noticeable amount (around 5-10 ms), because it's an extra step, in this case inserting frames, that the GPU has to do.
When I commented on FSR 3, I don't believe I mentioned anything about latency, only FPS performance. Since you brought it up: yeah, you have a point there, but how RDNA 3's AI accelerators are used in Fluid Motion Frames with respect to the latency penalty is yet to be seen in games. With FSR 3 + Fluid Motion Frames + HYPR-RX, it seems like they are considering all the factors to boost the performance of their RDNA 3 cards.
#34
trsttte
duladropHe put it roughly like this: it will probably make devs lazy, but it also makes it easy for them to develop games that are optimized from the get-go
There's an easier way around this that most companies are already using or jumping on: just use an established engine (usually Unreal Engine, but there are other options; Decima from Guerrilla Games seems pretty nice, for example) instead of reinventing the wheel over and over.

It's a compromise between doing new things all the time and meeting basic quality goals without major sacrifices elsewhere. Given how the quality of new releases has been decreasing further and further without being particularly innovative, I'd say it won't be that much of a sacrifice for a while.
#35
Minus Infinity
AMD should make it so FSR 3 leverages Tensor cores; then developers could abandon DLSS, since they would use AMD's open-source FSR and get accelerated performance on both NVIDIA and AMD cards. It could also leverage whatever Intel uses in Alchemist. Having to support three different upscaling technologies must be a total PITA.
#36
Fluffmeister
Very nice, I look forward to this potentially doubling my performance in games that use FSR 2, thanks AMD and their investors!
#37
Dirt Chip
The way AMD tails after NV with these "upscaling" features (FSRx/DLSSx) just shows why NV can charge more for their products.
Everyone agrees it's a must-have thing, and the game is about who has more of it, at what quality (and with how much game adoption).
Good luck to both, as Arc still stands shy in the corner, yet to enter the big boys' fight.
#38
BoboOOZ
trsttteThere's an easier way around this that most companies are already using or jumping on: just use an established engine (usually Unreal Engine, but there are other options; Decima from Guerrilla Games seems pretty nice, for example) instead of reinventing the wheel over and over.

It's a compromise between doing new things all the time and meeting basic quality goals without major sacrifices elsewhere. Given how the quality of new releases has been decreasing further and further without being particularly innovative, I'd say it won't be that much of a sacrifice for a while.
Nvidia has great marketing. They know that if they did the same thing as everybody else, they would end up having to compete on price. Instead, they keep implementing new non-standard stuff, which gives them the ability to constantly move the goalposts and keep the competition scrambling behind.
Yes, the users get slightly less performance in the end, but that's not what matters; what matters is that they are prepared to pay more for innovative Nvidia products. Brilliant.
#39
umeng2002
DLSS 3 works well. In fact, it's probably better than DLSS 2. Hopefully AMD's answer won't be far behind nVidia's.
#40
Denver
The Million Dollar Question: Will AMD expose Nvidia's lies once again by showing a solution that doesn't require dedicated hardware (ASIC) and AI? :P
#41
AusWolf
evernessinceThe latency hit has been in excess of 30 ms in some reviews. As HardwareUnboxed points out, the latency hit is noticeable if your initial FPS is too low, and it doesn't make sense to enable it at all if your FPS is already high. I don't remember the exact sweet-spot numbers, but I believe you want to be between 70 FPS and 120 FPS for the benefits to outweigh the cons.
And that, in my opinion, is what makes frame generation useless. I don't need more performance when the game already runs above 70 FPS, and I most definitely don't want more latency when it doesn't.
#42
ratirt
I'm not a fan of the FG feature, and I hardly think I ever will be. You can always gain some FPS by adjusting settings. Plus, there are those bugs and image-quality problems with the generated frames. I only hope this feature won't make companies produce far less powerful GPUs for a quick buck and mitigate the low-FPS problem with the feature. If it's there to improve the experience, sure, but I hope we won't get to the point where we rely on it no matter what hardware we get.
#43
wolf
Performance Enthusiast
Vayra86Yeah I didn't quite understand the whole point of FG in the first place.
Just say it's not for you, because surely you understand the point, if you've seen it with your own eyes at least. It looks more fluid at broadly the same latency as before it was enabled, and entirely reasonable people who have used it give it some merit. It has a point, i.e.:
W1zzardI upgraded my work PC to RTX 4080, and have been playing with FG on for the last few hours and it's just absolutely stunning. No issues or anything, just double the FPS. I am constantly hoping to find issues to report, but nothing
Don't get me wrong, it's far from perfect and I understand the criticism, nothing is above constructive criticism, but the feature has merit, at least AMD agrees...
#44
AusWolf
wolfJust say it's not for you, because surely you understand the point, if you've seen it with your own eyes at least. It looks more fluid at broadly the same latency as before it was enabled, and entirely reasonable people who have used it give it some merit. It has a point, i.e.:
Sure, but is it that great below 60 FPS as well? I think we'll see how great FG really is when even capable graphics cards run the newest games below acceptable frame rates. If it makes 60 out of 30 without added latency, I'll agree that it's great. Making 200 FPS out of 100 is snake oil territory for me.
wolfDon't get me wrong, it's far from perfect and I understand the criticism, nothing is above constructive criticism, but the feature has merit, at least AMD agrees...
It's not about agreeing. It's about following trends to stay competitive.
#45
Vayra86
wolfbut the feature has merit, at least AMD agrees...
But for what purpose? That is the real question I'm talking about.
#46
BoboOOZ
Vayra86But for what purpose? That is the real question I'm talking about.
For influencers, testers, and the general public, which is more than 90% of the market. You must realize that the average tech-savvy TechPowerUp forumite is at least in the top 5% of the population, understanding-wise. So Nvidia's marketing doesn't work on you; well, they still win over the other 95% of the population, and they force AMD to react and scramble instead of innovate, because AMD is also interested in that larger market.
#47
wolf
Performance Enthusiast
AusWolfMaking 200 FPS out of 100 is snake oil territory for me.
I mean, sure, that's one example of the useful range; from what I've seen, it's excellent at turning 50-70 FPS into 80-120 FPS, and it feels and looks fantastic.

I'd highly recommend finding a way to try it for yourself, and hey, you might still think it's snake oil after, but it's the only real way to get a sense of it. I found it very impressive.
#48
AusWolf
wolfI mean, sure, that's one example of the useful range; from what I've seen, it's excellent at turning 50-70 FPS into 80-120 FPS, and it feels and looks fantastic.

I'd highly recommend finding a way to try it for yourself, and hey, you might still think it's snake oil after, but it's the only real way to get a sense of it. I found it very impressive.
That's what I mean: the 50-70 FPS range is fluid enough for me not to want anything more. I play with a driver-level 60 FPS lock anyways. Freesync kicks in at 48 Hz/FPS on my monitor, so I'm not even bothered by minor fluctuations. When we can test how FG makes 60 FPS out of 30, I'll be interested enough to form a more elaborate opinion about it. Until then, it's snake oil (imo).
#49
wolf
Performance Enthusiast
AusWolfThat's what I mean: the 50-70 FPS range is fluid enough for me not to want anything more. I play with a driver-level 60 FPS lock anyways. Freesync kicks in at 48 Hz/FPS on my monitor, so I'm not even bothered by minor fluctuations. When we can test how FG makes 60 FPS out of 30, I'll be interested enough to form a more elaborate opinion about it. Until then, it's snake oil (imo).
Well yeah, if as a gamer all you want is 1080p60 without visual bells and whistles (like RT), then I doubt FG of any flavour is for you. Having tried it, I see it as a great little piece of tech that essentially has no downsides; sure, it doesn't improve latency AND visual fluidity, but just one of the two is still a net benefit to the experience, and as you know from me, high fidelity and high framerates are right up my alley.

Really keen to see if AMD can pull a rabbit out of a hat on this one. It took a minute, but they basically did with FSR 1.0 and 2.X, all things considered.
#50
ratirt
wolfI mean, sure, that's one example of the useful range; from what I've seen, it's excellent at turning 50-70 FPS into 80-120 FPS, and it feels and looks fantastic.

I'd highly recommend finding a way to try it for yourself, and hey, you might still think it's snake oil after, but it's the only real way to get a sense of it. I found it very impressive.
I don't think you understand the problems and doubts people are raising here. It's not about trying it and loving it; it's a feature that gives gamers something to enjoy. For one thing, you talk as if there were no other way to get better FPS. For another, you argue that people who haven't tried it shouldn't speak about it or raise concerns. I haven't tried it, but my concern isn't how it works or performs (it has flaws, but those will surely be addressed). I only hope your 50-70 FPS turning into 80-120 won't be the future of top-end graphics cards: forced to use these technologies to achieve that performance while still costing horrendous money (as we see now), regardless of the card's actual performance. In my eyes, that would be a disaster no matter how great FG, or DLSS 3 in this case, is.

My second concern is whether DLSS 3 is going to stay and become a standard for future cards. That's very important from a consumer perspective, since you pay for it. If new cards bring DLSS 4 or some other preliminary tech that never becomes a standard, it creates a support problem: devs are more keen to support tech that is currently in use, where lots of people own devices that support it. This is also connected to NV's strategy, which honestly I can't figure out: releasing new tech like DLSS 3 that isn't supported on older cards while DLSS 2 is still around, with a proprietary DLSS 4 or 5 maybe coming. It seems NV doesn't know what it wants to achieve, and as always, the consumer will pay for their experiments. This wouldn't have been a problem if all DLSS variations were limited to NV but any recently bought card could use the tech. That's not the case, though, and something tells me DLSS 4 will be limited to the latest generation as well (if it's released at all).

BTW: Just because you haven't experienced or seen something doesn't mean you know nothing about it. We have brains to think with as well, not just to experience things.