
Does DLSS look and perform the same on different architectures?

As the title says, I'm curious whether DLSS works better on Ada than it does on, say, Turing.

Let's say we have some game that runs at 50 FPS on both a 4060 and a 2070 Super at, say, maxed-out non-RT 1440p without DLSS. Will the 4060's FPS be higher than the 2070 Super's if we enable DLSS? Will DLSS look better on the newer GPU?

If you're curious about FSR, the answer to the latter question is no, it looks identical on any GPU. Newer GPUs, however, tend to get a bigger boost from enabling FSR than older ones.

DLSS is built differently, and I don't have a collection of RTX cards to test it.

[Ada-exclusive features like Frame Generation are out of the equation; I'm purely talking about DLSS features available on ANY RTX GPU]
 
Theoretically it could perform better on RTX 4000 series GPUs, since their tensor cores are 4th gen. There is documentation of the H100, a GPU that uses 4th-gen tensor cores, performing better in tensor operations. It also depends on the tensor core count.
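Not a DLSS benchmark, but a quick way to see the per-architecture tensor-throughput gap in isolation: a minimal sketch, assuming PyTorch with CUDA, that times FP16 matmuls (which are dispatched to the tensor cores on RTX hardware). Run the same script on each card and compare.

```
import torch

def matmul_tflops(n=4096, iters=50):
    # Two random FP16 matrices; FP16 matmul runs on the tensor cores on RTX GPUs.
    a = torch.randn(n, n, device="cuda", dtype=torch.half)
    b = torch.randn(n, n, device="cuda", dtype=torch.half)
    torch.matmul(a, b)  # warm-up
    torch.cuda.synchronize()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        torch.matmul(a, b)
    end.record()
    torch.cuda.synchronize()
    seconds = start.elapsed_time(end) / 1000  # elapsed_time is in milliseconds
    return 2 * n**3 * iters / seconds / 1e12  # one n x n matmul is ~2*n^3 FLOPs

print(f"{matmul_tflops():.1f} TFLOP/s (FP16)")
```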
 
If you're curious about FSR, the answer to the latter question is no, it looks identical on any GPU.
"Looks identical" doesn't actually mean it is; there's no way two different GPUs will output something completely identical, even if the differences are minute. The same goes for DLSS. One also has to take into account the rest of the system, as the final output will vary depending on that as well.

It's an academic exercise though, and if you're playing the same game with the exact same settings on different setups, you're unlikely to notice major differences between cards of the same class/tier ~ the keyword being major!
 
"Looks identical" doesn't actually mean it is; there's no way two different GPUs will output something completely identical, even if the differences are minute. The same goes for DLSS. One also has to take into account the rest of the system, as the final output will vary depending on that as well.
I had a 1080 Ti and an RX 6700 XT. As different as it gets. I used both on the same display, which was calibrated to the exact same colours using professional equipment and help from a guy who knows his colours. The rest of the system also stayed identical.

However, we both failed to figure out which GPU looks better when using FSR in games. The 1080 Ti achieved a smaller performance uplift (50 to 73 FPS with FSR at Balanced, or +46%) than the RX 6700 XT (64 to 101 FPS with FSR at Balanced, or +58%), but neither showed more artifacts/ghosting/blurriness than the "competition."
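(For reference, those uplift percentages are just new/old - 1; a trivial snippet reproduces them from the numbers above:)

```
# Percentage uplift from base FPS to FSR-enabled FPS.
def uplift(base_fps, fsr_fps):
    return (fsr_fps / base_fps - 1) * 100

print(f"1080 Ti:    +{uplift(50, 73):.0f}%")   # +46%
print(f"RX 6700 XT: +{uplift(64, 101):.0f}%")  # +58%
```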
 
Was this with still images, or just hands-on experience during gameplay? Stills would probably be a better gauge of that.
We actually played the game, albeit extra slowly: just walking around the town and shooting stray cans and bottles. Stills, however, also proved identical or near-identical.
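For anyone wanting to go beyond eyeballing, a per-pixel comparison of two captures is easy to script. A minimal sketch, assuming Pillow and NumPy, with hypothetical file names standing in for same-scene, same-resolution screenshots from each GPU:

```
import numpy as np
from PIL import Image

# Hypothetical file names: the same scene captured on each GPU.
a = np.asarray(Image.open("fsr_gpu_a.png").convert("RGB"), dtype=np.int16)
b = np.asarray(Image.open("fsr_gpu_b.png").convert("RGB"), dtype=np.int16)

diff = np.abs(a - b)  # per-pixel, per-channel absolute difference
print("max channel delta: ", diff.max())
print("mean channel delta:", diff.mean())
print("pixels differing:  ", (diff.sum(axis=2) > 0).mean() * 100, "%")
```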
 
DLSS 2.x looks the same, but Ada has better hardware support for more features like DLSS 3 and Frame Gen.

And yeah, FSR looks identical on all cards; performance might differ a little, but not much. FSR still has too much shimmering and too many artifacts compared to DLSS/DLAA. This is typically why AMD users hate to hear about DLSS/DLAA. In many games even Intel XeSS beats FSR, so AMD users should always try XeSS as well, if it is present. RTX users can just enable DLSS/DLAA and play; it will be the best solution every time, for now. Let's hope AMD can improve FSR further.

There was a bad driver that allowed Frame Gen to be enabled on the 3000 series, but it was a stuttery mess because Ampere lacks an optical flow accelerator. You can google it. Meaning, Nvidia did not arbitrarily limit Frame Gen to the 4000 series; they added hardware which makes FG work well, and the 3000 series is not able to do it. That's why parts of DLSS 3 are not useful on the 3000 series, but they can still use DLSS 2.x, which is also still present in all new games (and this won't change).

And this dedicated hardware is the reason why Frame Gen works much better than AFMF. Probably also the reason why DLSS/DLAA easily beats FSR.
 
Beginner mistake: just play the game, check whether it's fun. Don't pixel-peep.
Flickering and upscaling quality are usually covered by a review somewhere.
 
DLSS 2.x looks the same, but Ada has better hardware support for more features like DLSS 3 and Frame Gen.

And yeah, FSR looks identical on all cards; performance might differ a little, but not much. FSR still has too much shimmering and too many artifacts compared to DLSS/DLAA. This is typically why AMD users hate to hear about DLSS/DLAA. In many games even Intel XeSS beats FSR, so AMD users should always try XeSS as well, if it is present. RTX users can just enable DLSS/DLAA and play; it will be the best solution every time, for now. Let's hope AMD can improve FSR further.

There was a bad driver that allowed Frame Gen to be enabled on the 3000 series, but it was a stuttery mess because Ampere lacks an optical flow accelerator. You can google it. Meaning, Nvidia did not arbitrarily limit Frame Gen to the 4000 series; they added hardware which makes FG work well, and the 3000 series is not able to do it. That's why parts of DLSS 3 are not useful on the 3000 series, but they can still use DLSS 2.x, which is also still present in all new games (and this won't change).

And this dedicated hardware is the reason why Frame Gen works much better than AFMF. Probably also the reason why DLSS/DLAA easily beats FSR.
This is not a "DLSS is leagues better than FSR and other obvious stuff" kinda thread.

just play the game, check whether it's fun.
Having fun in video games is illegal, m8. Unless you got a loicence...
 
DLSS 2.x looks the same, but Ada has better hardware support for more features like DLSS 3 and Frame Gen.

And yeah, FSR looks identical on all cards; performance might differ a little, but not much. FSR still has too much shimmering and too many artifacts compared to DLSS/DLAA. This is typically why AMD users hate to hear about DLSS/DLAA. In many games even Intel XeSS beats FSR, so AMD users should always try XeSS as well, if it is present. RTX users can just enable DLSS/DLAA and play; it will be the best solution every time, for now. Let's hope AMD can improve FSR further.

There was a bad driver that allowed Frame Gen to be enabled on the 3000 series, but it was a stuttery mess because Ampere lacks an optical flow accelerator. You can google it. Meaning, Nvidia did not arbitrarily limit Frame Gen to the 4000 series; they added hardware which makes FG work well, and the 3000 series is not able to do it. That's why parts of DLSS 3 are not useful on the 3000 series, but they can still use DLSS 2.x, which is also still present in all new games (and this won't change).

And this dedicated hardware is the reason why Frame Gen works much better than AFMF. Probably also the reason why DLSS/DLAA easily beats FSR.

Both Turing and Ampere have fully functional optical flow acceleration capability. Their performance is lower than Ada's, but to a questionable degree: DLSS Frame Generation is supported on a lowly, cut-down AD107 4050 mobile but unsupported on a full-die GA102 3090 Ti, and those are worlds apart in performance. I'm willing to bet that holds for OFA performance as well. So there's that.
 
Both Turing and Ampere have fully functional optical flow acceleration capability. Their performance is lower than Ada's, but to a questionable degree: DLSS Frame Generation is supported on a lowly, cut-down AD107 4050 mobile but unsupported on a full-die GA102 3090 Ti, and those are worlds apart in performance. I'm willing to bet that holds for OFA performance as well. So there's that.
True, but significantly improved in Ada over Ampere.

Frame Gen support was bypassed several times on both the 2000 and 3000 series, but neither generation was able to run it well; both had big FPS drops and stuttering, probably because the OFA is too slow, who knows.

It did prove one thing tho: Nvidia did not limit FG to the 4000 series for no reason.
 
Nvidia won't release their source code, so there is no way to know for sure, but the lack of transparency is an answer in and of itself.

It's an academic exercise though
But it's a useless one. a + b + c is not the same as a + c + b on any computer; of course nothing is literally the same, but for all intents and purposes the differences aren't noteworthy.
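That point is easy to demonstrate: reordering the same three floating-point values changes the rounded result.

```
a, b, c = 1e16, -1e16, 0.1
print(a + b + c)  # 0.1  (the two big values cancel first, so 0.1 survives)
print(a + c + b)  # 0.0  (0.1 is absorbed into 1e16 and lost to rounding)
```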
 
True, but significantly improved in Ada over Ampere.

Frame Gen support was bypassed several times on both the 2000 and 3000 series, but neither generation was able to run it well; both had big FPS drops and stuttering, probably because the OFA is too slow, who knows.

It did prove one thing tho: Nvidia did not limit FG to the 4000 series for no reason.
FSR 3 frame gen works great in some games on RTX GPUs; I tested it myself.
 
FSR 3 frame gen works great in some games on RTX GPUs; I tested it myself.
So have I. DLSS FG is still better: fewer artifacts, especially in motion, and way less shimmering with DLSS/DLAA instead of FSR as well.

Nvidia won't release their source code, so there is no way to know for sure, but the lack of transparency is an answer in and of itself.
Of course they won't. They are not stupid. Why should they use R&D funds to invent stuff and give it away for free? They are running a business.
 
Of course they won't. They are not stupid. Why should they use R&D funds to invent stuff and give it away for free? They are running a business.
This makes no sense, since they claim their implementation relies on their hardware; if they made the code available, it would help no one, as only they can make the hardware. Nvidia didn't invent ML upscaling; there is no secret sauce.

The reason they won't do it has nothing to do with "giving stuff away for free"; they just don't want people to know why and how features actually get locked.
 
This makes no sense, since they claim their implementation relies on their hardware; if they made the code available, it would help no one, as only they can make the hardware. Nvidia didn't invent ML upscaling; there is no secret sauce.

The reason they won't do it has nothing to do with "giving stuff away for free"; they just don't want people to know why and how features actually get locked.
What would the point be, when they won't release the chip design and teach people how to make RT/tensor cores too?

Yeah, there's secret sauce; that's why DLSS/DLAA beats FSR with ease and will continue to do so. TechPowerUp has tons of comparisons, and Nvidia takes the win every time.

DLSS FG also beats AFMF easily.

AMD is trying to combat hardware solutions with software solutions. I expect that will not be possible; let's see. AMD should invest heavily in FSR and AFMF if they want to compete.
 
What would the point be, when they won't release the chip design and teach people how to make RT/tensor cores too?

Yeah, there's secret sauce; that's why DLSS/DLAA beats FSR with ease and will continue to do so. TechPowerUp has tons of comparisons, and Nvidia takes the win every time.

DLSS FG also beats AFMF easily.

AMD is trying to combat hardware solutions with software solutions. I expect that will not be possible; let's see. AMD should invest heavily in FSR and AFMF if they want to compete.
Has it occurred to you that these software solutions, in both AMD's and Nvidia's cases, run on hardware? GPUs have been largely programmable for quite some time now.

You're delusional. The only secret sauce Nvidia has ever had is that they spend a lot more time on their software and on integration into games. The moment they stop funneling money into that is the moment the technology gets relegated to the sidelines, or becomes so common that the extra TLC no longer bears any fruit.

The latter is where DLSS/FSR/XeSS are headed. DLSS is just ahead of the pack, but it's not special or secret.
 
Has it occurred to you that these software solutions without proper hardware backing generally produce suboptimal results? There's a reason DLSS outperforms XeSS/FSR in image quality, and that is the hardware to do the extra processing required to achieve native-quality upscaling.

Call it stream processors, tensor cores, or compute units, etc.; without dedicated hardware backing it, it's hardly worth the effort. AMD is trying the tacked-on approach, and it's not going to pay off like DLSS, not until they get down to work and start engineering the hardware needed to make it work as well as their competitor's.

AMD is committing the typical keeping-up-with-the-Joneses folly: throwing out half-baked software solutions in an attempt to look like their competitor.

While DLSS/frame gen can work standalone, it uses a significant amount of resources, and it works significantly better when the game provides it cues and hints as to what the scene is going to contain, along with motion vectors and other hints; this results in a superior image. That's something FSR is physically incapable of doing as a driver-level post-scaler.
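To make that distinction concrete, here is a conceptual sketch (not any vendor's actual API, just the shape of the inputs) of what each approach gets to work with per frame:

```
from dataclasses import dataclass
import numpy as np

@dataclass
class EngineIntegratedInputs:
    """What an engine-integrated temporal upscaler receives each frame."""
    color: np.ndarray            # low-resolution color buffer
    depth: np.ndarray            # per-pixel depth
    motion_vectors: np.ndarray   # per-pixel screen-space motion from the game
    jitter: tuple[float, float]  # sub-pixel camera jitter for temporal accumulation

@dataclass
class DriverPostScalerInputs:
    """What a driver-level post-scaler receives each frame."""
    color: np.ndarray            # the finished frame only; motion must be guessed
```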

You're delusional. The only secret sauce Nvidia has ever had is that they spend a lot more time on their software and on integration into games. The moment they stop funneling money into that is the moment the technology gets relegated to the sidelines, or becomes so common that the extra TLC no longer bears any fruit.

The latter is where DLSS/FSR/XeSS are headed. DLSS is just ahead of the pack, but it's not special or secret.
WRONG. The Streamline SDK is free for anybody to use and supports integrating ALL 3: DLSS/XeSS/FSR.
Nvidia has put a lot of work into making their software stack accessible and easy to implement; they have also gone out of their way to make sure that implementing their competitors' software works.
You can freely switch between XeSS, FSR, and DLSS without making ANY code changes.
This is how you have mods translating DLSS <-> FSR, because NVIDIA did the work to make that possible.
 
As the title says, I'm curious whether DLSS works better on Ada than it does on, say, Turing.

Let's say we have some game that runs at 50 FPS on both a 4060 and a 2070 Super at, say, maxed-out non-RT 1440p without DLSS. Will the 4060's FPS be higher than the 2070 Super's if we enable DLSS? Will DLSS look better on the newer GPU?

If you're curious about FSR, the answer to the latter question is no, it looks identical on any GPU. Newer GPUs, however, tend to get a bigger boost from enabling FSR than older ones.

DLSS is built differently, and I don't have a collection of RTX cards to test it.

[Ada-exclusive features like Frame Generation are out of the equation; I'm purely talking about DLSS features available on ANY RTX GPU]
In theory the newer cards should perform better; in what respect is the question. Though maybe we should compare equivalent products higher in the product stack, to make sure they are not being starved of resources, you know, utilizing the improved tech to cut costs.
Any chance we have someone with time, money, and/or GPUs on their hands who is willing to test this?
 
Has it occurred to you that these software solutions, in both AMD's and Nvidia's cases, run on hardware? GPUs have been largely programmable for quite some time now.

You're delusional. The only secret sauce Nvidia has ever had is that they spend a lot more time on their software and on integration into games. The moment they stop funneling money into that is the moment the technology gets relegated to the sidelines, or becomes so common that the extra TLC no longer bears any fruit.

The latter is where DLSS/FSR/XeSS are headed. DLSS is just ahead of the pack, but it's not special or secret.

All their AI money will trickle down into gaming, and pretty much all new games have DLSS/DLAA, or can be modded easily to use it.
Nvidia owns the gaming, enterprise, and AI markets right now.

DLSS/DLAA is clearly way more advanced than FSR. Just like DLSS FG is more advanced than AFMF.

This is why Nvidia is ahead. Do I hope AMD will get FSR on par? Sure. But I doubt they are willing to blow that many R&D funds. Most of their funds are going into the CPU/APU sector and maybe AI GPUs. Gaming tho, I doubt they care much about it. They sit at like 10-15% market share here and dropping. AMD can't compete in the high end and even gets destroyed in the mid-range. Intel and AMD battle for the low-end market.

AMD has the console market (except the Switch, which is Nvidia), and their desktop GPU sales are mostly in the low to mid-range. Trying to stay relevant in the high-end space is a waste of money for them, since expensive AMD GPUs don't sell well.

AMD needs some good-value GPUs again, in the mid-range market, to regain market share. That's what the Radeon 8000 series is going to be. The best 8000 card, tho, will probably still beat the 7900 XT, hopefully at much lower power usage.
 
All their AI money will trickle down into gaming, and pretty much all new games have DLSS/DLAA, or can be modded easily to use it.
Nvidia owns the gaming, enterprise, and AI markets right now.

DLSS/DLAA is clearly way more advanced than FSR. Just like DLSS FG is more advanced than AFMF.

This is why Nvidia is ahead. Do I hope AMD will get FSR on par? Sure. But I doubt they are willing to blow that many R&D funds. Most of their funds are going into the CPU/APU sector and maybe AI GPUs. Gaming tho, I doubt they care much about it. They sit at like 10-15% market share here and dropping. AMD can't compete in the high end and even gets destroyed in the mid-range. Intel and AMD battle for the low-end market.

AMD needs some good-value GPUs again, in the mid-range market. That's what Radeon 8000 is going to be. The best 8000 card, tho, will probably still beat the 7900 XT, hopefully at much lower power usage.
Prove to me that all PC games released from 2023 to now have DLSS.
 
Prove to me that all PC games released from 2023 to now have DLSS.
Pretty much all AAA games I have played since 2021/2022 had DLSS/DLAA/Reflex, etc.

DLAA is what I mostly use, to push visuals far beyond native.

DLSS 2.0 back in 2020 was the eye-opener for me.

DLAA, aka the Ultra Quality preset, is the world's best AA solution.

DLSS/DLAA/FSR/XeSS are slowly but surely replacing regular AA solutions in new games. And this is why AMD should go all-in on this tech and forget about RT for now. FSR and AFMF are what they need to focus on.

I know it's hard to accept, but it's the route developers are going. Native-res gaming is dying, because it's inferior.
 
Both Turing and Ampere have fully functional optical flow acceleration capability. Their performance is lower than Ada's, but to a questionable degree: DLSS Frame Generation is supported on a lowly, cut-down AD107 4050 mobile but unsupported on a full-die GA102 3090 Ti, and those are worlds apart in performance. I'm willing to bet that holds for OFA performance as well. So there's that.

Obviously the 3000 series could do frame gen as well... it's just the typical Nvidia BS to sell new hardware.

That I wouldn't ever use frame gen, aka interpolation, is an entirely different matter though...

True, but significantly improved in Ada over Ampere.

Frame Gen support was bypassed several times on both the 2000 and 3000 series, but neither generation was able to run it well; both had big FPS drops and stuttering, probably because the OFA is too slow, who knows.

It did prove one thing tho: Nvidia did not limit FG to the 4000 series for no reason.

Oh no, they absolutely did it for a reason... to sell more 4000 series cards...
 
From my testing across Turing and Ampere, I've not noticed any quality differences between the architectures, which tracks, as the algorithm is the same.

I've noticed different performance uplifts, though. For example, 2080 vs. 3080: the 3080 has considerably higher overall tensor performance and can upscale a given frame size faster. Nvidia even published a list of cards and how long they take to upscale images to common resolutions.

I've not done exhaustive, measured testing on it, mind you, but in like-for-like situations I believe I've seen Ampere generate frames faster than Turing, essentially giving a higher FPS uplift for a given frame to be rendered.

It should be relatively straightforward to do quantitative testing of this by measuring frame times.
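A minimal sketch of how that could be done, assuming per-frame times exported to CSV from a capture tool such as PresentMon/CapFrameX (the file names and column name here are placeholders):

```
import csv
import statistics

def mean_frametime_ms(path):
    # Average the per-frame present times from a capture CSV.
    with open(path, newline="") as f:
        return statistics.mean(float(row["MsBetweenPresents"])
                               for row in csv.DictReader(f))

native = mean_frametime_ms("native_1440p.csv")
dlss = mean_frametime_ms("dlss_quality.csv")

# DLSS Quality renders ~0.667x per axis (~44% of the pixels). If render cost
# scaled perfectly with pixel count, the leftover time approximates the fixed
# upscale overhead, which is where Turing vs. Ampere should differ.
overhead = dlss - native * 0.667**2
print(f"native {native:.2f} ms, DLSS {dlss:.2f} ms, est. upscale cost {overhead:.2f} ms")
```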
 
Obviously the 3000 series could do frame gen as well... it's just the typical Nvidia BS to sell new hardware.

That I wouldn't ever use frame gen, aka interpolation, is an entirely different matter though...



Oh no, they absolutely did it for a reason... to sell more 4000 series cards...

Nah, the 2000 and 3000 series can't do it properly. This has been tested; simply google it. It was accidentally unlocked in a driver/game, and people tested it. Frame Gen was stuttering and unstable when enabled on the 2000/3000 series.

They can use AFMF, but it's a lot worse than Frame Gen: a lot more artifacts and shimmering, plus ghosting issues, especially when moving.

Nvidia's features ACTUALLY WORK because of the dedicated hardware.
 