
Forspoken: FSR 3

Using the Performance preset: TAAU is a flickering mess; FSR has ghosting on particles and flicker in the foreground; DLSS has flickering in the back fence area; XeSS has noise under the tetris piece; and TSR flickers like DLSS in the back and like FSR in the front.
Don't know. I tested it myself and I'm betting on a bug in the menu. TSR and DLSS look identical to me.
 
Don't know. I tested it myself and I'm betting on a bug in the menu. TSR and DLSS look identical to me.

I'm seeing the same on closer inspection, so this could be a bug, or UE5's Temporal Super Resolution under the hood is the same thing as Nvidia DLSS ;> We need an Nvidia owner to show us the differences between TSR and DLSS.
 
Man, here I thought it'd be a fun weekend. Oh well, we still have to see how it looks on an Nvidia card, so there's still a chance.

It does make me wonder how it even got to showing up in the menu, that the engine had a failsafe to simply default to TSR, and that TSR looks as good as it does (at least with things like particles).
 
Sadly, it makes a lot more sense that the Talos devs would have prepared a DLSS implementation that's not ready, temporarily redirected it to TSR, and that's that. Menu fuckup rather than an actual PoC. Oh well.
Nvidia user plz confirm the difference between DLSS and TSR, if any?
Actually, if it is indeed just DLSS not being ready except as a menu option, then even on an Nvidia GPU it'll be the same result.

Best at this point would be to contact the devs and check with them. (and keep video evidence beforehand obviously :>)
 
Testing it myself on a Steam Deck, quality- and performance-wise it very much seems to be TSR. Considering that tensor cores are real and are actually used, this by far makes the most sense.
 
Seems you can fix FSR's ghosting and particle problems by adding

[SystemSettings]
r.FidelityFX.FSR2.ReactiveHistoryTranslucencyLumaBias=1

to {SteamLibrary}/steamapps/compatdata/2312690/pfx/drive_c/users/steamuser/AppData/Local/Talos2Demo/Saved/Config/Windows/Engine.ini (Windows users presumably only need the tail of the path, from AppData onward).
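If you'd rather not hand-edit the file, the tweak above can be scripted. A minimal sketch, assuming the Engine.ini path for your install (the example path in the comment is hypothetical; Proton users should substitute the compatdata prefix path from the post above):

```python
# Sketch: append the FSR2 reactive-history tweak to Engine.ini, once.
from pathlib import Path

SETTING = (
    "[SystemSettings]\n"
    "r.FidelityFX.FSR2.ReactiveHistoryTranslucencyLumaBias=1\n"
)

def apply_fsr2_tweak(engine_ini: Path) -> None:
    """Append the cvar unless it is already present in the file."""
    text = engine_ini.read_text() if engine_ini.exists() else ""
    if "ReactiveHistoryTranslucencyLumaBias" in text:
        return  # already applied, don't duplicate the section
    with engine_ini.open("a") as f:
        if text and not text.endswith("\n"):
            f.write("\n")
        f.write(SETTING)

# Example (hypothetical Windows location):
# apply_fsr2_tweak(Path.home() / "AppData/Local/Talos2Demo/Saved/Config/Windows/Engine.ini")
```

Running it twice is safe; the guard keeps the setting from being appended more than once.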

 
In other news, in The Talos Principle 2 Demo they forgot to software-lock Nvidia DLSS from working on AMD GPUs on Linux through Proton :>


Maybe TPU should investigate this mega fuck-up that basically confirms Nvidia DLSS doesn't use any "AI" or "Tensor Marketing Cores"

Probably just a display bug, but you can easily test it: compare it to FSR Performance, and also to TSR's lowest preset and the XeSS Performance preset. FSR2 is not exactly amazing even at 3840x1600, although there are some exceptions where it looks somewhat OK. I dunno what DLSS is supposed to look like, so only an NVIDIA user can tell if it's real or not.
 
I did some comparisons maxed out at native 4K. "DLSS", XeSS and TSR look identical, even when zoomed in, and all carry the same performance hit in terms of increased VRAM and GPU usage. TAAU and FSR look slightly different, and are lighter on the hardware. Both exhibit particle ghosting under the tetrominos.
 
In other news, in The Talos Principle 2 Demo they forgot to software-lock Nvidia DLSS from working on AMD GPUs on Linux through Proton :>


Maybe TPU should investigate this mega fuck-up that basically confirms Nvidia DLSS doesn't use any "AI" or "Tensor Marketing Cores"
So it turns out I'm not dumb after all, and all those r/nvidia screamers had no clue what I was talking about when I said it was written for Pascal and CUDA.

Just ran the demo with Arc A750. FSR vs DLSS vs XeSS vs TSR-

The Talos Principle 2 Demo - Imgsli

TSR and DLSS look the same. I don't know, man. Zoom in and decide for yourself.
Definitely not the same thing. Zooming in on details, some are sharper on one than the other and vice versa. Hard to say much from one image, though.
 
So it turns out I'm not dumb after all, and all those r/nvidia screamers had no clue what I was talking about when I said it was written for Pascal and CUDA.
Whatever the redditors say, remember that they're not thinking it up themselves.
Someone else was paid to think up something and shill it until they all repeated it.

Also I'd advise against r/nvidia and r/AMD. One is a cult house, the other's the cult house's dumpster.
Seriously, the number of people who only go on the latter to shit on AMD is insane. It's not even possible to say something positive. Then you notice that some of them spend their time on r/nvidia praising everything and on r/AMD shitting on everything, and you quickly understand what's going on.
 
So DLSS being available on AMD hardware in one game is just a menu bug and it's actually TSR..... Thank goodness everyone stayed calm and waited for it to be tested and verified before rampantly airing their biases. Oh wait. Just another day on TPU :rolleyes:
 
So DLSS being available on AMD hardware in one game is just a menu bug and it's actually TSR..... Thank goodness everyone stayed calm and waited for it to be tested and verified before rampantly airing their biases. Oh wait. Just another day on TPU :rolleyes:
Honestly, I just wanted it to be true because it would've been hilarious and given me some comedy for the weekend. Sadly, chances are it's not true, which just means the weekend won't be as hilariously chaotic as I would've liked.
 
I see, it's what makes sense (though I am surprised it shipped like this)

Oh well, at least my weekend is clear now. Also, TSR seems to have improved since I last saw it in action; that's good.
Seems you can fix FSR's ghosting and particle problems by adding

[SystemSettings]
r.FidelityFX.FSR2.ReactiveHistoryTranslucencyLumaBias=1

to {SteamLibrary}/steamapps/compatdata/2312690/pfx/drive_c/users/steamuser/AppData/Local/Talos2Demo/Saved/Config/Windows/Engine.ini (Windows users presumably only need the tail of the path, from AppData onward).

Why the hell don't more devs do this if it's as easy as changing one parameter? It's one of the weakest aspects of FSR, and it's that easy to mitigate? Man.
 
Demo was just updated and doesn't run anymore on Linux :>>
Talos2DemoUpdate.png

L.E.: After a Steam restart it's working again, but they removed DLSS and added a sharpness option:

Talos2DemoDLSSRemoved.png
 
So they implemented a fallback to TSR when choosing DLSS, but they didn't implement Nvidia's software locks.
You don't have any idea what you're talking about, I'm afraid. Every game developer gets DLSS files already compiled from NVIDIA; they don't compile anything themselves and don't implement any "software locks". The only thing devs can do is make a mistake in the game menu and allow DLSS to be selectable without an RTX card... but that doesn't make DLSS run on non-RTX hardware. And the Tensor requirement is real; you can check this with NVIDIA Nsight (sorry, not with a Radeon in the PC :roll:).
 
In theory it may be possible to translate the calls to something AMD GPUs understand. Yes, tensor cores are used for DLSS, but the same type of calculations can also be done on regular GPU cores.

That being said, the prospects of someone writing a translation layer are slim, and the performance may also not be great, as tensor cores are more efficient at it. But if you take Stable Diffusion as a benchmark, a 7900 XTX can keep up with a 3090 Ti.
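The point that tensor cores accelerate ordinary math rather than doing something unique can be sketched in a few lines. This is purely illustrative (not DLSS itself, nothing here is from NVIDIA's SDK): the multiply-accumulate op that matrix units batch in hardware is the same arithmetic any ALU can do, just with far less throughput.

```python
# Illustration: the matrix multiply-accumulate that tensor cores accelerate
# is plain arithmetic; a GPU shader core or even a CPU computes the same
# result, only slower than a dedicated matrix unit.

def matmul(a, b):
    """Triple-loop matrix multiply: the op matrix hardware batches."""
    n, k, m = len(a), len(b), len(b[0])
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            acc = 0.0
            for t in range(k):
                acc += a[i][t] * b[t][j]  # one multiply-add per step
            out[i][j] = acc
    return out

# 2x3 times 3x2 example
A = [[0, 1, 2], [3, 4, 5]]
B = [[0, 1], [2, 3], [4, 5]]
# matmul(A, B) -> [[10.0, 13.0], [28.0, 40.0]]
```

Whether a hypothetical translation layer could feed such math to RDNA's regular SIMDs fast enough for real-time upscaling is the open question; the math itself is not the barrier.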
 
In theory it may be possible to translate the calls to something AMD GPUs understand. Yes, tensor cores are used for DLSS, but the same type of calculations can also be done on regular GPU cores.
Sure it's theoretically possible but the amount of reverse engineering to make that happen would be insane.
 
You don't have any idea what you're talking about, I'm afraid. Every game developer gets DLSS files already compiled from NVIDIA; they don't compile anything themselves and don't implement any "software locks". The only thing devs can do is make a mistake in the game menu and allow DLSS to be selectable without an RTX card... but that doesn't make DLSS run on non-RTX hardware. And the Tensor requirement is real; you can check this with NVIDIA Nsight (sorry, not with a Radeon in the PC :roll:).

AMD provides lots of tools to analyze their GPUs: https://gpuopen.com/tools/

In theory it may be possible to translate the calls to something AMD GPUs understand. Yes, tensor cores are used for DLSS, but the same type of calculations can also be done on regular GPU cores.

All the Windows games I play on my PC are already translated to Vulkan by Proton, so everything can be made to work on any GPU.
 
AMD provides lots of tools to analyze their GPUs: https://gpuopen.com/tools/
But you can't run DLSS on Radeon GPUs, and Radeon GPUs don't have Tensor cores, so you can't check whether DLSS uses Tensors with a Radeon in your PC.
All the Windows games I play on my PC are already translated to Vulkan by Proton, so everything can be made to work on any GPU.
It's obvious DLSS could run on GPUs with matrix accelerators (RDNA 3 and Xe), but NVIDIA doesn't want that and won't make it work.
 
There you go, guys: tested on an AMD graphics card, the conclusion is similar:

Let's keep in mind that it's still in preview phase.
 
Well, now we'll never know what was or wasn't using DLSS on my RX 6800, since the latest demo update removed it from the game options.

Also, not having "Tensor Marketing Cores" doesn't mean the same math/matrix multiplications can't be run on other GPUs. I wonder how the world did matrix multiplications for AI before Nvidia invented the Marketing Cores :>
 
Also, not having "Tensor Marketing Cores" doesn't mean the same math/matrix multiplications can't be run on other GPUs. I wonder how the world did matrix multiplications for AI before Nvidia invented the Marketing Cores :>
It's always been possible on the CPU, just a lot slower, as far as I know. Kind of like physics processing.
 