
Intel's DLSS-rivaling AI-accelerated Supersampling Tech is Named XeSS, Doubles 4K Performance

Joined
Mar 20, 2019
Messages
556 (0.30/day)
Processor 9600k
Motherboard MSI Z390I Gaming EDGE AC
Cooling Scythe Mugen 5
Memory 32GB of G.Skill Ripjaws V 3600MHz CL16
Video Card(s) MSI 3080 Ventus OC
Storage 2x Intel 660p 1TB
Display(s) Acer CG437KP
Case Streacom BC1 mini
Audio Device(s) Topping MX3
Power Supply Corsair RM750
Mouse R.A.T. DWS
Keyboard HAVIT KB487L / AKKO 3098 / Logitech G19
VR HMD HTC Vive
Benchmark Scores What's a "benchmark"?
So, now we have three proprietary upscaling technologies, with one being less proprietary but fairly primitive. I don't think this kind of segmentation will last long, since developers don't want to limit their target audience and certainly don't want to spend money implementing three separate technologies to achieve a single goal. So now:
- If the deciding factor is financial incentives for developers, nVidia will win.
- If it's ease of implementation, FSR will win.
- If it's performance, Intel has the upper hand, if their first-party benchmarking is to be believed. But they will really have to make it into a polished product, since they have zero market share and brand recognition as far as dedicated GPUs go.

Personally I think if AMD can create FSR 2.0 with improved quality and performance AND make it hardware agnostic, it will be a clear winner for developers and consumers. As much as I admire the complexity and elegance of AI, the mass market works on the KISS principle.
 
Joined
Jan 8, 2017
Messages
8,944 (3.35/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
FSR is not temporal; it's regular downscaling+upscaling with sharpening filters slapped together. That's why it's so bad, especially in motion.

You do realize this is literal nonsense, right? It's the temporal solutions that look horrible in motion, because they blend information from past frames into the current frame. Spatial upscaling introduces zero temporal artifacts, because... it's not temporal.

Can't believe I have to explain this.
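For what it's worth, here's a toy sketch of the distinction (1-D lists standing in for image rows; purely illustrative, not FSR's or any vendor's actual code):

```python
# Toy contrast between spatial upscaling and temporal blending.
# 1-D lists stand in for image rows; this is not any vendor's real algorithm.

def spatial_upscale(frame):
    """2x upscale by linear interpolation, using ONLY the current frame:
    no history is read, so no temporal artifacts are possible."""
    out = []
    for i in range(len(frame) - 1):
        out.append(frame[i])
        out.append((frame[i] + frame[i + 1]) / 2)  # midpoint sample
    out.append(frame[-1])
    return out

def temporal_blend(history, current, alpha=0.1):
    """Exponential blend with the previous frame: reuses old data, so a
    moving edge drags stale history into the new frame (ghosting)."""
    return [alpha * c + (1 - alpha) * h for h, c in zip(history, current)]

prev_frame = [0.0, 0.0, 1.0, 1.0]  # bright edge on the right
curr_frame = [1.0, 1.0, 0.0, 0.0]  # edge has moved left this frame
print(spatial_upscale(curr_frame))             # reflects only the current frame
print(temporal_blend(prev_frame, curr_frame))  # [0.1, 0.1, 0.9, 0.9]: ghost
```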

So, now we have three proprietary upscaling technologies, with one being less proprietary but fairly primitive.
It's not "less proprietary, it's competently open.

- If the deciding factor is financial incentives for developers, nVidia will win.
This isn't about money, it's about time and effort. DLSS takes a lot of time to implement. I have some insight into the game development world, and I can tell you that developers have to work for months to get DLSS even remotely close to working properly, and it's always a side project because it's just not that important to them. FSR, on the other hand, is pretty much a couple of days' worth of work. If it weren't for Nvidia's sponsorship campaigns, I bet most studios wouldn't even think about using it.
 
Last edited:
Joined
Feb 3, 2017
Messages
3,481 (1.32/day)
Processor R5 5600X
Motherboard ASUS ROG STRIX B550-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-2666 VLP @3800
Video Card(s) EVGA Geforce RTX 3080 XC3
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Corsair Ironclaw Wireless RGB
Keyboard Corsair K60
VR HMD HTC Vive
You do realize this is literal nonsense, right? It's the temporal solutions that look horrible in motion, because they blend information from past frames into the current frame. Spatial upscaling introduces zero temporal artifacts, because... it's not temporal.
At the same time, the lack of a temporal component introduces the potential for shimmering, which FSR shows in practice. Also, past frames introduce additional data to use for frame reconstruction, which helps the end result.
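A toy illustration of both halves of that trade-off (made-up numbers, not any shipping implementation):

```python
# Why purely spatial output can shimmer: a sub-pixel-thin feature is either
# hit or missed each frame. Accumulating jittered samples over time converges
# on its true coverage instead. Toy numbers only.

def point_sample(offset):
    """Point-sample a thin bright line covering 30% of the pixel."""
    return 1.0 if 0.0 <= offset < 0.3 else 0.0

jitters = [0.0, 0.125, 0.25, 0.375, 0.5, 0.625, 0.75, 0.875]  # per-frame jitter

per_frame = [point_sample(j) for j in jitters]
print(per_frame)                         # [1.0, 1.0, 1.0, 0.0, ...] -> the pixel blinks
print(sum(per_frame) / len(per_frame))   # 0.375, close to the true 0.3 coverage
```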
 
Joined
Mar 20, 2019
Messages
556 (0.30/day)
Processor 9600k
Motherboard MSI Z390I Gaming EDGE AC
Cooling Scythe Mugen 5
Memory 32GB of G.Skill Ripjaws V 3600MHz CL16
Video Card(s) MSI 3080 Ventus OC
Storage 2x Intel 660p 1TB
Display(s) Acer CG437KP
Case Streacom BC1 mini
Audio Device(s) Topping MX3
Power Supply Corsair RM750
Mouse R.A.T. DWS
Keyboard HAVIT KB487L / AKKO 3098 / Logitech G19
VR HMD HTC Vive
Benchmark Scores What's a "benchmark"?
This isn't about money, it's about time and effort. DLSS takes a lot of time to implement. I have some insight into the game development world, and I can tell you that developers have to work for months to get DLSS even remotely close to working properly, and it's always a side project because it's just not that important to them. FSR, on the other hand, is pretty much a couple of days' worth of work. If it weren't for Nvidia's sponsorship campaigns, I bet most studios wouldn't even think about using it.
So, it's about money. Implementation time is money. DLSS takes time to be done right, so it costs money. FSR is easier, so it costs less, but not zero. Hence, if nVidia can offset the cost of DLSS with financial or marketing incentives, they will win. Everything is always about money, especially in the entertainment industry, where money is elevated to the level of a deity.
 
Joined
Jan 8, 2017
Messages
8,944 (3.35/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
FSR is easier, so it costs less, but not zero.

No, it really is effectively nothing in terms of cost, unless you can point to something that I am missing here.

Hence, if nVidia can offset the cost of DLSS with financial or marketing incentives, they will win.
But this is what you don't understand: studios stand to gain nothing from this. They waste development time on something that can only be used on one platform, by one particular subgroup of that customer base. When they have to choose between getting DLSS working and getting the game working, you can be sure they'll choose the latter no matter what the marketing incentives are. That's why, in the few games that use DLSS, support for it was usually added post-launch.
 
Joined
Mar 20, 2019
Messages
556 (0.30/day)
Processor 9600k
Motherboard MSI Z390I Gaming EDGE AC
Cooling Scythe Mugen 5
Memory 32GB of G.Skill Ripjaws V 3600MHz CL16
Video Card(s) MSI 3080 Ventus OC
Storage 2x Intel 660p 1TB
Display(s) Acer CG437KP
Case Streacom BC1 mini
Audio Device(s) Topping MX3
Power Supply Corsair RM750
Mouse R.A.T. DWS
Keyboard HAVIT KB487L / AKKO 3098 / Logitech G19
VR HMD HTC Vive
Benchmark Scores What's a "benchmark"?
No, it really is effectively nothing in terms of cost, unless you can point to something that I am missing here.


But this is what you don't understand: studios stand to gain nothing from this. They waste development time on something that can only be used on one platform, by one particular subgroup of that customer base. When they have to choose between getting DLSS working and getting the game working, you can be sure they'll choose the latter no matter what the marketing incentives are. That's why, in the few games that use DLSS, support for it was usually added post-launch.
You seem to be somewhat out of touch with economics. Let's say, for the sake of argument, that FSR is incredibly, unreasonably easy, to the point that it takes a day to implement in a complex product. One day of work, times a hundred developers who either have to do it or at least read the documentation to maintain the code, and you have a million dollars of costs. If you tell management to either spend a million on a feature some people will use, or spend five million on another version of that feature but get some of it returned as cash or marketing, guess what management will choose? Games aren't made by enthusiasts; they are made by accountants.
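The arithmetic in that post, spelled out (the per-day rate is simply whatever makes those figures work; it's an assumption, not a sourced industry number):

```python
# Back-of-envelope version of the argument above. All figures are the post's
# own hypotheticals; the implied $10k/dev-day loaded rate is an assumption.
devs = 100            # people who implement it or must understand the code
rate = 10_000         # implied loaded cost per developer-day (assumption)
fsr_cost  = devs * 1 * rate    # "a day to implement" -> $1,000,000
dlss_cost = devs * 5 * rate    # 5x the effort        -> $5,000,000
print(fsr_cost, dlss_cost)     # 1000000 5000000
```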
 
Joined
Aug 23, 2013
Messages
549 (0.14/day)
This is why we need a third player so badly. Let's just hope that their solution is an actual competitor to DLSS and is as good as they claim, because we desperately need one to put pressure on Nvidia.
 
Joined
Jan 8, 2017
Messages
8,944 (3.35/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
Joined
Jul 20, 2021
Messages
28 (0.03/day)
This infographic is confusing. If it's an upscaler and it improves performance, shouldn't frametime be lower rather than higher with it enabled? :confused::confused::confused:
What are you confused about? The frametime goes down with XeSS enabled compared to native 4K rendering.
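Since this confusion is a common one: frame time and frame rate are just reciprocals, so a lower frame-time bar means better performance. A quick sketch with hypothetical numbers (not Intel's actual figures):

```python
# Frame time and frame rate are reciprocals, which is why a LOWER bar on a
# frame-time chart means BETTER performance. Example numbers only.
def ms_to_fps(frame_time_ms):
    return 1000.0 / frame_time_ms

native_ms = 22.2             # hypothetical native 4K frame time (~45 FPS)
xess_ms = native_ms / 2      # "doubled performance" halves the frame time
print(ms_to_fps(native_ms))  # ~45 FPS
print(ms_to_fps(xess_ms))    # ~90 FPS
```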
I would like to know what the difference is between this and Nvidia's implementation because in base terms it all sounds the same
Particularly considering they recently poached the guy who invented DLSS at Nvidia.
They even made their own FXAA competitor you can force in the driver (it does almost nothing to edges, and it costs nearly as much performance).

Because (unlike SMAA and FXAA) we will never be able to insert FSR, game devs are going to have to choose one or two techs to bother with!
1) Are you talking about AMD's MLAA? Because that's actually the predecessor to Nvidia's FXAA.
2) We can already inject FSR using ReShade. There's no reason anyone can't inject FSR; it's very easy and simple to do, since it's just a shader.
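Since FSR 1.0 is, broadly, an edge-adaptive spatial upscale followed by a sharpening pass, it really can live as a plain post-process shader. A crude 1-D unsharp-mask stand-in for the sharpening step (not AMD's actual EASU/RCAS math, just the general shape of the technique):

```python
# Crude unsharp-mask sharpening, standing in for an FSR-style sharpening pass.
# Not AMD's actual RCAS math; illustrative only.
def sharpen(row, amount=0.5):
    out = row[:]
    for i in range(1, len(row) - 1):
        blur = (row[i - 1] + row[i] + row[i + 1]) / 3.0
        val = row[i] + amount * (row[i] - blur)  # boost local contrast
        out[i] = min(1.0, max(0.0, val))         # clamp as a shader would
    return out

print(sharpen([0.2, 0.2, 0.8, 0.8, 0.8]))  # [0.2, 0.1, 0.9, 0.8, 0.8]: edge accentuated
```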
I am aware that NVIDIA has supported DP4a since Pascal, but is there any documentation on when AMD started supporting DP4a?
RDNA2 is the first AMD architecture with INT32 support.
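For anyone unfamiliar with DP4a: it's a single instruction that treats two 32-bit registers as four packed signed 8-bit values each and accumulates their dot product into a 32-bit integer (CUDA exposes it as the __dp4a intrinsic). A pure-Python model of the semantics:

```python
# Pure-Python model of DP4a semantics: dot product of four packed signed
# 8-bit pairs, accumulated into a 32-bit integer. On hardware this is one
# instruction, which is what makes the INT8 inference path of XeSS fast.
def dp4a(a_bytes, b_bytes, acc):
    assert len(a_bytes) == len(b_bytes) == 4
    for a, b in zip(a_bytes, b_bytes):
        assert -128 <= a <= 127 and -128 <= b <= 127
        acc += a * b
    return acc

print(dp4a([1, 2, 3, 4], [5, 6, 7, 8], 10))  # 10 + 5+12+21+32 = 80
```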
 
Joined
Mar 26, 2014
Messages
83 (0.02/day)
Location
Somewhere under a Rock.
System Name LongBoi
Processor Ryzen 5950X
Motherboard Gigabyte X570S Aorus Master
Cooling EK 360mm AIO D-RGB
Memory 4x8gb Corsair Vengeance RGB Pro (CMW32GX4M4D3600C16)
Video Card(s) Asus Strix RTX 2080 Ti OC
Storage 2x WD Black (SN850X 2TB(OS), SN770 2TB), 2x Seagate HDDs (2TB+3TB)
Display(s) Asus TUF VG27AQ
Case Lian Li O11 Dynamic Evo
Audio Device(s) Nani!?
Power Supply EVGA Supernova P6 1000w
Mouse Logitech G502 Hero
Keyboard Redragon K556
VR HMD VR is BS.
Software Windows 11 Pro (2H23)
So it's called "SSeX" but backwards...... I'm so sorry, I had to... :roll::roll::roll::roll:
 
Joined
Mar 28, 2020
Messages
1,649 (1.10/day)
XeSS will be implemented using open standards and should work across all platforms, similarly to FSR.
In my opinion, "open standards" is likely being used to sway developers away from bespoke tech like DLSS. But being open is only one part of the equation; the second is how easy it is to implement compared to DLSS and the more primitive FSR. It is ease of integration without sacrificing too much detail that will ultimately win developers over. Intel can sponsor some games here and there to get developers to implement XeSS, but that is not a sustainable method. People can argue that FSR is a low bar because it is not "smart", but from the game developers' standpoint, you get performance back without a significant hit to image quality, and it's very easy for them to implement. Where they once had to optimise a game to hit a performance target on consoles, that part can now largely be taken care of by FSR. So, less work for them.
 

HTC

Joined
Apr 1, 2008
Messages
4,604 (0.78/day)
Location
Portugal
System Name HTC's System
Processor Ryzen 5 2600X
Motherboard Asrock Taichi X370
Cooling NH-C14, with the AM4 mounting kit
Memory G.Skill Kit 16GB DDR4 F4 - 3200 C16D - 16 GTZB
Video Card(s) Sapphire Nitro+ Radeon RX 480 OC 4 GB
Storage 1 Samsung NVMe 960 EVO 250 GB + 1 3.5" Seagate IronWolf Pro 6TB 7200RPM 256MB SATA III
Display(s) LG 27UD58
Case Fractal Design Define R6 USB-C
Audio Device(s) Onboard
Power Supply Corsair TX 850M 80+ Gold
Mouse Razer Deathadder Elite
Software Ubuntu 19.04 LTS
We need some kind of baseline to be able to quantify this "double the performance" claim... hehe

Yup: for example, if it doubles the performance @ 4K from ... 1 FPS to 2 FPS ... it probably isn't as good as they think ...
 
Joined
Jan 8, 2017
Messages
8,944 (3.35/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
Yup: for example, if it doubles the performance @ 4K from ... 1 FPS to 2 FPS ... it probably isn't as good as they think ...

It's not really impressive anyway; of course it doubles the performance if the internal resolution is much lower than native. It's not a miracle that this happens.

RDNA2 is the first AMD architecture with INT32 support.
?

All GPUs have support for INT32, otherwise they wouldn't be able to work at all, and RDNA1 already had support for mixed INT32 execution.
 
Last edited:
Joined
Apr 24, 2012
Messages
1,594 (0.36/day)
Location
Northamptonshire, UK
System Name Main / HTPC
Processor Ryzen 9 5900X / Ryzen 7 2700
Motherboard Strix B550i / B450i Aorus Pro
Cooling Lian-Li Galahad 360 / Wraith Spire
Memory Corsair LPX 2x16 3600MHz / HyperX Predator 2x8GB 3200MHz
Video Card(s) RTX 3080 FE / ARC A380
Storage WD Black SN770 1TB / Sabrent Rocket 256GB
Display(s) Acer Z301c / 39" Panasonic HDTV
Case Corsair 2000D / Cougar QBX
Audio Device(s) Yamaha RX-V379 / Realtek ALC1220
Power Supply Corsair SF600 / BeQuiet SFX Power 2 450W
Mouse Logitech G900
Keyboard Drop Sense75 with WQ Studio Morandi's
VR HMD Rift S
Software Win 11 Pro 64Bit
What are you confused about? The frametime goes down with XeSS enabled compared to native 4K rendering.
It would have been less confusing if they put framerate instead of frametime on the X axis; otherwise it just looks like it decreases performance.
 
Joined
Sep 17, 2014
Messages
20,953 (5.97/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
So, it's about money. Implementation time is money. DLSS takes time to be done right, so it costs money. FSR is easier, so it costs less, but not zero. Hence, if nVidia can offset the cost of DLSS with financial or marketing incentives, they will win. Everything is always about money, especially in the entertainment industry, where money is elevated to the level of a deity.

They won't win, because they are only offsetting that cost in a small selection of games. That is the whole point. You can't keep up with all the games getting released, and going forward, the need for DLSS to get decent performance will only get higher.

Look at PhysX. Nuff said. Where it was implemented, it worked admirably. But it was never everywhere, and it was still at odds with other physics engines even if those worked less well; devs won't be happy to support just half the market with a different experience.

The end result is ALWAYS that it's effectively just used for marketing, appearing in high-profile eye-catchers. Look at the history of DLSS support for perfect proof of that. And RTX is more of the same, but with an even higher dev investment.

Nothing is free, and time to market is money too. The bill can't ever get paid in full, and believing it will is just setting yourself up for another disappointment.
 
Last edited:
Joined
Nov 13, 2007
Messages
10,235 (1.70/day)
Location
Austin Texas
Processor 13700KF Undervolted @ 5.6/ 5.5, 4.8Ghz Ring 200W PL1
Motherboard MSI 690-I PRO
Cooling Thermalright Peerless Assassin 120 w/ Arctic P12 Fans
Memory 48 GB DDR5 7600 MHZ CL36
Video Card(s) RTX 4090 FE
Storage 2x 2TB WDC SN850, 1TB Samsung 960 prr
Display(s) Alienware 32" 4k 240hz OLED
Case SLIGER S620
Audio Device(s) Yes
Power Supply Corsair SF750
Mouse Xlite V2
Keyboard RoyalAxe
Software Windows 11
Benchmark Scores They're pretty good, nothing crazy.
So, now we have three proprietary upscaling technologies, with one being less proprietary but fairly primitive. I don't think this kind of segmentation will last long, since developers don't want to limit their target audience and certainly don't want to spend money implementing three separate technologies to achieve a single goal. So now:
- If the deciding factor is financial incentives for developers, nVidia will win.
- If it's ease of implementation, FSR will win.
- If it's performance, Intel has the upper hand, if their first-party benchmarking is to be believed. But they will really have to make it into a polished product, since they have zero market share and brand recognition as far as dedicated GPUs go.

Personally I think if AMD can create FSR 2.0 with improved quality and performance AND make it hardware agnostic, it will be a clear winner for developers and consumers. As much as I admire the complexity and elegance of AI, the mass market works on the KISS principle.


FSR will win in terms of broad adoption; DLSS 2.0/3.0 will win for ultimate quality. I think it will be the same outcome as the current adaptive-sync situation: you will still have "G-SYNC Ultimate", which is pretty sweet, but most displays will use FreeSync, and it will be awesome. In this case FSR will be a must for most games, with Nvidia-sponsored ones pushing DLSS as well. But for sure FSR 2.0 is the more promising tech due to its openness.
 
Joined
Aug 6, 2020
Messages
729 (0.54/day)
1) Are you talking about AMD's MLAA? Because that's actually the predecessor to Nvidia's FXAA.

No, I'm talking about Intel's own knockoff of the tech:


They say it has nothing to do with MLAA, but who knows?

This is the only option you have in the control panel to insert full-scene post-processing (so if this is the level of support you can expect on Xe, you'd better get used to bone-dry game override options, and maybe a handful of "only invented here" techs).
 
Joined
Jul 9, 2015
Messages
3,413 (1.06/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
Xe written in Cyrillic reads like "he", as if Raja is hinting at something, hehehe.

, the Intel algorithm can either use XMX hardware units (new in Intel Xe HPG), or DP4a instructions (available on nearly all modern AMD and NVIDIA GPUs). XMX stands for Xe Matrix Extensions and is basically Intel's version of NVIDIA's Tensor Cores
If you were wondering whether DLSS not running on the 10xx series was nVidia being too much into money milking: it was.

Am I the only one who dislikes that we don't have one standard AI/temporal upscaling tech, supported on all hardware with the required functionality, instead of each vendor having their own recipe?
Uh, I'm one of the few in the "temporal: no thank you" camp, I guess.
And stop showing that DLSS 2 (the TAA derivative) stuff getting good results on scenes that barely move.

that's why it's so bad, especially in motion.
Amazing post.
You should stop watching videos by "3080 is 2 times faster than 2080" and "8k gaming with 3090" (totally not shills) folks.

Now that there are three, it gets a whole lot harder to defend that idea. Three inventors of a wheel where two are destined to fail is quite a bit more risk than the odd 50% adjusted for market(ing) share. Devs won't go about supporting three technologies either. They want them fed to them or they're not happening.

Uh, I suspect you have missed the two elephants in the room:

1) Effort matters. Is it low? Well, heck, devs can slap a bunch of various implementations in, no problem.
2) There is NO competition between "some crap that runs only on one manufacturer's HW" and stuff that runs on everything. Like, at all. The former is in "does it even still make sense to exist" survival mode. The latter can be inferior but will still do fine; all it needs is to be better than standard upscaling solutions. (Effort does still matter, though, but with FSR we have seen "hard to distinguish from true 4K" results with super low implementation effort too, so, hell, good luck with that.)
 
Joined
Mar 20, 2019
Messages
556 (0.30/day)
Processor 9600k
Motherboard MSI Z390I Gaming EDGE AC
Cooling Scythe Mugen 5
Memory 32GB of G.Skill Ripjaws V 3600MHz CL16
Video Card(s) MSI 3080 Ventus OC
Storage 2x Intel 660p 1TB
Display(s) Acer CG437KP
Case Streacom BC1 mini
Audio Device(s) Topping MX3
Power Supply Corsair RM750
Mouse R.A.T. DWS
Keyboard HAVIT KB487L / AKKO 3098 / Logitech G19
VR HMD HTC Vive
Benchmark Scores What's a "benchmark"?
FSR will win in terms of broad adoption; DLSS 2.0/3.0 will win for ultimate quality. I think it will be the same outcome as the current adaptive-sync situation: you will still have "G-SYNC Ultimate", which is pretty sweet, but most displays will use FreeSync, and it will be awesome. In this case FSR will be a must for most games, with Nvidia-sponsored ones pushing DLSS as well. But for sure FSR 2.0 is the more promising tech due to its openness.
It's true. One might become the "open for everyone" option, with the better proprietary technology being the "high end" option for some. Makes me wonder how Intel can push their technology with zero market share and zero recognition. They do have a lot of money and leverage in the market, though.
Finally, something interesting in the PC world after years of "slightly better numbers and more blinking lights" from the same companies over and over again.
 
Joined
Jul 10, 2015
Messages
749 (0.23/day)
Location
Sokovia
System Name Alienation from family
Processor i7 7700k
Motherboard Hero VIII
Cooling Macho revB
Memory 16gb Hyperx
Video Card(s) Asus 1080ti Strix OC
Storage 960evo 500gb
Display(s) AOC 4k
Case Define R2 XL
Power Supply Be f*ing Quiet 600W M Gold
Mouse NoName
Keyboard NoNameless HP
Software You have nothing on me
Benchmark Scores Personal record 100m sprint: 60m
On desktop they will have 99 problems; in laptops they should prevail, as they already do with CPUs.
 
Joined
Jul 10, 2015
Messages
749 (0.23/day)
Location
Sokovia
System Name Alienation from family
Processor i7 7700k
Motherboard Hero VIII
Cooling Macho revB
Memory 16gb Hyperx
Video Card(s) Asus 1080ti Strix OC
Storage 960evo 500gb
Display(s) AOC 4k
Case Define R2 XL
Power Supply Be f*ing Quiet 600W M Gold
Mouse NoName
Keyboard NoNameless HP
Software You have nothing on me
Benchmark Scores Personal record 100m sprint: 60m
You seem to be somewhat out of touch with economics. Let's say, for the sake of argument, that FSR is incredibly, unreasonably easy, to the point that it takes a day to implement in a complex product. One day of work, times a hundred developers who either have to do it or at least read the documentation to maintain the code, and you have a million dollars of costs. If you tell management to either spend a million on a feature some people will use, or spend five million on another version of that feature but get some of it returned as cash or marketing, guess what management will choose? Games aren't made by enthusiasts; they are made by accountants.
1 vs. 5 million?
 
Joined
Oct 12, 2005
Messages
682 (0.10/day)
Uh, I'm one of the few in the "temporal: no thank you" camp, I guess.
And stop showing that DLSS 2 (the TAA derivative) stuff getting good results on scenes that barely move.

I am in the same boat as you regarding the temporal artifacts. Although not all games have them, the implementation of DLSS is very uneven, and they always seem to use the best examples to make DLSS shine.

That also leads me to think that the AI portion of DLSS isn't that much AI. There is simply not enough time to do real AI processing there. And spare me the pre-trained stuff; they probably just figured out what the best algorithm would be. They probably use the tensor cores and do some calculations in INT8 or another AI data format, but that doesn't mean much.

AI seems to be the new "nanotechnology" of years past, where everything stamped with the word gets trending and gets financing. Don't get me wrong, good stuff came out of it, but a lot of things were just not that at all.

To me, a real deep-learning upsampler would be able to completely eliminate those ghosting artifacts. It could also lean less on the temporal data and more on the AI portion to resolve details. But that takes a lot of computing power, and you probably can't run it in real time. So what we have is a glorified proprietary temporal upsampler.
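For context, the standard non-AI mitigation for that ghosting is to clamp the history sample to the current frame's local neighborhood before blending. A 1-D toy version of that generic TAA technique (not DLSS's actual internals):

```python
# Neighborhood clamping, the classic TAA-style defence against ghosting:
# reject history values that fall outside the range of the current frame's
# local pixels before blending. Toy 1-D version, not DLSS's real code.

def taa_resolve(history, current, alpha=0.1):
    out = []
    for i, cur in enumerate(current):
        lo = min(current[max(i - 1, 0):i + 2])  # local min of current frame
        hi = max(current[max(i - 1, 0):i + 2])  # local max of current frame
        hist = max(lo, min(hi, history[i]))     # clamp stale history
        out.append(alpha * cur + (1 - alpha) * hist)
    return out

prev = [1.0, 1.0, 1.0, 1.0]     # a bright object was here last frame
curr = [0.0, 0.0, 0.0, 0.0]     # it has moved away entirely
print(taa_resolve(prev, curr))  # [0.0, 0.0, 0.0, 0.0]: the ghost is rejected
```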

But also, don't get me wrong: the shimmering that FSR can bring is as annoying as temporal upsampling artifacts, if not more so.
 