
Intel's DLSS-rivaling AI-accelerated Supersampling Tech is Named XeSS, Doubles 4K Performance

btarunr

Editor & Senior Moderator
Staff member
Intel plans to go full tilt with gaming graphics, with its newly announced Arc line of graphics processors designed for high-performance gaming. The top Arc "Alchemist" part meets all requirements for the DirectX 12 Ultimate logo, including real-time raytracing. During the technology's reveal earlier this week, the company also said that it's working on an AI-accelerated supersampling technology, which it is calling XeSS (Xe SuperSampling). It likely went with Xe in the name because it may plan to extend the technology even to its Xe LP-based iGPUs and the entry-level Iris Xe MAX discrete GPU.

Intel claims that XeSS cuts down 4K frame render-times by half. By all accounts, 1440p appears to be the target use case of the top Arc "Alchemist" SKU. XeSS would make 4K possible (i.e., display resolution set at 4K, rendering at a lower resolution, with AI-accelerated supersampling restoring detail). The company revealed that XeSS will use a neural network-based temporal upscaling technology that incorporates motion vectors. In the rendering pipeline, XeSS sits before most post-processing stages, similar to AMD FSR.
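For readers wondering what "temporal upscaling that incorporates motion vectors" means in practice, here is a deliberately simplified NumPy sketch: reproject the previous high-resolution frame along per-pixel motion vectors, then blend it with a naive upscale of the current low-resolution frame. This is a hypothetical toy, not Intel's algorithm; real implementations add history clamping, rejection heuristics, and (in XeSS's case) a trained neural network.

```python
import numpy as np

def temporal_upscale(low_res, history, motion, blend=0.9):
    """Toy temporal reconstruction: reproject last frame's high-res
    history using per-pixel motion vectors, then blend with a
    nearest-neighbour upscale of the current low-res frame.
    Illustrative only; names and parameters are made up."""
    h, w = history.shape
    # Nearest-neighbour 2x upscale of the current low-res frame.
    current = np.repeat(np.repeat(low_res, 2, axis=0), 2, axis=1)
    # Reproject history: sample where each pixel was last frame.
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(ys - motion[..., 0], 0, h - 1).astype(int)
    src_x = np.clip(xs - motion[..., 1], 0, w - 1).astype(int)
    reprojected = history[src_y, src_x]
    # Accumulate: mostly reprojected history, a little fresh sample.
    return blend * reprojected + (1 - blend) * current
```

The high `blend` weight is why temporal techniques recover so much detail (many frames' worth of samples accumulate) and also why they ghost when the motion vectors or rejection logic are wrong.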

While AMD's FSR technology is purely shader based, the Intel algorithm can use either XMX hardware units (new in Intel Xe HPG) or DP4a instructions (available on nearly all modern AMD and NVIDIA GPUs). XMX stands for Xe Matrix Extensions and is basically Intel's counterpart to NVIDIA's Tensor Cores: units that speed up the matrix math used in many AI-related tasks. The Intel XeSS SDK using XMX hardware will be available this month in open source; the DP4a version will follow "later this year".
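For context, DP4a is a four-element dot product of packed 8-bit integers accumulated into a 32-bit result, which is why it is useful for running quantized neural-network inference on GPUs without dedicated matrix units. A pure-Python model of the arithmetic (illustrative only; the function name and wrap-around handling are my own, not a GPU API):

```python
def dp4a(a_bytes, b_bytes, c):
    """Model of the DP4a operation: dot product of two 4-element
    signed int8 vectors, accumulated into a 32-bit integer c.
    Pure-Python illustration of the arithmetic, not GPU code."""
    assert len(a_bytes) == len(b_bytes) == 4
    acc = c + sum(int(a) * int(b) for a, b in zip(a_bytes, b_bytes))
    # Wrap to signed 32-bit like a hardware accumulator would.
    acc &= 0xFFFFFFFF
    return acc - 0x100000000 if acc >= 0x80000000 else acc

# Example: dp4a([1, 2, 3, 4], [1, 1, 1, 1], 0) → 10
```

Dedicated matrix units like XMX or Tensor Cores do many such dot products per clock in parallel, which is the performance argument for the hardware path.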



View at TechPowerUp Main Site | Source
 
riiight.

So is it blurry, temporal artefacts everywhere ?

We'll never know. Intel really needs to start showing tech instead of talking about tech five years down the line; what a screaming baby they've become.

I just want to see Products!
 
Am I the only one who dislikes that we don't have a standard AI/temporal upscaling tech supported on all hardware with the required features, instead of each vendor having their own recipe?
Well, AMD's tech is pretty much open but marketing wouldn't allow promotion of tech from other brands coz it will make them look inferior.
 
Well, AMD's tech is pretty much open but marketing wouldn't allow promotion of tech from other brands coz it will make them look inferior.
FSR is not temporal, it's regular downscaling+upscaling with sharpening filters slapped together, that's why it's so bad, especially in motion.
 
FSR is not temporal, it's regular downscaling+upscaling with sharpening filters slapped together, that's why it's so bad, especially in motion.
I should try that in RE Village as it supports it.
 
In my opinion, with another new AI upscaling option, ultimately something's gonna give. No way game developers will want to waste time optimising their games for DLSS, FSR and XeSS. Sure, we will see sponsored games running a specific upscaling technology, but with DLSS having a significant head start in "smart" upscaling, the odds of Intel squeezing another one in successfully are very low, unless it is significantly easier to integrate than DLSS. FSR for now remains the "dumb" upscaling that is the easiest to integrate into games, but I suspect future FSR for RDNA3 onwards may also start leveraging AI to do the job.
 
Intel graphics and Intel press materials... keep your expectations in check, people :)
 
In my opinion, with another new AI upscaling option, ultimately something's gonna give. No way game developers will want to waste time optimising their games for DLSS, FSR and XeSS. Sure, we will see sponsored games running a specific upscaling technology, but with DLSS having a significant head start in "smart" upscaling, the odds of Intel squeezing another one in successfully are very low, unless it is significantly easier to integrate than DLSS. FSR for now remains the "dumb" upscaling that is the easiest to integrate into games, but I suspect future FSR for RDNA3 onwards may also start leveraging AI to do the job.
XeSS will be implemented using open standards and should work across all platforms similarly to FSR.
 
This infographic is confusing. If it's an upscaler and it improves performance, shouldn't frametime be lower rather than higher with it enabled? :confused:
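For what it's worth, frametime and FPS are just reciprocals, so "doubled performance" shows up either as half the frametime or as twice the FPS depending on what the chart's axis actually plots. A quick Python sanity check with made-up numbers:

```python
# If XeSS halves the render cost per frame, frametime drops and FPS doubles.
# The 33.3 ms figure below is a made-up example (~30 FPS at native 4K),
# not a number from Intel's materials.
def fps(frametime_ms):
    return 1000.0 / frametime_ms

native_ms = 33.3
xess_ms = native_ms / 2   # "doubles 4K performance" = half the frametime
print(fps(native_ms), fps(xess_ms))  # ≈ 30 → ≈ 60 FPS
```

So if the bars in the infographic get taller with XeSS enabled, the axis is almost certainly FPS rather than frametime.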
 
FSR is not temporal, it's regular downscaling+upscaling with sharpening filters slapped together, that's why it's so bad, especially in motion.
You should read up on FSR and think about what you typed
 
They should stray away from SS mark.
 
FSR is not temporal, it's regular downscaling+upscaling with sharpening filters slapped together, that's why it's so bad, especially in motion.

Ermm, it really isn't "so bad"... and there are plenty of artifacts with DLSS as well.
In fact, I don't even understand that comment, because TAA IS temporal and that has artifacts in motion...

On Topic:
I would like to know what the difference is between this and Nvidia's implementation because in base terms it all sounds the same
 
It looks like Intel is stating that XeSS will work with any GPU vendor. That is good, very good. I do not know if the GPU needs special hardware or whether any DX12 Ultimate GPU will do...

As for artefacts in temporal solutions: a good implementation is much harder to do, but it's not impossible. It's probably something only AAA studios will be able to pull off, though. That is also one of the great points of FSR: not as good as a temporal solution, but easy to work with and quick to implement for smaller studios.

So with XeSS and FSR, there is probably no reason for a game not to implement one or both of them in the future...
 
You should read up on FSR and think about what you typed
What do you mean? FSR is quite literally a slightly modified Lanczos upscale plus a slightly toned-down CAS.

A temporal component with motion vectors is basically the more contemporary method for upscaling (or reconstruction) that both DLSS and now XeSS are using. FSR will no doubt evolve to include a temporal component, but today it does not have any of that.
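To make the Lanczos claim concrete, here is a toy 1-D Lanczos resampler in Python. It is a simplified illustration of the windowed-sinc idea that spatial upscalers like FSR's EASU pass build on, not AMD's actual code; the function names and the normalization step are my own.

```python
import math

def lanczos(x, a=2):
    """Lanczos-a kernel: a sinc windowed by a wider sinc.
    Simplified illustration, not AMD's implementation."""
    if x == 0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def upscale_1d(samples, factor=2, a=2):
    """Resample a 1-D signal with Lanczos interpolation."""
    n = len(samples)
    out = []
    for i in range(n * factor):
        src = i / factor            # position in source coordinates
        lo, hi = int(src) - a + 1, int(src) + a
        acc = wsum = 0.0
        for j in range(lo, hi + 1):
            w = lanczos(src - j, a)
            acc += w * samples[min(max(j, 0), n - 1)]  # clamp at edges
            wsum += w
        out.append(acc / wsum)      # normalize partial kernels
    return out
```

Note this uses only the current frame's pixels: no history buffer, no motion vectors. That is exactly why a purely spatial filter cannot recover detail the way temporal accumulation can.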

Am I the only one who dislikes that we don't have a standard AI/temporal upscaling tech supported on all hardware with the required features, instead of each vendor having their own recipe?
From what it looks like, this is what Intel intends to exploit. If they open up XeSS, it is quite likely to quickly become the "standard" way of upscaling.
It looks like Intel is stating that XeSS will work with any GPU vendor. That is good, very good. I do not know if the GPU needs special hardware or whether any DX12 Ultimate GPU will do...
From what Intel disclosed, Xe-HPG seems to go pretty heavy on matrix math, even heavier than Nvidia. IIRC AMD is going to have something similar in RDNA3.
No doubt everything is doable with shaders but more purpose-built units are likely faster for these ML-based techniques.
 
In my opinion, with another new AI upscaling option, ultimately something's gonna give. No way game developers will want to waste time optimising their games for DLSS, FSR and XeSS. Sure, we will see sponsored games running a specific upscaling technology, but with DLSS having a significant head start in "smart" upscaling, the odds of Intel squeezing another one in successfully are very low, unless it is significantly easier to integrate than DLSS. FSR for now remains the "dumb" upscaling that is the easiest to integrate into games, but I suspect future FSR for RDNA3 onwards may also start leveraging AI to do the job.
Yeah, this just sounds like "too little, too late."

They even made their own FXAA competitor you can force in the driver (it does almost nothing to edges, and it costs nearly as much performance).

Because (unlike SMAA and FXAA) we will never be able to insert FSR from the driver side, game devs are going to have to choose one or two techs to bother with!
 
Ermm, it really isn't "so bad"... and there are plenty of artifacts with DLSS as well.
In fact, I don't even understand that comment, because TAA IS temporal and that has artifacts in motion...
Comparing FSR and DLSS 2.0, the major difference isn't obvious only if you're blind or if you're comparing screenshots of static scenes. In motion, DLSS 2.0 made a big improvement over 1.0; it's almost transparent. Sure, there are artifacts from its temporal nature, but the performance improvement is worth it in most cases, at least in scenic games where the quality of the static image and the ability to run the game at maximum settings with good enough FPS matter more.
And while TAA is, by definition, temporal, FSR isn't. So it isn't really a fair comparison in the first place; people and media comparing them and concluding "yeah, they're comparable" are simply lying to themselves. Like that HU video where they compare a bunch of static screenshots while saying nothing about FSR's performance in motion, where it's all but a blurry mess. Better than nothing, I guess, but not at all comparable to even first-gen DLSS.
You should read up on FSR and think about what you typed
I don't know what you were reading, but FSR is very much not temporal. It's a relatively simple downscale-upscale based on Lanczos, a popular open-source upscaling algorithm. There is no way to make it temporal while retaining its ease of implementation by game developers; it's simply a post-process shader and was developed as such. Whether AMD will eventually come up with an actual alternative to DLSS is another question. I sure hope so, but first they would need to add DL acceleration blocks to their hardware.
 
What do you mean? FSR is quite literally a slightly modified Lanczos upscale plus a slightly toned-down CAS.

A temporal component with motion vectors is basically the more contemporary method for upscaling (or reconstruction) that both DLSS and now XeSS are using. FSR will no doubt evolve to include a temporal component, but today it does not have any of that.

From what it looks like, this is what Intel intends to exploit. If they open up XeSS, it is quite likely to quickly become the "standard" way of upscaling.
From what Intel disclosed, Xe-HPG seems to go pretty heavy on matrix math, even heavier than Nvidia. IIRC AMD is going to have something similar in RDNA3.
No doubt everything is doable with shaders but more purpose-built units are likely faster for these ML-based techniques.
What I meant for him to read up on was the "that's why it's so bad, especially in motion" part of his statement, because there is no temporal component, like you stated.
 
DP4a instructions (available on nearly all modern AMD and NVIDIA GPUs)
I am aware that NVIDIA has supported DP4a since Pascal, but is there any documentation on when AMD started supporting DP4a?
 
Am I the only one who dislikes that we don't have a standard AI/temporal upscaling tech supported on all hardware with the required features, instead of each vendor having their own recipe?

Nope. The same goes for physics engines, another underrated thing in games that works here and there but never as a standard. We could've had so much more of it.

It's a reason not to fund proprietary bullshit, but it requires competitors in the market, and so far all we had was two giants trying to invent the best sauce for the same plate of food. But it's the same plate of food.

Now that there are three, it gets a whole lot harder to defend that idea. Three inventors of a wheel where two are destined to fail is quite a bit more risk than the odd 50% adjusted for market(ing) share. Devs won't go about supporting three technologies either; they want them fed to them or they're not happening. Another marketing idea that won't work anymore is the good old "AMD open, Nvidia closed" distinction. It's no longer a way to stand out from the crowd when a third player deploys yet another policy. Three is a great number for us.
 