Thursday, May 2nd 2019

Intel Xe GPUs to Support Raytracing Hardware Acceleration

Intel's upcoming Xe discrete GPUs will feature hardware acceleration for real-time raytracing, similar to NVIDIA's "Turing" RTX chips, according to a company blog detailing how Intel's Rendering Framework will work with the upcoming Xe architecture. The blog only mentions that the company's data-center GPUs support the feature, not whether its client-segment ones do. The data-center Xe GPUs are targeted at cloud-based gaming services and cloud-computing providers, as well as those building large rendering farms.

"I'm pleased to share today that the Intel Xe architecture roadmap for data center optimized rendering includes ray tracing hardware acceleration support for the Intel Rendering Framework family of API's and libraries," said Jim Jeffers, Sr. Principal Engineer and Sr. Director of Intel's Advanced Rendering and Visualization team. Intel did not go into technical details of the hardware itself. NVIDIA has demonstrated that you need two major components on a modern GPU to achieve real-time raytracing: 1. fixed-function hardware that computes intersections of rays with triangles or surfaces (the RT cores, in NVIDIA's case), and 2. an "inexpensive" de-noiser. NVIDIA took the AI route for the latter, deploying tensor cores (matrix-multiplication units) that accelerate the building and training of deep neural networks. Both tasks are achievable without fixed-function hardware, using programmable unified shaders, but at great performance cost. Intel, for its part, has developed a CPU-based de-noiser that can leverage AVX-512.
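For a sense of what that first component actually computes: the ray/triangle intersection test is the operation fixed-function RT hardware evaluates billions of times per frame. Below is a plain-Python sketch of the classic Möller–Trumbore algorithm, purely for illustration — neither Intel's nor NVIDIA's actual implementation, and real hardware traverses BVH-packed triangle data rather than taking individual calls like this.

```python
def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def cross(a, b): return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-8):
    """Möller-Trumbore: return distance t along the ray to the hit, or None on a miss."""
    e1, e2 = sub(v1, v0), sub(v2, v0)       # triangle edge vectors
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:                      # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    t_vec = sub(origin, v0)
    u = dot(t_vec, p) * inv                 # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(t_vec, e1)
    v = dot(direction, q) * inv             # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv                    # distance along the ray
    return t if t > eps else None
```

A ray fired from (0.2, 0.2, -1) along +Z hits the unit triangle in the XY plane at t = 1.0; an offset ray misses and returns None.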
Source: Intel
Add your own comment

59 Comments on Intel Xe GPUs to Support Raytracing Hardware Acceleration

#1
ebivan
I'll believe it when I see it. Will Xe be manufactured on Intel's mystic 10nm node?
Posted on Reply
#2
btarunr
Editor & Senior Moderator
ebivan said:
I'll believe it when I see it. Will Xe be manufactured on Intel's mystic 10nm node?
No, they'll use Samsung 5 nm EUV.

They don't want to miss the bus with market-relevance at launch and end up with another Larrabee this time.
Posted on Reply
#3
MT66
Intel XeForce XeTX 1000!
Posted on Reply
#5
Tsukiyomi91
What type of ray-tracing method is it going to be using? Real-time or not? If it's real-time, then Nvidia has a competitor while AMD doesn't (yet again), but if it's rendered after rasterization, then Nvidia has no competition & will continue to milk the market.
Posted on Reply
#6
sutyi
Tsukiyomi91 said:
What type of ray-tracing method is it going to be using? Real-time or not? If it's real-time, then Nvidia has a competitor while AMD doesn't (yet again), but if it's rendered after rasterization, then Nvidia has no competition & will continue to milk the market.
Probably not the gaming-related kind if it uses CPU-based AVX-512 for denoising. If you want any sort of real-time RT, then all of the "magic" needs to happen on the graphics card, so you can get at least semi-decent framerates and frame pacing.
Posted on Reply
#7
Tsukiyomi91
I mean, it's obviously not gaming-related but a graphics-intensive workflow like model rendering, casting rays on objects, etc.
Posted on Reply
#8
londiste
Notable that they are talking about studio workflows, not focusing on real-time and talking about Intel Rendering Framework. They are not saying anything about DXR or Vulkan.
Posted on Reply
#9
notb
ebivan said:
I'll believe it when I see it. Will Xe be manufactured on Intel's mystic 10nm node?
Outsourced to Samsung or TSMC (likely the former). They've invested so much in getting 10nm capable of high clocks - no point in wasting it on GPUs.
As for hardware ray tracing - Intel is one of the biggest FPGA players. This is definitely within their reach even at this moment. How it fares against RTX is another story.
Tsukiyomi91 said:
What type of ray-tracing method is it going to be using? Real-time or not? If it's real-time, then Nvidia has a competitor while AMD doesn't (yet again), but if it's rendered after rasterization, then Nvidia has no competition & will continue to milk the market.
Ray tracing works the same either way. You either push frames fast enough to be considered "real time" or you don't.
What do you mean by "rendered after rasterization"?
sutyi said:
Probably not the gaming-related kind if it uses CPU-based AVX-512 for denoising. If you want any sort of real-time RT, then all of the "magic" needs to happen on the graphics card, so you can get at least semi-decent framerates and frame pacing.
Cloud gaming platform.
Signal is encoded and decoded, goes through multiple routers and switches, travels hundreds of km via wire or wireless.
And you worry about denoising on a die 20cm away.
londiste said:
Notable that they are talking about studio workflows, not focusing on real-time and talking about Intel Rendering Framework. They are not saying anything about DXR or Vulkan.
The first part of the article is about their existing products. Rendering movies and complex static 3D models happens on CPUs.
GPUs are too slow (ray tracing is sequential, i.e. heavily single-threaded). Also, complicated models are way too big for RAM available on GPUs.
The article mentions a few CPU features and libraries that were created to accelerate ray tracing.

The second part is about future GPUs. It doesn't mention cloud gaming explicitly, but that's the best use case.
Posted on Reply
#10
Mysteoa
My yet to be released GPU also supports Ray Tracing and also Wave Tracing.
Posted on Reply
#11
londiste
notb said:
The first part of the article is about their existing products. Rendering movies and complex static 3D models happens on CPUs.
GPUs are too slow (ray tracing is sequential, i.e. heavily single-threaded). Also, complicated models are way too big for RAM available on GPUs.
The article mentions a few CPU features and libraries that were created to accelerate ray tracing.

The second part is about future GPUs. It doesn't mention cloud gaming explicitly, but that's the best use case.
The bolded part is what it seems to be about. Reading between the lines, Intel will implement the same solutions on GPU instead of CPU. This does not mean explicit hardware support for ray tracing (similar to Nvidia's RT cores or otherwise).
Posted on Reply
#12
goodeedidid
When are they coming out?? I want to spend money already, geez...
Posted on Reply
#13
bug
Tsukiyomi91 said:
What type of ray-tracing method is it going to be using? Real-time or not? If it's real-time, then Nvidia has a competitor while AMD doesn't (yet again), but if it's rendered after rasterization, then Nvidia has no competition & will continue to milk the market.
What kind of question is that? If you don't need real-time, you can do ray tracing on pretty much any GPU. Hell, you can do it in software, using no GPU at all. Of course this is about RTRT.
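To put that point in concrete terms, here is a toy CPU-only ray tracer that shades a single sphere with a simple Lambert lighting term. The scene, camera, and 32x32 resolution are made up for illustration; real-time performance is exactly what this naive per-pixel approach cannot deliver at useful resolutions.

```python
import math

def trace(px, py, width=32, height=32):
    """Shade one pixel: cast a camera ray at a unit sphere at z=3, lit from straight above."""
    # pinhole camera at the origin, ray through the pixel center
    x = 2 * (px + 0.5) / width - 1
    y = 1 - 2 * (py + 0.5) / height
    norm = math.sqrt(x*x + y*y + 1)
    dx, dy, dz = x / norm, y / norm, 1 / norm
    # ray/sphere intersection: sphere center (0, 0, 3), radius 1
    cx, cy, cz = 0.0, 0.0, 3.0
    b = dx*cx + dy*cy + dz*cz
    disc = b*b - (cx*cx + cy*cy + cz*cz) + 1.0
    if disc < 0:
        return 0.0                              # miss: black background
    t = b - math.sqrt(disc)                     # nearest intersection distance
    hx, hy, hz = dx*t, dy*t, dz*t               # hit point
    ny = hy - cy                                # y component of the unit-sphere normal
    return max(0.0, ny)                         # Lambert term for an overhead light

# render the whole (tiny) frame on the CPU, one ray per pixel
image = [[trace(px, py) for px in range(32)] for py in range(32)]
```

Corner pixels miss the sphere entirely (0.0), pixels on the sphere's upper half get a positive brightness, and the lower half falls into shadow.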
Posted on Reply
#14
kastriot
It should be called Xe740, like its granddad.
Posted on Reply
#15
notb
londiste said:
The bolded part is what it seems to be about. Reading between the lines, Intel will implement the same solutions on GPU instead of CPU. This does not mean explicit hardware support for ray tracing (similar to Nvidia's RT cores or otherwise).
I don't see how that could happen.
Intel Embree, the rendering environment described in the text, is a CPU solution heavily utilizing the AVX instruction set. You can't just move it to GPUs.
And really... GPUs are awful at ray tracing. That's why Nvidia made an ASIC and sacrificed some GPGPU potential in RTX cards.

Intel may be planning new PCIe RT accelerators to speed up ray-tracing nodes, but those would not be GPUs - they'd be a new generation of x86 coprocessors. In fact, Xeon Phi cards are currently used for rendering. But that would just be improving products in a segment Intel already dominates.
RTRT for cloud gaming would mean entering a new market - something Intel must do to grow.
Posted on Reply
#16
Crackong
Oh yes, a CPU-based de-noiser.
RIP Tensor cores
Posted on Reply
#17
Vya Domus
Just so you people know, "support for hardware raytracing" may not necessarily mean actual dedicated hardware inside the GPU but rather it can simply mean they adapted existing hardware for some new functions that they now expose at the ISA level.
Posted on Reply
#18
SoNic67
Crackong said:
Oh yes, a CPU-based de-noiser.
RIP Tensor cores
Nobody knows if the Xe de-noising will be fast enough to be real-time, but if it is... yeah, no need to waste GPU silicon on RT.
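For readers wondering what de-noising even involves: the simplest possible example is a 3x3 box filter that averages each pixel with its neighbours. Real denoisers (NVIDIA's DNN-based one, Intel's AVX-512 one) are vastly smarter, but this toy sketch shows the basic per-frame operation being discussed.

```python
def box_denoise(img):
    """Average each pixel with its 3x3 neighbourhood (clamped at image edges)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += img[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out

noisy = [[0.0, 1.0], [1.0, 0.0]]     # a 2x2 "salt and pepper" frame
print(box_denoise(noisy))            # every pixel becomes the mean, 0.5
```

Of course, a box filter blurs edges along with noise; production denoisers are edge-aware and use extra inputs like normals and albedo, which is why they cost real compute per frame.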
Posted on Reply
#19
bug
kastriot said:
It should be called Xe740, like its granddad.
Would be a nice touch if they did that. But this time around Intel is talking data-center SKUs, so their plans must be much bigger than before.
I just wish their parts were as disruptive as the i740 was.
Posted on Reply
#20
SoNic67
Vya Domus said:
Just so you people know, "support for hardware raytracing" may not necessarily mean actual dedicated hardware inside the GPU but rather it can simply mean they adapted existing hardware for some new functions that they now expose at the ISA level.
The de-noiser is essential for real time hardware RT. So that might be what they mean.
Posted on Reply
#21
Totally
Feels like another Larrabee, I don't think Xe is meant for gaming.
Posted on Reply
#22
jabbadap
bug said:
What kind of question is that? If you don't need real-time, you can do ray tracing on pretty much any GPU. Hell, you can do it in software, using no GPU at all. Of course this is about RTRT.
Pascal can do DXR in real time, albeit performance is another question... So until Intel says it supports fully HW-accelerated DXR/Vulkan, this has nothing for gamers. I see this as a direct competitor to AMD's RadeonRays or Nvidia's OptiX.
Posted on Reply
#23
dj-electric
Totally said:
Feels like another Larrabee, I don't think Xe is meant for gaming.
Makes sense, otherwise Intel Graphics would be talking about gaming, pricing of cards for gamers, game engines and other gaming aspects on social media and at events.


oh wait.
Posted on Reply
#24
notb
SoNic67 said:
Nobody knows if the Xe de-noising will be fast enough to be real time, but if it will be... yeah, no need to waste the GPU silicon on RT.
?
"de-noising" (Nvidia DLSS) is one thing, ray tracing is another.
Vya Domus said:
Just so you people know, "support for hardware raytracing" may not necessarily mean actual dedicated hardware inside the GPU but rather it can simply mean they adapted existing hardware for some new functions that they now expose at the ISA level.
The phenomenon you're describing is called "programming". It makes it possible to build universal hardware that can do different things. Great stuff!

...

Yes, "hardware raytracing" means a dedicated chip that does exactly what it's supposed to be doing (just like hardware encoding/decoding, hardware compression, hardware random number generation).
Posted on Reply
#25
bug
jabbadap said:
Pascal can do DXR in real time albeit the performance is another question... So until Intel says it's supports fully HW accelerated DXR/Vulkan this has nothing for gamers. I see this as direct competitor to AMDs RadeonRays or Nvidia Optix.
In other words, Pascal can't do ray tracing in real time.
Posted on Reply