
Could Intel create an equivalent tech to SLI/Crossfire?

Since Intel is very much the FNG in the high-performance GPU world, is it possible they could shake things up and bring back some sort of multi-GPU tech for gaming? I'd think with PCIe 4.0 bandwidth and resizable BAR/SAM, multi-GPU tech could be even better than SLI/Crossfire ever was.
 
Probably, but AMD & Nvidia already killed multi-GPU for gaming not that long ago, so I don't see the point in bringing it back.
 
I feel like Intel needs to work on a lot of other things before this should even make its way onto the list of things their GPU division needs to do.
 
Crossfire/SLI is dead. My conspiracy theory is that it was killed to stop mid-range SLI setups from beating out single halo cards; any dual-GPU card would have had to be an exclusive halo product. Instead we are still on monolithic dies and SLI hasn't returned. Both AMD and NVIDIA will go the chiplet route soon, either this upcoming generation or the next.
 
I feel like Intel needs to work on a lot of other things before this should even make its way onto the list of things their GPU division needs to do.
But if Intel had some sort of modern multi-GPU tech that addressed the shortcomings of SLI/Crossfire, it's possible they could compete on performance with the duopoly at the high end.
In addition to resizable BAR and the increased bandwidth of PCIe 4.0/5.0, there's also hardware-accelerated GPU scheduling. Maybe by leveraging these three technologies Intel could make a viable run at high-end multi-GPU gaming?
 
I believe Intel Arc cards do have mGPU support, which is a DX12 feature that developers have to support per game.
All of AMD's RDNA lineup supports mGPU; it's only Nvidia who doesn't support it without SLI and SLI-certified motherboards.
There aren't that many DX12 games currently, to be honest; DX11 has about 11 times as many.
Currently there are around 350 DX12 games, and of those 350, about half support some type of ray tracing.
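For anyone curious what "developers have to support" means in practice: under DX12 the game itself has to find every adapter and drive each one explicitly. A minimal sketch of just the first step, assuming the Windows 10+ SDK (error handling omitted):
```cpp
// Minimal sketch: enumerate all GPUs and create a D3D12 device on each,
// which is the starting point for DX12 explicit multi-adapter ("mGPU").
#include <dxgi1_6.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory6> factory;
    CreateDXGIFactory2(0, IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip the WARP software rasterizer

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_12_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);
    }
    return devices.size() > 1 ? 0 : 1; // found more than one usable GPU?
}
```
Everything after that, like splitting the frame and copying results between adapters, is on the engine, which is probably why so few DX12 games bother.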
 
I was a fan of SLI when it worked.. I don't really think it's needed these days?

It would be better if it used VRAM differently.

Anything can be done these days.. just gotta make it.
 
I was a fan of SLI when it worked.. I don't really think it's needed these days?

It would be better if it used VRAM differently.

Anything can be done these days.. just gotta make it.
I had an R9 290 Crossfire setup back in 2019 and multi-GPU support was already pretty meh back then. I switched to a 980 Ti, and even though it had less raw horsepower, games ran much more smoothly since there was no more micro-stuttering etc.
 
Could they? Yes. Would they want to? Or anyone, for that matter? No. Multi-GPU rendering is dead, and honestly it was more of a band-aid fix for anemic performance back then. It's all about finesse now. Things like shader execution reordering - that's the path forward.
 
I am typing on an SLI-certified board right now, just in case they want to bring it back :laugh:
 
I had an R9 290 Crossfire setup back in 2019 and multi-GPU support was already pretty meh back then. I switched to a 980 Ti, and even though it had less raw horsepower, games ran much more smoothly since there was no more micro-stuttering etc.
mGPU is better; it doesn't use AFR most of the time, which is what causes that microstuttering. You should try it with two 6700 XTs - they support mGPU too. In fact, all RDNA GPUs support mGPU. The only thing I believe you lose is ReBAR with mGPU enabled on AMD cards.
AFR was known to have frame-time issues from the day it came out. I can go dig up 18-year-old articles if you'd like proof.
Scissor and tile rendering were shunned by reviewers because of the lack of an even load on each card and lower frame rates, even though both had more consistent frame times.
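To illustrate the difference being described - this is just a toy sketch, not real renderer code: AFR hands whole frames to alternating GPUs, so any imbalance shows up as uneven frame pacing, while scissor/SFR splits every frame so both cards contribute to each one:
```cpp
// Toy illustration of the two work-distribution schemes discussed above.
// AFR: whole frames alternate between GPUs, so one slow GPU delays every
// other frame (micro-stutter). Scissor/SFR: each frame is split into
// regions, so every GPU contributes to every single frame.
#include <cstdio>

int main() {
    const int gpus = 2;
    for (int frame = 0; frame < 4; ++frame) {
        // AFR: frame N is rendered entirely by GPU (N mod gpus)
        printf("AFR: frame %d -> GPU %d\n", frame, frame % gpus);
        // Scissor/SFR: each GPU renders a horizontal slice of frame N
        for (int gpu = 0; gpu < gpus; ++gpu)
            printf("SFR: frame %d, rows [%d%%..%d%%) -> GPU %d\n",
                   frame, 100 * gpu / gpus, 100 * (gpu + 1) / gpus, gpu);
    }
    return 0;
}
```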
 
So you have to give up resizable BAR/SAM to utilize mGPU?

Why can't mGPU be implemented in drivers transparently (i.e., invisibly to game developers)?
 
SLI is dead - even more dead now that PCIe slots have been cannibalized.

The only thing I would like to see is the ability to add a second card purely for VRAM capacity, but I think even that would have issues, so instead GPUs need to come with a means of upgrading VRAM post-purchase.
 
My 4070 gets 26 GB/s over the PCIe link. Maybe when PCIe 7.0 comes it becomes 100 GB/s and they bring back SLI. The scaling issues are because of communication bandwidth.
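For context, here's the back-of-the-envelope arithmetic behind those numbers (raw per-direction x16 rates; real throughput is lower after encoding/protocol overhead, e.g. ~26 GB/s measured on PCIe 4.0's ~32 GB/s raw):
```cpp
// Back-of-the-envelope PCIe x16 bandwidth per direction, to put the
// "26 GB/s today, 100+ GB/s with PCIe 7" idea in context. Rates are the
// commonly quoted per-lane signalling rates in GT/s.
#include <cstdio>

int main() {
    struct Gen { const char* name; double gtps; } gens[] = {
        {"PCIe 4.0", 16.0}, {"PCIe 5.0", 32.0},
        {"PCIe 6.0", 64.0}, {"PCIe 7.0", 128.0},
    };
    const int lanes = 16;
    for (const Gen& g : gens)
        // GT/s per lane * 16 lanes / 8 bits = raw GB/s per direction
        printf("%s x16: ~%.0f GB/s raw per direction\n",
               g.name, g.gtps * lanes / 8.0);
    return 0;
}
```
So a PCIe 7.0 x16 slot would be around 256 GB/s raw each way, which makes the 100 GB/s guess conservative.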

Another possibility is that ray tracing comes to dominate graphics and becomes 100% of the workload. Ray tracing is very scalable across multiple GPUs.

The old days had dedicated PhysX GPUs. Maybe someone starts a dedicated RT thing.

Why not reanimate the corpse of long-gone PhysX? I would like to see destructible environments more than shiny environments. Bring heavyweight PhysX back - not that weak-ss particle PhysX where your feet collide with a few items on the ground.
 
Would be nice if you could combine the iGPU + dGPU. AMD had that a while ago. iGPUs are becoming more powerful - it would be a shame not to use them. Even a 32 EU UHD 770 could be tasked with something. But I think it needs to be done via the OS: you install drivers for the iGPU and the dGPU and the OS manages them both at the same time.
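Engines can already tell the two apart today, for what it's worth - here's a rough sketch using DXGI's power-preference enumeration (assuming Windows; whether the OS could then schedule across them automatically is another matter):
```cpp
// Rough sketch: ask DXGI to rank adapters by power preference, which is
// how an application can pick out the iGPU vs the dGPU today. Splitting
// work between them is still entirely up to the application.
#include <dxgi1_6.h>
#include <wrl/client.h>
#include <cwchar>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory6> factory;
    CreateDXGIFactory2(0, IID_PPV_ARGS(&factory));

    // MINIMUM_POWER typically lists the integrated GPU first,
    // HIGH_PERFORMANCE the discrete one.
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; SUCCEEDED(factory->EnumAdapterByGpuPreference(
             i, DXGI_GPU_PREFERENCE_MINIMUM_POWER, IID_PPV_ARGS(&adapter)));
         ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        wprintf(L"adapter %u: %ls (%zu MB dedicated VRAM)\n",
                i, desc.Description, desc.DedicatedVideoMemory >> 20);
    }
    return 0;
}
```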
 
The current graphics APIs don't have what's required for automagic scaling across iGPU + dGPU. Game developers can do it themselves with extra care and time (which is money).

Dota 2, for example, could use a GT 1030 and a K420 at the same time. But really, a single bigger GPU, like a 1050 Ti, would be better there - and in every other game.

Multi-GPU should only be considered when the biggest single GPU is not enough. Currently only a few games bring an RTX 4090 to its knees.
 
Well, is this a real problem for SLI/Crossfire?
When a GPU tries to access the VRAM of another GPU, it has to go through PCIe (there is no NVLink for current desktop GPUs), so PCIe needs to be fast. Otherwise a lot of work has to be duplicated on both GPUs, which decreases scaling.
 
When a GPU tries to access the VRAM of another GPU, it has to go through PCIe (there is no NVLink for current desktop GPUs), so PCIe needs to be fast. Otherwise a lot of work has to be duplicated on both GPUs, which decreases scaling.
OK, I agree that high throughput is needed. But is PCIe 7.0 speed really necessary, given the actual speeds NVLink ran at on custom cards, even in the case of 3090 SLI? Let's not forget the main advantage of the bridge: it doesn't consume PCIe lanes on the motherboard.
 
Let's not forget the main advantage of the bridge: it doesn't consume PCIe lanes on the motherboard.
You mean the CPU-direct PCIe lanes? Lanes coming straight out of the CPU, without a chipset hop, have lower latency. For example, my RTX 4070 is plugged into such a slot and can deliver ~500 MB/s of random-access (4K) throughput from single-threaded use in Windows 11, and much more in Ubuntu. On my old PC it managed only around half of that; many NVMe drives are slower than this at single-threaded access, too. Also, sequential access means higher bandwidth, so the PCIe version will always matter for both GPU communication and file access.
 
You mean the CPU-direct PCIe lanes? Lanes coming straight out of the CPU, without a chipset hop, have lower latency. For example, my RTX 4070 is plugged into such a slot and can deliver ~500 MB/s of random-access (4K) throughput from single-threaded use in Windows 11, and much more in Ubuntu. On my old PC it managed only around half of that; many NVMe drives are slower than this at single-threaded access, too. Also, sequential access means higher bandwidth, so the PCIe version will always matter for both GPU communication and file access.
I mean a physical bridge interconnect, like this:

[attached image: SLI bridge connector]
 
Ah, yes, the negotiator. It requires extra dollars, I guess.
 
No. It is completely impossible.
 
mGPU is better; it doesn't use AFR most of the time, which is what causes that microstuttering. You should try it with two 6700 XTs - they support mGPU too. In fact, all RDNA GPUs support mGPU. The only thing I believe you lose is ReBAR with mGPU enabled on AMD cards.
AFR was known to have frame-time issues from the day it came out. I can go dig up 18-year-old articles if you'd like proof.
Scissor and tile rendering were shunned by reviewers because of the lack of an even load on each card and lower frame rates, even though both had more consistent frame times.
Not gonna waste money on that; this card will go into my 2nd rig when I upgrade this one (and the 2nd rig) to something better.
 