
Could Intel create an equivalent tech to SLI/Crossfire?

They could, but would they pay for it? Let's remember this was already done by AMD back on AM3. I forget the exact chipset, but you could use an R7 250 to get more frames. At the root, though, you would need developers to write that support into games, and those GPUs are weak, while APUs that can actually game already eat 8 PCIe lanes, so...
 
I'd say it's plausible, *if* each card's resources could be virtually pooled together as one MCM-over-PCIe GPU. I may be incorrect, but I believe Arc's Xe HPG lineage might allow for such a thing.
After all, Gen4x16 is considerably higher bandwidth than Gen3x16.

The issue that comes up is less the hardware and more the software.
Even if Intel created something similar for GPGPU workloads, Intel is notorious for keeping tech in-industry or charging for 'features' (which they routinely get away with in-industry, but less so in the consumer market).
Not to mention, it's yet another thing for their Arc driver devs to have to deal with, and they're already doing their best.
 
Iirc the only real issue was the lack of developers/studios actually supporting it. Sometimes it scaled really well, but most of the time it was an afterthought with minor performance improvements or just flat out unsupported.

I mean, of course it’s possible — it was standard on almost all GPUs for a decade plus. But, having used it myself three or four times, it was just too poorly supported to bother with (and yes microstutter and heat).
 
This will all of a sudden make the RTX 4060 Ti 16 GB make some sense. Please don't un-embarrass this GPU. I object!
 
Couldn't Intel do something a bit like Lucid Hydra, using DirectStorage to shorten the round-trip time with compression/decompression? I would think that between DirectStorage, PCIe 4.0/5.0, and DDR5, not to mention CPUs and GPUs with larger caches, SLI/CF-style mGPU would easily be better today.

Beyond all of that, they could utilize something like CFR (checkerboard frame rendering), where each GPU uses its own resources. That was actually supposed to be one of the major mGPU features Microsoft touted for DX12 at the time: being able to leverage the VRAM allocation on each GPU instead of rendering from just one allocation and mirror-copying it between them.

What could be done differently today with multi-GPU rendering is developing a seamless way to leverage checkerboard frame rendering combined with variable rate shading techniques.

Now think about that for a moment with post-processing and with AA and/or AF, for example. Take a 3x3 pixel tile and you've got an anti-aliasing tile block; multiply it and you've got several. Want a long column or row? Link tiles together. Want a bigger square? Join them. Gradient effects are no problem either: step the shading rate across tiles at 100%/90%/80%/70%/60%/50%. Either way, you get the idea: tile-based variable rate shading seems like an obvious place to leverage multiple GPUs with checkerboard frame rendering. Want to include or exclude a given tile, row, or column? No problem; that's where an AI inference algorithm could decide which tiles get full rate, which get reduced, and which get skipped this frame and revisited later.
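To make that a bit more concrete, here's a toy sketch (purely illustrative, not any vendor's API) of how a frame could be carved into tiles, dealt out to two GPUs checkerboard-style, and given a per-tile shading rate from some importance score:

```cpp
#include <cstddef>
#include <vector>

// Illustrative only: split the frame into shading tiles, hand alternating
// tiles to each of two GPUs, and pick a coarser shading rate for tiles a
// heuristic (or an inference pass) judges less important.
struct Tile {
    int x, y;          // tile coordinates within the frame grid
    int gpu;           // which GPU renders this tile (0 or 1)
    int shadingRate;   // 1 = full rate, 2 = 2x2 coarse, 4 = 4x4 coarse
};

std::vector<Tile> planFrame(int tilesX, int tilesY,
                            const std::vector<float>& importance) // one 0..1 score per tile
{
    std::vector<Tile> plan;
    plan.reserve(static_cast<std::size_t>(tilesX) * tilesY);
    for (int y = 0; y < tilesY; ++y) {
        for (int x = 0; x < tilesX; ++x) {
            const float score = importance[static_cast<std::size_t>(y) * tilesX + x];
            Tile t;
            t.x = x;
            t.y = y;
            t.gpu = (x + y) % 2;  // checkerboard split across the two GPUs
            t.shadingRate = (score > 0.66f) ? 1 : (score > 0.33f) ? 2 : 4;
            plan.push_back(t);
        }
    }
    return plan;
}
```

Each GPU would then only shade the tiles assigned to it and the halves get composited, which is the checkerboard part; the variable rate part is just the per-tile shading rate.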
 
I feel like Intel needs to work on a lot of other things before this should even make its way onto the list of things their GPU division needs to do.
^^THIS^^

IF, and when, they can ever produce a credible card, with high-quality and stable drivers, to meet or beat the other two, then perhaps they could look into the mGPU thingy. But given their other issues right now, that would be a massive waste of time and money... even IF they could convince the game devs to support it, which would probably be at least as difficult and expensive, if not more so, than developing the card(s) to start with...
 
Since Intel is very much the underdog in the gaming GPU market now and doesn't seem to have anything competitive w/the duopoly at anything except low tier, they're going to have to do something to generate interest in their GPU products.
 
Why can't mGPU be implemented in drivers transparently (i.e. invisibly to game developers)?

Because the implicit (xfire/SLI) approach adds more load to the driver development process to produce compatibility profiles and, in some (many?) cases, still needs game devs to modify their games to achieve any meaningful gains.

Forcing the gamedev to handle multi-GPU workload distribution greatly simplifies the process, makes driver development easier, and opens up more possibilities than with the rigid, implicit approach.
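To put "explicit" in concrete terms: under DX12 the game itself enumerates the adapters, creates a device on each (or one linked-node device), and decides what work goes where; the driver no longer has to guess. A minimal sketch of just the enumeration step, assuming the standard Windows SDK headers and with error handling trimmed:

```cpp
#include <d3d12.h>
#include <dxgi1_6.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

// Enumerate every hardware adapter and create a D3D12 device on each one.
// In a real engine, the game would then distribute command lists, resources,
// and presentation across these devices itself.
std::vector<ComPtr<ID3D12Device>> createDevicesOnAllAdapters()
{
    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIFactory6> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return devices;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc{};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue;  // skip WARP / software adapters

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);
    }
    return devices;
}
```

Everything after this point - splitting the frame, copying results between cards, presenting - is the game's problem, which is exactly why so few titles bother.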

On topic: KISS.
SLI/Xfire were a complex thing that required much work for gains easily beaten by a generational upgrade or even jumping tiers in the same gen.
Focusing on improving single GPU performance benefits everyone. Wasting resources on implicit mGPU benefits a few (and harms everyone else).
 
Could? Yes. Would they want to? Or anyone, for that matter? No. Multi-GPU rendering is dead, and honestly it was more of a band-aid fix for anemic performance back then. It's all about finesse now. Things like shader execution reordering - that's the path forward.
Shader execution reordering is a very cool feature, but to be fair, Intel beat Nvidia in bringing this feature to the market.
 
I loved SLI when it worked.

What I'd rather see today is a dedicated RT card, just like we used to have with PhysX.
 
I would hope Intel could create something that isn't equivalent, since SLI/CF were a puke-red mess of abandoned rubbish. That's just my hot take, though: they were full of problems, and just when you thought maybe they'd iron them out with some innovation around the obvious bottleneck choke points, both were abandoned nearly outright. I don't think it's that multi-GPU can't be done so much as that they don't want to sell multi-GPU options today. They've simply come to the conclusion that they make more money by not offering it, selling more generational halo-tier options instead and repeating the process with incremental generational uplift. People seem willing to pay for it, though, out of desperation for more performance.

There is the developer angle as well, but instead of SLI/CF they're now supporting three different damn upscaling techniques in its place in some cases, so really, what did they gain in the end? If they were too lazy to do SLI/CF well, I can tell you right now they won't be any less lazy with upscaling or RT effects or pretty much anything else that takes real effort, as opposed to quickly producing half-baked, early-access, microtransaction-riddled dumpster-fire games. Just the same, games are difficult to make well as an individual or a small group. There is a lot that goes into making a good, proper game, given the expectations people have for the money they spend on them.

I don't know if Intel will take a serious swing at multi-GPU rendering or not, but I can't imagine they could do any worse at it. It would be hard to do worse given all the improvements in areas like DirectStorage, PCIe, system memory, CPU cores and cache, plus things like variable rate shading and post-processing that could be offloaded to another GPU or alternated between them. They would almost have to try in order to do worse at it today, because there is no reason it shouldn't be better with modern approaches; it would be rather impressively broken if it weren't. Also, generational GPU progress has slowed down a lot, so chances are the uplift would seem a whole lot more reasonable now even without perfect scaling. Scaling was pretty good anyway, and the biggest fault was micro-stutter, which was latency- and bandwidth-related, and both have improved a modest amount within systems since.

Notice that NVIDIA killed off SLI at the low end and mid-range first; that tells you all you need to know about their agenda. They would much sooner push single cards at more cost, AMD doesn't mind either, and neither do the lazy developers who no longer have to code for SLI/CF, especially as they're being pushed to adopt fakescale technology instead, pitched as near-native quality. Native, as in unprocessed, is what was actually rendered; upscaling is pitching an orange as an apple. It's like skim milk with water added being sold as the real thing: it's percentile milk quality, not 100% milk. And no matter how much you like almonds, almond milk is still not milk; you don't get almonds out of a cow.
 
We have this, it's called "DX12 Explicit multi GPU" and it's up to game devs to implement.

Which they do not.


SLI and Crossfire died because, past a certain point, it became a serious issue to get them working in more and more complex game engines. Like DLSS etc., it all needed to be coded in by the game devs - the market wasn't there, so they never did.

No one wants a game that needs two GPUs to run, so the higher-ups see no point in marketing for it or spending money on it - only Nvidia and AMD ever profited, and they couldn't get games working right.

StarCraft II (DX9) had an SLI profile; by the time its final expansion dropped, that profile was "disable SLI".
Killing Floor 2 had an SLI profile, which could be summed up as "flickering mess".


None of this was ever fixed; they reached a point where they couldn't do it without help from the devs, and the devs gave no shits.
 
We have this, it's called "DX12 Explicit multi GPU" and it's up to game devs to implement.

Which they do not.
The last one I can think of off the top of my head is RDR2 using Vulkan, and it's still a little difficult to find a good mGPU comparison for that one.
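For what it's worth, Vulkan's version of explicit multi-GPU goes through device groups, and it's likewise entirely on the application. A minimal sketch of the discovery step (assumes an already-created `VkInstance` named `instance`):

```cpp
#include <vulkan/vulkan.h>
#include <vector>

// Sketch: list the physical-device groups Vulkan 1.1+ exposes. An application
// doing explicit multi-GPU creates its logical device across one of these
// groups and then distributes work between the group's GPUs itself.
std::vector<VkPhysicalDeviceGroupProperties> queryDeviceGroups(VkInstance instance)
{
    uint32_t count = 0;
    vkEnumeratePhysicalDeviceGroups(instance, &count, nullptr);

    std::vector<VkPhysicalDeviceGroupProperties> groups(count);
    for (auto& g : groups) {
        g.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES;
        g.pNext = nullptr;
    }
    vkEnumeratePhysicalDeviceGroups(instance, &count, groups.data());
    return groups;  // a group with physicalDeviceCount > 1 means linkable GPUs
}
```

A game would then create its logical device across one of those groups (via `VkDeviceGroupDeviceCreateInfo`) and split the work itself.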
 
AMD Presentation on Explicit mGPU:

nVidia 1st party deep-dive on DX12 Explicit mGPU implementations:

Dunno if anyone else has noticed, but (at least) Crossfire(X) has been entirely supplanted by DX12/VK mGPU. (nVidia appears to still use SLI terminology, intermittently)
AMD very directly implies AMD MGPU works generically on pre-DX12/VK-mGPU-supporting titles. (supplanting CrossfireX)
Multi-GPU support and performance varies by applications and graphics APIs. For example, games/applications using DirectX® 9, 10, 11 and OpenGL must run in exclusive full-screen mode to take advantage of AMD MGPU. For DirectX® 12 and Vulkan® titles, multi-GPU operation is exclusively handled by the application and configured from the in-app/game menu for graphics/video.
See also: https://community.amd.com/t5/graphics/about-mgpu-technology/m-p/575585

-Even, in 3rd Party marketing. (example: My 6500 XT was advertised as CrossfireX Ready, and others still are.)


As mentioned, support is abysmal
As of 4 years ago, there were only a handful of titles:
 
I have an A770 and an A380; I will try them when I have time.
I think DX12 mGPU games will work fine, because they don't need any driver support, just DX12 support.
Very sad that only a few games support it.
 
Because the implicit (xfire/SLI) approach adds more load to the driver development process to produce compatibility profiles and, in some (many?) cases, still needs game devs to modify their games to achieve any meaningful gains.

Forcing the gamedev to handle multi-GPU workload distribution greatly simplifies the process, makes driver development easier, and opens up more possibilities than with the rigid, implicit approach.

On topic: KISS.
SLI/Xfire were a complex thing that required much work for gains easily beaten by a generational upgrade or even jumping tiers in the same gen.
Focusing on improving single GPU performance benefits everyone. Wasting resources on implicit mGPU benefits a few (and harms everyone else).
That whole last statement is false.
The option of more choices should matter to everyone, especially since it's a feature of DX12 itself, and also because of the obscene prices of newer-generation GPUs.
 
Multi-GPU is dead with DirectX 12 and modern GPU architectures, so I highly doubt it. Not to mention, they have to get their single-GPU performance and power consumption in order first.
 
Multi-GPU is dead with DirectX 12 and modern GPU architectures, so I highly doubt it. Not to mention, they have to get their single-GPU performance and power consumption in order first.
The GTX 480 was like a 250 W card. Sure, the current upper tier probably pulls too much power to have SLI throughout the stack, but the smaller cards are prime for it. Two 4080s under 300 W each would be acceptable, but 30-50 W less from the upper stack would be much, much better.
 
The GTX 480 was like a 250 W card. Sure, the current upper tier probably pulls too much power to have SLI throughout the stack, but the smaller cards are prime for it. Two 4080s under 300 W each would be acceptable, but 30-50 W less from the upper stack would be much, much better.
You'd be limited by VRAM and PCIe bandwidth, just like you were back in the day. That's why SLI and CF were never that popular, I guess. Not to mention, Nvidia killed SLI in the lower end first, then the mid-range, and only later in the high-end.
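Some rough back-of-the-envelope numbers on that: a 4K RGBA8 framebuffer is 3840 x 2160 x 4 bytes, roughly 33 MB, so just shipping finished frames to the display-connected card at 120 fps is about 4 GB/s, before you sync a single shared render target or shadow map between the GPUs. PCIe 3.0 x16 tops out around 16 GB/s and 4.0 x16 around 32 GB/s, while each card's own VRAM runs at several hundred GB/s, so anything that has to cross between cards is an order of magnitude or more slower to reach than local memory.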
 
You'd be limited by VRAM and PCIe bandwidth, just like you were back in the day. That's why SLI and CF were never that popular, I guess.
Ima need a reminder cause my old 980Tis in SLI never seemed hindered by either of those.
Doesn't NVLink pool the memory anyways?
 
Killing off support in the lower end of the product range was the final nail in the coffin. It really didn't help that the lower-end and mid-range GPUs were more anemic on VRAM than what Nvidia could have put on them. Meanwhile, they now put a lot more VRAM on certain cards that can barely utilize it, and not enough on cards that could make better use of it. They want to force people into sooner upgrades or buying the next tier up.

Ima need a reminder cause my old 980Tis in SLI never seemed hindered by either of those.
Doesn't NVLink pool the memory anyways?

That was a halo-tier card though, leaving aside the Titan and the Tesla/workstation GPUs that are honestly best suited to SLI. The GTX 960 4GB scaled pretty well, but the VRAM was still rather limited (something more like 6GB would have been more ideal), and the memory bus wasn't exciting, so memory didn't scale great either. It still did pretty well on scaling relative to a GTX 980, though.

Now, if you took a Tesla card of similar GPU performance with 8GB of VRAM and put those in SLI, I'd expect it to easily outperform GTX 960 4GB cards in SLI, especially at higher resolutions, and it might even at times give a GTX 980 Ti a run for its money.

Design limitations hamper expectations depending on usage. A lot of GPUs become VRAM-limited too quickly, or are too resource-limited to push past the tier above them, or are just poorly balanced designs. It's tricky to design something ideal for both single-card and multi-GPU usage from a product-stack standpoint if you don't keep enough of the bottlenecks in mind, so some combinations end up a lot better balanced for one scenario than the other, or really double down on design limitations that weren't ideal to begin with.
 
Ima need a reminder cause my old 980Tis in SLI never seemed hindered by either of those.
Doesn't NVLink pool the memory anyways?
SLi stored a mirror image in your VRAM, so with two 4 GB cards, you doubled your theoretical performance (it was closer to 1.5x in practice), but you still only had 4 GB VRAM.
 
SLi stored a mirror image in your VRAM, so with two 4 GB cards, you doubled your theoretical performance (it was closer to 1.5x in practice), but you still only had 4 GB VRAM.

DirectX 12, though, was supposed to allow multi-GPU setups to utilize each GPU's full memory allocation.
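That's the linked-node path in D3D12: one device spanning the GPUs, with each resource created on (and visible to) a specific node instead of mirrored everywhere. A rough sketch of what per-node allocation looks like (illustrative only; `device` is assumed to already exist, and sizes/flags are arbitrary):

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

// Create a separate default-heap buffer that physically lives on each node of
// a linked-node adapter, instead of mirroring one copy on every GPU.
std::vector<ComPtr<ID3D12Resource>> createPerNodeBuffers(ID3D12Device* device,
                                                         UINT64 sizeInBytes)
{
    std::vector<ComPtr<ID3D12Resource>> buffers;
    const UINT nodeCount = device->GetNodeCount();

    D3D12_RESOURCE_DESC desc{};
    desc.Dimension        = D3D12_RESOURCE_DIMENSION_BUFFER;
    desc.Width            = sizeInBytes;
    desc.Height           = 1;
    desc.DepthOrArraySize = 1;
    desc.MipLevels        = 1;
    desc.Format           = DXGI_FORMAT_UNKNOWN;
    desc.SampleDesc       = {1, 0};
    desc.Layout           = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;

    for (UINT node = 0; node < nodeCount; ++node) {
        D3D12_HEAP_PROPERTIES heap{};
        heap.Type             = D3D12_HEAP_TYPE_DEFAULT;
        heap.CreationNodeMask = 1u << node;   // memory lives on this GPU
        heap.VisibleNodeMask  = 1u << node;   // and only this GPU sees it

        ComPtr<ID3D12Resource> buffer;
        if (SUCCEEDED(device->CreateCommittedResource(
                &heap, D3D12_HEAP_FLAG_NONE, &desc,
                D3D12_RESOURCE_STATE_COMMON, nullptr, IID_PPV_ARGS(&buffer))))
            buffers.push_back(buffer);
    }
    return buffers;
}
```

The catch is that the game then has to track which node holds what and schedule any cross-GPU copies explicitly.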
 
DirectX 12, though, was supposed to allow multi-GPU setups to utilize each GPU's full memory allocation.
The developers still have to do the heavy lifting; no wonder SLI and Crossfire are dead.
 