
Will you do multi-GPU Today?

Poll: Would you Multi-GPU Today?

  • No, even one is hard to get - all out of stock (0 votes, 0.0%)

Total voters: 126
So it's dead and buried now*, but would you go SLI/Crossfire if it came back as an option with good driver and game support, considering current GPUs' size, power consumption and cost?
That goes for all tiers, not just the top ones.
Assume you could have a shared memory pool across the GPUs (that tech was floating around back in the day, as was mixing different GPU SKUs).

You can choose up to 3 options, but please don't mix yes and no ;)


*Care to see a 7-way 4090 setup with linear scaling?


[Image: close-up of 7x NVIDIA GeForce RTX 4090 in a mining rack]

 
It was a bad idea then, and it's still a bad idea now.
 
I would, under certain conditions.
For example, with two nano-class GPUs whose combined cost is lower than that of the top single GPU.

For example: two 350-euro GPUs instead of one 900-euro card that is considerably slower than the pair.

The thing is, the GPU market now is super screwed, absolute chaos:

1. Super expensive
2. Super large
3. Cut-down PCBs - what for?
4. Reduced performance options - CrossFire and SLI are no longer available, working, or optimised for
5. "Moore's law" long dead and buried - we no longer get 60-70% performance improvements with new generations; 10-30% is the new "norm"
- performance dictates much higher prices - 30% higher performance for 70% more money...

:banghead:
 
My first SLI setup was a couple of GeForce 2s, and they worked gloriously most of the time. My second was with, I think, 670s, and it was a complete waste of time.
 
Currently still running a pair of 780 Tis, because newer games just aren't very interesting to play, and the games I do actually play either support SLI or run plenty fast on just one card anyway.

Would I do multi-GPU with new cards? Probably not, since it's just not necessary. I don't plan on running 8K 120+.
 
Nope there's no need nowadays outside of niche situations.
 
My GPU repair test rig is a mining board so I can run multi-GPU, but I only use it for folding@home to burn in test cards until I sell them on eBay...
 
I played around with the idea of trying Crossfire for fun when I had a single RX 570, but even then it was on its way out for most of the games I was playing, so I didn't do it.

With current cards, hell no: it's expensive enough to buy one, and a single card already draws too much power for my taste.
 
Man this thread already has forgotten the mighty GTX690. I still run mine at times for the nostalgia and cool factor.

Do I want another for quad-SLI? Hell yeah.
 
Very nice, with 2GB VRAM :)
 

Per chip, so 4 GB in total - though it depends on how SLI works, since it mirrors the same data across the two memory pools.

But it wasn't as power hungry as today's hottest and largest single-GPU cards, such as the RTX 4090... :rolleyes:

 
I think AMD must definitely return to multi-GPU. Now that Moore's "law" no longer works, they could extract more performance with dual-GPU setups.

Instead of pitiful performance upgrades: RX 6800 XT to RX 7900 XT is only a mediocre 27%, on a 2-year generation cadence!
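As a rough back-of-the-envelope check on that figure (illustrative arithmetic only, not a benchmark), a 27% uplift every two years works out to roughly 13% per year compounded, versus roughly 28% per year for the old 60-70%-per-generation pace mentioned earlier in the thread:

```python
# Back-of-the-envelope arithmetic for generational GPU uplift.
# The inputs are the illustrative figures from the posts above, not measurements.

def annualized_uplift(gen_gain: float, years: float) -> float:
    """Convert a per-generation speedup factor into a compounded per-year rate."""
    return gen_gain ** (1.0 / years) - 1.0

recent = annualized_uplift(1.27, 2.0)   # RX 6800 XT -> RX 7900 XT: ~27% over 2 years
classic = annualized_uplift(1.65, 2.0)  # the old ~65%-per-generation pace

print(f"recent: {recent:.1%}/yr, classic: {classic:.1%}/yr")
```

Compounding is why the gap feels so large: two generations at the recent pace stack up to ~61% total, where the old pace stacked to ~172%.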
 
Ignoring all the additional complications multi-GPU added for developers of both GPUs and games, the reason it was abandoned was primarily insurmountable problems related to latency and frame-pacing.

Not only were the best multi-GPU solutions no better than a single GPU in these regards, they typically added extra latency through added buffers, sync-cycle waits, and additional scheduler overhead. For a multi-GPU solution to match a single GPU for latency and responsiveness, you need a higher framerate - so gaming at 100Hz on SLI feels like gaming at 60Hz on a single GPU.

That was unacceptable to most of the audience when SLI was dying off, but today there's a greater focus than ever before on responsiveness and immediacy. We have high-refresh monitors, driver options from both AMD and Nvidia to optimise the rendering for minimum input lag, and a massive shift in review focus away from just peak FPS numbers, to looking at minimums and 99th percentile lows. What does it matter if you're getting 300fps if you still get stutters, hitchy animation, and input lag?

I'm not saying that multi-GPU solutions cannot ever solve these issues, but it's one of the dead-ends that multi-GPU solutions ended up in. Nobody in the graphics industry currently has any idea of how to avoid the additional latency caused by dividing up the work and sending them to physically isolated GPUs, only to then try and pace the results evenly. That was perhaps the last nail in the coffin alongside all the added expense and effort to prop up a rapidly-dying, high-maintenance, low-returns, problematic, niche market.

Perhaps if AMD ever manages to get Infinity Fabric working between two separate compute dies, it opens the door to scaling multiple GPU dies across separate cards. But even in the far more mature multi-socket server world, where AMD, Intel, and application developers have decades of experience, there are still unsolved problems getting a single application to run well on two completely separate CPUs rather than on different CCDs of a single CPU. And that's CPU work, which is often not latency-sensitive like realtime graphics - which means we are so far from solving the latency problems for realtime graphics that, for now, a multi-GPU revival for gaming is a distant pipe dream!
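The latency argument above can be sketched as a toy model (all numbers, including the pacing overhead, are made-up illustrative assumptions, not measurements): under alternate-frame rendering, adding GPUs shortens the interval between presented frames, but each frame still takes one GPU's full render time, so input-to-photon latency stays the same or gets worse.

```python
# Toy model of alternate-frame rendering (AFR): throughput vs latency.
# All numbers are illustrative assumptions, not measurements.

def afr_model(render_ms: float, n_gpus: int, pacing_overhead_ms: float = 2.0):
    """Return (frame interval, input-to-photon latency) in milliseconds.

    Each GPU still needs the full render time for its own frame; AFR only
    interleaves frames from different GPUs, and pacing/sync adds overhead.
    """
    interval = render_ms / n_gpus                             # throughput improves
    latency = render_ms + (n_gpus - 1) * pacing_overhead_ms   # latency does not
    return interval, latency

single = afr_model(16.7, 1)  # one GPU at ~60 fps
dual = afr_model(16.7, 2)    # two GPUs: ~120 fps frame interval, but higher latency
```

In this sketch the dual-GPU setup halves the frame interval (8.35 ms vs 16.7 ms) while latency rises (18.7 ms vs 16.7 ms) - the "high framerate that still feels like 60Hz" effect described above.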
 
Look at how easy AMD made it for developers, and yet they chose to not implement it. There's no incentive for developers to do mgpu.
 
Maybe they can put two GPU chiplets right next to each other, with a super-wide Infinity Fabric link between them, and software that presents the two chiplets as a single unit with a common scheduler, memory buses, etc.

It's like the current MCM approach of Navi 31, but developed further, with the GCD cut into two equal slices.

 
I have a 3060 (12GB), so two of them would mean a TDP of 350W. That's a lot, but not unheard of today. But I don't really need that much power.
I used to have dual AMD 290s, but it was a housefire; I could barely game without sweating like crazy. So even if full multi-GPU support came back, my answer is no.
 
Never did and don't intend to.:toast:
 
I've used SLI ever since I had 7600 GT cards; here are the GPUs I ran in SLI over the years.

7600GT
8800GTS 640MB
8800GTS 512MB (used the EVGA Step-Up from the 640MB cards)
GTX 280
GTX 570
GTX 980Ti (only used SLI for maybe 2 months and then went to one card since it was overkill at the time)

Lots of people complain about SLI, but I enjoyed using it. Especially when you had those mid-high tier cards that would give you performance of a top end card and then some.

If SLI was still in use, I might make use of it.


So, instead of having two GPUs that add latency, Nvidia has removed the need by creating DLSS 3.0 (which has its own problems), and then also created Nvidia Reflex to reduce the latency that they added? It feels like a side step: they now have one card able to re-create SLI-type issues instead of letting people use SLI. I guess on the plus side, they don't have to maintain SLI drivers, so they have that going for them.

For gaming, VRAM is mirrored, so you get 2GB even on quad SLI.

I know that was a limitation of SLI/Crossfire, but mGPU in DX12 was supposed to be a better avenue for developers to support multiple GPUs, and it allowed games to use the VRAM on each GPU separately. DX12 mGPU could even let different GPUs work together (I think Ashes of the Singularity was built to showcase this), so you could run a game with an AMD and an Nvidia card in your computer... but I could be remembering that last part wrong about mixing GPUs.

I wonder if it's a difficulty thing for developers to add mGPU to their games, or a cost/time thing?
 
I'm currently using 2 GPUs, but not in an SLI config: my second card is a Matrox M9120 (a 2D card), and it's there just to drive my 2 side panels, while my main RX 5700 drives my main 4K monitor, another 1080p monitor when I need it, plus VR. It's also just much better, performance-wise, to watch something on YouTube via the second Matrox card while I'm playing games on the RX 5700, and that Matrox card is silent (no fan) and uses less than 10W. So for me personally, multiple GPUs still make sense.
 

DLSS is not a side step, it's a step back.
I'm not even sure Nvidia isn't cheating all the time - that even its highest quality setting doesn't use some hidden, unofficial form of DLSS to improve FPS. Hence the reports from everywhere that Nvidia cheats on image quality (lower texture resolution, or some on-the-fly compression during gameplay).
 
hum, been there, done that... and nah, I'm good with a single GPU (well, I am single... so I get along well with my GPU... :laugh: )

although I could also tick yes for old times' sake... my last SLI setup was a mere pair of Asus GTX 580 Matrix Platinum 1.5GB cards, which I got second hand but literally like new with all the box'n stuff, at roughly $200 a piece... but even then, it did not scale well nor work all the time - though I always had a blast when it did :D

technically, I have 2 NX6600GT PCIe cards set aside that I would use in a fun/nostalgic build, and a Zotac 9800GX2 that I would tri-SLI with an Asus 9800 GT Matrix if I manage to make them work again (second hand again: less than $30 for the GX2 and $15 for the Matrix at auction - proud of the Matrix, as it is a rare one :D )

for Crossfire, I have an Asus HD4870 and a HIS IceQ3 HD3870, and I would love to find two more of the same brand and kind for fun's sake

and that would also motivate me to check what's wrong with my MSI K8N Neo2 Platinum (well... for reasons other than multi-GPU, of course... it only had one x16 slot :laugh: ah, nope, one AGP, hehehe... memory failing, ahah! oh well, I still want to make it work again) and my DFI LanParty UT NF4 SLI-DR, since I have a nice Venice-core Athlon 64 for them...
or I could use my SuperMicro H8DCE with 2 Opteron 270/275s for even more laughs...

yep, SLI/Crossfire makes me all nostalgic (but not for the performance or anything like that... because it has been dead, done and dusted for a long time, ahah)


other types of multi-GPU, well... that can be done, and it does serve purposes for multi-monitor and other applications, but it's not the same :oops:
 
My first taste was with a BFG and a PNY 6600GT on an Abit Fatal1ty AN8, lol. I played with it a few times across different generations of cards. It was great when it worked properly. It would have been better if cards had had more VRAM back then.
 
Ignoring all the additional complications multi-GPU added for developers of both GPUs and games, the reason it was abandoned was primarily insurmountable problems related to latency and frame-pacing.
I agree that it's probably dead forever. However, there's still some research going into improvements for SFR, i.e. split-frame rendering. That would avoid the frame-pacing problems, but will probably reduce the speedup compared to alternate-frame rendering.
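The trade-off can be sketched as a toy model (the jitter and overhead factors are made-up illustrative assumptions, not a renderer): AFR alternates whole frames across GPUs, so completion times jitter and pacing suffers, while SFR splits each frame across the GPUs, giving even pacing at the cost of split-and-composite overhead.

```python
# Toy model contrasting AFR (alternate-frame) and SFR (split-frame) rendering.
# The jitter and overhead factors are made-up illustrative assumptions.

def afr_frame_times(base_ms: float, n_gpus: int, n_frames: int):
    """AFR: GPUs take turns on whole frames. Average frame time drops to
    base_ms / n_gpus, but alternating completion causes micro-stutter,
    modelled here as +/-20% swing between consecutive frames."""
    return [base_ms / n_gpus * (1.2 if i % 2 else 0.8) for i in range(n_frames)]

def sfr_frame_time(base_ms: float, n_gpus: int, split_overhead: float = 0.25):
    """SFR: every GPU renders a slice of the same frame, so every frame takes
    the same time (even pacing), but splitting the frame and compositing the
    slices eat into the speedup."""
    return base_ms / n_gpus * (1.0 + split_overhead)

afr = afr_frame_times(16.7, 2, 8)   # uneven: alternates ~6.7 ms and ~10.0 ms
sfr = sfr_frame_time(16.7, 2)       # even: ~10.4 ms every frame
```

In this sketch AFR delivers the better average frame time but with visible pacing swings, while SFR is perfectly paced but slower on average - the same trade-off the post above describes.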
 