
Will you do multi-GPU Today?

Would you Multi-GPU Today?

  • No, even one is hard to get - all out of stock (Votes: 0, 0.0%)

Total voters: 126
Split frame rendering where the first chip renders the left half and the second chip renders the right half is better than the variant where the first renders the top part and the other the bottom part.

Imagine a scene where the top part is the sky: obviously, the load between the two chips is not shared equally.
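Here's a quick toy model of that imbalance (the per-line costs are made-up illustration numbers, not measurements): if the top 40% of the frame is cheap sky, a top/bottom split leaves one chip with most of the work, while a left/right split halves every scanline and stays close to 50/50 no matter what the scene contains.

```cpp
#include <cstdio>

// Toy model: a 1000-line frame where the top 40% is cheap sky and the rest is
// expensive land/sea geometry. Costs are arbitrary illustration values.
int main() {
    const int lines = 1000;
    double cost[1000], total = 0.0;
    for (int y = 0; y < lines; ++y) {
        cost[y] = (y < 400) ? 1.0 : 5.0;          // sky vs. geometry
        total += cost[y];
    }

    double top = 0.0;                             // chip A: lines 0..499
    for (int y = 0; y < lines / 2; ++y) top += cost[y];
    double bottom = total - top;                  // chip B: lines 500..999

    printf("Top/bottom split: %.0f%% vs %.0f%% of the work\n",
           100.0 * top / total, 100.0 * bottom / total);
    // A left/right split cuts every scanline in half, so each chip always
    // carries ~50% regardless of where the sky is.
    printf("Left/right split: 50%% vs 50%%\n");
    return 0;
}
```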

 
Maybe they can put two GPU chiplets right next to each other, touching, with a super-wide Infinity Fabric link between them, and software which sees the two chiplets as a single unit with a common scheduler, memory buses, etc.

It's like the current MCM approach of Navi 31, but developed further, with the GCD cut into two equal slices.


Isn't that what they're doing with their Radeon Instinct accelerator cards these days? The MI250X I'm pretty sure has two dies that the software sees as one and I think they've had some pretty good success with it.

 
Isn't that what they're doing with their Radeon Instinct accelerator cards these days? The MI250X I'm pretty sure has two dies that the software sees as one and I think they've had some pretty good success with it.

It's a very valid strategy for compute, which has a wholly different set of requirements to real-time graphics.

To massively oversimplify things, GPU compute prioritises completing the task as fast as possible (rendering 60 frames of a 3D simulation, for example) with no real regard for when the frames are rendered, or what order they're rendered in. As long as all 60 frames are rendered, the result is good, even if they're out of sequence and the GPU spent most of the time producing no frames at all, and then spat out 60 frames all at once.

Real-time graphics require a GPU to produce those 60 frames in strict sequential order, and to do so as evenly as possible. So while there is a lot of parallelism in the actual calculations per frame, a gaming GPU cannot take as much advantage of parallelism between different frames.

Massively oversimplifying again - if you had a coastal scene with land, sea, and sky - a compute card could process all 60 frames of sky first, batch-processing similar functions for each of the sky elements, maximising cache efficiency and copy-pasting anything that doesn't change between frames. Once it had done all the work it needed to on the sky for 60 frames, it could free up the cache, wipe the slate clean, and do all the same sort of things for the sea in all 60 frames. Rinse and repeat for the land, and after looking like it had hung for 0.3 seconds doing nothing, you'd get all 60 frames at once, during one single monitor refresh. With vsync on, you'd get a single frame and 59 frames wasted. With vsync off, you'd get 59 thin slices of animation, displayed in a completely random order based on which of the final operations on each frame took longest. The net result of 60 frames in 0.3 seconds is an impressive 200 frames per second average, but the gaming experience on that is obviously unplayable.

Real-time graphics might be way less efficient working on the scene one frame at a time, but they aren't concerned with getting all 60 frames rendered as fast as possible. Despite our obsession with high framerates, we're not actually after the lowest possible time interval between each and every frame. We say we want at least 60 fps, but what we really mean is that we never want a single frame to take more than 16.6 ms. If even one of those frames takes an extra ~8 ms to render, it's a jarring stutter which breaks the illusion of fluid motion, and just about everyone can spot the problem.
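To put numbers on that (just echoing the post's own figures: a 60 fps target and one frame taking roughly 8 ms extra), here's a minimal sketch showing how an average frame rate can look perfect while a single slow frame still blows the 16.7 ms budget and reads as a stutter:

```cpp
#include <cstdio>

int main() {
    const double budget_ms = 1000.0 / 60.0;   // 60 fps target -> ~16.67 ms per frame
    double frame_ms[60], total = 0.0;

    for (int i = 0; i < 60; ++i) {
        frame_ms[i] = budget_ms;               // 59 frames land exactly on budget...
        if (i == 30) frame_ms[i] += 8.0;       // ...but one takes ~8 ms longer
        total += frame_ms[i];
    }

    printf("Average: %.1f fps - looks fine on a benchmark chart\n",
           60.0 * 1000.0 / total);
    for (int i = 0; i < 60; ++i)
        if (frame_ms[i] > budget_ms + 0.5)
            printf("Frame %d: %.1f ms - misses the %.1f ms budget, visible stutter\n",
                   i, frame_ms[i], budget_ms);
    return 0;
}
```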
 
I think I bought one of the last SLi boards made lol, at least for AM4 :laugh:

Would be ok for older cards and some benchmarks at least :D
 
I did SLI with GTX 780 Ti cards and while the performance was great, there were often niggles. Dual GPU is just something that never worked quite right and required a lot of maintenance from NVIDIA and game developers, so no wonder NVIDIA eventually gave up on it, despite the increased sales it brought.
 
I would do multi-GPU any day of the week, provided:

1. GPU manufacturers provide support for it
2. Power consumption on GPUs is halved
3. Game developers implement support (for the titles I play, anyway)
4. All forms of latency and any type of hiccup are ironed out, so you get your money's worth in performance uplift.

....all of which was never there to begin with, hence impossible in 2022, and I don't believe that will change anytime soon (sadly, I can't see it happening at all, TBH).
 
Definitely no.

A single GPU is plenty for anything these days. That's why I'm on micro-ATX. I used to have mini-ITX boxes for a while, but they can be a pain to maintain.
 
All the questions in the poll are PC gaming related; this limitation makes me laugh.
People too often forget this is a tech website and not a gaming exclusive one.
 
Very nice, with 2GB VRAM :)
Point is to have fun. 690 is a nifty little thing to play with and BOINC registers both chips.
 
I have been doing research into Crossfire/mGPU/SLI. Here's what I know.

Alternate frame rendering was known from the get-go, 16+ years ago, to have frame pacing/frame timing issues. Reviews only raved about the high frame rates from it.
Reviewers whined about the load on each graphics card not being even with split frame rendering (one card drawing a lot of geometry at 99% load, while the other barely drew any geometry, just sky, at 50% load or much less -_- ). Reviewers complained about tiling having the lowest frame rates and minimal improvement over a single card, yet it had the best smoothness and gameplay. It also had the most even load across the cards, but it wasn't a "consistent load" for reviewers, so they said it was bad.

Here is why it's gone: Nvidia controls the masses and they can do what they want.

If you wanted to even run an mGPU game other than "Ashes of the Singularity",
here is a list of what you need to do.

On Nvidia's side, here is what you need to run two Nvidia cards for mGPU:

1. You need an SLI-certified motherboard that supports at least x8/x8.
2. Both of your cards need to support SLI and have either SLI connectors or NVLink connectors.
3. Next, after the two requirements above are met, you need to know if your game even has mGPU in it.
4. Then, if the game supports mGPU and can actually see the two cards, maybe you get past that part. But Nvidia purposely blocks certain cards from being seen as an mGPU pair. Lower than an RTX 2070 Super, no SLI or mGPU.
5. Hope that Nvidia didn't just block the game from using it like it should be able to. There are quite a few games that work in mGPU even on the current RX 6000 series that literally don't work at all on any SLI system. They are DX12 games too.
6. Enable SLI in the control panel, then check and see if there is an improvement in performance.
7. Next, hope that SLI works; you'll be hunting for which profile and rendering type to use to get it going.
8. If all else fails, mod the drivers with Nvidia Inspector as a last resort.

Nvidia makes it as hard as possible for mGPU to work so you will give up on it and just buy Nvidia's fastest card, or the one just below it.

On AMD's side, it's very easy to use mGPU:

1. You just need two cards from the same series, like two cards from the RX 6000 series - usually the same two cards.
2. Available PCI Express slots - you just need two; I think the minimum is at least x4 for one of them.
3. Enable mGPU in the control panel.
4. Play games and see which work; you might be surprised by which do.

The Vega 56/64 series supports mGPU (sometimes it's CrossFire), but support is fading for both.
The RX 400/500 series supports CrossFire and some mGPU games.
The RX 5000 series supports a lot of the mGPU games out there, but no longer supports CrossFire in this generation.
The entire RX 6000 series supports mGPU, and most of the games out there that have it work with it. You can find reviews with it in use, but only about 3 games per review, and maybe 3 reviews ever did it.

AMD made it easy to try and use.

There is a trick AMD is using: some game engines on DX12 are just ported from DX11 via a compiler or something. They still look for the driver flags saying there are two cards in the system connected together, so support is much better on AMD's side for mGPU usage.

In the end, nobody cares about mGPU. Nobody reviews mGPU anymore, nobody checks frame times and pacing on mGPU either, or even which games still use it.
So no one uses it, even on AMD cards with better support.

Gamers are lazy people: they just want to plug in, click once, and have good gameplay, frame timing, and pacing. They don't want to do a bunch of things to get it running; that time is for playing games.

I think going back to mGPU would be good, and you don't need to find a new case; a lot of cards from the last two generations still fit in most cases.
 
So it's dead and buried now*, but would you go SLI/Crossfire if it came back as an option with good driver and game support, considering current GPUs' size, power consumption and cost?
That goes for all tiers, not just the top ones.
Consider that you could have a shared memory pool for the GPUs (that tech was floating around back in the day, I think, as was mixing different SKUs of GPUs).

You can choose up to 3 boxes, but please don't mix yes and no ;)


*Care to see 7-way 4090 with linear scaling?




Both have tried since 2004 and failed; after 13 years it got abandoned outright and left up to software devs to make it work correctly, so the point is it will never take off. Don't get me wrong, I had intended to buy another R9 290 Vapor-X and run CrossFire, but it died in 2017 along with SLI.
 
Both have tried since 2004 and failed; after 13 years it got abandoned outright and left up to software devs to make it work correctly, so the point is it will never take off. Don't get me wrong, I had intended to buy another R9 290 Vapor-X and run CrossFire, but it died in 2017 along with SLI.
From that point, if NV tried now, they might succeed.
Just look at how hard they managed to push RT onto everyone.
They have enough resources and market grasp to pull it off.
 
The only multi-GPU I had was 4 GPUs at one point (not so long ago) in my PC (not a dedicated rig), for mining, and it was quite a challenge :D

Now with these current GPUs it's hard to even afford two, then there's the issue of fitting them and having enough power cables even if SLI/CF were still alive. Maybe some poorly optimized games would need an extra GPU to help these days :laugh:
 
All the questions in the poll are PC gaming related; this limitation makes me laugh.
What do you mean?
That the poll is only gaming oriented?
It wasn't my intention, although the main loss for multi-GPU is gaming, because we still have multi-GPU for pro applications, as I added in the link (7-way 4090 with linear scaling in performance).
Mining is indeed out of the scope of my intention.
 
What do you mean?
That the poll is only gaming oriented?
It wasn't my intention, although the main loss for multi-GPU is gaming, because we still have multi-GPU for pro applications, as I added in the link (7-way 4090 with linear scaling in performance).
Mining is indeed out of the scope of my intention.
Yeah, I guess I was rather misled into making that comment by the ones before it, which mainly deal with the applicability and support of more than one GPU in PC games. From what point of view someone perceives the questions in the poll really depends on the reader's priorities. I apologize if I introduced any doubt with my previous comment.
 
The best iteration of Multi GPU was Polaris. As much as some people will love to hate on Multi GPU I will tell you that the only reason Nvidia abandoned Multi GPU was to get people to buy new cards. I know for a fact that Nvidia likes to break support for budget cards especially in this feature. The current cards from AMD could easily support Multi GPU as all it would take is a driver update but why would AMD want to do that in a world where we have DLSS and FSR to do the same thing? In my opinion it was one of the things that hurt AMD as the 7950 once crossfired was insanely good but as I said everything about Multi GPU was sweet on RX 570/580 cards.
 
I've used SLI since the 7600 GT cards; here are the GPUs I ran in SLI over the years.

7600GT
8800GTS 640MB
8800GTS 512MB (used the EVGA Step-Up from the 640MB cards)
GTX 280
GTX 570
GTX 980Ti (only used SLI for maybe 2 months and then went to one card since it was overkill at the time)

Lots of people complain about SLI, but I enjoyed using it. Especially when you had those mid-high tier cards that would give you performance of a top end card and then some.

If SLI was still in use, I might make use of it.



So, instead of having two GPUs that add latency, Nvidia has removed the need by creating DLSS 3.0 (which has its own problems) and then Nvidia has also created Nvidia Reflex to reduce the latency that they added? It kind of feels like a side step. They now have one card able to re-create SLI type issues instead of letting people use SLI. I guess on the plus side of things, they don't have to create drivers for SLI, so they have that going for them.



I know it was a limitation of SLI/Crossfire, but mGPU on DX12 was supposed to be a better avenue for developers to support multiple GPUs, and it would allow games to make use of the VRAM on each GPU separately. mGPU in DX12 was even able to let different GPUs work together (I think Ashes of the Singularity was built to showcase this), so you could run a game with an AMD and an Nvidia card in your computer.....but I could be remembering that last part about mixing GPUs wrong.
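For what it's worth, here's a minimal sketch (my own illustration, not something from this thread) of the first step of DX12 explicit multi-adapter: the application enumerates the adapters itself and creates an independent D3D12 device per GPU, which is why mixed vendors can sit side by side and why each card's VRAM has to be managed separately by the engine.

```cpp
// Windows-only sketch: enumerate GPUs and create one D3D12 device per adapter.
#include <windows.h>
#include <d3d12.h>
#include <dxgi.h>
#include <wrl/client.h>
#include <cstdio>

#pragma comment(lib, "d3d12.lib")
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc{};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) continue;   // skip the WARP rasterizer

        // One independent device per physical GPU: the engine, not the driver,
        // decides how work and VRAM are split between them.
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device)))) {
            wprintf(L"GPU %u: %s, %zu MB dedicated VRAM\n", i, desc.Description,
                    (size_t)(desc.DedicatedVideoMemory >> 20));
        }
    }
    return 0;
}
```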

I wonder if it's a difficulty thing for developers to add mGPU to their games or if it's a cost/time thing?
Supposed to... it does enable mGPU.
But what is the incentive here for developers?

The best iteration of Multi GPU was Polaris. As much as some people will love to hate on Multi GPU I will tell you that the only reason Nvidia abandoned Multi GPU was to get people to buy new cards. I know for a fact that Nvidia likes to break support for budget cards especially in this feature. The current cards from AMD could easily support Multi GPU as all it would take is a driver update but why would AMD want to do that in a world where we have DLSS and FSR to do the same thing? In my opinion it was one of the things that hurt AMD as the 7950 once crossfired was insanely good but as I said everything about Multi GPU was sweet on RX 570/580 cards.
I ran SLI with 2x 660 which was the cheaper route to high end performance at the time. It was completely horrible. Lots of noise, higher temps on top card, limiting OC to near stock. Stutter in almost every game, latency was clearly up compared to single card and scaling was hit/miss. More often than not, the overall experience was worse rather than better.

AMD had similar issues. Crossfire scaling was at some point better than Nvidia and shortly after, they abandoned it just the same. Mgpu proof of concept in Ashes on dx12, and not a single dev followed up on it.

The move to single GPU only was pure logic, not some Nvidia push. It's just commercially, technically, and practically a complete clusterfuck of inefficiency.
 
People too often forget this is a tech website and not a gaming exclusive one.
Games are boring after a while, tech is not :)
 
From that point, if NV tried now, they might succeed.
Just look at how hard they managed to push RT onto everyone.
They have enough resources and market grasp to pull it off.
RT has hardly taken off, just like HairWorks, PhysX, SLI...
 
The best iteration of Multi GPU was Polaris. As much as some people will love to hate on Multi GPU I will tell you that the only reason Nvidia abandoned Multi GPU was to get people to buy new cards. I know for a fact that Nvidia likes to break support for budget cards especially in this feature. The current cards from AMD could easily support Multi GPU as all it would take is a driver update but why would AMD want to do that in a world where we have DLSS and FSR to do the same thing? In my opinion it was one of the things that hurt AMD as the 7950 once crossfired was insanely good but as I said everything about Multi GPU was sweet on RX 570/580 cards.
What do you mean, "could"?
The RX 6000 series already supports mGPU.
Supposed to... it does enable mGPU.
But what is the incentive here for developers?


I ran SLI with 2x 660 which was the cheaper route to high end performance at the time. It was completely horrible. Lots of noise, higher temps on top card, limiting OC to near stock. Stutter in almost every game, latency was clearly up compared to single card and scaling was hit/miss. More often than not, the overall experience was worse rather than better.

AMD had similar issues. Crossfire scaling was at some point better than Nvidia and shortly after, they abandoned it just the same. Mgpu proof of concept in Ashes on dx12, and not a single dev followed up on it.

The move to single GPU only was pure logic, not some Nvidia push. It's just commercially, technically, and practically a complete clusterfuck of inefficiency.
Nvidia lost money when people just bought used cards for SLI instead of buying the newest generation. That's why they keep coming out with some software gimmick (DLSS and ray tracing for Turing, DLSS 2 and Nvidia Voice for Ampere, DLSS 3 frame generation for Ada Lovelace) that the older generations don't support. They know that if older cards did support it, people would just buy a used card.

RT has hardly taken off, just like HairWorks, PhysX, SLI...

On DX12, close to 60% (149 of the 269 games) have some sort of ray tracing support, even if it's just shadows.

Vulkan is actually the better API, yet only maybe 2-3 of the roughly 185 games with access to it support some sort of ray tracing - around 1%.

Vulkan is better in multi-card setups compared to DX12.
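On that note, a small sketch (my own example, assuming a Vulkan 1.1 loader and driver are present) of how Vulkan exposes linked multi-GPU setups directly: the application enumerates device groups, each of which the driver can drive as one logical device.

```cpp
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    // Device groups are a core Vulkan 1.1 feature, so ask for 1.1.
    VkApplicationInfo app{};
    app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.apiVersion = VK_API_VERSION_1_1;

    VkInstanceCreateInfo ci{};
    ci.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    ci.pApplicationInfo = &app;

    VkInstance instance;
    if (vkCreateInstance(&ci, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t count = 0;
    vkEnumeratePhysicalDeviceGroups(instance, &count, nullptr);
    std::vector<VkPhysicalDeviceGroupProperties> groups(count);
    for (auto& g : groups) g.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES;
    vkEnumeratePhysicalDeviceGroups(instance, &count, groups.data());

    // Each group lists GPUs the driver can expose as a single logical device
    // (e.g. two linked cards); a later vkCreateDevice can span the whole group.
    for (uint32_t i = 0; i < count; ++i)
        printf("Device group %u: %u physical GPU(s)\n",
               i, groups[i].physicalDeviceCount);

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```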

The notion that people want smooth gameplay but are willing to take frame generation over multi-card setups, when both introduce latency, is ludicrous.
 
What do you mean, "could"?
The RX 6000 series already supports mGPU.

Nvidia lost money when people just bought used cards for SLI instead of buying the newest generation. That's why they keep coming out with some software gimmick (DLSS and ray tracing for Turing, DLSS 2 and Nvidia Voice for Ampere, DLSS 3 frame generation for Ada Lovelace) that the older generations don't support. They know that if older cards did support it, people would just buy a used card.



On DX12, close to 60% (149 of the 269 games) have some sort of ray tracing support, even if it's just shadows.

Vulkan is actually the better API, yet only maybe 2-3 of the roughly 185 games with access to it support some sort of ray tracing - around 1%.

Vulkan is better in multi-card setups compared to DX12.

The notion that people want smooth gameplay but are willing to take frame generation over multi-card setups, when both introduce latency, is ludicrous.
No, you are just pushing your narrative that Nvidia is the evil architect of the loss of Crossfire while AMD did the exact same thing with just about the same timing.

Here is some reflection for you; AMD had every opportunity to snatch massive market share here through Crossfire IF they had kept it going, if what you say is true. They had it running without a bridge even. Your whole argument is flawed.

The reality is, neither Crossfire nor SLI ever scaled as well as they should have; they increased latency, cost developers money they should have been spending on game content, and were high maintenance while ever more games got released - that's a losing battle. Without game support the purpose only becomes more marginal. This is the actual practice we saw, too, in the years preceding their removal from GPUs.
 
No, you are just pushing your narrative that Nvidia is the evil architect of the loss of Crossfire while AMD did the exact same thing with just about the same timing.

Here is some reflection for you; AMD had every opportunity to snatch massive market share here through Crossfire IF they had kept it going, if what you say is true. They had it running without a bridge even. Your whole argument is flawed.
If AMD would just advertise on TV and in tech articles/mags, and sponsor game events/tourneys, they might actually finally be known. Too many people think there is only Nvidia. But it makes me think there is some form of censorship (cough Dell/Intel) - bribing a company to only sell said product and not the competition's.
 
If AMD would just advertise on TV and in tech articles/mags, and sponsor game events/tourneys, they might actually finally be known. Too many people think there is only Nvidia. But it makes me think there is some form of censorship (cough Dell/Intel) - bribing a company to only sell said product and not the competition's.
Nah, AMD continues to fail in marketing because they know they have never had a solid enough product to truly combat the competition. They never saturated the market with RDNA, for example, despite it being a strong set of cards. They still aren't pushing hard on GPU market share, despite success on CPU and pretty close parity in performance and features.

And now we have the utterly non-disruptive RDNA3. Market that... good luck! The main talking point is its price...
 
Funny how those proprietary technologies don't take off. Maybe making developers do double the work for half the audience is a losing proposition?
It's why they become abandonware/shovelware so quickly.

I wonder how many RT games are on consoles (the real money makers).
 