
Do you plan on using DirectX 12 mixed GPU in the future?

  • Yes, using a single GPU vendor: 1,566 votes (16.3%)
  • Yes, mixing GPU vendors: 1,327 votes (13.8%)
  • I have no hope for this feature: 1,518 votes (15.8%)
  • I prefer to just upgrade to one faster card: 5,212 votes (54.2%)

  Total voters: 9,623 (poll closed)
I have always gone the multi-GPU way since 2009, but only with Nvidia SLI: first 3-way SLI GTX 285s, and after that only 2-way with a GTX 570, a GTX 660 Ti and now a GTX 970.
Whether I use SLI or DX12 multi-GPU depends on how games handle it and how many games ship with the feature, since I assume only DX12 games can run multi-GPU without SLI/CrossFire.
And I will never go to AMD. For me AMD stands for:


AMD = ANYTHING MORE DISGUSTING or AMD = Another Misguided Discussion.
 
Keep the fanboy trolling out of this.
 
And I will never go to AMD.

Then you're a moron and don't deserve to use a tech site. Techies go where the hardware says, and right now AMD's architecture is looking very solid. The next die shrink should prove very fruitful for AMD. I might jump back to them if their Polaris offerings are equal to Pascal. Obviously, if Polaris is better, then I will go to AMD.
 
And fanboys like you who call someone else a moron because they won't use "your favorite brand" of product shouldn't even be allowed to use the internet. He has the right to use whichever side he wants. Truthfully, even with what you claim is a "solid"-looking AMD arch, AMD has made a lot of bad choices over the last few years. They have made a lot of questionable claims, and rebadged 2-3+ year old GPUs and claimed they were "new chips". AMD has to prove themselves: stop making questionable claims about how good their cards are and actually show they are good, not say so and then turn out to be far from it, aka the Fury X vs 980 Ti claims of Fury being 30% faster when it was merely even in most cases at best.
 

@arbiter - you really need to look before you jump. My favourite brand is Nvidia. My current card is a Kingpin 980 Ti with a rather lovely modified Bitspower 980 Kingpin block. Before that I had 2x Classified 780 Tis and a Titan. Before that it was 2x AMD 7970s, and before that a GTX 580. I tend to buy the fastest GPU for my resolution (1440p). It is idiotic for people to post such nonsense that they will never go with 'x' brand, unless it is for ethical reasons, and in big business ethics takes a back seat. Your pro-Nvidia stance is always apparent, as is shown by your shortsighted 'fanboy' call on me. Quite amusing really.
 


@the54thvoid I will back you up, mate. I remember when you got your Titan: you got it purely because, at the time, it was the best card $$ could buy.

I myself like AMD as a brand and never had issues with them: I've had 2x 5850s, 2x 6970s and now a 290X. Before those GPUs I had SLI'd 8800 GTS 512s from Nvidia, a 6800 GT and an FX 5800 Ultra, going all the way back to an MX 400. Hell, I sold their GPUs in my computer shop for many years and never had people complaining about drivers or bad performance; same for Nvidia.

@arbiter go and read some of void's posts; he usually bashes fanboys for talking crap.
 
I'm not exactly loving Nvidia's support for Fermi cards (my brother uses a GT 520 for programming). They promised DX12 at the beginning of the year, and it's still missing. They also promised Vulkan support, and now they say they won't bring it because "only" 10% of all Steam users (and that's including Intel and AMD users) have Fermi cards.
At the very least AMD said GCN only and delivered.
 
Fermi is almost 2 years older than the first GCN card, so if there turned out to be a tech issue and they can't support it, so be it. Fermi is pretty old, and even the top 480/580 can probably barely play most current games respectably now.
 
Vulkan support is out with the latest drivers... ;)

Not sure if it is for Fermi, but... it's out now. ;)
 
That's not the problem: Fermi support for Vulkan was almost done, and they simply pulled the plug while at the same time giving low priority to DX12.
It's one thing to say "we only support GCN cards, to heck with the older ones", and a completely different thing to say for a year that you support Fermi hardware and then change course at the last second because "they have fewer users than Kepler/Maxwell" when the work was almost done.

I think I can bypass it: the GT 520 is also the GT 705, a Vulkan-compatible card. Time to use NiBiTor.
 
Well, in fairness, neither you nor anyone else has any idea how close to done it was. Seeing that they did pull the plug, I am more than certain the company looked at the numbers and said it's not worth it. Sucks, but it is what it is. I have to admit, though, you don't even own a card that can game very well in an uber-budget GT 520... why would it matter to you personally?
 
To code in that API!
It's almost done: the 520 OEM is also the 440, 630, 720 and 730, and both the 630 and 730 support Vulkan, so it's a lie that Fermi is not supported; they just don't want to do it on the 400- and 500-named ones while the rebrands are supported.
I have 2 options: either the flash works and I turn a 48-shader 520 into a 96-shader 730, or I have to flash it back using the iGP.
 
Between the 400/500 cards and the 600/700 cards that used a Fermi chip, we don't know what changes could have been made, hardware-wise or firmware-wise, to add support for newer tech that the older ones just can't take. AMD couldn't add FreeSync to all their cards when they launched it, because the hardware on some older cards just couldn't support it right. This is just a guess, but since no one really knows what the issue is or why, you shouldn't get pissy and bash them just because.
 
Good point; I'm angry because of a broken promise.
Nvidia can do it. Even the TeraScale (HD 5000-6000) AMD cards should support DX12 and Vulkan; as long as the card can compute, it can run the APIs. Just look at Intel: every DX11 iGPU except Ivy supports some form of DX12, and Vulkan is in beta even on Ivy.
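As an aside, the "does the driver expose it" question is easy to test yourself. Here's a minimal sketch using only public DXGI/D3D12 calls (link d3d12.lib and dxgi.lib); passing a null device pointer to D3D12CreateDevice asks the runtime whether a device could be created, without creating one:

```cpp
// Minimal sketch: probe every adapter for D3D12 support.
#include <cstdio>
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);

        // D3D12 requires feature level 11_0 at minimum; Fermi is an FL 11_0
        // part, so whether the driver exposes it is exactly what this answers.
        HRESULT hr = D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                       __uuidof(ID3D12Device), nullptr);
        wprintf(L"%s: D3D12 %s\n", desc.Description,
                SUCCEEDED(hr) ? L"supported" : L"not supported");
    }
    return 0;
}
```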
 
If it was me, I'd copy the API and translate it to the various forms of Linux.
Linux: the new gaming OS :D
One can dream, right?
It will be soon, if MS keeps on its path. It sure isn't going to be Apple! HAHA
 
No, I don't see it catching on, for several reasons, the most important being the economic incentive. There is none!

Consider this:
- What does a developer gain from building and coding for totally random, asymmetrical GPU loads?
- How will an engine divide the workload given the virtually infinite number of combinations possible?
- How on earth will you avoid either huge input lag issues or massive frame time differences?

All of the above could be fixed if there were some sort of commercial drive behind all this... but there is none. GPU builders still want to sell the biggest GPUs. VR needs the lowest possible latency (which you're not getting with asymmetrical GPU loads, that's for sure), and gamers themselves want simplicity above everything else: plug and play, it needs to work without hiccups. We've already seen that it is NOT cost effective to mix and match totally different performance levels of GPU; the inefficiency is massive.

We've also seen that getting SLI/Crossfire working in all games has never happened at any point in time; there is always lacking support for at least one or two big titles. So now with DX12 we are adding a few more layers to support: not only do we need SLI profiles, we need linked GPU support, and we need engines to work well with it too. We have had GPU linking for a few years now, and it's not catching on. Not with AMD's solution, not with Virtu MVP, and it won't with DX12.

And then, bottom line, there is still DX11 support to be given, because it will be at least 2020 before that is completely phased out for any reason.
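A rough, illustrative example of that frame-time point (my numbers, assuming naive AFR with no frame pacing): pair a GPU that finishes its frames in 10 ms with one that needs 25 ms and alternate frames between them. The counter averages out to a healthy FPS number, but the gaps between consecutive frames keep swinging between short and long, so the readout looks fine while your eyes see stutter. The more mismatched the cards, the bigger the swing.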
 
There is another way to implement it: instead of equally distributing the draw load, you can use one GPU for compute, another for tessellation, another for texture decoding, etc.
DX12 and Vulkan don't add universal SLI/CrossFire; they add the option to distribute all the work across different devices, not just equal rendering. Ashes of the Singularity only goes for the simpler-to-code option.

I can imagine AutoCAD benefiting a lot from this.
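For the curious, here's roughly what that "different device for different jobs" setup looks like at the API level. This is the standard DXGI/D3D12 enumeration pattern, a sketch rather than code from any shipping engine:

```cpp
// Sketch: create an independent D3D12 device on every adapter, so each
// GPU can be handed a different kind of work (compute on one, rendering
// on another, and so on).
#include <vector>
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

std::vector<ComPtr<ID3D12Device>> CreateDeviceOnEveryAdapter()
{
    std::vector<ComPtr<ID3D12Device>> devices;

    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return devices;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device); // each gets its own queues/lists/fences
    }
    return devices;
}
```

The part the API deliberately leaves to the developer is moving results between those devices (cross-adapter resources and shared fences) and keeping them all busy, which is where the real work hides.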
 
I had used SLI for 3 years, first with 2 GTX 660 Tis and then with 2 GTX 960s, until I got tired. The first experience with a multi-GPU setup was very good most of the time, but with some recent AAA games getting poor SLI support out of the box (Batman: Arkham City, for example, not getting multi-GPU at all) I was disappointed and switched to a single GPU.

I know I'm talking about a proprietary multi-GPU solution versus the DirectX 12 approach, but let's face it: the DirectX 12 multi-adapter option still puts the burden on developers' shoulders, who must explicitly provide support in games, and that's the problem. Devs are not going to bother, firstly because of the nature of consoles and secondly because the PC user base with multiple GPUs is limited. In the end, DirectX 12 also offers devs an easy exit: implicit multi-adapter support, which works almost exactly like current AFR technologies, meaning Nvidia and AMD have to incorporate the feature into drivers, and the same story goes on.
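For completeness, an app can tell which of those two worlds it is in with one call; a short illustrative snippet (standard D3D12, nothing engine-specific):

```cpp
// Checks whether the driver has linked the GPUs into one DX12 device
// ("linked node" mode, the SLI/CrossFire-style implicit path) rather
// than exposing them as separate adapters.
#include <cstdio>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void ReportLinkedNodes()
{
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, // default adapter
                                 IID_PPV_ARGS(&device))))
        return;

    // > 1: driver-linked GPUs; queues and resources then take a node mask.
    // == 1: any extra GPUs show up as their own adapters instead.
    printf("GPU nodes on the default adapter: %u\n", device->GetNodeCount());
}
```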
 
Why do games even have to support this? It should all be done via DX12. Why leave it up to game devs, who we all know are the laziest programmers in the world? This is how it should work:

The game engine says to DX12, "I need this, this, this and that." DX12 says, "Yup, I can do all that with this, this and that hardware," splits the workload across each piece of hardware according to its capability, and then paints the finished frame to the screen.
 
Hopefully a popular game engine uses a method where the devs can flag things into 'groups'.

Things such as: player character, OSD, player weapon, AA, water effects, shadows, etc. The engine could then automatically divide those up between all present, enabled GPUs on the fly, much like how CPUs handle multithreading (rough sketch below).
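Something along these lines; to be clear, this is purely hypothetical, and RenderGroup/GpuContext are made-up names, not any real engine's API:

```cpp
// Hypothetical sketch of the "flag things into groups" idea.
#include <cstddef>
#include <string>
#include <vector>

struct RenderGroup { std::string name; /* passes, draw calls, ... */ };
struct GpuContext  { int adapterIndex = 0; std::vector<RenderGroup> work; };

void DistributeGroups(const std::vector<RenderGroup>& groups,
                      std::vector<GpuContext>& gpus)
{
    // Naive round-robin. A real engine would have to weight this by each
    // GPU's speed and each group's cost, or the fast card just sits waiting.
    for (std::size_t i = 0; i < groups.size(); ++i)
        gpus[i % gpus.size()].work.push_back(groups[i]);
}
```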
 
For me it's a nope; SLI all the way and forever...
This "BS" happened before [around 2012] with Lucid Logix Virtu MVP.
 

Lucid was a flop (my mobo supports it); this, in theory, has none of its flaws.
 
I wonder if it would be possible to mix an Intel iGPU and a low-end Nvidia card?

Currently rocking an HD 520 + 930M combo, so every bit of extra performance counts :laugh::laugh::laugh:
 
If both are DX12, yes. That was the first example used by DX12 devs: they gave a small amount of work to an Intel iGP for roughly a 10% performance boost.
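For reference, finding the iGPU among the adapters is the easy part; here's a minimal sketch keyed on PCI vendor IDs (well-known constants, not anything DX12-specific):

```cpp
// Sketch: pick the Intel iGPU out of the adapter list by PCI vendor ID
// (0x8086 = Intel; 0x10DE = NVIDIA, 0x1002 = AMD), so a small slice of
// work can be sent its way.
#include <dxgi1_4.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

ComPtr<IDXGIAdapter1> FindIntelAdapter()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return nullptr;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.VendorId == 0x8086) // found the integrated Intel GPU
            return adapter;
    }
    return nullptr; // no Intel adapter present
}
```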
 
DX12 says, "Yup, I can do all that with this, this and that hardware," splits the workload across each piece of hardware according to its capability, and then paints the finished frame to the screen.

It's a lot more complicated than that. This is akin to thinking the phrase "from each according to his ability, to each according to his need" (AKA communism) will just work (actually, you almost typed exactly that in GPU terms, lol).

Real-world reality: it's harder than it sounds, on both counts.
 