Wednesday, July 11th 2018
An Anthem for SLI: Bioware's New Universe in 60 FPS 4K Run on Two NVIDIA GeForce GTX 1080Ti GPUs
Lo and behold: SLI working properly. That was my first reaction while reading up on this potential news piece (which, somewhat breaking the fourth wall, actually did end up as one). I'm likely not alone in that thought; it's been a while since we last heard of any relevant dual graphics card configuration delivering a real performance improvement, as developers seem to have thrown dreams of "Explicit Multi-GPU" tech out the window. That slight deviation from the news story aside: Anthem needed two of the world's fastest GPUs running in tandem to deliver a 4K, 60 FPS experience.
Naturally, this doesn't mean that much right now: performance will improve, optimizations will happen, and perhaps the graphics will even be watered down (to be fair, we have seen that before, so the precedent is there). We know that. Still, it speaks volumes that that kind of graphics power was needed. Then again, SLI'd GTX 1080 Ti graphics cards for 4K and 60 FPS really isn't that extravagant: remember that the Cyberpunk 2077 demo at E3 ran at 1080p on a single such graphics card. Anthem used double the graphics power to push through a fourfold resolution increase - not too shabby. Anthem is just seven months away from release (February 22nd), though, while all bets are still off on a date for Cyberpunk 2077. Both games look glorious, and Bioware's Anthem really does showcase the Frostbite engine as never seen before. Digital Foundry even seems to think the showcased demo wasn't running with the full effects galore they observed during their playthrough at E3 - screen-space reflections were absent, for one. It seems the PC version of the game could look even better than it does right now. Here's to that.
Sources:
TechRadar, Digital Foundry
81 Comments on An Anthem for SLI: Bioware's New Universe in 60 FPS 4K Run on Two NVIDIA GeForce GTX 1080Ti GPUs
If I'm not mistaken, weren't AMD's CrossFire setups back in the day better than Nvidia's counterparts?
After Arkham Knight's disaster, I stopped buying graphically demanding games if they don't support SLI. Gotta vote with your wallet. :)
Thanks for the article.
Seems to be a lot of inexperienced, word-of-mouth SLI haters here, but I'm an idiot for wanting to try it. Mid-range SLI was eating into Nvidia's high-end sales, so they stopped that. It really is that simple.
babeltechreviews.com/gtx-1080-ti-sli-performance-25-games/3/
51 FPS average, 60 FPS max, V-Sync enabled.
GTX 1080 Ti SLI is useless until 4K 144 Hz displays drop - and even then it won't drive them, though.
I have G-Sync and I still absolutely hate going below 50 FPS. There's no tearing, but it's nowhere near as smooth as 80+ FPS.
- Start the program and grab a list of all available physical devices (IGP, discrete GPU etc.)
- Query the physical devices to find one that supports the features you need, then create a logical device out of it.
- Create a command pool (the thing you've heard of in articles that makes DX12 multithreaded) on the logical device.
- Record work into command buffers allocated from that pool, then submit them to the device's command queue to get the physical device to do work. Rinse and repeat.
The way multi-GPU works is that you just create more than one logical device, each with its own command pool, then spread the work between the different command pools. The reason devs don't really use it is that this is all new and they're still trying to get to grips with it.
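For anyone curious what those steps look like in practice, here's a minimal Vulkan sketch (not from any game or engine) that enumerates the physical devices and gives each one its own logical device and command pool. Queue-family selection and error handling are simplified; assuming family 0 supports graphics is purely for brevity.

```cpp
// Minimal sketch: one logical device + one command pool per physical device.
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    // 1. Create an instance so we can talk to the loader/driver.
    VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
    app.apiVersion = VK_API_VERSION_1_0;
    VkInstanceCreateInfo ici{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    ici.pApplicationInfo = &app;
    VkInstance instance = VK_NULL_HANDLE;
    vkCreateInstance(&ici, nullptr, &instance);

    // 2. Grab every physical device (IGP, discrete GPUs, etc.).
    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> gpus(count);
    vkEnumeratePhysicalDevices(instance, &count, gpus.data());

    // 3. One logical device and command pool per physical device - the
    //    building block of the "explicit multi-GPU" idea described above.
    for (VkPhysicalDevice gpu : gpus) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(gpu, &props);
        std::printf("Found GPU: %s\n", props.deviceName);

        float priority = 1.0f;
        VkDeviceQueueCreateInfo qci{VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO};
        qci.queueFamilyIndex = 0;       // assumption: family 0 supports graphics
        qci.queueCount = 1;
        qci.pQueuePriorities = &priority;

        VkDeviceCreateInfo dci{VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO};
        dci.queueCreateInfoCount = 1;
        dci.pQueueCreateInfos = &qci;

        VkDevice device = VK_NULL_HANDLE;
        if (vkCreateDevice(gpu, &dci, nullptr, &device) != VK_SUCCESS)
            continue;

        VkCommandPoolCreateInfo pci{VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO};
        pci.queueFamilyIndex = 0;
        VkCommandPool pool = VK_NULL_HANDLE;
        vkCreateCommandPool(device, &pci, nullptr, &pool);

        // Work would now be recorded into command buffers allocated from `pool`
        // and submitted to this device's queue (vkGetDeviceQueue + vkQueueSubmit).

        vkDestroyCommandPool(device, pool, nullptr);
        vkDestroyDevice(device, nullptr);
    }

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```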
Different GPUs have different features available and perform differently, so spreading work evenly between them usually ends up slower. You have to load-balance the work you send to each GPU, which takes a bunch of testing to get just right.
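To make that concrete, here's a toy illustration (not from any engine) of weighting a split-frame workload by measured throughput instead of splitting 50/50; the weight is an assumed profiling result.

```cpp
// Toy example: split a 4K frame's scanlines between two GPUs according to a
// measured throughput weight rather than an even 50/50 split.
#include <cstdio>

int main() {
    const int frameHeight = 2160;    // 2160 rows in a 4K frame
    const float gpu0Weight = 0.55f;  // assumption: GPU 0 profiled ~10% faster than GPU 1

    const int split = static_cast<int>(frameHeight * gpu0Weight);
    std::printf("GPU 0 renders rows 0..%d, GPU 1 renders rows %d..%d\n",
                split - 1, split, frameHeight - 1);
    return 0;
}
```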
Then there's the problem of cross-GPU communication. Sometimes work depends on the results of previous work (e.g. post-processing), which could be on a different GPU, requiring communication between GPUs that adds latency. That latency could end up making things slower than doing everything on a single GPU, because GPUs are extremely fast and any time spent outside them (e.g. copying 3D model data from RAM to VRAM) is wasted time. Ideally the program would require zero cross-GPU communication, but that would take dev resources to redesign how they go about their renderer code.
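As a back-of-the-envelope sketch of that trade-off (the numbers below are invented purely for illustration): the second GPU only helps if the cross-GPU copy costs less than the render time it saves.

```cpp
// Invented numbers: does a 50/50 split win once the cross-GPU copy is accounted for?
#include <cstdio>

int main() {
    const double singleGpuMs = 10.0;  // assumed frame time on one GPU
    const double transferMs  = 3.0;   // assumed cost of copying the shared result between GPUs
    const double dualGpuMs   = singleGpuMs / 2.0 + transferMs;  // dependent pass waits on the copy

    std::printf("1 GPU: %.1f ms, 2 GPUs: %.1f ms -> %s\n",
                singleGpuMs, dualGpuMs,
                dualGpuMs < singleGpuMs ? "worth it" : "not worth it");
    return 0;
}
```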
Over time, as devs learn more and tools get better, DX12/Vulkan (which are very similar) will end up replacing DX11/OpenGL* and we'll live in a magical world of cross-vendor multi-GPU.
Of course the closet Mormons say that will never happen, but they said the same about DX9, DX10, DX11, dual cores, quad cores and now 6+ cores. So they can safely be ignored.
*Khronos said Vulkan isn't supposed to replace OpenGL, but that's what will happen.
Graphics programming is commonly referred to as black magic or a dark art by other programmers because, while it may be easy to do a basic "hello world", it's extremely time-consuming getting good at it. The reason is that pre-DX12, the API was a black box (a bloated finite state machine that does mysterious things) which was interpreted by the driver, another black box (also bloated, because it's interpreting a bloated thing), which in turn relays commands to the GPU, another (you guessed it) black box.
The way you learn a black box is by throwing it a bunch of inputs and seeing what it outputs. This is time-consuming enough for a single black box, but when you have three layers of black boxes, each interchangeable, you can imagine how this becomes a dark art.
With DX12/Vulkan, instead of a bloated FSM they give you a model of a GPU (which accurately maps to how modern GPUs work) and functions to directly interact with different parts of that model, removing its black-box nature.
Since the model maps almost one-to-one to the physical GPU, drivers can be much leaner (which is why AMD is so good now: they aren't hampered by their bad drivers anymore). The driver ends up being just a glorified conversion layer which you can ignore because it's not doing anything special, removing its black-box nature too.
And finally, since the model maps so closely to the physical GPU and the driver is ignorable, the GPU also stops being a black box.
Writing a basic "hello world" is harder in DX12/Vulkan because you can essentially interact with any part of the GPU in any order. However, once you've written that "hello world" program you will have learnt most of the API. I've messed around with Vulkan and I'd compare it to C: it doesn't hold your hand, but it does make everything very transparent, so you can easily tell what your code is doing at the hardware level. It even feels the same with all the memory management.
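To give a feel for that "C-like" memory management, here's a small sketch (assuming a VkDevice and VkPhysicalDevice already exist, as in the earlier snippet; the function name and host-visible memory choice are just for illustration): you create the buffer, ask what memory it requires, allocate that memory yourself, and bind the two together, with nothing hidden behind the driver.

```cpp
// Sketch of explicit Vulkan memory management for a vertex buffer.
#include <vulkan/vulkan.h>

VkBuffer createVertexBuffer(VkPhysicalDevice gpu, VkDevice device, VkDeviceSize size,
                            VkDeviceMemory* outMemory) {
    // 1. Describe and create the buffer object (no storage attached yet).
    VkBufferCreateInfo bci{VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO};
    bci.size = size;
    bci.usage = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT;
    bci.sharingMode = VK_SHARING_MODE_EXCLUSIVE;
    VkBuffer buffer = VK_NULL_HANDLE;
    vkCreateBuffer(device, &bci, nullptr, &buffer);

    // 2. Ask the implementation what kind of memory this buffer requires.
    VkMemoryRequirements reqs;
    vkGetBufferMemoryRequirements(device, buffer, &reqs);

    // 3. Find a host-visible memory type that satisfies those requirements.
    VkPhysicalDeviceMemoryProperties props;
    vkGetPhysicalDeviceMemoryProperties(gpu, &props);
    uint32_t typeIndex = 0;
    for (uint32_t i = 0; i < props.memoryTypeCount; ++i) {
        if ((reqs.memoryTypeBits & (1u << i)) &&
            (props.memoryTypes[i].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT)) {
            typeIndex = i;
            break;
        }
    }

    // 4. Allocate the memory yourself and bind it to the buffer (the malloc/free feel).
    VkMemoryAllocateInfo mai{VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO};
    mai.allocationSize = reqs.size;
    mai.memoryTypeIndex = typeIndex;
    vkAllocateMemory(device, &mai, nullptr, outMemory);
    vkBindBufferMemory(device, buffer, *outMemory, 0);
    return buffer;
}
```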
SLI is a far cry from the golden days of 2010, when every game was at least somewhat supported, and proper SLI support was the norm by the 3 month mark.
70% when only normal-scaling games are included. This means you lose 30% of the money you invested in a second card. And if you take all games into consideration (it's not certain that your beloved upcoming game will get "better" two-card support), it drops to nearly 50%, meaning HALF of your money is thrown into the dustbin when buying a second card.
www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1080_SLI/20.html
So to the one who says it's dead: it's really dead. Awful scaling, more heat, more noise. The only acceptable scaling would be 90% for scaling games, and 70-75% including the non- (or less-) scaling games. Until then, wait for a better GPU. SLI and CF are both awful.
As for it not being cost-effective... since when has top-end performance ever been cost-effective? If you want the best performance available, it costs.
trog
This is the first generation where I have not run an SLI setup since I first started with SLI on 7800 GTs over a decade ago, and I've run SLI with every GPU generation since then. But the support for it in games (and a large part of this is due to Nvidia dropping SLI support on lower-end cards) just isn't there anymore. So I opted to wait until I could afford the one big GPU and buy it instead of buying a lower-tier GPU and then buying a second a few months later.
SLI and Crossfire user here with triple 1440p @ 144 Hz (7680x1440 @ 144 Hz) monitors and 4K @ 60 Hz monitors. Although SLI and Crossfire were more prevalent generations ago, especially with low- and mid-tier GPUs, a setup like this is really only recommended today for users at the bleeding edge pushing hardware and graphics to the limit. SLI, Crossfire and multi-GPU aren't for everyone. If anything, there are major diminishing returns for users playing below 1440p @ 144 Hz with current titles; you just won't load the GPUs enough to warrant that second video card. Hence the complaints from people with negative multi-GPU experiences. (Game selection is another factor.)
I personally recommend going the SLI, Crossfire, multi-GPU route only if you're playing at 4K+ resolutions, high refresh rates, or multi-monitor setups with two of the highest-tier GPUs. That gets you the best performance across the games on the market regardless of support. I would never recommend users go with mid- or low-tier SLI, Crossfire or multi-GPU setups, since you should be purchasing the single best GPU you can afford if you're playing below 1440p @ 144 Hz (including 1080p @ 240 Hz).
My point is to focus on building a balanced system, matching your video card to your monitor setup for the best results. Does it hurt to pick up a GTX 1080 Ti for a 1080p monitor? Not at all, but don't expect increased performance at that resolution and refresh rate from adding another one. Even if you have the highest-tier GPU available? Yeah, OK... try using a single GPU to run a setup with triple 1440p @ 144 Hz (7680x1440 @ 144 Hz)... (that's a long wait). Really curious what monitor, resolution, refresh rate, and game selection you're playing at to say it's awful... would love to know your current experience. I'll wait...
Most people talking smack on SLI here are talking about the old SLI hardware, haven't seen anyone talking bad about their HB SLI bridge setup, just the stuff of yesteryear.
@B-Real wow those 1080 reviews are more than 2 years old lmao we need new cards so bad.
So what are we all going to do when they keep releasing SLI games in 6 months? Keep claiming it's dead? Keep asking people to prove that SLI games are still coming out because they don't have the patience to look it up themselves? Looking at you @cadaveca...
So I'm left with a question: maybe my bridge is bad? OK, try a new one. Nope, still doesn't work. Try a new board... nope. New platform (I have them all)... nope. Maybe it's a bad card? Nope, the cards work fine on their own.
So why do I have problems getting SLI to work? I'm the guy with many machines and a store's worth of hardware on my shelf, and nothing gets things working right. I use two different configs for monitors: 3x 1200p and a single 2560x1600. I got into multi-GPU because 2560x1600 required it. Then I went with multi-monitor because of Eyefinity... that was crap too... swapped to SLI and things were better... and then things went downhill from there.
So great, it works for you, but it doesn't work for me. How is that possible? We use our PCs differently. To me, for SLI to be a success (or Crossfire, for that matter), we should all have the same experience, but we don't. It's too bad.
What issues are you experiencing with Destiny 2? It seems to work with my setup on both Vega 64s and GTX 1080 Tis.
Are you playing at 60 Hz for your triples and 1600p? I'd argue your setup is probably still fine for a single high-tier GPU; I don't imagine you would be stressing a single GTX 1080 (Ti) or Vega 64 at that resolution and a refresh rate of 60 Hz. What other games are you trying to run with multi-GPU?
For reference, I also own a GTX 1070 FTW, GTX 970 FTW, Vega 56 Nano, Crossfire Fury Xs, Crossfire Vapor-X 290Xs 8 GB, and Crossfire HD 7970s that I've tested with my setups.
This is such a basic issue, one that Nvidia dealt with before but has come back, presumably because benchmarks look better when your cards run at the maximum clocks possible (IMHO).
EDIT: To be clear, your cards should match, including core and memory clocks. Same exact card and SKU.
Another thing to think about is that SLI/Crossfire currently has limitations with post-processing such as temporal AA, which can cripple performance. Some games have that on by default without any means to disable it, such as Gears of War 4. Disabling certain post-processing should increase your performance.
It's little details like this that are the reason SLI, Crossfire, and multi-GPU are more of an advanced-user feature that most people aren't ready for or familiar with.
They use the same clocks.
Maybe that's why I'm not seeing the stuttering that you are?