
NVIDIA to Focus on 2-way SLI with GeForce "Pascal"

The majority of gamers running SLI run two-card setups. I think if they focus on making that a more optimized option, that'd be good.

I suspect, however, the future of SLI is in each card producing an image for an eye on VR setups.

That could be suboptimal. Think about a flight sim in space, where you're flying close to a planet. You look one way, the planet is on the right, space and stars on the left: one card has lots of stuff to render, the other is mostly idling. Then you turn your head in the opposite direction and the situation reverses. I think cards will always be used to split the work.
 
If this turns out to be true it's going to be a big mistake for Nvidia.

What happened to the advanced SLI that NVIDIA were touting a couple of years ago with Pascal? They actually had an article on their website about it that was reported here. It was going to get rid of duplicated memory, so that two 8 GB cards would show up as one 16 GB pool, among other improvements to make scaling much better.
NVLink will be a Tesla-only product; it has nothing to do with SLI.

Even with NVLink, the GPUs will still have to work on independent workloads or be continuously synchronized, which will kill performance. This is why multi-GPU works so well for compute and so badly for gaming.

When rendering a frame, the allocated GPU memory is typically divided into three categories: the output framebuffer, temporary buffers, and (static) "resources" (meshes, textures, displacement maps, etc.). The resources are the largest part and have to be duplicated to every GPU in a multi-GPU configuration. Even if we attempted to do SFR again, they would still have to be duplicated. Only if someone creates multiple GPUs on an interposer with an extremely wide link can we avoid this duplication.
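To put numbers on that duplication, here's a back-of-envelope sketch (the 5 GB / 3 GB split per card is a made-up example to show the principle, not measured data):

```cpp
#include <cstdio>

// Rough model of why VRAM doesn't add up in SLI: the static "resources"
// (meshes, textures, etc.) must live in every GPU's local memory.
// All sizes in GB; the split below is hypothetical.
int main() {
    const double perCardVram = 8.0;  // e.g. two 8 GB cards
    const double resources   = 5.0;  // assumed static assets, duplicated per GPU
    const double buffers     = 3.0;  // framebuffer + temporaries, per GPU anyway
    const int    gpuCount    = 2;

    double physical = perCardVram * gpuCount;          // 16 GB on paper
    double unique   = resources + buffers * gpuCount;  // only one copy of assets counts
    printf("Physical VRAM: %.0f GB, unique data held: %.0f GB\n", physical, unique);
    // Prints 16 GB physical vs 11 GB unique: the duplicated 5 GB of
    // resources is why 8 GB + 8 GB != 16 GB usable.
}
```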

I thought that was HBM, where multiple GPUs basically add up the amount of VRAM. *shrugs*
Nope, that's a misconception. See above.

The majority of gamers running SLI run two-card setups. I think if they focus on making that a more optimized option, that'd be good.
For AFR mode, the count of GPUs doesn't really matter. In theory you could use 16 GPUs if the hardware allowed it.
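For illustration, AFR is just round-robin frame assignment; a trivial sketch (the GPU count here is hypothetical, and this is not any driver's actual code):

```cpp
#include <cstdio>

// Sketch of AFR (alternate frame rendering): each whole frame goes to the
// next GPU in round-robin order. Nothing in the scheme itself limits the
// GPU count; hardware and driver support do.
int main() {
    const int gpuCount = 4;  // could just as well be 16
    for (int frame = 0; frame < 8; ++frame)
        printf("frame %d -> GPU %d\n", frame, frame % gpuCount);
}
```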
 
That could be suboptimal. Think about a flight sim in space, where you're flying close to a planet. You look one way, the planet is on the right, space and stars on the left: one card has lots of stuff to render, the other is mostly idling. Then you turn your head in the opposite direction and the situation reverses. I think cards will always be used to split the work.
You're forgetting that the two viewpoints are only a few inches apart. If one GPU has to render the planet, the other one does too in most cases.


SLI and CrossFire's days are numbered because in DX12 and Vulkan, the power is in the developers' hands to do whatever they want with the GPU cores. It'll largely be up to the game engine to load balance.
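To illustrate, under explicit multi-adapter the engine enumerates every GPU itself and creates one device per adapter. A minimal DX12 sketch (Windows/MSVC assumed, error handling omitted; not any particular engine's code):

```cpp
#include <d3d12.h>
#include <dxgi1_6.h>
#include <wrl/client.h>
#include <vector>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")
#pragma comment(lib, "dxgi.lib")
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory6> factory;
    CreateDXGIFactory2(0, IID_PPV_ARGS(&factory));  // error handling omitted

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) continue;  // skip WARP

        ComPtr<ID3D12Device> dev;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&dev)))) {
            wprintf(L"GPU %u: %s\n", i, desc.Description);
            devices.push_back(dev);
        }
    }
    // The engine can now submit independent command lists to each device and
    // copy results between them over PCIe; no SLI bridge required.
}
```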
 
the power is in the developers' hands to do whatever they want with the GPU cores. It'll largely be up to the game engine to load balance.
That's honestly the kind of article and information we should be getting and understanding. :toast:
 
Honestly, I think this is a good move by Nvidia. Anything beyond 2 cards is really rather pointless, and always has been IMO. I can't wait to graduate school (now officially 3 semesters left) to build a whole new rig with 2 big Pascals and the newest Intel HEDT platform in early 2018.
 
With TPU... if this article were about RTG, the title would be more like:
"Radeon No Longer Supports C-F above 2-Way for Polaris; Kaput and Abandoned, Suspect sustaining on Previous Cards"

But here for Nvidia it's spun as a positive! Almost an advertisement for a probably-$50 widget that RTG figured out how to do away with.

QQ some more. Seriously, are you an adult? It's hard to imagine how people like you can be so bitter about... a graphics card. Really?

Go hug your wife.

On a serious note, it is disappointing Nv didn't do away with the bridge. Is the tech AMD use patented? Or is it to keep another Nv peripheral item for absolute financial gain?

Either way, 3-way SLI was always massively niche and a diminishing return. Even triple CrossFire was not as good a step as going from one card to two.
 
On a serious note, it is disappointing Nv didn't do away with the bridge. Is the tech AMD use patented? Or is it to keep another Nv peripheral item for absolute financial gain?
I think the latter. @W1zzard tested the importance of PCI Express bandwidth and there's a lot more bandwidth available than the cards need. NVIDIA, instead of using that untapped bandwidth, still relies on ye olde bridges.
 
Still using bridges? How quaint.
 
I think the latter. @W1zzard tested the importance of PCI Express bandwidth and there's a lot more bandwidth available than the cards need. NVIDIA, instead of using that untapped bandwidth, still relies on ye olde bridges.

Oh well, either way, I won't be tricked into dual cards again. I'd rather have the guaranteed performance of a single card than awesome power that just sometimes is downright crap.

Yep, a heart attack... and you know what I'm saying has comeuppance. I'm just calling 'em as I see 'em.

Eh? You're seeing too much through red-tinted specs. Take 'em off and be vendor-neutral. Life's way better that way.
 
Actually, stepping back from 3- and 4-way SLI means exactly the opposite. Probably there weren't enough takers to make keeping up the support worthwhile.
SLI was already a niche of the GPU market, supposedly with only 1-2% of users running it. Triple SLI was a mere fraction of that, and quad SLI was a fraction of THAT. It wouldn't surprise me if they dropped support for more than 2-way SLI.
Beyond 2-way SLI you're talking about a very small market. Toss in the fact that if you were doing 3-way SLI, would you use three 980s, or would you save some money and get two 980 Tis for less? By the time you pay for three of one card, you could buy two of the next card up and get just as much performance, without the losses that come from trying to split the load across three cards.
 
Look, I like dual-card configurations and have had some in the recent past. However, what is the point anymore if game makers are not really focusing on that aspect of gaming hardware as much as they were in the past? Also, I have a 980 Ti, so these cards aren't really going to improve anything for me this go-around.
 
This new bridge is basically nothing new; AFAIK we already had that with two SLI bridges combined. Marketing and money-making, nothing more. Also, it's simply outdated; AMD's tech is way better (performance-wise too).

But I don't care much about either technique; DX12 multi-adapter is the way to go.
 
Or is it 3-way SLI...

PCGamer - Nvidia shows off new features and software for GTX 1080

PCGamer said:
One final thing to show off before we go dark (meaning we'll be under NDA embargo) is Nvidia's VR Fun House tech demo. It leverages massive amounts of PhysX, HairWorks, and other Nvidia technologies, and it's apparently more than a little demanding. How demanding? Nvidia had to use three—yes, three—GTX 1080 cards to keep Fun House running smoothly at 90 fps. The system was from OriginPC, and it was doing VR SLI on two of the GPUs, with the third GPU handling all the PhysX calculations. I suspect the PhysX aspect was overkill and would have worked fine on lower end GeForce cards, but it's still an impressive tour de force for Nvidia and Pascal.

The bridge doesn't support tri-SLI, so it's a 2+1 setup.

[Image: NVIDIA's new SLI HB bridge pictured; GTX 1080 supports 2-way only]
 
This new bridge is basically nothing new; AFAIK we already had that with two SLI bridges combined. Marketing and money-making, nothing more. Also, it's simply outdated; AMD's tech is way better (performance-wise too).
Yeah, when they start making it so GPU 1 can access GPU 2's memory, you won't be saying it's outdated then. That bridge gives the cards the ability to talk to each other without needing to go back through the PCIe bus. When two GPUs have to talk to each other, or even if one needs data from the other, PCIe becomes very limiting and carries a lot of latency.
 
Yeah, when they start making it so GPU 1 can access GPU 2's memory, you won't be saying it's outdated then. That bridge gives the cards the ability to talk to each other without needing to go back through the PCIe bus. When two GPUs have to talk to each other, or even if one needs data from the other, PCIe becomes very limiting and carries a lot of latency.
SLI can't do that (combine memory); what you mean is DirectX 12 multi-adapter, but that has nothing to do with SLI and doesn't need SLI bridges either.
 
Yeah, when they start making it so GPU 1 can access GPU 2's memory, you won't be saying it's outdated then. That bridge gives the cards the ability to talk to each other without needing to go back through the PCIe bus. When two GPUs have to talk to each other, or even if one needs data from the other, PCIe becomes very limiting and carries a lot of latency.

What? I don't quite understand, care to elaborate? And which PCIe version have you used? 1? 2? 3?
 
SLI can't do that (combine memory); what you mean is DirectX 12 multi-adapter, but that has nothing to do with SLI and doesn't need SLI bridges either.
What? I don't quite understand, care to elaborate? And which PCIe version have you used? 1? 2? 3?
Even with 3.0, you have data going from the CPU over PCIe to the GPU and back. With memory bandwidth at the rates it is, even a tenth of a second of delay will cause noticeable stuttering and freezing. All of that has to go back through the PCIe bus. So when that comes into play, the bridge could prevent a lot of it, though not all.
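For a rough sense of the gap, a back-of-envelope comparison (the 64 MB per-frame transfer is a made-up figure; the bandwidth numbers are nominal PCIe 3.0 x16 and GTX 1080 GDDR5X specs):

```cpp
#include <cstdio>

int main() {
    const double pcie3_x16_gbps = 15.75;  // usable PCIe 3.0 x16 bandwidth, GB/s
    const double gddr5x_gbps    = 320.0;  // GTX 1080 local memory bandwidth, GB/s
    const double frame_mb       = 64.0;   // hypothetical inter-GPU transfer per frame

    double over_pcie = frame_mb / 1024.0 / pcie3_x16_gbps * 1000.0;  // ms
    double local     = frame_mb / 1024.0 / gddr5x_gbps    * 1000.0;  // ms
    printf("64 MB over PCIe 3.0 x16: %.2f ms\n", over_pcie);  // ~4 ms
    printf("64 MB from local VRAM:   %.2f ms\n", local);      // ~0.2 ms
    // At 90 fps the whole frame budget is ~11.1 ms, so a ~4 ms PCIe copy
    // eats a third of it, which is where the stutter comes from.
}
```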
 
I'm 100% okay with this if scaling goes up.
 
I'm 100% okay with this if scaling goes up.
Games need to be coded better for that to happen. It seems to be the biggest factor in SLI/CF scaling.
 