
NVIDIA GeForce GTX 1080 SLI

Joined
Jun 22, 2016
Messages
8 (0.00/day)
The article is also a bit off about how PCIe works... DMA allows any PCIe device to act as a bus master, so traffic never has to "go to the CPU". I'm surprised, to say the least, that this article came out on a tech site without editing... There is a little overhead on a PCIe setup, as it operates more like a packet-switched network with lower-end routing capabilities...
 

Frick

Fishfaced Nincompoop
Joined
Feb 27, 2006
Messages
18,924 (2.86/day)
Location
Piteå
System Name Black MC in Tokyo
Processor Ryzen 5 5600
Motherboard Asrock B450M-HDV
Cooling Be Quiet! Pure Rock 2
Memory 2 x 16GB Kingston Fury 3400mhz
Video Card(s) XFX 6950XT Speedster MERC 319
Storage Kingston A400 240GB | WD Black SN750 2TB |WD Blue 1TB x 2 | Toshiba P300 2TB | Seagate Expansion 8TB
Display(s) Samsung U32J590U 4K + BenQ GL2450HT 1080p
Case Fractal Design Define R4
Audio Device(s) Line6 UX1 + some headphones, Nektar SE61 keyboard
Power Supply Corsair RM850x v3
Mouse Logitech G602
Keyboard Cherry MX Board 1.0 TKL Brown
VR HMD Acer Mixed Reality Headset
Software Windows 10 Pro
Benchmark Scores Rimworld 4K ready!
You aren't seeing things clearly. The community here understands that you need a bridge for SLI. The point is that, according to the review, you don't need this particular bridge, or at least there is no clear benefit in paying extra for this component over what is supplied by your motherboard manufacturer.

You're actually agreeing with each other. You don't need the bridge until you go 4K 120Hz, or Surround 144Hz like @Beast96GT. At least according to the specs; I would love to see it tested properly. And no, w1z's tests are not enough, as he probably doesn't exceed that bandwidth, and as said on the last page it wouldn't result in FPS drops but in stuttering.
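Rough numbers behind that claim (a back-of-the-envelope of my own; the 32-bit color, the plain two-way AFR model where only the slave GPU's finished frames cross the link, and the ~1 GB/s legacy-bridge figure quoted later in this thread are all assumptions):

# Can the legacy SLI bridge carry the slave GPU's frames at a given
# resolution/refresh rate? Assumes 32-bit color and two-way AFR, so
# only every second frame has to cross the link.
LEGACY_BRIDGE_BPS = 1e9  # ~1 GB/s, figure quoted later in the thread

def afr_bridge_load(width, height, fps, bytes_per_pixel=4):
    """Bytes/s the slave GPU pushes if its whole frames cross the bridge."""
    return width * height * bytes_per_pixel * (fps / 2)

for name, mode in [("1080p 60Hz", (1920, 1080, 60)),
                   ("4K 60Hz", (3840, 2160, 60)),
                   ("4K 120Hz", (3840, 2160, 120))]:
    load = afr_bridge_load(*mode)
    verdict = "over" if load > LEGACY_BRIDGE_BPS else "within"
    print(f"{name}: {load / 1e9:.2f} GB/s -> {verdict} the legacy bridge budget")

On those assumptions, 4K 60Hz just barely fits and 4K 120Hz needs roughly double what the old bridge can carry, which lines up with where the HB bridge is supposed to start mattering.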
 
Joined
Jun 21, 2016
Messages
5 (0.00/day)
System Name 5820k
Processor 5820k@4.85GHz/1.33v
Motherboard Asrock X99 Extreme6/3.1
Cooling Swiftech H240x+3x140 rad
Memory GSkill Trident Z@3300/14-16-16-34/1.45v
Video Card(s) MSI GTX 780 Ti@1480/7950/1.118v
Storage Crucial MX100 250GB
Power Supply BE Quiet 750W
Anyway, 1080 SLI would not run 4K 120Hz at 120 FPS, nor 1080p/1440p Surround 144Hz at 144 FPS, in any modern game...
Maybe a couple of OC'ed 1080 Tis will do, though!
 

Tatty_Two

Gone Fishing
Joined
Jan 18, 2006
Messages
25,801 (3.87/day)
Location
Worcestershire, UK
Processor Rocket Lake Core i5 11600K @ 5 Ghz with PL tweaks
Motherboard MSI MAG Z490 TOMAHAWK
Cooling Thermalright Peerless Assassin 120SE + 4 Phanteks 140mm case fans
Memory 32GB (4 x 8GB SR) Patriot Viper Steel 4133Mhz DDR4 @ 3600Mhz CL14@1.45v Gear 1
Video Card(s) Asus Dual RTX 4070 OC
Storage WD Blue SN550 1TB M.2 NVME//Crucial MX500 500GB SSD (OS)
Display(s) AOC Q2781PQ 27 inch Ultra Slim 2560 x 1440 IPS
Case Phanteks Enthoo Pro M Windowed - Gunmetal
Audio Device(s) Onboard Realtek ALC1200/SPDIF to Sony AVR @ 5.1
Power Supply Seasonic CORE GM650w Gold Semi modular
Mouse Coolermaster Storm Octane wired
Keyboard Element Gaming Carbon Mk2 Tournament Mech
Software Win 10 Home x64
I'm glad you can tell 'advanced circuit design' by looking at a picture. This is crazy and um, I think I'll go with Nvidia and the author of the story, who say the HIGH BANDWIDTH SLI bridge has HIGH BANDWIDTH, instead of listening to you.
And of course you can choose to believe whoever you wish; in this case you choose to go with someone who has something to gain from the sales of the bridge, as opposed to someone who doesn't :D
 

Beast96GT

New Member
Joined
Jun 21, 2016
Messages
7 (0.00/day)
And of course you can choose to believe whoever you wish; in this case you choose to go with someone who has something to gain from the sales of the bridge, as opposed to someone who doesn't :D

I bet NVidia just rolls in hundreds of dollars from HB SLI bridges! What a great money-making deal! That's what you believe? Really? No getting anything past you...
 
Joined
Jun 4, 2004
Messages
480 (0.07/day)
System Name Blackbird
Processor AMD Threadripper 3960X 24-core
Motherboard Gigabyte TRX40 Aorus Master
Cooling Full custom-loop water cooling, mostly Aqua Computer and EKWB stuff!
Memory 4x 16GB G.Skill Trident-Z RGB @3733-CL14
Video Card(s) Nvidia RTX 3090 FE
Storage Samsung 950PRO 512GB, Crucial P5 2TB, Samsung 850PRO 1TB
Display(s) LG 38GN950-B 38" IPS TFT, Dell U3011 30" IPS TFT
Case CaseLabs TH10A
Audio Device(s) Edifier S1000DB
Power Supply ASUS ROG Thor 1200W (SeaSonic)
Mouse Logitech MX Master
Keyboard SteelSeries Apex M800
Software MS Windows 10 Pro for Workstation
Benchmark Scores A lot.
Why the hell is Rise of the Tomb Raider not scaling well on DirectX 12?? I guess Nvidia is ruining DirectX 12 performance again, to make AMD's asynchronous compute look useless, like what they did by putting heavy tessellation in Crysis 2.

Rise of the Tomb Raider disables the second card when run in DX12 mode, and it always has. No point really in benching SLI under DX12. The review should at least mention that and also provide a DX11 score!
 
Joined
Feb 8, 2012
Messages
3,013 (0.68/day)
Location
Zagreb, Croatia
System Name Windows 10 64-bit Core i7 6700
Processor Intel Core i7 6700
Motherboard Asus Z170M-PLUS
Cooling Corsair AIO
Memory 2 x 8 GB Kingston DDR4 2666
Video Card(s) Gigabyte NVIDIA GeForce GTX 1060 6GB
Storage Western Digital Caviar Blue 1 TB, Seagate Barracuda 1 TB
Display(s) Dell P2414H
Case Corsair Carbide Air 540
Audio Device(s) Realtek HD Audio
Power Supply Corsair TX v2 650W
Mouse Steelseries Sensei
Keyboard CM Storm Quickfire Pro, Cherry MX Reds
Software MS Windows 10 Pro 64-bit
DMA allows any PCIe device to act as a bus master, so traffic never has to "go to the CPU"
Yes, every PCIe device can act as a bus master and use DMA to transfer data without going through the CPU, but the CPU (driver thread) initiates the transfer, and that happens at the end of every frame, after all the draw calls have been issued (to correctly sync the VRAM of both GPUs) ... as I see it, the big difference here is in hiding that latency to better control SLI frame pacing: while the CPU is busy issuing draw calls, a direct link between the two GPUs can be busy syncing the data progressively as the draw calls arrive. Modern engines use many frame buffers and g-buffers that get composited into the final frame buffer, so the bandwidth requirements keep growing ... meaning a good PCIe DMA transfer implementation would require batching all the transfers into one, so that the CPU can fire and forget and go back to the draw calls. Besides saturating the PCIe bus at that particular moment, that level of granularity goes against the effort to hide latency and do proper frame pacing.
Of course, I'm just guessing, as this is all speculation about the Nvidia secret-sauce recipe.
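To illustrate the fire-and-forget point, a toy model (every number in it is invented, nothing measured): batching the per-buffer DMA submissions into one frees the driver thread sooner, at the price of hitting the bus in a single burst.

# Toy model: each DMA submission costs the driver thread a fixed setup
# overhead; the copy itself is done by the DMA engine, not the CPU.
SUBMIT_OVERHEAD_US = 5.0  # invented per-submission CPU cost

def cpu_time_us(buffer_sizes_mb, batched):
    submissions = 1 if batched else len(buffer_sizes_mb)
    return submissions * SUBMIT_OVERHEAD_US  # wire time is the DMA engine's problem

gbuffers_mb = [8, 8, 4, 2, 2]  # depth, normals, albedo, etc. (illustrative)
print("per-buffer submits:", cpu_time_us(gbuffers_mb, batched=False), "us on the CPU")
print("one batched submit:", cpu_time_us(gbuffers_mb, batched=True), "us on the CPU")
print("either way", sum(gbuffers_mb), "MB crosses the bus; batching just does it in one burst")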
 

Tatty_Two

Gone Fishing
Joined
Jan 18, 2006
Messages
25,801 (3.87/day)
Location
Worcestershire, UK
Processor Rocket Lake Core i5 11600K @ 5 Ghz with PL tweaks
Motherboard MSI MAG Z490 TOMAHAWK
Cooling Thermalright Peerless Assassin 120SE + 4 Phanteks 140mm case fans
Memory 32GB (4 x 8GB SR) Patriot Viper Steel 4133Mhz DDR4 @ 3600Mhz CL14@1.45v Gear 1
Video Card(s) Asus Dual RTX 4070 OC
Storage WD Blue SN550 1TB M.2 NVME//Crucial MX500 500GB SSD (OS)
Display(s) AOC Q2781PQ 27 inch Ultra Slim 2560 x 1440 IPS
Case Phanteks Enthoo Pro M Windowed - Gunmetal
Audio Device(s) Onboard Realtek ALC1200/SPDIF to Sony AVR @ 5.1
Power Supply Seasonic CORE GM650w Gold Semi modular
Mouse Coolermaster Storm Octane wired
Keyboard Element Gaming Carbon Mk2 Tournament Mech
Software Win 10 Home x64
I bet NVidia just rolls in hundreds of dollars from HB SLI bridges! What a great money-making deal! That's what you believe? Really? No getting anything past you...
Well, I would assume they are not in it to lose money? But of course you know very well that was not my point, which was that these companies have a lengthy history of misleading consumers. The fact that some would choose to believe what they say is, as I said earlier, up to them. If I adopted the same stance I would not read any reviews, but would just pop over to AMD's or NVidia's (or anyone else's) site and absorb the marketing.
 
Joined
Jun 4, 2004
Messages
480 (0.07/day)
System Name Blackbird
Processor AMD Threadripper 3960X 24-core
Motherboard Gigabyte TRX40 Aorus Master
Cooling Full custom-loop water cooling, mostly Aqua Computer and EKWB stuff!
Memory 4x 16GB G.Skill Trident-Z RGB @3733-CL14
Video Card(s) Nvidia RTX 3090 FE
Storage Samsung 950PRO 512GB, Crucial P5 2TB, Samsung 850PRO 1TB
Display(s) LG 38GN950-B 38" IPS TFT, Dell U3011 30" IPS TFT
Case CaseLabs TH10A
Audio Device(s) Edifier S1000DB
Power Supply ASUS ROG Thor 1200W (SeaSonic)
Mouse Logitech MX Master
Keyboard SteelSeries Apex M800
Software MS Windows 10 Pro for Workstation
Benchmark Scores A lot.
The article is also a bit off about how PCIe works... DMA allows any PCIe device to act as a bus master, so traffic never has to "go to the CPU". I'm surprised, to say the least, that this article came out on a tech site without editing... There is a little overhead on a PCIe setup, as it operates more like a packet-switched network with lower-end routing capabilities...
I'm guessing he meant that the PCIe root complex sits in the CPU, and since PCIe is a point-to-point connection and not a common bus, the data has to go through the CPU (the root complex) in order to reach the second card. That does not mean, however, that the data from GPU1 is being processed in the CPU (introducing additional latency) before it is on its way to GPU2.
 

Beast96GT

New Member
Joined
Jun 21, 2016
Messages
7 (0.00/day)
Well, I would assume they are not in it to lose money? But of course you know very well that was not my point, which was that these companies have a lengthy history of misleading consumers. The fact that some would choose to believe what they say is, as I said earlier, up to them. If I adopted the same stance I would not read any reviews, but would just pop over to AMD's or NVidia's (or anyone else's) site and absorb the marketing.

You're implying that the HB SLI Bridge was created just so that Nvidia can mislead consumers and make money, because you think there's no difference between the HB Bridge and the classic bridge. I'm saying you have no reason to believe that at all, but you do. To be fair, it would be better if the author of this article had included more extensive testing, such as Surround and 4K @ 120Hz, etc. For now, it shows that you can use a classic bridge if you run lower resolutions at slower frame rates. Have at it.
 

Tatty_Two

Gone Fishing
Joined
Jan 18, 2006
Messages
25,801 (3.87/day)
Location
Worcestershire, UK
Processor Rocket Lake Core i5 11600K @ 5 Ghz with PL tweaks
Motherboard MSI MAG Z490 TOMAHAWK
Cooling Thermalright Peerless Assassin 120SE + 4 Phanteks 140mm case fans
Memory 32GB (4 x 8GB SR) Patriot Viper Steel 4133Mhz DDR4 @ 3600Mhz CL14@1.45v Gear 1
Video Card(s) Asus Dual RTX 4070 OC
Storage WD Blue SN550 1TB M.2 NVME//Crucial MX500 500GB SSD (OS)
Display(s) AOC Q2781PQ 27 inch Ultra Slim 2560 x 1440 IPS
Case Phanteks Enthoo Pro M Windowed - Gunmetal
Audio Device(s) Onboard Realtek ALC1200/SPDIF to Sony AVR @ 5.1
Power Supply Seasonic CORE GM650w Gold Semi modular
Mouse Coolermaster Storm Octane wired
Keyboard Element Gaming Carbon Mk2 Tournament Mech
Software Win 10 Home x64
You're implying that the HB SLI Bridge was created just so that Nvidia can mislead consumers and make money, because you think there's no difference between the HB Bridge and the classic bridge. I'm saying you have no reason to believe that at all, but you do. To be fair, it would be better if the author of this article had included more extensive testing, such as Surround and 4K @ 120Hz, etc. For now, it shows that you can use a classic bridge if you run lower resolutions at slower frame rates. Have at it.

You are wrong; I am not implying anything about the HB Bridge at all. I am saying that sometimes we consumers get misled, and therefore it's not always wise to take what these companies tell us at face value. I didn't disagree with your original comment; I merely commented on the fact that you would prefer to believe the company, in the context of my first sentence here. You could have been talking about the 4GB of memory on the GTX 970 and I would have said the same, or of course, just to keep things fair, the performance of Bulldozer.
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,232 (2.62/day)
You're implying that the HB SLI Bridge was created just so that Nvidia can mislead consumers and make money, because you think there's no difference between the HB Bridge and the classic bridge. I'm saying you have no reason to believe that at all, but you do. To be fair, it would be better if the author of this article had included more extensive testing, such as Surround and 4K @ 120Hz, etc. For now, it shows that you can use a classic bridge if you run lower resolutions at slower frame rates. Have at it.
What you've said, in my eyes, is "here, we got new cards, but if you want the best, buy this additional hardware."

It's hard not to take it like that. I'm the guy that has one of the MSI advanced bridges that came out with the 9-series GPUs, which I paid $50 for. But I bought it for looks, not performance. I also went from Tri-SLI 780 Tis (that I paid about $700 a card for) to a single GTX 980. I had one GTX 980 that I was given by MSI to use in my motherboard and memory reviews, and was so happy with it that I bought a second for $650 to replace those 780 Tis. I then bought the MSI bridge for when I wanted to SLI the two cards.

Now, if I want to get a couple of 1080s, which I do want, I gotta pay these prices:

I want dual 1080s because I want to upgrade my 2560x1600 Dell 3008WFP (that I've had since like 2008) to a 4K panel, which is also a considerable cost, since I don't want to downsize from that 30-inch size...

And I'm left asking myself: since the "normal" bridge comes free with motherboards, how come there isn't a coupon in the box of the 1080s for this bridge? I gotta spend another "X" dollars when I'm already looking at roughly $3000 for a video solution? Peanuts! But dammit, where's the enticement to go that route? Like, I'm waiting for MSI GAMING-Z cards at least. I'd like a bridge that matches the cards, and I want a decent panel too, so I'm definitely being picky about what I gotta spend this cash on, and rightly so when it's a considerable chunk of change we're talking about. I'm not broke, either. My other hobby is RC vehicles, and I can get some damn good stuff and a heck of a lot of hours of enjoyment out of that $3000+. If I spend it in my other hobby, I'll get rebates at the least! I won't even get a free game with these VGAs...
 

Beast96GT

New Member
Joined
Jun 21, 2016
Messages
7 (0.00/day)
You are wrong; I am not implying anything about the HB Bridge at all. I am saying that sometimes we consumers get misled, and therefore it's not always wise to take what these companies tell us at face value. I didn't disagree with your original comment; I merely commented on the fact that you would prefer to believe the company, in the context of my first sentence here. You could have been talking about the 4GB of memory on the GTX 970 and I would have said the same, or of course, just to keep things fair, the performance of Bulldozer.

The original poster, jsfitz54, was the one who said there was no difference in the bridges at all, other than maybe an LED. So when you jump in and support the idea behind his logic--that Nvidia was simply creating the same product with a different name and look to mislead consumers--it would appear you feel the same way. If not, I'm glad you're open to the idea that the HB Bridge actually is different from the classic bridge. But we need more tests to determine the differences, as this article doesn't answer the question.

What you've said, in my eyes, is "here, we got new cards, but if you want the best, buy this additional hardware."

It's hard not to take it like that. I'm the guy that has one of the MSI advanced bridges that came out with the 9-series GPUs, which I paid $50 for. But I bought it for looks, not performance. I also went from Tri-SLI 780 Tis (that I paid about $700 a card for) to a single GTX 980. I had one GTX 980 that I was given by MSI to use in my motherboard and memory reviews, and was so happy with it that I bought a second for $650 to replace those 780 Tis. I then bought the MSI bridge for when I wanted to SLI the two cards.

Yep. Just like when you buy a new DDR5 motherboard, you need DDR5. At least in the case of the bridge you don't have to upgrade; the old one will work, albeit with possibly less throughput. Ah, the price of having the best. ;)
 
Joined
Feb 18, 2006
Messages
5,147 (0.78/day)
Location
AZ
System Name Thought I'd be done with this by now
Processor i7 11700k 8/16
Motherboard MSI Z590 Pro Wifi
Cooling Be Quiet Dark Rock Pro 4, 9x aigo AR12
Memory 32GB GSkill TridentZ Neo DDR4-4000 CL18-22-22-42
Video Card(s) MSI Ventus 2x Geforce RTX 3070
Storage 1TB MX300 M.2 OS + Games, + cloud mostly
Display(s) Samsung 40" 4k (TV)
Case Lian Li PC-011 Dynamic EVO Black
Audio Device(s) onboard HD -> Yamaha 5.1
Power Supply EVGA 850 GQ
Mouse Logitech wireless
Keyboard same
VR HMD nah
Software Windows 10
Benchmark Scores no one cares anymore lols
So when SLI works, 4K scaling is epic. Clearly a solution only meant for high-end enthusiasts, hence the $1200-1800 price point. Still, for that group, several games saw the 2nd card scale above 80% at 4K. So there is something there for the money.

Still, it's odd that so many games would shelve SLI support. The "well, consoles are only single-GPU" argument can only go so far. Consoles are all AMD; I'm sure it would be much easier to just go all-AMD and call it a day if that were the case. So they are already spending money on NV support, why not add SLI? Given the way NV has always catered to devs and handed out free optimization support, you would think it wouldn't be too hard. Unless of course it's Nvidia itself that is shelving SLI, or at least backing away from it.
 

Tatty_Two

Gone Fishing
Joined
Jan 18, 2006
Messages
25,801 (3.87/day)
Location
Worcestershire, UK
Processor Rocket Lake Core i5 11600K @ 5 Ghz with PL tweaks
Motherboard MSI MAG Z490 TOMAHAWK
Cooling Thermalright Peerless Assassin 120SE + 4 Phanteks 140mm case fans
Memory 32GB (4 x 8GB SR) Patriot Viper Steel 4133Mhz DDR4 @ 3600Mhz CL14@1.45v Gear 1
Video Card(s) Asus Dual RTX 4070 OC
Storage WD Blue SN550 1TB M.2 NVME//Crucial MX500 500GB SSD (OS)
Display(s) AOC Q2781PQ 27 inch Ultra Slim 2560 x 1440 IPS
Case Phanteks Enthoo Pro M Windowed - Gunmetal
Audio Device(s) Onboard Realtek ALC1200/SPDIF to Sony AVR @ 5.1
Power Supply Seasonic CORE GM650w Gold Semi modular
Mouse Coolermaster Storm Octane wired
Keyboard Element Gaming Carbon Mk2 Tournament Mech
Software Win 10 Home x64
The original poster, jsfitz54, was the one who said there was no difference in the bridges at all, other than maybe an LED. So when you jump in and support the idea behind his logic--that Nvidia was simply creating the same product with a different name and look to mislead consumers--it would appear you feel the same way. If not, I'm glad you're open to the idea that the HB Bridge actually is different from the classic bridge. But we need more tests to determine the differences, as this article doesn't answer the question.

I quoted your post in my first post in this thread, not his (the OP's), so how on earth am I "jumping in and supporting his logic"? You commented on how he could know just by looking at a picture, and said that you would therefore prefer to go with NVidia's statement/claim. I merely pointed out that in my opinion their claims are sometimes misleading. So I will say one more time: my comment has nothing to do with the bridge, just with who we choose to believe.
 
Joined
Jun 22, 2015
Messages
203 (0.06/day)
You're implying that the HB SLI Bridge was created just so that Nvidia can mislead consumers and make money, because you think there's no difference between the HB Bridge and the classic bridge. I'm saying you have no reason to believe that at all, but you do. To be fair, it would be better if the author of this article had included more extensive testing, such as Surround and 4K @ 120Hz, etc. For now, it shows that you can use a classic bridge if you run lower resolutions at slower frame rates. Have at it.
The article is also a bit off about how PCIe works... DMA allows any PCIe device to act as a bus master, so traffic never has to "go to the CPU". I'm surprised, to say the least, that this article came out on a tech site without editing... There is a little overhead on a PCIe setup, as it operates more like a packet-switched network with lower-end routing capabilities...

"new posters" defending nbadia ripping off customers with two $2.5 bridges and 1 led slammed together for 40 bucks. how much are you getting paid for this?
 

Beast96GT

New Member
Joined
Jun 21, 2016
Messages
7 (0.00/day)
"new posters" defending nbadia ripping off customers with two $2.5 bridges and 1 led slammed together for 40 bucks. how much are you getting paid for this?

Defending? No. Ripping off customers? Hah, now THAT'S a statement! They're holding you hostage and stealing your money, truth?? Ooo, it's all a conspiracy, huh! People not trashing Nvidia--they must be getting paid! No, I'm new because I wanted to read about the SLI tests just like everyone else. EAD.
 
Joined
Apr 29, 2014
Messages
4,180 (1.15/day)
Location
Texas
System Name SnowFire / The Reinforcer
Processor i7 10700K 5.1ghz (24/7) / 2x Xeon E52650v2
Motherboard Asus Strix Z490 / Dell Dual Socket (R720)
Cooling RX 360mm + 140mm Custom Loop / Dell Stock
Memory Corsair RGB 16gb DDR4 3000 CL 16 / DDR3 128gb 16 x 8gb
Video Card(s) GTX Titan XP (2025mhz) / Asus GTX 950 (No Power Connector)
Storage Samsung 970 1tb NVME and 2tb HDD x4 RAID 5 / 300gb x8 RAID 5
Display(s) Acer XG270HU, Samsung G7 Odyssey (1440p 240hz)
Case Thermaltake Cube / Dell Poweredge R720 Rack Mount Case
Audio Device(s) Realtek ALC1150 (onboard)
Power Supply Rosewill Lightning 1300Watt / Dell Stock 750 / Brick
Mouse Logitech G5
Keyboard Logitech G19S
Software Windows 11 Pro / Windows Server 2016
Well, I don't think it's any surprise the new bridge makes no distinguishable difference (at least at the moment). SLI has not been held back by the old one, so why would the new one make it any better? The difference may appear later when we get ridiculous 120Hz setups at 4K, and maybe even Surround 1440p/2160p setups, but for now it's only for looks (heck, if I go 1080 or 1080 Ti this round, I may grab one for the looks alone).

The only disappointing thing I see is the SLI scaling... SLI scaling has been pretty good in the past, and this seems unusually low even after removing the games that don't scale. It probably just needs some time, as the drivers are still premature on cards with this new architecture. Oh well, at least we know using the old bridges won't harm our experience in any way!
 
Joined
Jun 10, 2014
Messages
2,900 (0.81/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Yes, every PCIe device can act as a bus master and use DMA to transfer data without going through the CPU
A GPU still can't transfer directly to another GPU without the CPU in the current implementation.

but the CPU (driver thread) initiates the transfer, and that happens at the end of every frame, after all the draw calls have been issued (to correctly sync the VRAM of both GPUs) ... as I see it, the big difference here is in hiding that latency to better control SLI frame pacing: while the CPU is busy issuing draw calls, a direct link between the two GPUs can be busy syncing the data progressively as the draw calls arrive.
The SLI bridge has been used up to this day for the purpose of synchronizing the GPUs, which works much better than CrossFireX. Still, all texture data etc. is transferred over PCIe. It's possible that Nvidia would want to transfer something over the HB SLI bridge, but that is still going to be limited to a few MB per frame in order not to cause too much latency.

Any game which needs to synchronize a lot of data in a multi-GPU configuration is going to scale very badly, basically rendering the multi-GPU configuration pointless. If your game is to scale across multiple GPUs, any shared data must be complete before the next GPU starts, or performance will suffer. Transferring hundreds of MB is basically out of the question, since it's too costly.
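To put a rough number on "too costly" (the ~12 GB/s of practical PCIe 3.0 x16 throughput here is my assumption, not a measured figure):

# Frame-budget cost of syncing data between GPUs over PCIe.
PCIE_BPS = 12e9  # assumed practical PCIe 3.0 x16 throughput

def sync_cost_ms(megabytes):
    return megabytes * 1e6 / PCIE_BPS * 1e3

for mb in (2, 10, 100):
    print(f"{mb:3d} MB -> {sync_cost_ms(mb):5.2f} ms of an ~8.3 ms budget at 120 FPS")

A 100 MB sync would eat an entire 120 FPS frame budget on its own, before any rendering happens.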

Modern engines use many frame buffers and g-buffers that get composited into the final frame buffer, so the bandwidth requirements keep growing ... meaning a good PCIe DMA transfer implementation would require batching all the transfers into one, so that the CPU can fire and forget and go back to the draw calls. Besides saturating the PCIe bus at that particular moment, that level of granularity goes against the effort to hide latency and do proper frame pacing.
Of course, I'm just guessing, as this is all speculation about the Nvidia secret-sauce recipe.
Although split frame rendering is still technically possible, there are basically no games which utilize it, since it scales very badly and has a high overhead cost. I only see it used in professional applications, because it can work very well when the scene is "static" and has super-high detail, but it works very badly in a game where a camera moves through a landscape. Splitting regions or rendering passes between GPUs is possible, but it scales badly and greatly limits any changes in data.

This is why AFR is the only relevant multi-GPU rendering mode for games, and provided that the engine is designed accordingly, it can scale well on 2-4 GPUs.
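A minimal sketch of the AFR argument (toy numbers only): frames round-robin across the GPUs, so with no cross-frame dependencies the steady-state frame time divides by the GPU count, while any per-frame sync another GPU has to wait for eats straight into the gain.

# Toy AFR throughput model; 'sync_ms' stands in for shared per-frame
# data (e.g. temporal effects) the next GPU must wait on.
def afr_total_ms(frames, gpus, render_ms, sync_ms=0.0):
    return frames * (render_ms / gpus + sync_ms)

print(afr_total_ms(1000, 1, 16.0))       # one GPU: 16000 ms
print(afr_total_ms(1000, 2, 16.0))       # ideal 2-way AFR: 8000 ms
print(afr_total_ms(1000, 2, 16.0, 6.0))  # heavy inter-frame sync: 14000 ms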

The only disappointing thing I see is the SLI scaling... SLI scaling has been pretty good in the past, and this seems unusually low even after removing the games that don't scale. It probably just needs some time, as the drivers are still premature on cards with this new architecture. Oh well, at least we know using the old bridges won't harm our experience in any way!
No, multi-GPU scaling is up to the game engines, and there are simply far too few games prioritizing this today. On the other hand, microstuttering between frames in multi-GPU rendering is up to Nvidia/AMD to solve.
 
Joined
Feb 8, 2012
Messages
3,013 (0.68/day)
Location
Zagreb, Croatia
System Name Windows 10 64-bit Core i7 6700
Processor Intel Core i7 6700
Motherboard Asus Z170M-PLUS
Cooling Corsair AIO
Memory 2 x 8 GB Kingston DDR4 2666
Video Card(s) Gigabyte NVIDIA GeForce GTX 1060 6GB
Storage Western Digital Caviar Blue 1 TB, Seagate Barracuda 1 TB
Display(s) Dell P2414H
Case Corsair Carbide Air 540
Audio Device(s) Realtek HD Audio
Power Supply Corsair TX v2 650W
Mouse Steelseries Sensei
Keyboard CM Storm Quickfire Pro, Cherry MX Reds
Software MS Windows 10 Pro 64-bit
A GPU still can't transfer directly to another GPU without the CPU in the current implementation.


The SLI bridge has been used up to this day for the purpose of synchronizing the GPUs, which works much better than CrossFireX. Still, all texture data etc. is transferred over PCIe. It's possible that Nvidia would want to transfer something over the HB SLI bridge, but that is still going to be limited to a few MB per frame in order not to cause too much latency.

Any game which needs to synchronize a lot of data in a multi-GPU configuration is going to scale very badly, basically rendering the multi-GPU configuration pointless. If your game is to scale across multiple GPUs, any shared data must be complete before the next GPU starts, or performance will suffer. Transferring hundreds of MB is basically out of the question, since it's too costly.


Although split frame rendering is still technically possible, there are basically no games which utilize it, since it scales very badly and has a high overhead cost. I only see it used in professional applications, because it can work very well when the scene is "static" and has super-high detail, but it works very badly in a game where a camera moves through a landscape. Splitting regions or rendering passes between GPUs is possible, but it scales badly and greatly limits any changes in data.

This is why AFR is the only relevant multi-GPU rendering mode for games, and provided that the engine is designed accordingly, it can scale well on 2-4 GPUs.

With new diff-based memory compression algorithms, frame buffer transfers between multiple GPUs could benefit from that ... even with AFR there is an issue with any post-processing shader that takes multiple frames into account, like temporal anti-aliasing.
With split frame rendering, the open-terrain problem, where the sky is easier to render than the terrain, can be solved with a vertical split :laugh: I don't see much CPU overhead in batching two sets of draw calls, one per GPU (each covering half of the frustum) - the resources are the same on both cards.
And with AFR, having the shared data ready before the other GPU starts rendering gets much harder once the FPS you're scaling to hits triple digits ... the question is how much uncompressed data the HB bridge can transfer in 8 ms (120 FPS).
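That last question is just arithmetic once you pick a bandwidth figure (the 1 GB/s legacy number is quoted below; doubling it for the HB bridge is an assumption):

# Bytes a bridge can move within one 120 FPS frame (~8.3 ms).
FRAME_S = 1.0 / 120
for name, bps in (("legacy bridge", 1e9), ("HB bridge, assumed 2x", 2e9)):
    print(f"{name}: {bps * FRAME_S / 1e6:.1f} MB per frame")

Either way that is nowhere near an uncompressed 4K frame (~33 MB), so whatever crosses the bridge has to stay small or be heavily compressed.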
 
Joined
Apr 29, 2014
Messages
4,180 (1.15/day)
Location
Texas
System Name SnowFire / The Reinforcer
Processor i7 10700K 5.1ghz (24/7) / 2x Xeon E52650v2
Motherboard Asus Strix Z490 / Dell Dual Socket (R720)
Cooling RX 360mm + 140mm Custom Loop / Dell Stock
Memory Corsair RGB 16gb DDR4 3000 CL 16 / DDR3 128gb 16 x 8gb
Video Card(s) GTX Titan XP (2025mhz) / Asus GTX 950 (No Power Connector)
Storage Samsung 970 1tb NVME and 2tb HDD x4 RAID 5 / 300gb x8 RAID 5
Display(s) Acer XG270HU, Samsung G7 Odyssey (1440p 240hz)
Case Thermaltake Cube / Dell Poweredge R720 Rack Mount Case
Audio Device(s) Realtek ALC1150 (onboard)
Power Supply Rosewill Lightning 1300Watt / Dell Stock 750 / Brick
Mouse Logitech G5
Keyboard Logitech G19S
Software Windows 11 Pro / Windows Server 2016
No, multi-GPU scaling is up to the game engines, and there are simply far too few games prioritizing this today. On the other hand, microstuttering between frames in multi-GPU rendering is up to Nvidia/AMD to solve.
Not exactly. While yes, engine developers can program games to work better with multi-GPU, it's also up to the graphics companies to make it efficient and usable. Otherwise we would not have to wait for profiles except to fix bugs... That is also why some games show vast improvements in SLI or CFX between driver revisions.
 
Joined
Jun 10, 2014
Messages
2,900 (0.81/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
With new diff-based memory compression algorithms, frame buffer transfers between multiple GPUs could benefit from that ...
That would not be nearly enough.

A normal SLI bridge has a bandwidth of 1 GB/s. At 120 FPS, 1 GB/s is about 8.5 MB per frame. So using basic math you can see that just transferring the final image over this bus is not an option. In fact, even over PCIe it's pretty slow to transfer between GPUs:
[Image: transfer.png - GPU-to-GPU transfer times over PCIe]

(source at ~29:30)
This is part of the explanation for why we get microstuttering in multi-GPU configurations. The picture from the second GPU needs to be transferred back to the primary GPU, disturbing it in the process. This is why we get the typical pattern of a long interval between frames, followed by a shorter interval, followed by a longer one, ... Even a 1-2 ms difference is quite noticeable.

even with AFR there is an issue with any post-processing shader that takes multiple frames into account, like temporal anti-aliasing.
True, it's one of several types of dependencies which will limit the multi-GPU scalability of a game.
BTW, people should use proper AA like MSAA or SSAA rather than these poor AA techniques.

With split frame rendering, the open-terrain problem, where the sky is easier to render than the terrain, can be solved with a vertical split.
It depends on what kind of camera angles and movements you are talking about. I would have no problem creating a terrain where the left half of the screen is 10 times as demanding to render as the right. As mentioned, it's technically possible to do split frame rendering, but you will get very poor utilization with a free camera, barely any gain at all. Whereas AFR can (if your engine is well designed) double, triple, or quadruple performance with more GPUs. ;)

And with AFR, having the shared data ready before the other GPU starts rendering gets much harder once the FPS you're scaling to hits triple digits ... the question is how much uncompressed data the HB bridge can transfer in 8 ms (120 FPS)
Even if the HB bridge has double the lanes (assuming 2 GB/s total), we are only talking about 17 MB per frame. You'll need at least NVLink to do what you are dreaming about...

Not exactly. While yes, engine developers can program games to work better with multi-GPU, it's also up to the graphics companies to make it efficient and usable. Otherwise we would not have to wait for profiles except to fix bugs... That is also why some games show vast improvements in SLI or CFX between driver revisions.
No, this is a common misunderstanding.
It's up to the game engine developers to design well for multi-GPU support (how queues are built up).

What you are confused about are the driver-side tweaks from AMD and Nvidia, which are manipulations of the driver and the game's rendering pipeline to work around issues. These tweaks are limited in scope and can only do minor manipulations, such as working around strange bugs. If the game is not built for multi-GPU scaling, no patch from Nvidia or AMD can fix that properly.
 
Joined
Feb 8, 2012
Messages
3,013 (0.68/day)
Location
Zagreb, Croatia
System Name Windows 10 64-bit Core i7 6700
Processor Intel Core i7 6700
Motherboard Asus Z170M-PLUS
Cooling Corsair AIO
Memory 2 x 8 GB Kingston DDR4 2666
Video Card(s) Gigabyte NVIDIA GeForce GTX 1060 6GB
Storage Western Digital Caviar Blue 1 TB, Seagate Baracuda 1 TB
Display(s) Dell P2414H
Case Corsair Carbide Air 540
Audio Device(s) Realtek HD Audio
Power Supply Corsair TX v2 650W
Mouse Steelseries Sensei
Keyboard CM Storm Quickfire Pro, Cherry MX Reds
Software MS Windows 10 Pro 64-bit
The picture from the second GPU needs to be transferred back to the primary GPU, disturbing it in the process. This is why we get the typical pattern of a long interval between frames, followed by a shorter interval, followed by a longer one, ... Even a 1-2 ms difference is quite noticeable.
Right, all the things I'm "dreaming about" assume some hypothetical new architecture ... in this case specifically, a data sync for the next frame that doesn't disturb the rendering of the current frame - a sync that happens independently while the current frame is being rendered, to hide at least part of that latency ... possibly through NVLink.
BTW, people should use proper AA like MSAA or SSAA rather than these poor AA techniques.
I don't know about that; going to 4K seems to need less spatial AA and more temporal AA, to reduce shimmering during camera movement.
It depends on what kind of camera angles and movements you are talking about. I would have no problem creating a terrain where the left half of the screen is 10 times as demanding to render as the right.
Oh yes, to successfully cover all scenarios you'd need to split the frame buffer into tiles (similar to what DICE did in the Frostbite engine for the Cell implementation on PS3) and distribute tile sets of similar total complexity to the different GPUs ... which would increase CPU overhead, but I think it's worth investigating for engine makers ... maybe even use the integrated GPU (or one of the GPUs asymmetrically, using simultaneous multi-projection) in a preprocessing step just to help divide the jobs between GPUs based on geometry and shading complexity (at the screen-tile level, not the pixel level) - a rough sketch of the binning idea is at the end of this post.
You'll need at least NVLink to do what you are dreaming about...
Agreed
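For what it's worth, here is that tile-binning idea sketched in Python (the tile costs are made-up placeholders; in practice they would have to come from some cheap estimation pre-pass, which is exactly the hard part):

# Greedy load balancing of screen tiles across GPUs: take tiles in
# descending order of estimated render cost and always hand the next
# tile to the GPU with the smallest total so far (LPT heuristic).
import random

def split_tiles(tile_costs, gpu_count):
    bins = [[] for _ in range(gpu_count)]      # tile indices per GPU
    totals = [0.0] * gpu_count                 # summed cost per GPU
    for tile, cost in sorted(enumerate(tile_costs), key=lambda t: -t[1]):
        g = totals.index(min(totals))          # least-loaded GPU gets the tile
        bins[g].append(tile)
        totals[g] += cost
    return bins, totals

random.seed(1)
# e.g. cheap sky tiles vs expensive terrain tiles on an 8x4 grid
costs = [random.choice((0.2, 1.0, 3.0)) for _ in range(32)]
bins, totals = split_tiles(costs, gpu_count=2)
print("per-GPU totals:", [round(t, 1) for t in totals])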
 
Joined
Jun 22, 2015
Messages
203 (0.06/day)
Defending? No.
right...

Ripping off customers? Hah, now THAT'S a statement! They're holding you hostage and stealing your money, truth??
What is PhysX?
What is G-Sync?
What is a "new and improved" SLI bridge "requirement" for a "how it's meant to be played" experience that _costs 8 times more_ than the old "alternative"?

People not trashing Nvidia--they must be getting paid! No, I'm new because I wanted to read about the SLI tests just like everyone else.
if you just wanted to read about it, then why did you bother to create that account and post all that "not defending" stuff?

Resorting to profanity and insults already? What's the matter, running out of arguments? lol
 

qubit

Overclocked quantum bit
Joined
Dec 6, 2007
Messages
17,865 (2.99/day)
Location
Quantum Well UK
System Name Quantumville™
Processor Intel Core i7-2700K @ 4GHz
Motherboard Asus P8Z68-V PRO/GEN3
Cooling Noctua NH-D14
Memory 16GB (2 x 8GB Corsair Vengeance Black DDR3 PC3-12800 C9 1600MHz)
Video Card(s) MSI RTX 2080 SUPER Gaming X Trio
Storage Samsung 850 Pro 256GB | WD Black 4TB | WD Blue 6TB
Display(s) ASUS ROG Strix XG27UQR (4K, 144Hz, G-SYNC compatible) | Asus MG28UQ (4K, 60Hz, FreeSync compatible)
Case Cooler Master HAF 922
Audio Device(s) Creative Sound Blaster X-Fi Fatal1ty PCIe
Power Supply Corsair AX1600i
Mouse Microsoft Intellimouse Pro - Black Shadow
Keyboard Yes
Software Windows 10 Pro 64-bit
If I adopted the same stance I would not read any reviews, but would just pop over to AMD's or NVidia's (or anyone else's) site and absorb the marketing.
mmmmm, marketing. Tasty and nutritious! Gonna go get me some press release infusions... :laugh:
 