
NVIDIA Details "Pascal" Some More at GTC Japan

AsRock

TPU addict
Joined
Jun 23, 2007
Messages
18,875 (3.07/day)
Location
UK\USA
Processor AMD 3900X \ AMD 7700X
Motherboard ASRock AM4 X570 Pro 4 \ ASUS X670Xe TUF
Cooling D15
Memory Patriot 2x16GB PVS432G320C6K \ G.Skill Flare X5 F5-6000J3238F 2x16GB
Video Card(s) eVga GTX1060 SSC \ XFX RX 6950XT RX-695XATBD9
Storage Sammy 860, MX500, Sabrent Rocket 4 Sammy Evo 980 \ 1xSabrent Rocket 4+, Sammy 2x990 Pro
Display(s) Samsung 1080P \ LG 43UN700
Case Fractal Design Pop Air 2x140mm fans from Torrent \ Fractal Design Torrent 2 SilverStone FHP141x2
Audio Device(s) Yamaha RX-V677 \ Yamaha CX-830+Yamaha MX-630 Infinity RS4000\Paradigm P Studio 20, Blue Yeti
Power Supply Seasonic Prime TX-750 \ Corsair RM1000X Shift
Mouse Steelseries Sensei wireless \ Steelseries Sensei wireless
Keyboard Logitech K120 \ Wooting Two HE
Benchmark Scores Meh benchmarks.
MS says Win 10 will allow mixed cards, then nVidia comes out with this. Makes me wonder if they're going to nuke it and disable the crap out of it all over again.

NVIDIA is innovating a new interconnect called NVLink, which will change the way the company has been building dual-GPU graphics cards.
 
Joined
Oct 20, 2015
Messages
450 (0.14/day)
Location
Michigan
System Name Velka
Processor R5 3600
Motherboard MSI MPG X570
Cooling Wraith stealth
Memory Corsair vengeance 3000mhz
Video Card(s) RX 6650xt
Storage Crucial P1 1TB/ 1tb WD blue
Display(s) MSI MAG301RF + Insignia NS-PMG248
Case Corsair 400r
Audio Device(s) Onboard
Power Supply Corsair HX1000i
Mouse Logitech G305
Keyboard Redragon K556
Software Windows 10
Benchmark Scores No thanks
Joined
Dec 22, 2011
Messages
3,890 (0.86/day)
Processor AMD Ryzen 7 3700X
Motherboard MSI MAG B550 TOMAHAWK
Cooling AMD Wraith Prism
Memory Team Group Dark Pro 8Pack Edition 3600Mhz CL16
Video Card(s) NVIDIA GeForce RTX 3080 FE
Storage Kingston A2000 1TB + Seagate HDD workhorse
Display(s) Samsung 50" QN94A Neo QLED
Case Antec 1200
Power Supply Seasonic Focus GX-850
Mouse Razer Deathadder Chroma
Keyboard Logitech UltraX
Software Windows 11

FreedomEclipse

~Technological Technocrat~
Joined
Apr 20, 2007
Messages
23,380 (3.76/day)
Location
London,UK
System Name Codename: Icarus Mk.VI
Processor Intel 8600k@Stock -- pending tuning
Motherboard Asus ROG Strix Z370-F
Cooling CPU: BeQuiet! Dark Rock Pro 4 {1xCorsair ML120 Pro|5xML140 Pro}
Memory 32GB XPG Gammix D10 {2x16GB}
Video Card(s) ASUS Dual Radeon™ RX 6700 XT OC Edition
Storage Samsung 970 Evo 512GB SSD (Boot)|WD SN770 (Gaming)|2x 3TB Toshiba DT01ACA300|2x 2TB Crucial BX500
Display(s) LG GP850-B
Case Corsair 760T (White)
Audio Device(s) Yamaha RX-V573|Speakers: JBL Control One|Auna 300-CN|Wharfedale Diamond SW150
Power Supply Corsair AX760
Mouse Logitech G900
Keyboard Duckyshine Dead LED(s) III
Software Windows 10 Pro
Benchmark Scores (ノಠ益ಠ)ノ彡┻━┻
Joined
Oct 17, 2011
Messages
857 (0.19/day)
Location
Oregon
System Name Red 101
Processor 9th Gen Intel Core i9-9900k
Motherboard EVGA Z370 Classified
Cooling Custom Primochill and Heatkiller water cooling loop
Memory 16GB of Gskill 3200Mhz CL14
Video Card(s) EVGA GeForce GTX 1080 FTW2 with Heatkiller block @2114Mhz
Storage 4- Samsung Evo 250GB, 1- Pro 512GB and 1-512GB M.2
Display(s) LG 38" UW
Case In Win 101 customized a lot and painted red
Audio Device(s) Razer Kraken 7.1 Chroma
Power Supply EVGA 850w G2
Mouse Razer DeathAdderv2
Keyboard Razer Ornata Chroma
Software Win10Pro and games
Benchmark Scores NA
Joined
Apr 17, 2014
Messages
228 (0.06/day)
System Name GSYNC
Processor i9-10920X
Motherboard EVGA X299-FTW
Cooling Custom water loop: D5
Memory G.Skill RipJawsZ 16GB 2133mhz 9-11-10-28
Video Card(s) (RTX2080)
Storage OCZ vector, samsung evo 950, Intel M.2 1TB SSD's
Display(s) ROG Swift PG278Q, Acer Z35 and Acer XB270H (NVIDIA G-SYNC)
Case 2x Corsair 450D, Corsair 540
Audio Device(s) sound blaster Z
Power Supply EVGA SuperNOVA 1300 G2 Power
Mouse Logitech proteus G502
Keyboard Corsair K70R cherry red
Software WIN10 Pro (UEFI)
Benchmark Scores bench score are for people who don't game.
LET THE MILLENNIALS AND AMD FB RAGE BEGIN!

Pascal will smoke everything out there
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,232 (2.61/day)
I'm really looking forward to that unified memory architecture and the elimination of SLI problems.
Unified memory will not alone fix SLI problems. Most issues are more about proper resource management than it is about not having shared memory, although post-processing will be a bit easier to manage if NV-Link does what it is purported to be able to do. The big boon of shared memory is the added addressing space as well as the ability to store more data allowing for greater detail.
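For anyone curious what "unified memory" already looks like on the programming side, here is a minimal CUDA sketch using managed allocations. The kernel name and sizes are made up for illustration; it shows the general model (one pointer valid on both host and device), not anything Pascal- or NVLink-specific.

```cuda
// Minimal sketch of CUDA managed ("unified") memory. Assumes a CUDA-capable GPU
// and nvcc; the kernel name and sizes are arbitrary illustration values.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale(float* data, int n, float k)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= k;
}

int main()
{
    const int n = 1 << 20;
    float* data = nullptr;

    // One allocation, one pointer, visible to both CPU and GPU.
    // The driver migrates the data between system RAM and VRAM as needed.
    cudaMallocManaged((void**)&data, n * sizeof(float));

    for (int i = 0; i < n; ++i) data[i] = 1.0f;      // touched on the host
    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f);  // touched on the device
    cudaDeviceSynchronize();                         // wait before the host reads again

    printf("data[0] = %f\n", data[0]);
    cudaFree(data);
    return 0;
}
```

On current hardware the runtime shuttles managed data around kernel launches; the on-demand, page-fault-driven migration NVIDIA has described for Pascal is what is meant to make this model cheaper and extend it across NVLink.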
 

qubit

Overclocked quantum bit
Joined
Dec 6, 2007
Messages
17,865 (2.98/day)
Location
Quantum Well UK
System Name Quantumville™
Processor Intel Core i7-2700K @ 4GHz
Motherboard Asus P8Z68-V PRO/GEN3
Cooling Noctua NH-D14
Memory 16GB (2 x 8GB Corsair Vengeance Black DDR3 PC3-12800 C9 1600MHz)
Video Card(s) MSI RTX 2080 SUPER Gaming X Trio
Storage Samsung 850 Pro 256GB | WD Black 4TB | WD Blue 6TB
Display(s) ASUS ROG Strix XG27UQR (4K, 144Hz, G-SYNC compatible) | Asus MG28UQ (4K, 60Hz, FreeSync compatible)
Case Cooler Master HAF 922
Audio Device(s) Creative Sound Blaster X-Fi Fatal1ty PCIe
Power Supply Corsair AX1600i
Mouse Microsoft Intellimouse Pro - Black Shadow
Keyboard Yes
Software Windows 10 Pro 64-bit
Unified memory will not alone fix SLI problems. Most issues are more about proper resource management than it is about not having shared memory, although post-processing will be a bit easier to manage if NV-Link does what it is purported to be able to do. The big boon of shared memory is the added addressing space as well as the ability to store more data allowing for greater detail.
You might be right, I honestly dunno. I just remember that when this new form of SLI was announced several months ago by NVIDIA (they had a blog post that was reported widely by the tech press, including TPU) it sounded like all these problems would go away. Regardless, I'll bet it will be a big improvement over what we've got now.
 
Joined
May 29, 2012
Messages
514 (0.12/day)
System Name CUBE_NXT
Processor i9 12900K @ 5.0Ghz all P-cores with E-cores enabled
Motherboard Gigabyte Z690 Aorus Master
Cooling EK AIO Elite Cooler w/ 3 Phanteks T30 fans
Memory 64GB DDR5 @ 5600Mhz
Video Card(s) EVGA 3090Ti Ultra Hybrid Gaming w/ 3 Phanteks T30 fans
Storage 1 x SK Hynix P41 Platinum 1TB, 1 x 2TB, 1 x WD_BLACK SN850 2TB, 1 x WD_RED SN700 4TB
Display(s) Alienware AW3418DW
Case Lian-Li O11 Dynamic Evo w/ 3 Phanteks T30 fans
Power Supply Seasonic PRIME 1000W Titanium
Software Windows 11 Pro 64-bit
MS says Win 10 will allow mixed cards, then nVidia comes out with this. Makes me wonder if they're going to nuke it and disable the crap out of it all over again.
NVLink has nothing to do with the consumer space, and I don't know why people keep assuming it does. It literally replaces the PCI-e standard and adds cost and complexity that system builders neither want nor need. Not to mention that the CPU/PCH has to support the capability in order to communicate between the GPU and the CPU.

On top of that, the DX12 explicit multi-GPU mode has to be specifically coded for and enabled by game developers; the GPU vendors have very little to do in implementing it, and the drivers have little if anything to do with optimizing it, due to the low-level nature of DX12 (a rough sketch of what that means in code follows at the end of this post).

The only option nVidia could possibly have for even approaching NVLink usage in the consumer space is dual-GPU cards with two GPU dies on a single PCB, using NVLink as an interconnect devoted specifically to GPU-to-GPU communication.
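Assuming a concrete example helps with the "specifically coded for by game developers" point, here is a rough C++ sketch of DX12 explicit multi-adapter discovery: the application itself enumerates adapters and creates a device per GPU, and splitting work, copying resources, and synchronizing between them is then entirely the application's job. This only illustrates the API shape (Windows, link d3d12.lib and dxgi.lib), not code from any shipping engine.

```cpp
// Sketch: enumerate every hardware adapter and create a D3D12 device on each.
// With explicit multi-adapter, everything beyond this point -- scheduling,
// cross-adapter copies, fence synchronization -- is the application's problem,
// not the driver's.
#include <windows.h>
#include <dxgi1_4.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) continue; // skip the software adapter

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device); // one independent device per physical GPU
    }
    // devices.size() GPUs are now available; the game decides how (or whether) to use them.
    return 0;
}
```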
 
Last edited:
Joined
Sep 7, 2011
Messages
2,785 (0.60/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
NVLink has nothing to do with the consumer space, and I don't know why people keep assuming it does. It literally replaces the PCI-e standard and adds cost and complexity that system builders neither want nor need. Not to mention that the CPU/PCH has to support the capability in order to communicate between the GPU and the CPU.
That's about it. Just as Intel is pushing PCI-E 4.0 and buying Cray's Aries/Gemini interconnect to push bandwidth in the big-iron war with IBM, the latter has partnered with Nvidia (NVLink) and Mellanox to do the exact same thing for OpenPOWER. The fixation some people have with everything tech HAVING to revolve around gaming is perplexing, to say the least.
The only option nVidia could possibly have for even approaching NVLink usage in the consumer space is dual-GPU cards with two GPU dies on a single PCB, using NVLink as an interconnect devoted specifically to GPU-to-GPU communication.
That was my understanding also. The only way for Nvidia to get NVLink into the consumer space would be for it to be folded into the PCI-E 4.0 specification, or offered as an optional dedicated chip in the same way that Avago's PEX lane extender chips are currently used (and Nvidia's own old NF200 predecessor, for that matter).
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,232 (2.61/day)
That was my understanding also. The only way for Nvidia to get NVLink into the consumer space would be for it to be folded into the PCI-E 4.0 specification, or offered as an optional dedicated chip in the same way that Avago's PEX lane extender chips are currently used (and Nvidia's own old NF200 predecessor, for that matter).

NVLink should allow for direct access to system RAM, and that function is already supported by the PCIe spec, AFAIK. It's really no different from AMD's "sidebar" that was present on past GPU designs. IBM has already partnered with NVidia for NVLink, so I'm sure we'll see NVidia GPUs paired with PowerPC CPUs in short order.
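For what it's worth, the "direct access to system RAM over the bus" that already exists today is exposed in CUDA as mapped (zero-copy) pinned host memory. A minimal host-side sketch follows (the buffer size is arbitrary); NVLink would raise the bandwidth of this kind of access rather than enable it.

```cpp
// Sketch: give the GPU a pointer into ordinary system RAM via mapped pinned memory.
// Every device-side access to dev_view crosses the bus (PCIe today; NVLink on the
// CPU+GPU platforms IBM and NVIDIA are planning).
#include <cuda_runtime.h>

int main()
{
    cudaSetDeviceFlags(cudaDeviceMapHost);          // allow mapping host memory

    float* host_buf = nullptr;
    float* dev_view = nullptr;

    // Pinned host allocation that the GPU is allowed to address directly.
    cudaHostAlloc((void**)&host_buf, 1 << 20, cudaHostAllocMapped);

    // Device-side alias of the very same physical system RAM.
    cudaHostGetDevicePointer((void**)&dev_view, host_buf, 0);

    // A kernel could now read/write dev_view with no cudaMemcpy involved.
    cudaFreeHost(host_buf);
    return 0;
}
```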
 
Joined
Sep 7, 2011
Messages
2,785 (0.60/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
NVLink should allow for direct access to system RAM, and that function is already supported by the PCIe spec
The function is, but the bandwidth isn't.
PCI-E bandwidth isn't an issue for consumer GPUs in 99%+ of situations - as W1zz's many PCIE 1.1/2.0/3.0 comparisons have shown. HPC bandwidth, both intra- and inter-nodal, is another matter: it isn't hard to see how a couple of CPUs feeding eight dual-GPU K80s or next-gen GPUs at 100% workload might produce rather different bandwidth-saturation behaviour compared to a gaming system.
IBM has already partnered with NVidia for NVLink, so I'm sure we'll see NVidia GPUs paired with PowerPC CPUs in short order.
Next year for early access and test/qualification/validation. POWER9 (14nm) won't be ready for prime time until 2017, so the early systems will be based on the current POWER8.
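For anyone who wants the rough numbers behind the bandwidth point above, here is a quick back-of-the-envelope comparison. The PCI-E figure is the standard theoretical per-direction rate; the NVLink figures are the approximate per-link and per-GPU numbers NVIDIA has been quoting around the Pascal announcements, so treat them as ballpark.

```cpp
// Back-of-the-envelope bandwidth comparison (per direction).
#include <cstdio>

int main()
{
    // PCIe 3.0: 8 GT/s per lane, 128b/130b encoding, 16 lanes.
    double pcie3_x16_GBs = 8.0 * (128.0 / 130.0) * 16.0 / 8.0;   // ~15.75 GB/s

    // NVLink, first generation: roughly 20 GB/s per link per direction, with up to
    // four links per GPU quoted for the big Pascal part (approximate, pre-release figures).
    double nvlink_1_GBs = 20.0;
    double nvlink_4_GBs = 4.0 * 20.0;

    printf("PCIe 3.0 x16: %.2f GB/s, NVLink x1: %.0f GB/s, NVLink x4: %.0f GB/s\n",
           pcie3_x16_GBs, nvlink_1_GBs, nvlink_4_GBs);
    return 0;
}
```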
 
Last edited:
Joined
Jul 9, 2015
Messages
3,413 (1.06/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
Fury is roughly on par with Maxwell on power efficiency.
Interesting: who will have the better process, GloFo 14nm or TSMC 16nm?
Samsung's 14nm was rumored to suck.

So much yada yada yada.

Try harder:

1) G-Sync is as locked down as it gets (to the "nope, won't license it to anyone" point)
2) Adaptive sync is THE ONLY standard (DisplayPort 1.2a, that is); there is no "FreeSync" standard.
3) Nothing stops any manufacturer out there from using adaptive sync (DP 1.2a); there's no need to involve AMD or any of its "FreeSync" stuff.
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,232 (2.61/day)
Joined
Jul 1, 2011
Messages
340 (0.07/day)
System Name Matar Extreme PC.
Processor Intel Core i9-10900KF @5.1GHZ All cores Ring@4.6GHZ @1.280v , 24/7
Motherboard Gigabyte Z590 UD , With PCIe X1 Card intel killer 1650x card
Cooling CoolerMaster ML240L V2 AIO with MX6
Memory 4x16 64GB DDR4 3600MHZ CL16-19-19-39 G.SKILL Trident Z NEO
Video Card(s) Nvidia ZOTAC RTX 3080 Ti Trinity OC + overclocked 100 core 1000 mem
Storage WD black 512GB Nvme OS + 1TB 970 Nvme Samsung & 4TB WD Blk 256MB cache 7200RPM
Display(s) Lenovo 34" Ultra Wide 3440x1440 144hz 1ms G-Sync
Case NZXT H510 Black with Cooler Master RGB Fans
Audio Device(s) Internal , EIFER speakers & EasySMX Wireless Gaming Headset
Power Supply Aurora R9 850Watts 80+ Gold, I Modded cables for it.
Mouse Onn RGB Gaming Mouse & Logitech G923 & shifter & E-Break Sim setup.
Keyboard GOFREETECH RGB Gaming Keyboard, & Xbox 1 X Controller
VR HMD Oculus Rift S
Software Windows 10 Home 22H2
Benchmark Scores https://www.youtube.com/user/matttttar/videos
I have been waiting for this and can't wait. My next build: Intel Broadwell-E with X99, USB 3.1, and nVidia Pascal in SLI.
I skipped the 600, 700 and 900 series; 28nm on Maxwell didn't sell me, but now it's good and worth the upgrade. Black Friday next November 2016 is when I'll shop, and I'm saving from now...
 

FreedomEclipse

~Technological Technocrat~
Joined
Apr 20, 2007
Messages
23,380 (3.76/day)
Location
London,UK
System Name Codename: Icarus Mk.VI
Processor Intel 8600k@Stock -- pending tuning
Motherboard Asus ROG Strix Z370-F
Cooling CPU: BeQuiet! Dark Rock Pro 4 {1xCorsair ML120 Pro|5xML140 Pro}
Memory 32GB XPG Gammix D10 {2x16GB}
Video Card(s) ASUS Dual Radeon™ RX 6700 XT OC Edition
Storage Samsung 970 Evo 512GB SSD (Boot)|WD SN770 (Gaming)|2x 3TB Toshiba DT01ACA300|2x 2TB Crucial BX500
Display(s) LG GP850-B
Case Corsair 760T (White)
Audio Device(s) Yamaha RX-V573|Speakers: JBL Control One|Auna 300-CN|Wharfedale Diamond SW150
Power Supply Corsair AX760
Mouse Logitech G900
Keyboard Duckyshine Dead LED(s) III
Software Windows 10 Pro
Benchmark Scores (ノಠ益ಠ)ノ彡┻━┻
I have been waiting for this and can't wait. My next build: Intel Broadwell-E with X99, USB 3.1, and nVidia Pascal in SLI.
I skipped the 600, 700 and 900 series; 28nm on Maxwell didn't sell me, but now it's good and worth the upgrade. Black Friday next November 2016 is when I'll shop, and I'm saving from now...

So a $4000 computer then? Are you going to be F@lding or Crunching to the moon and back?
 
Joined
Jul 1, 2011
Messages
340 (0.07/day)
System Name Matar Extreme PC.
Processor Intel Core i9-10900KF @5.1GHZ All cores Ring@4.6GHZ @1.280v , 24/7
Motherboard Gigabyte Z590 UD , With PCIe X1 Card intel killer 1650x card
Cooling CoolerMaster ML240L V2 AIO with MX6
Memory 4x16 64GB DDR4 3600MHZ CL16-19-19-39 G.SKILL Trident Z NEO
Video Card(s) Nvidia ZOTAC RTX 3080 Ti Trinity OC + overclocked 100 core 1000 mem
Storage WD black 512GB Nvme OS + 1TB 970 Nvme Samsung & 4TB WD Blk 256MB cache 7200RPM
Display(s) Lenovo 34" Ultra Wide 3440x1440 144hz 1ms G-Sync
Case NZXT H510 Black with Cooler Master RGB Fans
Audio Device(s) Internal , EIFER speakers & EasySMX Wireless Gaming Headset
Power Supply Aurora R9 850Watts 80+ Gold, I Modded cables for it.
Mouse Onn RGB Gaming Mouse & Logitech G923 & shifter & E-Break Sim setup.
Keyboard GOFREETECH RGB Gaming Keyboard, & Xbox 1 X Controller
VR HMD Oculus Rift S
Software Windows 10 Home 22H2
Benchmark Scores https://www.youtube.com/user/matttttar/videos
So a $4000 computer then? Are you going to be F@lding or Crunching to the moon and back?
Broadwell-E and nVidia Pascal will be available in mid-2016; it's not like they are out today, and I am buying them next year.
 
Joined
Sep 7, 2011
Messages
2,785 (0.60/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
I've literally complained about a lack of bandwidth for multi-GPU processing for a long time, only to get things like "mining doesn't need bandwidth!" as responses. GPGPU has been limited by PCIe for the past 5-7 years from my perspective.
Sounds like the responses you've been getting aren't particularly well informed. I did note 99%+ of usage scenarios (current), but there are a few people running 3- and 4-card setups, where the performance difference is more obvious...

...for HPC, I think latency is just as much an issue. Just as PCI-E 1.1/2.0 generally manifests as increased frame variance/stutter in comparison to 3.0 in bandwidth-limited scenarios, time to completion for GPGPU workloads is also affected by latency issues. Where time is literally money when selling time on a cluster, it's easy to see why Nvidia push the reduced latency of NVLink.
 
Joined
Apr 2, 2011
Messages
2,660 (0.56/day)
Let's rip out the crap that AMD already said, as HBM is their baby. That means the VRAM quantities aren't news.

What we're left with is NVLink. It's interesting, if somewhat disturbing.

Right now, single-card dual-GPU products don't scale great and cost a ton of money. NVLink addresses... maybe the first issue. The biggest issue is that even if it solves scaling, you've still got factor 2: the cost. As this conclusion is self-evident, we're back to the NVLink announcement not being about consumer GPUs. The VRAM side definitely wasn't.

Is this good for HPC? Absolutely. Once you stop caring about price, the better the interconnect speed, the more you can compute. I applaud Nvidia for announcing this for HPC, but it's standing against Intel. Intel is buying up semiconductor companies for their IP, and working with other companies in their field to corner the HPC market via common interconnects (PCI-e 4.0).

The disturbing part is the upcoming war in which Intel decides to cut PCI-e lanes to make their PCI-e 4.0 standard required. The consumer Intel offerings are already a little sparse on PCI-e lanes. I don't want Intel deciding to offer fewer PCI-e lanes to penalize Nvidia for NVLink, which would also influence the AMD vs. Nvidia dynamic.



This is interesting, but not news for gamers. Please, show me the Pascal variant with about 8 GB of VRAM that has 60-80% better performance than my current 7970 while sipping power. Until then, thanks but I'm really not the target audience.
 
Joined
Jun 13, 2012
Messages
1,328 (0.31/day)
Processor i7-13700k
Motherboard Asus Tuf Gaming z790-plus
Cooling Coolermaster Hyper 212 RGB
Memory Corsair Vengeance RGB 32GB DDR5 7000mhz
Video Card(s) Asus Dual Geforce RTX 4070 Super ( 2800mhz @ 1.0volt, ~60mhz overlock -.1volts. 180-190watt draw)
Storage 1x Samsung 980 Pro PCIe4 NVme, 2x Samsung 1tb 850evo SSD, 3x WD drives, 2 seagate
Display(s) Acer Predator XB273u 27inch IPS G-Sync 165hz
Power Supply Corsair RMx Series RM850x (OCZ Z series PSU retired after 13 years of service)
Mouse Logitech G502 hero
Keyboard Logitech G710+
so much misinformation.

Adaptive sync IS FreeSync.

FreeSync is the brand name for an adaptive synchronization technology for LCD displays that support a dynamic refresh rate aimed at reducing screen tearing.[2] FreeSync was initially developed by AMD in response to NVidia's G-Sync. FreeSync is royalty-free, free to use, and has no performance penalty.[3] As of 2015, VESA has adopted FreeSync as an optional component of the DisplayPort 1.2a specification.[4] FreeSync has a dynamic refresh rate range of 9–240 Hz.[3] As of August 2015, Intel also plan to support VESA's adaptive-sync with the next generation of GPU.[5]
Speaking of misinformation, you quote Wikipedia.
How are DisplayPort Adaptive-Sync and AMD FreeSync™ technology different?
DisplayPort Adaptive-Sync is an ingredient DisplayPort feature that enables real-time adjustment of monitor refresh rates required by technologies like AMD FreeSync™ technology. AMD FreeSync™ technology is a unique AMD hardware/software solution that utilizes DisplayPort Adaptive-Sync protocols to enable user-facing benefits: smooth, tearing-free and low-latency gameplay and video. Users are encouraged to read this interview to learn more.
Source: http://support.amd.com/en-us/search/faq/214 <---- straight from AMD themselves. In short: proprietary use of the protocol.


The function is, but the bandwidth isn't.
PCI-E bandwidth isn't an issue for consumer GPUs in 99%+ of situations - as W1zz's many PCIE 1.1/2.0/3.0 comparisons have shown. HPC bandwidth, both intra- and inter-nodal, is another matter: it isn't hard to see how a couple of CPUs feeding eight dual-GPU K80s or next-gen GPUs at 100% workload might produce rather different bandwidth-saturation behaviour compared to a gaming system.
Well, NVLink will allow one GPU on a dual-GPU card to access the memory of the other, as explained in the brief. You can't really do that with a pipe as limited as PCI-E is at the moment. As resolution goes up, we could likely see the benefit of that much higher-bandwidth pipe in performance.
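To make the GPU-to-GPU part concrete: CUDA already exposes peer-to-peer access between GPUs, and that traffic rides whatever link joins them, so an NVLink-connected dual-GPU board would be accelerating exactly this path. A minimal sketch, assuming at least two GPUs with a working peer path (the buffer size is arbitrary):

```cpp
// Sketch: let GPU 0 directly dereference memory that physically lives on GPU 1.
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    int can_access = 0;
    cudaDeviceCanAccessPeer(&can_access, 0, 1);   // can GPU 0 reach GPU 1's memory?
    if (!can_access) { printf("no peer path between GPU 0 and GPU 1\n"); return 0; }

    cudaSetDevice(1);
    float* remote = nullptr;
    cudaMalloc((void**)&remote, 1 << 20);         // buffer resident on GPU 1

    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);             // GPU 0 may now dereference GPU 1 pointers
    // Kernels launched on GPU 0 can read/write `remote` directly; the accesses travel
    // over the inter-GPU link -- PCIe on today's boards, NVLink where it exists.
    cudaDeviceDisablePeerAccess(1);

    cudaSetDevice(1);
    cudaFree(remote);
    return 0;
}
```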
 
Joined
Sep 7, 2011
Messages
2,785 (0.60/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
The disturbing part is the upcoming war in which Intel decides to cut PCI-e lanes to make their PCI-e 4.0 standard required. The consumer Intel offerings are already a little sparse on PCI-e lanes. I don't want Intel deciding to offer fewer PCI-e lanes to penalize Nvidia for NVLink, which would also influence the AMD vs. Nvidia dynamic.
Very unlikely to happen. Intel has in the past been threatened with sanctions, and the FTC settlement (aside from barring substantial alterations to PCI-E for another year at least) only makes allowances for Intel's PCI-E electrical lane changes if they benefit their own CPUs - somewhat difficult to envisage as a scenario. Disabling PCI-E would require a justification that suited both Intel and the FTC, and didn't incur anti-monopoly suits from add-in board vendors (graphics, sound, SSD, RAID, ethernet, wi-fi, expansion options, etc.).
The second requirement is that Intel is not allowed to engage in any actions that limit the performance of the PCIe bus on the CPUs and chipsets, which would be a backdoor method of crippling AMD or NVIDIA’s GPUs’ performance. At first glance this would seem to require them to maintain status quo: x16 for GPUs on mainstream processors, and x1 for GPUs on Atom (much to the chagrin of NVIDIA no doubt). However Intel would be free to increase the number of available lanes on Atom if it suits their needs, and there’s also a clause for reducing PCIe performance. If Intel has a valid technological reason for a design change that reduces GPU performance and can prove in a real-world manner that this change benefits the performance of their CPUs, then they can go ahead with the design change. So while Intel is initially ordered to maintain the PCIe bus, they ultimately can make changes that hurt PCIe performance if it improves CPU performance.

Bear in mind that when the FTC made the judgement, PCI-E's relevance was expected to diminish, not be looking at a fourth generation. It's hard to make a case for Intel pulling the plug, or decreasing PCI-E compatibility options, when their own server/HPC future is tied to PCI-E 4.0 (and Omni-Path, which has no more relevance to consumer desktops than its competitor, Mellanox's Infiniband).
This is interesting, but not news for gamers. Please, show me the Pascal variant with about 8 GB of VRAM that has 60-80% better performance than my current 7970 while sipping power. Until then, thanks but I'm really not the target audience.
Performance/power might be a juggling act depending upon which target market the parts end up in, but Nvidia released numbers for Pascal at SC15: ~4 TFLOPS of double precision for the top SKU (presumably GP100), which probably equates to a 1:3:6 ratio (FP64:FP32:FP16), so about 12 TFLOPS of single precision.
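Just to spell out the arithmetic behind those figures, applying the (speculative) 1:3:6 ratio to the ~4 TFLOPS FP64 number quoted at SC15:

```cpp
// Throughput implied by the SC15 FP64 figure under the speculated 1:3:6 ratio.
#include <cstdio>

int main()
{
    double fp64 = 4.0;            // TFLOPS, as quoted for the top Pascal SKU at SC15
    double fp32 = 3.0 * fp64;     // ~12 TFLOPS single precision
    double fp16 = 6.0 * fp64;     // ~24 TFLOPS half precision (if the ratio holds)
    printf("FP64 %.0f, FP32 %.0f, FP16 %.0f TFLOPS\n", fp64, fp32, fp16);
    return 0;
}
```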
 
Joined
Apr 2, 2011
Messages
2,660 (0.56/day)
Very unlikely to happen. Intel has in the past been threatened with sanctions, and the FTC settlement (aside from barring substantial alterations to PCI-E for another year at least) only makes allowances for Intel's PCI-E electrical lane changes if they benefit their own CPUs - somewhat difficult to envisage as a scenario. Disabling PCI-E would require a justification that suited both Intel and the FTC, and didn't incur anti-monopoly suits from add-in board vendors (graphics, sound, SSD, RAID, ethernet, wi-fi, expansion options, etc.).


Bear in mind that when the FTC made the judgement, PCI-E's relevance was expected to diminish, not be looking at a fourth generation. It's hard to make a case for Intel pulling the plug, or decreasing PCI-E compatibility options, when their own server/HPC future is tied to PCI-E 4.0 (and Omni-Path, which has no more relevance to consumer desktops than its competitor, Mellanox's Infiniband).

Performance/power might be a juggling act depending upon which target market the parts end up in, but Nvidia released numbers for Pascal at SC15: ~4 TFLOPS of double precision for the top SKU (presumably GP100), which probably equates to a 1:3:6 ratio (FP64:FP32:FP16), so about 12 TFLOPS of single precision.

While I appreciate the fact check, disabling PCI-e wasn't what I was trying to say. What I meant was Intel developing a wholly new interface and only offering a handful of PCI-e connections. They would effectively make its use possible, but not reasonable. If they can demonstrate the ability to connect any card to their system via the PCI-e bus, it effectively means they're following the FTC's requirements to the letter of the law (if not the spirit). Nowhere in the FTC's ruling can I find an indication of how many PCI-e lanes are required, only that they must be present and meet PCI-SIG electrical requirements.

For example, instead of introducing PCI-e 4.0, introduce PCE (Platform Connect Experimental). 10 PCE connections are allowed to directly connect to the CPU (not interchangeable with PCI-e), while a single PCI-e lane is connected to the CPU. Intel still provides another 2 PCI-e lanes from the PCH, which don't exactly function as well for a GPU.

Intel decides to go whole hog with PCE and cut Nvidia out of the HPC market. They allow AMD to cross-license the interconnect (under their sharing agreement for x86), but set up substantial fees for Nvidia. In effect, Intel provides PCI-e as an option, but those who need the interconnect have to forgo Nvidia products.


As I read the ruling, this is technically not messing with PCI-e electrically. It also effectively hands HPC to Intel, because high-performance needs make PCI-e unusable (despite it physically being present). It follows along with the theory that PCI-e will be supplanted as well. Have I missed something here?
 