
PCI-Express 4.0 Pushes 16 GT/s per Lane, 300W Slot Power

Joined
Jun 22, 2015
Messages
71 (0.02/day)
Processor AMD R7 3800X EKWB
Motherboard Asus Tuf B450M-Pro µATX +MosfetWB (x2)
Cooling EKWB on CPU + GPU / Heatkiller 60/80 on Mosfets / Black Ice SR-1 240mm
Memory 2x8GB G.Skill DDR4 3200C14 @ ----
Video Card(s) Vega64 EKWB
Storage Samsung 512GB NVMe 3.0 x4 / Crucial P1 1TB NVMe 3.0 x2
Display(s) Asus ProArt 23" 1080p / Acer 27" 144Hz FreeSync IPS
Case Fractal Design Arc Mini R2
Power Supply SeaSonic 850W
Keyboard Ducky One TKL / MX Brown
The other half of that argument is the AMD Naples board they showed off. With at least 750W going in, the PCIe slots are pretty much guaranteed to be getting at least 500W in total. If they're "overloading" the connections (at full tilt, each pin in a PCIe, EPS/CPU or ATX power connector is rated up to something like 9A, so a 6-pin PCIe is safe for 200W on its own, and an 8-pin is safe for 300W, with current server cards using only a single 8-pin for 300W cards), pulling in the 1,500W for two 150W CPUs and four 300W cards is entirely within the realm of possibility. It may not be PCIe 4.0 on Zen, but the power support may well trickle down into a revision of 3.x.
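
As a rough illustration of that per-pin reasoning, here's a minimal sketch assuming the ~9A-per-contact figure quoted above, a 12V rail, and the usual count of +12V contacts per connector; actual ratings depend on the specific terminal and wire gauge:

```python
# Rough connector power budget from the per-pin figure quoted above.
# Assumptions: 12 V rail, ~9 A per contact (the post's figure), and the
# usual number of +12 V contacts per connector type.
RAIL_V = 12.0
AMPS_PER_PIN = 9.0  # "something like 9A" per the post; real specs vary

TWELVE_VOLT_PINS = {
    "PCIe 6-pin": 2,   # only two contacts are specced as +12 V
    "PCIe 8-pin": 3,
    "EPS 8-pin": 4,
}

for name, pins in TWELVE_VOLT_PINS.items():
    watts = pins * AMPS_PER_PIN * RAIL_V
    print(f"{name}: {pins} pins x {AMPS_PER_PIN:.0f} A x {RAIL_V:.0f} V ~= {watts:.0f} W")
```

That lines up with the "6-pin safe for 200W, 8-pin safe for 300W" figures above, with some margin to spare.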

I'm personally not too worried about the safety of pushing 300W through the PCIe slot: 300W is only 25A at 12V. Compare that to the literal hundreds of amps (150+W at less than 1V) that CPUs have to be fed over very similar surface areas.

Yes, it's also a server board. I don't think any official info on those is available yet, but speculation has it that the extra power is for the PCIe slots.
I still wouldn't see that trickling down to enthusiasts immediately, even if it were the case.
I could see the server people paying for a high-layer-count, high-power mobo for specific cards, not necessarily GPUs.

I just don't see the need to change the norm for desktops.

Two 1080s on one card will take you over the PCIe spec.
I think this is just what the AIBs/constructors/AMD/NVIDIA and the rest want.
 
Joined
Feb 18, 2006
Messages
5,147 (0.77/day)
Location
AZ
System Name Thought I'd be done with this by now
Processor i7 11700k 8/16
Motherboard MSI Z590 Pro Wifi
Cooling Be Quiet Dark Rock Pro 4, 9x aigo AR12
Memory 32GB GSkill TridentZ Neo DDR4-4000 CL18-22-22-42
Video Card(s) MSI Ventus 2x Geforce RTX 3070
Storage 1TB MX300 M.2 OS + Games, + cloud mostly
Display(s) Samsung 40" 4k (TV)
Case Lian Li PC-011 Dynamic EVO Black
Audio Device(s) onboard HD -> Yamaha 5.1
Power Supply EVGA 850 GQ
Mouse Logitech wireless
Keyboard same
VR HMD nah
Software Windows 10
Benchmark Scores no one cares anymore lols
Depends on the board; the vast majority are stacked right next to the CPU.
Dual-socket has it next to CPU1, none by CPU2. Quad-socket can either have it by CPU1, or next to the mainboard connector.

Either way, CPU2 is having its power come across the board a decent distance and there are no ill effects.

Also, a single memory slot may not be much, but 64 of them is significant: 128-192 watts.

Plus 6x 75-watt PCIe slots. No matter how you slice it, server boards have significant current running across them.

Having the same on enthusiast builds won't suddenly cause components to degrade. After all, some of these are $300-500 US.
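
A quick back-of-the-envelope sum of those loads, as a sketch (the per-CPU and per-DIMM wattages are just the rough figures from this post, not measured values):

```python
# Rough total power routed across a 4-socket server board, using the
# ballpark per-device figures from the post above (illustrative only).
cpu_w = 4 * 150       # four ~150 W CPUs
dimm_w = 64 * 2.5     # 64 DIMMs at ~2-3 W each -> the "128-192 W" range
slot_w = 6 * 75       # six PCIe slots drawing the full 75 W from the edge connector

total_w = cpu_w + dimm_w + slot_w
print(f"CPUs {cpu_w} W + DIMMs {dimm_w:.0f} W + slots {slot_w} W ~= {total_w:.0f} W across the board")
```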
 

cdawall

where the hell are my stars
Joined
Jul 23, 2006
Messages
27,680 (4.27/day)
Location
Houston
System Name All the cores
Processor 2990WX
Motherboard Asrock X399M
Cooling CPU-XSPC RayStorm Neo, 2x240mm+360mm, D5PWM+140mL, GPU-2x360mm, 2xbyski, D4+D5+100mL
Memory 4x16GB G.Skill 3600
Video Card(s) (2) EVGA SC BLACK 1080Ti's
Storage 2x Samsung SM951 512GB, Samsung PM961 512GB
Display(s) Dell UP2414Q 3840X2160@60hz
Case Caselabs Mercury S5+pedestal
Audio Device(s) Fischer HA-02->Fischer FA-002W High edition/FA-003/Jubilate/FA-011 depending on my mood
Power Supply Seasonic Prime 1200w
Mouse Thermaltake Theron, Steam controller
Keyboard Keychron K8
Software W10P
Dual-socket has it next to CPU1, none by CPU2. Quad-socket can either have it by CPU1, or next to the mainboard connector.

I'm using a 2P unit with the MOSFETs next to both CPUs; the last 8 workstations I have dealt with have all had the PWM sections next to the CPU, partially covered by the heatsinks and down-facing coolers.
 
Joined
Jul 31, 2014
Messages
480 (0.13/day)
System Name Diablo | Baal | Mephisto | Andariel
Processor i5-3570K@4.4GHz | 2x Xeon X5675 | i7-4710MQ | i7-2640M
Motherboard Asus Sabertooth Z77 | HP DL380 G6 | Dell Precision M4800 | Lenovo Thinkpad X220 Tablet
Cooling Swiftech H220-X | Chassis cooled (6 fans + HS) | dual-fanned heatpipes | small-fanned heatpipe
Memory 32GiB DDR3-1600 CL9 | 96GiB DDR3-1333 ECC RDIMM | 32GiB DDR3L-1866 CL11 | 8GiB DDR3L-1600 CL11
Video Card(s) Dual GTX 670 in SLI | Embedded ATi ES1000 | Quadro K2100M | Intel HD 3000
Storage many, many SSDs and HDDs....
Display(s) 1 Dell U3011 + 2x Dell U2410 | HP iLO2 KVMoIP | 3200x1800 Sharp IGZO | 1366x768 IPS with Wacom pen
Case Corsair Obsidian 550D | HP DL380 G6 Chassis | Dell Precision M4800 | Lenovo Thinkpad X220 Tablet
Audio Device(s) Auzentech X-Fi HomeTheater HD | None | On-board | On-board
Power Supply Corsair AX850 | Dual 750W Redundant PSU (Delta) | Dell 330W+240W (Flextronics) | Lenovo 65W (Delta)
Mouse Logitech G502, Logitech G700s, Logitech G500, Dell optical mouse (emergency backup)
Keyboard 1985 IBM Model F 122-key, Ducky YOTT MX Black, Dell AT101W, 1994 IBM Model M, various integrated
Software FAAAR too much to list
Dual-socket has it next to CPU1, none by CPU2. Quad-socket can either have it by CPU1, or next to the mainboard connector.

Either way, CPU2 is having its power come across the board a decent distance and there are no ill effects.

Also, a single memory slot may not be much, but 64 of them is significant: 128-192 watts.

Plus 6x 75-watt PCIe slots. No matter how you slice it, server boards have significant current running across them.

Having the same on enthusiast builds won't suddenly cause components to degrade. After all, some of these are $300-500 US.

I'm using a 2P unit with the MOSFETs next to both CPUs; the last 8 workstations I have dealt with have all had the PWM sections next to the CPU, partially covered by the heatsinks and down-facing coolers.

Only if you have the generic form-factor motherboards. Proper bespoke servers have the PSUs plugging directly into the motherboard and sending power out from the motherboard to the various other devices, including GPUs. On something like a Supermicro 1028GQ, that means that under full load of dual 145W CPUs, RAM, four 300W GPUs and two 25W HHHL PCIe cards, you're pushing over 1600W through the motherboard. For that particular board, the extra non-slotted GPU power pokes out at four locations, one at each corner of the board, as a single 8-pin plug per GPU (meaning 225W in the plug's mere three +12V pins, for 75W/6.25A per pin).
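
Working that per-pin figure through, here's a small sketch of the arithmetic (the 300W card and 75W slot share are the figures used in the post above):

```python
# Per-pin load on a single supplemental 8-pin feeding one 300 W GPU,
# assuming the slot supplies its usual 75 W and the 8-pin carries the rest.
card_w = 300
slot_w = 75
plug_w = card_w - slot_w            # 225 W through the 8-pin plug
pins_12v = 3                        # +12 V contacts in a PCIe 8-pin
w_per_pin = plug_w / pins_12v       # -> 75 W per pin
a_per_pin = w_per_pin / 12.0        # -> 6.25 A per pin at 12 V
print(f"{plug_w} W over {pins_12v} pins = {w_per_pin:.0f} W ({a_per_pin:.2f} A) per pin")
```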
 
Joined
Feb 18, 2006
Messages
5,147 (0.77/day)
Location
AZ
System Name Thought I'd be done with this by now
Processor i7 11700k 8/16
Motherboard MSI Z590 Pro Wifi
Cooling Be Quiet Dark Rock Pro 4, 9x aigo AR12
Memory 32GB GSkill TridentZ Neo DDR4-4000 CL18-22-22-42
Video Card(s) MSI Ventus 2x Geforce RTX 3070
Storage 1TB MX300 M.2 OS + Games, + cloud mostly
Display(s) Samsung 40" 4k (TV)
Case Lian Li PC-011 Dynamic EVO Black
Audio Device(s) onboard HD -> Yamaha 5.1
Power Supply EVGA 850 GQ
Mouse Logitech wireless
Keyboard same
VR HMD nah
Software Windows 10
Benchmark Scores no one cares anymore lols
Only if you have the generic form-factor motherboards. Proper bespoke servers have the PSUs plugging directly into the motherboard and sending power out from the motherboard to the various other devices, including GPUs. On something like a Supermicro 1028GQ, that means that under full load of dual 145W CPUs, RAM, four 300W GPUs and two 25W HHHL PCIe cards, you're pushing over 1600W through the motherboard. For that particular board, the extra non-slotted GPU power pokes out at four locations, one at each corner of the board, as a single 8-pin plug per GPU (meaning 225W in the plug's mere three +12V pins, for 75W/6.25A per pin).
Actually, that one has 4x 4-pins surrounding the CPU sockets.

I'm mainly in a Dell shop, and those only have one 8-pin for dual-CPU and two for quad-CPU. Again, the dual-CPU board only has the 8-pin by CPU1, and the quad has them next to the mainboard connector. Granted, even with the independent CPU power on that mobo, a significant amount is passing through it.

Older Tyan boards from the Opteron days (or at least the days they were relevant) have them next to CPUs 1 and 3 but not 2 and 4. Most of the time CPU2 is getting its power from CPU1, so the distance between them is exactly the sort of thing that should be subject to degradation and isn't. The boards can handle the extra power, and the manufacturers are smart enough to route it properly.
 

cdawall

where the hell are my stars
Joined
Jul 23, 2006
Messages
27,680 (4.27/day)
Location
Houston
System Name All the cores
Processor 2990WX
Motherboard Asrock X399M
Cooling CPU-XSPC RayStorm Neo, 2x240mm+360mm, D5PWM+140mL, GPU-2x360mm, 2xbyski, D4+D5+100mL
Memory 4x16GB G.Skill 3600
Video Card(s) (2) EVGA SC BLACK 1080Ti's
Storage 2x Samsung SM951 512GB, Samsung PM961 512GB
Display(s) Dell UP2414Q 3840X2160@60hz
Case Caselabs Mercury S5+pedestal
Audio Device(s) Fischer HA-02->Fischer FA-002W High edition/FA-003/Jubilate/FA-011 depending on my mood
Power Supply Seasonic Prime 1200w
Mouse Thermaltake Theron, Steam controller
Keyboard Keychron K8
Software W10P
Actually, that one has 4x 4-pins surrounding the CPU sockets.

I'm mainly in a Dell shop, and those only have one 8-pin for dual-CPU and two for quad-CPU. Again, the dual-CPU board only has the 8-pin by CPU1, and the quad has them next to the mainboard connector. Granted, even with the independent CPU power on that mobo, a significant amount is passing through it.

Older Tyan boards from the Opteron days (or at least the days they were relevant) have them next to CPUs 1 and 3 but not 2 and 4. Most of the time CPU2 is getting its power from CPU1, so the distance between them is exactly the sort of thing that should be subject to degradation and isn't. The boards can handle the extra power, and the manufacturers are smart enough to route it properly.



Having the connector next to the socket doesn't make it the PWM section. Most boards will split the 8-pin into 4+4 for each CPU (if there's only a single one between them); boards like this are typically set up with one EPS per CPU, and the additional 4-pin supplies the memory.

This would be the Tyan Socket 940 Opteron board you are talking about, by the way.
 
Joined
Jul 31, 2014
Messages
480 (0.13/day)
System Name Diablo | Baal | Mephisto | Andariel
Processor i5-3570K@4.4GHz | 2x Xeon X5675 | i7-4710MQ | i7-2640M
Motherboard Asus Sabertooth Z77 | HP DL380 G6 | Dell Precision M4800 | Lenovo Thinkpad X220 Tablet
Cooling Swiftech H220-X | Chassis cooled (6 fans + HS) | dual-fanned heatpipes | small-fanned heatpipe
Memory 32GiB DDR3-1600 CL9 | 96GiB DDR3-1333 ECC RDIMM | 32GiB DDR3L-1866 CL11 | 8GiB DDR3L-1600 CL11
Video Card(s) Dual GTX 670 in SLI | Embedded ATi ES1000 | Quadro K2100M | Intel HD 3000
Storage many, many SSDs and HDDs....
Display(s) 1 Dell U3011 + 2x Dell U2410 | HP iLO2 KVMoIP | 3200x1800 Sharp IGZO | 1366x768 IPS with Wacom pen
Case Corsair Obsidian 550D | HP DL380 G6 Chassis | Dell Precision M4800 | Lenovo Thinkpad X220 Tablet
Audio Device(s) Auzentech X-Fi HomeTheater HD | None | On-board | On-board
Power Supply Corsair AX850 | Dual 750W Redundant PSU (Delta) | Dell 330W+240W (Flextronics) | Lenovo 65W (Delta)
Mouse Logitech G502, Logitech G700s, Logitech G500, Dell optical mouse (emergency backup)
Keyboard 1985 IBM Model F 122-key, Ducky YOTT MX Black, Dell AT101W, 1994 IBM Model M, various integrated
Software FAAAR too much to list
Actually, that one has 4x 4-pins surrounding the CPU sockets.

I'm mainly in a Dell shop, and those only have one 8-pin for dual-CPU and two for quad-CPU. Again, the dual-CPU board only has the 8-pin by CPU1, and the quad has them next to the mainboard connector. Granted, even with the independent CPU power on that mobo, a significant amount is passing through it.

Older Tyan boards from the Opteron days (or at least the days they were relevant) have them next to CPUs 1 and 3 but not 2 and 4. Most of the time CPU2 is getting its power from CPU1, so the distance between them is exactly the sort of thing that should be subject to degradation and isn't. The boards can handle the extra power, and the manufacturers are smart enough to route it properly.

Only on the desktop parts. On the server bits, specifically high-GPU units like the C4130, the Dells run PCIe power using four 8-pin cables from right in front of the PSU straight to the cards at the front.



Having the connector next to the socket doesn't make it the PWM section. Most boards will split the 8-pin into 4+4 for each CPU (if there's only a single one between them); boards like this are typically set up with one EPS per CPU, and the additional 4-pin supplies the memory.

This would be the Tyan Socket 940 Opteron board you are talking about, by the way.

Bit of a poor choice of board to illustrate your point: half the VRMs are at the rear, with power being dragged along from the front all the way over.

On a more modern note, we have Supermicro's X10DGQ (used in Supermicro's 1028GQ):



There are four black PCIe 8-pin connections, one near the PSU (top left), and the remaining three all around near the x16 riser slots, two of them all the way at the front together with three riser slots. The white 4-pins are used for fan and HDD connections. These boards are out there, in production and apparently working quite well, if a bit warm for the 4th rear-mounted GPU (the other three are front-mounted). That particular board has over 1600W going through it when fully loaded, and it's about the size of a good E-ATX board, so relax, and hope we can get 300W slots consumer-side as well as server-side and end up with some very neat-looking, easy-to-upgrade builds.
 

cdawall

where the hell are my stars
Joined
Jul 23, 2006
Messages
27,680 (4.27/day)
Location
Houston
System Name All the cores
Processor 2990WX
Motherboard Asrock X399M
Cooling CPU-XSPC RayStorm Neo, 2x240mm+360mm, D5PWM+140mL, GPU-2x360mm, 2xbyski, D4+D5+100mL
Memory 4x16GB G.Skill 3600
Video Card(s) (2) EVGA SC BLACK 1080Ti's
Storage 2x Samsung SM951 512GB, Samsung PM961 512GB
Display(s) Dell UP2414Q 3840X2160@60hz
Case Caselabs Mercury S5+pedestal
Audio Device(s) Fischer HA-02->Fischer FA-002W High edition/FA-003/Jubilate/FA-011 depending on my mood
Power Supply Seasonic Prime 1200w
Mouse Thermaltake Theron, Steam controller
Keyboard Keychron K8
Software W10P
Bit of a poor choice of board to illustrate your point: half the VRMs are at the rear, with power being dragged along from the front all the way over.

On a more modern note, we have Supermicro's X10DGQ (used in Supermicro's 1028GQ):

The power pull from the EPS connector isn't where the heat is; the VRM section is. There is plenty of PCB to pull the minor amount of power you'd see drawn from the 8-pin to the VRM section. What I am saying is there isn't a single board on the market that I know of that doesn't have the VRM section close to the CPU; there would be too much Vdroop. The drop from the 8-pin quite honestly doesn't matter and is fixed when it hits the VRMs.
 
Joined
Jul 31, 2014
Messages
480 (0.13/day)
System Name Diablo | Baal | Mephisto | Andariel
Processor i5-3570K@4.4GHz | 2x Xeon X5675 | i7-4710MQ | i7-2640M
Motherboard Asus Sabertooth Z77 | HP DL380 G6 | Dell Precision M4800 | Lenovo Thinkpad X220 Tablet
Cooling Swiftech H220-X | Chassis cooled (6 fans + HS) | dual-fanned heatpipes | small-fanned heatpipe
Memory 32GiB DDR3-1600 CL9 | 96GiB DDR3-1333 ECC RDIMM | 32GiB DDR3L-1866 CL11 | 8GiB DDR3L-1600 CL11
Video Card(s) Dual GTX 670 in SLI | Embedded ATi ES1000 | Quadro K2100M | Intel HD 3000
Storage many, many SSDs and HDDs....
Display(s) 1 Dell U3011 + 2x Dell U2410 | HP iLO2 KVMoIP | 3200x1800 Sharp IGZO | 1366x768 IPS with Wacom pen
Case Corsair Obsidian 550D | HP DL380 G6 Chassis | Dell Precision M4800 | Lenovo Thinkpad X220 Tablet
Audio Device(s) Auzentech X-Fi HomeTheater HD | None | On-board | On-board
Power Supply Corsair AX850 | Dual 750W Redundant PSU (Delta) | Dell 330W+240W (Flextronics) | Lenovo 65W (Delta)
Mouse Logitech G502, Logitech G700s, Logitech G500, Dell optical mouse (emergency backup)
Keyboard 1985 IBM Model F 122-key, Ducky YOTT MX Black, Dell AT101W, 1994 IBM Model M, various integrated
Software FAAAR too much to list
The power pull from the EPS connector isn't where the heat is; the VRM section is. There is plenty of PCB to pull the minor amount of power you'd see drawn from the 8-pin to the VRM section. What I am saying is there isn't a single board on the market that I know of that doesn't have the VRM section close to the CPU; there would be too much Vdroop. The drop from the 8-pin quite honestly doesn't matter and is fixed when it hits the VRMs.

Indeed, but chips are pulling at ~1V (usually less) these days, meaning >100A is going to the chip. At those levels of current, you do indeed get a lot of Vdroop.

At 12V and 25A per slot, though, Vdroop is much less of an issue. On top of that, most cards have their own VRMs anyway to generate their own precisely regulated, barely-above-1V supply, so a bit of droop on the 12V input won't matter much.
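
To put rough numbers on that comparison, here's a sketch assuming an arbitrary 1 mΩ of copper in the path (the real resistance depends entirely on the board):

```python
# I*R drop across the same hypothetical 1 milliohm of copper, comparing a
# ~1 V / 150 A CPU rail with a 12 V / 25 A slot feed. Purely illustrative.
R_OHMS = 0.001

for label, volts, amps in [("CPU rail", 1.0, 150.0), ("PCIe slot feed", 12.0, 25.0)]:
    drop = amps * R_OHMS
    print(f"{label}: {drop * 1000:.0f} mV drop = {drop / volts:.1%} of a {volts:g} V rail")
```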
 
Joined
Oct 19, 2006
Messages
34 (0.01/day)
Location
Cuba
System Name RyZen 7
Processor AMD RyZen 7 1700x @ 3.7 Mhz
Motherboard MSI X370 XPower Gaming Titanium
Cooling EKW 360
Memory G-Skill Trident Z 32gb
Video Card(s) RX Vega 64
Storage SSD + M.2 + HDD
Display(s) Samsung 4k 24" FreeSync
Case Thermaltake Core P5
Audio Device(s) Logitech z906
Power Supply Thermaltake TR2 RX1200
Mouse Logitech G303
Keyboard Logitech G910
Software Win 10 x64
Benchmark Scores ...soon
And what about AMD and the new Zen CPUs? I mean, if this PCIe specification is finalized by 2016, will the new AM4 motherboards use PCIe Gen 4.0?
 
Joined
Dec 30, 2010
Messages
2,099 (0.43/day)
If it's a standard, it means it will be implemented properly. They are not going to put tiny traces on a motherboard that is supposed to allow up to 300W of power.

I think it will be a neat and proper design, leaving many builds with video cards that need no extra power cabling. Similar to putting one 4- or 8-pin in for your CPU and one 4- or 8-pin for your GPU.

Right now, similar motherboards share their 12V CPU input with the memory and PCI-Express slots (75W configuration) as well.
 
Joined
Mar 7, 2007
Messages
3,842 (0.61/day)
Location
Maryland
System Name HAL
Processor Core i9 13900k @5.8-6.1
Motherboard Z790 Arous master
Cooling EKWB Quantum Velocity V2 & (2) 360 Corsair XR7 Rads push/pull
Memory 2x 32GB (64GB) Gskill trident 6000 CL30 @28 1T
Video Card(s) RTX 4090 Gigagbyte gaming OC @ +200/1300
Storage (M2's) 2x Samsung 980 pro 2TB, 1xWD Black 2TB, 1x SK Hynix Platinum P41 2TB
Display(s) 65" LG OLED 120HZ
Case Lian Li dyanmic Evo11 with distro plate
Power Supply Thermaltake 1350
Software Microsoft Windows 11 x64
Because when I was installing my new PSU a few weeks back, it really felt like I was dealing with ancient tech compared to the server or NUC markets, where everything just slots in, no additional cabling needed.

The mainboards will handle it just fine; 4-socket server boards push much more power than that to all 4 CPUs, all 64 memory slots, and all 6 PCIe slots. There will be no degradation; that's just a silly line of thinking. Again, servers have been pushing this amount of power for decades, and the video cards are already receiving that amount of power; changing the how isn't going to suddenly degrade them faster.

What this will do is bring enthusiasts into the modern era and get away from cable redundancies. They really should have done this 10 years ago.

You might want to choose better words. I asked because I didn't know. It's silly to call someone silly just because they ask a question.
If that's how you answer honest questions, I'd prefer you not answer mine. Ty.
Or maybe you'd like to ask me about building standards or Troxler nuclear gauge testing, and I can call you silly?
 
Joined
Jul 31, 2014
Messages
480 (0.13/day)
System Name Diablo | Baal | Mephisto | Andariel
Processor i5-3570K@4.4GHz | 2x Xeon X5675 | i7-4710MQ | i7-2640M
Motherboard Asus Sabertooth Z77 | HP DL380 G6 | Dell Precision M4800 | Lenovo Thinkpad X220 Tablet
Cooling Swiftech H220-X | Chassis cooled (6 fans + HS) | dual-fanned heatpipes | small-fanned heatpipe
Memory 32GiB DDR3-1600 CL9 | 96GiB DDR3-1333 ECC RDIMM | 32GiB DDR3L-1866 CL11 | 8GiB DDR3L-1600 CL11
Video Card(s) Dual GTX 670 in SLI | Embedded ATi ES1000 | Quadro K2100M | Intel HD 3000
Storage many, many SSDs and HDDs....
Display(s) 1 Dell U3011 + 2x Dell U2410 | HP iLO2 KVMoIP | 3200x1800 Sharp IGZO | 1366x768 IPS with Wacom pen
Case Corsair Obsidian 550D | HP DL380 G6 Chassis | Dell Precision M4800 | Lenovo Thinkpad X220 Tablet
Audio Device(s) Auzentech X-Fi HomeTheater HD | None | On-board | On-board
Power Supply Corsair AX850 | Dual 750W Redundant PSU (Delta) | Dell 330W+240W (Flextronics) | Lenovo 65W (Delta)
Mouse Logitech G502, Logitech G700s, Logitech G500, Dell optical mouse (emergency backup)
Keyboard 1985 IBM Model F 122-key, Ducky YOTT MX Black, Dell AT101W, 1994 IBM Model M, various integrated
Software FAAAR too much to list
You might want to choose better words. I asked because I didn't know. It's silly to call someone silly just because they ask a question.
If that's how you answer honest questions, I'd prefer you not answer mine. Ty.
Or maybe you'd like to ask me about building standards or Troxler nuclear gauge testing, and I can call you silly?

I don't see anything in yogurt's post that calls you silly, just that your line of thinking/assumption/theory of how things work/question was silly. Asking such a question is a bit like asking if an ingot of 24K gold degrades over time (nuclear decay notwithstanding), or if fire is hot. It's just a question that makes no real sense. I mean, there's nothing special about PCB copper traces that would put them at any risk of degradation versus lengths of copper wire, very much unlike the doped silicon the chips use.
 
Joined
Apr 12, 2015
Messages
212 (0.06/day)
Location
ID_SUB
System Name Asus X450JB
Processor Intel Core i7-4720HQ
Motherboard Asus
Memory 2x 4GiB
Video Card(s) nVidia GT940M
Storage 2x 1TB
"Slot Power Limit" is a bit ambiguous. I guess that someone might misinterpret the following specification that's already there since forever (well, at least since 0.3 draft).

The 75 Watt limit is physical limit of the number of pins available for power and each pin power capacity. Increasing it would mean a slot redesign or pin reassignment for power delivery (which means more difficult back/fwd compatibility).
 
Joined
Jul 31, 2014
Messages
480 (0.13/day)
System Name Diablo | Baal | Mephisto | Andariel
Processor i5-3570K@4.4GHz | 2x Xeon X5675 | i7-4710MQ | i7-2640M
Motherboard Asus Sabertooth Z77 | HP DL380 G6 | Dell Precision M4800 | Lenovo Thinkpad X220 Tablet
Cooling Swiftech H220-X | Chassis cooled (6 fans + HS) | dual-fanned heatpipes | small-fanned heatpipe
Memory 32GiB DDR3-1600 CL9 | 96GiB DDR3-1333 ECC RDIMM | 32GiB DDR3L-1866 CL11 | 8GiB DDR3L-1600 CL11
Video Card(s) Dual GTX 670 in SLI | Embedded ATi ES1000 | Quadro K2100M | Intel HD 3000
Storage many, many SSDs and HDDs....
Display(s) 1 Dell U3011 + 2x Dell U2410 | HP iLO2 KVMoIP | 3200x1800 Sharp IGZO | 1366x768 IPS with Wacom pen
Case Corsair Obsidian 550D | HP DL380 G6 Chassis | Dell Precision M4800 | Lenovo Thinkpad X220 Tablet
Audio Device(s) Auzentech X-Fi HomeTheater HD | None | On-board | On-board
Power Supply Corsair AX850 | Dual 750W Redundant PSU (Delta) | Dell 330W+240W (Flextronics) | Lenovo 65W (Delta)
Mouse Logitech G502, Logitech G700s, Logitech G500, Dell optical mouse (emergency backup)
Keyboard 1985 IBM Model F 122-key, Ducky YOTT MX Black, Dell AT101W, 1994 IBM Model M, various integrated
Software FAAAR too much to list
"Slot Power Limit" is a bit ambiguous. I guess that someone might misinterpret the following specification that's already there since forever (well, at least since 0.3 draft).

The 75 Watt limit is physical limit of the number of pins available for power and each pin power capacity. Increasing it would mean a slot redesign or pin reassignment for power delivery (which means more difficult back/fwd compatibility).

That would be why the current theory is that the slot is going to be up-specced to 300W through the edge connector, since the 300W-per-card rating has existed for a while now. The Naples board AMD showed off, with a massive amount of PCIe power going to the board, would also suggest that major power-delivery changes are inbound.
 
Joined
Apr 12, 2015
Messages
212 (0.06/day)
Location
ID_SUB
System Name Asus X450JB
Processor Intel Core i7-4720HQ
Motherboard Asus
Memory 2x 4GiB
Video Card(s) nVidia GT940M
Storage 2x 1TB
That would be why the current theory is that the slot is going to be up-specced to 300W through the edge connector, since the 300W-per-card rating has existed for a while now. The Naples board AMD showed off, with a massive amount of PCIe power going to the board, would also suggest that major power-delivery changes are inbound.

I still can't understand why the Naples power configuration is used as a hint about PCIe 4.0 capability. Conventional engineering practice would suggest that the six connectors are too far away from the PCIe slots, even crossing over the CPU sockets, and too spread apart from each other. If the six connectors were used for PCIe, it would be more logical to place them near the PCIe slots and/or group them together in one place.

The four inductors near each of the connectors also suggest that this power is most likely used locally (though 4 phases for 4 RAM slots is overkill; maybe Zen got multi-rail power, who knows).
 
Joined
Jun 22, 2015
Messages
71 (0.02/day)
Processor AMD R7 3800X EKWB
Motherboard Asus Tuf B450M-Pro µATX +MosfetWB (x2)
Cooling EKWB on CPU + GPU / Heatkiller 60/80 on Mosfets / Black Ice SR-1 240mm
Memory 2x8GB G.Skill DDR4 3200C14 @ ----
Video Card(s) Vega64 EKWB
Storage Samsung 512GB NVMe 3.0 x4 / Crucial P1 1TB NVMe 3.0 x2
Display(s) Asus ProArt 23" 1080p / Acer 27" 144Hz FreeSync IPS
Case Fractal Design Arc Mini R2
Power Supply SeaSonic 850W
Keyboard Ducky One TKL / MX Brown
That would be why the current theory is that the slot is going to be up-specced to 300W through the edge connector, since the 300W-per-card rating has existed for a while now. The Naples board AMD showed off, with a massive amount of PCIe power going to the board, would also suggest that major power-delivery changes are inbound.

I wouldn't go as far as "theory"; it's just speculation based on a misunderstanding.

Asus hasn't been able to do a dual-GPU card for a while now, probably because the official PCIe spec limits cards to 300W absolute max.
If the PCIe spec is increased to 450+W, then dual-GPU cards get the blessing of the PCI-SIG.
 
Joined
Jul 31, 2014
Messages
480 (0.13/day)
System Name Diablo | Baal | Mephisto | Andariel
Processor i5-3570K@4.4GHz | 2x Xeon X5675 | i7-4710MQ | i7-2640M
Motherboard Asus Sabertooth Z77 | HP DL380 G6 | Dell Precision M4800 | Lenovo Thinkpad X220 Tablet
Cooling Swiftech H220-X | Chassis cooled (6 fans + HS) | dual-fanned heatpipes | small-fanned heatpipe
Memory 32GiB DDR3-1600 CL9 | 96GiB DDR3-1333 ECC RDIMM | 32GiB DDR3L-1866 CL11 | 8GiB DDR3L-1600 CL11
Video Card(s) Dual GTX 670 in SLI | Embedded ATi ES1000 | Quadro K2100M | Intel HD 3000
Storage many, many SSDs and HDDs....
Display(s) 1 Dell U3011 + 2x Dell U2410 | HP iLO2 KVMoIP | 3200x1800 Sharp IGZO | 1366x768 IPS with Wacom pen
Case Corsair Obsidian 550D | HP DL380 G6 Chassis | Dell Precision M4800 | Lenovo Thinkpad X220 Tablet
Audio Device(s) Auzentech X-Fi HomeTheater HD | None | On-board | On-board
Power Supply Corsair AX850 | Dual 750W Redundant PSU (Delta) | Dell 330W+240W (Flextronics) | Lenovo 65W (Delta)
Mouse Logitech G502, Logitech G700s, Logitech G500, Dell optical mouse (emergency backup)
Keyboard 1985 IBM Model F 122-key, Ducky YOTT MX Black, Dell AT101W, 1994 IBM Model M, various integrated
Software FAAAR too much to list
I still can't understand why the Naples power configuration is used as a hint about PCIe 4.0 capability. Conventional engineering practice would suggest that the six connectors are too far away from the PCIe slots, even crossing over the CPU sockets, and too spread apart from each other. If the six connectors were used for PCIe, it would be more logical to place them near the PCIe slots and/or group them together in one place.

The four inductors near each of the connectors also suggest that this power is most likely used locally (though 4 phases for 4 RAM slots is overkill; maybe Zen got multi-rail power, who knows).

In what world exactly do you need over 750W going to only two CPUs? One interesting possibility would be to send 12V over the PCIe data lines, similar to how Power over Ethernet works, although I think that's unnecessary.

I wouldn't go as far as "theory"; it's just speculation based on a misunderstanding.

Asus hasn't been able to do a dual-GPU card for a while now, probably because the official PCIe spec limits cards to 300W absolute max.
If the PCIe spec is increased to 450+W, then dual-GPU cards get the blessing of the PCI-SIG.

They'd do it, PCIe spec be damned, if they figured the market would buy it. Both AMD and NVIDIA have violated/exceeded the PCIe spec in the past... hell, AMD is technically still doing so on the 480 by overloading the 6-pin connector rather than the slot.

In serverland, on the other hand, NVLink is making inroads with its simple mezzanine-connector design pushing both all the power and the I/O, and into the 8-GPU-per-board segment starting with the DGX-1, not to mention all the benefits of having a dedicated, high-speed mesh for inter-GPU communication, in turn relegating PCIe to CPU communication only.



As you can observe: not a single PCIe power connector on the NVLink "daughterboard" - all the power goes through the board. All 2,400W of it (and then some for the 4 PCIe slots).

The rest of the industry wants in on this neatness with PCIe, and by the sounds of it, they're willing to do it.
 
Joined
Jun 14, 2016
Messages
40 (0.01/day)
Location
Cornwall, UK
Processor Intel Core i7 6700K
Motherboard Gigabyte Z170 Gaming K3
Cooling Coolermaster Hyper TX3 Evo
Memory Corsair Vengeance LPX (Red) 2x8GB 2400MHz
Video Card(s) MSI GeForce GTX 1070 Founders Edition
Storage 1TB WD Blue 7200rpm
Display(s) 2x Acer K222HQL 1080p
Case Corsair Spec-01
Power Supply EVGA SuperNova G2 650W
Mouse Asus Cerberus Mouse
Keyboard Asus Cerberus Keyboard
Software Windows 10 Pro 64-bit
Joined
Jul 14, 2008
Messages
872 (0.15/day)
Location
Copenhagen, Denmark
System Name Ryzen/Laptop/htpc
Processor R9 3900X/i7 6700HQ/i7 2600
Motherboard AsRock X470 Taichi/Acer/ Gigabyte H77M
Cooling Corsair H115i pro with 2 Noctua NF-A14 chromax/OEM/Noctua NH-L12i
Memory G.Skill Trident Z 32GB @3200/16GB DDR4 2666 HyperX impact/24GB
Video Card(s) TUL Red Dragon Vega 56/Intel HD 530 - GTX 950m/ 970 GTX
Storage 970pro NVMe 512GB,Samsung 860evo 1TB, 3x4TB WD gold/Transcend 830s, 1TB Toshiba/Adata 256GB + 1TB WD
Display(s) Philips FTV 32 inch + Dell 2407WFP-HC/OEM/Sony KDL-42W828B
Case Phanteks Enthoo Luxe/Acer Barebone/Enermax
Audio Device(s) SoundBlasterX AE-5 (Dell A525)(HyperX Cloud Alpha)/mojo/soundblaster xfi gamer
Power Supply Seasonic focus+ 850 platinum (SSR-850PX)/165 Watt power brick/Enermax 650W
Mouse G502 Hero/M705 Marathon/G305 Hero Lightspeed
Keyboard G19/oem/Steelseries Apex 300
Software Win10 pro 64bit
In what world exactly do you need over 750W going to only two CPUs? One interesting possibility would be to send 12V over the PCIe data lines, similar to how Power over Ethernet works, although I think that's unnecessary.



They'd do it, PCIe spec be damned, if they figured the market would buy it. Both AMD and NVIDIA have violated/exceeded the PCIe spec in the past... hell, AMD is technically still doing so on the 480 by overloading the 6-pin connector rather than the slot.

In serverland, on the other hand, NVLink is making inroads with its simple mezzanine-connector design pushing both all the power and the I/O, and into the 8-GPU-per-board segment starting with the DGX-1, not to mention all the benefits of having a dedicated, high-speed mesh for inter-GPU communication, in turn relegating PCIe to CPU communication only.




As you can observe: not a single PCIe power connector on the NVLink "daughterboard" - all the power goes through the board. All 2,400W of it (and then some for the 4 PCIe slots).

The rest of the industry wants in on this neatness with PCIe, and by the sounds of it, they're willing to do it.
You have got very valid points concerning the power delivery in server boards, and I agree with you that this scenario is totally doable; I am just worried about the added cost this standard will bring to the table. Let's not forget that server boards are much more expensive than even the enthusiast ones.
 
Joined
Jul 31, 2014
Messages
480 (0.13/day)
System Name Diablo | Baal | Mephisto | Andariel
Processor i5-3570K@4.4GHz | 2x Xeon X5675 | i7-4710MQ | i7-2640M
Motherboard Asus Sabertooth Z77 | HP DL380 G6 | Dell Precision M4800 | Lenovo Thinkpad X220 Tablet
Cooling Swiftech H220-X | Chassis cooled (6 fans + HS) | dual-fanned heatpipes | small-fanned heatpipe
Memory 32GiB DDR3-1600 CL9 | 96GiB DDR3-1333 ECC RDIMM | 32GiB DDR3L-1866 CL11 | 8GiB DDR3L-1600 CL11
Video Card(s) Dual GTX 670 in SLI | Embedded ATi ES1000 | Quadro K2100M | Intel HD 3000
Storage many, many SSDs and HDDs....
Display(s) 1 Dell U3011 + 2x Dell U2410 | HP iLO2 KVMoIP | 3200x1800 Sharp IGZO | 1366x768 IPS with Wacom pen
Case Corsair Obsidian 550D | HP DL380 G6 Chassis | Dell Precision M4800 | Lenovo Thinkpad X220 Tablet
Audio Device(s) Auzentech X-Fi HomeTheater HD | None | On-board | On-board
Power Supply Corsair AX850 | Dual 750W Redundant PSU (Delta) | Dell 330W+240W (Flextronics) | Lenovo 65W (Delta)
Mouse Logitech G502, Logitech G700s, Logitech G500, Dell optical mouse (emergency backup)
Keyboard 1985 IBM Model F 122-key, Ducky YOTT MX Black, Dell AT101W, 1994 IBM Model M, various integrated
Software FAAAR too much to list
You have got very valid points concerning the power delivery in server boards, and I agree with you that this scenario is totally doable; I am just worried about the added cost this standard will bring to the table. Let's not forget that server boards are much more expensive than even the enthusiast ones.

Not that much from the BOM side of things. Much, much more of the price goes into the support network, branding, validation, and of course, profits.
 
Joined
Jun 22, 2015
Messages
71 (0.02/day)
Processor AMD R7 3800X EKWB
Motherboard Asus Tuf B450M-Pro µATX +MosfetWB (x2)
Cooling EKWB on CPU + GPU / Heatkiller 60/80 on Mosfets / Black Ice SR-1 240mm
Memory 2x8GB G.Skill DDR4 3200C14 @ ----
Video Card(s) Vega64 EKWB
Storage Samsung 512GB NVMe 3.0 x4 / Crucial P1 1TB NVMe 3.0 x2
Display(s) Asus ProArt 23" 1080p / Acer 27" 144Hz FreeSync IPS
Case Fractal Design Arc Mini R2
Power Supply SeaSonic 850W
Keyboard Ducky One TKL / MX Brown
In what world exactly do you need over 750W going to only two CPUs? One interesting possibility would be to send 12V over the PCIe data lines, similar to how Power over Ethernet works, although I think that's unnecessary.



They'd do it, PCIe spec be damned, if they figured the market would buy it. Both AMD and NVIDIA have violated/exceeded the PCIe spec in the past... hell, AMD is technically still doing so on the 480 by overloading the 6-pin connector rather than the slot.

In serverland, on the other hand, NVLink is making inroads with its simple mezzanine-connector design pushing both all the power and the I/O, and into the 8-GPU-per-board segment starting with the DGX-1, not to mention all the benefits of having a dedicated, high-speed mesh for inter-GPU communication, in turn relegating PCIe to CPU communication only.



As you can observe: not a single PCIe power connector on the NVLink "daughterboard" - all the power goes through the board. All 2,400W of it (and then some for the 4 PCIe slots).

The rest of the industry wants in on this neatness with PCIe, and by the sounds of it, they're willing to do it.

The rest of the server industry, maybe.
Desktop, no.

There is no reason for it, and no need for it.

Your example is yet another server tech (NVLink) that won't be coming to the enthusiast desktop.
The DGX-1 is fully built by NVIDIA; there are no "common" parts in it at all, so it can all be proprietary (mobo, NVLink board, etc.).
I can't even see the power connectors on the NVLink board, so they could be using any number of connectors on the left there, or on the other side of the board.
 
Joined
Jul 14, 2008
Messages
872 (0.15/day)
Location
Copenhagen, Denmark
System Name Ryzen/Laptop/htpc
Processor R9 3900X/i7 6700HQ/i7 2600
Motherboard AsRock X470 Taichi/Acer/ Gigabyte H77M
Cooling Corsair H115i pro with 2 Noctua NF-A14 chromax/OEM/Noctua NH-L12i
Memory G.Skill Trident Z 32GB @3200/16GB DDR4 2666 HyperX impact/24GB
Video Card(s) TUL Red Dragon Vega 56/Intel HD 530 - GTX 950m/ 970 GTX
Storage 970pro NVMe 512GB,Samsung 860evo 1TB, 3x4TB WD gold/Transcend 830s, 1TB Toshiba/Adata 256GB + 1TB WD
Display(s) Philips FTV 32 inch + Dell 2407WFP-HC/OEM/Sony KDL-42W828B
Case Phanteks Enthoo Luxe/Acer Barebone/Enermax
Audio Device(s) SoundBlasterX AE-5 (Dell A525)(HyperX Cloud Alpha)/mojo/soundblaster xfi gamer
Power Supply Seasonic focus+ 850 platinum (SSR-850PX)/165 Watt power brick/Enermax 650W
Mouse G502 Hero/M705 Marathon/G305 Hero Lightspeed
Keyboard G19/oem/Steelseries Apex 300
Software Win10 pro 64bit
Not that much from the BOM side of things. Much, much more of the price goes into the support network, branding, validation, and of course, profits.
Well, yes, but it elevates the cost regardless, since it is a more expensive implementation due to its complexity and the added engineering.
 
Joined
Sep 25, 2007
Messages
5,965 (0.98/day)
Location
New York
Processor AMD Ryzen 9 5950x, Ryzen 9 5980HX
Motherboard MSI X570 Tomahawk
Cooling Be Quiet Dark Rock Pro 4(With Noctua Fans)
Memory 32Gb Crucial 3600 Ballistix
Video Card(s) Gigabyte RTX 3080, Asus 6800M
Storage Adata SX8200 1TB NVME/WD Black 1TB NVME
Display(s) Dell 27 Inch 165Hz
Case Phanteks P500A
Audio Device(s) IFI Zen Dac/JDS Labs Atom+/SMSL Amp+Rivers Audio
Power Supply Corsair RM850x
Mouse Logitech G502 SE Hero
Keyboard Corsair K70 RGB Mk.2
VR HMD Samsung Odyssey Plus
Software Windows 10
Actually, PCI-SIG contacted Tom's and told them that the slot power is still 75W; the extra power beyond that would come from additional power connectors.

Update, 8/24/16, 2:06pm PT: PCI-SIG reached out to tell us that the power increase for PCI Express 4.0 will come from secondary connectors and not from the slot directly. They confirmed that we were initially told incorrect information. We have redacted a short passage from our original article that stated what we were originally told, which is that the slot would provide at least 300W, and added clarification:

  • PCIe 3.0 max power capabilities: 75W from CEM + 225W from supplemental power connectors = 300W total
  • PCIe 4.0 max power capabilities: TBD
New value “P” = 75W from CEM + (P-75)W from supplemental power connectors.
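
As a small sketch of that budget formula (the 75W CEM share is from the quoted update; the 300W example is just PCIe 3.0's known total, used as an illustrative value of "P"):

```python
# Split a card's total power budget "P" per the quoted formula:
# P = 75 W from the CEM (slot) + (P - 75) W from supplemental connectors.
CEM_W = 75.0  # what the edge connector itself provides

def split_budget(total_w: float) -> tuple[float, float]:
    """Return (watts from slot, watts from supplemental connectors) for budget P."""
    from_slot = min(CEM_W, total_w)
    from_connectors = max(0.0, total_w - CEM_W)
    return from_slot, from_connectors

# PCIe 3.0's 300 W maximum as an example value of P:
slot, aux = split_budget(300.0)
print(f"P = 300 W -> {slot:.0f} W from the slot + {aux:.0f} W from supplemental connectors")
```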
 
Joined
Feb 21, 2014
Messages
1,383 (0.37/day)
Location
Alabama, USA
Processor 5900x
Motherboard MSI MEG UNIFY
Cooling Arctic Liquid Freezer 2 360mm
Memory 4x8GB 3600c16 Ballistix
Video Card(s) EVGA 3080 FTW3 Ultra
Storage 1TB SX8200 Pro, 2TB SanDisk Ultra 3D, 6TB WD Red Pro
Display(s) Acer XV272U
Case Fractal Design Meshify 2
Power Supply Corsair RM850x
Mouse Logitech G502 Hero
Keyboard Ducky One 2