He has a point. Why did the 6-pin and 8-pin GPU connectors even come into existence? The answer likely has less to do with the capacity of the connectors and more to do with separation of use: preventing inappropriate connections, or giving power supply designers more flexibility in how they partition the internals.
Instead of running one bulky 300W connector to 3 or 4 different PCIe cards (when each one only needs an extra 75W), you give each card its own 75W connector.
Then GPUs started drawing 150W, so instead of sending two 75W 6-pin connectors to each card, you create a more compact 150W 8-pin connector.
Let's say you're a 500W PSU. You assume the CPU will take up 100W (despite giving the CPU a 300W-capable cable). How many additional cables do you have coming out of your PSU? Well, you can supply ~400W more. Let's say 100W of that is for fans / hard drives / other stuff. That leaves 300W for GPUs, which is an easy and simple 2x 8-pin (which could convert into 4x 6-pin connectors).
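To make that arithmetic concrete, here's a quick Python sketch of the budget math. The 500W / 100W / 100W figures are the hypothetical numbers from the example above, not any real PSU's spec sheet:

```python
# Rough sketch of the hypothetical 500W budget described above.

PSU_TOTAL_W = 500
CPU_BUDGET_W = 100          # assumed CPU draw, even though its cable could carry more
PERIPHERAL_BUDGET_W = 100   # fans, hard drives, other stuff

gpu_budget_w = PSU_TOTAL_W - CPU_BUDGET_W - PERIPHERAL_BUDGET_W  # 300W left for GPUs

# Express the GPU budget in "connector language":
eight_pin_count = gpu_budget_w // 150   # 150W per 8-pin
six_pin_count = gpu_budget_w // 75      # 75W per 6-pin

print(f"{gpu_budget_w}W for GPUs -> {eight_pin_count}x 8-pin or {six_pin_count}x 6-pin")
# 300W for GPUs -> 2x 8-pin or 4x 6-pin
```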
--------------
Wires have a "language". The power-supply designers are talking to us through the shape of the wires. You can immediately and instinctively tell whether or not a PSU has enough power simply by counting the 75W or 150W connectors dedicated to each task. This was perhaps more important in the days of dual-rail PSUs (ex: you had 600W total, but split across two rails, where the first rail could supply maybe 400W and the second only 200W, so you had to juggle the wires just right to keep the load balanced). Modern PSUs are all advanced enough to be single-rail these days, but you still need a language for estimating the different power-consumption figures.
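That dual-rail "juggling" boils down to a per-rail sum: add up the connectors hung off each rail and check against that rail's limit. A minimal sketch using the hypothetical 400W/200W split from the example (all numbers illustrative, not from any specific PSU):

```python
# Check whether a connector assignment fits each rail's capacity.
RAIL_CAPACITY_W = {"rail1": 400, "rail2": 200}

# Connectors assigned to each rail, expressed in the 75W/150W "language".
assignment_w = {
    "rail1": [150, 150],   # 2x 8-pin GPU connectors
    "rail2": [75, 75],     # 2x 6-pin connectors (or drives/fans)
}

for rail, loads in assignment_w.items():
    total = sum(loads)
    status = "OK" if total <= RAIL_CAPACITY_W[rail] else "OVERLOADED"
    print(f"{rail}: {total}W of {RAIL_CAPACITY_W[rail]}W -> {status}")
```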