
NVIDIA GeForce RTX 5090 Runs on 3x8-Pin PCI Power Adapter, RTX 5080 Not Booting on 2x8-Pin Configuration

Compared to a 5090 that requires 575W while only getting 450W + 75W (525W) through the adapter and slot, the 5080 should be "easier" to run on 2x8-pin.
That assumes the voltage/freq curves for both chips are the same (they aren't) and that the 5080 is pulling the maximum both from the bus and each plug (it isn't, or it wouldn't need the third plug).

Claiming this board is a fail because you can't run it drastically out of spec is an outrageously puerile argument. Had NVidia done what you asked, then the first cable or mobo that was even slightly out-of-spec would have caused anything from a melted cable to an outright fire, and you'd have gone off the rails over that instead.
 
it will just heat up those 3 cables more. Ask me how I know.

[Attachment 383115: photo of the melted adapter]

Was running some maps in PoE 2 when the distinct smell of burning plastic filled the air. I was only running 3 cables because, when I first bought the card, I read that doing so would limit the card in my build to 450W instead of 450W+, and would still be in spec.

I would not recommend it.

Even if your card runs at a “limited” 450W, the TGP isn’t even the maximum the card will draw. The 12VHPWR spec allows for 900W+ to be drawn from the connector for fractions of a second at a time. Coupling that with the increased resistance from not running all the cables the adapter calls for could be the reason yours melted.

Could also be a bad adapter, bad PSU cable connector, cable, or PSU.

To be honest, I really hate the standard. 8 pin PCIe was far better behaved than this nonsense.
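To put rough numbers on the "heats up those 3 cables more" point, here's a back-of-the-envelope I²R sketch. The 575W TGP and 12V rail come from the thread; the per-cable resistance and the equal current split are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope: per-cable heating when sharing the load across
# 4 vs 3 input cables. Equal current sharing and the 10 mOhm round-trip
# resistance are illustrative assumptions, not measurements.
TGP_W = 575          # 5090 TGP, per the thread
RAIL_V = 12.0
R_CABLE_OHM = 0.010  # assumed round-trip resistance per cable

def per_cable_heat(n_cables: int) -> float:
    """Watts dissipated in each cable as I^2 * R."""
    i_per_cable = (TGP_W / RAIL_V) / n_cables  # ~48 A total, split evenly
    return i_per_cable ** 2 * R_CABLE_OHM

print(per_cable_heat(4))  # ~1.4 W per cable
print(per_cable_heat(3))  # ~2.6 W per cable
```

Dropping one cable raises per-cable dissipation by (4/3)² ≈ 1.78x, before accounting for transients or a marginal contact.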
 
That assumes the voltage/freq curves for both chips are the same (they aren't) and that the 5080 is pulling the maximum both from the bus and each plug (it isn't, or it wouldn't need the third plug).

Claiming this board is a fail because you can't run it drastically out of spec is an outrageously puerile argument. Had NVidia done what you asked, then the first cable or mobo that was even slightly out-of-spec would have caused anything from a melted cable to an outright fire, and you'd have gone off the rails over that instead.
Personally, I wouldn't call a GPU a fail just because it can't run with one fewer 8-pin connector than the manufacturer specifies. I might for other reasons, but definitely not this.
I already run a GPU with 3x8-pin, and I was never considering running it any differently, even if it could.
 
So much for all those "but you can undervolt it" arguments.
I don't think that's really relevant here, especially as both cards behave differently. As the article covers, always expect to power the card with what it states it requires first, then you can undervolt it after.

I wouldn't expect a card with 2x8-pin to work with only one connected for example, even if a power target was set needing less than 225w.
 
I don't think that's really relevant here, especially as both cards behave differently. As the article covers, always expect to power the card with what it states it requires first, then you can undervolt it after.

I wouldn't expect a card with 2x8-pin to work with only one connected for example, even if a power target was set needing less than 225w.
Just like I've always said, undervolting isn't an argument if it only works through software, and not from boot.
 
Just like I've always said, undervolting isn't an argument if it only works through software, and not from boot.
I don't know what you mean by "argument". Does undervolting not work when using software? Well, I know the answer: it does.

You might not like it (yeah I know, Linux), but it doesn't mean you can arbitrarily decide it's invalid because it doesn't work from boot.

Undervolting also generally only affects the hardware when a 3D load (i.e. software) is placed on it, so even if it were baked into hardware, it wouldn't matter from boot.

Just connect the card the way the manufacturer says then do what you like with it.
 
I don't know what you mean by "argument". Does undervolting not work when using software? Well, I know the answer: it does.

You might not like it (yeah I know, Linux), but it doesn't mean you can arbitrarily decide it's invalid because it doesn't work from boot.

Undervolting also generally only affects the hardware when a 3D load (i.e. software) is placed on it, so even if it were baked into hardware, it wouldn't matter from boot.

Just connect the card the way the manufacturer says then do what you like with it.
We have to connect the cards properly because they've been designed for X power use. Just because you can undervolt them through software, you still need to account for default power use when factoring in your PSU. So for example, if you don't have 3x 8-pins on your PSU, or the power to feed 3x 8-pins, you shouldn't be thinking about a 5080.

Edit: "But you can undervolt it" is an argument I usually get when I speak up against modern GPUs consuming enormous amounts of power. The example shows that it's not a good argument.
 
Wait, aren't these cards pulling 75w from the PCIe slot as the standard allows them to?

No. As a rule Nvidia only pulls about 2W from the PCIe slot.
 
We have to connect the cards properly because they've been designed for X power use. Just because you can undervolt them through software, you still need to account for default power use when factoring in your PSU. So for example, if you don't have 3x 8-pins on your PSU, or the power to feed 3x 8-pins, you shouldn't be thinking about a 5080.
100% agree, I feel like this was my point...
Edit: "But you can undervolt it" is an argument I usually get when I speak up against modern GPUs consuming enormous amounts of power. The example shows that it's not a good argument.
Disagree. The example just shows you need to connect it as it demands to be connected: basic hardware compatibility.

Then you can do what you like, run stock, undervolt, overclock, etc. I don't see any connection between a card being properly connected to the PC and then how you choose to run it.

GPUs consuming more power than before is true, and "but you can just undervolt them" I think is not a good counter to that either, but I see zero correlation to connecting it properly. You need a PSU that can handle the card at its default TDP: typically in outright wattage, with perhaps a bit of wiggle room depending on how you intend to run it, but 100% required in terms of the physical connectors present. To plan otherwise, even if fully intending to drastically undervolt, would be foolish at best.
Every single Nvidia card I have ever had or worked with disagrees with you.
Don't quote me, but I believe it's been tested, and perhaps even confirmed by NVidia, that the 4090 pulls very little power from the PCI-e slot (and not circa 70-75W). It shouldn't be hard to find articles on; I'll take a look. If that's the case, it stands to reason other 40 and perhaps now 50 series cards operate the same.
 
You need a PSU that can handle the card at its default TDP
Exactly my point. This is why I'm against cards with enormous TDPs and don't accept "but you can undervolt it" as a counter-argument.

Don't quote me, but I believe it's been tested, and perhaps even confirmed by NVidia, that the 4090 pulls very little power from the PCI-e slot (and not circa 70-75W). It shouldn't be hard to find articles on; I'll take a look. If that's the case, it stands to reason other 40 and perhaps now 50 series cards operate the same.
Naturally, cards with external power connectors don't use the PCI-e slot to its full 75 W specification, but to say they use 2 W is a bit daft.
 
Exactly my point. This is why I'm against cards with enormous TDPs.
I can see why, 450w+ is starting to get crazy. I'm getting comfier with the 300w range though :twitch:
Naturally, cards with external power connectors don't use the PCI-e slot to its full 75 W specification, but to say they use 2 W is a bit daft.
2W I'd say seems too low; maybe 10-40W seems more reasonable, even if it really wants all its power through the plug-in cables.
 
I can see why, 450w+ is starting to get crazy. I'm getting comfier with the 300w range though :twitch:
Same here. 300-ish W is fine as long as it doesn't require a million-slot chunky boy cooler that I can't fit into my tiny m-ATX box. :laugh:

Also, I've currently got 2x 8-pin cables connected to my PSU (no pigtails in here), and I'd like to keep it that way because it's a bit hard to access without taking it out. :laugh:

2W I'd say seems too low; maybe 10-40W seems more reasonable, even if it really wants all its power through the plug-in cables.
Agreed.
 
Same here. 300-ish W is fine as long as it doesn't require a million-slot chunky boy cooler that I can't fit into my tiny m-ATX box. :laugh:

Also, I've currently got 2x 8-pin cables connected to my PSU (no pigtails in here), and I'd like to keep it that way because it's a bit hard to access without taking it out. :laugh:
I'm on mini-ITX, and I see some people disassemble the same case I have, install a massive card, and then rebuild the bottom of the case around it... I'm not too fussed as long as it fits, but damn, that's dedication. Doing that, you can technically fit cards slightly longer than the case says it takes.

As for the PSU, 750W with 2x 8-pin should be plenty. Corsair also makes a native 2x 8-pin to 12V-2x6 cable for their PSUs, so I should be alright with either a 9070 XT or 5070 Ti/5080, as long as I can fit the bastard in!
 
I'm on mini-ITX, and I see some people disassemble the same case I have, install a massive card, and then rebuild the bottom of the case around it... I'm not too fussed as long as it fits, but damn, that's dedication. Doing that, you can technically fit cards slightly longer than the case says it takes.
Respect! :)

I used to be on mini-ITX myself, but had enough of the awkward cable management and lack of choice in motherboards. I wouldn't go any bigger than m-ATX, though.

As for the PSU, 750W with 2x 8-pin should be plenty. Corsair also makes a native 2x 8-pin to 12V-2x6 cable for their PSUs, so I should be alright with either a 9070 XT or 5070 Ti/5080, as long as I can fit the bastard in!
I've got 750 W, too, and that's exactly my plan. Then, as much as it hurts, I'm gonna put my upgrade urges to rest for a good 3-ish generations. I just wish those cards had come out before Kingdom Come Deliverance 2. Oh well. :ohwell:
 
No. As a rule Nvidia only pulls about 2W from the PCIe slot.
It's 1.5W actually... /s

I can see why, 450w+ is starting to get crazy. I'm getting comfier with the 300w range though :twitch:

2w I'd say seems to low, maybe 10-40w seems more reasonable if it really wants all it's power through the plug in cables.
10-40W is 5-20x above 2W. No one sane believes that all 75W are utilized when you have 2-3 or even 4 external 8-pin connectors.

BTW I am pretty comfortable with 350+W GPU power consumption since my AIB R9 390X OC version...
 
The 4080 Super does the same thing as the 5080; it looks like there is no lower-limit fallback mode. I posted about it in the 4080 (Super?) FE thread: I initially tried to use the adapter with just two cables, and the card wouldn't wake on POST, so I had to use a pigtail. I'm now using a 2x 8-pin to 12V-2x6 cable, but with the adapter I had to pigtail, as it needed to detect 3 cables connected. Nonsensical, as the 4080 Super has the same TDP as the 3080 FE, which works with 2 input cables; it's an artificial Nvidia restriction. Although the Nvidia leaflet that comes with the GPU tells people not to pigtail, you can bet they will, as it's preferable to buying a new PSU, and that then puts a 66/33 load on the cables instead of 50/50.

It has nothing to do with the card's configuration, what it actually draws, UV, OC, etc.
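The 66/33 point can be sketched quickly. The assumption that each adapter input carries an equal share of the current is illustrative:

```python
# Sketch of the pigtail load split: a 3-input adapter fed by two PSU
# cables, one of them pigtailed into two inputs. Assumes each adapter
# input carries an equal share of the current (illustrative only).
def psu_cable_shares(inputs_per_cable: list[int]) -> list[float]:
    """Fraction of total current carried by each PSU-side cable."""
    total_inputs = sum(inputs_per_cable)
    return [n / total_inputs for n in inputs_per_cable]

# Pigtailed cable feeds 2 of the 3 inputs -> 2/3 vs 1/3 of the current.
print(psu_cable_shares([2, 1]))
```

So the pigtailed PSU cable carries double the current of the single-input one, which is exactly the 66/33 split.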
 
Don't quote me, but I believe it's been tested, and perhaps even confirmed by NVidia, that the 4090 pulls very little power from the PCI-e slot (and not circa 70-75W). It shouldn't be hard to find articles on; I'll take a look. If that's the case, it stands to reason other 40 and perhaps now 50 series cards operate the same.

Just did a Cyberpunk benchmark run.

[Attachment: sensor screenshot from the benchmark run, showing PCIe slot power draw]
 
Damn, just under 9W at its peak! My memory serves me well.

My guess is the only power pulled from the slot is what the PCIe interface itself uses, with the rest of the GPU being powered through the cable.

I can see 10.5GB a second needing 8 watts to move data off the motherboard and into the card.
 
So, if this picture is valid, then we know why the 5080 refuses to boot. In the sensing part, there are only two options: 600W with 4x8-pin and 450W with 3x8-pin. There is no option for 300W, so it falls into the "fail" config group. So it is not a problem of the GPU not being able to scale; it is a problem of the connector not allowing that config as a valid one.
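For context, the 16-pin connector advertises its power limit through two sideband sense pins. Here's a sketch of the decode table as I understand it from the PCIe CEM 5.0 / ATX 3.0 spec (treat the exact pin states as my assumption and verify against the spec); an adapter wired only for the 4x and 3x 8-pin cases would only ever present the 600W and 450W states:

```python
# 12V-2x6 / 12VHPWR sideband sensing: SENSE0/SENSE1 are either grounded
# (True) or left open (False), telling the card its max sustained power.
# Mapping per my reading of PCIe CEM 5.0 / ATX 3.0 -- verify before relying on it.
POWER_LIMIT_W = {
    (True,  True):  600,
    (False, True):  450,
    (True,  False): 300,
    (False, False): 150,
}

def max_sustained_power(sense0_gnd: bool, sense1_gnd: bool) -> int:
    """Max sustained power (W) the card may draw for a sense-pin state."""
    return POWER_LIMIT_W[(sense0_gnd, sense1_gnd)]

# An adapter that only implements the 4x and 3x 8-pin states never
# presents 300 W, so a card fed two inputs sees an invalid config.
print(max_sustained_power(True, True), max_sustained_power(False, True))
```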
 
So, if this picture is valid, then we know why the 5080 refuses to boot. In the sensing part, there are only two options: 600W with 4x8-pin and 450W with 3x8-pin. There is no option for 300W, so it falls into the "fail" config group. So it is not a problem of the GPU not being able to scale; it is a problem of the connector not allowing that config as a valid one.

This also sheds some more light on why the RTX 5070 Ti will also have the 3x 8-pin adapter for the 16-pin connector, even though it's only a 300W card. It would be overkill, as 3x 8-pin = 450W of power, excluding the 75W the PCIe slot can provide; though from what I read, the new cards using the new connector barely pull any power from the PCIe slot.
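The arithmetic behind that, as a quick sketch (150W per 8-pin and 75W for the slot are the PCI-SIG limits):

```python
# Cable-side power budget for an n-way 8-pin adapter.
PIN8_W = 150  # per-connector limit for 8-pin PCIe auxiliary power
SLOT_W = 75   # PCIe slot limit (largely unused by these cards, per the thread)

def adapter_headroom(n_8pin: int, include_slot: bool = False) -> int:
    """Max watts the cabling can deliver by spec."""
    return n_8pin * PIN8_W + (SLOT_W if include_slot else 0)

print(adapter_headroom(3))        # 450: the 5070 Ti / 5080 adapter
print(adapter_headroom(3, True))  # 525 if the slot were fully used
```

So a 3x 8-pin adapter gives a 300W card 50% headroom on the cables alone, before the slot is even counted.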
 
We have to connect the cards properly because they've been designed for X power use. Just because you can undervolt them through software, you still need to account for default power use when factoring in your PSU. So for example, if you don't have 3x 8-pins on your PSU, or the power to feed 3x 8-pins, you shouldn't be thinking about a 5080.

Edit: "But you can undervolt it" is an argument I usually get when I speak up against modern GPUs consuming enormous amounts of power. The example shows that it's not a good argument.
That's completely wrong. The card not starting because a cable is missing has nothing to do with power draw or undervolting. There are 3080 models with 2x 8-pin and 3x 8-pin. They all draw the same power (locked to 360W), but the 3x 8-pin ones don't start with only 2 cables. That has nothing to do with them requiring more power; it's just the way they were designed.
 
That's completely wrong. The card not starting because a cable is missing has nothing to do with power draw or undervolting. There are 3080 models with 2x 8-pin and 3x 8-pin. They all draw the same power (locked to 360W), but the 3x 8-pin ones don't start with only 2 cables. That has nothing to do with them requiring more power; it's just the way they were designed.
Still, Nvidia configured the 5080 to need 3x 8-pins on the adapter exactly because it needs that much power. If it didn't need that power, it wouldn't need 3x 8-pins, either.

A better counter-argument would be that if someone doesn't mind spending a grand on a GPU, then they should also not skimp on a proper PSU.
 
Still, Nvidia configured the 5080 to need 3x 8-pins on the adapter exactly because it needs that much power. If it didn't need that power, it wouldn't need 3x 8-pins, either.

A better counter-argument would be that if someone doesn't mind spending a grand on a GPU, then they should also not skimp on a proper PSU.
So how much power does a 3090 need? Because there are 2x and 3x 8-pin models. Same with the 3080, and probably the 3070 Ti. There is no such thing as need; hardware doesn't have needs. They take as much power as the user decides to give them.
 