
Radeon RX 7000 Series Won't Use 16-pin 12VHPWR, AMD Confirms

btarunr

Editor & Senior Moderator
AMD has officially confirmed that its upcoming Radeon RX 7000 series next-generation graphics cards will not use the 12+4 pin ATX 12VHPWR connector anywhere in the product stack. Scott Herkelman, SVP and GM of the AMD Radeon product group, confirmed on Twitter that the current RX 6000 series and future GPUs based on the RDNA3 graphics architecture will not use this power connector. This means that even add-in board (AIB) partners won't have the connector available as an AMD-qualified part to opt for. The Radeon RX 7000 series will therefore stick with 8-pin PCIe power connectors on the card, each drawing up to 150 W of power. For some of the higher-end products with typical board power of over 375 W, this will mean more than two 8-pin connectors. AMD is expected to debut RDNA3 on November 3, 2022.
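As a rough sanity check of that connector math, here is a minimal sketch in Python, assuming the standard ATX allotments of 75 W from the PCIe x16 slot plus the 150 W per 8-pin figure quoted above:

```python
import math

# Minimal sketch: minimum number of 8-pin PCIe connectors for a given
# total board power. Assumes the standard allotments: 75 W supplied by
# the PCIe x16 slot and 150 W per 8-pin connector.
SLOT_POWER_W = 75.0
POWER_PER_8PIN_W = 150.0

def connectors_needed(board_power_w: float) -> int:
    """Minimum number of 8-pin connectors for a given board power."""
    from_connectors = max(board_power_w - SLOT_POWER_W, 0.0)
    return math.ceil(from_connectors / POWER_PER_8PIN_W)

print(connectors_needed(375))  # 2 -> slot + two 8-pins covers exactly 375 W
print(connectors_needed(450))  # 3 -> anything over 375 W needs a third 8-pin
```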



View at TechPowerUp Main Site | Source
 
This seems like a smart decision, in my opinion.
 
Regardless of the current issue (AMD made that decision months before the 4090 controversies), ATX 3.0 is just not ready.

It's more around December/Q1 2023 that we will see the first broadly available ATX 3.0 PSUs.

Maybe on the AMD 8000-series GPUs, once PSUs are ATX 3.0 and the whole connector is "stabilised" and well tested.

I don't think Nvidia can afford to leave the 12VHPWR issue as-is anyway, and they'll probably clarify this and/or recall products. I'm not really making the RDNA 3 vs. 4090 decision based solely on the connectors, though; I'll see on November 3rd.
 
Let's hope we don't see 4-slot behemoths on the new Radeon GPUs.
 
Regardless of the current issue (AMD made that decision months before the 4090 controversies), ATX 3.0 is just not ready.

It's more around December/Q1 2023 that we will see the first broadly available ATX 3.0 PSUs.

Maybe on the AMD 8000-series GPUs, once PSUs are ATX 3.0 and the whole connector is "stabilised" and well tested.

I don't think Nvidia can afford to leave the 12VHPWR issue as-is anyway, and they'll probably clarify this and/or recall products. I'm not really making the RDNA 3 vs. 4090 decision based solely on the connectors, though; I'll see on November 3rd.
Define broadly available. In my 3rd-world country with a currency slowly being obliterated against the USD, there are already three brands of 12VHPWR PCIe 5.0 and ATX 3.0 PSUs: FSP, Thermaltake, and (I pray that this brand gets its priorities straight for the sake of whoever buys their PSUs) Gigabyte. The FSP even comes in 850 W and 1000 W flavours; the other two are only 1000 W so far. Some of the listings are two weeks or more old. And all of this is ready stock, mind you, no back-order or preorder rubbish.
 
Let's hope we don't see 4-slot behemoths on the new Radeon GPUs.
Remember this double-decker connector from the GTX 680? This could be one way for AIBs to avoid having a row of four 8-pin connectors. I doubt they'll use it, but it exists.

[Image: double-decker power connector on the GTX 680]
 
Why do you care about that? Just buy lower tier cards if that's an issue. Man people get so weird, it's like religion all over again.
[GIF: little girl meme]


Huh???
 
Define broadly available. In my 3rd-world country with a currency slowly being obliterated against the USD, there are already three brands of 12VHPWR PCIe 5.0 and ATX 3.0 PSUs: FSP, Thermaltake, and (I pray that this brand gets its priorities straight for the sake of whoever buys their PSUs) Gigabyte. The FSP even comes in 850 W and 1000 W flavours; the other two are only 1000 W so far. Some of the listings are two weeks or more old. And all of this is ready stock, mind you, no back-order or preorder rubbish.
And in the US there is only a single ATX 3.0 PSU with 12VHPWR under 1200 W that's in stock.
 
Why do you care about that? Just buy lower tier cards if that's an issue. Man people get so weird, it's like religion all over again.
This sentiment is because there are people who like to populate their motherboards with other peripherals. The weight of these cards is also a worry. 2 kg is no joke hanging in a PCIe slot. There is no long-term data to show this is not an issue, especially given the size of the actual card.
 
This sentiment is because there are people who like to populate their motherboards with other peripherals. The weight of these cards is also a worry. 2 kg is no joke hanging in a PCIe slot. There is no long-term data to show this is not an issue, especially given the size of the actual card.
Yes, so why not buy a lower tier card then?

I don't understand the sentiment at all. Do you want Nvidia to use magic and pull performance out of their ass like a rabbit while still maintaining 2 slot cards?
 
I'm curious if AMD might instead look towards adopting EPS12V power connectors should the power requirements increase further. IIRC, there was some discussion on the subject around the time NVIDIA and Intel came up with the 12VHPWR plug of using EPS12V connectors to replace or supplement 8-pin PCIe connectors, as 4-pin EPS12V can output a continuous 155 watts and 8-pin EPS12V a continuous 235 watts (depending on wire quality), vs. the 8-pin PCIe's limit of 150 W continuous. Further, many high-end 1 kW+ PSUs aimed at energy-intensive rigs usually have two, sometimes three, EPS 8-pin cables included in the box. And some modular PSUs, such as Seasonic's, can provide EPS12V or PCIe output from the same modular port, while others, such as EVGA's, have one spare dedicated EPS12V port.
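To put those numbers side by side, here is a minimal sketch comparing the continuous ratings quoted above (the 450 W board power is a hypothetical figure for illustration, and the ratings remain wire-quality dependent):

```python
import math

# Continuous power ratings quoted in the post above (wire-quality dependent).
CONNECTOR_RATINGS_W = {
    "PCIe 8-pin": 150,
    "EPS12V 4-pin": 155,
    "EPS12V 8-pin": 235,
}

BOARD_POWER_W = 450  # hypothetical high-end board power, for illustration

for name, rating in CONNECTOR_RATINGS_W.items():
    count = math.ceil(BOARD_POWER_W / rating)  # connectors to cover the load
    print(f"{name}: {rating} W continuous -> {count} needed for {BOARD_POWER_W} W")
```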
 
I think AMD won't have a better architecture than Nvidia, so if they were to make a monolithic chip they would probably lose in performance per watt.
A chiplet approach could mean better yields, lower cost, and lower power consumption if they don't push the silicon to the limit.
What interests me the most is what they do in the midrange: how can an AMD chiplet design compete with a not-so-big monolithic chip from Nvidia, which is also made at TSMC but on 4 nm?
One thing is sure: GPU sales continue to be very low.
 
Yes, so why not buy a lower tier card then?

I don't understand the sentiment at all. Do you want Nvidia to use magic and pull performance out of their ass like a rabbit while still maintaining 2 slot cards?
So let me understand this. If you get a 13900K/7950X system and want the best performance, taking advantage of the PCIe lanes available, you should gimp your GPU purchase because the manufacturers have made cards too big for that, and that's OK?

The thing that I don't understand is that the 4090 has about the same power draw as the 3090 Ti, yet those cards are 3 slots wide. I can understand SFF builds, but being compromised by the size of the GPU in an ATX build is crazy.

Price already separates motherboards by how much PCIe flexibility they support, but because Nvidia has done this, there is no need for ATX, as Micro ATX does the same thing. I will be interested to see how those boards that have two NVMe slots populated in between the x16 slots will cope with heat dissipation with a giant GPU sitting above them. When (if) DirectStorage becomes a thing, you could have those NVMe drives singing along with the GPU, and that would be quite the heat soak.

Of course, we cannot forget the heat that PCIe 5.0 drives will produce, judging by the exotic heatsinks on the motherboards for that protocol.
 
This sentiment is because there are people who like to populate their motherboards with other peripherals. The weight of these cards is also a worry. 2 kg is no joke hanging in a PCIe slot.
TBH, if I could get that card, I could probably afford a GPU support bracket, or maybe one of those little figurines. Maybe a mini Hulk holding up the card.
 
Yes, so why not buy a lower tier card then?

I don't understand the sentiment at all. Do you want Nvidia to use magic and pull performance out of their ass like a rabbit while still maintaining 2 slot cards?
Hi,
Slot count really isn't the bad thing here; I do remember the silly-looking ASUS with the Noctua heatsink/fans release :laugh:
It's the length and height that are getting really stupid.
 
I think AMD won't have a better architecture than Nvidia, so if they were to make a monolithic chip they would probably lose in performance per watt.
A chiplet approach could mean better yields, lower cost, and lower power consumption if they don't push the silicon to the limit.
What interests me the most is what they do in the midrange: how can an AMD chiplet design compete with a not-so-big monolithic chip from Nvidia, which is also made at TSMC but on 4 nm?
One thing is sure: GPU sales continue to be very low.
Despite the name, Nvidia's GPUs are made on 5 nm. I think AMD has a huge chance to beat Nvidia by a considerable margin; after all, it will have more than twice as many compute units. My only question is the RT performance.
 
I don't understand the sentiment at all. Do you want Nvidia to use magic and pull performance out of their ass like a rabbit while still maintaining 2 slot cards?
That's ironically what they've been doing for the better part of the last 20 years, yes.

A new bar has been set and it's okay to be sentimental about that. You can wait for this size to arrive at the midrange. One gen? Two?
 
So let me understand this. If you get a 13900K/7950X system and want the best performance, taking advantage of the PCIe lanes available, you should gimp your GPU purchase because the manufacturers have made cards too big for that, and that's OK?

The thing that I don't understand is that the 4090 has about the same power draw as the 3090 Ti, yet those cards are 3 slots wide. I can understand SFF builds, but being compromised by the size of the GPU in an ATX build is crazy.
If your build cannot support a thicker card, then you buy a slimmer card and sacrifice performance. Not to mention PCIe lanes are really not an issue anymore for desktops: SLI is dead, and storage doesn't benefit much from the extra speed in the real world.

It's pretty clear that the 4090 cooler was overbuilt and Nvidia cut down the power limit at the very end, maybe expecting outrage, maybe because the performance gains were not enough to justify it. But it gives you the benefit of the card being very quiet.

That's ironically what they've been doing for the better part of the last 20 years, yes.

A new bar has been set and it's okay to be sentimental about that. You can wait for this size to arrive at the midrange. One gen? Two?
So then you go buy the low end if you do not want leading-edge performance.



Did you guys forget to look at performance per watt?

[Chart: performance-per-watt comparison, with the RTX 4090 on top]

The 4090 is the most power-efficient card on the planet by a wide margin. You can expect lower-end cards using the same architecture to also be very fast and small.
 
This seems like a smart decision, in my opinion.

There is no problem with the hybrid 12+4-pin 12VHPWR connector in itself, as long as one uses as many as needed.
One such connector cannot deliver 600 watts. The pins are too few and too thin to sustain the load - both electrical current through them and the dissipated heat from the enormous heatsink nearby.
Just think about it - you need a 2-kilo heatsink to dissipate that energy, which you want to squeeze through super-thin pins. Not gonna happen.

Look at the second image - the normal PCIe 6-pin and 8-pin connectors have a good size for the current that passes through them.
Whoever made the decision to call the new connector "600W-ready" must be fired, his degree in electrical engineering and heat transfer revoked, and he himself publicly humiliated.

This said, the RTX 4090 needs not one but three such connectors to function safely.

[Image 1 attachment]


[Image 2 attachment: PCIe 6-pin and 8-pin connectors]
 
One such connector cannot deliver 600 watts. The pins are too few and too thin to sustain the load - both electrical current through them
That's just wrong. The connector itself is overkill for 50 amps. You only have 8 amps per pin, and you can do 8 amps on a hair.

The issue with the connector has nothing to do with the current or the heat; the only issue is mechanical: it doesn't have a proper latch to resist getting partially pulled out if someone really bends the cables.
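For what it's worth, the per-pin numbers above check out on a napkin. A minimal sketch, assuming a 12 V rail, the 600 W rating, and the six 12V/GND pin pairs of the 12VHPWR connector:

```python
# Back-of-the-envelope check of the per-pin current claim above.
# Assumptions: 12 V rail, 600 W connector rating, and six
# current-carrying 12V pins (with six ground returns) in 12VHPWR.
RAIL_VOLTAGE_V = 12.0
RATED_POWER_W = 600.0
POWER_PINS = 6

total_current = RATED_POWER_W / RAIL_VOLTAGE_V  # 50.0 A total
per_pin = total_current / POWER_PINS            # ~8.33 A per pin

print(f"Total current: {total_current:.1f} A")
print(f"Per-pin current: {per_pin:.2f} A")
```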
 
So then you go buy the low end if you do not want leading-edge performance.
That is a possibility, but that wasn't the subject. You say Nvidia can't pull performance out of a similar slot size, but that's exactly what they've done.
 
That is a possibility, but that wasn't the subject. You say Nvidia can't pull performance out of a similar slot size, but that's exactly what they've done.
But that's my point exactly: wait for the 4060 and buy that if you want 2-slot cards... I'm sure it will be faster than the 2080 Ti.

Even the 4070 might be two-slot, considering the 3080 Ti was.
 
But that's my point exactly: wait for the 4060 and buy that if you want 2-slot cards... I'm sure it will be faster than the 2080 Ti.
The point is people are sentimental about a change in the definition of 'progress'.

I don't see increased slot size for increased performance as progress. It's just about going bigger.
 
The point is people are sentimental about a change in the definition of 'progress'.

I don't see increased slot size for increased performance as progress. It's just about going bigger.
The progress is in the performance-per-watt department; total performance is absolutely irrelevant in that sense. They could have pushed cards to use that amount of power for a long time; there was just no reason to, as there were no performance gains to be had. Now there are gains, so they do it. The only people who benefit are the consumers: you don't have to wait two more years for that performance in a two-slot card.
 