
9070xt, which one...

I'm just curious where you plucked the 315W number from for the Pulse? It's more likely that your source is wrong, that's all.
Sapphire themselves claim it as a 304W card:
From the TPU review. Although, if Sapphire claims it's a 304 W card, then that's what it is.
 
I've worked it out and edited the above post. Nothing sinister, just two different values for two different things.
 
Ah I see... :)

In that case, I don't see why measured and max allowed power consumption would be different on the Pulse, but not on other cards.

Edit: Could W1zzard have taken data from the Pure by accident? I mean, they've got pretty similar names, and that one is a 317 W card according to Sapphire. Or maybe Sapphire changed specs?
 
No, I think W1zzard does show a difference between measured and TBP on other cards.

Sapphire lists 330 W TBP for the Nitro+, but W1zzard cites it as 350 W measured in his review of it:

(screenshot from the TPU review attached)


Possibly the problem is that different manufacturers are treating the TBP acronym differently. T sometimes stands for "typical" and sometimes stands for "total".
 
It does say "Pwr limit Def/Max", though.
 
Phanteks Eclipse 600s, MSI Tomahawk B650

I'm just obsessed with temperature because I have my PC under the desk, and sometimes I touch the glass of my case with my leg, so it isn't nice to touch something hot... And I don't have much space to put my PC anywhere else.

At first I wanted to buy the Red Devil (it's the cheapest one), but after the reviews I'm afraid it will be too hot.

The temperature reading you see is the GPU's own temperature. Ironically, getting a card with better cooling could actually make your PC case/side panel hotter. Why? Because the cooling system isn't removing heat from existence. That would violate the fundamental physical law of conservation of energy. Instead, it's just taking that heat and transferring it out of the GPU and into your case (and room). Your GPU will hence be cooler, but your case and room - hotter. Over time, any GPU with a high power draw will heat up your case and room.

So, what you need is not lower GPU temps, what you need is a lower power draw. Through undervolting and power limiting, you can reduce the power draw of most cards. How much exactly depends on your luck in the silicon lottery and whether or not you're willing to lose performance. If you want further savings, you're going to have to go with a lower-power GPU. The 9070 is more energy efficient than the XT and, if you power limit it, can draw half the power of the XT at stock. This would lead to a lot less heat. You could also look into other lower-powered GPUs, like the 5070 or 5060 Ti 16GB.

Of course, to do this you would be sacrificing performance. So you should decide what's more important for you - having a powerful card or one which doesn't generate a lot of heat. The two are usually mutually exclusive to a large extent.
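For what it's worth, on Linux the power-limit half of this can even be scripted. A minimal sketch, assuming the amdgpu driver, root access and a single GPU - the sysfs path and the 13% cut are illustrative, so check power1_cap_min/max on your own card first:

```python
#!/usr/bin/env python3
# Minimal sketch: lower an AMD GPU's power cap via the amdgpu hwmon
# sysfs interface. Assumes Linux + amdgpu + root; power1_cap is in
# microwatts. Illustrative only - verify paths/limits on your own card.
import glob

caps = glob.glob("/sys/class/drm/card*/device/hwmon/hwmon*/power1_cap")
if not caps:
    raise SystemExit("no amdgpu power1_cap found - driver/path assumption failed")

cap_file = caps[0]
with open(cap_file) as f:
    current_uw = int(f.read())
print(f"current cap: {current_uw / 1e6:.0f} W")

new_uw = int(current_uw * 0.87)  # example: a 13% reduction
with open(cap_file, "w") as f:
    f.write(str(new_uw))
print(f"new cap: {new_uw / 1e6:.0f} W")
```

The undervolt itself still has to be dialled in per card (e.g. in Adrenalin), since the stable offset is down to the silicon lottery.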
 
Sapphire Pulse (Samsung)
-65 mV, -13% power limit, 2666 MHz fast timing on the RAM - runs at 264 W, and I can't hear it over the other fans, which are capped at 1000 rpm max on a quiet curve for both the AIO and intakes/exhausts.
That's an excellent result. Have you tried The Last of Us Part 1? It's a good stability test: 3440x1440, Ultra preset, FSR4 Quality + FG. As I remember, -45 mV was not stable, but only when playing for around 2 hours, and only then does it crash. I didn't even bother to try -40 mV and went straight to -35 mV.

P12 @900rpm
P14 @700rpm
RX 9070 XT @1250rpm
 
No. What I am saying is that my card has not malfunctioned.

There's probably a varying degree of PEBCAK involved, but there were also failures with the safety aspect. That's undeniable. Those issues should be largely solved by now with the H++ terminal. The original is still safe to use if you triple check all connections are in order.
Thank you for taking the time to reply, appreciate it :toast:

IMO, DLSS4-Transformer > FSR4 > DLSS4-CNN > DLSS3-CNN > FSR3
That is correct.

It's all personal preference really!
<3

there is overwhelming evidence out there in photos and videos to prove that most of this PEBCAK gaslighting is unwarranted - the connector design is faulty and more GPUs continue to burn with every passing day even with all these minor changes to the standard to help prevent the problem.
I hear you.

They're all accidents waiting to happen IMO, it's simply a question of time.
Hopefully the next iterations of the PCIe 16-pin connector will take all of these concerns into account 🤞.

That's my complaint, really.

There are documented cases where cables are properly inserted, checked on camera and then monitored - with various methods from different videos/articles using either the Asus GPU pin monitoring, clip-on ammeters, or the WireView - proving there's sometimes a serious imbalance of current flow through the different wires even when everything is done perfectly. Following all the recommendations and double-checking your connections does not guarantee you'll be fine.

Seasonic's newly announced PSUs that have per-wire monitoring at the GPU end are doing what you say (and I agree) AIBs should have done on the board to correct the issue, like Asus has on a couple of models - though realistically the underlying problem is that there simply isn't enough safety margin built into each wire of the new connectors. The old PCIe 8-pin sure is an inefficient use of wire, but it has almost 3x the safety factor in terms of current per AWG16 strand, and it's becoming apparent that modern GPUs have been dipping into that 3x safety factor quite heavily, even on cards that still use 8-pin connectors!
Thanks for sharing the knowledge and experience, appreciate it :toast:
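To put rough numbers on that safety-factor comparison - a back-of-the-envelope sketch, assuming commonly cited per-pin ratings (around 9 A standard / 13 A HCS for Mini-Fit 8-pin terminals, 9.5 A for the 16-pin; the real figures depend on the exact Molex part and should be checked against the datasheets):

```python
# Back-of-the-envelope per-wire safety factors, assuming a perfectly
# even current split across the live wires. Pin ratings are assumptions.
def per_wire_amps(watts, live_wires, volts=12.0):
    return watts / volts / live_wires

for name, watts, wires, rating_a in [
    ("PCIe 8-pin @ 150 W (std ~9 A pins)", 150, 3, 9.0),
    ("PCIe 8-pin @ 150 W (HCS ~13 A pins)", 150, 3, 13.0),
    ("12V-2x6 @ 600 W (~9.5 A pins)", 600, 6, 9.5),
]:
    amps = per_wire_amps(watts, wires)
    print(f"{name}: {amps:.1f} A/wire, safety factor {rating_a / amps:.1f}x")
```

That lands around 2-3x for the 8-pin versus roughly 1.1x for the 16-pin, and the "even split" assumption is exactly what the per-wire monitoring is there to check.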
 
The cheapest 5070 Ti in my country costs $200 more than a high-end 9070 XT, and I've never had problems with AMD cards before, from the 4850 till now with the 6900 XT.

@eidairaman1
Hard with 49" monitor
Thanks for telling the truth.
 
Not even on top of your desk, to avoid dust build-up?
That is why I'm avoiding bottom case fans.

The temperature reading you see is the GPU's own temperature. Ironically, getting a card with better cooling could actually make your PC case/side panel hotter. Why? Because the cooling system isn't removing heat from existence. That would violate the fundamental physical law of conservation of energy. Instead, it's just taking that heat and transferring it out of the GPU and into your case (and room). Your GPU will hence be cooler, but your case and room - hotter. Over time, any GPU with a high power draw will heat up your case and room.

So, what you need is not lower GPU temps, what you need is a lower power draw. Through undervolting and power limiting, you can reduce the power draw of most cards. How much exactly depends on your luck in the silicon lottery and whether or not you're willing to lose performance. If you want further savings, you're going to have to go with a lower-power GPU. The 9070 is more energy efficient than the XT and, if you power limit it, can draw half the power of the XT at stock. This would lead to a lot less heat. You could also look into other lower-powered GPUs, like the 5070 or 5060 Ti 16GB.

Of course, to do this you would be sacrificing performance. So you should decide what's more important for you - having a powerful card or one which doesn't generate a lot of heat. The two are usually mutually exclusive to a large extent.
I can confirm that: if I do a 30-minute Superposition 8K test, with an efficient rear case fan and good intake, it raises the room temperature by roughly 1 °C.
The side panel in my case is glass (as in many others); with a metal one, the room temperature might rise even more. However, I prefer a higher room temperature to the GPU baking.
Power draw will remain a concern, but undervolting, power limits and professional cards (RTX 4500) remain very good counters.
 
I'm not talking about which connector is on the PSU; I'm talking about whether it's a single cable or an adapter, which implies another set of connections.

I never said that it never happens with weaker cards. I said there is less risk with those.
And nothing is risk-free, well, except not having a PC; then it can't blow up.

Yes, max current per pin is the problem, and it's the manufacturers' responsibility to adjust the things the spec doesn't impose in a way that is safer. The wires are not required to be 16AWG, but they went with the lowest common denominator, i.e. what gets the job done. 14AWG would have been safer, but it's not like I can do anything about it. Actually, even with 14AWG there is almost the same risk, because the uneven draw is caused by the pin contact. 14AWG simply allows for higher amperage, but with bad pin contact you could still have at least one wire running over spec even with 14AWG.
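To illustrate that last point with a toy model (all resistance values below are made-up assumptions, purely to show the mechanism): six parallel 12 V wires sharing 50 A, with one degraded pin contact, and the current redistributes onto the healthy wires:

```python
# Toy model: six parallel supply paths, each = wire + contact resistance.
# One degraded contact pushes the healthy wires toward/over spec.
# All values are illustrative assumptions, not measurements.
TOTAL_A = 50.0             # ~600 W / 12 V
WIRE_MOHM = 10.0           # assumed per-wire resistance, milliohms
contact_mohm = [5.0] * 6   # per-pin contact resistance
contact_mohm[0] = 50.0     # one degraded pin

conductances = [1.0 / (WIRE_MOHM + c) for c in contact_mohm]
g_total = sum(conductances)
for i, g in enumerate(conductances):
    amps = TOTAL_A * g / g_total
    flag = "  <- at/over a 9.5 A pin rating" if amps >= 9.5 else ""
    print(f"wire {i}: {amps:.2f} A{flag}")
```

Thicker wire lowers the wire term, but as you say, it does nothing about the contact-resistance term that actually causes the skew.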

If you want to protest against the connector by buying only 8-pin cards, that's fine, but not all of us can do that, because for some there is no choice but Nvidia (productivity and all that jazz).

So if you argue that people should completely ignore the Taichi and Nitro+ even when the price is good, I think that's a little extreme. It's a con, yes, but a dealbreaker?
Single cables (no 8-pin adapter used) were also affected.
There could be another choice: PSU and GPU manufacturers should team up and come up with a truly revised version, with a proper safety margin. For the sake of safety.

People are forgetting AMD is a member of PCI-SIG; they've likely had some level of input in this connector's creation. The connector itself is fine; it's the load-balancing side that doesn't exist, because of oversimplified circuitry. That's on NV.
Is this your attempt to downplay Nvidia's role in that shitty connector? AMD is part of the consortium too, so it must be responsible too, like the hundreds of other members.

The connectors won't melt and won't have any issues if installed correctly. It's time to start accepting that it exists, was designed to replace the old connectors, and will replace them - and the only reason the 7900 XTX didn't already use it 3 years ago is that its board design was already finished by the time the spec closed.
Sure. It only takes knowing whether it's installed correctly. Even fully inserted does not mean correctly installed, as we saw in pictures and in der8auer's video.

Good on AMD for not enforcing it on the 9000 series, as it's reasonable to believe some of their customers don't have adequate power supplies yet.
:kookoo: PSU is not the problem here.

What exactly is the point of this post, to showcase the Astral's safety feature? Because no other 5090 has per-pin sensing AFAIK. None of the reference PCB models do, and neither does the FE. It was kept out of lower-end ASUS models as well.
The point was to show the dude how unevenly current can be distributed. He had doubts, so I showed him a few pictures. It's easy with the Astral RTX 5090, as it has per-pin current sensing. Now imagine how many less smart RTX 5090s are out there, experiencing uneven current distribution without their users knowing anything about it. Sure, Asus did not SOLVE the problem; they brought an early-warning solution that can possibly mitigate the impact of pin/wire overheating. Still, not a solution.

I use my 12V-2x6 cable because I have no choice in the matter, but even at just 300W it makes me uneasy - I've seen multiple pieces of content now showing that the current is often distributed unevenly across the individual wires, and as an engineer I understand how temperature and resistance imbalances in (and here's the important part) an UNMONITORED cable result in a positive feedback loop: the hot wire's resistance rises with temperature, and higher resistance means more I²R heating at the same current, which makes the wire hotter still. They're all accidents waiting to happen IMO, it's simply a question of time.
Well said.
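To put numbers on the feedback loop described in that quote - a tiny sketch, assuming copper's temperature coefficient (about 0.00393 per kelvin) and a fixed current forced through one wire:

```python
# Illustration of the thermal feedback: copper resistance rises with
# temperature, so at a fixed current the I^2*R heating rises with it.
# R0 and I are assumed values, not measurements of any real cable.
ALPHA = 0.00393   # copper temperature coefficient, 1/K
R0 = 0.010        # assumed wire + contact resistance at room temp, ohms
I = 9.5           # amps through this one wire

for dT in (0, 25, 50, 75, 100):
    r = R0 * (1 + ALPHA * dT)
    print(f"+{dT:3d} K: R = {r * 1000:.1f} mOhm, heating = {I * I * r:.2f} W")
```

That's roughly 40% more dissipation at +100 K for the same current; whether it runs away or settles depends on how well that spot can shed heat, which is exactly why an unmonitored cable is the worrying case.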

It's not really gaslighting; ensuring all connectors are properly inserted will also ensure the current load is split evenly across all wires, which will drastically reduce the chance of failure. That's more on the GPUs for not having monitoring and load balancing, really. The WireView is a cool gadget; wish I could buy one. They've been perma out of stock here.

In any case, I have my rig on a bench, where the cable is completely stress-free, and I made sure that it's inserted correctly. It's also an older H+ cable, and I am putting the 600 W of a 5090 through it. No cable overheating or anything off to report thus far; if something does happen, I'll make sure to document it and post a thread about it.
You still seem not to understand that even a properly (fully and without bends) inserted connector can cause a problem. Connector manufacturing variation can be a problem when robustness is badly lacking.
Users fully insert the connector, yet it happens. What else are they supposed to do? Grab a multimeter and check resistance themselves? There's simply no feedback.
 
Same Roman:

Out of 3,700 units of the WireView sold so far, 12 confirmed cases of melted 12VHPWR connectors.
That's 12 potential house fires from a teeny tiny slice of the overall market, from a high-end market segment using higher-quality cables than most people will be using, and likely from enthusiasts who know how to plug in a cable properly - i.e., if you're buying aftermarket upgrade cables, it's probably not your first rodeo...
 
(This Is Fine GIF)
 
Is this your attempt to downplay Nvidia's role in that shitty connector? AMD is part of the consortium too, so it must be responsible too, like the hundreds of other members.

Not really, no. I do understand it, and I think you can deduce that from the exchange I had with Chrispy there. You aren't wrong, and since you're erring on the side of caution (which is, well, prudent when it comes to electricity), I'll side with you here, too, even if I personally think it's nowhere near as big a deal as it's been made out to be thus far. I embrace change, and that includes this new connector; considering I did buy an RTX 5090, it does pass on the message that I'm comfortable living on the edge ;)

I'll let you guys know if my card, power supply or H+ cable burns :laugh:
 
From "which 9070 XT" all the way to burning Nvidia connectors? Looks like you managed to ruin a good thing again.
 