
9070xt, which one...

All 9070 models run at about the same temperatures; the only real differences are size and, probably, noise. Check out some reviews and pick the model you like, imo. :)
 
What is the improvement from the original version?
They added supports at the front of the card so that the heatsink doesn't sag or move :D
I don't know whether this is really a revision, or whether they finally solved a QA problem.
Other than that, many Pulses are equipped with Samsung memory. My friend has one.

Can you tell me your temps?

At the moment I'll probably buy the TUF too; I'm afraid of the 12V connector on the Nitro+. Maybe there isn't any news about melting on the Nitro+, but knowing my luck, I don't want to try ^^"

Temps during the Steel Nomad test, PL 110%, default core voltage and memory, OC BIOS:
GPU temp max. 55°C (don't take this temp into account, it's a joke)
Hotspot max. 76°C
Memory junction temp max. 82°C
Fan PWM max. 51%
At 51%, the card is audible.

When I lowered the PL to 90% (293W), applied a -70mV undervolt, and even OCed the VRAM to 2740 MHz:
GPU temp max. 54°C
Hotspot max. 74°C
MJT max. 80°C
Fan PWM max. 43%
Even a roughly 10% difference in fan PWM made a huge difference. I don't mind this card up to 45% PWM; above that it's a no-go zone for me.
Keep in mind that I have a full tower case with plenty of fans. Don't expect such temps in a MATX/ITX case.

The Nitro+ is an overcomplicated card. That detachable back panel does more harm than good; it actually increases temps. Who would have thought.
I'm not saying it's a bad card, it's just... sometimes less is more.

I agree (though I'm quite satisfied with my Gigabyte Gaming OC 9070 XT), but if I could buy a new card right now, I would go with one of the XFX OC versions with the vapor chamber.

Better temps mean it should live longer, and XFX is, I think, the best for temps right now.
Ah, can you add what case you've got and your other components, like the mobo etc.? Because the XFX will fill your case really well, it's a big card.
I don't recommend Gigabyte cards because of the clicking sound the fans produce in 0 dB mode, and because those thermal pads are a ticking time bomb as well.

I don’t agree with this obsession with temperatures.

The impact on component life between 50 and 100 degrees? Yeah, a bit. Between 65 and 75? No. Also, the risk is that the manufacturers of the lower-temperature cards have already factored temperature-related longevity into the equation and use lower-quality components.
It really is a difference whether silicon is operating at 115°C or 95°C. Still, lower is always better, not only for lifespan but also for noise.
Basically, you want everything inside a case to be as cool as possible, because one component can heat up another.
Just like hot air from the GPU is directed into the CPU cooler, or an M.2 SSD mounted behind the GPU runs cooler when the GPU is cooler.
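As a toy illustration of why the gap at the top end matters, here's a sketch using the common Arrhenius-style rule of thumb that expected component life roughly halves for every 10°C rise; the baseline lifetime and the rule itself are assumptions for illustration only, not vendor data:

```python
# Toy illustration: Arrhenius-style rule of thumb where expected life roughly
# halves per +10 °C. The baseline figures are assumed purely for illustration.

BASELINE_TEMP_C = 65
BASELINE_LIFE_H = 50_000          # assumed reference lifetime at 65 °C

def relative_life_hours(temp_c):
    # Each 10 °C above the baseline halves the estimate; each 10 °C below doubles it.
    return BASELINE_LIFE_H * 2 ** ((BASELINE_TEMP_C - temp_c) / 10)

for t in (65, 75, 95, 115):
    print(f"{t} °C -> ~{relative_life_hours(t):,.0f} h (relative estimate)")
```

On that kind of curve, 65°C vs 75°C is a factor of two on an already long estimate, while 95°C vs 115°C is a factor of four on a much shorter one, which is roughly the shape of the argument above.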

At this power draw level I would reckon there is no cause for concern, but still, when it comes to principles, AMD partners using the nVidia connector is treason and bad karma.
The Taichi can draw more than 360W. If I remember correctly, it's 340W at default plus 10%, so something past 370W is the absolute maximum.
The RTX 5080's max. power draw is 360W. And yet, there are cases of melted connectors on RTX 5080s.

Current takes the path of least resistance. The problem with this connector is the variance in resistance between pins, and thus between wires as well.
One pin in the connector is rated for 9.3A max. I'm going to use a 300W power draw for my example.
With a 300W power draw, in ideal conditions where there are no resistance differences between pins, each pin carries about 4.16A (6 × 4.16A ≈ 25A, and 25A × 12V = 300W).
Current takes the path of least resistance, and when the resistance is the same on all pins, the current is distributed equally among them.
That is perfectly safe and fine.

Shit starts to happen when there are differences in resistance.
What if just one pin carries more than 9.3A? Like 13 amps? Maybe the current gets distributed like this: 3.6A, 13A, 2A, 3.4A, 1.3A, 1.7A.
It's still just 25A in total, but not evenly distributed. That 13A pin will cause melting over time.
It does not matter whether the GPU has a power draw of 300W, 400W or 500W (up to the rated maximum of 600W).
What matters is that the max. current per pin (and wire) must stay below 9.3A.
Of course, lower power draw decreases the chance of melting, because lower currents are at play, but it does NOT eliminate the possibility of the problem occurring.
Melted connectors on the RTX 5080 are a good example that even a card drawing less than 400W can suffer from this problem.
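To put that arithmetic in one place, here's a minimal sketch that checks a per-pin current split against the 9.3A limit quoted above; the 12V rail, the six current-carrying pins and the example splits come from the post, everything else is an assumption for illustration:

```python
# Minimal sketch: check a per-pin current split against the 9.3 A per-pin
# limit quoted above. Example splits mirror the post; nothing here is measured.

RAIL_VOLTAGE = 12.0   # V
PIN_LIMIT_A = 9.3     # max rated current per pin, from the post
NUM_PINS = 6          # current-carrying 12 V pins

def check_split(pin_currents_a):
    total_w = sum(pin_currents_a) * RAIL_VOLTAGE
    worst = max(pin_currents_a)
    verdict = "OK" if worst < PIN_LIMIT_A else "OVER LIMIT"
    print(f"total ≈ {total_w:.0f} W, worst pin = {worst:.2f} A -> {verdict}")

# Ideal case: 300 W spread evenly over 6 pins, ~4.17 A each.
check_split([300 / RAIL_VOLTAGE / NUM_PINS] * NUM_PINS)

# Skewed case from the post: same 25 A total, but one pin at 13 A.
check_split([3.6, 13.0, 2.0, 3.4, 1.3, 1.7])
```

Same total power both times; only the per-pin distribution decides whether anything melts.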
 
I don't recommend Gigabyte cards because of the clicking sound the fans produce in 0 dB mode, and because those thermal pads are a ticking time bomb as well.
I agree with you about the thermal "gel", but the fans on the Gigabyte RX 9070 XT OC are marvelous (I haven't heard any clicking sound on my card). I start to notice them around 35-40% fan speed (1700+ RPM, I think); I would love to have that design in case fans, because at 1500 RPM it's barely audible and I need to focus on it to hear it.
But again, right now I would buy the XFX with the vapor chamber, just because I like vapor chambers :D (I like trains too :) )
 
I agree with you about the thermal "gel", but the fans on the Gigabyte RX 9070 XT OC are marvelous (I haven't heard any clicking sound on my card). I start to notice them around 35-40% fan speed (1700+ RPM, I think); I would love to have that design in case fans, because at 1500 RPM it's barely audible and I need to focus on it to hear it.
But again, right now I would buy the XFX with the vapor chamber, just because I like vapor chambers :D (I like trains too :) )
Maybe you should buy the RTX 5090 FE, it has a 3D vapor chamber!
 
There are some nuances to the physics as well (and as usual, the devil is in the details), and that's where things get interesting! More mass means more capacity to store energy; the capability to dissipate it depends on the design: heatpipes, fins, quality, TIM material and airflow. So more mass does not directly equal less noise or better cooling performance; at best it means a better potential to generate less noise.
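As a back-of-the-envelope illustration of that point (every number below is assumed purely for the example): stored heat scales with mass (Q = m·c·ΔT), but the steady-state temperature only depends on dissipated power and the cooler's thermal resistance (ΔT = P·R_th), so extra metal mostly buys a slower warm-up, not a lower final temperature:

```python
# Back-of-the-envelope sketch: heatsink mass vs. steady-state temperature.
# All numbers are assumed purely for illustration, not measured values.

SPECIFIC_HEAT_AL = 0.9   # J/(g*K), aluminium
POWER_W = 300            # heat dumped into the cooler
R_TH_K_PER_W = 0.15      # assumed overall cooler thermal resistance
AMBIENT_C = 25

for mass_g in (800, 1600):               # "light" vs "heavy" heatsink
    # Steady state is set by thermal resistance, independent of mass.
    t_steady = AMBIENT_C + POWER_W * R_TH_K_PER_W
    # Crude ramp estimate: time to soak up a 40 K rise if all heat stayed in the metal.
    ramp_s = mass_g * SPECIFIC_HEAT_AL * 40 / POWER_W
    print(f"{mass_g} g: steady ≈ {t_steady:.0f} °C, ~{ramp_s:.0f} s to warm up by 40 K")
```

Steady-state temps come out identical for both masses; what changes with mass is how long bursts take to heat the metal, which is why weight alone doesn't predict the noise or temperature numbers in the reviews.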

This is evident in the data from the various reviews in this thread: some cards clearly show less noise, some are slightly cooler and some are clearly heavier. All of these are metrics of physics and of how the physics is implemented; just pick the metric(s) most important to you and choose accordingly =)

The XFX and Palit cards are perfect examples of this: they outweigh the competition by up to a kilogram and they are bulkier, yet they are noisier. Maybe they would be relatively quieter at 500W in, but that is not relevant; maybe they would be less noisy with a different fan curve or a different fan, but the fact is that out of the box they are not delivering better performance from a cooling and noise perspective than e.g. Sapphire and PowerColor (as can be seen in all the reviews).

A fun anecdote: "obsession with weight, in the absence of an understanding of what it may infer, led Japanese manufacturers in the 70s and 80s to fit heavy plates to the bottom chassis of amps. This supported a sales technique of having a customer lift an amp to see how heavy it was and, therefore, how robustly it was constructed." https://forums.audioholics.com/foru...ter-constructed-better-power-supplies.114110/ So it is nothing new =)

I suspect the vendors have different strategies here and focus their marketing differently, because we all differ in what we want; more than that, some are smarter than others.

I will now shut up and not derail this =)

Sure, but the difference in weight and size of the PowerColor cards is so great that I doubt you can find anything that contradicts physics.

I don't recommend Gigabyte cards because of the clicking sound the fans produce in 0 dB mode, and because those thermal pads are a ticking time bomb as well.

Stay away from those; they are the cheapest here, and there are lots of refurbished/2nd-hand cards being sold as new.
 
Sure, but the difference in weight and size of the PowerColor cards is so great that I doubt you can find anything that contradicts physics.
As I've seen/read, the smaller 9070 (XT) cards tend to produce slightly more noise with the same temperatures. Bigger cards are factory overclocked / power limit raised ones, too.
 
Nitro+ or Taichi. If you don't want the new 2x6 power connector, get the TUF, IMO.
 
I looked for a Pulse or a Pure, thinking I'd save some cash. Fun fact: the Nitro+ is the cheapest from Sapphire; the Pulse costs like $20+ more and the Pure $30+ more :|
If Nitro+ is cheaper than Pulse and Pure and if TUF is similar to Nitro+ then you have an easy choice. Might as well flip a coin.
Good luck, you're getting a great card either way!
At the start I wanted to buy the Red Devil (it's the cheapest one), but after the reviews I'm afraid it will be too hot.
Or the Red Devil if it's similar to those two. That Hellstone might come in handy!
What if just one pin carries more than 9.3A? Like 13 amps? Maybe the current gets distributed like this: 3.6A, 13A, 2A, 3.4A, 1.3A, 1.7A.
That's like 6 times more current going through one wire than the others. I highly doubt that would happen in a normal situation; you could only see something like that with a faulty connector, like manually opening up the terminals like JTC did.
 
Or the Red Devil if it's similar to those two. That Hellstone might come in handy!
I wonder if it boosts your FPS while playing Doom. :laugh:
 
That's like 6 times more current going through one wire than the others. I highly doubt that would happen in a normal situation; you could only see something like that with a faulty connector, like manually opening up the terminals like JTC did.
It's not like it hasn't happened; this is the RTX 5090 ROG Astral with integrated per-pin current sensing:
[attached: five screenshots of per-pin current readings from RTX 5090 ROG Astral cards]
11.90A / 0.22A ≈ 54, so a 54× difference between the most and the least loaded pin.
So, are you still having doubts?

And that's just one model of RTX 5090. Imagine how many others without any warning system are out there.
 
HERESY!!!!!

Your fault or something like that!

/S

I honestly don't understand how many more melted connectors are needed before people stop buying this garbage.
 
I own the Pulse, and I've built systems with the TUF and the Nitro+.

Honestly, the Pulse is the one I prefer of the three, mainly for the 8-pin cables instead of the silly new melty connector on the Nitro+.

They're all good, though - just get whichever one is cheapest.
 
So, are you still having doubts?
Yes.
Do all those people use a native cable?
By native cable I mean either a 16-pin to 16-pin or a 16-pin to 2x8-pin cable that came with the PSU (or one made by the same manufacturer and compatible with their PSU, for example the Corsair 12VHPWR Type 4 cable used on the RM1000x).
If some of those people are using a 4-way "octopus" adapter, that could very well be the culprit, because it adds another set of connection points.

Also, the 5090 is a special case; I haven't seen as many reported cases with the 4070 Ti Super, 4080 Super, 5070 Ti or 5080 as with the 4090 and 5090.

Make no mistake, a lot of people are playing flame-generation roulette by using the squid adapter or various aftermarket cables, 90° or not, caring more about the sleeving and its color matching the build than about the quality of the cable.

So I would definitely recommend that people avoid this connector as much as possible, but for cards with "normal" power draw I wouldn't lose sleep over it. After all, if total avoidance were the objective, almost all nVidia cards would be off the table. A bit extreme, no?
I honestly don't understand how many more melted connectors are needed before people stop buying this garbage.
I don't see how it would go away now that even AMD cards are soiled by it.
In this regard AMD made a huge mistake by not forcing AIBs to use only 8-pin connectors on their cards.
 
Yes.
Do all those people use a native cable?
By native cable I mean either a 16-pin to 16-pin or a 16-pin to 2x8-pin cable that came with the PSU (or one made by the same manufacturer and compatible with their PSU, for example the Corsair 12VHPWR Type 4 cable used on the RM1000x).
If some of those people are using a 4-way "octopus" adapter, that could very well be the culprit, because it adds another set of connection points.

Also, the 5090 is a special case; I haven't seen as many reported cases with the 4070 Ti Super, 4080 Super, 5070 Ti or 5080 as with the 4090 and 5090.

Make no mistake, a lot of people are playing flame-generation roulette by using the squid adapter or various aftermarket cables, 90° or not, caring more about the sleeving and its color matching the build than about the quality of the cable.

So I would definitely recommend that people avoid this connector as much as possible, but for cards with "normal" power draw I wouldn't lose sleep over it. After all, if total avoidance were the objective, almost all nVidia cards would be off the table. A bit extreme, no?

I don't see how it would go away now that even AMD cards are soiled by it.
In this regard AMD made a huge mistake by not forcing AIBs to use only 8-pin connectors on their cards.
Dude, there's been a helluva lot of talk about this already. Yes, it also happens when the PSU is equipped with a native 12-pin connector; go check the internet. RTX 4080 (S) cards with melted connectors have been reported, too.

It's not about power draw, it's about max. current per pin and wire. Look at the images I posted above. The same thing happened to the RTX 5080, which has a lower TGP than the RX 9070 XT Taichi.

Once again: with an overall lower power draw, there's less chance for the connector to melt. That does not mean there is no chance. The thing is, not only the GPU gets damaged, but the PSU as well. I don't understand the reason for risking this when there are clearly other, safer options available.
 
Dude, there's been a helluva lot of talk about this already. Yes, it also happens when the PSU is equipped with a native 12-pin connector; go check the internet. RTX 4080 (S) cards with melted connectors have been reported, too.

It's not about power draw, it's about max. current per pin and wire. Look at the images I posted above. The same thing happened to the RTX 5080, which has a lower TGP than the RX 9070 XT Taichi.

Once again: with an overall lower power draw, there's less chance for the connector to melt. That does not mean there is no chance. The thing is, not only the GPU gets damaged, but the PSU as well. I don't understand the reason for risking this when there are clearly other, safer options available.
I'm not talking about which connector is on the PSU; I'm talking about whether it's a single cable or an adapter, which implies another set of connections.

I never said it never happens with weaker cards. I said there is less risk with those.
And nothing is risk-free, well, except not having a PC, then it can't blow up.

Yes, max current per pin is the problem, and it's the manufacturers' responsibility to adjust the things the spec doesn't impose in a way that is safer. The wires are not mandated to be 16AWG, but they went with the lowest common denominator, i.e. what gets the job done. 14AWG would have been safer, but it's not like I can do anything about it. Actually, even with 14AWG the risk is almost the same, because the uneven draw is caused by the pin contact. 14AWG simply allows for higher amperage, but with bad pin contact you could still have at least one wire running over spec even with 14AWG.
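A rough current-divider sketch of that point; every resistance below is an assumed ballpark value (not a spec or a measurement), with five pins given poor contact and one good pin so the current funnels into it:

```python
# Rough sketch: does 14 AWG vs 16 AWG wire change the per-pin imbalance when
# pin contact varies? All resistance values are assumed ballpark figures.

def split_current(total_a, branch_resistances_ohm):
    # Parallel branches across the same two nodes: each current is
    # proportional to that branch's conductance (1/R).
    conductances = [1.0 / r for r in branch_resistances_ohm]
    g_total = sum(conductances)
    return [total_a * g / g_total for g in conductances]

WIRE_R_OHM = {"16 AWG": 0.0040, "14 AWG": 0.0025}  # ~0.3 m of copper, assumed
GOOD_CONTACT = 0.005    # one well-seated pin, assumed
BAD_CONTACT = 0.030     # five degraded pins, assumed

for gauge, r_wire in WIRE_R_OHM.items():
    branches = [r_wire + GOOD_CONTACT] + [r_wire + BAD_CONTACT] * 5
    currents = split_current(25.0, branches)   # 25 A total, ~300 W at 12 V
    print(gauge, "->", " ".join(f"{i:.1f}A" for i in currents))
```

In this assumed scenario the one well-seated pin ends up around 10-12A with either gauge, i.e. over the 9.3A limit, which is exactly the point: thicker wire raises ampacity but does nothing about the contact-resistance lottery.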

If you want to protest against the connector by buying only 8-pin cards, that's fine, but not all of us can do that, because for some there is no other choice but nVidia (productivity and all that jazz).

So if you argue that people should completely ignore the Taichi and Nitro+ even when the price is good, I think that's a little extreme. It's a con, yes, but a dealbreaker?
 
I'm not talking about which connector is on the PSU; I'm talking about whether it's a single cable or an adapter, which implies another set of connections.

I never said it never happens with weaker cards. I said there is less risk with those.
And nothing is risk-free, well, except not having a PC, then it can't blow up.

Yes, max current per pin is the problem, and it's the manufacturers' responsibility to adjust the things the spec doesn't impose in a way that is safer. The wires are not mandated to be 16AWG, but they went with the lowest common denominator, i.e. what gets the job done. 14AWG would have been safer, but it's not like I can do anything about it. Actually, even with 14AWG the risk is almost the same, because the uneven draw is caused by the pin contact. 14AWG simply allows for higher amperage, but with bad pin contact you could still have at least one wire running over spec even with 14AWG.

If you want to protest against the connector by buying only 8-pin cards, that's fine, but not all of us can do that, because for some there is no other choice but nVidia (productivity and all that jazz).

So if you argue that people should completely ignore the Taichi and Nitro+ even when the price is good, I think that's a little extreme. It's a con, yes, but a dealbreaker?

It is not a con. It's completely overblown and has long since veered into what-ifs, conspiracy theories and flat-out doomposting.

People are forgetting that AMD is a member of PCI-SIG; they've likely had some level of input into this connector's creation, and the connector itself is fine. It's the load-balancing side that doesn't exist, because of oversimplified circuitry. That's on NV.

The connectors won't melt and won't have any issues if installed correctly. It's time to start accepting that it exists, that it was designed to replace the old connectors, and that it will; the only reason the 7900 XTX didn't already use it three years ago is that its board design was already finished by the time the spec was closed.

Good on AMD for not enforcing it on the 9000 series, as it's reasonable to believe some of their customers don't have adequate power supplies yet.

It's not like it hasn't happened; this is the RTX 5090 ROG Astral with integrated per-pin current sensing:
[attached: five screenshots of per-pin current readings from RTX 5090 ROG Astral cards]

11.90A / 0.22A ≈ 54, so a 54× difference between the most and the least loaded pin.
So, are you still having doubts?

And that's just one model of RTX 5090. Imagine how many others without any warning system are out there.

What exactly is the point of this post, to showcase the Astral's safety feature? Because no other 5090 has per-pin sensing, AFAIK. None of the reference PCB models do, and neither does the FE. It was kept out of the lower-end ASUS models as well.
 
11.90A / 0.22A ≈ 54, so a 54× difference between the most and the least loaded pin.
So, are you still having doubts?

And that's just one model of RTX 5090. Imagine how many others without any warning system are out there.

Current takes the path of least resistance. The problem with this connector is the variance in resistance between pins, and thus between wires as well.
One pin in the connector is rated for 9.3A max. I'm going to use a 300W power draw for my example.
With a 300W power draw, in ideal conditions where there are no resistance differences between pins, each pin carries about 4.16A (6 × 4.16A ≈ 25A, and 25A × 12V = 300W).
Current takes the path of least resistance, and when the resistance is the same on all pins, the current is distributed equally among them.
That is perfectly safe and fine.

Shit starts to happen when there are differences in resistance.
What if just one pin carries more than 9.3A? Like 13 amps? Maybe the current gets distributed like this: 3.6A, 13A, 2A, 3.4A, 1.3A, 1.7A.
It's still just 25A in total, but not evenly distributed. That 13A pin will cause melting over time.
It does not matter whether the GPU has a power draw of 300W, 400W or 500W (up to the rated maximum of 600W).
What matters is that the max. current per pin (and wire) must stay below 9.3A.
Of course, lower power draw decreases the chance of melting, because lower currents are at play, but it does NOT eliminate the possibility of the problem occurring.
Melted connectors on the RTX 5080 are a good example that even a card drawing less than 400W can suffer from this problem.

Why don't they put a standard Molex on it...

or a Type A plug?

Two pins are enough. More is ridiculous and anti-engineering.
 
Any reason for the 9070 XT over the 5070 Ti? From what I've seen you are only saving less than 10%.

If you are set on the 9070 XT, I would wait a bit, as the price should come down a little.
The ASRock Steel Legend seems like the best option from what I've seen. I would avoid the Taichi and Nitro+ because of the 12VHPWR connector in an awkward position, and the Nitro doesn't have dual BIOS. Asus and Gigabyte tend to be iffy, as they normally just slap on parts designed for Nvidia cards.
 
Any reason for the 9070 XT over the 5070 Ti? From what I've seen you are only saving less than 10%.

If you are set on the 9070 XT, I would wait a bit, as the price should come down a little.
The ASRock Steel Legend seems like the best option from what I've seen. I would avoid the Taichi and Nitro+ because of the 12VHPWR connector in an awkward position, and the Nitro doesn't have dual BIOS. Asus and Gigabyte tend to be iffy, as they normally just slap on parts designed for Nvidia cards.

No, there is no reason to get the 9070 XT over the 5070 Ti or vice versa, provided you are paying the same price for both of them. They are functionally equal products; in some games the 70 Ti is better, in some games the XT is better. In general, go with whatever is cheaper or whatever makes more sense for your personal use case. For example, I'd pick a custom, overengineered 9070 XT like the Nitro+ over a reference board 5070 Ti, or vice versa; just keep the financial aspect in mind. If both are north of $900, then the logical thing to do is to just step up to the RTX 5080 (provided you find it in the 999-1100 range). USD values, of course; approximate to your local currency.
 
No, there is no reason to get the 9070 XT over the 5070 Ti or vice versa, provided you are paying the same price for both of them. They are functionally equal products; in some games the 70 Ti is better, in some games the XT is better. In general, go with whatever is cheaper or whatever makes more sense for your personal use case. For example, I'd pick a custom, overengineered 9070 XT like the Nitro+ over a reference board 5070 Ti, or vice versa; just keep the financial aspect in mind. If both are north of $900, then the logical thing to do is to just step up to the RTX 5080 (provided you find it in the 999-1100 range). USD values, of course; approximate to your local currency.
They aren't really functionally the same. The 5070 Ti is better at RT, which should be taken into consideration now that we have RT-only games; DLSS has wider support than FSR, which is likely to remain an advantage in the future; and if you use it for productivity, Nvidia has a huge advantage (for example, I use Blender, and the 9070 XT is almost 20% slower than a 5060!).

But you should pick a card for your personal use case. I always advise going a bit more in depth on game performance: look at the games you play, look for patterns that favour a brand in particular game engines to account for future releases, and give bigger weight to poorly performing games.

The 9070 XT is a card that you want to undervolt, never purely overclock; overengineered cards matter mainly for acoustics in this case. The Nitro+ is one of the worst premium models: it underperforms the Pulse, doesn't have dual BIOS, and its implementation of 12VHPWR is probably the worst I've seen (it's needless and invites failure during installation and PC maintenance). If the 9070 XT is the card for you, look at the Steel Legend, an OC card (overengineered) built on ASRock's base model (the cheapest of the 9070 XTs), or the Hellhound if you want dual BIOS.
 
They aren't really functionally the same. The 5070 Ti is better at RT, which should be taken into consideration now that we have RT-only games; DLSS has wider support than FSR, which is likely to remain an advantage in the future; and if you use it for productivity, Nvidia has a huge advantage (for example, I use Blender, and the 9070 XT is almost 20% slower than a 5060!).
I have both; the only time I notice the Radeon falling behind a bit is in path tracing, and you can count those games on the fingers of one hand. It's significantly slower there, though.

If the price is identical, go with the Geforce. If the Radeon is $/£/€100+ cheaper, get the Radeon.

IMO DLSS4 Transformer and FSR4 are roughly equivalent. People always moan that FSR4 isn't in as many games as DLSS4, but the number of games that actually support the DLSS4 transformer model natively is also absolutely tiny right now. IMO DLSS4 without transformer is just the same old blurry mess in motion that made me avoid DLSS for the last 6 years. You need either FSR4 or DLSS4-Transformer to get decent image quality in motion as far as I'm concerned.

Not all DLSS games can be overridden to DLSS4 via the Nvidia app, and even some of those that can do not support transformer, so you're stuck with CNN.
 
I have both; the only time I notice the Radeon falling behind a bit is in path tracing, and you can count those games on the fingers of one hand. It's significantly slower there, though.

If the price is identical, go with the Geforce. If the Radeon is $/£/€100+ cheaper, get the Radeon.

IMO DLSS4 Transformer and FSR4 are roughly equivalent. People always moan that FSR4 isn't in as many games as DLSS4, but the number of games that actually support the DLSS4 transformer model natively is also absolutely tiny right now. IMO DLSS4 without transformer is just the same old blurry mess in motion that made me avoid DLSS for the last 6 years. You need either FSR4 or DLSS4-Transformer to get decent image quality in motion as far as I'm concerned.

Not all DLSS games can be overridden to DLSS4 via the Nvidia app, and even some of those that can do not support transformer, so you're stuck with CNN.
The only thing I'd add is that any game that supports FSR 3.1 also supports FSR 4 via a toggle in the driver menu.

Other than that, I agree. If you definitely need CUDA for work (even though ROCm support is coming), get the GeForce. Otherwise, get the one that's cheaper. If both come at an equal price, then get the nicer card.
 
@Dr. Dro
How is that 16-pin port treating you? Is all in order?
 