
9070xt, which one...

Yes, of course it is.
Then all these melting issues that people are complaining about are PEBCAK, is what you're saying
 
Phanteks Eclipse 600s, MSI Tomahawk b650

I'm just obsessed with temperature because I have my PC under the desk, and sometimes I touch the glass panel of my case with my leg, so it isn't nice to touch something hot... And I don't have much space to put my PC anywhere else.

At first I wanted to buy the Red Devil (it's the cheapest one), but after the reviews I'm afraid it will be too hot.
Not even on top of your desk, to avoid dust build-up?
 
Any reason for the 9070 XT over the 5070 Ti? From what I've seen you are only saving less than 10%.

If you are set on the 9070 XT I would wait a bit, as the price should drop somewhat.
The ASRock Steel Legend seems the best option from what I've seen. The Taichi and Nitro+ I would avoid because of the 12VHPWR connector in an awkward position, and the Nitro doesn't have dual BIOS. Asus and Gigabyte tend to be iffy, as they normally just slap on parts designed for Nvidia cards.

The cheapest 5070 Ti in my country costs $200 more than a high-end 9070 XT, and I've never had problems with AMD cards before, from the 4850 until now with the 6900 XT.

@eidairaman1
Hard with 49" monitor
 
Then all these melting issues that people are complaining about are PEBCAK, is what you're saying
Even for TG and their WireView, the failure rate is at 0.3%.
 
I have both, and the only time I notice the Radeon falling behind a bit is in path-tracing titles, which you can count on the fingers of one hand. It's significantly slower there, though.

If the price is identical, go with the Geforce. If the Radeon is $/£/€100+ cheaper, get the Radeon.

IMO DLSS4 Transformer and FSR4 are roughly equivalent. People always moan that FSR4 isn't in as many games as DLSS4, but the number of games that actually support the DLSS4 transformer model natively is also absolutely tiny right now. IMO DLSS4 without transformer is just the same old blurry mess in motion that made me avoid DLSS for the last 6 years. You need either FSR4 or DLSS4-Transformer to get decent image quality in motion as far as I'm concerned.

Not all DLSS games can be overridden to DLSS4 via the Nvidia app, and even some of those that can do not support transformer, so you're stuck with CNN.
The point is that in the current market the 5070 Ti is €40-80 more; at this point you should carefully consider your use case before buying a 9070 XT.

From what I've seen FSR4 hovers between DLSS 3.5 and 4, and I was pointing out the advantage more for future releases. Currently not only does FSR4 have lower support, but in games that won't update to it you are stuck with FSR3, which is arguably a lot worse than DLSS3.
 
The point is that in the current market the 5070 Ti is €40-80 more; at this point you should carefully consider your use case before buying a 9070 XT.

From what I've seen FSR4 hovers between DLSS 3.5 and 4, and I was pointing out the advantage more for future releases. Currently not only does FSR4 have lower support, but in games that won't update to it you are stuck with FSR3, which is arguably a lot worse than DLSS3.
At €40-80 more I think I'd take the GeForce. A $599 9070 XT is a far better deal than a $749 5070 Ti at MSRP, but that's not the world we live in right now. I paid £620 and £730 for my two cards, respectively, and at those exact price points I favour the 9070 XT. At closer price points the path-tracing and developer support tip the balance in favour of the 5070 Ti for me, personally.

AMD might be gaining market share, but more game developers are in Nvidia's pocket at the moment, so it's likely to have more support for DLSS going forwards. I'm not sure DLSS is too relevant for a 5070 Ti unless you intend to game at 4K, because the whole point of buying a GPU that fast is to not need upscaling - but at the same time, if you hold onto a GPU for a long time, it will need upscaling as a crutch at some point in the future for sure (though as a counterpoint to that, Nvidia is shit at supporting the latest DLSS on older cards, so you'll get screwed over by Nvidia there, too!)

IMO, DLSS4-Transformer > FSR4 > DLSS4-CNN > DLSS3-CNN > FSR3

My personal preference is for it to not look like a 720p blurry mess in motion, so DLSS4-Transformer and FSR4 are the only upscalers I will tolerate. If you like blurry motion, or you're one of those people who enables motion-blur in the game's own graphics settings, then you're probably fine with Nvidia's DLSS3 which just exposes a very soft-focus base render resolution when you move the camera and recovers detail and resolution once you slow down or stop.

It's all personal preference really!
 
The cheapest 5070 Ti in my country costs $200 more than a high-end 9070 XT, and I've never had problems with AMD cards before, from the 4850 until now with the 6900 XT.

@eidairaman1
Hard with 49" monitor
Oh, in that case go for the 9070 XT (assuming productivity isn't a factor, of course).

Radeon cards are great. Since AMD bought ATI, Sapphire went from godly to good, PowerColor got better, and ASRock has been a pleasant surprise. Overall I would say they still have better board partners than Nvidia, plus I really like the Adrenalin software.

Personally I would look into the Steel Legend from ASRock (where I'm from it's one of the cheapest and it's a mid-tier card) and the Hellhound from PowerColor if you want dual BIOS; considering this is a card you'd undervolt anyway, paying more would be pointless.
I wouldn't touch the Nitro+: worse performance than the Pulse, no dual BIOS, and a needless implementation of the 12VHPWR connector engineered to invite needless problems. The Taichi at least has dual BIOS and lets you see the 12VHPWR connector after installation, but the performance is poor compared to cheaper models.
 
There really aren't many "bad" 9070XT models. Atm it's a lot easier to get an annoying NV card than an AMD one. Since you're not planning on overclocking you don't even need a higher end model. A Sapphire Pulse/Pure would serve you just fine. Both are quiet. A Nitro+ is even higher-end, but you decide if it's worth it when not overclocking. An AsRock Steel Legend or Taichi is also a great option. XFX cards this gen are quite cool, but might require a little tweaking of the fan curve to get quieter. Then there's the PowerColor HellHound and Red Devil. Both great cards. As is the Asus TUF. Hell, even the Prime holds its own.

If temperatures are what you're after, then go for the higher-end cards (Nitro+, Taichi, Red Devil or TUF). Don't pay more for a Pulse/Pure than you would for a Nitro+. The Nitro is a higher-end card. However, do keep in mind that all of these cards have acceptable temperatures. There's no reason you should be worrying about temperatures on any of them. Temps become more of an issue if you want to push more power to OC.

I'd recommend you go with whatever is most convenient to buy locally, and whatever you like the looks of the most. As I said, there really aren't many "bad" models this time around.
 
I'm just obsessed with temperature because I have my PC under the desk, and sometimes I touch the glass panel of my case with my leg, so it isn't nice to touch something hot... And I don't have much space to put my PC anywhere else.

I must have missed this part. How hot the case gets most likely depends much, much more on the power draw of the card than on the measured temperature of the card.

Unless it exhausts directly onto the glass next to your leg, I guess.

Increasing airflow in the case, especially over the glass will have a bigger impact on “feeling the temperature” than any of the different cards in this thread.

Don't turn the power limiter up. An OC card at full power will draw much more and generate more heat.

[Attached chart: 9070 XT power draw vs. performance across models, from the TechPowerUp review; the Sapphire Pulse is highlighted in yellow]

The XFX draws 400 W at max power when OC'd, but if you look closely at the fps, most cards will deliver 95% of the performance already at 300 W. The yellow marking is the Sapphire Pulse.
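As a rough back-of-envelope on those diminishing returns (a sketch in Python, assuming the ~400 W and ~95%-at-300 W figures from the chart above; the exact numbers vary by card and review):

```python
# Rough perf-per-watt comparison based on the figures quoted above:
# ~100% performance at ~400 W (max OC) vs. ~95% performance at ~300 W.
def perf_per_watt(relative_perf: float, watts: float) -> float:
    """Relative performance per watt, arbitrary units."""
    return relative_perf / watts

oc_400w = perf_per_watt(1.00, 400)
capped_300w = perf_per_watt(0.95, 300)

print(f"OC @ 400 W     : {oc_400w:.5f} perf/W")
print(f"Capped @ 300 W : {capped_300w:.5f} perf/W")
print(f"Efficiency gain at 300 W: {capped_300w / oc_400w - 1:.0%}")
# -> roughly 27% better performance per watt for ~5% less performance
```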
 
Then all these melting issues that people are complaining about are PEBCAK, is what you're saying

No. What I am saying is that my card has not malfunctioned.

There's probably a varying degree of PEBCAK involved, but there were also failures with the safety aspect. That's undeniable. Those issues should be largely solved by now with the H++ terminal. The original is still safe to use if you triple check all connections are in order.
 
No. What I am saying is that my card has not malfunctioned.

There's probably a varying degree of PEBCAK involved, but there were also failures with the safety aspect. That's undeniable. Those issues should be largely solved by now with the H++ terminal. The original is still safe to use if you triple check all connections are in order.
There's enough evidence from people who take this seriously and investigate this issue with lab equipment (Der8auer, Igor of igor'sLAB, Steve of GamersNexus, etc.) to make me generally uneasy about using even the latest H++ connectors.

Roman, in particular, released a video last week where a user sent in his GPU+PSU+cables to him for repair because the PSU and GPU manufacturer were fighting over whose responsibility it was and neither of them would honour the warranty because of this. The cable was fused to both the PSU and the GPU and it was very definitely inserted all the way. Rather than a faulty cable or poorly inserted cable creating a local hotspot, this was irrefutable evidence that the user did everything right and the fault lies with the design of this standard allowing too much current to flow over an individual wire.


Roman isn't the first person to show fully-inserted melted connectors and unless something changes with the design he won't be the last. It's not a PEBCAK error and 0.3% failure rate still means 1-in-300 GPUs will catch fire which translates to hundreds or even thousands of graphics cards across the whole industry, each of which could potentially result in someone's house burning down.

There are already dozens of photos of MSI's latest yellow indicator cables melting, despite none of the yellow warning indicator being visible. People have gaslit users for failing to insert stuff correctly, using aftermarket cables (which are typically built to a higher standard than the free cables that come with a PSU anyway) and tight bend radius near the cables, but it always boils down to the same issue - there is overwhelming evidence out there in photos and videos to prove that most of this PEBCAK gaslighting is unwarranted - the connector design is faulty and more GPUs continue to burn with every passing day even with all these minor changes to the standard to help prevent the problem.

I use my 12V-2x6 cable because I have no choice in the matter, but even at just 300W it makes me uneasy - I've seen multiple pieces of content now showing that the current is often distributed unevenly across the individual wires, and as an engineer I understand how temperature and resistance imbalances in (and here's the important part) an UNMONITORED cable result in a positive feedback loop where the hot wire has higher resistance which makes the wire hotter, which increases its resistance. They're all accidents waiting to happen IMO, it's simply a question of time.
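To make that feedback loop concrete, here's a deliberately simplified toy model in Python. All the constants (contact resistance, thermal resistance to ambient, in-case ambient temperature) are assumptions I picked for illustration, not measurements of any real cable; the point is only to show how an overloaded, unmonitored wire settles much hotter because every degree of heating raises its resistance and therefore its I²R dissipation:

```python
# Toy model of the resistance/temperature feedback loop in one wire of an
# unmonitored connector. All values below are illustrative assumptions.
ALPHA_CU = 0.0039   # copper temperature coefficient of resistance, per degC
R20 = 0.010         # assumed wire + contact resistance at 20 degC, ohms
THETA = 20.0        # assumed thermal resistance to ambient, degC per watt
AMBIENT = 40.0      # assumed in-case ambient temperature, degC

def steady_temp(current_a: float, iterations: int = 100) -> float:
    """Iterate: temperature -> resistance -> I^2*R heat -> temperature."""
    temp = AMBIENT
    for _ in range(iterations):
        r = R20 * (1 + ALPHA_CU * (temp - 20.0))  # hotter wire = higher resistance
        power = current_a ** 2 * r                # heat dissipated in the wire/contact
        temp = AMBIENT + THETA * power            # which heats the wire further
    return temp

# ~8.3 A per wire is an even split of 600 W over six 12 V wires;
# 16 A represents one wire carrying roughly double its share.
for amps in (5, 8.3, 12, 16):
    print(f"{amps:>4} A -> ~{steady_temp(amps):.0f} degC at the contact")
```

With these toy numbers the loop still converges, but the badly imbalanced wire ends up far hotter than its neighbours; with a worse contact or a thinner strand it doesn't take much for that to turn into damage.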
 
There's enough evidence from people who take this seriously and investigate this issue with lab equipment (Der8auer, Igor of igor'sLAB, Steve of GamersNexus, etc.) to make me generally uneasy about using even the latest H++ connectors.

Roman, in particular, released a video last week where a user sent in his GPU+PSU+cables to him for repair because the PSU and GPU manufacturer were fighting over whose responsibility it was and neither of them would honour the warranty because of this. The cable was fused to both the PSU and the GPU and it was very definitely inserted all the way. Rather than a faulty cable or poorly inserted cable creating a local hotspot, this was irrefutable evidence that the user did everything right and the fault lies with the design of this standard allowing too much current to flow over an individual wire.


Roman isn't the first person to show fully-inserted melted connectors and unless something changes with the design he won't be the last. It's not a PEBCAK error and 0.3% failure rate still means 1-in-300 GPUs will catch fire which translates to hundreds or even thousands of graphics cards across the whole industry, each of which could potentially result in someone's house burning down.

There are already dozens of photos of MSI's latest yellow indicator cables melting, despite none of the yellow warning indicator being visible. People have gaslit users for failing to insert stuff correctly, using aftermarket cables (which are typically built to a higher standard than the free cables that come with a PSU anyway) and tight bend radius near the cables, but it always boils down to the same issue - there is overwhelming evidence out there in photos and videos to prove that most of this PEBCAK gaslighting is unwarranted - the connector design is faulty and more GPUs continue to burn with every passing day even with all these minor changes to the standard to help prevent the problem.

I use my 12V-2x6 cable because I have no choice in the matter, but even at just 300W it makes me uneasy - I've seen multiple pieces of content now showing that the current is often distributed unevenly across the individual wires, and as an engineer I understand how temperature and resistance imbalances in (and here's the important part) an UNMONITORED cable result in a positive feedback loop where the hot wire has higher resistance which makes the wire hotter, which increases its resistance. They're all accidents waiting to happen IMO, it's simply a question of time.
Same Roman:

Out of 3700 units of WireView sold so far, 12 confirmed cases of melted 12VHPWR connectors.
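For what it's worth, the two figures quoted in this thread are consistent with each other; a trivial check:

```python
# Failure rate from the WireView figures quoted above.
failures, units = 12, 3700
rate = failures / units
print(f"{rate:.2%}")                    # ~0.32%, i.e. the ~0.3% figure mentioned earlier
print(f"about 1 in {round(1 / rate)}")  # roughly 1 in 300 units
```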
 
There's enough evidence from people who take this seriously and investigate this issue with lab equipment (Der8auer, Igor of igor'sLAB, Steve of GamersNexus, etc.) to make me generally uneasy about using even the latest H++ connectors.

Roman, in particular, released a video last week where a user sent in his GPU+PSU+cables to him for repair because the PSU and GPU manufacturer were fighting over whose responsibility it was and neither of them would honour the warranty because of this. The cable was fused to both the PSU and the GPU and it was very definitely inserted all the way. Rather than a faulty cable or poorly inserted cable creating a local hotspot, this was irrefutable evidence that the user did everything right and the fault lies with the design of this standard allowing too much current to flow over an individual wire.


Roman isn't the first person to show fully-inserted melted connectors and unless something changes with the design he won't be the last. It's not a PEBCAK error and 0.3% failure rate still means 1-in-300 GPUs will catch fire which translates to hundreds or even thousands of graphics cards across the whole industry, each of which could potentially result in someone's house burning down.

There are already dozens of photos of MSI's latest yellow indicator cables melting, despite none of the yellow warning indicator being visible. People have gaslit users for failing to insert stuff correctly, using aftermarket cables (which are typically built to a higher standard than the free cables that come with a PSU anyway) and tight bend radius near the cables, but it always boils down to the same issue - there is overwhelming evidence out there in photos and videos to prove that most of this PEBCAK gaslighting is unwarranted - the connector design is faulty and more GPUs continue to burn with every passing day even with all these minor changes to the standard to help prevent the problem.

I use my 12V-2x6 cable because I have no choice in the matter, but even at just 300W it makes me uneasy - I've seen multiple pieces of content now showing that the current is often distributed unevenly across the individual wires, and as an engineer I understand how temperature and resistance imbalances in (and here's the important part) an UNMONITORED cable result in a positive feedback loop where the hot wire has higher resistance which makes the wire hotter, which increases its resistance. They're all accidents waiting to happen IMO, it's simply a question of time.

It's not really gaslighting; ensuring all connectors are properly inserted will also ensure the current load is split evenly across all wires, which will drastically reduce the chance of failure. That's more on the GPUs for not having monitoring and load balancing, really. The WireView is a cool gadget, wish I could buy one. They've been perma out of stock here.

In any case, I have my rig on a bench, where the cable is completely stress-free, and I made sure that it's inserted correctly. It's also an older H+ cable, and I am putting the 600 W of a 5090 through it. No cable overheating or anything off to report thus far; if something does happen, I'll make sure to document it and post a thread about it.
 
The XFX draws 400 W at max power when OC'd, but if you look closely at the fps, most cards will deliver 95% of the performance already at 300 W. The yellow marking is the Sapphire Pulse.
Diminishing returns and poor efficiency at higher watts. The Pulse should eat 304 W, not 315 W! Usually there are 304 W, 317 W, 330 W and higher versions.

Best efficiency for the RX 9070 XT is probably at around ~280-320 watts when undervolted (at stock performance level).
 
It's not really gaslighting; ensuring all connectors are properly inserted will also ensure the current load is split evenly across all wires, which will drastically reduce the chance of failure. That's more on the GPUs for not having monitoring and load balancing, really. The WireView is a cool gadget, wish I could buy one. They've been perma out of stock here.
That's my complaint, really.

There are documented cases where cables are properly inserted, checked on camera and then monitored - with various methods from different videos/articles using either the Asus GPU pin monitoring, clip-on ammeters, or the WireView - and they prove there's sometimes a serious imbalance of current flow through the different wires even when everything is done perfectly; following all the recommendations and double-checking your connections does not guarantee you'll be fine.

Seasonic's newly announced PSUs with per-wire monitoring at the GPU end are doing what you say (and I agree) AIBs should have done on the board to correct the issue, like Asus has on a couple of models - though realistically the underlying problem is that there simply isn't enough safety margin built into each wire of the new connectors. The old PCIe 8-pin sure is an inefficient use of wire, but it has almost 3x the safety factor in terms of current per AWG16 strand, and it's becoming apparent that modern GPUs have been dipping into that 3x safety factor quite heavily, even on cards that still use 8-pin connectors!
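Roughly where that safety-factor argument comes from, as a hedged sketch: terminal current ratings genuinely vary by manufacturer and wire gauge, so the pin ratings below are assumptions I've plugged in for illustration (with higher HCS-style ratings of 12-13 A for the Mini-Fit pins you get closer to the ~3x figure mentioned above):

```python
# Per-wire current and approximate safety factor for the two connector types.
# Pin current ratings are assumed/illustrative, not spec-sheet facts.
def per_wire_amps(watts: float, volts: float, power_wires: int) -> float:
    return watts / volts / power_wires

configs = {
    # name: (connector power limit in W, number of 12 V wires, assumed pin rating in A)
    "PCIe 8-pin (150 W)":        (150, 3, 9.0),
    "12VHPWR / 12V-2x6 (600 W)": (600, 6, 9.5),
}

for name, (watts, wires, pin_rating) in configs.items():
    amps = per_wire_amps(watts, 12.0, wires)
    print(f"{name}: {amps:.1f} A per wire, safety factor ~{pin_rating / amps:.1f}x")
```

Even with conservative assumptions the 8-pin has a lot of headroom per wire, while the new connector runs its pins close to their rating, which is exactly why a modest current imbalance is enough to cook one of them.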
 
Diminishing returns and poor efficiency at higher watts. The Pulse should eat 304 W, not 315 W! 315 W does not exist; there are 304 W, 317 W, 330 W, etc. versions.
I agree, and the returns are really diminishing on the 9070 XT; it's already past its peak imho. The vanilla 9070 gives a bit more performance for those extra watts.


I stole the image from the TechPowerUp review, but I still feel that the point stands regardless of the 10 extra watts. https://www.techpowerup.com/review/sapphire-radeon-rx-9070-xt-pulse/43.html

[Attached chart: power consumption results from the TechPowerUp Sapphire RX 9070 XT Pulse review]

Do the reviews report power used or power budget for the GPU? It looks like the first.
I guess we have the "same numbers" and the difference is that the 315 W is the actual usage and 304 W is the power budget, the difference being the overhead/inefficiencies to reach it?
It would actually be cool to see that delta in reviews, since it would indicate the efficiency of the power supply circuitry.
 
Best efficiency for the RX 9070 XT is probably at around ~280-320 watts when undervolted (at stock performance level).
Can confirm.

-50mv to -75mv will get you slightly better than default "304W" performance at about 275W. Based on the TUF, Pulse, Nitro+ I think -75mv is about all you can expect from a 9070XT. The Nitro+ is stable at -115mv, but that's only because it's clearly overvolted out of the box by Sapphire, ruining its efficiency and effectively adding 40-50mv onto that -115mv so it's not really the impressive undervolt it would seem to be and falls in line with the other two cards, as well as reports of undervolting from other people posting their MSRP/standard card undervolts around the web.

You can see from my quick and dirty testing (post here) that the "curve" is pretty flat - because we're way past the efficiency sweet spot:

[Attached chart: undervolt scaling results from my quick and dirty testing]
 
That's my complaint, really.

There are documented cases where cables are properly inserted, checked on camera and then monitored - with various methods from different videos/articles using either the Asus GPU pin monitoring, clip-on ammeters, or the WireView - and proving uneven current flow through the wires even when everything is done perfectly. Following all the recommendations and double-checking your connections does not guarantee you'll be fine.

Seasonic's new PSUs with per-wire monitoring at the GPU end are doing what AIBs should have done on the board to correct the issue, though realistically the underlying problem is that there simply isn't enough safety margin built into each wire of the new connectors. The old PCIe 8-pin sure is an inefficient use of wire, but it has almost 3x the safety factor in terms of current per AWG16 strand, and it's starting to become apparent that modern GPUs have been dipping into that 3x safety factor quite heavily, even on cards that still use 8-pin connectors!

There are quite a few variables at play, indeed. PSU-side per-wire monitoring is going to require a DSP (like the Corsair AXi series and the one you mentioned); that's going to add to costs and isn't really something you'll see in most midrange models, never mind budget ones. I also hesitate to call it a safety factor, but I guess arguing simple physics works well enough to merit it acting as such. The specifications must generally never be exceeded no matter the form factor, IMHO.
 
Do the reviews report power used or power budget for the GPU? It looks like the first.
That's pretty much the same thing on the 9070 (XT). They always work at 100% power budget during 100% load.

I guess we have the "same numbers" and the difference is that the 315 W is the actual usage and 304 W is the power budget, the difference being the overhead/inefficiencies to reach it?
No - the Sapphire Pulse comes with a 315 W power budget, so it consumes 315 W. The only "standard" 304 W card is the Powercolor Reaper, as far as I know.
 
Can confirm.

-50mv to -75mv will get you slightly better than default "304W" performance at about 275W.

You can see from my quick and dirty testing (post here) that the "curve" is pretty flat - because we're way past the efficiency sweet spot:
That's cool if your UV is 24/7 stable in all games.

-35mv, -9% power limit, memory at 2614 MHz with fast timings. From 317 W to 288 W with exactly the same performance as stock.
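In perf-per-watt terms that tweak works out roughly like this (just arithmetic on the numbers above):

```python
# Same performance at lower power, so the perf/W gain scales with the power drop.
stock_w, tuned_w = 317, 288
print(f"Power saved: {1 - tuned_w / stock_w:.1%}")   # ~9.1% less power
print(f"Perf/W gain: {stock_w / tuned_w - 1:.1%}")   # ~10.1% better perf per watt
```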
 
No - the Sapphire Pulse comes with a 315 W power budget, so it consumes 315 W. The only "standard" 304 W card is the Powercolor Reaper, as far as I know.
Both GPU-Z and the Radeon driver do report a "total board power" of ~303-304W in all my testing. Maybe Sapphire are reporting an extra 10W for the fans?
 
The point is that in the current market the 5070 Ti is €40-80 more; at this point you should carefully consider your use case before buying a 9070 XT.

From what I've seen FSR4 hovers between DLSS 3.5 and 4, and I was pointing out the advantage more for future releases. Currently not only does FSR4 have lower support, but in games that won't update to it you are stuck with FSR3, which is arguably a lot worse than DLSS3.
In which country is the 5070 Ti only €40-80 more?
 
Both GPU-Z and the Radeon driver do report a "total board power" of ~303-304W in all my testing. Maybe Sapphire are reporting an extra 10W for the fans?
That's interesting. I have no idea why that is. My Reaper also reports 304 W used at full load.
 
That's cool if your UV is 24/7 stable in all games.

-35mv, -9% power limit, memory at 2614 MHz with fast timings. From 317 W to 288 W with exactly the same performance as stock.
Yeah, I'm super-conservative in my testing and I've been dailying the below undervolt for 5 weeks now, 100+ hours of gaming and not one single crash. I managed to get 1h+ stable at -100mv in specific benchmarks but they weren't game stable. Sample size of one, obviously. I only played with the TUF and Nitro+ undervolting briefly as they were customer builds that were shipped at bone-stock settings. My "daily driver" -70mv undervolt from that post above did crash once about 3h into a Phantom Liberty playthrough using Path tracing and FSR4+framegen a couple of days after posting that, so I dialled it back slightly while also raising the power limit very slightly to reduce the severity of the clock/voltage jumps between max boost and lower power draw moments.

Sapphire Pulse (Samsung)
-65mv, -13% power limit, 2666MHz fast timing on the RAM - runs at 264W and can't hear it over other fans which are capped at 1000rpm max on a quiet curve for both the AIO and intakes/exhausts.

Edit:
That's interesting. I have no idea why that is. My Reaper also reports 304 W used at full load.
I'm just curious where you plucked the 315W number from for the Pulse? It's more likely that your source is wrong, that's all.
Sapphire themselves claim it as a 304W card:

Edit 2:
OH HANG ON, I see what's going on now.

@io-waiter is citing the overclocking section of W1zzard's review, which reports real-world measurements taken from the PCI Express power connector(s) and PCI Express bus slot. There are some measurement error bars and presumably some minor losses before the GPU's own power sensors are reached in the circuit, as well as the fans' power draw - so W1zzard's measured values are never likely to exactly match any stated TBP for any GPU. They're not "wrong" measurements, they're just not relevant to the manufacturer-published and software-reported BIOS power limit or TBP that we're talking about here :)
 