
Is the RX 9070's VRAM temperature a regular value or a hotspot reading?

I don't have a problem with my 9070 XT but I know how to mod it.
Obviously(?) my comments on that post were not directly for you alone… hence the "people" in the first sentence.
I just used your screenshot to make a point.
 
The reason I agree with @Assimilator on this one is that nearly every single 9070 (XT) seems to be running high VRAM temps. I can't imagine that every single AIB botched every single card.
Not as condensed as the 5090, but the 9000 series still uses dense PCBs, and heat travels by proximity. If the driver MOSFETs aren't cooled properly, they will raise the VRAM temps. And that's only one factor.
A hot, unstable driver MOSFET can also feed too much voltage, or badly shaped voltage pulses, to the inductors, causing them to overheat, saturate their cores, and produce coil whine.
It's an entire chain reaction: oscillating voltages, hot VRAM, hot driver MOSFETs and inductors, and even the SP-Caps can dry out over time because of the heat.

We've seen Gigabyte, Zotac, MSI and others forget to put pads on the driver MOSFETs in the past.
In the case of the OP's XFX card, from what I've seen the cold plate for the VRAM touches the vapor chamber; I'm not sure whether it's a proper link with pads in between, and I'm not sure how beneficial it is for the VRAM to be thermally tied to the GPU core's heat.
I'm not sure the sensor issue is fixable with a BIOS update; I hope it is, I really do. We need those sensors.

According to who? You?

Are you the engineer who designed the card? No? Then why do you think you know better than that engineer as to what is "unreasonably high" or not?

This is exactly the reason why NVIDIA removed the hotspot sensor on the 5000 series, because uninformed users who think they know better than engineers whined incessantly about it without bothering to understand it. Considering AMD had the same problem with Zen 4 and 5 and idiot users, I'm honestly surprised they didn't follow NVIDIA's lead.

And no, I am not advocating for removing sensors, but unfortunately the world has more people suffering from Dunning-Kruger than people who are willing and capable of rubbing two brain cells together. So NVIDIA optimised for the lowest common denominator.
We're not GPU design engineers, but we can work with the official data we're given, take it with a lot of salt, and do what we can to keep our hardware under the official thresholds. Call it a desire for longevity.

For example, I can keep my VRAM at 56 °C average with the GPU averaging 256 W (316 W peak) over a 25-minute Superposition 8K Optimized loop, using 3x 92 mm fans on the GPU at 1400 RPM plus one on the backplate at roughly 1000 RPM.
Yes, it's not a 315 W card, but it's not far off. The side panel sits as close as 3 cm from the card, recirculating some of the hot air.
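For anyone on Linux who wants to sanity-check what their own card reports without GPU-Z or HWiNFO, here's a rough sketch in Python that reads the sensors the amdgpu driver exposes through hwmon. Treat it as a sketch under assumptions: the card sits at card0 and the driver uses its usual "edge" / "junction" / "mem" labels, so adjust the paths for your own system.

Code:
#!/usr/bin/env python3
# Sketch: dump the temperatures the amdgpu driver exposes via hwmon.
# Assumes an AMD card at card0 and the usual label names ("edge",
# "junction", "mem"); the "mem" entry is the VRAM reading being
# discussed in this thread.
import glob

def read_amdgpu_temps(card="card0"):
    temps = {}
    for label_path in glob.glob(f"/sys/class/drm/{card}/device/hwmon/hwmon*/temp*_label"):
        with open(label_path) as f:
            label = f.read().strip()
        with open(label_path.replace("_label", "_input")) as f:
            temps[label] = int(f.read()) / 1000.0  # driver reports millidegrees C
    return temps

if __name__ == "__main__":
    for name, value in sorted(read_amdgpu_temps().items()):
        print(f"{name}: {value:.1f} °C")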

Nvidia removed the hotspot sensor for other reasons; they couldn't give a rat's ass about such complaints.

Why are you talking about people who OC their CPUs without the thermal headroom to do so?

There's enough data, IMO, showing that GDDR5X and GDDR6X never reached their official throttle thresholds of 100 °C and 105 °C; they just fried before hitting those temps, and the same goes for GDDR6.
For example, just ask the guy behind the Northwest Repair YouTube channel how many thousands of memory chips he has replaced. Not only Micron. He's quick at diagnosing all sorts of GPU issues, and that's not experience you can ignore.

Just look at the title of one of his videos; he's replacing a memory module in this case.
 
This education happened in literally every review of a Zen 4 CPU. It happens on the product page for those CPUs. Yet we still got, and I am sure will get in future, multiple threads of idiots asking "why my cpu tmprture so highhh?" when they could've got that answer with a single Google search. That is the type of advanced stupidity that companies fight against every day.


Which describes most consumers. Therefore it makes the most business sense for any company to optimise for the majority case of "don't scare the idiots", by not exposing scary temperatures at all. You can't fix stupid, but you can design products that don't trigger it as often.
I don't think we should cater to the bottom of the intelligence ladder though. Even though it is popular these days.

Yes, there are people who don't get it. There are also a lot of people who do and can easily understand things. But by yanking ever more responsibility and agency out of their hands, they'll eventually turn lazy and stupid too.

If that's a world you want to live in, keep saying what you say. Did you know corporate loves lazy and stupid? Easy money. I don't know if you understand what you're voicing here.

Under the guise of 'convenience'... How many people can still use a map now that they have satnav? It's a simple example. We all become dependent on the tools we get offered. Some reflection on that isn't a bad thing, and choosing to retain skills is, in my view, essential. And you don't even have to be good at everything, but certainly at something. And that's how most people are. And then they share that knowledge with others, and we all benefit. But if no one has the skillsets anymore? Hmmm...

For every five people I see not knowing jack shit about tech, I meet about 4 others that start asking questions and one that can do it all by himself just like me. So yes, the majority may not know, but the minority keeps them in the game, too.
 
If that's a world you want to live in
The fact that I don't want to live in it doesn't mean I'm foolish enough to ignore the fact that I do live in it.
 
All RX 9070 series cards seem to share this higher VRAM temp behavior. IMHO, above 90 °C for VRAM is a really high temperature.
AMD has reused GDDR6 for the third time now with the RX 9070 series. I've seen multiple threads on the internet about high VRAM temps on 7900 XT(X) cards.
Many of those cases were solved after the thermal pads were swapped for better ones.
Don't expect AIBs to use Thermal Grizzly grade pads, or even Arctic grade pads, on these cards.
There's some cheap, porous, 2 mm thick thermal pad on top of those VRAM chips.
 
All RX 9070 series cards seem to share this higher VRAM temp behavior. IMHO, above 90 °C for VRAM is a really high temperature.
AMD has reused GDDR6 for the third time now with the RX 9070 series. I've seen multiple threads on the internet about high VRAM temps on 7900 XT(X) cards.
Many of those cases were solved after the thermal pads were swapped for better ones.
Don't expect AIBs to use Thermal Grizzly grade pads, or even Arctic grade pads, on these cards.
There's some cheap, porous, 2 mm thick thermal pad on top of those VRAM chips.
Thermal pad quality, yes; I didn't expect AIBs to use high quality pads. But then we also have a more densely packed PCB on the 9000 series. I believe it's a mixture of factors.

I don't have a problem with my 9070 XT but I know how to mod it.
I'll definitely mod mine... when I get it. Especially if the VRAM runs too hot.
 
All RX 9070 series cards seem to share this higher VRAM temp behavior. IMHO, above 90 °C for VRAM is a really high temperature.
AMD has reused GDDR6 for the third time now with the RX 9070 series. I've seen multiple threads on the internet about high VRAM temps on 7900 XT(X) cards.
Many of those cases were solved after the thermal pads were swapped for better ones.
Don't expect AIBs to use Thermal Grizzly grade pads, or even Arctic grade pads, on these cards.
There's some cheap, porous, 2 mm thick thermal pad on top of those VRAM chips.
No, some of them (like PowerColor) are using PTM7950, but still run with hot VRAM. I think it's by design. Otherwise, we would see more variation among temps of different AIB cards.
 
I'm running an XFX RX 9070 XT Mercury OC, and in more demanding games like Dying Light 2 with ray tracing fully on, the VRAM temperature is 92 °C, which is unreasonably high for such an overbuilt card with a massive cooler that's also touching the VRAM modules. Granted, it's not full contact, but even with partial contact like on the Mercury OC, it should be lower.

I heard from someone that AMD changed VRAM temperature reporting to be hotspot by default instead of the regular temperature. Does anyone have any concrete info to confirm that? All I can find online are reports of an "unreleased AMD card with very high VRAM temperatures" and "RX 9070 having issues too" all over the news sites, and nothing else. If the VRAM temperature sensor is now indeed reporting the hotspot by default, that would explain all those leaked reports, and it would also explain why the VRAM temperature is so high even on the Mercury OC. But I don't know for sure and I can't find any reliable resources on it online. Any reliable info on this matter would be highly appreciated.
A question about the RX 9000 series VRAM temp readings has been asked of the HWiNFO author in this thread.

Post #4
 
No, some of them (like PowerColor) are using PTM7950, but still run with hot VRAM. I think it's by design. Otherwise, we would see more variation among temps of different AIB cards.
But they're using the phase-change TIM on the GPU only; on the VRAM and VRM there are standard thermal pads. Or not?
 
But they're using the phase-change TIM on the GPU only; on the VRAM and VRM there are standard thermal pads. Or not?
Good question, I don't know.

But still, the fact that every single 9070 XT card runs with hot VRAM makes me believe that it's fine.
 
That seems like a very dangerous argument to make in this tech space these days:
- engineers know what they are doing
- they wouldn't make something that doesn't work properly or runs outside spec

What planet are you guys on?
 
That seems like a very dangerous argument to make in this tech space these days:
- engineers know what they are doing
- they wouldn't make something that doesn't work properly or runs outside spec

What planet are you guys on?
The same argument was made when my 6750 XT ran with a hotspot temp of 105 °C (just 5 degrees below tJmax). Almost 3 years have passed, and it's still running strong.
 
The same argument was made when my 6750 XT ran with a hotspot temp of 105 °C (just 5 degrees below tJmax). Almost 3 years have passed, and it's still running strong.

That has absolutely nothing to do with the conversation at hand.
Nvidia released cards for decades without cables burning; do you really think that's a smart argument to make, here where you guys were just talking about intelligence?
 
Nvidia released cards for decades without cables burning; do you really think that's a smart argument to make, here where you guys were just talking about intelligence?
Now that has nothing to do with the conversation here.
 
Now that has nothing to do with the conversation here.

It does; it's the same conversation. Someone designed a stupid cable, and Nvidia and AMD are using it.
What isn't smart is claiming "they know what they are doing, so it must be fine"; that's some delusional stuff.
 
It's not guilty until proven guilty.
The 12V-whatever connector has been proven guilty of burning even when it's supposedly within spec. (Yes, I know, uSeR ErRoR blablabla, but it still has a much higher tendency to burn than it should.)
GDDR6 at 100 °C is not guilty. Otherwise there would be a crapton of forum posts / news articles about RX 7900 / RX 69x0 / RX 5700 cards dying (the 5700 XT launched in July 2019, and it was great at mining!).
I didn't care about this that much, but I'm surprised that the RX 5700 XT and RX 9070 XT both use GDDR6. Feels like a century ago.

Obviously a lower temp is preferable, but I don't really see a problem with GDDR6 running near 100 °C when it's rated for 110 °C.
EDIT: minor typo
 
I undervolt my 9070 XT and cool the backplate (the memory is coupled to the backplate through pads by default), and my VRAM runs at 72-80 °C, hitting 82 °C in a few benchmarks/games.
If I go back to stock voltage, it jumps to 84-86 °C, and if I also removed the fan from the backplate, the VRAM would probably jump past 90 °C.
And my case is well ventilated (the CPU is even cooled outside the case).

So it's purely a lack of cooling - bad design - nothing more, nothing less. In all models? Sadly, yes.

Engineers know what to do, but marketing screws everything up; that's been well known for years.
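If anyone wants to put numbers on the stock vs. undervolt difference instead of eyeballing it, a quick logging loop is enough. This is only a sketch: it samples the same amdgpu "mem" sensor as the snippet earlier in the thread at a fixed interval during a benchmark run and prints the average and peak, so you can run it once stock and once undervolted and compare. Same assumptions as before (Linux, amdgpu driver, card at card0).

Code:
#!/usr/bin/env python3
# Sketch: sample the amdgpu "mem" (VRAM) temperature during a benchmark run
# and print average / peak, so a stock run and an undervolted run can be
# compared. Assumes Linux, the amdgpu driver, and a card at card0.
import glob
import time

def find_mem_sensor(card="card0"):
    for label_path in glob.glob(f"/sys/class/drm/{card}/device/hwmon/hwmon*/temp*_label"):
        with open(label_path) as f:
            if f.read().strip() == "mem":
                return label_path.replace("_label", "_input")
    raise RuntimeError("no 'mem' temperature sensor found")

def log_run(duration_s=1500, interval_s=5):  # ~25 minutes, sampled every 5 s
    sensor = find_mem_sensor()
    samples = []
    end = time.time() + duration_s
    while time.time() < end:
        with open(sensor) as f:
            samples.append(int(f.read()) / 1000.0)  # millidegrees C -> °C
        time.sleep(interval_s)
    print(f"{len(samples)} samples, avg {sum(samples) / len(samples):.1f} °C, peak {max(samples):.1f} °C")

if __name__ == "__main__":
    log_run()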
 
It does; it's the same conversation. Someone designed a stupid cable, and Nvidia and AMD are using it.
What isn't smart is claiming "they know what they are doing, so it must be fine"; that's some delusional stuff.
Who talked about a cable here? The topic is about safe VRAM temperatures. (So far) there's no indication of anyone's graphics card catching fire, or killing itself in any other way, due to high VRAM temps, unlike the cable example, of which there's plenty.

It's not guilty until proven guilty.
The 12V-whatever connector has been proven guilty of burning even when it's supposedly within spec. (Yes, I know, uSeR ErRoR blablabla, but it still has a much higher tendency to burn than it should.)
GDDR6 at 100 °C is not guilty. Otherwise there would be a crapton of forum posts / news articles about RX 7900 / RX 69x0 / RX 5700 cards dying (the 5700 XT launched in July 2019, and it was great at mining!).
I didn't care about this that much, but I'm surprised that the RX 5700 XT and RX 9070 XT both use GDDR6. Feels like a century ago.

Obviously a lower temp is preferable, but I don't really see a problem with GDDR6 running near 100 °C when it's rated for 110 °C.
EDIT: minor typo
Exactly my point. Just because I've got a knife in my hand, it doesn't mean I'm gonna kill someone. Maybe I'm a chef.
 
Maybe I'm a chef.
Or maybe you're this guy :wtf:
[image]
 
Or maybe you're this guy :wtf:
[image]
Maybe. Or maybe this guy in the centre. :rolleyes:
[image]
Awesome film, by the way, highly recommended.
 
It's not guilty until proven guilty.
The 12V-whatever connector has been proven guilty of burning even when it's supposedly within spec. (Yes, I know, uSeR ErRoR blablabla, but it still has a much higher tendency to burn than it should.)
GDDR6 at 100 °C is not guilty. Otherwise there would be a crapton of forum posts / news articles about RX 7900 / RX 69x0 / RX 5700 cards dying (the 5700 XT launched in July 2019, and it was great at mining!).
I didn't care about this that much, but I'm surprised that the RX 5700 XT and RX 9070 XT both use GDDR6. Feels like a century ago.

Obviously a lower temp is preferable, but I don't really see a problem with GDDR6 running near 100 °C when it's rated for 110 °C.
EDIT: minor typo

This depends on whether the memory modules are Micron, SK Hynix or Samsung; they have different ratings in the documentation.

Micron states OC temps up to 95 °C+
SK Hynix and Samsung state a max of 95 °C

So from this it could be on the high side of what the memory modules should be running at. On my own RX 9070 the memory modules run at 85.0 °C measured with GPU-Z, and with an undervolt I see maybe 84.0 °C over the same period, so there ain't much room for OC unless you actively try to cool the memory modules better.
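If you log it with GPU-Z ("Log to file") instead of just watching the sensor tab, you can pull the peak out of the log and compare it against those numbers. Below is only a sketch: the column name and the default log filename are assumptions about how GPU-Z labels things, so check your own log's header row, and the 95 °C ceiling is the figure quoted above, not something taken from a datasheet.

Code:
#!/usr/bin/env python3
# Sketch: find the peak VRAM temperature in a GPU-Z sensor log and compare it
# to the ~95 °C ceiling quoted in this thread for these GDDR6 modules.
# MEM_COLUMN and the log filename are assumptions -- check your own log.
import csv

MEM_COLUMN = "Memory Temperature [°C]"  # assumed GPU-Z column header
QUOTED_MAX_C = 95.0                     # ceiling quoted in the post above, not a datasheet value

def peak_vram_temp(logfile):
    temps = []
    with open(logfile, newline="", encoding="utf-8", errors="ignore") as f:
        for row in csv.DictReader(f):
            for key, value in row.items():
                if key and key.strip() == MEM_COLUMN:
                    try:
                        temps.append(float(value))
                    except (TypeError, ValueError):
                        pass
    if not temps:
        raise ValueError(f"column '{MEM_COLUMN}' not found or empty")
    return max(temps)

if __name__ == "__main__":
    peak = peak_vram_temp("GPU-Z Sensor Log.txt")
    print(f"peak VRAM {peak:.1f} °C, headroom vs {QUOTED_MAX_C:.0f} °C: {QUOTED_MAX_C - peak:+.1f} °C")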
 
The highest VRAM temperature seen on my card was 86 °C, with a hotspot of 79-80 °C (edit: in a heated/cozy room). Runs great and performs well. The case I have is shite, basically one fan in and one out. Also, the card probably improved the airflow for my CPU because of the three fans pushing air through it ^^" But I still have to test it in Roboquest; that game hammers my CPU temps.
 
This depends on whether the memory modules are Micron, SK Hynix or Samsung; they have different ratings in the documentation.

Micron states OC temps up to 95 °C+
SK Hynix and Samsung state a max of 95 °C

So from this it could be on the high side of what the memory modules should be running at. On my own RX 9070 the memory modules run at 85.0 °C measured with GPU-Z, and with an undervolt I see maybe 84.0 °C over the same period, so there ain't much room for OC unless you actively try to cool the memory modules better.
I was sure the Samsung GDDR6 chips on my XTX have a rated TJmax of at least 105 °C, likely 110 °C.
Totally forgot that apparently not all GDDR6 is created equal. Oops.

Looking at the OP's card and a few 7900 XTX and 9070 XT models, they use the same model of Hynix chips according to TPU's reviews.
Now, that exact model of VRAM chip is rated at 0-85 °C, but is that the TJmax? Is the source reliable? The review here of the ASRock Taichi 9070 XT shows VRAM temps going as high as 90 °C.
I would love to check how other 7900 XTX models fare, but the reviews here don't show VRAM temps for the XTXs.

That said, the "all these cards should have been dead" point still stands, but I feel the ground here is a bit shaky now.
 
Maybe dumb question but:

Shouldn't the vram/card throttle itself if it gets too hot? Or is that not a thing that happens?
 
I was sure the Samsung GDDR6 chips on my XTX have a rated TJmax of at least 105 °C, likely 110 °C.
Totally forgot that apparently not all GDDR6 is created equal. Oops.

Looking at the OP's card and a few 7900 XTX and 9070 XT models, they use the same model of Hynix chips according to TPU's reviews.
Now, that exact model of VRAM chip is rated at 0-85 °C, but is that the TJmax? Is the source reliable? The review here of the ASRock Taichi 9070 XT shows VRAM temps going as high as 90 °C.
I would love to check how other 7900 XTX models fare, but the reviews here don't show VRAM temps for the XTXs.

That said, the "all these cards should have been dead" point still stands, but I feel the ground here is a bit shaky now.

The problem is that almost all documentation for SK Hynix and Samsung is hard to find; only Micron has it out in the open.

Link: https://www.micron.com/products/memory/graphics-memory/gddr6

For SK Hynix and Samsung I'm going off memory; maybe production has evolved since they were first introduced, which I hope, otherwise we will see cards with memory failing.

Maybe dumb question but:

Shouldn't the vram/card throttle itself if it gets too hot? Or is that not a thing that happens?

In reality, yes, but if it's sitting right on the edge all the time it might not, because nothing is perfect.
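To illustrate what "throttle itself" means in practice: the protection logic is conceptually just a trip point with some hysteresis, something like the sketch below. This is purely an illustration with made-up numbers, not AMD's actual firmware behaviour; the point is that readings hovering between the recovery and trip points just hold the current clock, which is the "on the edge" case above.

Code:
# Purely illustrative sketch of junction-temperature throttling: step the
# memory clock down while the reading is above a trip point, step back up
# once there is some margin. The thresholds and clocks are made up; this is
# not AMD's actual firmware logic.
TRIP_C = 100.0                              # hypothetical throttle threshold
RECOVER_C = 95.0                            # hypothetical hysteresis point
CLOCK_STEPS_MHZ = [2518, 2400, 2200, 2000]  # hypothetical memory clock steps

def next_clock_index(current, mem_temp_c):
    if mem_temp_c >= TRIP_C and current < len(CLOCK_STEPS_MHZ) - 1:
        return current + 1   # too hot: drop to the next lower clock
    if mem_temp_c <= RECOVER_C and current > 0:
        return current - 1   # cooled off with margin: step back up
    return current           # in between: hold, the "on the edge" case

if __name__ == "__main__":
    idx = 0
    for reading in (98.0, 99.5, 100.2, 97.0, 94.0):
        idx = next_clock_index(idx, reading)
        print(f"{reading:5.1f} °C -> {CLOCK_STEPS_MHZ[idx]} MHz")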
 