
110°C Hotspot Temps "Expected and Within Spec", AMD on RX 5700-Series Thermals

Joined
Aug 9, 2019
Messages
226 (0.13/day)
Common sense and physics. Use your brain.

A device has a max rated limit. This is the max it can take before IMMEDIATE damage occurs. Long-term damage does not play by the same rules. Whenever you are dealing with a physical product, you NEVER push it to its 100% limit constantly and expect it to last. This applies to air conditioners, jacks, trucks, computers, tables, fans, anything you use on a daily basis. Like I said, my car can do 155 MPH. But if I were to push it that fast constantly, every day, the car wouldn't last very long before experiencing mechanical issues, because it isn't designed to SUSTAIN that speed.

Every time the GPU heats up and cools down, the solder connections experience expansion and contraction. Over time, this can result in the solder joints cracking internally, leaving a card that does not work properly. The greater the temperature swing, the faster this occurs. This is why many GPUs now shut the fans off below 50°C: letting the card cool all the way down to 30°C increases the temperature swing the GPU experiences on each cycle.
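For what it's worth, the fan-stop behaviour described above is essentially a hysteresis band. A minimal Python sketch, with made-up thresholds rather than any vendor's actual firmware values:

```python
# A minimal sketch of a zero-RPM ("fan stop") hysteresis policy like the one
# described above. Thresholds are hypothetical, not any vendor's firmware values.
FAN_START_C = 50.0  # fan spins up once the GPU climbs past this
FAN_STOP_C = 40.0   # fan stops again only after dropping below this

def update_fan(temp_c, fan_on):
    """Return the new fan state given the current temperature and state."""
    if not fan_on and temp_c >= FAN_START_C:
        return True
    if fan_on and temp_c <= FAN_STOP_C:
        return False
    return fan_on  # inside the hysteresis band: keep the previous state

# The gap between the two thresholds stops the card from cooling all the way
# down and reheating over and over, which limits the swing of each thermal cycle.
fan = False
for t in [35, 45, 52, 47, 41, 38, 55]:
    fan = update_fan(t, fan)
    print(f"{t:>2} °C -> fan {'on' if fan else 'off'}")
```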

What AMD is doing here is allowing the GPU to run at its max Tjunction temp for extended periods of time and calling this acceptable. Given the GPU also THROTTLES at this temp, AMD is admitting it designed a GPU that can't run at full speed during typical gaming workloads. Given AMD also releases GPUs that can be tweaked to both run faster and consume less voltage rather reliably, it would seem a LOT of us know better than RTG engineers.

Would you care to explain how AMD's silicon is magically no longer affected by physical expansion and contraction from temperatures? I'd love to hear about this new technology.

Really, what damage would happen to your car at 155 MPH daily? Do you perhaps have 3 gears? A small motor struggling to get to 155? I'd say letting a car sit idle would do more damage than running most cars at 155 :)


Getting from 0 to 100 MPH is where you're going to be doing the most 'damage' - if you do it in a quarter mile, you're really stressing the car, but if you take 20 miles to get to that speed, your wear and tear is much less, due to less torque. Once you get to that speed, it doesn't much matter if you're driving a muscle car or a Prius, as long as the overdrive gear is set up to sip fuel (or pull juice from the battery) just enough to overcome the drag at 100 MPH.
 
Joined
Jun 10, 2014
Messages
2,902 (0.80/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
I don't care what excuses they come up with, any sustained temperatures in the range of 100-110°C can't be good for long term reliability of the product. And this goes for any brand, not just AMD.

We have to remember that most reviews are conducted on open test benches or in open cases, while all customers will run these in closed cases, and even the best of us will not keep it completely dust free. That's why it's important that any product have some thermal headroom when reviewed under ideal circumstances, since real world conditions will always be slightly worse.
 
Joined
Sep 17, 2014
Messages
20,934 (5.97/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
Actually, the talk was about thermal design and the horror that Nvidia GPU owners feel, for some reason, on behalf of 5700 XT reference GPU owners.

Now that we've covered that: NV's 2070 AIB cards (I didn't check others) aren't great OCers either, and the performance difference between reference and AIB cards is similar for both brands.

You're right, Turing clocks right to the moon out of the box as well, but still gets an extra 3-6% across the whole line - it's minor, but it's there, and it says something about how the card is balanced at stock. The actual 'OC' on Nvidia cards is very liberal, because boost also always punches above the specced number. And I have to say, the AIB Navis so far look pretty good too, on a similar level even - sans the OC headroom - AMD squeezed out every last drop at the expense of a bit of efficiency, and it shows. Was that worth it? I don't know.

The problem here is that AMD once again managed to release ref designs that visibly suck, and it's not good for their brand image; it does not show the same dedication to their GPUs that Nvidia's launches do. The absence of AIB cards at launch makes that problem a bit more painful. And it's not a first - it goes on, and on. In the meantime, we are looking at a 400 dollar card. It's not strange to expect a bit more.

Oh and by the way, I said similar stuff about the Nvidia Founders Edition when Pascal launched, but the difference there was that Pascal and GPU Boost operated at much lower temps. And even then the FEs still limited performance a bit.
 
Joined
May 7, 2014
Messages
55 (0.02/day)
You're right, Turing clocks right to the moon out of the box as well, but still gets an extra 3-6% across the whole line - it's minor, but it's there, and it says something about how the card is balanced at stock. The actual 'OC' on Nvidia cards is very liberal, because boost also always punches above the specced number. And I have to say, the AIB Navis so far look pretty good too, on a similar level even - sans the OC headroom - AMD squeezed out every last drop at the expense of a bit of efficiency, and it shows. Was that worth it? I don't know.

The Turing chips have small OC headroom, but the performance gain is almost 10%, which is the opposite of what you see with Navi.

 
Joined
Jun 27, 2019
Messages
1,855 (1.05/day)
Location
Hungary
System Name I don't name my systems.
Processor i3-12100F 'power limit removed/-130mV undervolt'
Motherboard Asus Prime B660-PLUS D4
Cooling ID-Cooling SE 224 XT ARGB V3 'CPU', 4x Be Quiet! Light Wings + 2x Arctic P12 black case fans.
Memory 4x8GB G.SKILL Ripjaws V DDR4 3200MHz
Video Card(s) Asus TuF V2 RTX 3060 Ti @1920 MHz Core/950mV Undervolt
Storage 4 TB WD Red, 1 TB Silicon Power A55 Sata, 1 TB Kingston A2000 NVMe, 256 GB Adata Spectrix s40g NVMe
Display(s) 29" 2560x1080 75 Hz / LG 29WK600-W
Case Be Quiet! Pure Base 500 FX Black
Audio Device(s) Onboard + Hama uRage SoundZ 900+USB DAC
Power Supply Seasonic CORE GM 500W 80+ Gold
Mouse Canyon Puncher GM-20
Keyboard SPC Gear GK630K Tournament 'Kailh Brown'
Software Windows 10 Pro
Honestly, I don't care about this 'issue', and I don't believe for a second that Nvidia or Intel doesn't have the same stuff going on anyway.

In the past ~10+ years I have only had 2 cards die on me, and both were Nvidia cards, so there's that.

I don't care about ref/blower cards either; whoever buys those should know what they are buying instead of waiting a bit for the 'proper' models.

I'm planning to buy a 5700 but I'm not in a hurry; I can easily wait till all of the decent models are out and then buy one of them (Nitro/Pulse/Giga G1 probably).
 
Joined
Oct 31, 2013
Messages
186 (0.05/day)
Every Vega has the T-junction temp sensors. GPU-Z showed them, which confused a lot of people, so for one GPU-Z version they chose not to show the reading by default. But you could still activate it.

And don't forget there is still the usual GPU temperature, which shows the readings we are used to.

We could only properly compare Nvidia and AMD cards if we had a sensor with a lot of temperature zones that we could put between the GPU die and the cooler. Then we could see the hot spots no matter who built the card.
 
Joined
Aug 6, 2009
Messages
1,162 (0.22/day)
Location
Chicago, Illinois
Ultimately what I and others should care about is the power-consumption-to-performance ratio, BUT with AMD being on a smaller node than Nvidia I hoped for and expected much better results. Oh well.
 
Joined
Oct 1, 2006
Messages
4,884 (0.76/day)
Location
Hong Kong
Processor Core i7-12700k
Motherboard Z690 Aero G D4
Cooling Custom loop water, 3x 420 Rad
Video Card(s) RX 7900 XTX Phantom Gaming
Storage Plextor M10P 2TB
Display(s) InnoCN 27M2V
Case Thermaltake Level 20 XT
Audio Device(s) Soundblaster AE-5 Plus
Power Supply FSP Aurum PT 1200W
Software Windows 11 Pro 64-bit
First of all: is this your intuition or are there some publications to support this hypothesis? :)

Second: you seem a bit confused. The passive cooling does not increase the number of times the fan starts. The fan is not switching on and off during gaming.
If the game applies a lot of load, the fan will be on during the whole session. Otherwise the fan is off.
So the number of starts and stops is roughly the same. It's just that your fan starts during boot and mine during game launch. So I don't have to listen to it when I'm not gaming (90% of the time).

In fact it actually decreases the number of starts for those of us who don't play games every day.
First of all, motors draw the maximum amount of current when they start, and this heats up the wire windings in the motor.
Extra starts and stops mean extra thermal cycles for the wires. This is similar to the concern that other members have raised about the solder joints of the GPU.
Then there is the wear on the bearings, depending on the type. Rifle bearings and fluid dynamic bearings require a certain speed to get the lubricant flowing.
This means that at start-up there are parts of the bearing with very little lubrication, which causes extra wear on the bearing compared to normal operation.

Now, because fan blades are rather light loads, the motor gets up to speed quickly and the effects are minimal.
That is why I said it is only slightly detrimental to fan life span.
Shutting the fans off at idle is for noise reasons and nothing else; that is exactly what I said in my post, thanks for repeating my point.

No, not a fact; it certainly doesn't decrease the number of starts, since the GPU fans will spin up at least once on boot.
Also, depending on the design of the card, some GPUs will start the fans during video playback because the GPU heats up under the hardware-acceleration load.
So the best-case scenario is the same number of start cycles.
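To make the counting argument concrete, here is a toy Python comparison of fan start events under an always-spinning policy versus a zero-RPM idle policy; the daily timeline is invented purely for illustration:

```python
# Toy comparison of fan start counts for an "always spinning" policy versus a
# zero-RPM idle policy. The daily timeline below is made up for illustration.
day = [("idle", 2), ("gaming", 1), ("idle", 4), ("gaming", 2), ("idle", 3)]  # (activity, hours)

def count_fan_starts(timeline, zero_rpm_idle):
    starts = 0
    fan_on = False
    for activity, _hours in timeline:
        want_on = (activity == "gaming") if zero_rpm_idle else True
        if want_on and not fan_on:
            starts += 1
        fan_on = want_on
    return starts

# Always-on spins up once at boot; zero-RPM spins up once per load session.
print("always-on policy:", count_fan_starts(day, zero_rpm_idle=False), "start")
print("zero-RPM policy :", count_fan_starts(day, zero_rpm_idle=True), "starts")
```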
 
Joined
Apr 12, 2013
Messages
6,750 (1.67/day)
You really need to clarify whether you actually have a point or just want to keep this slowchat going with utter bullshit. The numbers speak for themselves, what are you really arguing against? That AMD is a sad puppy not getting enough love?
So that's your argument, huh? What's your data point for a 110°C "hotspot" temperature being (always) bad, given we have absolutely no reference, especially at 7nm, nor do we know whether 110°C is sustained for any length of time? Do you, for instance, have data about hotspots on a 9900K @5GHz or above? And how about this ~ Temperature Spikes Reported on Intel's Core i7-7700, i7-7700K Processors

So you come with absolutely no data and lots of assumptions, then ignore historical trends and call whatever I'm saying utter BS. Great :rolleyes:

Could AMD have done a better job with the cooling ~ sure. Do we know that the current solution will fail in the medium to long term? You have absolutely zero basis to claim that, unless you know more than the rest of us about this "issue" or any similar one on the competitors' products.
 
Joined
Oct 15, 2010
Messages
208 (0.04/day)
It is hard to compare to the competition, because Nvidia GPUs do not have a TJunction sensor at all.
Without knowing where the temp sensor on Nvidia GPUs is located, there really is no valid comparison.
The edge temp on AMD GPUs, aka the "GPU" readout, is much closer to what you typically expect.

Edit: It is not a single TJunction sensor; the TJunction/hotspot readout is just the highest reading out of many different sensors spread across the die.
In the case of the Radeon VII there are 64 of them. It is not necessarily the same area of the GPU die that is getting hot all the time.
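In other words, the hotspot/junction figure is just the maximum over the sensor array. A rough Python illustration (the 64-sensor count comes from the Radeon VII example above; the readings themselves are made up):

```python
# Illustration only: how a hotspot/junction readout can be derived from an array
# of on-die sensors versus a single "edge" sensor. The readings are made up.
import random

random.seed(0)
sensor_temps = [random.uniform(70.0, 105.0) for _ in range(64)]  # 64 zones, as on Radeon VII

edge_temp = sensor_temps[0]        # roughly what a single legacy sensor would report
junction_temp = max(sensor_temps)  # hotspot = hottest zone anywhere on the die

print(f"edge / 'GPU' temp  : {edge_temp:.1f} °C")
print(f"junction / hotspot : {junction_temp:.1f} °C")
```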

Yeah, people never bother to read the whole text; they only see "AMD, 110 degrees" and then start complaining.
You moronic faqs, why bother wasting energy commenting at all! You are in no position to understand squat, so please go do something else with your life instead of spamming us on the forums here!
 
Joined
Jul 10, 2015
Messages
749 (0.23/day)
Location
Sokovia
System Name Alienation from family
Processor i7 7700k
Motherboard Hero VIII
Cooling Macho revB
Memory 16gb Hyperx
Video Card(s) Asus 1080ti Strix OC
Storage 960evo 500gb
Display(s) AOC 4k
Case Define R2 XL
Power Supply Be f*ing Quiet 600W M Gold
Mouse NoName
Keyboard NoNameless HP
Software You have nothing on me
Benchmark Scores Personal record 100m sprint: 60m
Almost no OC headroom since the 7970 GHz edition. The "OC dream" watercooled Fury X? No. OC on AIB cards? No. Brand new 7nm Navi, and here we are: underwhelming, already overclocked straight from the production line. RDNA 2.0 to the rescue in 2020.
 
Joined
Jul 9, 2015
Messages
3,413 (1.06/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
You're right, Turing clocks right to the moon out of the box as well, but still gets an extra 3-6% across the whole line - it's minor, but it's there, and it says something about how the card is balanced at stock.
When I checked it was 4% for ASUS and 3% for MSI.

The actual 'OC' on Nvidia cards is very liberal, because boost also always punches above the specced number.
Boost doesn't count as OC in my book. It's part of the standard package, and W1zzard keeps explicitly stating the clock range in recent reviews.
The performance we get at the end of the day already includes that boost.
You can't count it as something added on top.


The problem here is AMD once again managed to release ref designs that visibly suck, and its not good for their brand image, it does not show dedication to their GPUs much like Nvidia's releases are managed.
More of a PR issue than a practical one. We don't even know what the "spot" temps of NV cards are.


The absence of AIB cards at launch makes that problem a bit more painful. And its not a first - it goes on, and on. In the meantime, we are looking at a 400 dollar card. Its not strange to expect a bit more.
That's simply caused by playing the catch-up game.
And, frankly, I'd rather learn what's coming 1-2 months in advance than wait for the reference and AIB cards to hit together. (I don't even get what ref cards are for, other than that.)

Oh and by the way, I said similar stuff about the Nvidia Founders when Pascal launched, but the difference there was that Pascal and GPU Boost operated at much lower temps. And even thén the FE's still limited performance a bit.
Ok, let me re-state this again:
1) AMD used a blower type (stating that is the only way they can guarantee the thermals)
2) The very small perf difference between AIB and ref cards proves that even the ref 5700 XT is not throttling excessively, despite running 20+ degrees hotter.
3) "Spot temperature" is just a number that only really matters for ref cards (and who buys those?), and even there it is not causing practical problems - although I admit that @efikkan has a point and it might have a bad impact on the card's longevity. Still, "ref card, who cares".

In short: possibly a bad impact on card longevity, but we are not sure. Definitely no serious performance impact. And we don't even know what the values are for NV, as there is no exposed sensor.
 
Joined
Oct 28, 2010
Messages
251 (0.05/day)
That's exactly what was said about the 1070 GPU, which indeed could exceed 100°C, but in notebooks that in turn overheated the CPU too much, so maintenance was due anyway.

Radeons have always run hot, but this is ludicrous.
And nVs didn't ?
lol
 

Frick

Fishfaced Nincompoop
Joined
Feb 27, 2006
Messages
18,930 (2.85/day)
Location
Piteå
System Name Black MC in Tokyo
Processor Ryzen 5 5600
Motherboard Asrock B450M-HDV
Cooling Be Quiet! Pure Rock 2
Memory 2 x 16GB Kingston Fury 3400mhz
Video Card(s) XFX 6950XT Speedster MERC 319
Storage Kingston A400 240GB | WD Black SN750 2TB |WD Blue 1TB x 2 | Toshiba P300 2TB | Seagate Expansion 8TB
Display(s) Samsung U32J590U 4K + BenQ GL2450HT 1080p
Case Fractal Design Define R4
Audio Device(s) Line6 UX1 + some headphones, Nektar SE61 keyboard
Power Supply Corsair RM850x v3
Mouse Logitech G602
Keyboard Cherry MX Board 1.0 TKL Brown
VR HMD Acer Mixed Reality Headset
Software Windows 10 Pro
Benchmark Scores Rimworld 4K ready!
I don't care what excuses they come up with, any sustained temperatures in the range of 100-110°C can't be good for long term reliability of the product. And this goes for any brand, not just AMD.

We have to remember that most reviews are conducted on open test benches or in open cases, while all customers will run these in closed cases, and even the best of us will not keep it completely dust free. That's why it's important that any product have some thermal headroom when reviewed under ideal circumstances, since real world conditions will always be slightly worse.

The thing is, we don't know that. If these cards start to drop dead in a year or so and it is confirmed to be temperature related (as opposed to sloppy manufacturing), then we'll know. Until then it's more or less qualified guesswork. And modern chips rarely sustain any load; the voltage regulation and boost thingies are way too sophisticated for that.
 
Joined
Feb 18, 2017
Messages
688 (0.26/day)
The GPU is not the only thing using power...


5700 XT uses more power than 2070 Super in gaming on average, while performing worse. 5700 XT is slower, hotter and louder.

The RTX 2060 uses more power than an RX 5700 in gaming on average while performing worse. So what did you want to say?
 
Joined
Jun 28, 2016
Messages
3,595 (1.26/day)
The thing is, we don't know that. If these cards start to drop dead in a year or so and it is confirmed to be temperature related (as opposed to sloppy manufacturing), then we'll know. Until then it's more or less qualified guesswork. And modern chips rarely sustain any load; the voltage regulation and boost thingies are way too sophisticated for that.
Well. Many people on this forum are convinced that high temperatures are killing Intel CPUs. Do you want to tell them that AMD GPUs are magically resistant to 100°C? :-D
 
Joined
Apr 21, 2010
Messages
562 (0.11/day)
System Name Home PC
Processor Ryzen 5900X
Motherboard Asus Prime X370 Pro
Cooling Thermaltake Contac Silent 12
Memory 2x8gb F4-3200C16-8GVKB - 2x16gb F4-3200C16-16GVK
Video Card(s) XFX RX480 GTR
Storage Samsung SSD Evo 120GB -WD SN580 1TB - Toshiba 2TB HDWT720 - 1TB GIGABYTE GP-GSTFS31100TNTD
Display(s) Cooler Master GA271 and AoC 931wx (19in, 1680x1050)
Case Green Magnum Evo
Power Supply Green 650UK Plus
Mouse Green GM602-RGB ( copy of Aula F810 )
Keyboard Old 12 years FOCUS FK-8100
Read this :

This 110°C hotspot reading was there long before Navi/Radeon VII, but people couldn't understand it. Here, one person put it very clearly:

Under the old way of measuring things, AMD had one value to work with. It established a significant guard band around its measurements and left headroom in the card design to avoid running too close to the proverbial ragged edge.

Using this new method, AMD is able to calibrate its cards differently. They don't need to leave as much margin on the table, because they have a much more accurate method of monitoring temperature. The GPU automatically adjusts its own voltage and frequencies depending on the specific characteristics of each individual GPU rather than preprogrammed settings chosen by AMD at the factory.

It is also possible that pre-Navi AMD GPUs hit temperatures above 95C in other places but that this is not reported to the end-user because there's only one sensor on the die. AMD did not say if this was the case or not. All it said is that they measured in one place and based their temperature and frequency adjustments on this single measurement as opposed to using a group of measurements.
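As a rough sketch of the contrast being described, and with entirely invented limits and guard-band values, the two schemes might look something like this:

```python
# Entirely invented numbers: TJ_MAX and the guard band are illustrative, not
# AMD's actual values. The point is only the difference between the two schemes.
TJ_MAX = 110.0      # junction limit used by the multi-sensor scheme
GUARD_BAND = 15.0   # safety margin the single-sensor scheme must keep

def should_throttle_legacy(single_sensor_c):
    # One measurement point: back off well before the real limit, because the
    # true hottest spot could be considerably hotter than what this sensor sees.
    return single_sensor_c >= TJ_MAX - GUARD_BAND

def should_throttle_multi(sensor_array_c):
    # Many measurement points: throttle only when the hottest zone actually
    # approaches the limit, so less margin is left on the table.
    return max(sensor_array_c) >= TJ_MAX

sensors = [96.0, 99.0, 104.0, 101.0, 97.0]
print("legacy scheme throttles:", should_throttle_legacy(sensors[0]))  # 96 >= 95  -> True
print("multi-sensor throttles :", should_throttle_multi(sensors))      # 104 < 110 -> False
```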

I hope one day Intel/Nvidia/AMD all follow this path and allow us to see the temperature of the whole array of sensors.

This is the best example for those who don't understand :rolleyes:
From 1995 to 2015 - it took them 20 years to get a SHARP image of Pluto.
 
Joined
Oct 31, 2013
Messages
186 (0.05/day)
Boost doesn't count as OC in my book. It's part of the standard package, and W1zzard keeps explicitly stating the clock range in recent reviews.
The performance we get at the end of the day already includes that boost.
You can't count it as something added on top.

I think the differences between boost and overclocking have been blending together for some time now. And AMD, with its junction temp monitoring and Precision Boost, can get close to a good overclock with its boost features alone.
I find it amazing that they have 32 sensors on Vega and 64 on the Radeon VII. I wonder how many the Navi GPUs have. And if we got a tool that showed the temps as a colored 2D texture, we could see where, when, and which part of the GPU is being utilized - something like the sketch below.
And think about the huge GPU dies of the Nvidia RTX cards. They likely have a lot of headroom with a junction temp optimization.
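Something like that "colored 2D texture" could be mocked up in a few lines; the 8x8 grid below mirrors the 64 sensors mentioned for the Radeon VII, but the temperature values are invented:

```python
# Sketch of rendering a hypothetical grid of per-zone die temperatures as a
# heatmap. 8x8 = 64 zones, matching the Radeon VII sensor count mentioned above;
# the readings are random placeholders, not real data.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
zones = rng.uniform(75.0, 108.0, size=(8, 8))  # fake per-sensor readings, °C

fig, ax = plt.subplots()
im = ax.imshow(zones, cmap="inferno", vmin=60, vmax=110)
ax.set_title(f"Die temperature map (hotspot {zones.max():.0f} °C)")
fig.colorbar(im, ax=ax, label="°C")
plt.show()
```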

People are complaining about the "not so good" 7nm GPUs. It is a new process, and it will take some time to get the best out of that manufacturing node. And we will see how well Nvidia's architecture scales on 7nm when it is released :)
 

Frick

Fishfaced Nincompoop
Joined
Feb 27, 2006
Messages
18,930 (2.85/day)
Location
Piteå
System Name Black MC in Tokyo
Processor Ryzen 5 5600
Motherboard Asrock B450M-HDV
Cooling Be Quiet! Pure Rock 2
Memory 2 x 16GB Kingston Fury 3400mhz
Video Card(s) XFX 6950XT Speedster MERC 319
Storage Kingston A400 240GB | WD Black SN750 2TB |WD Blue 1TB x 2 | Toshiba P300 2TB | Seagate Expansion 8TB
Display(s) Samsung U32J590U 4K + BenQ GL2450HT 1080p
Case Fractal Design Define R4
Audio Device(s) Line6 UX1 + some headphones, Nektar SE61 keyboard
Power Supply Corsair RM850x v3
Mouse Logitech G602
Keyboard Cherry MX Board 1.0 TKL Brown
VR HMD Acer Mixed Reality Headset
Software Windows 10 Pro
Benchmark Scores Rimworld 4K ready!
Well. Many people on this forum are convinced that high temperatures are killing Intel CPUs. Do you want to tell them that AMD GPUs are magically resistant to 100°C? :-D

I've never seen that claim. And yes, if that is what the specs say.
 
Joined
Apr 10, 2013
Messages
302 (0.07/day)
Location
Michigan, USA
Processor AMD 1700X
Motherboard Crosshair VI Hero
Memory F4-3200C14D-16GFX
Video Card(s) GTX 1070
Storage 960 Pro
Display(s) PG279Q
Case HAF X
Power Supply Silencer MK III 850
Mouse Logitech G700s
Keyboard Logitech G105
Software Windows 10
110 seems pretty hot. Not so hot it dies within warranty though.
 
Joined
Feb 11, 2009
Messages
5,398 (0.97/day)
System Name Cyberline
Processor Intel Core i7 2600k -> 12600k
Motherboard Asus P8P67 LE Rev 3.0 -> Gigabyte Z690 Auros Elite DDR4
Cooling Tuniq Tower 120 -> Custom Watercoolingloop
Memory Corsair (4x2) 8gb 1600mhz -> Crucial (8x2) 16gb 3600mhz
Video Card(s) AMD RX480 -> ... nope still the same :'(
Storage Samsung 750 Evo 250gb SSD + WD 1tb x 2 + WD 2tb -> 2tb MVMe SSD
Display(s) Philips 32inch LPF5605H (television) -> Dell S3220DGF
Case antec 600 -> Thermaltake Tenor HTCP case
Audio Device(s) Focusrite 2i4 (USB)
Power Supply Seasonic 620watt 80+ Platinum
Mouse Elecom EX-G
Keyboard Rapoo V700
Software Windows 10 Pro 64bit
rubbing eyes

So how many people are still running 7970s/R9 2xx cards around here, which are 6-8 years old?

My sisters are still running my old HD 6950s, sooo yeah. Oh, and a friend still uses an HD 7950 today, purely because the prices are ridiculous so upgrading does not make sense.
 
Joined
Sep 17, 2014
Messages
20,934 (5.97/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
So that's your argument, huh? What's your data point for a 110°C "hotspot" temperature being (always) bad, given we have absolutely no reference, especially at 7nm, nor do we know whether 110°C is sustained for any length of time? Do you, for instance, have data about hotspots on a 9900K @5GHz or above? And how about this ~ Temperature Spikes Reported on Intel's Core i7-7700, i7-7700K Processors

So you come with absolutely no data and lots of assumptions, then ignore historical trends and call whatever I'm saying utter BS. Great :rolleyes:

Could AMD have done a better job with the cooling ~ sure. Do we know that the current solution will fail in the medium to long term? You have absolutely zero basis to claim that, unless you know more than the rest of us about this "issue" or any similar one on the competitors' products.

You ought to scroll back a bit; I covered this at length - memory ICs reach 100°C, for example, which is definitely not where you want them. That heat affects other components, and none of this helps chip longevity. The writing is on the wall. To each his own what he thinks of that, but it's not looking comfy to me.

By the way, your 7700K link kind of underlines that we do know about the 'hot spots' on Intel processors; otherwise you wouldn't have that reading. But these Navi temps are not 'spikes'. They are sustained.

We can keep going in circles about this, but the idea that Nvidia reference cards also hit these temps is the same guesswork; and we do have much better temp readings from all the other sensors on a reference FE board - including the memory ICs. And note: FEs throttle too, but I've seen GPU Boost in action and it does the job a whole lot better; as in, it will rigorously manage voltages and temps instead of 'pushing for the limit' like we see on these Navi boards. This is further underlined by the OC headroom those cards still have. There are more than enough 'data points' available...

Besides, nothing is really new here - AMD's ref cards have always been complete junk.

"ref card, who cares"

In short: possibly bad impact on card longevity, but we are not sure. Definitely not having serious performance impact. We don't even know what values are for NV, as there is no exposed sensor.

We are never sure until after the fact. I'll word it differently: the current state of affairs does not instill confidence. And no, I don't 'care' about ref cards either, but as I pointed out earlier, AMD should, especially when AIB cards are late to the party. These kinds of events kill their momentum at every GPU launch, and it keeps repeating itself.
 
Joined
Mar 18, 2015
Messages
2,960 (0.89/day)
Location
Long Island
and it's 100-150 dollars cheaper.... so why are you comparing the two?
If anything you should compare it to the RTX 2060 Super (like in your link... was the 2070 a typo?) and then the 5700 XT is overall the better option.

To my eyes the 5700 XT should be compared with the 2070 .... the 5700 w/ the 2060... no supers.
With both cards overclocked, the AIB MSI 2070 Gaming Z (not Super) is still about 4% faster than the MSI Evoke 5700 XT... so if the price difference is deemed big enough (-$50) I can see the attraction... but the 5700 XT being 2.5 times as loud is a deal breaker at any price. The Sapphire is slower still, but it's significantly quieter.

MSI 2070 = 30 dBA
MSI 5700 XT = 43 dBA ... a 13 dBA difference ≈ 2.46 times as loud


MSI 5700 XT = 100%
Reference 2070 = 96%

MSI Evoke Gain from OC = 100% x (119.6 / 115.1) = 103.9

MSI 2070 Gain from overclocking = 96% x (144.5 / 128.3) = 108.1

108.1 / 103.9 = + 4.0%

The Gaming Z is $460; the Evoke's suggested price is $430... which will likely be higher for the first few months.

If we ask, "Is a 5% increase in performance worth a 7% increase in price ?" It would be to me. But with a $1200 build versus a $1230 build, that's a 5% increase in speed for a 2.5% increase in price, and that's a more appropriate comparison as the whole system is faster and the card don't deliver any fps sitting on your desk. However, the 800 pound gorilla in the room is the 43 dbA 2.5 times as loud thing.

I think the issue here is that, from what we have seen so far, most of the 5700 XT cards are not true AIB cards but more like the EVGA Black series... pretty much a reference PCB with an AIB cooler. Asus went out and beefed up the VRMs with an 11/2+1 design versus the 7/2 reference. They didn't cool nearly as well as the MSI 5700 XT or 2070; they did a lot better on out-of-the-box performance, but OC headroom was dismal. As the card was so aggressively OC'd in the box, manual OC'ing added just 0.7% performance.


Asus 5700 XT Strix = 100% x (118.3 / 117.4) = 100.77
MSI 2070 Gaming Z = 95% x (144.5 / 128.3) = 107.00

107.00 / 100.77 = + 6.18 %
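For anyone who wants to check the math, here is a short script reproducing the figures above; the loudness conversion uses the usual "perceived loudness doubles every ~10 dB" rule of thumb, and the FPS numbers are the ones quoted in this post:

```python
# Checking the arithmetic in this post. Perceived loudness is assumed to roughly
# double for every 10 dB increase; the FPS figures are the ones quoted above.
def perceived_loudness_ratio(delta_dba: float) -> float:
    return 2 ** (delta_dba / 10)

print(f"43 dBA vs 30 dBA: {perceived_loudness_ratio(43 - 30):.2f}x as loud")  # ~2.46x

def oc_scaled(stock_index_pct: float, oc_fps: float, stock_fps: float) -> float:
    # Scale a card's stock performance index by its own overclocking gain.
    return stock_index_pct * (oc_fps / stock_fps)

evoke     = oc_scaled(100, 119.6, 115.1)  # MSI Evoke 5700 XT  -> ~103.9
gaming_z  = oc_scaled(96,  144.5, 128.3)  # MSI 2070 Gaming Z  -> ~108.1
strix     = oc_scaled(100, 118.3, 117.4)  # Asus 5700 XT Strix -> ~100.8
gaming_z2 = oc_scaled(95,  144.5, 128.3)  # same card against the 95% baseline used above

print(f"Gaming Z vs Evoke: +{(gaming_z / evoke - 1) * 100:.1f}%")   # ~+4.0%
print(f"Gaming Z vs Strix: +{(gaming_z2 / strix - 1) * 100:.1f}%")  # ~+6.2%
```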

Interesting tho that Asus went all out, spending money on the PCB redesign, when MSI (and Sapphire) look like they used a cheaper memory controller than the reference card, and yet MSI hit 119.6 in the OC test whereas Asus only hit 118.3. Still, it will surely cost closer to what the premium AIB 2070s cost due to the PCB redesign, and tho it's 7°C hotter and 6% slower than the MSI 2070... it's only 6 dBA louder (performance BIOS). To get lower (+2 dBA), the performance drops and temps go up to 82°C.

Tho the Asus is 6% slower and the MSI is 4% slower than the MSI 2070... if I couldn't get a 2070 and was looking to choose a 5700 XT, it would have to be the Asus... but not at $440.

As for the hot spots, I'm kinda betwixt and between... Yes, I'm inclined to figure that I have neither the background nor the experience to validate or invalidate what they are saying... but in this era... lying to everybody seems to be common practice. In recent memory we have AMD doing the "it was designed that way" routine when the 6-pin 480s were creating fireworks... and then they issued a soft fix, followed by a move to 8-pin cards. EVGA said "we designed it that way" when 1/3 of the heatsink missed the GPU on the 970... and again, shortly thereafter they issued a redesign. Yet again, when the EVGA 1060s thru 1080s started smoking, the "we designed it that way" mantra was the first response, and then there was the recall / do-it-yourself kit / redesign with thermal pads.

All I can say is "I don't know... I'm in no position to judge. Ask me again in 6 months after we get user feedback." But I'm also old enough to remember AMD having fun at Nvidia's expense by frying an egg on a GTX 480.
 
Joined
Sep 15, 2011
Messages
6,471 (1.41/day)
Processor Intel® Core™ i7-13700K
Motherboard Gigabyte Z790 Aorus Elite AX
Cooling Noctua NH-D15
Memory 32GB(2x16) DDR5@6600MHz G-Skill Trident Z5
Video Card(s) ZOTAC GAMING GeForce RTX 3080 AMP Holo
Storage 2TB SK Platinum P41 SSD + 4TB SanDisk Ultra SSD + 500GB Samsung 840 EVO SSD
Display(s) Acer Predator X34 3440x1440@100Hz G-Sync
Case NZXT PHANTOM410-BK
Audio Device(s) Creative X-Fi Titanium PCIe
Power Supply Corsair 850W
Mouse Logitech Hero G502 SE
Software Windows 11 Pro - 64bit
Benchmark Scores 30FPS in NFS:Rivals
Just admit it AMD. You overclock and overvolt those GPUs like crazy just to gain 5-6% more performance in order to barely compete with nVidia.
How much is the power consumption when those GPUs pass 100°C ??
 
Joined
Oct 1, 2006
Messages
4,884 (0.76/day)
Location
Hong Kong
Processor Core i7-12700k
Motherboard Z690 Aero G D4
Cooling Custom loop water, 3x 420 Rad
Video Card(s) RX 7900 XTX Phantom Gaming
Storage Plextor M10P 2TB
Display(s) InnoCN 27M2V
Case Thermaltake Level 20 XT
Audio Device(s) Soundblaster AE-5 Plus
Power Supply FSP Aurum PT 1200W
Software Windows 11 Pro 64-bit
Just admit it AMD. You overclock and overvolt those GPUs like crazy just to gain 5-6% more performance in order to barely compete with nVidia.
How much is the power consumption when those GPUs pass 100°C ??
They don't. The GPUs on average reach high 80s to 90-ish degrees, and the power consumption figures are in the reviews.
Every modern GPU has a power target set in the BIOS; without any overclock it just reaches that target and stays there.
It is not like the old days, where GPUs would run themselves into the ground when you ran Furmark.
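A toy model of that power-target behaviour, with entirely made-up numbers:

```python
# Toy model of power-target-limited boost, as described above. All numbers are
# invented; the point is just that the clock rises until board power hits the
# target set in the BIOS, then holds there.
POWER_TARGET_W = 225.0   # board power limit from the BIOS (illustrative)
BASE_CLOCK_MHZ = 1600
BASE_POWER_W = 150.0     # fake power draw at the base clock
WATTS_PER_MHZ = 0.25     # fake linear power cost of extra clock speed
STEP_MHZ = 15

def modelled_power(clock_mhz: float) -> float:
    return BASE_POWER_W + (clock_mhz - BASE_CLOCK_MHZ) * WATTS_PER_MHZ

def boost_clock(max_mhz: float) -> float:
    """Raise the clock in steps until the next step would exceed the power target."""
    clock = BASE_CLOCK_MHZ
    while clock + STEP_MHZ <= max_mhz and modelled_power(clock + STEP_MHZ) <= POWER_TARGET_W:
        clock += STEP_MHZ
    return clock

settled = boost_clock(2100)
print(f"settles at {settled:.0f} MHz, drawing ~{modelled_power(settled):.0f} W")
```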
 