
NVIDIA GPUs Have Hotspot Temperature Sensors Like AMD

btarunr

Editor & Senior Moderator
Staff member
NVIDIA GeForce GPUs feature hotspot temperature measurement akin to AMD Radeon ones, according to an investigative report by Igor's Lab. A beta version of HWiNFO already supports hotspot measurement. As its name suggests, the hotspot is the hottest spot on the GPU, measured from a network of thermal sensors across the GPU die, unlike the conventional "GPU Temperature" sensor, which reads off a single physical location on the GPU die. AMD refers to this static sensor as "Edge temperature." In some cases, the temperature reported by this sensor can differ from the hotspot by as much as 20°C, which underscores the hotspot's importance. Whichever sensor reports the highest temperature at any given moment becomes the hotspot.
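
To make the distinction concrete, here is a minimal Python sketch of the idea: the hotspot is simply the maximum over a network of per-area sensors, while the conventional reading comes from one fixed diode. All sensor names and values below are invented for illustration; real readings only surface through vendor firmware and driver paths.

```python
# A minimal sketch of how a "hotspot" reading differs from a single-point
# "GPU Temperature". All sensor names and values here are made up for
# illustration; real dies expose these only through vendor firmware/drivers.

# Hypothetical per-area readings (in degrees C) from a grid of on-die diodes.
sensor_grid = {
    "compute_units_0": 92.5,
    "compute_units_1": 96.0,
    "memory_controller": 81.0,
    "edge_diode": 76.0,  # the single sensor a classic "GPU Temperature" reads
}

# The hotspot is simply the hottest sensor in the network at this instant.
hotspot = max(sensor_grid.values())

# The conventional reading comes from one fixed location on the die.
edge_temp = sensor_grid["edge_diode"]

print(f"Hotspot: {hotspot:.1f} C")
print(f"Edge temperature: {edge_temp:.1f} C")
print(f"Delta: {hotspot - edge_temp:.1f} C")  # can reach ~20 C per the article
```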

GPU manufacturers rarely disclose the physical locations of on-die thermal sensors, but during the AMD Radeon VII launch we got a rare glimpse of them in a company slide, with the sensors located near the components that can get the hottest, such as the compute units (pictured below). Igor's Lab published measurements of the deviation between the hotspot and "GPU temperature" sensors on a GeForce RTX 3090 Founders Edition card. The deviation between the two (11-14°C) is much narrower than the one between hotspot and Edge temperature on an MSI Radeon RX 6800 XT Gaming X Trio (which posts a 12-20°C difference).
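
For context, the conventional reading these deltas are measured against is the one NVIDIA exposes through its public NVML interface; the hotspot sensor the HWiNFO beta reads is not part of that documented API. A minimal sketch using the pynvml bindings (assumes an NVIDIA GPU with drivers installed and `pip install pynvml`):

```python
# Reads the conventional single-point "GPU Temperature" via NVML.
# The hotspot sensor discussed in the article is NOT exposed through this
# public API; tools like the HWiNFO beta obtain it by other means.
import pynvml

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system
    name = pynvml.nvmlDeviceGetName(handle)
    if isinstance(name, bytes):  # older pynvml versions return bytes
        name = name.decode()
    temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
    print(f"{name}: GPU Temperature = {temp} C")
finally:
    pynvml.nvmlShutdown()
```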



 
More help to miners. Monitor accurate temps of GPUs in Rigs.
 
More help to miners. Monitor accurate temps of GPUs in Rigs.
????
Talk about a stretch! Some people will take every opportunity to crap on miners for literally no reason other than spite, I guess...

Yes, it will help miners, but it'll also help gamers lol... it's not a mining-focused feature in any way, shape or form...
 
I've always known that hotspots can be much hotter than what a single sensor can read, and 20°C is huge. This is why I always buy cards with powerful coolers, such as my current one (see specs), which can keep temperatures down even under the biggest loads. It does so silently, too.
 
Some people will take every opportunity to crap on miners for literally no reason other than spite, I guess...
Did you get a new GPU (RTX 3000 or RX 6000)?
 
A narrower hotspot-to-edge delta than AMD's indicates better circuit design.
 
A lot of people replace the stock TIM and get stutters in games while the GPU temperature readings still look OK; this is the reason. Edge temperature is not the best indicator of properly applied TIM, because a third of the die might have no TIM at all and the edge temp readings would still be fine.
Hotspot temp is better IMHO, and would better explain why people are getting stutters in games.
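
To illustrate that diagnostic idea: under sustained load, a botched TIM job shows up as an unusually wide hotspot-to-edge gap even while the plain "GPU temperature" looks healthy. A minimal sketch, with the threshold and readings invented for illustration (real numbers would come from a tool like the HWiNFO beta):

```python
# A sketch of the idea above: a bad TIM application shows up as a large
# hotspot-to-edge gap under load, even when the plain "GPU temperature"
# looks fine. The threshold and example readings are invented; real values
# would come from whatever tool exposes these sensors (e.g. HWiNFO).

TIM_WARNING_DELTA_C = 20.0  # rough threshold suggested by the article's numbers

def check_tim_quality(edge_temp_c: float, hotspot_temp_c: float) -> str:
    """Flag a suspicious hotspot-to-edge gap measured under sustained load."""
    delta = hotspot_temp_c - edge_temp_c
    if delta >= TIM_WARNING_DELTA_C:
        return f"WARNING: {delta:.0f} C gap, check TIM coverage and mounting"
    return f"OK: {delta:.0f} C gap is within the normal range"

# Example: edge looks healthy at 75 C, but the hotspot tells another story.
print(check_tim_quality(edge_temp_c=75.0, hotspot_temp_c=101.0))
```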
 
Did you get a new GPU (RTX 3000 or RX 6000)?
That's not the point. The point is you're dumping on one group of people when a feature is made for everyone to use. You're literally looking for things to complain about.
 
That's not the point. The point is you're dumping on one group of people when a feature is made for everyone to use. You're literally looking for things to complain about.
Why did this "feature for everyone to use" pop up only right now? Why not at release (or shortly after)? Is it because NVIDIA & Co. are starting to worry about their "cooked" cards being returned? Maybe NVIDIA is worried about a class-action lawsuit for hiding important information from users that could have prevented damage to the product?

The feature is AWESOME, but why only now?
 
Why did this "feature for everyone to use" pop up only right now? Why not at release (or shortly after)? Is it because NVIDIA & Co. are starting to worry about their "cooked" cards being returned? Maybe NVIDIA is worried about a class-action lawsuit for hiding important information from users that could have prevented damage to the product?

The feature is AWESOME, but why only now?
Because they can. They should've done it long ago, but no one can change that, so there's no point in whining about it.
 
That memory temp though...
Looks like the massive coolers are just as much for the GDDR6X as they are for the GPU.
 
That's pretty cool (no pun intended)

@W1zzard any plans to implement this in GPU-Z in an upcoming version?
 
Is this sensor present on Turing GPUs?
 
Is this sensor present on Turing GPUs?
It is
 
The delta from the main GPU temp ain't that bad.

 
This has probably been around since the Pascal or even Maxwell days, just not exposed for 3rd-party tools to pick up the data stream.
 
This has probably been around since the Pascal or even Maxwell days, just not exposed for 3rd-party tools to pick up the data stream.
Given the whole drama that happened when AMD exposed their hotspot sensor, I wonder why nVidia wouldn't want to do the same. :roll:
 
Given the whole drama that happened when AMD exposed their hotspot sensor, I wonder why nVidia wouldn't want to do the same. :roll:

It is good information. Considering how mature NVIDIA's boost algorithm is, it would be silly to think they don't have a large amount of sensor information. The point is, what does the extra sensor information help with, besides generating internet outrage? For extreme overclockers it definitely matters. For daily usage, the averaged die temp is more than enough to gauge the operating condition of a GPU.
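
As a rough illustration of that point, here is a toy boost loop (not NVIDIA's actual algorithm; the threshold and step sizes are invented) showing why an internal governor would key off the worst sensor rather than an averaged one:

```python
# A toy sketch of why a boost algorithm would track many sensors internally:
# clocks must back off based on the *worst* spot on the die, not the average.
# Thresholds and behavior here are illustrative, not NVIDIA's actual logic.

THROTTLE_POINT_C = 95.0   # hypothetical hotspot limit
STEP_MHZ = 15             # hypothetical clock step

def boost_step(current_clock_mhz: int, sensor_temps_c: list[float]) -> int:
    """Raise clocks while the hottest sensor stays under the limit."""
    hotspot = max(sensor_temps_c)
    if hotspot >= THROTTLE_POINT_C:
        return current_clock_mhz - STEP_MHZ  # back off on the worst sensor
    return current_clock_mhz + STEP_MHZ      # headroom left, keep boosting

clock = 1900
for temps in ([80, 85, 88], [85, 90, 94], [88, 93, 97]):
    clock = boost_step(clock, temps)
    print(f"hotspot {max(temps)} C -> clock {clock} MHz")
```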
 
oof, that memory temperature tho
Yup. I'm currently "working" while NiceHash runs in the background. That would explain that.

See that 200 W of power listed there? 150 W of it is just for the GDDR6X. Ampere is efficient, but GDDR6X gets really hungry and hot fast. Traditional cooling is not enough to keep these cards cool at full 100% load without some sort of mods.
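
If you're running a card like that, a simple watchdog on the memory-junction sensor is one way to act on that reading. The reader function below is a hypothetical stand-in for whatever monitoring tool exposes the sensor, and the alert threshold is illustrative (it leaves margin below the commonly cited ~110°C junction limit for GDDR6X):

```python
# A sketch of a GDDR6X junction-temperature watchdog for a rig like this.
# read_mem_junction_temp is a hypothetical stand-in for a real sensor source
# (e.g. values surfaced by HWiNFO); the fake readings let the sketch run
# standalone. Alerting at 100 C leaves margin below the ~110 C limit.
import random
import time

MEM_JUNCTION_ALERT_C = 100.0

def read_mem_junction_temp() -> float:
    # Placeholder: returns fake readings so the sketch runs standalone.
    return random.uniform(90.0, 106.0)

def watch_memory(polls: int = 5, interval_s: float = 1.0) -> None:
    for _ in range(polls):
        temp = read_mem_junction_temp()
        if temp >= MEM_JUNCTION_ALERT_C:
            print(f"ALERT: memory junction at {temp:.0f} C, back off the load")
        else:
            print(f"OK: memory junction at {temp:.0f} C")
        time.sleep(interval_s)

watch_memory()
```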
 
My Vega 56 hotspot temp can be close to 20°C over the "GPU" temp (undervolted). Folks saying they like to run in "silent mode" with the fans turned down make me a bit worried about what they are doing to the longevity of the card, i.e. they may think they are running at 85°C, but the hotspot is averaging 105°C.

Adding the additional info is a good thing. Hopefully it will allow for better insight and longer-lasting cards in the future. (I really think the key metric should be the hotspot temp, not the average or whatever they currently use.)
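
Sketching that suggestion: key the fan curve to the hotspot rather than the averaged reading, so a quiet profile can't silently cook the die. The breakpoints below are invented for illustration, and real fan control would go through vendor tooling:

```python
# A sketch of the point above: drive the fan curve from the hotspot, not the
# averaged "GPU" temperature. Curve points are illustrative; actual control
# would go through vendor tools, not a script like this.

# (hotspot C, fan %) breakpoints, with linear interpolation between them.
CURVE = [(60, 30), (80, 45), (95, 70), (105, 100)]

def fan_percent(hotspot_c: float) -> float:
    """Interpolate the fan duty cycle for a given hotspot temperature."""
    if hotspot_c <= CURVE[0][0]:
        return CURVE[0][1]
    for (t0, f0), (t1, f1) in zip(CURVE, CURVE[1:]):
        if hotspot_c <= t1:
            return f0 + (f1 - f0) * (hotspot_c - t0) / (t1 - t0)
    return CURVE[-1][1]  # pin at max above the last breakpoint

# An 85 C "GPU" temp with a ~20 C delta means the hotspot is near 105 C:
print(f"Fan at 85 C hotspot: {fan_percent(85):.0f}%")
print(f"Fan at 105 C hotspot: {fan_percent(105):.0f}%")
```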
 