This was an answer on another site by Tose Nikolov, a computer programmer who works with embedded systems.
"This is a complicated topic.
Basically, on the GPU die there are hundreds of temperature sensors. What the GPU hotspot temperature shows you is the maximum measured temperature across all of those sensors.
The temperature of concern on a silicon chip is 125C. You don’t want any part of your chip to be above 125C, because at that temperature the structure of the chip starts degrading.
In ye olden days, all you had was a single temperature sensor in the middle of the chip, and the GPU engineers would characterize what that sensor reported when parts of the chip got toasty. They would work out that when there were hot spots on the chip at 120C (a 5C gap for safety), the temperature sensor would read 95C. So they would set the max safe temperature of the GPU at 95C, and as long as you were below 95C, you could be pretty sure that no part of your GPU was overheating.
Nowadays, you get the sensor grid, so the GPU manufacturers get two numbers to play with: the “average” measured temperature and the “hotspot” measured temperature. This measurement still happens on the surface of the silicon chip, so there are still hotspots inside the chip that are hotter than the “GPU hotspot temperature”. The process is the same: GPU designers calculate the max safe temperature of the GPU chip, usually 120C again, and then see what numbers they are getting on the “average temperature” and “hotspot temperature” sensors. Usually the safe limit for the average temperature sensor is in the range of 92–97C depending on the GPU, and the safe limit for the “hotspot temperature” sensor is in the 110C range. Basically, the “hotspot temperature” sensor is closer to the true value you care about, but still only halfway there.
So. As long as the GPU temperature is below 95C, and the hotspot temperature is below 110C, you will be fine."
I was looking for info on this subject as well.
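To make the average-vs-hotspot distinction concrete, here is a minimal Python sketch under the assumptions above: the sensor readings and sensor count are made up, and the 95C/110C limits are just the figures quoted in the answer, not values taken from any particular GPU. It only shows that "hotspot" is the maximum over the sensor grid while "GPU temperature" is roughly the average.

```python
# Hypothetical per-sensor die temperatures in C (real GPUs have far more
# sensors; these numbers are invented for illustration only).
HYPOTHETICAL_SENSOR_READINGS_C = [
    78.0, 81.5, 84.0, 92.5, 88.0,
    79.5, 95.0, 101.0, 97.5, 83.0,
    80.0, 86.5, 90.0, 104.5, 85.5,
]

AVERAGE_LIMIT_C = 95.0   # "GPU temperature" limit quoted in the answer
HOTSPOT_LIMIT_C = 110.0  # "hotspot temperature" limit quoted in the answer


def summarize(readings_c):
    """Return (average, hotspot) as described above:
    the mean over the sensor grid, and the single hottest reading."""
    average_c = sum(readings_c) / len(readings_c)
    hotspot_c = max(readings_c)
    return average_c, hotspot_c


if __name__ == "__main__":
    avg_c, hot_c = summarize(HYPOTHETICAL_SENSOR_READINGS_C)
    print(f"average: {avg_c:.1f}C (limit {AVERAGE_LIMIT_C}C)")
    print(f"hotspot: {hot_c:.1f}C (limit {HOTSPOT_LIMIT_C}C)")
    if avg_c < AVERAGE_LIMIT_C and hot_c < HOTSPOT_LIMIT_C:
        print("within the limits quoted above")
    else:
        print("over one of the quoted limits")
```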