Hence, has AMD traded off the lifespan of a top-tier product for pure performance?
Science agrees that for silicon, higher temperature = shorter lifespan, but I haven't seen any significant cases of GPUs randomly dying after a certain time due to high heat. You'd see a Gaussian distribution of such failures, which would quickly draw everyone's attention. Usually the product becomes obsolete first (which happens within just a few years). Also, thermal expansion of solder joints tends to damage the PCB itself first. Have you seen any scientific research on silicon lifespan vs. temperature (on real shipping products)?
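For what it's worth, the usual back-of-the-envelope model for temperature-driven silicon wear is the Arrhenius acceleration factor. Here's a minimal sketch; the activation energy and the temperatures are illustrative assumptions, not measured values for any real GPU:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K
EA = 0.7        # assumed activation energy in eV (typical quoted range ~0.5-1.0)

def acceleration_factor(t_use_c: float, t_stress_c: float) -> float:
    """Ratio of failure rates between a hotter and a cooler operating point,
    per the simplified Arrhenius model."""
    t_use = t_use_c + 273.15      # convert to kelvin
    t_stress = t_stress_c + 273.15
    return math.exp((EA / K_B) * (1.0 / t_use - 1.0 / t_stress))

# e.g. how much faster wear-out mechanisms run at 94 C vs 75 C:
print(acceleration_factor(75.0, 94.0))
```

This says wear mechanisms run a few times faster at 94 °C than at 75 °C, which still puts the expected wear-out well past the point where the product is obsolete.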
AMD is powering certain zones and pipelines of the chip depending on the power footprint and temperature
I think you mean power gating, which has been used extensively over the last few years. Modern GPUs reduce clock and voltage whenever they can, but as far as I know they do not randomly shut off shaders when full performance is needed. When idling, they shut off almost everything, even their 2D rendering acceleration units, by detecting whether the displayed image is static or not.
I'd hate to have 94 °C of heat dumped inside my case
You don't dump x °C of heat into the case; degrees aren't a quantity of heat. Essentially all the power the card consumes is turned into heat, which is dissipated into the case. So for a constant power draw, the same energy is deposited into the case no matter what temperature the die runs at.
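A quick sanity check of the arithmetic (the 300 W figure is just an example, not any specific card's TDP):

```python
def heat_into_case_joules(power_w: float, seconds: float) -> float:
    """Essentially all electrical power a GPU draws ends up as heat,
    so energy into the case is just power x time."""
    return power_w * seconds

# A 300 W card running for one hour dumps the same energy whether the
# die sits at 70 C or 94 C:
print(heat_into_case_joules(300, 3600))  # 1080000 J = 0.3 kWh
```

The die temperature only tells you how hot the silicon gets before the cooler can move that energy out, not how much energy there is.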