
What kills GPUs?

I have never had a GPU die on me, they love Pron too!!!
 
Time. It's just a matter of time.
 
The GPU die itself is usually fine; the power delivery and display components die eventually. Memory can also give up if you overclock it.
 
Daft dimwits running FurMark for 24-hour burn-in sessions come to mind.. Totally wasteful, and it has the potential to kill GPU hardware. 15 to 20 minutes is all FurMark needs to test and verify stability.
Are you telling me that PowerColor kills their GPUs with Heaven+FurMark testing? And they sell them afterwards? And they do it for years without consequences?

Better Call Saul
 
We know AMD GPUs can't stand the test of time, but are people killing Nvidia GPUs?

I kid about the AMD thing.. :fear:
 
Heat kills GPUs.
 
1080p resolution..lol
 
Not dusting it properly. It needs maintenance like a car: dust it off, and the thermal pads need replacing eventually, just like brake pads on a car, they get worn out. The thermal paste needs replacing sometimes too, but don't attempt that if you don't know what you are doing. Also, use a KryoSheet, not paste.
 
There are several factors you (as the end user) have absolutely zero influence over:
-Engineering/design
-Component choice
-Component defects
-Manufacturing errors
-Drivers (un)intentionally made to kill hardware (fan disable, power/load limit removal, etc.)

As far as what the end-user *can* influence:
-Heat.
Keep components cool and they last longer and perform more efficiently.
This one is a tough one, as I've owned cards that were missing cooling on key components from the factory. Part of the reason a 'suffocating' case can kill cards is sub-components seeing high temps. The GPU Diode, Hotspot, and VRAM temps are far from the entire picture...

-Thermal Cycling.
Some cards were especially susceptible to cold solder joints and thermal warpage (e.g., the Radeon VII). In the case of Vega 20, the only surefire preventative was a full-cover waterblock, or replacing the "Radeon VII" with a "Radeon Pro VII" or "Radeon Instinct MI50/MI60".
Supposedly, letting cards slowly cool off after a load should mitigate this. In practice, this would probably mean setting a fan curve that 'drops out' as soon as load ceases, allowing the card to slowly thermally contract back to its 'cool' state (see the sketch at the end of this post).

-Power quality.
PSUs that produce 'noisy' power (ripple) under load (or at idle) will severely strain the caps and power components on a card.
This is also a major cause of mobo caps going bad in pre-builts, etc. The PSU starts delivering dirty power, and the downstream caps, VRMs, etc. get stressed.
The 'fix' is making sure you buy a quality PSU and don't overload it.
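
To make the "fan curve that drops out after load" idea concrete, here is a minimal sketch of the logic only. It assumes an NVIDIA card with nvidia-smi on the PATH for the temperature/utilization readout; the curve points, thresholds, and the set_fan_percent() placeholder are illustrative, not a real fan-control API (actually applying a fan speed depends on whatever tool you use):

```python
import subprocess
import time

def read_temp_and_load() -> tuple[float, float]:
    """Read GPU temperature (°C) and utilization (%) via nvidia-smi (NVIDIA-only assumption)."""
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=temperature.gpu,utilization.gpu",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    temp, load = out.strip().splitlines()[0].split(",")
    return float(temp), float(load)

def set_fan_percent(pct: int) -> None:
    # Placeholder: applying a fan speed depends on your tool of choice
    # (nvidia-settings with Coolbits, FanControl, vendor software, ...).
    print(f"fan -> {pct}%")

# Illustrative numbers only.
LOAD_CURVE = [(40, 30), (60, 50), (75, 75), (85, 100)]  # (°C, fan %)
IDLE_FAN = 20          # low speed to fall back to once load ceases
LOAD_THRESHOLD = 20.0  # below this utilization, treat the card as idle
SAFETY_TEMP = 90.0     # never coast above this, load or not

def fan_for_temp(temp: float) -> int:
    """Highest curve step whose temperature threshold has been reached."""
    pct = LOAD_CURVE[0][1]
    for threshold, step in LOAD_CURVE:
        if temp >= threshold:
            pct = step
    return pct

while True:
    temp, load = read_temp_and_load()
    if load >= LOAD_THRESHOLD or temp >= SAFETY_TEMP:
        set_fan_percent(fan_for_temp(temp))  # under load: follow the normal curve
    else:
        set_fan_percent(IDLE_FAN)            # load ceased: drop out, cool slowly
    time.sleep(2)
```

The only point is the branch at the bottom: while loaded, follow a normal temperature curve; once utilization drops, fall back to a low fixed speed (with a safety override) so the card contracts back to its cool state gradually instead of being blasted cold in seconds.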
 
Daft dimwits running FurMark for 24-hour burn-in sessions come to mind..
Shouldn't the GPU be able to handle all GPU loads, at least at default settings? However, I'm being hypocritical here, as I wouldn't run it for more than a minute myself, the time for a bench run. I don't know if Nvidia drivers still look out for this software and downclock accordingly, as I haven't used FurMark for a long time now.

Can confirm. 20 minutes was enough for my faulty 5070 Ti to burn my PSU.
What was the power drawn by the GPU?
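
For reference, here's a minimal sketch for logging board power draw yourself, assuming an NVIDIA card with nvidia-smi on the PATH (the one-second interval and output format are just my choices):

```python
import subprocess
import time

# Log GPU board power draw (watts) once per second; Ctrl+C to stop.
# Assumes an NVIDIA card and nvidia-smi available on the PATH.
while True:
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=power.draw", "--format=csv,noheader,nounits"],
        text=True,
    )
    print(f"{time.strftime('%H:%M:%S')}  {out.strip()} W")
    time.sleep(1)
```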

Time. It's just a matter of time.
And with faster clocks, time goes quicker :laugh:

Overclocking?
Interesting, I hadn't thought of it that way, but it's a good answer IMO. Strange how many manufacturers provide overclocking software!
 
If you can keep it cool, you can let it run with extended power and boost settings with overclocked memory speeds indefinitely, or for the life of the card, whichever comes first :D

If you keep the stock BIOS and work within its power limits, you are technically running it at stock. The GPU will pull back clocks as it sees fit for the load it's running.

So you get an extra 150-200 MHz on the core.. ooOo.. and whatever extra you get on the mems. If you run your fans fast, you can kiss some of your power budget goodbye too, that is how tight Nvidia is.

But it is probably to keep the board from self-immolating.
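
Tangentially, if you want to see exactly how tight that power budget is on your own card, you can query the enforced, default, and maximum board power limits. A minimal sketch, assuming an NVIDIA card with nvidia-smi on the PATH:

```python
import subprocess

# Print the current, default, and maximum board power limits (watts).
out = subprocess.check_output(
    ["nvidia-smi",
     "--query-gpu=name,power.limit,power.default_limit,power.max_limit",
     "--format=csv"],
    text=True,
)
print(out.strip())
```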
 
The sad part is you're a Mod……strike that, I missed the “I kid” part
 
Heat, power, manufacturing.

This is the answer, plus throw in water (but that kind of ties into power/electricity).

A GPU that doesn't have a defective part should be fine to run at stock clocks for years. It's the other variables that will change that.. high heat/poor air circulation, power surges/failing PSU, spilling a cup of coffee on it, etc...

Downclocking a GPU in theory will help with longevity, but it won't do anything to stop those other variables.

TL;DR: just use the card, don't do any dumb OCs that get you a 2% frame-rate boost, profit.
 
We know AMD GPUs can't stand the test of time, but are people killing Nvidia GPUs?

I kid about the AMD thing.. :fear:
Remember Vega and the HBM? Those didn't last long in the GPU mining world.
 
Remember Vega and the HBM? Those didn't last long in the GPU mining world.
No :oops:

I paid no attention to them or even Nvidia back then :sleep:

Mostly because I had a young family and no time or money for hardware :D

The sad part is you're a Mod……strike that, I missed the “I kid” part
I can have an opinion too, sorry if it hurts your feelings.
 
Luck of the draw, like with all electronics.

Had a DVD player kick the bucket at 6 months old.
Had a 30GB iPod fail after the warranty ended. Apple wanted more money to fix it than to buy a replacement - screw them. Haven't touched Apple since.
Had a 3080 Ti die about 5 months after getting it. The replacement (via warranty) has been going strong for the last 3 years. This is the only GPU I've had die on me. My 980 Ti I used for 6.5 years. I used some GTX 570s in SLI for 4.5 years, and my brother used one of them for several years after that. I still have the old lady, I wonder if she works.....
 
Shouldn't the GPU be able to handle all GPU loads, at least at default settings?
Theoretically? Yes. In reality? No. It IS possible to kill hardware with software. It's not smart, it's not a good idea and it should be actively avoided.

I don't know if Nvidia drivers still look out for this software and downclock accordingly, as I haven't used FurMark for a long time now.
Not sure, but I don't use it for anything longer than 5 to 10 minutes myself.
 
I slapped that TG PTM pad in today. At first I thought it sucked because I hammered it right away with intensity. Then I tried a medium to high load for a couple of hours and it seemed to like that much better.

It seems ok now.. pretty decent.
 
Phase-change Thermal Material?
Yessir

 
Using cheap solder and flux is also a killer; then high heat creates micro-fractures in the solder…….
 
Using cheap solder and flux is also a killer; then high heat creates micro-fractures in the solder…….
That hasn't been a problem in more than 15 years.. and even then it was only a problem because makers were switching away from leaded solder. There were growing pains associated with that development. They had those problems worked out within a year.
 