
Cinebench crashed my PC. My Wi-Fi stopped working, and I keep getting a "Please wait" screen when I boot up my PC.

I wasn’t watching the temps at the exact time the crash happened, but my temperatures seemed fairly reasonable about 30 seconds before that during my test. They were all 80-90 degrees C. I’m honestly not sure how much voltage was applied since I just used the default PBO settings; it’s probably bad that I don’t know that lol. I do know it makes the frequency go from 3.8 to 5.1 GHz though. Thanks!
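
If you want to catch the numbers at the exact moment of a crash next time, a logger that flushes to disk every second helps. A minimal sketch, assuming a Linux box with psutil installed; "k10temp" is the usual sensor name for Ryzen CPUs, but check what your system actually reports:

import csv, time
import psutil  # assumed: pip install psutil

# Polls CPU temperature and average clock once a second and appends
# each row to a CSV, flushing immediately so the last readings before
# a hard crash survive on disk.
with open("cpu_log.csv", "a", newline="") as f:
    w = csv.writer(f)
    w.writerow(["time", "temp_C", "freq_MHz"])
    while True:
        # sensors_temperatures() is Linux-only; "k10temp" is the usual
        # AMD Ryzen driver name -- inspect the dict keys on your system.
        temps = psutil.sensors_temperatures().get("k10temp", [])
        temp = temps[0].current if temps else float("nan")
        freq = psutil.cpu_freq().current  # MHz, averaged across cores
        w.writerow([time.strftime("%H:%M:%S"), temp, freq])
        f.flush()
        time.sleep(1)

Start it in a terminal before launching Cinebench; after a reboot, the tail of cpu_log.csv shows the last temps and clocks before the crash.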
Well, if you're doing OC you MUST pay attention to those two things, and they go hand in hand: more voltage, higher temps. OC on default PBO most likely applies too much voltage. If you're overclocking, my advice is always to go step by step and start with the lowest possible voltage / undervolt your CPU first. That way, even if you hit instability or a BSOD, it's almost impossible to damage your hardware.
 
That makes sense. I'll pay attention to the voltage and temps when I OC then. I'll modify it myself, lower the voltage, and be more careful than before for sure. Thanks!
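
To make that step-by-step approach concrete: you set one voltage-offset step in the BIOS, then stress and check for hardware errors before going any lower. A rough harness sketch, assuming Linux with stress-ng installed (my own choice of stress tool, not anything named above); the BIOS step itself can't be scripted:

import subprocess

# Runs a fixed-length all-core stress test, then scans the kernel log
# for MCE entries (Linux's counterpart to Windows WHEA errors), which
# usually mean the current undervolt step is too aggressive.
def stress_and_check(minutes=10):
    subprocess.run(
        ["stress-ng", "--cpu", "0", "--timeout", f"{minutes}m"],  # 0 = all cores
        check=True,
    )
    # dmesg may require root privileges on some distributions.
    log = subprocess.run(
        ["dmesg", "--level=err,warn"], capture_output=True, text=True
    ).stdout
    return [l for l in log.splitlines() if "mce" in l.lower()]

if __name__ == "__main__":
    errors = stress_and_check(minutes=10)
    if errors:
        print("Hardware errors logged -- back the offset off one step:")
        print("\n".join(errors))
    else:
        print("Clean run at this step.")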
 
Blaming software for a failed overclock is just silly.
Once upon a time in 2021, Amazon's game killed a bunch of 3090s, so blaming software for killing GPUs isn't far-fetched or even laughable. Years ago it was FurMark destroying GPUs.

 

Those are examples of the GPU trying to do more work than the components the mfg selected to build it from could handle at the clock/voltage settings the mfg allowed.

Imagine an extremely simplified pipeline:
1) Software: Hey DirectX API, draw this.
2) DX12 API: Hey GPU, draw this.
3) GPU: Thank you, sir, may I have another?
4) Start over.

Why would the game/app be at fault instead of DirectX or OpenGL or whatever API is sitting between the app and the GPU?
Why would the API be at fault instead of the GPU drivers?
Why would the GPU drivers be at fault instead of the GPU components selected by the mfg, or the clock/voltage settings the mfg allowed?

If the software didn't bypass the GPU drivers or clock/voltage settings and was simply telling the GPU to draw things as fast as possible via typical methods, the software isn't responsible for killing the GPU.
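
For illustration, here is roughly all the app-side code amounts to: a minimal uncapped render loop sketch in Python using the glfw and PyOpenGL packages (my choice for illustration, not what any of these games actually used). Nothing here can touch clocks, voltages, or power limits; everything below the swap call belongs to the driver and the board.

import glfw                      # assumed: pip install glfw PyOpenGL
from OpenGL.GL import glClear, glClearColor, GL_COLOR_BUFFER_BIT

# Uncapped render loop: submit frames as fast as the driver accepts
# them -- the same "typical method" an uncapped game menu uses.
def main():
    if not glfw.init():
        raise RuntimeError("glfw.init failed")
    window = glfw.create_window(640, 480, "uncapped loop", None, None)
    glfw.make_context_current(window)
    glfw.swap_interval(0)  # vsync off: no framerate cap
    glClearColor(0.1, 0.1, 0.1, 1.0)
    while not glfw.window_should_close(window):
        glClear(GL_COLOR_BUFFER_BIT)  # steps 1-2: "Hey API, draw this"
        glfw.swap_buffers(window)     # driver and GPU do the rest
        glfw.poll_events()
    glfw.terminate()

if __name__ == "__main__":
    main()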
 
Once upon a time in 2021, Amazon's game killed a bunch of 3090s, so blaming software for killing GPUs isn't far-fetched or even laughable. Years ago it was FurMark destroying GPUs.

No, the game didn't kill the GPU; a faulty board design that deviated from the Nvidia reference is the cause.

When it came to FurMark and Fermi, the card was simply designed for actual workloads, not for a power-virus app. Not sure if this is still present in current GPU drivers, but the FurMark exe was flagged and a power limit was put in place to prevent it from killing GPUs. The workaround was to change the exe name.

In both cases a fault in the design is the problem, not the software.
 
Once upon a time in 2021, Amazon's game killed a bunch of 3090s, so blaming software for killing GPUs isn't far-fetched or even laughable. Years ago it was FurMark destroying GPUs.

That sounds like hardware malfunctioning to me. I have only had software damage one card, and that was in pursuit of overclocks and extra voltage, and it was completely my fault.


Stock devices should be capable of handling stability tests at stock values. Full stop.

If you choose to operate hardware outside its specified limits, it's your fault. Full stop.
 
No, the game didn't kill the GPU; a faulty board design that deviated from the Nvidia reference is the cause.

When it came to FurMark and Fermi, the card was simply designed for actual workloads, not for a power-virus app. Not sure if this is still present in current GPU drivers, but the FurMark exe was flagged and a power limit was put in place to prevent it from killing GPUs. The workaround was to change the exe name.

In both cases a fault in the design is the problem, not the software.
Also, back in those days soooome people thought it was a good idea to loop FurMark for 24 hours straight to prove stability, hence poof, gone like Keyser Söze!
 