
RTX 3080 Crash to Desktop Problems Likely Connected to AIB-Designed Capacitor Choice

My advice: don't pull someone up for using "AIBs" and then rant at us, telling us we have to use the same abbreviation you just pulled someone up for.
You are not the English language police; you can't tell me to do anything, sir.
And I'm English, not American.

You're responding to someone who hasn't managed a single correct English sentence to save his life... I nearly fell off my chair :roll::roll::roll: Dafuq is happening to the world?

It's almost like computer silicon gets unstable when you clock it past its limits.
Almost like this has been true since silicon has been used in computers.
Almost like overclock instability related to silicon limits has nothing to do with capacitor choice.
Almost like this is a non-issue that has been blown way out of proportion.

As for those people who will say "but some people get over 2GHz": silicon lottery.
As for those people who will say "but MUH CLOCKS NVIDIA IS RIPPING ME OFF": NVIDIA never guaranteed you'd get over 2GHz boost, NVIDIA in fact never even guaranteed you'd get anything more than the rated base or boost clocks. Nobody does.

Small caveat: these cards boost beyond 2 GHz without touching the dials. So out of the box, they can simply boost to oblivion. That isn't right, and the end result is that you're going to get a performance limitation to avoid it. GPU Boost should be able to account for differences in the silicon lottery, or it should be tweaked. Either way, it's a handicap (and whatever is rated on the box is irrelevant in that sense, right? We know better by now, and cards aren't reviewed on base clocks either).
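As a rough illustration of what "accounting for the silicon lottery" could mean, here's a toy sketch of a boost-bin selector. The real GPU Boost algorithm is proprietary; the V/F table, power limit, and temperature limit below are all invented numbers:

```python
# Toy sketch of a GPU Boost-style bin selector. The real algorithm is
# proprietary; this V/F table and the limits below are invented numbers.

def pick_boost_clock(vf_bins, temp_c, power_limit_w=320, temp_limit_c=83):
    """Walk the voltage/frequency bins from fastest to slowest and return
    the highest clock whose estimated board power fits under the limits."""
    for clock_mhz, est_power_w in sorted(vf_bins, reverse=True):
        if est_power_w <= power_limit_w and temp_c < temp_limit_c:
            return clock_mhz
    return min(clock for clock, _ in vf_bins)  # fall back to the lowest bin

# Invented V/F table: (clock in MHz, estimated board power in W)
bins = [(1710, 250), (1845, 280), (1935, 305), (2010, 330), (2100, 360)]
print(pick_boost_clock(bins, temp_c=70))   # 1935 -- highest bin under 320 W
```

A silicon-lottery-aware version would simply ship per-chip V/F tables from factory binning, so a weak die never gets offered the 2+ GHz bins in the first place.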

It's not a non-issue at all. Previous generations worked a lot more smoothly, with GPU Boost peaking high at the beginning of a load and sustaining it too. The rip-off part... myeah... it's not substantial in any way. But it does tell us a great deal about the quality of this generation and the design choices they've been making for it.

The whole rock-solid GPU Boost perception we used to have... has been smashed to pieces by this. For me, at least. It's a big stain on Nvidia's rep, if you ask me.
 
This still has the odd possibility of being related to Samsung since Nvidia has been following best practices up until now at TSMC. You cannot establish ground rules let alone known good designs at zero hour.
 
This still has the odd possibility of being related to Samsung since Nvidia has been following best practices up until now at TSMC. You cannot establish ground rules let alone known good designs at zero hour.
If you mean history, that's not completely true: there was one node in the past where Nvidia didn't follow TSMC's spec, the results were sub-mediocre, and Nvidia blamed it on TSMC. I can't remember which one, but I'm sure the information is easy to find.
 
After all, these wild theories are easy to test; you don't need an engineering education to prove this right or wrong. Take a "bad" crashing card with "bad" POSCAPs and test it to confirm the crashes... Then desolder the "bad" POSCAPs, put a bunch of 47 µF low-ESR MLCCs in their place, and test again to see if it's "fixed". Something tells me it wouldn't be such a simple case and the card might still crash, heh. ;-)

This has now been tested:

Gigabyte's board starts with 6x POSCAPs / SP-CAPs... or whatever you wanna call the big 470 µF ones.

der8auer removed 2x 470 µF caps and replaced them with 20x 47 µF MLCCs, gaining +30 MHz of clock (0.03 GHz). So yes, it has an effect, but it's quite minor.

I think it's safe to say that this entire "capacitor" issue has been grossly overblown, based on der8auer's practical test. The stock 6x 470 µF caps were still able to hold a +70 MHz overclock and were initially stable. Reaching +100 MHz (+30 MHz higher than before) with the 20x MLCCs does show some degree of benefit to the MLCCs, but nothing major.

I admit that der8auer tested a 3090 instead of a 3080, but I doubt that makes a major difference. The question is the effect of "6x big caps" vs. "60x small caps", and that's what the video tests.
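The arithmetic here is worth spelling out: the two removed POSCAPs and the twenty added MLCCs have the same total bulk capacitance, so any gain comes from the parasitics (ESR/ESL) of many small parts in parallel, not from more capacitance. A quick sanity check; the per-part ESL figures below are invented, purely for illustration:

```python
# Sanity check on the capacitance math in der8auer's swap.
removed_uf = 2 * 470           # two 470 uF POSCAPs removed
added_uf = 20 * 47             # twenty 47 uF MLCCs soldered in their place
assert removed_uf == added_uf == 940   # bulk capacitance is unchanged

# Parallel parts: capacitances add, while ESL/ESR divide by the part count,
# which is where the MLCC array wins at high frequency.
# Per-part parasitics below are invented, purely illustrative.
poscap_esl_nh = 2.0
mlcc_esl_nh = 0.5
print(poscap_esl_nh / 2, mlcc_esl_nh / 20)   # effective ESL: 1.0 vs 0.025 nH
```

That matches the test result: same bulk capacitance, slightly better high-frequency behavior, hence a small +30 MHz difference rather than a night-and-day one.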
 

The biggest offender here is probably the Zotac Trinity with 6x 330 µF SP-CAPs; plenty of news outlets also mention that the Zotac 3080 was the least stable of the bunch before the new driver update.

Well, this whole capacitor issue could also be alleviated if the die's power requirement didn't change so rapidly, so I guess Nvidia introduced some clock-ramping hysteresis into the driver to improve stability. That doesn't mean Ampere will run at lower clocks like people assumed, just that the clocks react more slowly, allowing some undershoot of the power target.
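A driver-side hysteresis fix like the one guessed at above could be as simple as slew-limiting the clock ramp. This is purely a sketch of the idea, not Nvidia's actual implementation; the step sizes are made up:

```python
# Sketch of clock-ramp slew limiting as a guess at a driver-side fix.
# Not Nvidia's actual implementation; step sizes are invented.

def ramp_clock(current_mhz, target_mhz, max_step_up=15, max_step_down=60):
    """Move toward the target clock, but cap how fast we climb so the
    VRM sees a gentler load transient; drop quickly for safety."""
    delta = target_mhz - current_mhz
    if delta > 0:
        return current_mhz + min(delta, max_step_up)
    return current_mhz + max(delta, -max_step_down)

clock = 1700
for _ in range(5):             # boost request jumps straight to 2000 MHz
    clock = ramp_clock(clock, 2000)
print(clock)                   # 1775 -- climbed only 5 * 15 MHz so far
```

The card still reaches the same target eventually; it just no longer demands the full current step in one instant, which is exactly "clocks reacting more slowly" with some undershoot of the power target along the way.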
 
ASUS RTX 3080 TUF, for the win...
 
Steve, in his latest video, reports that EVGA told him the cards with 6 POSCAPs that were messed up on the first release driver are now running fine with the latest driver.
 

You'd better have a look at this, and share it with anyone interested.
 
I'm building a new gaming rig. Bought an i9-10900K and was originally planning to pair it with an RTX 3080. My monitor is 2560x1440 (144 Hz), not 4K, so I'm guessing an RTX 3090 would be overkill for me - hence the 3080.

Do you guys recommend I wait a few months before buying? Sounds like I should wait for these reported crashing issues to be ironed out first. I want to buy my GPU as quickly as possible but I also don't want to be a beta tester for something with known issues.
 
so I'm guessing an RTX 3090 would be overkill for me - hence the 3080.
If you have the money, a 3090 would future-proof you for a couple years.
Sounds like I should wait for these reported crashing issues to be ironed out first.
The latest driver update seems to be fixing most of the crashing problems. You should be fine. Waiting a month will not hurt you.

And welcome to TPU!
 
The latest driver update seems to be fixing most of the crashing problems.
I'd like to see comparison tests between drivers first before stating they appear to have fixed the issues.
Gimped performance is more likely.
 
I'd like to see comparison tests between drivers first before stating they appear to have fixed the issues.
Gimped performance is more likely.
They're not likely to say they might have fixed them, really, though it seems fair to say the Windows drivers were the issue, since no one running Linux had crash-to-desktop issues.
So after weighing up the mediocre test provisioning Nvidia allowed the AIBs with any driver, the rush to get the cards out, and the apparent ease with which Nvidia seems to have fixed the issues with a driver update, it's clear Nvidia is to blame.
No dramas; it's just that many an AIB GPU engineer can seek treatment for bus injuries now :):D.
 
If you have the money, a 3090 would future-proof you for a couple years.

The latest driver update seems to be fixing most of the crashing problems. You should be fine. Waiting a month will not hurt you.

Thanks. I think you're right. I'll fork out a bit extra for the 3090 in about a month. I'm waiting a bit anyway to save up a bit more cash - and waiting will also give stores time to get more 3090s in stock (all out of stock near me right now). Plus later batches potentially being improved is an added bonus!

In the meantime, I need to figure out what custom loop I'm going to go with. Never put together one of those before but I think my Lian Li O11 Dynamic is going to struggle to keep temperatures under control if I air cool it.

And welcome to TPU!

Thanks! Happy to be here!
 
This thread title didn't age well.... :p

It's good enough. "Likely" means it's still a theory.

Igor's Lab has posted an interesting investigative article where he advances a possible reason for the recent crash to desktop problems for RTX 3080 owners

I think that's fine. I know I've been pushing the opposite throughout this thread, but that's mostly because the internet ran away with the idea and started over-hyping the issue. This article, Igor's article, and the title all make it clear that it's a theory, "likely", or a "possible reason". Where things got silly was with some other YouTubers, or on Reddit, where people started discussing the issue with certainty.
 
Caps or not, the fix was well predicted, I think: a small tweak to GPU Boost and to voltage, which in turn reduces peak clock automagically.

Good to see they kept the losses to an apparent minimum.
 