
RTX 3080 Crash to Desktop Problems Likely Connected to AIB-Designed Capacitor Choice


You don't need to watch the full video; Buildzoid starts rambling 5 minutes in.
 
The whole article is based on speculation about speculation. First of all, a high-current DC-DC PDN (power delivery network) is a real challenge, and it does indeed need proper decoupling. However, that does not mean that using POSCAP/SP-CAP or MLCC is the best choice in every case. Much more depends on transient tuning, VRM settings, and the PCB layout itself than on using an MLCC or a POSCAP in a specific spot. Just replacing everything with MLCCs will NOT help the design reach higher speeds and stability. Why? Because one needs to use different caps in tandem, as their frequency responses differ, as do their ESR, ESL, and other factors.

Going all-MLCC like the glorified ASUS design means you get a single deep resonance notch, instead of the two less prominent notches you get when using MLCC + POSCAP together. Using three kinds (a smaller POSCAP, a bigger POSCAP, and some MLCCs) gives an even better figure, with three notches. But again, with modern DC-DC controllers a lot of this can be tuned through PID control and converter slew-rate tweaks. This adjustability is one big reason why enthusiast cards often use "digital" controllers that allow tweaking such parameters almost on the fly. However, this is almost never exposed to the user, as wrong settings can easily make the power phases go brrrrrr with smoke. Don't ask me how I know...
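The notch behaviour described above is easy to sketch numerically with a series R-L-C model per capacitor. A minimal Python sketch; all component values (capacitance, ESR, ESL, part counts) are illustrative assumptions, not figures from any real card:

```python
# Sketch of why mixing capacitor types changes the PDN impedance curve.
# All component values below are assumed for illustration only.
import cmath
import math

def cap_impedance(f, c, esr, esl):
    """Series R-L-C model of a real capacitor at frequency f (Hz)."""
    w = 2 * math.pi * f
    return complex(esr, w * esl - 1.0 / (w * c))

def parallel(zs):
    """Impedance of several branches in parallel."""
    return 1.0 / sum(1.0 / z for z in zs)

POSCAP = (330e-6, 10e-3, 3e-9)    # 330 uF, 10 mOhm ESR, 3 nH ESL (assumed)
MLCC   = (22e-6,  2e-3,  0.5e-9)  # 22 uF,  2 mOhm ESR, 0.5 nH ESL (assumed)

freqs = [10e3 * 10 ** (i / 10) for i in range(41)]  # 10 kHz .. 100 MHz
for f in freqs:
    z_mlcc_only = parallel([cap_impedance(f, *MLCC)] * 10)
    z_mixed = parallel([cap_impedance(f, *POSCAP)] * 4 +
                       [cap_impedance(f, *MLCC)] * 6)
    print(f"{f/1e6:8.3f} MHz  MLCC-only {abs(z_mlcc_only)*1e3:8.3f} mOhm  "
          f"mixed {abs(z_mixed)*1e3:8.3f} mOhm")
```

With these assumed values, the MLCC-only bank shows a single deep minimum near the MLCC self-resonant frequency (roughly 1.5 MHz here), while the mixed bank shows two shallower minima, one per capacitor type, which is exactly the two-notch shape mentioned above.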

Everybody is going nuts now over MLCC vs. POSCAP, but I haven't seen a single note that the actual boards use DIFFERENT capacitance values and capacitor models, e.g. some use 220 uF, some use 470 uF. :) There are 680 uF or even 1000 uF capacitors in a D-case on the market that can be used behind the GPU die. It is impossible to install that much capacitance with MLCCs in the same spot, as the largest cap available in 0603, for example, is 47 uF.

Before picking on the poor six capacitors behind the die: why does nobody talk about the huge POSCAP capacitor bank behind the VRM on the FE card, eh? Custom AIB cards don't have that, just the usual array without much bulk capacitance. If I were designing a card, I'd look at the GPU's power demands and then add enough bulk capacitance first, to make sure of a good power-impedance margin in the mid-frequency range, and worry about capacitors for high-frequency decoupling later, as that is the relatively easier job to tweak.

After all, these wild theories are easy to test; no engineering education is needed to prove them wrong or right. Take a "bad" crashing card with "bad POSCAPs" and test it to confirm the crashes... Then desolder the "bad POSCAPs", put a bunch of 47 uF low-ESR MLCCs in their place, and test again to see if it's "fixed". Something tells me it would not be such a simple case and the card may still crash, heh. ;-)
 
Dis is why you don't want to be an early adopter.

I am an early adopter. I have run my blog for eight long years, exploring solely the world of electrical test and measurement equipment and testers.
Thanks to lots of reading and practice, and the opportunity to receive the highest-precision parts and measuring tools, I also made my entrance into electrical metrology.
That is the top of the pyramid in this science.
And I won recognition in my sector from the industry itself: they judged that their blogger, in a way a trainee early adopter, has real potential to adopt and understand what their high-tech work can do and how it is used.

But here is the difference between me and others: I prepared myself for 30 years as a freelance electrician and electronics repairman, studying, practicing, and maintaining a very high success rate when repairing or troubleshooting real problems for my local customers.
That is the hard, slow, and painful way for someone to develop skills and understanding.

Today, because of Igor, a German retiree, all the YouTube actors / product reviewers have found a reason to power on their cameras.
But even so, they are clueless about what they are talking about.

And therefore all consumers should simply wait for NVIDIA and their partners to do their own homework; any new decisions will be officially announced to the market no sooner than 40 days from now.

After all, these wild theories are easy to test; no engineering education is needed to prove them wrong or right. Take a "bad" crashing card with "bad POSCAPs" and test it to confirm the crashes... Then desolder the "bad POSCAPs", put a bunch of 47 uF low-ESR MLCCs in their place, and test again to see if it's "fixed". Something tells me it would not be such a simple case and the card may still crash, heh. ;-)

I can solder and desolder anything too, but GPU engineering is something no one can grasp without being part of NVIDIA's R&D team.
Fifteen years ago, the only word consumers knew was the number of pipelines.
GPU engineering has nothing to do with YOU becoming the mechanic of your own car; it does not work that way, due to the unimaginable complexity of modern designs.
 
It's doubtful that using POSCAPs means going cheap; in some cases one POSCAP can be more expensive than ten MLCCs.
 
...the opportunity to receive the highest-precision parts and measuring tools, I also made my entrance into electrical metrology.
Do tell more :)

Though there is not much need for the highest-precision equipment to capture the bode plot and response of the relatively slow DC-DC converter used on the 3080/3090 GPUs here. One does need decent differential probes, an injector or high-speed load, and a good scope or bode-plot analyzer. :)
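For what it's worth, the math behind pulling one bode-plot point out of such an injection measurement is simple: correlate each channel against the injection frequency and compare magnitude and phase. A sketch using synthetic waveforms in place of real scope captures; sample rate, injection frequency, and amplitudes are all made up for illustration:

```python
# One bode-plot point from two sampled waveforms, the way a loop analyzer
# does it. The waveforms below are synthetic stand-ins for scope captures.
import cmath
import math

def single_bin_dft(samples, f, fs):
    """Complex amplitude of `samples` at frequency f (sample rate fs)."""
    n = len(samples)
    acc = sum(x * cmath.exp(-2j * math.pi * f * k / fs)
              for k, x in enumerate(samples))
    return 2 * acc / n

fs, f_inj, n = 1_000_000, 10_000, 1000   # 1 MS/s, 10 kHz injection, 1 ms window
t = [k / fs for k in range(n)]
v_in  = [0.10 * math.sin(2 * math.pi * f_inj * tk) for tk in t]
v_out = [0.05 * math.sin(2 * math.pi * f_inj * tk - math.radians(30))
         for tk in t]

a_in = single_bin_dft(v_in, f_inj, fs)
a_out = single_bin_dft(v_out, f_inj, fs)
gain_db = 20 * math.log10(abs(a_out) / abs(a_in))
phase_deg = math.degrees(cmath.phase(a_out / a_in))
print(f"gain {gain_db:.2f} dB, phase {phase_deg:.1f} deg")
# With the synthetic data above: gain -6.02 dB, phase -30.0 deg
```

Sweep the injection frequency and repeat to trace the full loop response; the window is chosen as an integer number of injection cycles so the single-bin correlation is exact.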

Again, one does not need to know anything about GPU or silicon design to make a good DC-DC converter that meets the power requirements of the chip. You can measure all of this in the typical EE lab that every AIB already has. No need to work at NVIDIA to do this, as DC-DC converter design is a very common job done in the majority of electronics, be it a GPU, motherboard, console, or TV.
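As a rough illustration of that point, sizing the converter needs nothing beyond the chip's measured electrical numbers. A back-of-envelope sketch; board power, core voltage, and per-phase ratings are assumptions for illustration, not real Ampere figures:

```python
# Back-of-envelope VRM phase count from load numbers alone.
# All figures below are assumed for illustration.
core_power_w = 320.0     # assumed GPU core power
core_voltage = 0.9       # assumed core voltage under load
phase_rating_a = 50.0    # assumed continuous rating of one power stage
derating = 0.8           # keep each phase at <= 80% of its rating

load_current = core_power_w / core_voltage       # ~356 A total
per_phase_max = phase_rating_a * derating        # 40 A usable per phase
phases = -(-load_current // per_phase_max)       # ceiling division
print(f"{load_current:.0f} A load -> at least {phases:.0f} phases")
# -> 356 A load -> at least 9 phases
```

The same arithmetic (plus transient current steps and the impedance targets discussed above) is the starting point of any VRM specification, regardless of what the silicon behind it does.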

Also, fun fact: MLCC caps produce a lot of acoustic noise. Remember the squeaking cards that customers hate and RMA so much? :)
 
Seems that I'll be getting an RTX3070 or a 3060.
 
This is another example of why no one should buy a product from its first batches. Let at least a month pass.
More like an example of the proper validation required in the R&D process instead of a rushed release. This falls on both Nvidia and (some) AIB partners.
 
Do tell more :)

Again, one does not need to know anything about GPU or silicon design to make a good DC-DC converter that meets the power requirements of the chip.

No need to do so; you are well aware that this lack of understanding limited your joy in bringing the 8846A back from the dead.
I got one a year ago, and I even helped develop logging software for it.
The 3080/3090 GPUs are more complex than the 8846A. :)
Just keep that in mind.
 
Does this mean a recall is coming? If so, that's not very good for Nvidia.

Nvidia never admits being wrong and always blames its partners (TSMC, Apple, etc.), so here they will say that the fault is with the AIBs and the fix will be based on downclocking...
 
Another con of consumer Ampere GPUs being made on Samsung's 8 nm and ending up as ultra-high-power-draw chips. And power-circuit robustness is the same reason the cheapest AIB models more often than not have higher RMA rates than the better-made ones.
 
Nvidia never admits being wrong and always blames its partners (TSMC, Apple, etc.), so here they will say that the fault is with the AIBs and the fix will be based on downclocking...

We are all here to verify that, but do not expect any solid answers sooner than four weeks from now.
 
For those that didn't get the memo, here's @TiN:

Just so someone doesn't jump the gun and say he's pulling this out of his you-know-what.
 
Another con of consumer Ampere GPUs being made on Samsung's 8 nm and ending up as ultra-high-power-draw chips. And power-circuit robustness is the same reason the cheapest AIB models more often than not have higher RMA rates than the better-made ones.

So far I thought the cheapest ones take a hell of a beating from poor people trying to OC them without any sanity. :D
 
We are all here to verify that, but do not expect any solid answers sooner than four weeks from now.

Of course, we need patience.

EVGA's stance seems to confirm there is a problem with the choice of capacitors, although maybe cheaping out is not the root of the problem, but rather not enough testing.
On the other hand, FE cards seem to crash too, so there might be other sources of issues, PSU-related or such.
 
For those that didn't get the memo, here's @TiN:

Just so someone doesn't jump the gun and say he's pulling this out of his you-know-what.

Electronics engineering and GPU architecture are two different mountaintops.

Of course, we need patience.

EVGA's stance seems to confirm there is a problem with the choice of capacitors, although maybe cheaping out is not the root of the problem, but rather not enough testing.
On the other hand, FE cards seem to crash too, so there might be other sources of issues, PSU-related or such.
A 750 W PSU has headroom of up to 1150 W max; you may expect only about 1% of the complaints to relate to it.
Mostly because users are not aware of the actual health status of the PSU in their hands, its current delivery performance in watts given its age.
 
I'm still missing how GPU architecture or GPU design matters here. One can assume it's magic oompa-loompas inside the chip doing the math, and it would be the same either way, as long as you know (can measure) how many amps and what voltage margins the loompas need to stay happy. That is the number one test to be done for every new GPU, before you can even begin to write the specification for the VRM design.

P.S. The lack of joy with the 8846A was not because of its digital issues, but because I am/was not much interested in it, having way more fun with the 3458A/2002/etc. :) Even a fully working 8846A is quite a poor unit for what it costs...

P.P.S. All of the above is just my personal rambling, not related to any AIB's point of view.
 
I'm still missing how GPU architecture or GPU design matters here. One can assume it's magic oompa-loompas inside the chip doing the math, and it would be the same either way, as long as you know (can measure) how many amps and what voltage margins the loompas need to stay happy. That is the number one test to be done for every new GPU, before you can even begin to write the specification for the VRM design.

P.S. The lack of joy with the 8846A was not because of its digital issues, but because I am/was not much interested in it, having way more fun with the 3458A/2002/etc. :) Even a fully working 8846A is quite a poor unit for what it costs...

P.P.S. All of the above is just my personal rambling, not related to any AIB's point of view.

It is not among my priorities to discover NVIDIA's magic oompa-loompa inside the chip, because I do not make money from VGA card repairs.
I am aware of your measuring gear, but your accident stopped your exploration of what an 8846A, by far the most modern design, can do.
Anyway, that is another story, and a boring one for the readers of this forum.
 
Anyway, that is another story, and a boring one for the readers of this forum.
Please carry on. You are treating it like the main event, and I appreciate it more than the uninformed opinions.

This component race somehow makes me wonder whether there are forbidden cheats that don't meet the regulations. Where there is a rule, there is a violation.
 
Looks like the scalpers saved the day!!!! We need to thank them, they sacrificed their wallets in order to protect ours, bois!!!!
Remanufacturing, uhmmmm.
Now the companies will have to show that they made a reliable product that works fine.
 
There seem to be more and more indications that this is poor QC on the yield side, not a PCBA design problem. Either that, or a faulty boost algorithm or a bad VID/FID table.
Over 2 GHz seems to be an issue: either the chip design, the process limit, or both.
I personally expect there will be BIOS updates that lower the maximum boost clock.
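Conceptually, a BIOS-side boost cap of that kind amounts to dropping V/F points above a new ceiling. A toy sketch; the V/F points and the 2000 MHz limit are invented for illustration, not real Ampere VID/FID data:

```python
# Toy illustration of lowering the maximum boost clock by trimming a
# voltage/frequency table. All values below are invented, not real data.
vf_table = [  # (frequency MHz, voltage V) - assumed points
    (1700, 0.800), (1850, 0.875), (1950, 0.950),
    (2010, 1.000), (2070, 1.050),
]

def cap_boost(table, max_mhz):
    """Return the table with every point above max_mhz removed."""
    return [(f, v) for f, v in table if f <= max_mhz]

patched = cap_boost(vf_table, 2000)
print(patched[-1])   # highest remaining point: (1950, 0.95)
```

A real fix could equally adjust the boost algorithm's margins rather than the table itself; this only illustrates why a "BIOS update that lowers max boost" is a cheap remedy compared with a hardware recall.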
 
Could this be why MSI put such a low power target on their 3x 8-pin Trio cards?
 
Gotta love EMI/RFI design oversights. From what I've read, it is the bane of every freshly college-educated EE and many a veteran EE. I bet somebody on the design team knew this would cause a problem and was promptly ignored after referencing datasheets claiming "It'll be fine!"

 