Anything and everything is capable of error. What matters is the margin (MTBF, if you will). The longer a system can go without an error, the better it is for science. When building a supercomputer, for instance, you buy parts that are proven to work at a given spec, you test each processor in a separate machine to make sure it isn't bad, then plug it into the server and run it until it fails. You do everything possible to keep errors to a bare minimum, and anything that tends to cause errors is avoided. What we're talking about here is an error once a month versus an error once a year: a 12:1 ratio. The bigger the ratio, the better it is for science/computing/whatever. But regardless of whether the ratio is big or small, all science should be double-checked--if not by the computers that did it in the first place, then by someone else through peer review (in which case you get laughed at).
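The 12:1 figure is just the ratio of the two mean times between failures. A minimal sketch (the hour counts are approximate averages, and the variable names are my own):

```python
HOURS_PER_YEAR = 8760          # hours in a non-leap year
HOURS_PER_MONTH = 730          # average hours per month (8760 / 12)

mtbf_flaky = HOURS_PER_MONTH   # one error per month
mtbf_solid = HOURS_PER_YEAR    # one error per year

ratio = mtbf_solid / mtbf_flaky
print(f"MTBF ratio: {ratio:.0f}:1")  # → MTBF ratio: 12:1
```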