A lot of us here run World Community Grid, F@H, or both, myself included, which is a great thing, and I admire our efforts. Similarly, a lot of us overclock our rigs: processors, graphics cards, memory, all of it. Personally, I find myself tweaking whatever can be tweaked to squeeze every ounce of performance out of it... not because I have to, but because I can, and because the faster my components are, the more work I do.
Now, this too is a great thing, but I have seen the topic of 'old school' versus 'new school' overclocking argued countless times. Coming from the old school myself, I'm sure to say the new school method is wrong, because it seems to be strictly trial and error: set something and roll with it, and if something errors, change it. Now, that's fine if that's how you roll, but consider this: if you run your system this way, not knowing whether it's truly stable or not, how can you be sure you're not sending bad results to the WCG/F@H servers? Sure, they send the same work unit out to multiple machines and compare the results for differences, but there are two problems with this. The first is that it's possible for something to slip through, as with any system. The second is that if your work units are getting thrown out in the end, you would be doing more useful work running at stock than overclocked.
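To picture why a flaky overclock mostly just wastes your own cycles, here's a toy sketch of redundant-result validation in Python. This is a deliberate simplification of what BOINC-style projects do (the function name, quorum value, and exact comparison are my own assumptions; real validators handle floating-point tolerance and much more): the same unit goes to several hosts, and a result only counts once enough copies agree, so the odd one out from an unstable machine gets discarded.

```python
from collections import Counter

def validate_results(results, quorum=2):
    """Toy quorum validation: accept the result only if at least
    `quorum` hosts returned the same answer; otherwise reject.
    (Hypothetical simplification of how redundant work units are
    cross-checked -- not any project's actual validator.)"""
    value, count = Counter(results).most_common(1)[0]
    return value if count >= quorum else None

# Two stable hosts agree, one unstable overclocked host differs:
print(validate_results(["3.14159", "3.14159", "3.14022"]))  # 3.14159
# No agreement at all -> the whole unit is wasted effort:
print(validate_results(["a", "b", "c"]))  # None
```

The point of the sketch: when your result is the mismatched one, it's simply thrown away, so all the electricity and time you spent crunching it bought the project nothing.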
Anyway, my point being made, I encourage each and every one of you, if you haven't already, to thoroughly test your overclocks. Run LinX overnight, and if it errors, do something to correct it: back down the clocks, change voltages, whatever. Same with your GPU... run the OCCT GPU test for a while. Set it to run before you go take a shower and check it when you get back; I usually take about 20 minutes once everything's said and done, and that should be enough time to expose any errors. If it errors, back down your clocks.
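The idea behind tests like LinX is simple: run the same heavy, deterministic computation over and over and see whether the answers ever disagree, because an unstable overclock produces silently different results. Here's a minimal Python sketch of that idea (a toy illustration only, nowhere near as stressful as real Linpack runs; the function and its parameters are my own invention):

```python
def cpu_consistency_check(rounds=3, size=200_000):
    """Toy stability check: repeat a deterministic floating-point
    workload and verify every round produces the identical result.
    A stable machine always agrees with itself; silent compute
    errors from an unstable overclock would show up as mismatches.
    (Illustration of the principle, not a substitute for LinX/OCCT.)"""
    answers = set()
    for _ in range(rounds):
        acc = 0.0
        for i in range(1, size):
            acc += (i * 0.5) ** 0.5 / i  # arbitrary FP-heavy loop
        answers.add(repr(acc))
    return len(answers) == 1  # True means all rounds agreed

if __name__ == "__main__":
    print("stable" if cpu_consistency_check() else "ERROR: rounds disagreed")
```

Real stress tools apply the same pass/fail logic, just with far more demanding math and hours of runtime, which is why an overnight run catches what a two-minute run won't.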
Links to some stability tests:
Tests provided by Stanford: http://folding.stanford.edu/English/DownloadUtils
Monitor temps with RealTemp and GPU-Z