Discussion in 'World Community Grid (WCG)' started by hat, May 21, 2010.
Do you have something to back this claim up?
How else do you think they figured it out?
If you have something to say on the topic of the thread we'd love to hear it... otherwise...
Geez, take it easy, E. All he is saying is that this is important work that, in the end, we hope will save lives. There is no point in wasting CPU time or sending in bad results if they can be prevented by stability testing.
Yes, it's true that participating in distributed computing is a volunteer act that people do with good intentions, and that's great. However, just because your intentions are good, or because you are donating the resources, doesn't mean you should cut corners or not hold yourself to a minimum required standard. Is it okay to donate broken toys to a toy drive, not wash your hands when volunteering at a soup kitchen, or to donate just the muffin tops? No, and I think sending in WUs on an untested OC is a near equivalent.
I will agree that it's unfair to call someone lazy for not stability testing, though; more often than not, getting WCG and F@H running right is a lot more work-intensive than opening OCCT or LinX and pressing go while you watch TV, go out, or sleep.
In the end, though, if you want to ensure you are giving worthwhile results to the project, get your stability testing in.
I guess you missed my point; it doesn't matter. Regardless, continue doing what you are doing for the good of others.
Once again, everybody is entitled to their own opinions. No need to take it so seriously. That's what "error" units are for: they don't count, and you don't get credited for them. I have never primed or stress tested either of these rigs. My i7 was crunching at 4.3 GHz earlier for about 1.5 hours in the afternoon from a bench session; I lowered it back to 3.8 GHz due to temps, as I was just too lazy to restart earlier. Anyhow, your opinion is your opinion. Here's proof that, new school or old school, you can crunch without stability testing.
Zero errors! My i7 has a new Windows install, so that's why there are two devices for it.
Sorry if you think I'm coming across as harsh, and this post will probably come across as harsh as well, but if you are going to read this, I plead with you to read all the way to the end; don't stop midway through my post carrying the thought of "wow, hat's really an asshole" with you. I truly believe that running a project like Folding@Home or WCG on an overclocked computer without sufficient stability testing is asinine. Sure, it may say zero errors, but what if there is an error somewhere in one of those work units, and it happens to slip through the cracks? Think of a complex math equation with many steps. What happens if you slip up and change a sign somewhere, or make an arithmetic mistake? Sure, all the other steps might be right, but there was a "stability error", so to speak, and the whole effort is wasted when you get the wrong answer. What if that is what we are doing... what if our possibly unstable computers are making a miscalculation somewhere, and it slips through the cracks?
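To make that analogy concrete, here's a toy Python sketch (completely made up; the function and numbers have nothing to do with actual WCG work units): a long chain of additions where one silently flipped sign changes the final answer, with no crash and no error reported.

```python
# Toy illustration only, NOT real distributed computing code:
# one silently flipped sign in a thousand-step calculation.

def long_computation(values, flip_step=None):
    """Sum a series of terms; optionally flip the sign of one term
    to mimic a single silent hardware error from an unstable OC."""
    total = 0.0
    for i, v in enumerate(values):
        if i == flip_step:
            v = -v  # the "unstable overclock" moment: one bad step
        total += v
    return total

terms = [x * 0.5 for x in range(1, 1001)]
good = long_computation(terms)              # clean run
bad = long_computation(terms, flip_step=500)  # one corrupted step
print(good, bad, good - bad)  # both runs "finish", answers differ
```

Both runs complete without complaint; only the wrong number comes out the other end, which is exactly the kind of error a "zero errors" display can't show you.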
As I said before, we could be holding future lives in our hands. When you overclock without testing and rack up subtle stability errors over time, until your computer starts behaving abnormally, maybe not even able to boot into Windows because a critical system file got corrupted, there is no real harm done. Sure, it sucks reinstalling Windows and all those programs and getting everything set up the way you had it, but at the end of the day, it's no big deal. However, when an overclocked effort to cure cancer or another disease goes awry like an installation of Windows slowly knocked off its feet by a slightly unstable system over time, the effects could be disastrous.
I am not equating anyone's effort to "playing with lives", or at least I am not trying to, even though it may seem that way. We all run distributed computing projects to help others. Many of us have spent our money to upgrade the computers that run these projects to get more work done, and similarly, we overclock knowing that the higher speed will get more work done, and that's great. One of the main reasons I overclock is to get more work done. I'm just saying that if proper tests aren't done to verify the stability of the computers doing this magnificent work, it could all be for naught, or even have adverse effects.
Again, please don't take me the wrong way. I've been here since 2006; if I were a troll, asshole or otherwise, I'm sure someone would have noticed by now. I intend no harm to my readers, emotional or otherwise. I think we're a great bunch of people, and we have a very tight-knit community for a tech forum as large as we are. I am friends with many of you, and some of you have helped me with many things. I recall getting a 17" LCD monitor off one of you for free, and I don't think you even asked me to pay shipping (if you're reading this, I haven't forgotten your name; I remember exactly who you are, but I remember you not wanting me to give your name out by publicly thanking you). I just believe very strongly that everyone should test their overclocks if they are running a distributed computing project, such as F@H or WCG, the two projects many of us have become so fond of. If you are still reading at this point, and you are one of those running F@H or WCG without having properly stability tested your computer, I strongly encourage you to do so.
This post was much better than your last couple of posts, bro, not harsh at all. That you encourage us to do so is totally fine. However, just because you passed OCCT or Prime doesn't mean your computer is stable; you might pass 8 hours, but let it go 8.5 hours and crash. It might take longer, but the errors will still arise, and passing a test doesn't guarantee anything. Even at stock clocks, an error can happen for no apparent reason and squeeze through the cracks. It's just something you can't control. I really appreciate your making this thread in the first place, and your efforts toward helping the team and anybody who runs a distributed computing project as a whole, but you can't come in here expecting to change everyone's opinion, which is what it seemed like a few posts back. I don't think there is much else to say, as we've both voiced our opinions and discussed them plenty already. I just don't want this back and forth to continue because, like I said, it's your opinion and then mine, and they won't change. Hopefully somebody else chimes in with some feedback of their own.
Yes, even stock CPUs can be unstable, as can overclocks that pass, say, 24 hours of LinX, but the likelihood of that happening, stacked up against a "set it and forget it" untested OC, is slim to nil.
That aspect of my argument aside, I agree: I think we've both made our points, and there's not much left to discuss.
Plus, these are doctors and graduate students we are dealing with... they have to factor in an error percentage anyway! It's just good science to do so!