Discussion in 'Folding@Home' started by BUCK NASTY, Jul 10, 2009.
TTT to try and help ChimPowerUp.
Did you read this: http://folding.stanford.edu/English/WinSMPGuideMPICH
Thanks bumblebee, after clicking through loads of links, I have finally got it sorted. Here I go, more points for ChimPowerUp
SMP F@H is a pain to set up, but it feels so good when it finally works.
I am a team member on the crunching side but haven't been able to fold due to a 5870. My second dedicated cruncher I just put together has a 4870. Can I crunch and fold at the same time, and how do I get on the folding team if it will work? I haven't researched anything on folding, so I am a noob.
Use a name for the points (I assume garyinhere), then use team #50711 to fold for Team TPU. That's it, really.
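For reference, the classic console client keeps its identity settings in a client.cfg file next to the executable. A minimal sketch (the passkey value is a placeholder, and the exact keys can vary by client version):

```ini
[settings]
username=garyinhere
team=50711
; placeholder passkey - use the one e-mailed to you by Stanford
passkey=0123456789abcdef0123456789abcdef
asknet=no
```

If you'd rather not edit files, running the client with the -config flag walks you through the same settings interactively.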
Is there a link to a site to download it from or anything? I literally haven't researched it at all.
Make sure you read my post at #24
Is it possible to have one's passkey re-sent? I lost half a year of e-mails due to an SSD crash and would like to avoid having to do the 10 WUs again.
Can someone tell me if it's possible to fold on two different GPUs in the same machine? I've got a GTX 275 and was wondering if there would be any problems adding a GTX 260 to the same rig.
Sure, it is possible - have a look here: http://forums.techpowerup.com/showthread.php?t=90420
Some cards, but not all, need a dummy plug (resistors to emulate a monitor). Give it a try without one first.
So the fact that they have different numbers of shader cores and speeds doesn't matter? Sorry, this was my worry. If so, I'd try sticking the GT 240 into the same rig too. Thanks for the link, btw; I'll keep it handy.
I have F@H running on two GTX 480s and 16 threads of my X5677s.
Everything is at stock clocks.
When I start F@H up, predicted PPD is around 74k, but it declines rapidly and continuously until it reaches 0 PPD.
This happens in both HFM.NET and FahMon.
The % is climbing steadily, though, and I'm completing WUs.
Does this sound odd to anyone?
The CPU declines far faster than the GPUs.
Do you use a process manager? Even with a process manager, both wouldn't decline to 0.
How do you get it to load across all 16 threads? 2 SMPs or 2 VMs?
Add the flag '-smp X' (X = 0-15). Work your way down a couple (15 -> 14 -> 13, etc.) and see if that changes anything. It sounds like it's getting choked, but usually the CPU only chokes out the GPUs, and the GPUs have minimal effect on the CPU.
In the meantime, I'm going to look something up on Process Lasso. I can't remember if it can handle more than 8 cores.
-smp 15 limits the CPU to 96%
-smp 14 limits the CPU to 92%
and so on...
It's not limiting the cores, just how much load it can put on the CPUs.
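The ceilings you're seeing are a touch higher than the naive per-thread math (a monitor that samples load will round differently), but the back-of-envelope version of what -smp X frees up looks like this:

```shell
#!/bin/sh
# Back-of-envelope headroom math for a 16-thread CPU:
# each thread is 100/16 = 6.25% of total load, so '-smp 15'
# should cap the folding client at roughly 15/16 of the CPU.
threads_total=16
threads_used=15
ceiling=$(( threads_used * 100 / threads_total ))   # integer division: 93
echo "approximate CPU ceiling with -smp ${threads_used}: ~${ceiling}%"
```

By the same math, dropping to -smp 14 frees roughly two threads' worth (about 12%) for the GPU clients to breathe.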
That's alright, we're only looking to clear up a little head room not necessarily an entire core (or virtual one for that matter). Has it had any effect on the averages you've been noticing?
Nothing changed, really.
How much CPU is consumed by just the GTX 480s? Do they still degrade in performance without the CPUs active?
Also, I can't seem to find whether it supports more than 8 cores, so the best way to find out would be to just install it and see.
The 64-bit Process Manager is what I currently use. We can try installing it to see if it fixes your problem, but frankly I don't think it will. Limiting the SMP with that flag should have shown some change, so it's up in the air whether this will work or not (or work at all with your 16 cores).
Cool! Now I know that program will be good for more than 8 cores.
Alright, under options, select 'Strictly enforce priorities' <--- the name has changed a few times, but this is the general idea.
Take the FahCore_15s up to 'Above Normal' priority. Turn off the CPU folding for now and we'll see if that stabilizes the GPUs. You'll want to let it go for at least 3-5 percent on each of the clients so that you get a more accurate idea of the average. If it does help, start up the CPUs at 50%, either via -smp 8 or with the process manager (I recommend the process manager). You might have to trial-and-error a bit to see if there is a magic mix of CPU-dedicated percentage versus GPU-dedicated. Unfortunately, I'm not sure if the GTX 480s are more picky about free CPU.