
FAH TPU Top 100

A TPU guy got there first...
EOC Top Point Producers
TPUtop100.png


I was close...
I've been tuning & tweaking, getting all the Nvidia GPUs running on Linux. While I'll never make a Top 100 rank like @MightyMayfield, I'm really close to a Top 100 Producer spot... the EOC 24hr (7-day avg) is the stat I watch. Although my numbers have been weak the last couple of days, I guess others fared worse, as I moved up a chunk! Not what I expected to see.
102.png
 
Loving all these stats you can pull and show :D
 
You ranked up into the Top 100, I see: 99th place, so well done!
How much of this comes from moving to Linux?
 
TPU has a good team!
Moving to Linux was a big piece of the changes I've made, especially with ALL my GPUs being Nvidia and the CUDA folding core released at the end of Sept.
I still have a couple of Win10 systems for desktop work.
Also, I've moved to Supermicro server-class hardware for the dedicated folders.
The desktop mobos just don't have enough PCIe lanes to work with.
And it was cheaper to buy & operate! Turns out the power efficiency is much better too.
This guy (photo attached) is putting out ~8M PPD. The mobo, CPU & RAM (with a heatsink I replaced) was US$350.
Just checked... he's currently running at 8.19M.
I've attached my Linux FAHClient Install Guide for Ubuntu 20.04. I also run Linux Mint and like the interface much more than Ubuntu MATE.
Bus-wise, I could attach two additional GPUs to the system in the photo, but where would they go? Powering them would be a challenge too.
If anyone has an old system around, give the Linux install a try.
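For anyone trying the install, the end state is essentially a FAHClient config.xml like the sketch below. The user name and passkey are placeholders, and the team number shown is TechPowerUp's (50711) as I recall it; verify both against the attached guide before use.

```xml
<!-- Minimal FAHClient config.xml sketch for a single-GPU Linux folder. -->
<!-- 'YourName' and the passkey value are placeholders. -->
<config>
  <user v='YourName'/>
  <team v='50711'/>   <!-- TechPowerUp team number; verify before use -->
  <passkey v='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'/>
  <power v='full'/>
  <gpu v='true'/>
  <slot id='0' type='GPU'/>
</config>
```

FAHClient reads this file on startup, so edits usually mean restarting the service.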

I did a scatterplot of the September transition to Linux & CUDA from OpenCL. The more powerful cards, like the 3070, 2070s & 2060s, really benefited from the switch.
It's a crude plot, but it communicates the effects of both CUDA & Linux. You can see the three different atom-count levels in the groupings.
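A minimal sketch of the grouping behind that kind of plot: per-WU PPD bucketed by card and client period, then averaged. The sample entries are illustrative placeholders, not the author's actual logs.

```python
# Summarize logged per-WU PPD by card and client period -- the same
# grouping a before/after scatterplot would show. Placeholder values.
from collections import defaultdict
from statistics import mean

# (card, period, ppd) -- illustrative log entries
wus = [
    ("2070S", "OpenCL/Win", 1.2e6), ("2070S", "OpenCL/Win", 1.3e6),
    ("2070S", "CUDA/Linux", 1.9e6), ("2070S", "CUDA/Linux", 2.0e6),
    ("2060",  "OpenCL/Win", 0.8e6), ("2060",  "CUDA/Linux", 1.1e6),
]

groups = defaultdict(list)
for card, period, ppd in wus:
    groups[(card, period)].append(ppd)

summary = {k: mean(v) for k, v in groups.items()}
for (card, period), avg in sorted(summary.items()):
    print(f"{card:6s} {period:11s} {avg / 1e6:.2f}M PPD")
```

Feeding each (card, period) group to a scatter/plot call instead of `print` gives the crude plot described above.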

To share how simple it is to build a dedicated Linux server-class folder, this is the primary server that I'm working on.
Dual Xeon E5-2630v3, Supermicro X9DR3, 8GB DDR3 1066 RAM (which I upgraded later). Cost was US$170 with free shipping.
This is how it looks now... four PCIe x16 slots that can be configured to bifurcate!
I have the pieces to complete the fan bar & it'll be working soon.
I should point out that I don't fold on CPUs; a system like this isn't power efficient for CPU folding, but it does have 80 PCIe3 lanes to manage GPUs.
 

Attachments

  • xeon-8M.png (65.3 KB)
  • 20210123_101820.jpg (3 MB)
  • UbuntuFAHinstall-v1d.pdf (227 KB)
  • 149xx-20201007.png (23.9 KB)
  • 20210123_104156.jpg (3.1 MB)
You mention PCIe lanes.
But do PCIe lanes matter that much for folding, I wonder?
Have you found interesting info on how much of a difference it makes on GPUs running x16, x8, x4 or x1?
The way I "think" about it is that most of the time is spent on calculations on the GPU, and that data transfer is only a fraction of the total time.
Of course, I could be totally wrong in my assumption; maybe this topic has already been resolved somewhere on the net.
 
I don't believe that PCIe lanes matter all that much. Imagine how many lanes he had:
 
PCIe lanes have been discussed a lot on the official FAH forum, and as far as I remember you need at least x4. At x4 the hit is a few percent. I never took part in that discussion, but I have seen a slight decrease at x8 on some projects. Project 17800, which I run now on a 2070 at x16, uses 14% of the bus.
 
The EVGA folders have good discussions on this topic.
Their consensus was that a PCIe3 x4 slot meant roughly a 10% drop. They would run gaggles of 2080 Tis; on a 1660 Ti it might not be noticeable?
Due to the variation in WUs, unless you are logging data it's often difficult to tell a difference. I've logged 9,847 WUs since Sept, so the data I share isn't based on observation or impressions. I even change the client designation with any config change, so setup data isn't commingled.
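Tallying that kind of log comes down to scraping credit lines per slot. A sketch, assuming FAHClient v7's "Final credit estimate, N points" log format (treat the exact line shape as an assumption); the sample lines are placeholders.

```python
# Sketch: tally per-slot credit from FAHClient log lines.
# Line format is assumed from FAHClient v7 logs; verify against your own.
import re
from collections import defaultdict

CREDIT = re.compile(
    r"(?P<slot>FS\d+).*Final credit estimate, (?P<pts>[\d.]+) points")

def tally(lines):
    """Sum credited points per folding slot (FS00, FS01, ...)."""
    totals = defaultdict(float)
    for line in lines:
        m = CREDIT.search(line)
        if m:
            totals[m.group("slot")] += float(m.group("pts"))
    return dict(totals)

sample = [
    "01:02:03:WU00:FS00:0x22:Final credit estimate, 150000.00 points",
    "02:03:04:WU01:FS01:0x22:Final credit estimate, 98000.50 points",
    "03:04:05:WU02:FS00:0x22:Final credit estimate, 151000.00 points",
]
print(tally(sample))
```

Keeping a distinct client designation per config change, as described above, then lets you compare these totals per setup without commingling.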
PCIe3 x8 is fine for most (all?) GPUs; the smaller system above is all PCIe3 x8.
Also, the more powerful the GPU, the higher the required bandwidth.
A 2060 will take a 15-20% hit in a PCIe2 x4 slot.
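For context on why PCIe2 x4 hurts, the theoretical one-way slot bandwidths work out as below (PCIe 2.0: 5 GT/s with 8b/10b encoding, 0.5 GB/s per lane; PCIe 3.0: 8 GT/s with 128b/130b encoding, ~0.985 GB/s per lane):

```python
# Theoretical one-way PCIe bandwidth per slot width (GB/s).
# Gen2: 5 GT/s, 8b/10b encoding  -> 0.5 GB/s per lane.
# Gen3: 8 GT/s, 128b/130b encoding -> ~0.985 GB/s per lane.
PER_LANE = {"PCIe2": 5 * 8 / 10 / 8, "PCIe3": 8 * 128 / 130 / 8}  # GB/s

def slot_bandwidth(gen: str, lanes: int) -> float:
    """Raw link bandwidth for a slot; real transfer rates are lower."""
    return PER_LANE[gen] * lanes

for gen in ("PCIe2", "PCIe3"):
    for lanes in (1, 4, 8, 16):
        print(f"{gen} x{lanes:<2d}: {slot_bandwidth(gen, lanes):5.2f} GB/s")
```

So a PCIe2 x4 slot (~2 GB/s) offers roughly half of PCIe3 x4 and an eighth of PCIe3 x16, consistent with bigger hits on faster cards.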
Also, I was seeing a significant hit on an AMD X570 PCH slot. I even swapped the two 2060s to verify it was the X570 hardware.
But that's just one data point. My expectations for that box faded quickly.
Also, throttling a GPU isn't necessarily a bad thing, as it reduces the power load & operating temps.
I work to keep everything I run under a 70C max.
However, I like to tinker and find combos that work well together. My last change didn't work; I was reverting everything at 5AM.
Weird as it sounds, the workloads interact. Of course, I'm sure the Folding experts will tell you that's silly nonsense. :kookoo::kookoo::confused:
All good topics for setting up gear & running real tests.
One real test is worth a thousand 'expert' opinions!
 
Do you guys think I could make a dent in the rank with my rig?
 

Attachments

  • dent.png (102.7 KB)
With that many cores, you need to get the configuration right so all cores can be used.
I have read about it, but I don't remember exactly what it was anymore. I believe you had to insert a line in a config file telling it how many cores to use.
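The line in question is most likely the `<cpus>` element inside the CPU slot in FAHClient's config.xml. A sketch, assuming FAHClient v7 syntax; the thread count shown is a placeholder to adjust for your CPU:

```xml
<!-- FAHClient config.xml fragment: pin the CPU slot's thread count. -->
<!-- '32' is a placeholder; set it to the number of threads you want used. -->
<slot id='0' type='CPU'>
  <cpus v='32'/>
</slot>
```

Leaving `<cpus>` at its default lets the client choose, which doesn't always use every core on high-count systems.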
 
Yes, but be prepared to deal with the temps!
You could ask on the Folding@Home Discord server; they have some avid CPU folders.

Certain core-count groups are better utilized than others...
 