
Beowulf Cluster

Joined
Nov 24, 2020
Messages
140 (1.47/day)
System Name MonsterBot
Processor AMD FX 6350
Motherboard ASUS 970 Pro Gaming/AURA
Cooling 280 mm EVGA AIO
Memory 2x8GB Ripjaws Savage X 2133
Video Card(s) MSI Radeon R9 270X OC
Storage 4 500 gb HDD
Display(s) 2, one big one little
Case nighthawk 117 with 5 140mm fans and a 120
Audio Device(s) crappy at best
Power Supply 1500 W Silverstone PSU
Mouse Razer NAGA 2014 left handed edition
Keyboard Redragon
Software Win 10
Benchmark Scores none
Now that I know what it is called, I want to build one. I have for years. I literally have half a dozen computers that I could reassemble and get back up and running by the end of the week.
Old, old stuff: Core 2 Duo CPUs, DDR2 memory, Windows XP or 7 old.
Besides the half dozen PCs, what else do I need? A router of some sort, I assume.
Has anyone made one? I read the build thread about the ghetto cluster, and I am researching the original Beowulf cluster.
TBH some of this stuff is over my head, I ain't gonna lie. But this is how I learn.
I can follow basic directions if I can understand them. I don't know squat about Linux and would have to learn it, as I understand that Linux (or Ubuntu) is what the original Beowulf cluster used.
Does anyone want to collaborate on a fun, probably useless project? If I can get it to work, I will probably use it for looking for e.t.'s or for folding@home.
Specs right now: 6 Intel CPUs totaling about 24 GHz, and about 48 GB of RAM.
I have a board for each CPU and a PSU for each board. I have more than enough hard drives lying around, and miles of cable.
Last, I want to know if I could use a server chassis to avoid having 6 desktop computers side by side sucking down the juice and creating 6x the heat.
 

hat

Enthusiast
Joined
Nov 20, 2006
Messages
21,303 (4.09/day)
Location
Ohio
System Name Starlifter :: Dragonfly
Processor i7 2600k 4.4GHz :: Athlon II x4 630 3.5GHz
Motherboard ASUS P8P67 Pro :: Gigabyte GA-770T-USB3
Cooling Corsair H70 :: Thermaltake Big Typhoon
Memory 2x4GB DDR3 1866 :: 2x1GB DDR3 1333
Video Card(s) 2x PNY GTX1070 :: none
Storage Plextor M5s 128GB, WDC Black 500GB :: Mushkin Enhanced 60GB SSD, WD RE3 1TB
Display(s) Acer P216HL HDMI :: None
Case Antec SOHO 1030B :: Old White Full Tower
Audio Device(s) Creative X-Fi Titanium Fatal1ty Pro - iLive IT153B Soundbar (optical) :: None
Power Supply FSP Hydro GE 550w :: something
Software Windows 10 Pro - Plex Server on Dragonfly
Benchmark Scores >9000
It's an interesting idea, but pretty complex, and you'd need a specific goal in mind to attempt this. You can't simply combine them all into one 24 GHz computer with 48 GB of RAM. In such a cluster, each machine is still broken down into its individual parts, and the workload has to be something that can run on a system like this. Folding@Home is one such example: it's a huge project broken up into many very small parts called "work units" that get distributed all over the world. In a sense, you could say the F@H network is already a Beowulf cluster.
 
Joined
Sep 17, 2014
Messages
14,535 (6.17/day)
Location
The Washing Machine
Processor i7 8700k 4.7Ghz @ 1.26v
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) MSI GTX 1080 Gaming X @ 2100/5500
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (34'' 3440x1440)
Case Fractal Design Define C TG
Power Supply EVGA G2 750w
Mouse Logitech G502 Protheus Spectrum
Keyboard Sharkoon MK80 (Brown)
Software W10 x64
Hahah, Beowulf was cool for 1.5 hours if I recall correctly... the movie, that is.

Not sure what the purpose of this idea really is beyond a similar duration of fun :) (and keeping busy)
 
Joined
Oct 17, 2020
Messages
34 (0.26/day)
Location
United States
Beowulf clusters are really old tech. Erasure-coded storage is driving the current breed of storage. I've not kept up, but you might want to look at Ceph or even Hadoop, maybe Gluster or Lustre. At one point we were talking about putting ZFS on Lustre. This could be useful knowledge ;). The end aim of these architectures is huge reliability. RAID6 is (IIRC) ~99% with cache & BBU (battery back-up) and small disks; when you get into erasure-coded systems, they talk about 10, 12, even 15 nines of reliability, and petabytes are the base unit of storage.
 
Last edited:
Joined
Nov 20, 2013
Messages
4,665 (1.76/day)
Location
Kiev, Ukraine
System Name WS#1337
Processor Ryzen 5 1600X
Motherboard Gigabyte X470 AORUS Ultra Gaming
Cooling Xigmatek Scylla 240 AIO
Memory 2x8GB Team T-Force Vulkan DDR4-3000
Video Card(s) MSI RTX 2060 Super Armor OC
Storage Adata SX8200 Pro 1TB
Display(s) Samsung U24E590D (4K/UHD)
Case Chieftec AL-01B-OP
Audio Device(s) ALC1220
Power Supply SeaSonic SSR-550FX (80+ GOLD)
Mouse Logitech G603
Keyboard Zalman K500 modded (Gateron brown)
Software Windows 10, Ubuntu 20.04 LTS
Forget about Beowulf, it's been outdated for over a decade.
There are many modern alternatives, which can fill any purpose you have.
The question is: do you need a cluster at all?

If I can get it to work, I will probably use it for looking for e.t.'s or for folding@home.
Just run it on individual PCs.
 
Joined
Nov 24, 2020
Messages
140 (1.47/day)
Beowulf clusters are really old tech. Erasure-coded storage is driving the current breed of storage. I've not kept up, but you might want to look at Ceph or even Hadoop, maybe Gluster or Lustre. At one point we were talking about putting ZFS on Lustre. This could be useful knowledge ;). The end aim of these architectures is huge reliability. RAID6 is (IIRC) ~99% with cache & BBU (battery back-up) and small disks; when you get into erasure-coded systems, they talk about 10, 12, even 15 nines of reliability, and petabytes are the base unit of storage.
I have no idea what you are talking about. Can you please dumb it down a little? Pretend I am 10.

Forget about Beowulf, it's been outdated for over a decade.
There are many modern alternatives, which can fill any purpose you have.
The question is: do you need a cluster at all?


Just run it on individual PCs.
Of course I don't NEED a cluster. This isn't about filling a need, other than the desire to learn and maybe build something stupid cool.
I don't care about outdated; this is a learning exercise. Sure, I can run folding@home on individual computers, but I want to see what a cluster does. I want to be able to say that I built one AND didn't burn down the house in the process.
What else am I going to do with half a dozen C2D computers? But since you brought it up, what alternatives?
At the end of the day, I wasn't looking for criticism so much as other like-minded folks who have built a cluster or want to build a cluster.
 
Joined
Aug 14, 2013
Messages
378 (0.14/day)
System Name boomer--->zoomer not your typical millenial build
Processor i5-760 @ 3.8ghz + turbo ~goes wayyyyyyyyy fast cuz turboooooz~
Motherboard P55-GD80 ~best motherboard ever designed~
Cooling NH-D15 ~double stack thot twerk all day~
Memory 16GB Crucial Ballistix LP ~memory gone AWOL~
Video Card(s) MSI GTX 970 ~*~GOLDEN EDITION~*~ RAWRRRRRR
Storage 500GB Samsung 850 Evo (OS X, *nix), 128GB Samsung 840 Pro (W10 Pro), 1TB SpinPoint F3 ~best in class
Display(s) ASUS VW246H ~best 24" you've seen *FULL HD* *1O80PP* *SLAPS*~
Case FT02-W ~the W stands for white but it's brushed aluminum except for the disgusting ODD bays; *cries*
Audio Device(s) A LOT
Power Supply 850W EVGA SuperNova G2 ~hot fire like champagne~
Mouse CM Spawn ~cmcz R c00l seth mcfarlane darawss~
Keyboard CM QF Rapid - Browns ~fastrrr kees for fstr teens~
Software integrated into the chassis
Benchmark Scores 9999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999

I would just sell them or junk them. If you want to see the performance you're looking for, you're probably going to need to spend a couple hundred on your network, and even then you'll only see a fraction of your performance targets (24 GHz is not happening; imagine the bandwidth you'd need!).

But go for it if you want! I'd offer advice, but I have only ever done this with a couple of G4 laptops. Unfortunately the returns weren't worth it for me. Still a fun experiment though!
 
Joined
Nov 20, 2013
Messages
4,665 (1.76/day)
Location
Kiev, Ukraine
Of course I don't NEED a cluster.
That's the main issue. You wanna do folding? There's no option to run it on a cluster. Most of this stuff requires software that's specifically designed for one stack or another.

I don't care about outdated; this is a learning exercise.
Once again, it all depends on what you are trying to learn.
The first thing you need to understand is that a cluster of computers is not the same as one really powerful computer. You can't just run any software on it, since it usually relies on some sort of message-passing protocol, and it only gives you benefits in tasks that allow lots of parallelization, with minimal serial work and even less I/O. Otherwise you'll be bottlenecked by storage, by the network, or by the single-threaded performance of your slowest core.
The best thing you could try is setting up an OpenMPI cluster (same thing; Beowulf is just a nickname for any cluster built on commodity hardware).
I'm not sure if there's any readily available software to test it out, but you could do a quick crash course in C++, read a few tutorials on MPI, and write some code.
But if you want something practical, I suggest dipping your toes into virtualization, containerization, etc. Just look up some tutorials on setting up Kubernetes on a Raspberry Pi and apply them to your hardware. It's not exactly the same as "building a supercomputer", but it's more practical, more "modern", and more interesting.
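To make the message-passing idea concrete without real MPI, here's a rough sketch of the scatter/compute/gather pattern using Python's standard multiprocessing module. This is not OpenMPI (that would need mpirun and something like mpi4py); the worker "nodes" here are just local processes, but the shape of the workload is the same: independent chunks, no shared memory, results gathered at the end.

```python
# Scatter/compute/gather: the basic pattern an MPI-style cluster runs.
# Here the "nodes" are local processes; on a real Beowulf-style cluster
# each worker would be a separate machine reached over the network.
from multiprocessing import Pool

def work_unit(chunk):
    # Each node computes on its own chunk, with no shared RAM.
    return sum(x * x for x in chunk)

def head_node(data, nodes=4):
    # Scatter: deal the data out into one chunk per node.
    chunks = [data[i::nodes] for i in range(nodes)]
    # Compute in parallel, then gather the partial results.
    with Pool(nodes) as pool:
        partials = pool.map(work_unit, chunks)
    # Reduce: combine the partials into the final answer.
    return sum(partials)

if __name__ == "__main__":
    print(head_node(list(range(1000))))  # same as sum(x*x for x in range(1000))
```

Anything that fits this shape (folding work units, render tiles, brute-force searches) clusters well; anything where every step depends on the previous one does not.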

At the end of the day, I wasn't looking for criticism so much as other like-minded folks who have built a cluster or want to build a cluster.
I'm not trying to criticize anyone, just clarifying what exactly it is you're trying to achieve. Lots of people have misconceptions about this stuff, so it's important to read up on some theory before wasting days or weeks on something that might not work at all, or will end up being not what you actually wanted.

Back in the day I did a similar project. Bought a boxful of broken "fat" PS3s, fixed a few of them, and built a little cluster running Yellow Dog Linux. That was just that: tested out some code [once or twice], then scrapped the whole thing back to the stock PS3 OS and ran F@H for a few months. At the time, even with a background in programming, it was too much of a pain in the ass (not the building, but finding practical uses for it), because only a few months later Nehalem came out, and CUDA became super-relevant. Auctioned off all of it, and my semi-refurbished console "supercomputer" paid for my shiny new X58 rig that could run Crysis)))
 
Joined
Apr 24, 2020
Messages
738 (2.39/day)
I'd recommend going down the Raspberry Pi cluster route instead.

When you're building a home cluster, it isn't about actually making something fast (otherwise you'd just buy a Threadripper or EPYC or Xeon). What it's about is learning how to use multiple computers together on a single task.

EDIT: At 5 W per Pi and like $20 each, you're gonna be able to set up an 8-way Raspberry Pi cluster at only 40 W and $160 or so, maybe closer to $250 once you account for microSD cards and some other random stuff (power supply, cables, Ethernet switch, etc.).

EDIT2: The other thing you can do is buy actual supercomputer hardware (aka GPUs) and learn how to use them. SIMD compute is very efficient, but very difficult to program.
 
Last edited:
Joined
Aug 14, 2013
Messages
378 (0.14/day)
Back in the day I did a similar project. Bought a boxful of broken "fat" PS3s, fixed a few of them, and built a little cluster running Yellow Dog Linux. That was just that: tested out some code [once or twice], then scrapped the whole thing back to the stock PS3 OS and ran F@H for a few months. At the time, even with a background in programming, it was too much of a pain in the ass (not the building, but finding practical uses for it), because only a few months later Nehalem came out, and CUDA became super-relevant. Auctioned off all of it, and my semi-refurbished console "supercomputer" paid for my shiny new X58 rig that could run Crysis)))

YellowDog is what I used too :) Where’s the G6 IBM?!
 
Joined
Nov 20, 2013
Messages
4,665 (1.76/day)
Location
Kiev, Ukraine
YellowDog is what I used too :) Where's the G6 IBM?!
It was a fun distro. My main PS3, the one I actually used for games'n'stuff, also ran YDL.
Had a hard time setting up a wireless KB/mouse, but once it worked, it pretty much replaced my main rig for mundane tasks like web browsing and checking e-mails.
The only thing missing was proper GPU accel.

I'd recommend going down the Raspberry Pi cluster route instead.
+1. It may sound counter-intuitive, but scrapping those in favor of a small ARM farm is actually the best solution.
I think the electric bill from 5-6 C2D PCs alone would pay for one Pi 4 in a week or two. Also, it's not gonna be a super-loud fire hazard.
I'd settle on something that can run the latest Android, since that gives you the ability to repurpose them into a crunching farm as well (at least in my experience the aarch64 client never worked and seems abandoned). I did that a while ago too. I believe I even posted some performance numbers from my old devices here on TPU, like a Cubietruck and old smartphones. At least before Zen it was the best PPW money could get. An RPi 4 is probably wa-a-ay better than what I used to play with.
 
Joined
Jul 2, 2008
Messages
7,740 (1.67/day)
Location
Hillsboro, OR
System Name Main/DC
Processor i7-3770K/i7-2600K
Motherboard MSI Z77A-GD55/GA-P67A-UD4-B3
Cooling Phanteks PH-TC14CS/H80
Memory Crucial Ballistix Sport 16GB (2 x 8GB) LP /4GB Kingston DDR3 1600
Video Card(s) Asus GTX 660 Ti/MSI HD7770
Storage Crucial MX100 256GB/120GB Samsung 830 & Seagate 2TB(died)
Display(s) Asus 24' LED/Samsung SyncMaster B1940
Case P100/Antec P280 It's huge!
Audio Device(s) on board
Power Supply SeaSonic SS-660XP2/Seasonic SS-760XP2
Software Win 7 Home Premiun 64 Bit
IIRC,

Beowulf clusters came along to provide "cheap" supercomputers to colleges and universities. People don't seem to remember that in the early days of computers, a 2-year-old computer was worthless. Beowulf allowed the creation of clusters with HUNDREDS of 486 systems. I always found it ironic, though, that in order to get the backbone speed fast enough, they had to use fiber-optic NICs, and PCI didn't come out until the Pentium.

IIRC
 
Joined
Nov 24, 2020
Messages
140 (1.47/day)

I would just sell them or junk them. If you want to see the performance you're looking for, you're probably going to need to spend a couple hundred on your network, and even then you'll only see a fraction of your performance targets (24 GHz is not happening; imagine the bandwidth you'd need!).

But go for it if you want! I'd offer advice, but I have only ever done this with a couple of G4 laptops. Unfortunately the returns weren't worth it for me. Still a fun experiment though!
Fun experiment! That's what I am after!! I am old, retired, and bored.
I hadn't thought about potential bottlenecks. Not sure how much bandwidth that would actually hog up; I don't know enough about how that stuff works. Hence this learning exercise.
That's the main issue. You wanna do folding? There's no option to run it on a cluster. Most of this stuff requires software that's specifically designed for one stack or another.
Here is my understanding of how a cluster works; please correct me if I am wrong. Isn't it kind of like a supervisor/worker or master/slave setup, meaning one computer assigns tasks to the others?
Again, I am trying to learn and not argue... but isn't that exactly what a multi-core, multi-thread CPU does?
I get that there may (probably will) be a bottleneck somewhere. Looking at the F@H website, I see that there is a Linux download. Can you please explain in simple terms why this won't work for F@H? I thought I could use some form of Linux to make that happen, but I don't know enough about Linux to know if it is still a viable option.

Once again, it all depends on what you are trying to learn.
I am trying to learn how to build and use a cluster. It is my understanding that Beowulf ran on Linux, and since I am wanting to learn Linux, I could potentially kill two birds with one stone. But apparently, there is no single piece of software called "Linux". Looking at the Linux.org software list, I see there are 25 different distributions available, which leads to my next question: which one do I get?
The best thing you could try is setting up an OpenMPI cluster (same thing; Beowulf is just a nickname for any cluster built on commodity hardware).
I'm not sure if there's any readily available software to test it out, but you could do a quick crash course in C++, read a few tutorials on MPI, and write some code.
But if you want something practical, I suggest dipping your toes into virtualization, containerization, etc. Just look up some tutorials on setting up Kubernetes on a Raspberry Pi and apply them to your hardware. It's not exactly the same as "building a supercomputer", but it's more practical, more "modern", and more interesting.
This right here is why I posted this. While I am not at all interested in learning C++ just yet, the part about MPI and virtualization and containerization might come in handy somewhere.
I'd recommend going down the Raspberry Pi cluster route instead.

When you're building a home cluster, it isn't about actually making something fast (otherwise you'd just buy a Threadripper or EPYC or Xeon). What it's about is learning how to use multiple computers together on a single task.

EDIT: At 5 W per Pi and like $20 each, you're gonna be able to set up an 8-way Raspberry Pi cluster at only 40 W and $160 or so, maybe closer to $250 once you account for microSD cards and some other random stuff (power supply, cables, Ethernet switch, etc.).

EDIT2: The other thing you can do is buy actual supercomputer hardware (aka GPUs) and learn how to use them. SIMD compute is very efficient, but very difficult to program.
The only problem with using RPis is the cost. I have 6 C2D computers that will boot and not a single RPi. I am retired; that means flat broke. When you are flat broke, you learn to use what you have.
I would love to buy a half dozen GPUs; see comment above about being broke.

At the end of the day, I will probably use it for a day or two after wasting weeks on getting it set up. But it gives the grey matter something to work on. When you are retired, there are only so many hours of gaming and Jerry Springer you can handle. I don't care if it becomes a museum piece or a boat anchor, or a paperweight. Completely impractical, purpose built white elephants are fun. Being retired is a little like being in prison. You have hours and hours to kill and nothing much to do.

Thanks for everyone's input!
 
Last edited:
Joined
Nov 8, 2020
Messages
48 (0.43/day)
On a more useful note (perhaps), here are things you can run on multiple machines: rendering, network rendering for 3D renderers and whatnot. I used to do that back in the day, for a decent enough speed increase when I had more than one machine, or simply to offload it from the main computer so I could play some games while it rendered.
It won't be particularly useful, but it can be fun to play with if nothing else. If you have any interest in trying that, Blender + a crowd renderer is what you wanna look at.

A friend of mine got to play with some of his university's old servers before they chucked them. I recall he wanted to try out some networked computational stuff. Nothing useful at all, 100% "for the lulz" of it. But he enjoyed the tinkering and learning. He had only read about such things but never actually tried it before, so it's cool to do something new.

Other fun things you can try out: a pfSense router, perhaps? Fun to tinker with on older computers.
 
Joined
Nov 24, 2020
Messages
140 (1.47/day)
On a more useful note (perhaps), here are things you can run on multiple machines: rendering, network rendering for 3D renderers and whatnot. I used to do that back in the day, for a decent enough speed increase when I had more than one machine, or simply to offload it from the main computer so I could play some games while it rendered.
It won't be particularly useful, but it can be fun to play with if nothing else. If you have any interest in trying that, Blender + a crowd renderer is what you wanna look at.

A friend of mine got to play with some of his university's old servers before they chucked them. I recall he wanted to try out some networked computational stuff. Nothing useful at all, 100% "for the lulz" of it. But he enjoyed the tinkering and learning. He had only read about such things but never actually tried it before, so it's cool to do something new.

Other fun things you can try out: a pfSense router, perhaps? Fun to tinker with on older computers.
Rendering that I am familiar with has to do with Autodesk and similar programs. Is that what you are referring to? Turning line drawings into still shots or walk-throughs or fly-throughs? Not overly excited about creating a drawing just so I can render it with a cluster.
I thought that was more of a function of the GPU than anything else; is that accurate?

Edit: Can you explain about the pfSense router? Are you thinking it would reduce the bottleneck?
 
Last edited:
Joined
Apr 24, 2020
Messages
738 (2.39/day)
The only problem with using RPis is the cost. I have 6 C2D computers that will boot and not a single RPi. I am retired; that means flat broke. When you are flat broke, you learn to use what you have.
I would love to buy a half dozen GPUs; see comment above about being broke.

Just keep an eye on overall utility costs. C2D is old, slow, and power-hungry (compared to anything from this decade, at least). As long as you're just playing with them, you're probably fine, but if you keep your cluster on for an extended period on any long-term problem (e.g. Folding@Home, 3D rendering, web hosting, whatever), you might be spending more $$$ on kWh than you'd initially expect.

I don't think C2D knew how to "deep sleep"; you might be hitting 50 W to 150 W or higher per machine. If you're using the heat anyway (i.e. heating up your room), that's probably fine. But in the summer, when you spend electricity to move that heat outside your house (i.e. air conditioning), the costs go even higher.
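To put rough numbers on that, here's a quick back-of-the-envelope, assuming 100 W per box and $0.13 per kWh (both made-up figures; substitute your own wattage and local rate):

```python
# Rough 24/7 power-cost estimate for a small cluster.
# 100 W per machine and $0.13/kWh are assumptions; plug in your own numbers.
def monthly_cost(machines=6, watts_each=100.0, price_per_kwh=0.13, hours=24 * 30):
    kwh = machines * watts_each * hours / 1000.0  # watt-hours -> kWh
    return kwh * price_per_kwh

print(round(monthly_cost(), 2))        # 6 C2D boxes, all month: 56.16 (~$56)
print(round(monthly_cost(8, 5.0), 2))  # 8 Raspberry Pis, all month: 3.74 (~$4)
```

That ~$56 vs ~$4 gap is the C2D-versus-Pi trade-off being described here.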

RPi is nifty because 8x Pi combined will use less power than even one of your C2D machines. So the RPi can be used for long-term problems (web-hosting).

-----------

C2D cluster should be fine for learning, as long as you keep it turned off most of the time to cut back on utility bills.

Rendering that I am familiar with has to do with Autodesk and similar programs. Is that what you are referring to? Turning line drawings into still shots or walk-throughs or fly-throughs? Not overly excited about creating a drawing just so I can render it with a cluster.
I thought that was more of a function of the GPU than anything else; is that accurate?

Professional-level drawings are still CPU-rendered. GPU rendering is beginning to become popular, but there are many issues still to be solved. (In particular, CPU algorithms need to be ported over. But also, GPUs have much less RAM. Professional 3D renders like Disney's Moana island scene have 93 GB of models + 130 GB of animations, too much data to fit on a GPU, and are therefore largely still rendered on CPUs: https://www.disneyanimation.com/resources/moana-island-scene/)

Edit: Can you explain about the pfSense router? Are you thinking it would reduce the bottleneck?

Routing is Layer 3. Most home networking is Layer 2.

Layer 2 is about connecting computers together. Layer 3 (routing) is about connecting networks together. So if Computer #1, #2, and #3 are on one network, and #4, #5, #6 are on a second network, what are the rules you make to have #1/#2/#3 talk with #4/#5/#6?

That's what pfSense does for you. As such, it's a thing you can only really play with when you have assloads of computers (networks-of-networks don't really make sense when you only have 2 or 3 computers). Why would you want "rules"? Well, security is usually the main issue. You can make it impossible for #1/#2/#3 to talk to computer #5, for example, but still allow #4 to talk to #1/#2/#3. Alternatively, you might want to just manage your IP ranges. Maybe you want static IP addresses on one network but dynamic DHCP addresses on the second. (Or, as some network admins like to call it: your computers on network #1 are "pets". Unique names, configured one by one, all unique little snowflakes. Computers on network #2 are "cattle". They're mass-produced and highly automated. You don't give them names; you just track their resources and distribute jobs to them.)

A major benefit of AWS / the cloud is the methodology: the mindset of treating computers "like cattle" in a factory farm. You set things up in such a way that the computers automatically configure themselves and do stuff, with as little interaction with you personally as possible. pfSense can play a role in that, since it helps cut and join different networks together.

6 computers is small enough that you can probably still have "pet computers" on a statically managed network without pfSense. But it's also enough that you can start playing with the "cattle computers" concept and learning it.
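As a small concrete illustration of that Layer 2 / Layer 3 split, Python's standard ipaddress module can tell you when two hosts share a subnet (the switch alone connects them) versus when a router like pfSense has to forward between networks. The two private subnets below are made up for the example:

```python
# Two private subnets, like the #1/#2/#3 and #4/#5/#6 networks above.
import ipaddress

net_pets = ipaddress.ip_network("192.168.1.0/24")    # static "pets" network
net_cattle = ipaddress.ip_network("192.168.2.0/24")  # DHCP "cattle" network

def needs_router(host_a, host_b, networks):
    # If no single subnet contains both hosts, traffic between them has to
    # cross a Layer 3 device (a router), not just the Layer 2 switch.
    a, b = ipaddress.ip_address(host_a), ipaddress.ip_address(host_b)
    return not any(a in net and b in net for net in networks)

nets = [net_pets, net_cattle]
print(needs_router("192.168.1.10", "192.168.1.20", nets))  # False: same subnet
print(needs_router("192.168.1.10", "192.168.2.10", nets))  # True: router needed
```

This is just the addressing logic; the actual "rules" (firewalling, NAT, DHCP) are what pfSense layers on top of it.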
 
Last edited:
Joined
Nov 24, 2020
Messages
140 (1.47/day)
System Name MonsterBot
Processor AMD FX 6350
Motherboard ASUS 970 Pro Gaming/AURA
Cooling 280 mm EVGA AIO
Memory 2x8GB Ripjaw Savge X 2133
Video Card(s) MSI Radeon 29 270x OC
Storage 4 500 gb HDD
Display(s) 2, one big one little
Case nighthawk 117 with 5 140mm fans and a 120
Audio Device(s) crappy at best
Power Supply 1500 W Silverstone PSU
Mouse Razer NAGA 2014 left handed edition
Keyboard Redragon
Software Win 10
Benchmark Scores none
Just keep an eye on overall utility costs. C2D is old, slow, and power-hungry (compared to anything from this decade at least). As long as you're just playing with them, you're probably fine, but if you keep your cluster on for an extended period of time on any long-term problem (ex: Folding at Home, 3d Rendering, Web-Hosting, whatever), you might be spending more $$$ on your kWhrs than you'd initially expect.

I don't think C2D knew how to "deep sleep", you might be hitting 50W to 150W or higher per machine. If you're using the heat anyway (ie: heating up your room), that's probably fine. But in the summer when you spend electricity to move that heat outside your house (ie: Air Conditioning), the costs are going to go even higher.

RPi is nifty because 8x Pi combined will use less power than even one of your C2D machines. So the RPi can be used for long-term problems (web-hosting).

-----------

C2D cluster should be fine for learning, as long as you keep it turned off most of the time to cut back on utility bills.



Professional-level rendering is still mostly CPU-based. GPU rendering is beginning to become popular, but there are many issues still to be solved. (In particular, CPU algorithms need to be ported over. GPUs also have much less RAM: professional 3D scenes like Disney's Moana island have 93 GB of models + 130 GB of animation data, too much to fit on a GPU, and are therefore largely still rendered on CPUs: https://www.disneyanimation.com/resources/moana-island-scene/)



Routing is layer 3. Most home networking is layer 2.

Layer 2 is about connecting computers together. Layer 3 (routing) is about connecting networks together. So if computers #1, #2, and #3 are on one network, and #4, #5, and #6 are on a second network, what rules do you make so that #1/#2/#3 can talk with #4/#5/#6?
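That "same network or different network" question is just a subnet calculation. A quick illustration with Python's standard `ipaddress` module (the address ranges here are made up):

```python
import ipaddress

# Two made-up networks: #1-#3 on one subnet, #4-#6 on another.
net_a = ipaddress.ip_network("192.168.1.0/24")
net_b = ipaddress.ip_network("192.168.2.0/24")

pc1 = ipaddress.ip_address("192.168.1.10")   # on network A
pc4 = ipaddress.ip_address("192.168.2.10")   # on network B

# Same subnet: a layer 2 switch can deliver the traffic directly.
print(pc1 in net_a)  # True
# Different subnet: the traffic has to cross a layer 3 router (e.g. PFSense).
print(pc4 in net_a)  # False
```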

That's what PFSense does for you. As such, it's a thing you can only really play with when you have assloads of computers. (Networks-of-networks don't really make sense when you only have 2 or 3 computers.)

Why would you want "rules"? Well, security is usually the main issue. You can make it impossible for #1/#2/#3 to talk to computer #5, for example, but still allow #4 to talk to #1/#2/#3. Alternatively, you might just want to manage your IP ranges. Maybe you want static IP addresses on one network, but dynamic DHCP addresses on the second.

(Or, as some network admins like to call it: your computers on network #1 are "pets". Unique names, configured one by one, all unique little snowflakes. Computers on network #2 are "cattle". They're mass-produced and highly automated. You don't give them names; you just track their resources and distribute jobs to them.)

A major benefit of AWS / The Cloud is more about methodology: the mindset of treating computers "like cattle" in a factory farm. You set things up in such a way that the computers automatically configure themselves and do their work, with as little interaction with you personally as possible. PFSense can play a role in that, since it helps cut and join different networks together.

6 computers is small enough that you can probably still have "pet computers" on a statically managed network without PFSense. But it's also enough that you can start playing with the "cattle computers" concept and learning it.
Great point about the power consumption. I know there are calculators. I was thinking about using one to estimate my wattage, then converting that to kWh to get a guesstimate of what it's going to cost me to run it.
About the pfsense router: I was thinking I could find some junk router and connect the computers that way. Is that inaccurate? Are you saying that even if I did that and it works, it won't talk to another network, like SETI?
The pet vs. cattle analogy makes sense sort of. I don't know that I can automate this as far as set up goes. My understanding is that I have to physically configure each computer to work with other computers in the cluster. Once I get them set up, I can use a master terminal to assign tasks to the other units.
Maybe that level of automation is more advanced than beginner?
Either way thanks very much for the explanation.
 
Joined
Apr 24, 2020
Messages
738 (2.39/day)
About the pfsense router: I was thinking I could find some junk router and connect the computers that way. Is that inaccurate? Are you saying that even if I did that and it works, it won't talk to another network, like SETI?

You have 6 C2D computers. Why not download PFSense, buy a $10 to $20 Ethernet card, and turn one of those C2D machines into a router? PFSense is free.

The question is whether you want a router (layer 3), or whether a switch (layer 2) is sufficient. Most people are probably happy with a switch. Routers are used to organize multiple switches together. It really depends on what exactly you want to do and/or play with.

You really can do everything you need with a switch (layer 2). But... if you're still looking for project ideas, learning how PFSense works and how to do layer 3 networking is certainly a timesink.

The pet vs. cattle analogy makes sense sort of. I don't know that I can automate this as far as set up goes. My understanding is that I have to physically configure each computer to work with other computers in the cluster. Once I get them set up, I can use a master terminal to assign tasks to the other units.

Automation is simply an issue of knowledge. Get a DHCP server running on your network, put a PXE boot image on a master server, and have your storage-free computers download and run that image automatically (a standard Linux installation). Configure the Linux installation to do its work on first bootup automatically. Now you have cattle instead of pets.
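One small piece of that "cattle" idea, sketched in Python: instead of hand-naming machines, derive a node name mechanically from something the hardware already has, like its MAC address. The naming scheme here is made up, purely for illustration:

```python
# "Cattle" naming: machines name themselves from their MAC address
# instead of being configured one by one like pets.

def node_name(mac: str, prefix: str = "node") -> str:
    """Deterministic node name from the last 3 bytes of a MAC address."""
    digits = mac.replace(":", "").replace("-", "").lower()
    return f"{prefix}-{digits[-6:]}"

print(node_name("AA:BB:CC:DD:EE:FF"))  # node-ddeeff
print(node_name("aa-bb-cc-00-11-22"))  # node-001122
```

A first-boot script like this is the kind of thing you'd bake into that PXE image, so every machine that boots comes up with a usable identity and no manual setup.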

Or maybe you want pets. You can do whatever you want; it's your network. There's a simplicity to static IPs + named servers compared to the automated route. I'm just trying to give you ideas on what you could do.

If you want some computers to be pets and some other computers to be cattle, that's where something like PFSense starts to come in handy (since you can start cutting up the networks and assigning them different IP ranges and subnet masks). This is starting to get needlessly complicated, but... as you said, you're retired. Now is the time to play with needlessly complicated Rube Goldberg setups, right? (Especially if these complicated setups are similar to what $200,000+ devops dudes are doing at big companies.)

-----------

Another direction is a VPN: your PFSense box shields the other 5 machines from the internet. You can use your cell phone to reach your home-network PFSense box, and if you present the right encryption certificates, the box lets you into your network (even if you're sitting in Hawaii, halfway around the world). I mean, your computers still have to be on, but maybe you have a network-attached power button + wake-on-LAN capabilities and can turn your computers on remotely. (Or you go the simple approach and just leave the computers on 24/7, which is why a 5 W Rasp. Pi is useful for these sorts of tasks.)
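The wake-on-LAN part is simpler than it sounds: a "magic packet" is just 6 bytes of 0xFF followed by the target machine's MAC address repeated 16 times, sent as a UDP broadcast. A minimal Python sketch (the MAC here is a placeholder, and the target's BIOS/NIC must have WoL enabled):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Wake-on-LAN magic packet: 6 x 0xFF, then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the local network."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

# wake("AA:BB:CC:DD:EE:FF")  # uncomment with a real MAC to actually send it
print(len(magic_packet("AA:BB:CC:DD:EE:FF")))  # 102
```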

Once you get a VPN set up (a feature of PFSense and many other routers), only the people with your encryption keys can "enter" your network. Furthermore, all traffic is encrypted / protected if you do it correctly. Kids do this all the time to get their friends to play Minecraft with them, btw (not that it's easy, but... they're willing to work really hard to play Minecraft together...).
 
Last edited:
Joined
Nov 24, 2020
Messages
140 (1.47/day)
You have 6 C2D computers. Why not download PFSense, buy a $10 to $20 Ethernet card, and turn one of those C2D machines into a router? PFSense is free.

The question is whether you want a router (layer 3), or whether a switch (layer 2) is sufficient. Most people are probably happy with a switch. Routers are used to organize multiple switches together. It really depends on what exactly you want to do and/or play with.

You really can do everything you need with a switch (layer 2). But... if you're still looking for project ideas, learning how PFSense works and how to do layer 3 networking is certainly a timesink.



Automation is simply an issue of knowledge. Get a DHCP server running on your network, put a PXE boot image on a master server, and have your storage-free computers download and run that image automatically (a standard Linux installation). Configure the Linux installation to do its work on first bootup automatically. Now you have cattle instead of pets.
I ain't gonna lie, IDKWTF you are talking about here. I had to google some of that stuff and I still don't get it.
I understand pfsense is free. sweet.
Why do I want one of these machines to be a router? Do you mean the master terminal that assigns tasks, or the conventional type of router? As a project, I fail to see the value.
What is a DHCP server going to do for me? I understand static vs. dynamic, but why do I want one over the other? I googled "pxeboot image" and I still don't understand what it is or what it will do for me.
 
Joined
Apr 24, 2020
Messages
738 (2.39/day)
Why do I want one of these machines to be a router?

1611360253166.png



That's layer 3 routing. Any time you start combining networks together, you need a layer 3 router, such as PFSense. I realize this is an abstract concept, but that's because there are many, many different things you can do with it.

1611360435361.png


Here's a VPN setup. Let's say your cell phone wants to pretend it's on the same layer 2 network as your home cluster. How do you connect through the big scary internet (full of hackers) in such a way that ONLY you and your personal cell phone can access your machines? What security policies should you use to stay safe? Who is allowed to talk to your computers, and who isn't?

Here's another setup:

1611360726789.png


Routers determine the security / logic involved in connecting networks together. In particular: connecting the Internet (the biggest network of them all) with your personal network at home.

What is a DHCP server going to do for me? I understand static vs. dynamic, but why do I want one over the other? I googled "pxeboot image" and I still don't understand what it is or what it will do for me.

DHCP automatically assigns IP addresses to any computer that connects to your network (usually within a single layer 2 network).

PXEBoot automatically downloads programs to a computer and executes them when a computer starts up. If you're going the "Cattle" route, something like DHCP + PXEBoot is absolutely essential.

PXEBoot is usually configured to install Linux from scratch (every time the computer starts up). When Linux is done installing, DHCP automatically gives it an IP address. Then your "cattle" computer starts doing whatever computational projects you want (usually via Kubernetes / Docker automatic execution).
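To demystify DHCP's core job a bit, here's a toy Python sketch: hand out the next free address from a pool and remember which MAC got it. Real DHCP servers also handle lease expiry, renewals, and the PXE boot options mentioned above; this is just the concept:

```python
import ipaddress

class ToyDhcp:
    """Toy DHCP-style allocator: one IP per MAC from a fixed pool."""

    def __init__(self, network: str):
        self.pool = list(ipaddress.ip_network(network).hosts())
        self.leases = {}  # MAC -> assigned IP

    def request(self, mac: str):
        if mac not in self.leases:      # a returning machine keeps its lease
            self.leases[mac] = self.pool.pop(0)
        return self.leases[mac]

dhcp = ToyDhcp("192.168.1.0/29")
print(dhcp.request("aa:bb:cc:00:00:01"))  # 192.168.1.1
print(dhcp.request("aa:bb:cc:00:00:02"))  # 192.168.1.2
print(dhcp.request("aa:bb:cc:00:00:01"))  # 192.168.1.1  (same lease again)
```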

1611361135696.png


So something like Kubernetes automatically launches applications inside of containers. If you notice that your cluster is slowing down, you can turn on a new Node. Kubernetes can automatically detect (with proper DHCP / networking / etc. etc. set up) that a new computer turned on: and then it migrates your programs from one computer to another computer automatically. Maybe you only need 1 computer most of the time, but suddenly a billion people start visiting your web pages. So you turn on Computer#2, #3, #4, #5... and now you have 5 different computers trying to serve all your web traffic.

Automatically. If you do it right, you can do this without ever lifting a finger. It's just a ton of configuration / programming / Linux / network understanding to get to that point.
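The scaling idea boils down to something like this toy Python sketch (nowhere near what Kubernetes actually does internally, but it shows the principle: spread jobs over however many nodes are currently on):

```python
from itertools import cycle

def distribute(jobs, nodes):
    """Round-robin jobs across whichever nodes are currently up."""
    assignments = {n: [] for n in nodes}
    for job, node in zip(jobs, cycle(nodes)):
        assignments[node].append(job)
    return assignments

jobs = [f"request-{i}" for i in range(8)]
one = distribute(jobs, ["node1"])            # one node does everything
two = distribute(jobs, ["node1", "node2"])   # "turn on" a second node
print(len(one["node1"]), len(two["node1"]))  # 8 4
```

Turning on another node just means adding it to the list; the same loop immediately starts sending it work, which is the manual version of what the orchestrator automates.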
 
Last edited:
Joined
Nov 24, 2020
Messages
140 (1.47/day)
View attachment 185161


That's layer 3 routing. Any time you start combining networks together, you need a layer 3 router, such as PFSense. I realize this is an abstract concept, but that's because there are many, many different things you can do with it.

View attachment 185162

Here's a VPN setup. Let's say your cell phone wants to pretend it's on the same layer 2 network as your home cluster. How do you connect through the big scary internet (full of hackers) in such a way that ONLY you and your personal cell phone can access your machines? What security policies should you use to stay safe? Who is allowed to talk to your computers, and who isn't?

Here's another setup:

View attachment 185163

Routers determine the security / logic involved in connecting networks together. In particular: connecting the Internet (the biggest network of them all) with your personal network at home.



DHCP automatically assigns IP addresses to any computer that connects to your network (usually within a single layer 2 network).

PXEBoot automatically downloads programs to a computer and executes them when a computer starts up. If you're going the "Cattle" route, something like DHCP + PXEBoot is absolutely essential.

PXEBoot is usually configured to install Linux from scratch (every time the computer starts up). When Linux is done installing, DHCP automatically gives it an IP address. Then your "cattle" computer starts doing whatever computational projects you want (usually via Kubernetes / Docker automatic execution).

View attachment 185167

So something like Kubernetes automatically launches applications inside of containers. If you notice that your cluster is slowing down, you can turn on a new Node. Kubernetes can automatically detect (with proper DHCP / networking / etc. etc. set up) that a new computer turned on: and then it migrates your programs from one computer to another computer automatically. Maybe you only need 1 computer most of the time, but suddenly a billion people start visiting your web pages. So you turn on Computer#2, #3, #4, #5... and now you have 5 different computers trying to serve all your web traffic.

Automatically. If you do it right, you can do this without ever lifting a finger. It's just a ton of configuration / programming / Linux / network understanding to get to that point.
Well, that sounds like fun at least. But a couple quick thoughts.
Who is going to hack me? Why should I care? I am not using this cluster to bank with. I am not setting up a secure hosting site. It would be cool to mine with, but all research shows that this isn't for that. Which leaves counting stars or medical research. Who is going to hack that, and to what purpose?
I suppose the bad guys could take over my botnet and use it for nefarious purposes, until I figure it out and disrupt them.
Then there is the fact that I am trying to use Linux to run this. I don't know how hackers do their thing. Wouldn't their hack have to work on Linux versus Windows?
If I have no security protocols whatsoever, is it going to corrupt data for whoever I am crunching for?
I think I can download my current antivirus in a Linux version; wouldn't that be an appropriate firewall for the rest of the cluster?
Admittedly, I don't know a lot of stuff, which is why I am trying to learn.
The latest graphic sort of makes sense, almost. Take the intermediate steps out with the API server, call that a program, and call the kubelets machines, and it almost, almost makes sense: the server is having the workers do stuff, which I thought was the whole point of the exercise and doesn't need a hell of a lot of explanation, unless you are trying to point out a fine detail that I am just not getting.
My translation is that PC 1 has PCs 2, 3, and 4 doing stuff, which is what I thought a cluster is.
I will re-read all this stuff in the morning when I am not braindead from trying to figure out what is supposed to be some fairly simple shit.
 
Joined
Nov 8, 2020
Messages
48 (0.43/day)
Rendering that I am familiar with has to do with Autodesk and similar programs. Is that what you are referring to? Turning line drawings into still shots or walk through or fly through? Not overly excited about creating a drawing so I can render it with a cluster.
I thought that was more of a function of the GPU than anything else; is that accurate?

Edit: Can you explain about the pfsense router? Are you thinking to reduce bottleneck?
I was just giving some suggestions on things you might wanna try out or tinker with as a project, similar to your idea of clustering them to start with. Once you're done, you might wanna give something else a go as well.

I find 3D modeling quite entertaining, even if I mostly do fluid simulations; it's fun to do and fun to watch. It's also a process that can rather easily be spread across a network of machines (the actual render, that is). Figured it could be fun to try if you run out of ideas in the future.
Whether it's GPU-accelerated or not depends on too many asterisks to fit in a message; some things are, some things aren't. CPU rendering is the most common for home users either way.

pfsense was just another idea of something you can do. It's really just some software you can run on an older machine to turn it into a router for your network. It used to be pretty popular around here a few years back, when we started getting fiber in many places but most commercial routers on the market were still slow as a wet noodle. Using an old computer with pfsense and a few extra network cards was not only cheaper but massively faster. The downside was the power consumption, of course.
 
Joined
Apr 24, 2020
Messages
738 (2.39/day)
Who is going to hack me? Why should I care? I am not using this cluster to bank with. I am not setting up a secure hosting site. It would be cool to mine with, but all research shows that this isn't for that. Which leaves counting stars or medical research. Who is going to hack that, and to what purpose?

It's called a botnet. Hackers are constantly looking for powered-on computers they can take control of.

Ex: if a hacker is into Bitcoin, they'll take over your computer and start mining bitcoins, using your electricity to make THEM money. Or they'll send spam / threatening emails from your computers, so that the FBI comes knocking on your door thinking it's from you. If you're going to turn on a set of computers and connect them to the internet, you must have a basic understanding of computer security.

No smart person does illegal things on their own computers. They hack other computers, and then do illegal stuff on those hacked computers.

I suppose the bad guys could take over my botnet and use it for nefarious purposes, until I figure it out and disrupt them.
Then there is the fact that I am trying to use Linux to run this. I don't know how hackers do their thing. Wouldn't their hack have to work on Linux versus Windows?

I've had my own Linux servers hacked into spam email servers, to the point where other system admins had to email me and let me know my Linux boxes were taken over. Yeah, hackers know how to hack Linux, Windows, Mac... heck, even Android and iPhones these days. Your goal isn't necessarily to make something unbreakable: you just need more defenses than the typical computer. Botnets usually look for the easiest targets, so if you put up a basic VPN or configure some basic port forwarding, you can often stop most attackers.
 
Joined
Oct 17, 2020
Messages
34 (0.26/day)
Location
United States
Guess I got disconnected from this thread. I now have a task that a cluster might be good for: essentially, anything that single-threads due to global locking... anyway, I found this article that looked simple enough to tinker with.

I have enough cores & 128 GB of ECC RAM; getting them to work simultaneously is the current challenge.
32cpu.png
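For the "getting them to work simultaneously" part on a single box, Python's standard `multiprocessing` module is the usual first step before any real cluster software: separate worker processes sidestep the global-lock problem, one per core. The workload below is just a stand-in:

```python
from multiprocessing import Pool

def crunch(n):
    """Stand-in for a CPU-heavy task: sum of squares below n."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with Pool() as pool:  # defaults to one worker process per CPU core
        results = pool.map(crunch, [100_000] * 8)  # 8 independent jobs
    print(len(results))  # 8
```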
 
Last edited:
Joined
Nov 24, 2020
Messages
140 (1.47/day)
Guess I got disconnected from this thread. I now have a task that a cluster might be good for: essentially, anything that single-threads due to global locking... anyway, I found this article that looked simple enough to tinker with.

I have enough cores & 128 GB of ECC RAM; getting them to work simultaneously is the current challenge.
View attachment 187149
Not real sure exactly what that is, but it's kinda cool. So, what is it?

Looking at the article, it's cool that he disclosed the language needed for this. Too bad he didn't disclose how he wired it together.
 