
Server Project

The Radeon Pro Duo arrived in the mail today. Still have to install and test it. That will happen either tonight or tomorrow...
 
I've been on a mission today, ever since the flashed FirePro S9300 x2 arrived in the mail:
The Radeon Pro Duo is ded. Long live the FirePro S9300 x2!

On a side note, I've preemptively removed the HPE PCIe ioDuo MLC 1.28TB I/O Accelerator (641255-001) and the SanDisk Fusion ioScale MLC 3.2TB Accelerator (F11-002-3T20-CS-0001). I may bring them back if the Gen9 has room for them...
 
BlissOS didn't go over too well last night. Time for some troubleshooting...
 
From what I've done on my end, the FirePro S9300 x2 behaves well in a macOS guest (at least Mojave) on vSphere. From what I've watched online, it should also behave when split between multiple Windows 11 guests on Linux KVM (which may also apply to Windows 10). Pretty sure this card runs just fine in a Linux guest as well. In all of the tests/scenarios I've mentioned, the FirePro was flashed to present itself as either a Radeon R9 Fury or Nano (the consumer variants), though a Radeon Pro Duo flash also exists. I'm thinking BlissOS could just be an outlier in this case, and a rabbit hole too deep for me to go down for this project.
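For anyone curious how the "split between multiple KVM guests" part usually works: each of the card's two GPUs gets bound to vfio-pci and handed to a guest as its own PCI hostdev. A minimal libvirt snippet might look like the sketch below - the PCI address is a placeholder, not my actual topology, so check lspci and your IOMMU groups first:

Code:
<!-- Hedged sketch: pass one of the card's two GPUs to a single KVM guest.
     Host bus/slot values are placeholders - find yours with `lspci -nn`
     and confirm the GPU sits in its own IOMMU group. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
  </source>
</hostdev>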

As such, unless a software update for BlissOS fixes this oddity before 2023, I'm kicking it from the project for the next year or two. I'll be focusing on LibreNMS as the last major task for this phase of the server project, until I move to the Gen9 server. When I do move to the Gen9, I may oddly enough want even more FirePro S9300 x2s. While it's an old card, it fills a gap: multiple GPUs in a single PCIe slot at a (relatively) affordable price. Its space efficiency and cost are tough to ignore when SR-IOV and GRID are currently either too expensive for me to implement or locked behind secret handshakes and the need to be a cloud provider.
 
I've installed LibreNMS, haven't learned how to get device auto-detection working yet. Installed Cronicle and used it to resolve a scheduled task(s) issue with Nextcloud. Now working on enabling Nextcloud push_notify and learning more about LibreNMS. BlissOS is gone from the project, and I'm closing in on the last major tasks of this phase of the server project. The next phase requires the Gen9, and I can't hop onto that just yet. Also wanting to get a 2nd FirePro S9300 X2 and a Titan RTX...
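For future reference (since I haven't cracked it yet), LibreNMS auto-discovery is normally driven by a few config.php entries plus the bundled SNMP scan cron job; a hedged sketch, with a placeholder subnet and community string:

Code:
// config.php - subnets LibreNMS is allowed to scan (placeholder range)
$config['nets'][] = '10.0.0.0/24';
// discover neighbours via LLDP/CDP from devices already being polled
$config['autodiscovery']['xdp'] = true;
// SNMP credentials tried during discovery (placeholder community)
$config['snmp']['community'][] = 'public';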
 
Since I still haven't figured out LibreNMS's device/host auto-detection, I've gone ahead and added many of my commonly accessed app/service IPv4 addresses by hand. Those include:
  • OOB management appliances
  • multi-node/cluster management instances
  • individual virtual machines
  • hypervisor hosts
  • default gateway for network bridge
I'm also running a simple/quick nmap scan to look for any obvious hosts I missed (a rough sketch follows below). I've avoided adding:
  • Docker containers
  • switches that comprise the network bridge
for the time being. All of my Docker containers are on one VM. If I ever want to analyse traffic for an individual container, I can still add their individual hostnames later. As for the network switches, all traffic going through them either originates from the default gateway or the DL580 itself (either hypervisor host or one of the individual VMs). If the time ever comes, I can add the switches later as well.
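For the curious, the by-hand additions and the quick sweep amount to something like the following (subnet, hostname, and community string are placeholders, and if memory serves the lnms CLI handles the manual add):

Code:
# ping sweep only (no port scan) to spot hosts I forgot about
nmap -sn 10.0.0.0/24
# add a host to LibreNMS by hand with SNMP v2c (placeholder hostname/community)
lnms device:add --v2c -c public ilo-dl580.lan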

I also took some time to review the DNS records on Cloudflare, and should be a little closer to having proper DMARC/DKIM/SPF. Not perfect by any means, before anyone gets ideas. It's tough to get this crap done right.
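For anyone else fighting the same battle, the three records boil down to TXT entries along these lines - domain, selector, key, and policy below are placeholders, not my actual Cloudflare records:

Code:
; SPF - which hosts may send mail for the domain
example.com.                      TXT  "v=spf1 mx include:_spf.mailprovider.example -all"
; DKIM - public key for the selector used by the signing server
mail._domainkey.example.com.      TXT  "v=DKIM1; k=rsa; p=<public key from the mail host>"
; DMARC - what receivers should do when SPF/DKIM fail, and where to send reports
_dmarc.example.com.               TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"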

Getting Nextcloud's notify_push to work is proving to be very tough. I was hoping to have that and Spreed/Talk HPB running by the end of the year, but I've come to the conclusion that it probably won't happen.
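For context, the rough sequence I'm attempting is below (paths and the public URL are examples; the binary listens on port 7867 by default and needs a reverse proxy in front of it):

Code:
# install the app, then run the bundled push binary against Nextcloud's config
occ app:install notify_push
/var/www/nextcloud/apps/notify_push/bin/x86_64/notify_push /var/www/nextcloud/config/config.php
# after reverse-proxying /push/ to the binary, register the endpoint
occ notify_push:setup https://cloud.example.com/push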

Started looking into ARM servers, just to see what's available on the used market. The answer is, nothing affordable - at least in my area. Was wondering if I could maybe play around with ESXi on ARM64, maybe have an AOSP VM or four? Yeah, that's out the window.

Still waiting to move to the Gen9 in the future...
 
I purchased a 2nd FirePro S9300 X2. Can't wait to see if I can fit it in the DL580 Gen9...
 
I've come to the conclusion that VDI (for other users) will have to be reserved for if I can get a second DL580 Gen9 after moving out. I'd end up replacing the current FirePro S9300 x2 with a Radeon Pro W6800 and moving all of the FirePro S9300 x2s to the dedicated VDI host. A single DL580 Gen9 can power up to three of the FirePros, so six available GPUs total for the VDI host if I ever go for it. Assuming I threw the same 10x HGST HUSMM8040ASS200/HUSMM8040ASS201s at this host, there'd be a little under 4TB of SAS storage available as well. A mix of GPU-equipped and CPU-only VDI instances would be possible. Windows and Linux only, though - I don't think macOS VDI exists as a supported solution at this time...
 
After fixing a pesky issue with static routes on the Rocky Linux VM, I've managed to get Wazuh working - for the most part. I can't seem to get a successful makepkg run on Artix OpenRC today, which is a major impediment; that VM hosts all of my Docker containers. Even if the build had succeeded, I'd then have to figure out the init script situation. Someone on Discord suggested pulling the Gentoo script. While I wouldn't usually try it, I don't think I have many other options - aside from writing one by hand. That's always fun o_O
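If it does come down to writing one by hand, I'd expect it to be a short OpenRC wrapper around wazuh-control, something like the untested sketch below (paths assume the default /var/ossec install):

Code:
#!/sbin/openrc-run
# Untested, hand-rolled sketch of a Wazuh agent service for OpenRC.
description="Wazuh agent"

depend() {
    need net
    use dns logger
}

start() {
    ebegin "Starting Wazuh agent"
    /var/ossec/bin/wazuh-control start
    eend $?
}

stop() {
    ebegin "Stopping Wazuh agent"
    /var/ossec/bin/wazuh-control stop
    eend $?
}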

Once everything with Wazuh has been resolved, I may kick out Malwarebytes in its entirety...
 
I've got 2x Silicom PE310G4SPI9L-XR-CX3's on the way to my house today. May install them in March if I have a chance (waiting for full-height brackets to arrive)...
 
This seems like a never ending project man!! I hope we can see the end results soon enough!! :D
 

Indeed it is! But once VLANs have been implemented (Project vNet), the only other major task will be Project ArcZ (Artix OpenRC on ZFS Root). Then, I have to move out before I can attempt any other pending tasks XD
 
I installed the 2x Silicom PE310G4SPI9L-XR-CX3's yesterday, and they appear to be working. The next task will be implementing VLANs on the new MikroTik RB4011iGS+5HacQ2HnD-IN-US, which I purchased recently to replace the RBD25G-5HPacQD2HPnD and RB4011iGS+. Having the router and WAP in a single, rackmountable appliance should make cleaning and managing the server rack much easier in the long term. After this, I will be focusing all of my attention on Project ArcZ. Once ArcZ is complete (and has replaced Rocky Linux), I will be working on LXC/LXD containers (as described above). No new containers will be added until ArcZ is ready for daily use. This is now the primary focus of 2023. Some of this won't be possible until I move out (anything requiring resources only present on the Gen9). I will make ArcZ's setup script(s) publicly available in a few months. Do keep in mind, it's built for use in an ESXi VM. This is a massive change in direction for the project, and will push things back quite a bit...
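The VLAN work on the RB4011 will roughly follow RouterOS's bridge VLAN filtering model; a hedged sketch of what a single management VLAN might look like (IDs, ports, and addresses are examples, not the final Project vNet plan):

Code:
# one bridge, ether1 tagged for the management VLAN, ether5 as an access port
/interface bridge add name=bridge1 vlan-filtering=no
/interface bridge port add bridge=bridge1 interface=ether1
/interface bridge port add bridge=bridge1 interface=ether5 pvid=51
/interface bridge vlan add bridge=bridge1 tagged=bridge1,ether1 untagged=ether5 vlan-ids=51
/interface vlan add name=vlan51-mgmt interface=bridge1 vlan-id=51
/ip address add address=10.51.0.1/24 interface=vlan51-mgmt
# only enable filtering once the tagged/untagged membership is correct
/interface bridge set bridge1 vlan-filtering=yes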
 
Updated Task List (03/26/2023):

Code:
Current ToDo's:
 - Server/Networking:
    - Set RFC2307 attributes for AD Users
        - https://wiki.samba.org/index.php/Setting_up_Samba_as_a_Domain_Member#Configuring_Samba
        - https://www.server-world.info/en/note?os=Windows_Server_2019&p=active_directory&f=12
        - https://github.com/assen-totin/powershell-unixattributes
        - https://github.com/hkbakke/ad-posix-attrs
    - Project vNet (network isolation via vLANs)
        - https://forum.mikrotik.com/viewtopic.php?p=988254
 - Artix OpenRC (Xfce):
    - Docker Stack: Portainer EE
        - MFA for AD users, via Azure AD OAuth
        - https://docs.portainer.io/admin/settings/authentication/oauth
    ** Replace with Project ArcZ (Artix OpenRC on ZFS root) **

Upcoming ToDo's:
 - Rocky Linux (Wayland/XFS):
    ** Replace with Project ArcZ-EE **
 - Server/Networking:
    - Project New Client
        - Clean install Windows 10 Enterprise LTSC
        - Install Samsung 850 Pro 2TB SSD

Long-term ToDo's:
 - DL580 Gen9 transition (24/7 instances)
 - Artix OpenRC (ArcZ):
    - Docker Stack: Pleroma (federated)
        - https://github.com/explodingcamera/docker-pleroma
    - Docker Stack: YaCy Grid
        - https://github.com/yacy/yacy_grid_mcp/blob/master/docker-compose.yml
        - https://community.searchlab.eu/t/pertaining-to-how-yacy-crawls-websites/1090
 - FreePBX Distro:
    - Port out Google Voice phone# to VoIP.ms (DID)
        - https://support.google.com/voice/answer/10130510?hl=en
        - https://support.google.com/voice/answer/1065667?hl=en
        - https://wiki.voip.ms/article/Porting_FAQ#How_to_port_my_Google_Voice_Number_to_VoIP.ms

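On the RFC2307 item above: the linked repos boil down to stamping posix attributes onto the AD objects; a hedged PowerShell equivalent for a single user, with placeholder names and ID numbers:

Code:
# Placeholder user and IDs - the linked repos do this in bulk and track ID ranges.
Import-Module ActiveDirectory
Set-ADUser -Identity "jdoe" -Replace @{
    uidNumber         = 10001
    gidNumber         = 10000
    unixHomeDirectory = "/home/jdoe"
    loginShell        = "/bin/bash"
}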
I’ll be studying for VMware VCP-DCV and CompTIA Server+ when I’m not working on Projects vNet and ArcZ. Cisco CCNA will have to wait, though. While I’m constantly picking up knowledge in that arena (esp. IPv6 and VLANs), that thing will take a while to prep for. Certification maintenance/renewal costs also exist, and CompTIA’s fees can go up as I get higher-level certs.

All of the major Wazuh XDR and Nextcloud changes are being put on hold for now, due to time limitations (there’s currently only one admin). Blackbird deployment and Malwarebytes removal (Windows 10 Enterprise VM and Project New Client) are delayed as well (dependent on progress with Wazuh).

This probably will end up becoming a major project overhaul…
 
Updated Task List (04/04/2023):

Code:
Current ToDo's:
    - Project vNet 1.0
        - Add 10.0.0.0/24 interface to Windows Server VM
        - Move federated containers to 10.12.6.0/24 (IPVLAN, ID 8)
        - Re-generate all certificates (XCA, AD CA, NGINX Proxy Manager)
        - Tag interface ether1 for VLAN ID 51 (Management)
        - Connect dedicated iLO port Ethernet cable to ether1
        - Configure static IPv4 address in iLO3 settings
        - Test ESXi DCUI for emergency troubleshooting/access
        - Create aux. vmkernel adapter and portgroup (VLAN ID 51)
        - Configure static IPv4 address for aux. vmkernel adapter
        - Configure vCSA to use DHCP and migrate to VLAN 51
        - Update Active Directory DNS with new IPv4 addresses
 - Server/Networking:
    - Remove LibreNMS, in favour of MikroTik's "The Dude"
    - Set RFC2307 attributes for AD Users
        - https://wiki.samba.org/index.php/Setting_up_Samba_as_a_Domain_Member#Configuring_Samba
        - https://www.server-world.info/en/note?os=Windows_Server_2019&p=active_directory&f=12
        - https://github.com/assen-totin/powershell-unixattributes
        - https://github.com/hkbakke/ad-posix-attrs
 - Artix OpenRC (Xfce):

Upcoming ToDo's:
 - Artix OpenRC (Xfce):
    ** Replace with Project ArcZ (Artix OpenRC on ZFS root) **
 - Rocky Linux (Wayland/XFS):
    ** Replace with Project ArcZ-EE **
    - Setup UrBackup Server

Long-term ToDo's:
 - DL580 Gen9 transition (24/7 instances)
 - Artix OpenRC (ArcZ):
    - Docker Stack: Portainer EE
        - https://docs.portainer.io/admin/settings/authentication/oauth#microsoft
    - Docker Stack: Pleroma (federated)
        - https://github.com/explodingcamera/docker-pleroma
    - Docker Stack: YaCy Grid
        - https://github.com/yacy/yacy_grid_mcp/blob/master/docker-compose.yml
        - https://community.searchlab.eu/t/pertaining-to-how-yacy-crawls-websites/1090
 - FreePBX Distro:
    - Port out Google Voice phone# to VoIP.ms (DID)
        - https://support.google.com/voice/answer/10130510?hl=en
        - https://support.google.com/voice/answer/1065667?hl=en
        - https://wiki.voip.ms/article/Porting_FAQ#How_to_port_my_Google_Voice_Number_to_VoIP.ms

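For the "aux. vmkernel adapter and portgroup (VLAN ID 51)" steps in the list above, the esxcli side should look something like this (portgroup name, vSwitch, and address are placeholders):

Code:
esxcli network vswitch standard portgroup add --portgroup-name=mgmt-aux --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name=mgmt-aux --vlan-id=51
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=mgmt-aux
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.51.0.20 --netmask=255.255.255.0 --type=static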

Replacing ConnMan with NetworkManager was a fun little side task this morning.
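For anyone on Artix wanting to do the same, the swap was roughly the following (package and service names per the Artix repos; double-check before copying):

Code:
pacman -S networkmanager networkmanager-openrc
rc-service connmand stop
rc-update del connmand default
rc-update add NetworkManager default
rc-service NetworkManager start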

While most of the VMs have been successfully migrated, the backend infrastructure/services have not yet been migrated. That will happen this coming weekend. I've also noticed some concerning performance degradation when attempting to access services like Nextcloud.

Today's time with LibreNMS has shown me that I need to slim things down. I have no more time to mess with it, and am replacing its VM. That time needs to go into Project ArcZ. Cronicle and Wazuh XDR are here to stay. Blackbird deployment and Malwarebytes removal (Windows 10 Enterprise VM and Project New Client) are scheduled for the 2023/2024 transition.

After seeing what Framework has been up to recently, I don't think I'll be continuing Project New Client with the EliteBook 8770w. Their upcoming 16-inch Ryzen mobile workstation (w/ dGPU option) would be more than a worthy replacement.

I'll be studying for VMware VCP-DCV and CompTIA Server+ when I'm not working on Project vNet or ArcZ. Once I've finished those, I may move on to Cisco CCNA. That one will take a while to prep for, and I'm busy learning VLANs. Certification maintenance/renewal costs exist, and CompTIA's fees can go up as I get higher-level certs iirc.

Cloudflared, Pleroma, and PeerTube are back on the unconfirmed list. Gotta get more prior tasks done first.

I still have a Chateau 5G ax to unbox...
 
The following services have been moved into the Unconfirmed list:
  • Pleroma
  • PeerTube
  • YaCy Grid
Pleroma and PeerTube are federated services, and would entail (potentially) increased exposure for the server. That's a security concern, especially for services that aren't guaranteed to see consistent use from the current user base. PeerTube specifically would also likely need dedicated hardware for decent media encode/transcode performance (either many CPU cores with AVX support, GPUs with encode ASICs, or dedicated PCIe media cards).

YaCy Grid, as a peer-driven service, could run behind the VPN without issue, but the quality of its results would depend at least somewhat on the work/data of other peers on the YaCy network. While I was willing to play around with this years ago, I now find myself having to decide which apps and services can have dedicated RAM allocated to them. Given the potentially large amount of resources YaCy may consume, I'd need to know there are enough peers in my area to make it usable compared to the alternatives. Not to mention the amount of persistent data these services can generate if people actually start using them.

As such, these services will most likely be delayed to a later phase of the project, when there are more servers running in-house.
 