
Server Project

Okay, I've some project news.

I had to disable the vCenter Server for Windows instance to get the Active Directory instance installed. But I didn't think to change any networking settings on the vCenter Server (embedded - 6.7) instance before disabling it. With the help of a friend, I managed to fix my DNS and get the Active Directory instance working - the problem turned out to be some missing NS records. Once I cleaned up the DNS, I was actually able to get a client device joined to the AD. Now I have to figure out how to make Windows clients connect to the VPN before attempting LDAP sign-in, since the AD is VPN-locked. Once I figure that out, I will be able to add any devices I want.

Also have to see if I can bind vCenter Server for Windows to a single IP address while it's disabled. Otherwise, I'll have to resort to using the relatively cruddy VCSA again - and who knows how that will go in the long term. The last time I used it, it started throwing up more warnings and errors than before, which makes me question its overall longevity and performance. Almost tempted to go without vCenter and Virtual Flash because of the trouble. At least I can start focusing on the rest of the project more in the near future...
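On that note, here's a quick way I could sanity-check those records in the future - just a rough sketch using the dnspython library, with a placeholder domain name and resolver IP standing in for my actual setup:

```python
# Rough sketch: verify the NS and AD-related SRV records with dnspython (2.x).
# "ad.example.lan" and the resolver IP below are placeholders, not my real values.
import dns.resolver

DOMAIN = "ad.example.lan"                      # hypothetical AD domain name
resolver = dns.resolver.Resolver()
resolver.nameservers = ["192.168.1.10"]        # hypothetical internal DNS server

def check(name, rtype):
    try:
        for rr in resolver.resolve(name, rtype):
            print(f"{rtype:4} {name} -> {rr}")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer) as err:
        print(f"{rtype:4} {name} -> MISSING ({type(err).__name__})")

check(DOMAIN, "NS")                                  # the records that were missing
check(f"_ldap._tcp.dc._msdcs.{DOMAIN}", "SRV")       # clients use this to find a DC
check(f"_kerberos._tcp.{DOMAIN}", "SRV")
```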

Without vCenter, how will I be able to add the Precision T7500, as an ESXi host, to my datacentre? For vMotion?

Also have to figure out whether (and if so, how) to destroy the Virtual Flash resource pool or not...
 
vCSA 6.5 is practically neutered without the use of Adobe Flash, and the HTML5 UI was almost useless until at least 6.7u3. The settings I do have in the current install are mostly small ones, but they could only be reversed via the FLEX UI (Flash). vCenter also doesn't allow for in-place upgrades. So, it's time to kill the current vCSA and start from scratch. If I had known to look out for the death of Flash, I could have been ahead of this. But I got held up by other responsibilities. So today I'm re-installing vCSA - it's going to be a long day...
 
Well, I have more news. I managed to kill the old vCSA (6.5) instance and replace it with a newer (6.7) version. The newer version has a dark theme - nice. Also is pretty well organised, and connected to my ESXi server with no issues. However, Virtual Flash is pretty much dead. I will have to assign the SSDs to something else now. Perhaps I can start setting up the next VM...

On a side note, the current Reddit project mirror is ded again - because those expire every 6 months, regardless of activity. I think it'll stay ded this time. Not in the mood to make yet another one...
 
[Attached photo: 20210225_230307.jpg]
 
GPU Interest Checks:
Not sure if I'll ever get my hands on the AMD card. That would be an interesting card to try out, once I figure out the K80s. The K80s are due for a VirtualGL experiment soon.

Also, just updated the OP(s) for each mirror. Please let me know if anything seems to be missing from one mirror or the other. I intend to work on the server later today, assuming that nothing interferes...
 
May divert my attention from hMailServer for a bit and skip right to the Linux VM if this doesn't get resolved in the next week or so. This has been dragging on for a while now, and I want to get the rest of the server ready. While hMailServer would be nice, I also have other matters to attend to. It also appears that hMailServer's most recent release is 32-bit, and it may be having issues working with MS SQL Server Express (the free edition of MS SQL Server) - possibly because I installed a 64-bit release of it? If this keeps up, I may take the mail server role and toss it to Linux as well. Can't even begin to think about touching Exchange Server...
 
Had a momentary power outage today, which took most of my equipment offline again (for the umpteenth time). I've finally decided to just tough out the cost and buy a pair of UPS's this weekend. Time to see if I can get things straightened up around here. They'll have to sit on the floor since I haven't purchased proper rack shelves for them yet. They're both going to be Liebert GXT3 1350W units. No more playing with fire...

 
Once the UPS's get here, I'll be able to get both the server and the workstation protected. Also found out that one of the DIMM slots on the T7500's motherboard went out, so I swapped a 4GB stick for a 16GB DIMM I had lying around.


MariaDB works well with hMailServer so far, and now I'm trying to add a CA to the project for future security considerations.

Once the CA is ready, it'll be time to get crackin' on the Artix Linux VM.
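For anyone curious, standing up a small root CA looks roughly like this - sketched with Python's cryptography library, with placeholder names, lifetimes, and file paths, and the project may well end up using different tooling entirely:

```python
# Minimal sketch of a self-signed root CA using the "cryptography" library.
# The common name, key size, validity period, and file names are placeholders.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Homelab Root CA")])
now = datetime.datetime.utcnow()

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                                      # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))   # ~10 years
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(key, hashes.SHA256())
)

# Write the private key and certificate out as PEM files.
with open("homelab-root-ca.key", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))
with open("homelab-root-ca.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
```

The root key would obviously need to live somewhere safe; certificates for individual services (like the DNS server) would then be signed by this CA instead of each being self-signed.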
 
Firstly, I need to re-install my Artix OpenRC VM - got the partitions all wrong. Also need to get the WiFi adapter back in the server, for the Linux VM (router/NAT). I'll do those tasks sometime this week, after work.

Then I need to dust out and service my first UPS this weekend. It arrived this afternoon. When I plugged it in, it showed the following symptoms:

  • beeps every 5-6 seconds
  • Fault and AC Input indicators glow steadily
  • Battery indicator blinks
  • Bypass and Inverter indicators are off
A second, pristine UPS should be arriving in the next week or so. I'll use that one on the server when it arrives, and clean up the current one for the T7500.
 
New partition setup for Artix OpenRC VM:
  • 300GB SAS HDD
    • 8MB, unformatted, [!mnt_point] (bios_grub)
    • 512MB, FAT32, /boot;/boot/efi (esp)
    • 8GB, linuxswap, [!mnt_point] (swap)
    • 256GB, EXT4, / (root;system)
    • 32GB, XFS or ZFS, /home (home)
  • 8TB SAS HDD
    • /srv, still deciding on size and filesystem. Would like to use ZFS possibly
    • /var, still deciding on size and filesystem. Would like to use ZFS possibly

On a side note, seriously considering Docker, podman, or similar for containerisation, to keep things a bit cleaner and more isolated.
 
Okay, finally got around to updating Technitium DNS. The newest installer, for v6, doesn't appear to allow selection of a different install location in the GUI. So I grabbed the portable installer and a copy of .NET v5 instead. Installed .NET v5 first. Then, made a .zip backup of the previous install (because reasons). Nuked everything in the DNS server folder but /config and the backup.zip. Finally, copied the new DNS server files over to the DNS server folder. Also had to register a new Windows service, since the old one does not work with the newer version. Not too difficult if I do say so myself - just tedious. And I'll have to do the process by hand from here on out, which is a bit of a chore as well. May have to look into a way of automating this myself. May need to see if the DNS server can have a self-signed (CA) certificate as well.
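Since I'll be doing the update by hand every release now, the backup/wipe/copy steps could be scripted along these lines - a rough sketch where the folder paths, zip names, and service handling are guesses at my own layout rather than anything official from Technitium:

```python
# Sketch of the manual Technitium DNS update routine: back up the install,
# wipe everything except /config, then unpack the new portable build.
# All paths and file names below are placeholders for my own layout.
import shutil
import zipfile
from pathlib import Path

DNS_DIR = Path(r"C:\TechnitiumDNS")                       # hypothetical install folder
NEW_BUILD = Path(r"C:\Downloads\DnsServerPortable.zip")   # new portable release
KEEP = {"config"}                                         # folder(s) to preserve

# 0. Stop the Windows service first (done separately, e.g. via services.msc).

# 1. Make a .zip backup of the previous install (because reasons).
shutil.make_archive(str(DNS_DIR.parent / "technitium-backup"), "zip", DNS_DIR)

# 2. Nuke everything in the DNS server folder except /config.
for item in DNS_DIR.iterdir():
    if item.name.lower() in KEEP:
        continue
    if item.is_dir():
        shutil.rmtree(item)
    else:
        item.unlink()

# 3. Copy the new DNS server files over to the DNS server folder.
with zipfile.ZipFile(NEW_BUILD) as z:
    z.extractall(DNS_DIR)

print("Files updated - start the Windows service again and verify.")
```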

Also waiting on a second drive cage and mini-SAS SFF-8088 to SATA forward breakout cable to arrive, so I can put the 4TB SAS HDDs to use with the Linux VM. If the DL580 G7 can handle powering 2 drive cages at once, I'll give the 16TB drive cage to the Linux VM. Use that in either a RAID10 or RAID0 (OpenZFS pool) and let nextcloud have free rein over that.

New partition setup for Artix OpenRC VM (GPT, BIOS), as of a few nights ago:
  • 300GB SAS HDD
    • 8MB, unformatted, [!mnt_point] (bios_grub)
    • 512MB, FAT32, /boot (esp)
    • 256GB, EXT4, / (root, system)
    • 32GB, EXT4, /home (home - would like to convert to ZFS in the future)
    • 8GB, linuxswap, [!mnt_point] (swap)
  • 8TB SAS HDD
    • 5TB, EXT4, /srv, (Would like to convert to ZFS in the future)
    • 2TB, EXT4, /var, (Would like to convert to ZFS in the future)
Coming soon - either:
  • 16TB (4x4TB), ZFS RAID0, /nextcloud
    • or...
  • 8TB (4x4TB), ZFS RAID10, /nextcloud
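For reference, the two candidate layouts map to zpool commands roughly like this - sketched as a small Python wrapper, with a placeholder pool name and /dev/sdX paths standing in for whatever the drives actually enumerate as:

```python
# Sketch of the two OpenZFS layouts being considered for the 4x4TB drives.
# Pool name and device paths are placeholders.
import subprocess

POOL = "nextcloud"                                        # hypothetical pool name
DISKS = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # hypothetical devices

def create_stripe():
    # "RAID0": one big stripe, ~16TB usable, no redundancy.
    subprocess.run(["zpool", "create", POOL, *DISKS], check=True)

def create_raid10():
    # "RAID10": two mirrored pairs striped together, ~8TB usable.
    subprocess.run(
        ["zpool", "create", POOL,
         "mirror", DISKS[0], DISKS[1],
         "mirror", DISKS[2], DISKS[3]],
        check=True,
    )

if __name__ == "__main__":
    create_raid10()   # or create_stripe(), depending on which way I go
    subprocess.run(["zfs", "set", f"mountpoint=/{POOL}", POOL], check=True)
```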

Then, this:

It's all coming together now...
 
The test with the 2nd drive cage installed didn't go too well two nights ago. When connected, the drives in the 2nd cage did not appear in ESXi. In addition to this, only 3 of the HDDs from the original/first cage showed up, and one of those 3 only showed up intermittently. I think I may have encountered a power issue. While the activity indicators on both cages did light up, they weren't indicative of the true status of the drives. I also checked the ESXi kernel logs (Alt+F12) during runtime, and saw some interesting errors. I tried rebooting the server, to see if it needed some time to get acquainted with the new hardware. But two reboots did nothing. Everything appears to be working as expected after removing the 2nd drive cage. If it had been a bad data cable on the 2nd drive cage, I would expect the issue not to affect the drives from the 1st cage. But perhaps I've overlooked something. Now I'm stuck at trying to figure out how to power the second drive cage, since internal power appears to be off the table for this. Perhaps an external SATA-only PSU or DC power supply?

On a different note, I also can't seem to get the WiFi NIC to show up in ESXi - which leaves me with 3 possible explanations:
  • the card needs drivers
  • the card needs to be re-seated (for the 7th time)
  • the card is DOA and needs to be replaced
The first one seems most likely, seeing that ESXi may need drivers for anything that wouldn't be in a normal enterprise environment. The second one seems unlikely because of how many times I've already attempted that solution. The third one is the worst-case scenario, and the one that would incur the most up-front monetary cost. If I do have to install drivers for the wireless NIC in ESXi, a backup of the host config needs to be made first. Otherwise, I'll be in hot water if the installation fails. I'll be attempting to use a 3rd party Linux driver in ESXi, with no way to know in advance whether it'll work.
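The config backup itself should only be a couple of commands over SSH. Something like this sketch is what I have in mind - paramiko with placeholder host and credentials, and assuming vim-cmd's firmware backup behaves the way it's documented:

```python
# Sketch: pull an ESXi host config backup over SSH before touching drivers.
# Host, username, and password are placeholders.
import paramiko

HOST = "192.168.1.20"        # hypothetical ESXi management IP
USER = "root"
PASSWORD = "********"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, password=PASSWORD)

# Flush the running config to disk, then generate the backup bundle.
client.exec_command("vim-cmd hostsvc/firmware/sync_config")
_, stdout, _ = client.exec_command("vim-cmd hostsvc/firmware/backup_config")

# The command prints a URL to a configBundle .tgz that can then be
# downloaded from the host while the backup is still valid.
print(stdout.read().decode().strip())

client.close()
```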

On a side note, the second drive cage was the only real way I was ever going to get to play with ZFS in Linux. That would have been a pool of four SAS HDDs that I could have experimented with, using ZFS's RAIDz options. Since that's not in the cards at the moment, the extra 16TB of SAS storage is back to sitting without a use.
 
Just purchased a GXT3-2000RT120 without the batteries. Waiting for it to arrive in the mail. Then need to see if I can get it working in the next few weeks with some fresh batteries...
 
The GXT3-2000RT120 arrived in the mail this afternoon, in what appeared to be pristine condition. I went ahead and purchased 4 batteries for it, and am now waiting for those to arrive next. This Friday, I will need to purchase 2 rack shelves. One will be for a new printer that was gifted to me recently (Lexmark Prevail Pro 705), in addition to the drive cage that sits on the back of the DL580 G7. The other will be for the T7500 to sit on. The UPS will end up sitting on the floor for a while, until I can get the rack mount kit for it in a few months. From what I can tell, the new UPS will be more than capable of having all devices on the rack connected to it, which will save me space on the rack. Nice not needing to consider buying a second UPS. Current rack setup plan thus far:
  • Top sliding shelf (S1):
    • 1-2x Kingwin MKS-435TL, 1x Lexmark Prevail Pro 705, router/AP (if applicable)
  • 1x HPE ProLiant DL580 G7 (S2)
  • Mid sliding shelf (S3):
    • 1x Kenwood 104AR
  • Lower sliding shelf (S4):
    • 1x Dell Precision T7500
  • Bottom drawer(s - S5)
    • Spare parts, tools, etc.
  • Bottom sliding shelf (S6):
    • 1x Liebert GXT3-2000RT120
1-2 PDUs are planned for this setup as well. Just a matter of time. The Kenwood and Liebert will not have shelves until at least later this year, due to budget constraints. The rack drawers are in the same category as of now.

On a different note, getting open-vm-tools installed onto Artix+OpenRC is proving to be a fun little challenge. Almost tempted to write my own init script for it...
 
I bought 2 of the 4 planned shelves yesterday, from here:
Now I'm waiting for them to arrive in the mail. Also have UPS batteries to install today.
 
The one thing that always causes trouble is when I have to fiddle with that Mini-SAS SFF-8088 to SATA breakout cable. If I have to mess with it too much and accidentally damage it, that's another 15-20 USD down the drain. Not saying that it's inevitable, though. I simply treat the cable pretty badly at times. The last rack shelf installation may have damaged the previous cable a bit. And I do have a spare cable this time, since I still can't connect the other drive cage anyway. Just means that I'm now out of spare breakout cables to trash - the next one has to come out of my paycheck. Happens about every 60 days with my luck XD Really have to look out for that...
 
Tasks that I want to get done tonight, assuming nothing goes wrong:
  • Installing the nVIDIA drivers for a GRID K520
  • Installing a new terminal emulator (terminology)
  • Installing a new file manager (nemo)
  • Installing a browser (unGoogled Chromium)
  • May add it to the AD domain I have running as well
  • Installing docker for container management
  • Adding nextcloud via docker
Time to see how helpless I really am XD
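For the Docker/nextcloud items in the list above, the rough plan looks like this - sketched with the Docker SDK for Python, where the port mapping, host path, and container name are placeholders, and in practice it may just end up being a plain docker run or compose file:

```python
# Sketch: run the official nextcloud image via the Docker SDK for Python.
# Port mapping, host path, and container name are placeholders.
import docker

client = docker.from_env()

container = client.containers.run(
    "nextcloud:latest",
    name="nextcloud",
    detach=True,
    ports={"80/tcp": 8080},                                              # web UI on :8080
    volumes={"/srv/nextcloud": {"bind": "/var/www/html", "mode": "rw"}}, # assumed data path
    restart_policy={"Name": "unless-stopped"},
)

print(container.name, container.status)
```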
 
Okay, I blew through most of the tasks set out for today. But a few major ones still remain:
Those can all take hours each on their own. Glad to get the other tasks out of the way first, so I can have an easier time with those in a bit.
 
Looks like you're making a load of progress here with the build, can't wait to hear it's finally up and running :D

Amazing detail as well, thank you for sharing :D
 