
Server Project

Removed the HP 491838-001 (NC375i) due to space constraints, increased RAM to 128GB, purchased 4TB HDDs (IBM Storwize V7000 98Y3241s) to replace the 2TB HUA722020ALA330s, and delayed adding the SolarFlare NIC.
 
Currently working on DNS, after which I'll focus on setting up the first VPN solution - SoftEther.
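
For the SoftEther piece, the plan is to script the initial hub/user setup with vpncmd once DNS is squared away. Something along these lines is what I have in mind - the install path, server address, hub name, and passwords below are placeholders, not my actual config:

# Rough sketch of driving SoftEther's vpncmd for the initial hub/user setup.
# Everything here (path, address, hub name, passwords) is a placeholder.
import subprocess

VPNCMD = "/usr/local/vpnserver/vpncmd"   # assumed install location
SERVER = "localhost:5555"
ADMIN_PW = "changeme"

def vpncmd(*cmd, hub=None):
    """Run one vpncmd admin command against the local SoftEther server."""
    args = [VPNCMD, SERVER, "/SERVER", f"/PASSWORD:{ADMIN_PW}"]
    if hub:
        args.append(f"/ADMINHUB:{hub}")
    args += ["/CMD", *cmd]
    return subprocess.run(args, check=True, capture_output=True, text=True).stdout

print(vpncmd("HubCreate", "LAB", "/PASSWORD:hubpw"))                       # create a virtual hub
print(vpncmd("UserCreate", "tophat", "/GROUP:none", "/REALNAME:none", "/NOTE:none", hub="LAB"))
print(vpncmd("UserPasswordSet", "tophat", "/PASSWORD:userpw", hub="LAB"))  # password auth for now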
 
The VMware mirror for this project has been reopened. If you have any questions, feel free to DM me. Just got a new GPU in the mail, which may end up replacing the GTX 1060 6GB. Still troubleshooting this issue...
 
I'm glad to see some progress here @TopHatProductions115 !! :) Is it still being a pain to get up and running how you want, or just a few teething issues??
 

For the most part, it's been pretty easy. Just working on DNS (for AD and VPN) and HBA (mass storage), so that the other VMs will have resource access from the get-go.
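
For the curious, this is roughly how I'm sanity-checking the AD-facing records as I go. The domain name and resolver IP are placeholders, and it needs the dnspython package:

# Quick check that the SRV records AD (and later the VPN clients) depend on actually resolve.
# "lab.example.local" and the resolver IP are placeholders for my internal setup.
import dns.resolver

AD_DOMAIN = "lab.example.local"
DNS_SERVER = "192.168.1.10"   # the Technitium VM

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = [DNS_SERVER]

for rec in (f"_ldap._tcp.{AD_DOMAIN}", f"_kerberos._tcp.{AD_DOMAIN}"):
    try:
        for answer in resolver.resolve(rec, "SRV"):
            print(f"{rec} -> {answer.target}:{answer.port}")
    except dns.resolver.NXDOMAIN:
        print(f"{rec} is missing - clients won't be able to locate the DC")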
 
The most important question I have is, does it work how you want it to yet?? :)
 

Not yet :D But that's the journey - getting it there, and then keeping it there...
 
Please keep up with the updates :D :D I can't wait to hear it's all working!! :D
 
The K80s are coming...
 
I've got a GRID K520 coming in the mail in about 2 weeks, to replace the Tesla K10. Perhaps I can have one of my Tesla K10s modded into a GRID K2 (or even just buy one) in the near future, so that I can have all three of the major variants of this card. GRID K520 looks like a GeForce card from inside a VM, if my memory isn't failing me. GRID K2 would be the Quadro variant. Tesla K10 is a pure compute version. I wonder if anything like that exists for the Tesla K80...
 
Just solved another looming issue for the server project. Now to get that SSD working and added to the Virtual Flash resource pool...

 
P.S. If I could, I'd totally update the OP to reflect the current version of the project XD


Time for a long-overdue project update. I'm omitting a lot of steps/details here, for relative brevity.

A friend of mine from Discord (the same one who was kind enough to help me troubleshoot many of the issues I encountered) had me run a Linux LiveCD on the server to troubleshoot the LSI HBA. For those of you who did not know, the LSI HBA wasn't working as expected until a few hours ago (late last night). I tested it in my current workstation (Precision T7500 - Windows 10), the server (DL580 G7 - ESXi 6.5u3), and even on my laptop (EliteBook 8770w - Windows 10). When tested on the T7500, the HBA showed up - but none of the 4TB hard drives did. Same story on the laptop and the server.

After a bit of Googling (as the cool kids say), I decided it might behoove me to flash the HBA with the IT firmware, to see if that would fix it. I did so from my laptop, using a powered PCIe dock (to avoid further downtime on the T7500, which runs a Minecraft server) and a GUI application called MegaRAID Storage Manager. The HBA was on v17.X, and now it's on v20.X. The drives also appeared in Windows Device Manager for once - but they didn't stay there for long, popping in and out sporadically. I was instructed to reboot for the firmware update to take hold, and after that reboot MegaRAID Storage Manager could no longer connect to the local server. That meant that, if the firmware I flashed was the wrong one, I'd have to resort to using sas2flash.

After no luck checking on the HBA from my laptop, I decided to put it in the server, with the Linux LiveCD (as mentioned earlier). The LiveCD was running an older build of Manjaro, and managed to see all of the drives in GParted. However, we were unable to get SMART data for most of the HDDs - if you look closely at the HDD models, you may or may not be able to tell why. While I was in the LiveCD, I also tried putting a GPT partition scheme on the Intel SSD, since messing with it in Windows simply did not work for some reason. A short while later, we tried the latest Manjaro LiveCD available (because Manjaro is my preferred distro with systemd). That one didn't see the drives at all, but did still see the HBA.

At this point, I saw no other way to validate the HDDs further, so I decided to test them in ESXi and try to pull SMART data from esxcli. The drives showed up in ESXi, and even let us pull SMART data - but it was limited, and in a different format than most common drives on the market. I was able to add the Intel SSD to the Virtual Flash pool for once, though. As such, this is strictly a partial victory. We have the drives ready for use, presumably. But we don't know how the drives are doing - which is very different from all of my previous experiences, where I could pull up SMART data immediately after installing the drives. The game is afoot.
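
For reference, this is roughly the kind of throwaway helper I ran from the ESXi shell (which ships with a Python interpreter) to dump what little SMART data esxcli exposes for these drives. Treat it as a sketch - device naming and the output format can vary by host:

# Walk every storage device ESXi knows about and dump its SMART data via esxcli.
import subprocess

def esxcli(*args):
    return subprocess.run(["esxcli", *args], capture_output=True, text=True).stdout

# Device identifiers sit at column 0 in "device list" output; their properties are indented.
for line in esxcli("storage", "core", "device", "list").splitlines():
    if line and not line.startswith(" "):
        device = line.strip()
        print(f"=== {device} ===")
        print(esxcli("storage", "core", "device", "smart", "get", "-d", device))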
 
I've added a note for the mods to help with the request as I'm unsure how to let you edit that first post :) So when I hear back my good man, I'll let you know! :)

Nothing is ever simple is it??! :D
 
On a side note, the results of last night's livestreaming attempt are tempting me to make YouPHPTube part of the project again. If this keeps up, I might actually go for it...
 
I've added a note for the mods to help with the request as I'm unsure how to let you edit that first post :) So when I hear back my good man, I'll let you know! :)

Nothing is ever simple is it??! :D

Thank you! I went on and edited the OP, so it is now up-to-date :D



As detailed here, I'm looking into getting some equipment for the server again. Maybe I'll have somewhere to put a UPS this time. But only if the requirements are met. Otherwise, the funds will go elsewhere. Pricing doesn't stay this good for long. One month tops...
 
ToDo List for the next few days:
  • Figure out Split Horizon DNS records (Technitium) - rough check script below this list
  • Set up ejabberd and hMailServer
    • FQDNs and subdomains
    • AD/LDAP integrations
  • Set up Artix Linux VM
    • secondary Technitium instance (AD DNS forwarding)
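
For the split-horizon item, something like this scratch script is what I have in mind - ask the internal Technitium instance and a public resolver for the same name and compare the answers. The FQDN and resolver IPs are placeholders:

# Compare the internal (split-horizon) answer against the public one for a given name.
import dns.resolver

NAME = "vpn.example.com"   # placeholder FQDN that should resolve differently per view
RESOLVERS = {"internal (Technitium)": "192.168.1.10", "external": "1.1.1.1"}

for label, server in RESOLVERS.items():
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [server]
    try:
        answers = sorted(a.address for a in r.resolve(NAME, "A"))
    except Exception as exc:
        answers = [f"lookup failed: {exc}"]
    print(f"{label:>22}: {NAME} -> {', '.join(answers)}")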
 
Regarding GRID, have you seen Craft Computing's Cloud Gaming build series on YouTube? I don't remember the details, but he goes over how it all works, and how the licensing is free if you stick to the first gen of cards. If you're using the second gen or newer, the licensing cost is astronomical.
After almost a year and seven episodes, going from GRID K2s to Tesla M60s, and finally to 3x FirePros, he still hasn't managed to make it work properly. One stream is fine, but as soon as you have multiple streams the cards always run into power limits.
 