
Server Project

Regarding GRID, have you seen Craft Computing's Cloud Gaming build series on YouTube? Don't remember the details, but he goes over how it all works and how, if you stick to the first generation of cards, the licensing is free. If you're using the second gen or newer, the licensing cost is astronomical.
After almost a year and seven episodes, going from GRID K2s to Tesla M60s, and finally to 3x FirePros, he still hasn't managed to make it work properly. One stream is fine, but as soon as you have multiple streams the cards always run into power limits.

  1. Yes, I have been following Craft Computing's Cloud Gaming series on YouTube :) Ironically, I started working on this very idea/concept for the server project in mid-2018. Almost every single step that he's taken thus far (in the GPU department), I've already taken into some form of consideration. I ended up on NVIDIA GRID cards due to used market price and platform costs. The GRID K520, to be exact.
  2. I wonder how I'll fare when I attempt it. The DL580 G7 packs 1.2kW PSUs under the hood, and I'm only powering one GRID card (as opposed to multiple).
 
Yes, I have been following Craft Computing's Cloud Gaming series on YouTube :) Ironically, I started working on this very idea/concept for the server project in mid-2018. Almost every single step that he's taken thus far (in the GPU department), I've already taken into some form of consideration. I ended up on NVIDIA GRID cards due to used market price and platform costs. The GRID K520, to be exact.
Gotcha. That's the same as the 6xx series (GK104), so that should be first gen and free in terms of licensing. Stepping up to the next gen incurs heavy licensing costs for GRID to function. That's why Craft Computing is now experimenting with AMD FirePros.

I wonder how I'll fare when I attempt it. The DL580 G7 packs 1.2kW PSUs under the hood, and I'm only powering one GRID card (as opposed to multiple).
As far as I understood Craft Computing's latest video in the series, his 1600W PSU was more than enough for the task; the problem was that the total power of the GPUs was limited to 300W, which meant each GPU die was limited to just 150W (the K520 is effectively 2x GTX 670 dies, so each gets half the total power). In the case of the K520, this might be even worse, since it's limited to one PCIe 8-pin, so total board power is 225W and each GPU die is limited to about 100W.

In Craft Computing's case, just running one instance of Crysis ate up 100W, and that was at just 40% utilization. As soon as he started doing multiple streams, the cards were completely starved for power and stuttered like hell. In addition, there was no headroom for video encoding, so Parsec (which he used to stream the games to another PC remotely; the whole point of the project) ran into massive encoding issues and the stream stuttered and artifacted. As of now he's gone through three different GPUs and still hasn't found one that is cheap enough to be viable while also being able to handle multiple 1080p60 streams.
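To put rough numbers on that power ceiling, here's a quick back-of-the-envelope sketch (the 75W slot and 150W 8-pin figures below are the nominal PCIe limits, not measurements from his setup):

```python
# Rough per-die power budget for a dual-die board fed by one PCIe 8-pin
# connector. Nominal PCIe limits, for illustration only.
SLOT_POWER_W = 75       # PCIe x16 slot
EIGHT_PIN_W = 150       # one 8-pin auxiliary connector
DIES_PER_BOARD = 2      # GRID K520 = 2x GK104 (GTX 670-class) dies

board_budget = SLOT_POWER_W + EIGHT_PIN_W       # 225 W for the whole card
per_die_budget = board_budget / DIES_PER_BOARD  # ~112 W per die, before any
                                                # board overhead eats into it

print(f"Board budget: {board_budget} W, per die: {per_die_budget:.0f} W")
```

With board overhead taken out, that lands right around the ~100W per die mentioned above - and a single game instance can already eat all of it.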
 
  1. The experimentation with the FirePros appears to have gone even worse than the GRID cards, sadly. I wish it could have worked, for the sake of a non-Windows VM. But one would need to use software H.264 encoding at that point.
  2. That is true. But who says that I need 1080p60 for what these cards will be used for :) Also:
 
The experimentation with the FirePros appears to have gone even worse than the GRID cards, sadly. I wish it could have worked, for the sake of a non-Windows VM. But one would need to use software H.264 encoding at that point.
Yeah, it's a shame. Hopefully he manages to get it right at some point!

That is true. But who says that I need 1080p60 for what these cards will be used for :) Also:
Great that it's working out for your use case, of course! One thing I haven't caught yet: what are you going to use the GPUs for?
 
Yeah, it's a shame. Hopefully he manages to get it right at some point!
...
Great that it's working out for your use case, of course! One thing I haven't caught yet: what are you going to use the GPUs for?

The GRID K520 will be used for a Linux VM and (hopefully) a macOS VM, for non-gaming workloads.
For the specific purposes, please see the OP; it lists the major roles and expectations for each VM in the project.
 
Just removed YaCy from the project, in favour of researching YaCy Grid. Here's to hoping I can get it working in a shared environment...
 
I currently have a PCIe WiFi NIC coming in the mail. I also have a pair of Ethernet NICs sitting in inventory, and the server already has a SolarFlare SFN5322F in it. What if I threw FRRouting onto a Linux VM and passed through the mentioned NICs to it? Sounds like a virtual managed switch in the making.

I could have the Linux VM use the wireless NIC to connect to the house WiFi on one network (192.168.1.0), sitting at an arbitrary address (perhaps 192.168.1.2), and have the wired NICs serve an internally-managed network (10.0.0.0). Set up the Linux VM as the default gateway (maybe 10.12.7.1) and have it handle DHCP and internal DNS. The last step would be to route all outbound traffic from clients on 10.0.0.0 through 10.12.7.1 => 192.168.1.2. All outbound traffic from 10.0.0.0 clients would then appear to come from 192.168.1.2, which sounds similar to NAT (many clients/private IPs behind one gateway/public IP). Set up forwarding rules and throw the Linux VM sitting at 192.168.1.2 into the DMZ (since port forwarding on the new ISP router is utter garbage for some reason). That would kill off the need for a router/extender in my room, assuming that the only untested component in this equation works - the WiFi adapter I got from overseas.

Also still need to work on this. The rack-mounting kit for my server is ~200 USD by itself - yikes...
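FRRouting would handle the routing side of that; the NAT piece (making the 10.0.0.0 clients appear as 192.168.1.2) would come from the Linux kernel itself. A minimal sketch of that part, assuming the passed-through WiFi NIC shows up as wlan0 and the wired NICs end up bridged as br-lan (both names are placeholders, like the addresses above):

```python
#!/usr/bin/env python3
"""Sketch: turn the Linux VM into a NAT gateway between the internal wired
network (10.0.0.0/8 on br-lan, gateway 10.12.7.1) and the house WiFi
(192.168.1.0/24 on wlan0). Interface names are assumptions, not the
project's actual config."""
import subprocess

WAN_IF = "wlan0"   # passed-through WiFi NIC, 192.168.1.2 on the house network
LAN_IF = "br-lan"  # bridge over the passed-through wired NICs, 10.12.7.1

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Let the kernel forward packets between interfaces.
run(["sysctl", "-w", "net.ipv4.ip_forward=1"])

# Masquerade (NAT) anything leaving via the WiFi uplink, so internal clients
# appear to the rest of the house network as 192.168.1.2.
run(["iptables", "-t", "nat", "-A", "POSTROUTING", "-o", WAN_IF, "-j", "MASQUERADE"])

# Forward LAN -> WAN freely; only allow return traffic for established flows.
run(["iptables", "-A", "FORWARD", "-i", LAN_IF, "-o", WAN_IF, "-j", "ACCEPT"])
run(["iptables", "-A", "FORWARD", "-i", WAN_IF, "-o", LAN_IF,
     "-m", "state", "--state", "ESTABLISHED,RELATED", "-j", "ACCEPT"])
```

DHCP and internal DNS for the 10.0.0.0 clients would sit on top of this (dnsmasq or similar), and FRR itself only really earns its keep once there's more than one internal subnet to route between.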

Questions that I asked recently in other places:
 
Just purchased an HPE 641255-001 (PCIe ioDuo MLC I/O Accelerator) for the server. Not sure if it will work in ESXi. I guess there's only one way to find out!


On a lesser note, the VMware Communities mirror seems to be officially dead this time. Can't edit the OP or add replies. Not sure if I want to go through the trouble of getting it revived myself. If anyone here wants it brought back, feel free to say so...
 
Okay, things got off to a rocky start with the new NICs:


The vCenter Server Appliance hasn't been treating me too nicely lately, either. It went from consistently taking 15-30 minutes to start up to sometimes not starting at all and needing a reboot. The HTML5 UI is also buggy in my case and doesn't accept my credentials half the time, so I'm stuck using the FLEX UI instead - which requires Flash Player (another EOL technology) and Internet Explorer (a deprecated browser) to work. I'm not getting the benefits that I should be getting from it, for one reason or another, which is unfortunate. I also can't seem to make it keep my Virtual Flash Resource Pool between server reboots, which is another small gripe - and I can't manage that at all if vCenter Server won't start up. I might have reason to switch to the Windows-embedded version instead, sadly. That one may be more reliable in my case...
 
Today may be good for another round of server testing. Perhaps I can get the HPE 10GbE NIC working for once.

Still have to attend to this after all of the VMs are set up:
 
Just got the HP 10GbE NIC working! This means that I may be able to start my 10GbE transition in the next few months. Next will be my 4K60 livestreaming transition and getting the IO Accelerator working :D
 
It's been a slow weekend playing with the server. On Thursday, I couldn't get anything done because of New Year's (which I am fine with). On Friday, I slept in due to how late I stayed up, and then had surprise visitors. I didn't get any work done that day, since I was busy keeping the visitors' kids out of the room. On Saturday, I finally got to throw in the HP NC524SFP NIC (along with its memory module). Once they were attached to the SPI board, I fired up the server and checked to see if the 16TB drive cage and ~1TB Virtual Flash Resource Pool showed up in ESXi - which they did :)


FYI, just about every time I add new hardware to the DL580 G7, I check for those two things - because they tend to act as immediate indicators of whether something is wrong, strangely enough. That is, when no other problem indicators are present (and there rarely are). vCenter has thrown an occasional warning, but nothing of consequence from what I've seen thus far.

After that, I spent most of last night changing my AD and DNS settings, to prepare for adding my first devices to AD. That went on until close to midnight, and it's still not quite done. Today, I replaced the SolarFlare SFN5322F with an HPE 641255-001 (PCIe ioDuo MLC I/O Accelerator) - a gutsy move, given how temperamental the server can be about new hardware. At first, only 2 of 4 SAS HDDs showed up in ESXi. After a reboot, and letting the server warm up for a bit, all storage devices and new components showed up. So far, so good!

However, due to how slow testing has been, I had to put off testing the Tesla K80s and the DERAPID PCE-AX200T wireless NIC. If I can get the DERAPID PCE-AX200T working, the Linux VM is definitely going to run an FRRouting instance. I still need to figure out the vCenter startup time issue. At least I can start the 10GbE transition soon...

 
Just attached the rail kit to the server, in preparation for the rack that's coming in the mail this week. Can't wait to take photos of the finished result...
 
Also, I forgot to note in the previous reply: I need to upgrade the vCenter Appliance from 6.5 to 6.7u3, due to FLEX getting EOL'd. Fun times XD
 
I would have held on to VCSA 6.5 indefinitely if the HTML5 UI had been able to manage Virtual Flash/Host Cache resource pools. As noted in past updates, the VCSA took anywhere from 20-45 minutes to initialise. And with the deprecation of the FLEX UI (reliant on Adobe Flash, unsupported in 2021), the now-neutered vCenter Server Appliance VM (6.5) had no practical place in this project. Without the option for an in-place upgrade to a newer version, I don't have a path to VCSA 6.7 either. It has been replaced, and will soon be decommissioned. vCenter has been moved to the Windows Server 2016 VM, for practicality reasons. The next step is to rebuild the failed MS AD instance and promote a new domain controller. That will happen later this week. Hopefully, things will go a bit better this time around...
 
Alright - everything is almost ready for Active Directory setup, attempt #2. Not only did I kick ejabberd over to Linux (due to issues when installed on Windows), but I also had to re-install multiple other applications; demoting the AD DC appears to be what led to that. So, I got to start from scratch in some sense. I still need to make a new SQL db for hMailServer, unlike last time, but that should be relatively easy. I've already installed vCenter Server, and it starts up way faster than the VCSA did - Windows doesn't even take longer to boot from what I've seen. I also had what appears to have been an unexpected part failure: the Mini-SAS SFF-8088 to SATA Forward Breakout x4 cable. Got that replaced, and now I can see all of my SAS HDDs once again. The last step is to (re-)promote the DC and test client devices. This time, I'll set the intended domain from the start (instead of setting it to something else by accident and having to change it twice later).
 


Backup Complete!
 
Must be super hot and noisy running that setup 24/7. You don't have battery backup?
1) Not really - the server doesn't run 24/7 yet, and it doesn't heat up the room as much as I'd expect. But it is winter currently, so I just leave the window cracked.
2) Not yet. Still searching for affordable UPSes, since I will need at least 2-3 when everything's ready. I also need rack shelves to put the UPSes on.
 
Really cool!
Alright - everything is almost ready for Active Directory setup, attempt #2. Not only did I kick ejabberd over to Linux (due to issues when installed on Windows), but I also had to re-install multiple other applications; demoting the AD DC appears to be what led to that. So, I got to start from scratch in some sense. I still need to make a new SQL db for hMailServer, unlike last time, but that should be relatively easy. I've already installed vCenter Server, and it starts up way faster than the VCSA did - Windows doesn't even take longer to boot from what I've seen. I also had what appears to have been an unexpected part failure: the Mini-SAS SFF-8088 to SATA Forward Breakout x4 cable. Got that replaced, and now I can see all of my SAS HDDs once again. The last step is to (re-)promote the DC and test client devices. This time, I'll set the intended domain from the start (instead of setting it to something else by accident and having to change it twice later).
Do you mean you changed your domain name twice? If so, did it break any relationship with your clients? Curious, as I am planning to rename my lab to a different domain as well, but I've read that it's not wise to do so. I am about to rebuild a new one instead and jump from 2016 to 2019.
 
Really cool!

Do you mean you changed your domain name twice? If so, did it break any relationship with your clients? Curious, as I am planning to rename my lab to a different domain as well, but I've read that it's not wise to do so. I am about to rebuild a new one instead and jump from 2016 to 2019.
I renamed mine multiple times, but never got the chance to test it due to DNS issues. I'd suggest starting fresh, just to be safe. If using multiple DNS servers, be sure to have your NS records straight.
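A quick way to sanity-check that all the DNS servers agree on the NS records before pointing clients at them (a rough sketch; the domain name and server IPs below are placeholders, not anyone's actual lab):

```python
#!/usr/bin/env python3
"""Ask each DNS server for the domain's NS records and flag disagreements.
Domain and server addresses are placeholders for illustration."""
import subprocess

DOMAIN = "lab.example.internal"           # placeholder AD domain
DNS_SERVERS = ["10.12.7.1", "10.12.7.2"]  # placeholder internal DNS servers

results = {}
for server in DNS_SERVERS:
    # "nslookup -type=NS <domain> <server>" queries that specific server.
    out = subprocess.run(
        ["nslookup", "-type=NS", DOMAIN, server],
        capture_output=True, text=True,
    ).stdout
    records = sorted(
        line.split("=")[-1].strip()
        for line in out.splitlines()
        if "nameserver" in line
    )
    results[server] = records
    print(f"{server}: {records}")

if len({tuple(r) for r in results.values()}) > 1:
    print("WARNING: the DNS servers disagree on NS records")
```

Nothing fancy, but it catches the classic "one server still advertising the old DC" situation quickly.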
 