
pfSense -- Overkill

Joined
Oct 5, 2008
Messages
1,802 (0.32/day)
Location
ATL, GA
System Name My Rig
Processor AMD 3950X
Motherboard X570 TUF Gaming Plus
Cooling EKWB Custom Loop, Lian Li 011 G1 distroplate/DDC 3.1 combo
Memory 4x16GB Corsair DDR4-3466
Video Card(s) MSI Seahawk 2080 Ti EKWB block
Storage 2TB Aorus NVMe Drive
Display(s) Asus P27UQ
Case Lian Li 011-Dynamic XL
Audio Device(s) JBL 30X
Power Supply Seasonic Titanium 1000W
Mouse Razer Lancehead
Keyboard Razer BlackWidow Keyboard
Software Windows 10 Pro
Goal: Build a pfSense-based computer that can handle a 1Gb symmetrical fiber connection, as well as create three 10GbE fiber connections for several servers and my primary desktop. I do a fair amount of video editing and I am adding storage to my FreeNAS server all the time, so being able to write directly to the server at 10Gb speeds would be very nice. I also have a Windows server that I would like to connect into the mix for sending/receiving traffic via the pfSense box.

So... this is what I am building with:

Supermicro X8DTE server motherboard -- purchased -- 55 bucks on eBay
2x Intel L5520 Xeon processors -- purchased w/ board
6x4GB Samsung DDR3-1333 ECC RAM -- 16 bucks on eBay
WD VelociRaptor 150GB for the log backup -- from stock
1TB SSHD (I know they're shit, but it's what I have) for Squid -- from stock
6x Mellanox ConnectX-2 10GbE single-port fiber NICs running PCIe x8 -- 96 bucks on eBay
3x TwinAx cross-connect cables -- 36 bucks on eBay
Seasonic PSU -- haven't decided on the model yet; it needs to support the server board's dual 8-pin connectors, though.
Also haven't decided on the case.


FreeNAS Server 1
Windows Server 1
Primary PC
pfSense Router
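
Roughly the layout I'm going for (just a sketch; the port/slot assignments are still up in the air):

  1Gb fiber WAN ---> [ pfSense box ] ---10GbE TwinAx---> FreeNAS Server 1
                           |-----------10GbE TwinAx---> Windows Server 1
                           '-----------10GbE TwinAx---> Primary PC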
 
Joined
May 28, 2005
Messages
4,994 (0.73/day)
Location
South of England
System Name Box of Distraction
Processor Ryzen 7 1800X
Motherboard Crosshair VI Hero
Cooling Custom watercooling
Memory G.Skill TridentZ 2x8GB @ 3466MHz CL14 1T
Video Card(s) EVGA 1080Ti FE. WC'd & TDP limit increased to 360W.
Storage Samsung 960 Evo 500GB & WD Black 2TB storage drive.
Display(s) Asus ROG Swift PG278QR 27" 1440P 165hz Gsync
Case Phanteks Enthoo Pro M
Audio Device(s) Philips Fidelio X2 headphones / basic Bose speakers
Power Supply EVGA Supernova 750W G3
Mouse Logitech G602
Keyboard Cherry MX Board 6.0 (mx red switches)
Software Win 10 & Linux Mint
Benchmark Scores https://hwbot.org/user/infrared
Sounds like a good project, can't wait to see it take shape :D
 
Progress! So I had to trade out some components; the server board I wanted didn't really stay stable. I also cut my teeth a bit on configuring pfSense to complete this.

Hardware-wise, I think I am done. Here is the final build:

AMD FX-8320E processor w/ stock cooling
Asus TUF 990FX motherboard (R3.0)
2x4GB DDR3-1600 generic kit from Micro Center
Seasonic 650W Gold power supply
Antec 300 case
150GB VelociRaptor

6x Mellanox ConnectX-2 adapters
3x 3m TwinAx cables from eBay
1x Intel PRO/1000 GT NIC for LAN

Reasoning:

I really didn't want to build with Intel on this; it was just too expensive, and I had already rolled the dice on the server board, which I would have preferred. My biggest need was getting enough PCIe 2.0 x8 slots to house 3-4 of the cards (if I eventually add another host to the switch). The Supermicro seemed like a good choice, but it was unstable and got returned.

Enter AMD: a 990FX chipset means lots of PCIe lanes. Specifically, the board I got will run its slots at 16/8/4/8 simultaneously, which is perfect. In theory a PCIe 2.0 x4 link can handle 2.0 GB/s of transfer (per Wikipedia), so my logic is that it should be sufficient for a full-duplex 10GbE port. But not all boards let you assign x8 to one slot or x4 to another; it depends on how they operate when they're fully populated, and finding a board with four x8-capable slots was difficult, so this one seemed like a nice compromise.
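
The back-of-envelope numbers I'm working from (assuming the usual ~500 MB/s per PCIe 2.0 lane after encoding overhead, per direction):

  PCIe 2.0 x4:  4 x ~500 MB/s ≈ 2.0 GB/s each direction
  PCIe 2.0 x8:  8 x ~500 MB/s ≈ 4.0 GB/s each direction
  10GbE port:   10 Gb/s       ≈ 1.25 GB/s each direction

So even an x4 slot has headroom for one 10GbE card at full duplex, and the x8 slots aren't close to being stressed.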

I got the board and chip for ~$250 ($280 if you count the cheap display card I had to buy, since 990FX has no onboard GPU). It was either this or go with a 5820K and an X99 board, which was nearly twice as expensive. Bonus points: this board, despite being an AMD board, has an Intel onboard NIC, which I vastly prefer to Realtek.


Software:

This is what I had to do to make it work.

1st. Copy compiled Mellanox drivers onto pfSense via SSH/SFTP.
2nd. Modify the boot loader config to load the drivers and recognize the NICs (rough sketch after the list).
(source: http://unix.stackexchange.com/questions/272329/pfsense-with-mellanox-connectx-2-10gbit-nics)

3rd. Create a bridge between OPT1-3 (Mellanox adapters 1-3) and OPT4 (the Intel PRO/1000 GT).
4th. Create firewall rules on OPT1-4 to allow traffic from the LAN subnet.
5th. Assign BRIDGE0 to LAN in Interfaces.
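
For anyone reproducing steps 1 and 2, this is roughly what they boil down to (a sketch based on the linked thread; the module names, the FreeBSD version, and the 192.168.1.1 address are assumptions, so match them to your own pfSense release):

  # run from a stock FreeBSD install that matches your pfSense base version
  # (e.g. FreeBSD 10.x for pfSense 2.3), pushing the modules over to pfSense:
  scp /boot/kernel/mlx4.ko /boot/kernel/mlxen.ko root@192.168.1.1:/boot/kernel/

  # then append these two lines to /boot/loader.conf.local on the pfSense box
  # so the drivers load at boot (loader.conf.local survives upgrades better
  # than editing loader.conf directly):
  mlx4_load="YES"
  mlxen_load="YES"

After a reboot the cards should show up as mlxen interfaces and can be assigned to OPT1-3 like any other NIC.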

Now, doing steps 3-5 required using a fifth interface so I could still access pfSense while I disabled and re-arranged NICs; otherwise you're cutting out the door you're managing through.

Once this was done, DHCP was passing to the Mellanox adapters and traffic was flowing. I made a lot of mistakes by not doing step 4 right at first: DHCP was reaching my systems, but traffic wasn't flowing back. I had assumed that a bridge didn't need any firewall rules to pass traffic. Big mistake.
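
For anyone else hitting that: by default pfSense/FreeBSD filters bridged traffic on the member interfaces, which is why each OPT needs its own rules. The behaviour is controlled by a pair of tunables (exposed under System > Advanced > System Tunables); roughly:

  net.link.bridge.pfil_member=1   # filter on each member interface (the default, hence rules on OPT1-4)
  net.link.bridge.pfil_bridge=0   # don't filter on the bridge interface itself (also the default)

Flipping them (member=0, bridge=1) is supposed to let you keep a single rule set on the bridge interface instead, but treat that as a sketch and check the docs for your pfSense version before relying on it.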

Performance and results:

So far, transferring from my mini server to the FreeNAS host has yielded mixed results. I don't have storage powerful enough to write the data to on the other system, so the fastest transfer I have gotten is 400 MB/s, which isn't bad at all. I need to rip out my GPU to test the fiber adapter in my desktop so I can run transfers to and from an NVMe drive and the FreeNAS system.
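
For rough context on that number (assuming the nominal 10 Gb/s line rate and ignoring protocol overhead):

  10GbE line rate:   10 Gb/s ≈ 1,250 MB/s
  Observed transfer: 400 MB/s ≈ 3.2 Gb/s, roughly a third of the link

Since a single spinning disk typically sustains well under 400 MB/s of sequential writes, the storage on each end, not the network or the pfSense box, is what's capping the transfer for now.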

The long-term goal is to allow my system and the others to write directly to FreeNAS at very high speed so I can record/transfer data really fast.

Last steps:

I will be building again in January (God willing, AMD releases Zen in January) and completing my switch then by connecting my desktop via 10GbE.
 

brandonwh64

Addicted to Bacon and StarCrunches!!!
Joined
Sep 6, 2009
Messages
19,542 (3.67/day)
I have used many Linux router OSes, but pfSense is the one I always come back to since it has the most features for free. pfSense works great as a virtual machine as well. At work I built a pfSense VM with 10Gb interfaces for our office wireless. The only issue I had was that I built the VM with only 12GB of storage, and after a year and a half of running it filled the HD up and the DHCP server stopped handing out IPs. I rebuilt it with a 100GB HD and put it back into production.
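
If anyone runs into the same thing, a quick way to see what's eating a small pfSense disk before DHCP falls over (standard FreeBSD tools, so this should apply to any recent version, but treat it as a sketch):

  df -h           # overall usage per filesystem
  du -hd1 /var    # logs, RRD data and DHCP leases all live under /var and are the usual suspects

There is also a RAM-disk option for /tmp and /var under System > Advanced that keeps logs off a small install, at the cost of losing them on reboot.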
 
brandonwh64 said:
I have used many Linux router OSes, but pfSense is the one I always come back to since it has the most features for free. pfSense works great as a virtual machine as well. At work I built a pfSense VM with 10Gb interfaces for our office wireless. The only issue I had was that I built the VM with only 12GB of storage, and after a year and a half of running it filled the HD up and the DHCP server stopped handing out IPs. I rebuilt it with a 100GB HD and put it back into production.

At my last gig, we used pfSense to take a VMware cluster and pipe traffic from 10+ customers in a datacenter who just wanted to cross-connect into our cabinet and have dedicated, standalone connections into their backup environment. It was pretty sweet how it worked in the end; it enabled a pretty awesome setup in the long term, but it was a lot to manage (thank god I wasn't the DC manager).

So whatever network they were coming from, they would route traffic through a pfSense VM, on their own VLAN in our network, and be logically isolated by v-switching.
 