
AM1 Athlon 5350 FreeNAS file server

So I've been given the task of replacing the office "server" (an IOMEGA Home Media box) after its HDD failed. :banghead: Thankfully I had an "unofficial" backup ;) and managed to avert disaster.

So they said, "here are 10,000 pesos, get me a server" and I thought, OK, FreeNAS it is...

Target size is 2TB. We had 360GB of data on the "server" after 5 years of operation so 2TB should last a lifetime.

Parts list:

ASUS AM1I-A mITX
Athlon 5350
2x8GB DDR3-1333
5 x 1TB WD Black drives (4 for RAIDZ2, 1 spare; quick capacity check below)
Syba 4 port SATA card (SIL3124)
Samsung 2.5" 16GB MLC SSD (ZIL drive)
Sparkle FSP-400GHS 400w SFX PSU 80+ bronze
Acteck BERN mATX low profile case.
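
Quick back-of-the-envelope check that a 4-drive RAIDZ2 layout actually hits the 2TB target (a rough Python sketch; it ignores ZFS metadata and slop space overhead):

```python
# Rough RAIDZ2 capacity estimate for the parts list above (ignores ZFS
# metadata/slop overhead and TB-vs-TiB differences; numbers are approximate).
disks = 4        # drives in the vdev (the 5th drive is a cold spare)
disk_tb = 1.0    # per-drive capacity in TB
parity = 2       # RAIDZ2 keeps two drives' worth of parity

usable_tb = (disks - parity) * disk_tb
print(f"Usable space: ~{usable_tb:.1f} TB")  # ~2.0 TB, right at the 2TB target
```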

I got a few parts today:

[image]


I'll keep you posted :D
 
subb'd, I love freeNAS builds!
 
Reading up on some FreeNAS threads and their own Hardware Recommendations page, I think I don't actually need a ZIL drive since sharing will be over SMB (Windows environment), and a separate ZIL drive is mainly recommended for NFS or heavy synchronous I/O. Maybe I'll use the SSD for the OS, but I'll decide that after the tests.
 
I received the board and SATA card over the week:

[photo]



So I started with the build yesterday.



Here's the Athlon 5350. The heatsink is very small, almost comically so.

[photo]



AM1. Some assembly required.
[photo]

The heatsink doesn't come with the push pins installed. You have to put them in.

[photo]

Make sure to put them in correctly.

Now let's install the CPU:

[photos]

I don't know if I was doing something wrong or what but pushing the pins into the motherboard mounting holes was way harder than I thought. Easiest way that I found was putting one and then the other... it was either that or grow a third hand.

Doesn't seem too secure, does it?
[photo]


Everything is in place...

[photo]


Now to mod the case.....
 
Awesome pick for a NAS! Subscribed.
Could you please confirm if the cooler pins are 85mm apart? (If you have the time, of course)
I'm trying to find out if there are any cooler mods possible, but haven't found a reliable source for the distance between the pin holes of the retention system.
I'm thinking a Nofan CR-80 could be adapted to it.
 
I don't know if I was doing something wrong or what but pushing the pins into the motherboard mounting holes was way harder than I thought. Easiest way that I found was putting one and then the other... it was either that or grow a third hand.

Yeah, it is definitely a pain. That is the way I've been doing it too, haven't really had a problem. Though, once installed properly, it definitely is secure. I could lift the entire system up by the heatsink and it wouldn't budge.

Could you please confirm if the cooler pins are 85mm apart? (If you have the time, of course)

According to AMD's tech doc on the cooler design, they should be 85mm apart (well, 84.9mm, but close enough).
 
Modding the case.


The Acteck Bern case is supposed to be used with a vertical stand. Since this thing will weigh a lot due to the HDDs, I wanted to keep it flat. Unfortunately that meant I had to block the PSU intake.
[photo]


It comes with an el-cheapo 500w(?) SFX PSU which will, of course, be discarded.

[photo]


To the spares bin!!!

[photo]

I had these feet from another case gathering dust.

The front feet were easy since I used the holes from the side (now bottom) vent:
[photo]


After some measuring and drilling:

[photo]
 
Now onto the SATA card:


The card comes with a full height bracket. Thankfully I had a low profile bracket lying around.
[photo]


It doesn't fit completely but should do the trick:
[photo]


It has a SIL3124 controller which seems to be fully supported by FreeNAS 9.x (according to the FreeBSD 9.2 hardware notes: http://www.freebsd.org/releases/9.2R/hardware.html )
[photo]
 
Tying up and some problems...

I had this lying around doing nothing so I thought I'd use it for this build:

[photo]


but after a quick fitting test it became apparent that there would be a problem connecting the hard drives:

[photo]


So I just took the HDD cage from it:

[photo]


Better! Securing it was a bitch. Had to drill several holes to get it right.


The FSP PSU that will be powering this build:
[photo]


It doesn't have 80+ certification but FSP states that it meets 80+ Bronze.

Since I couldn't use the Evercool Armor fan bracket, I removed the PCI covers and affixed an 80mm fan with double-sided mounting tape:

[photos]



Wrapping up...
[photo]


What a mess. I need shorter and thinner cables.

Ok, now let's install FreeNAS. Burn CD, blah, blah, blah. Reboot.
[photo]


It sure takes its sweet time. I'm used to my unRAID home server being up and running in 1 minute. Don't know if it's because it's the first start-up or what, but I think it took about 5 minutes for the main screen to display.

Closing up:

[image]


I'll take the build on site tomorrow and run some tests before deploying.
 
Looks good! I may have to build something similar... What's the model number of the raid card, and are you doing JBOD and letting FreeNAS do the magic, or are you using the on-card raid?
 
I'm interested in what's under the heatsink of that card. The SiI chip handles all the data processing, it doesn't have a separate onboard chip for RAID calculation, and it doesn't seem to have any onboard cache that would need cooling either... so what is under that heatsink? I wonder if they just slapped a heatsink over essentially nothing to make the card look like some of the higher-end cards...
 
I assume it's a PCI-to-PCIe bridge, since the SIL3124 is a PCI controller. I don't mind, since I'm sure the 250MB/s bandwidth will be enough for 7200RPM HDDs over a gigabit connection, more so with ZFS slowing throughput.
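
Back-of-the-envelope comparison of the ceilings involved (a rough sketch with assumed ballpark figures, not measurements from this build):

```python
# Rough bandwidth ceilings (assumed ballpark figures, not measurements).
pci_bus_mb_s = 250               # shared bandwidth behind the PCI-to-PCIe bridge (figure quoted above)
gigabit_mb_s = 1000 / 8 * 0.94   # ~117 MB/s after typical Ethernet/TCP overhead
hdd_mb_s = 120                   # rough sequential speed of a 7200RPM 1TB drive

print(f"Gigabit ceiling    : ~{gigabit_mb_s:.0f} MB/s")
print(f"PCI bus ceiling    : ~{pci_bus_mb_s} MB/s, shared by the drives on the card")
print(f"Single 7200RPM HDD : ~{hdd_mb_s} MB/s sequential")
# The network tops out well below the card's shared bandwidth, so for SMB
# traffic the gigabit link (or ZFS itself) should be the limit, not the bus.
```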

FreeNAS picks up the drives just fine:
[screenshot]
 
The build is hitting 65w at idle, which is... kind of high for what I wanted (I was expecting ~45w). Maybe I should have gone with a Sempron 3850?

I guess I'll fiddle with the BIOS settings and see if I can turn down the GPU frequency and such. I think I'll run some tests with the CPU frequency at 1.3GHz to emulate a Sempron and see what I get.
 
The build is hitting 65w at idle, which is... kind of high for what I wanted (I was expecting ~45w). Maybe I should have gone with a Sempron 3850?

I guess I'll fiddle with the BIOS settings and see if I can turn down the GPU frequency and such. I think I'll run some tests with the CPU frequency at 1.3GHz to emulate a Sempron and see what I get.

If all 5 WD Blacks are spun up, I wouldn't be surprised to see the drives alone pulling 35 to 40 watts, plus the RAID controller pulls a little bit, plus memory pulls a little bit, plus there are losses in the PSU and the VRMs. 65w for the entire rig isn't all that bad IMHO.

I'm assuming you measured power draw at the wall? If you did, the actual DC usage is probably closer to 50-55 watts, which doesn't sound unrealistic for what you're doing with the machine.
 
Yep, at the wall. I dunno, maybe I was being too optimistic. The drives are rated for 10w each (5V @ 0.68A, 12V @ 0.55A) and I could only fit 4 of them, so I was expecting 65w during read/write operation. The system spikes to 80w at startup, so maybe that's what I'll get in operation.
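
Just the arithmetic behind that 10w figure (from the label ratings above):

```python
# Per-drive power from the label ratings quoted above.
p_5v = 5 * 0.68    # 3.4 W on the 5V rail
p_12v = 12 * 0.55  # 6.6 W on the 12V rail
per_drive = p_5v + p_12v
print(f"Rated per drive: {per_drive:.1f} W -> ~{4 * per_drive:.0f} W for the 4 fitted drives")
```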
 
Perhaps, but don't forget that the PCI-E card draws power too, and the CPU can't power-gate the PCI-E root complex if it's being used. PCI-E x1 low-profile is rated for 10 watts and "high-power" (standard for x4, x8, and x16) is 25 watts. It may not be unrealistic to expect the RAID card to be drawing some of that load. Even if all the drives were consuming between 5 and 10 watts a pop, that's 20 to 40 watts (I thought all 5 were powered up), plus 10 to 25 watts max for the RAID card, plus 5-10 watts for the RAM, and a 20-watt TDP on the CPU which should idle at a fraction of that. So worst case, the rig might draw 95 watts, which seems high to me, but that's the upper bound of what each device might draw.

On the other hand, the drives might consume as little as 20 watts for all 4, 5-10 watts for the RAID card, 5 for RAM, and 5 for the CPU at idle, which would total 35-40 watts as a lower bound. So with a 95-watt upper and a ~40-watt lower bound, and a realistic draw at the wall of 65w, taking into account efficiency losses (15%, assuming just the PSU?), the real power draw you're seeing is ~55 watts. I would place that on the lower end of the 40-95 watt range. That sounds entirely realistic to me from a number-crunching perspective.
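
The same budget in numbers (a sketch using the per-device guesses above, not measured values):

```python
# Idle power budget using the rough per-device guesses above (all assumptions).
low_w  = {"4x HDD": 20, "RAID card": 5,  "RAM": 5,  "CPU": 5}   # lower-bound guesses (W)
high_w = {"4x HDD": 40, "RAID card": 25, "RAM": 10, "CPU": 20}  # upper-bound guesses (W)

wall_w = 65            # measured at the wall
psu_efficiency = 0.85  # assumed ~15% loss in the PSU

print(f"Component budget: {sum(low_w.values())}-{sum(high_w.values())} W")  # ~35-95 W
print(f"Estimated DC-side draw: ~{wall_w * psu_efficiency:.0f} W")          # ~55 W
```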
 
That is one of the problems with using WD Black drives: they have basically the worst idle power usage. And they will actually spike higher than 10w at startup because they draw way more power to spin up. That is why most RAID controllers that support more than 4 drives usually use, or at least give the option for, staggered spin-up. Though I would have gone with Red drives for this application; they are only rated for a max of 3w, and they officially support RAID.

Plus, at 65w you are only at about 15% load on that power supply, which means its efficiency is probably in the 75% range, putting the real power usage in the 45w range. Which, given the hard drives, is about as good as you are going to get.
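
Roughly, in numbers (the ~75% efficiency at this light load is a guess for a budget unit, as above):

```python
# Rough PSU load check (assumed figures, same reasoning as above).
wall_w = 65
efficiency_light_load = 0.75
print(f"Estimated DC draw: ~{wall_w * efficiency_light_load:.0f} W")  # ~49 W
for psu_w in (400, 250):
    print(f"{psu_w}W PSU -> ~{wall_w / psu_w * 100:.0f}% load")
# The 400w unit sits at ~16% load, below the 20-50% band where most PSUs do
# best; a 200-250w unit would land much closer to that band.
```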
 
Yeah, I guess I'll look into getting a 200-250w PSU. Thankfully FSP has that covered too: http://www.sparklepower.com/pdf/PC/FSP200-50GSV-5K.PDF

About Reds, they cost a ton here for some reason (put "Enterprise" in front of something and charge $100 more). I went with Blacks because of their supposed higher reliability over Blues. I'm using RAIDZ2, so whether they officially support RAID or not is a moot point.

Fiddling around in FreeNAS it looks like there are some power saving options; I'll have to read up on those so as not to fuck things up, and in the meantime I'll lower the GPU frequency. That should help with a couple of watts. Not that I'm trying to break records with this thing, but I'd like it to be as efficient as possible.
 
Fiddling around in FreeNAS it looks like there are some power saving options;
I was going to ask if you had played with those in the meantime. Late to the party... it's becoming a trend for me...
What that option does is enable the powerd algorithm to cycle through the P-states relative to CPU load.
I think the default for FreeNAS is "adaptive", which works a lot like Windows does with Cool'n'Quiet/SpeedStep active.
 
I forgot to share the numbers :D

It looks like the build was extremely overkill :peace:
[screenshot]

14GB of free RAM :eek: I think I won't need the SSD in there after all. ;)


Reads go up to ~90MB/s, writes top out at ~60MB/s :rockout:

[screenshots]


I don't know if ZFS or the SATA card is limiting the write speed, but it's a 2x improvement over the previous solution, whose maximum write speed was in the low 30s.
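
A rough way to sanity-check where the ceiling might be (the 2x RAIDZ2 write overhead and the assumption that all four pool drives hang off the SIL3124 card are guesses on my part):

```python
# Rough bottleneck check for the numbers above (assumed figures).
gigabit_mb_s = 117       # practical gigabit Ethernet ceiling
pci_bus_mb_s = 250       # shared bus bandwidth quoted earlier in the thread
write_mb_s = 60          # observed client write speed
raidz2_overhead = 4 / 2  # 4 disks, 2 of them parity -> ~2x data hits the disks

disk_traffic = write_mb_s * raidz2_overhead
print(f"Aggregate disk writes: ~{disk_traffic:.0f} MB/s of a ~{pci_bus_mb_s} MB/s shared bus")
print(f"Headroom left on the gigabit link: ~{gigabit_mb_s - write_mb_s} MB/s")
# Writes sit well below the network ceiling, so ZFS overhead (parity, checksums)
# or the shared PCI bus behind the card look like the more plausible limits.
```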
 