FreeNAS Corral - Short Lived

I'll just start out with this link...

So, Corral was pretty garbage from a back end/development standpoint, and they decided to axe it. If you're interested in the whole story, I'd take a read through the thread, but TL;DR - They're rolling all the features of Corral into 9.10 with a newer UI.

What does that mean for my install though? Well, Corral isn't a production release anymore, and honestly, the whole thing has shaken whatever faith I had in FreeNAS. I'm adopting Proxmox VE as my all-in-one solution. I know I ragged on it in the previous post, but I decided to roll a VM install to test it out, and after killing and reinstalling it a few times, it seemed really stable. Documentation for existing all-in-one builds is few and far between, but I'm pretty comfortable with Debian, and after BSD I'm up for a much more pleasant challenge. So this afternoon, I nuked my FreeNAS install and installed Proxmox VE.




Here's the fun part about Proxmox. There was no struggle. The install was seamless. The import of my ZFS pool was literally a single command, and everything just worked. Creating shares was an Ubuntu container and a mount point away. It actually took me less than an hour to configure the sharing I wanted and get a headless Deluge instance running. Fine tuning took a bit longer, but was considerably less painful than FreeNAS. Virtual machines JUST WORK. There's no messing around with config files and setting GRUB boot points. There are no GUI errors about "This virtual machine doesn't exist" that disappear after logging out and back in. Settings save when you save them, instead of having to do it multiple times over. It's going to take a while to get it all to the point I want it at, and to be fully confident in managing it, but it's definitely a treat so far.
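For reference, that import is just the stock ZFS tooling on the Proxmox host; roughly the following, with "tank" standing in for whatever the pool is actually named:
zpool import          # list pools the host can see
zpool import -f tank  # import it; -f is needed since the pool was last used by another system (FreeNAS)
zpool status tank     # sanity check the vdevs afterward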

A quick rundown on the setup - The Proxmox host has the ZFS pool mounted directly on it, much like FreeNAS would. Instead of installing Samba on Proxmox directly (this likely would have been fine), I've installed it in an Ubuntu 16.04 container and bind mounted the media directory (the only thing that should be shared) into the container. From there, Samba is installed in the container, users are set up, and file and Samba permissions are changed. My headless Deluge instance is also running in a container, with the /Media/Downloads directory bind mounted, and its user/group set up to match the authenticated users group on the Samba server. This way I can still openly manage files (delete, edit, etc.) from my authenticated account, and guests can still read files. As a trial run I'm pretty happy, though I may implement LDAP on all of my servers for easier permissions management of both files and shares.
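For anyone curious, the bind mounts into the containers are one-liners with Proxmox's pct tool; a rough sketch below, where the container IDs and paths are placeholders for my actual setup:
pct set 101 -mp0 /tank/Media,mp=/mnt/media                # expose the host's media dataset inside container 101 (Samba)
pct set 102 -mp0 /tank/Media/Downloads,mp=/mnt/downloads  # same idea for the Deluge container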

This is just a short post to advise of my fun detour, but I intend to have more posts about the migration in the near future. In my opinion, for those looking for an easy to manage hypervisor/file server all-in-one solution akin to the ESXi/FreeNAS setups you usually see, Proxmox is promising.

2017 Infrastructure Update - Networking and Servers

No real plans this year to switch out any desktop or notebook hardware, but my WNDR3700N is getting a bit old, and my servers really aren't being used to their fullest. We'll start with the server side of things, as that's probably the easiest to cover, and I'm still unsure if my choice was right. I can always change things in the future, but I've got what I've got now.

For a long while I've wanted to migrate to an all-in-one virtualization/storage solution for the reduced power consumption, footprint, and noise. I really want to retire the power hungry 95W i7 860 in my ESXi box for a solution that runs on my power sipping i3 2120T. This rules out ESXi with FreeNAS as a guest, as the processor doesn't support VT-d. Proxmox VE was my next stop, as it supports the ZFS file system, but I primarily wanted a nice NAS GUI that also incorporated the virtualization management functionality, so that was a no go. Hyper-V has no ZFS support, so that's out the door. UnRAID has the features I want, but doesn't use ZFS and is also a paid product. Getting frustrated in my search, I finally came across FreeNAS 10. Although in its infancy, it seemed really promising. Prior to 10, users were running services in BSD jails, but with the release of 10, FreeNAS adopted the BSD based virtualization tech bhyve, along with built in Docker support (this is just a boot2docker VM that is manageable from the GUI). On top of that, it's primarily a NAS OS with a fantastic looking brand new GUI. Yes, it's a .0 release, and yes, there's little to no documentation, but I'm a nerd - jumping into this stuff feet first is what I'm all about. FreeNAS was my final choice of NAS OS.

With the operating system picked out, it's on to the hardware. My file server is already running the optimal base (i3-2120T, SuperMicro X9SCL+-F, 4x3TB 7200RPM Toshiba drives, 40GB Intel SSD), but the 4GB of RAM wasn't going to cut it. If this was also going to be doing double duty as a host for virtual machines, it was getting a heat sink upgrade too. After a quick trip to the HCL and many internet searches, my wallet was considerably lighter and I had in my possession 32GB of 1600MHz Crucial ECC DDR3 UDIMMs and a trusty Arctic Cooling Freezer 7 Pro.



While waiting on gear to arrive, I took the opportunity to flash the latest BIOS for proper RAM support, and ensure I had all the appropriate ISOs downloaded and on a flash drive. The day everything arrived I got to work, and let me tell you, frustration abounded. I was able to successfully install the new RAM and the heat sink, but when reconnecting the power supply, I got a flash of lights from everything on the board followed by nothing... Alright, I may have destroyed a power supply, not a big deal. Plugging in my spare and powering things on gives me 4 short beeps followed by a long beep - SuperMicro seems to indicate this means no RAM installed. At this point I think I've fried 300 dollars worth of RAM and a power supply, so I'm feeling pretty dead inside. As a last ditch attempt to resolve things, I connect the original power supply, plug it in, and boot successfully. Turns out SuperMicro boards are just picky about everything.

Onto the FreeNAS install - I've discovered that although flash drives are recommended for the install, you'll be 100% happier installing to an SSD. The install initially went fine, then for some reason FreeNAS couldn't correctly import the shares from my ZFS volume. I was able to redo all the permissions, followed by setting up SMB shares and mapping them on my computer. Confirmed I could read files, great. Attempted to write files, kernel panic on the NAS. At this point, it's 1AM, and I really don't feel like figuring things out and fixing it, so I throw a 2TB drive into my desktop and start copying over the important stuff. Once copied, I nuke the ZFS pool, create a new pool, set up new datasets and shares, and recopy the files. All in, this was done by 4AM and I'm definitely a bit more data-light now, but with a fully functional nightmare of a NAS.

Day 2, I decide to move the install from a flash drive over to the 40GB Intel SSD. Boot times on the flash drive were taking 15-20 minutes, which is abysmal. I pull the configuration backup and get to work. Apparently, SuperMicro boards are also very picky about boot options. After reinstalling numerous times and playing around with boot settings in the BIOS, I was able to get things successfully booted (much faster, as well) from the SSD. I import my pool and upload my configuration... which fails. Not a big deal, that's fine, the shares are still there; I just need to reconfigure users, groups, and permissions, and re-enable all the services. This was finished in half an hour, partially thanks to the SSD, and partially thanks to the previous night's lessons. There were a few times when I had to chown a directory from SSH, but that's about the extent of the command line work I was doing.

Day 3, I get to work creating some virtual machines and playing with Docker. I have to say, Docker is an absolute pleasure to work with! The FreeNAS implementation is a virtual machine running boot2docker, along with a very nice interface for configuring boot options and paths to pass into the container. As long as you're running a container from the FreeNAS library, things "just work". Images from other sources require a bit more work to get running, as they're not tagged for the FreeNAS interface, but over time more and more things are being converted to include the FreeNAS tags. Currently I'm running Deluge and a Minecraft server in containers, and have also played around with PiHole, the Unifi controller, and ddclient. The Minecraft server and ddclient took a bit of work to get functional, but PiHole, the Unifi controller, and Deluge were very simple to create and configure. I will likely start looking into converting existing containers into FreeNAS tagged ones, but I just don't currently have the time.
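Under the hood, the FreeNAS GUI is really just driving Docker inside that boot2docker VM, so a container like Deluge boils down to something along these lines (a sketch only - the image name and paths here are assumptions, and in practice it's all set through the FreeNAS interface):
docker run -d --name deluge \
  -p 8112:8112 \
  -v /mnt/media/Downloads:/downloads \
  linuxserver/deluge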

Virtual machines are a bit of another story. Although the interface is nice, there's no documentation for creating virtual machines that aren't in the FreeNAS library, so after a lot of digging I was mostly able to get things worked out, and it's not so bad now. The initial issue was with disk creation - everything pointed toward having to create ZVOLs to house your virtual machines, but after looking at other VMs' install files, I determined you could just specify a *.img file when creating the virtual disk, and it would store the disk as a .img file, which feels easier to manage. The other issue I ran into was bhyve's GRUB implementation. With Linux installs such as Debian and Ubuntu, which use non-standard grub.cfg locations, you need to tell bhyve's GRUB to boot from that location specifically. For that, you need to create a grub.cfg file (via SSH) in /PathToVM/files/grub/, with the following contents (this is for Debian/Ubuntu and will differ for other operating systems, but you're pointing it at the grub.cfg location inside the actual VM):
configfile (hd0,msdos1)/grub/grub.cfg

Followed by running the below command in the FreeNAS CLI:
vm %VMNAME set boot_directory=grub

I understand this is a .0 release, but still, I shouldn't HAVE to do this to get a virtual machine functional within an operating system that advertises virtualization as a feature. I hope they improve this in future releases, but as of right now I'm just glad I was able to figure things out.

On the plus side, at least the graphs are cool.



I'll update more on FreeNAS as I spend more time with it, however for the time being, it's time to look at network infrastructure and the upgrade I'm going through with that. My WNDR3700N was aging. It's a solid gigabit router that supports DD-WRT/Tomato, however it doesn't have the best range, or AC wireless, which practically everything in my apartment now supports. Being a bit of a networking and infrastructure nerd, I craved something a bit more. My first thought was a pfSense box, but after reading into it further, for less money I could get everything I want and a more enterprise-esque experience out of a Ubiquiti setup. I decided to jump in full force on a full Unifi setup, and although I'm still waiting on my switch, I couldn't be happier so far.



The purchases for the replacement networking setup ended up being:
• Ubiquiti Unifi Security Gateway
• Ubiquiti Unifi AP AC Lite
• Ubiquiti Unifi Switch 8-150W
• TP Link TL-SG108 (Stand in)

Well, that's all fine and dandy, but why Unifi over the regular EdgeRouter and a Unifi AP, or any other AP for that matter? Well, it's the ecosystem. These things perform great, but setup and management is a breeze. Unifi does site management via the "cloud". This can be either a local machine (Cloud Key appliance, virtual machine, or physical box) or a remotely hosted instance. Yeah, you can manage your network from a VPS. On top of that, you can manage multiple sites from the same controller, so if you had multiple networks in multiple locations, they could all be managed by logging into a single web portal. My choice in VPS was an OVH SSD VPS instance at just around 5 dollars a month. A single vCore, 2GB of RAM, and 10GB of SSD storage is plenty for running a single site, and I can even throw other small services onto it as well. I'm so impressed with what you get for the money from OVH that I'm considering moving my web hosting from HostGator over to another VPS instance. But hey, this is more about the hardware, so let's look at the USG.
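For what it's worth, getting the controller onto the VPS is nothing exotic. After adding Ubiquiti's apt repository and signing key (per their documentation - I won't reproduce the repo line here since it changes), it's a plain package install on Ubuntu:
sudo apt-get update
sudo apt-get install unifi    # should pull in its Java/MongoDB dependencies
# the controller's web portal then listens on https://<vps-ip>:8443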



I'll apologize for the cables, as I'll be moving things to a more permanent home once the 8 port PoE switch arrives, which should be soon. The USG is essentially an EdgeRouter Lite internally, however it requires the cloud controller for persistent configuration. It supports a single WAN interface, a LAN interface, a VOIP interface (this can be changed to a second WAN port through the config for failover support), and a console port. Most would think it's odd to see a router with so few ports, but unlike consumer devices, switching is delegated to a separate device that scales with requirements. What does the USG bring that a consumer router doesn't? Higher reliability, higher throughput, more features. VLAN support? Check. Deep packet inspection? Why the heck not. Locally authenticated VPN? Well, it's coming to the GUI in the next release, but it's there. It's not a perfect product, but it's definitely getting closer and closer with each controller release, and the ease of setup and management makes up for that in spades.



The access point I chose was the AC AP Lite. I didn't need the 1750Mbps offered by the AC AP Pro, as my network speeds generally top out at gigabit anyway, and the range is approximately the same between the two. It's 24V PoE powered and comes with its own PoE injector, but once the Unifi 8 port switch is in, it'll be moved straight to that. A separate AP provides a much more stable and reliable wireless connection, especially in a 16 unit apartment building with a fairly saturated 2.4GHz band. In conjunction with the Unifi controller, I can offer a guest WiFi portal, some pretty neat band steering (basically "steering" devices onto the best possible band and channel), dynamic channel selection, band scanning to determine channel saturation, etc.

I'll be honest, I'm just scratching the surface of what this stuff is capable of, and I have a lot of plans to document it over the coming weeks and months. For the time being, I'll enjoy full WiFi coverage anywhere in my apartment, with all of my devices, and then some.

Upgrade Plans: 2016 Continued

Alright, to continue on the last post, I've finalized and ordered the upgrade parts! Everything should be here Tuesday next week (Yay holiday weekends...). The final list changed a bit, but it's not too different:
Intel Core i5 4690K
Asus Z97-PRO GAMER
Samsung 850 EVO 250GB SATAIII SSD

I decided on the Asus over the Gigabyte board in my previous post as I felt it was technically superior for a similar price point. It had better reviews, a newer Intel gigabit LAN controller, and a better onboard audio setup, with an isolated section of the PCB for audio, better capacitors, and an EM shielded audio chip. I plan on dropping my HT Omega Striker, so I'm trying for the best onboard audio in my price range. The processor in the order remains the same - I'm used to having a Core i5, and performance in gaming and day-to-day usage is very similar to the i7 4790K's, so I don't see the point in hyperthreading. I have an ESXi lab box for anything that's massively threaded anyway. I also decided to drop the M.2 SSD in favor of a SATAIII model, mainly because the M.2 would disable 2 SATA ports, and the unit I wanted was back ordered. The 850 EVO has great reviews, and performance seems to be solid. I'm also going to try operating without a dual gigabit NIC in my desktop to streamline my network a bit. I've since removed my poor man's VLAN management switch, put my ESXi management on the main network, and direct connected the file server's second NIC to the ESXi box. This should cut down on cabling tremendously.

Next step in the upgrade train will be a case overhaul, along with a new set of fans. I've decided on:
Fractal Design Define R5 Windowless
3x Noctua NF-A14 PWM
2x Noctua NF-F12 PWM

I'll be keeping my NH-U12P, and replacing the P12 that's currently running on it with dual NF-F12s. This is primarily for PWM control, but the F12 is also a bit of a higher performance model. I unfortunately lost the second set of fan clips for it, but a quick message to Noctua with the invoice for the NH-U12P got a set shipped to me at no charge! Can't complain about that level of support for an 8 year old heat sink. The Define R5 is a quiet case, which is a bit of a departure from what I'm generally used to, but I don't really need the extreme levels of cooling or the gamer looks afforded by my history of Coolermaster cases. I want to start prioritizing noise in computing, and the Define R5 is one of the best options for silent cases at its price point. Coupling this with the amazing performance and sound levels provided by Noctua fans, all the PWM headers on the Z97-PRO GAMER, and Asus' great fan control options, I should be able to have a quiet system that can really push some air when the load gets a bit heavier.

Once everything with the case and initial upgrade is completed, I'll evaluate and determine what might be next. I believe my GTX670 is going to be plenty of video card for my current needs, but if I find myself gaming more, I may look into a GPU upgrade - The Asus Strix cards have really caught my attention with their "0 decibel technology" which basically doesn't spin up the fans until a certain temperature is hit, allowing for silent operation. The GTX970 would be a very good stepping stone from the GTX670, and would definitely fall in line with all of my past GPU purchases (Value enthusiast FTW!). I may also consider replacing all of my mechanical storage in the desktop with solid state stuff. 1TB SSDs are coming down considerably in price, and the file server generally handles any large storage requirements like virtual machines or video storage. Time will tell. Upgrades have been a long time coming, and considering how long of a life I generally get out of my hardware, I don't mind splashing out a bit of money for good stuff.

Anyway, another boring text post, but I do hope to have a lot of pictures of the upgrade.

Linux Server Golden Image



As stated in my previous posts, for my lab and internal network infrastructure, I'm using mostly Linux-based servers. They're pretty reliable, low maintenance, low footprint, and they do the job without a GUI. Of course, when I'm spinning up a new server every day or two for testing purposes, configuration can get a bit repetitive. To make my life easier, I've decided to create a "golden image" of my currently preferred server distro, Ubuntu Server 12.04 64 bit.

To create this base server, I've thrown together a virtual machine in VMWare Workstation: a standard version 9 virtual machine with 1 core, 512MB of RAM, and a 20GB hard drive. The next snapshot will include the removal of non-essentials like the sound card, USB, etc. Once the machine is created, Ubuntu Server 12.04.3 64 bit is installed and configured with a default username and password, along with the SSH service. I don't set a static IP at this point, as I've created a script to take care of that and placed it in the home directory, along with a script to check memory usage.
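The static IP script is nothing fancy; a rough sketch of the idea is below (Ubuntu 12.04 still uses /etc/network/interfaces, the addresses here are placeholders, and my actual script differs a bit, but the gist is the same):
#!/bin/bash
# sketch of the static IP helper - addresses are placeholders for my network
cat > /etc/network/interfaces <<'EOF'
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.10
EOF
/etc/init.d/networking restart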

Once the base install was complete, I installed VMWare tools and Webmin, and called it a day. Once it was shut down, I created a snapshot in Workstation and made notes as to what was done to the virtual machine, which basically prepped it for upload or cloning. To actually upload to my ESXi box, I just use the VMWare Standalone Converter, making sure to adjust the RAM amount depending on the tasks the VM will be handling, and also set the disk to thin provisioned depending on what disk it will be sitting on.

This whole process takes a lot of the work out of creating/deploying scenarios and new labs, which is great. Between this and my golden image of Server 2012 R2, I can have labs up within half an hour. Minus the configuration, of course.

Lab Update

Just dropping a quick post to say everything has been running great! A few hiccups, but it's been a learning process!

The base infrastructure of my network includes Linux based virtual machines, all running Ubuntu Server 12.04 LTS. These lovely little virtual machines let me do more on my main rig without tying up resources. Currently, for my main, everyday virtual machines, I'm running:

  • One BIND based DNS server for internal name resolution

  • One Serviio streaming server with a web interface (I can also control this from an app on my phone/tablet - Sweet!)

  • One web server running with a LAMP stack. This currently hosts my mediawiki install where I keep track of any configuration I do for future reference.

  • One server running the Deluge daemon for downloads. Has a 500GB virtual drive dedicated to it. I access this via either web client or desktop client. (Desktop client actually feels completely local)

  • One Minecraft server running McMyAdmin. This can be a bit flaky, but I learned that a custom built one was much better than the turnkey appliance I downloaded initially.


I also have two other resource pools dedicated to testing and labbing. In the test pool I'm just playing around with Server 2012 R2 as a home domain controller (thinking of moving both my DNS and DHCP to it), along with a couple other random virtual machines. In my lab pool, I have a full suite of Server 2012 R2 machines running various features in the same domain. All of the Server 2012 virtual machines run off the file server, which has provided exceptional performance.

As for the hiccups, I ran into some instability once in a while with Backtrack and USB pass through. The host would go completely unresponsive from time to time with nothing in the logs. After moving Backtrack to my desktop and running it with USB pass through in Workstation 9, the instabilities went away completely. There was also the issue with Deluge constantly crashing after downloading for a few minutes, but it ended up being a bad file.

The Minecraft server was another issue entirely. I was running it as a turnkey virtual appliance for a while, as I didn't want to bother with the configuration at the time. That was until it broke. Luckily I was able to mount the drive in another virtual machine and recover the data. Once the data was recovered, I hand built the next instance of the server, which has so far been a lot more stable. (There was an issue this morning with an MCMA update, however it was resolved rather quickly by killing and rerunning the process, and accepting the upgrade.)

In the future I have plans to implement Puppet or Chef, however that won't be for a little while. If I do, I hope I'll be able to document it!

File Server Update!



The case finally arrived for the file server, which let me get it up and running in a test capacity. Still waiting on the hard drives, but hopefully soon - 4x 3TB Toshibas are on their way as of today. The case ordered was a Lian Li PC-A04B, an mATX case with loads of drive bays, 3 included 120mm fans, and overall really good build quality. I had everything installed the night it arrived, along with a 320GB hard drive to test out ZFS and Napp-It.



As you can see in the above picture, the test system is a bit messy. Unfortunately it's going to be a bit difficult to make it look pretty, as the 24 pin power is placed in a really bad spot. It doesn't really interfere with anything, but it does make hiding that one cable a bit tough. I removed the USB/eSATA/audio panel from the top, as the cables were super long, and I wasn't going to be using those features anyway. Once the new drives arrive, I'll be swapping the hard drive cages around and tidying the cables. Hopefully I'll have some new pictures to show off, as I'm really proud of this little machine. Also, after getting it reset back to defaults, IPMI is amazing. I only have 2 network cables and a power cable attached, but I don't even need to touch the physical machine for anything. Power up/down, KVM, etc... All handled by the IPMI chip.  Anyway, final initial specs below!
Intel Core i3 2120T
SuperMicro X9SCL+-F
4GB Kingston ECC DDR3
Lian Li PC-A04B
Corsair CX430 430w PSU
4x Toshiba DT01ABA300 3TB drives in RAID10 equivalent (Striped mirrors, 6TB usable)

I'm considering adding a 40GB Intel SSD as a ZIL, but I'm not sure how the read/write performance would be with it. The main purpose of this box will be as a file server, even though the ESXi box will have a direct link to it and will probably use some of the storage space for low I/O virtual machines. Some are going to find it a little weird that I'm using striped mirrors for a basic file server instead of RAIDZ or RAIDZ2, but I have my reasons. First off, it makes adding drives slightly cheaper. Although it's less storage space, I only have to purchase 2 drives to increase capacity instead of 3. If I were to fill the server to capacity (3 3.5" drives in the 5.25" bays), it would limit me to 15TB of usable space, which is a considerable chunk, and I'm happy with that. The other reason is raw performance. RAIDZ and RAIDZ2 have limited random I/O compared to mirrors. This will be great if I decide to host some more intensive virtual machines on it, or stream multiple things from it.
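For anyone wondering what that layout looks like from the command line, the pool is just two mirror vdevs striped together, and the SSD would be added as a separate log device; a sketch below (device names are placeholders and will differ depending on the OS):
zpool create tank mirror disk0 disk1 mirror disk2 disk3   # 4x3TB as two mirrored pairs, ~6TB usable
zpool add tank log disk4                                  # the 40GB Intel SSD as a log device (ZIL), if I go that route
zpool status tank                                         # verify the layout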

New NIC For The Server




Picked up an HP NC360T dual port Intel based NIC on the cheap. This will give me a few extra ports to work with when it comes time to add the file server. Installation was simple - just pop it in and it's recognized as an Intel 82571EB ethernet controller with two vmnics available. The server has a total of four ports now, three Intel based and one Realtek based. I'll probably end up using one port for a direct connection to the file server, another for management, another for internal virtual machines, and another for web-facing virtual machines.
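If you'd rather not dig through the vSphere client to confirm the new ports, the ESXi shell will list them too:
esxcfg-nics -l    # lists each vmnic with its driver, link state, speed, and MAC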

Moving From Thick To Thin Provisioning Without vCenter

The great thing about SSDs is the performance. You can run multiple virtual machines off a single drive and they don’t skip a beat. The only downfall of SSDs is capacity - 256GB on the Crucial M4 in my ESXi box isn’t a whole lot to play with.

When I initially set up the virtual machines on my box, I threw them onto spinning disks. I had the extra space, and a thick provisioned disk generally performs better than thin. Moving these disks to an SSD as-is would mean only being able to spin up a few machines before reaching capacity. I really didn’t want to walk through creating new VMs and reconfiguring either, so I started researching.

With vCenter, all you really have to do is migrate the VM to a new disk and select thin provisioning during the migration. Without vCenter, however, that option isn’t available with the free standalone product. There are two workarounds: the first is to SSH into the machine and use vmkfstools to convert the disks. That’s great, but there’s room for error. The second way, and the way I chose, is to use the VMWare Standalone Converter. It lets me basically clone the machine from spinning disk onto the new SSD datastore, thin provisioning in the process, then delete the old machine.
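For completeness, the SSH route boils down to cloning the disk with vmkfstools and pointing the VM at the new file; roughly the following, with the source datastore name being a placeholder for mine:
vmkfstools -i /vmfs/volumes/SpinningDisk/Backtrack4/Backtrack4.vmdk \
           -d thin \
           /vmfs/volumes/CrucialSSD/Backtrack4/Backtrack4_thin.vmdk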

I’m going to be converting my Backtrack install from the server back onto the same server (spinning disk datastore to SSD datastore), and here’s a step by step.


Step one is to load up the vCenter standalone converter, and select the convert option. You’ll get a wizard pop-up that lets you start. Select VMWare Infrastructure Virtual Machine from the drop down, then log into your ESXi box.


Step two is to select from the list the machine you want to convert. I’m going to select my Backtrack4 virtual machine.


Step three is to select your destination. From the drop down, select VMWare Infrastructure Virtual Machine, then log into your ESXi box.


Step four is to create a name for your converted machine. I’m using Backtrack4_thin.


Step five is to select your location. In this case, from the datastore dropdown, I selected CrucialSSD, which is where I want this machine to go.


Step six is to edit the data to copy field. From the type dropdown, select thin. Then select next, or edit any other settings, like memory or vCPU count.


Step seven is to review and select finish. Depending on how fast the datastore is, and how large the conversion will be, it can take anywhere from a few minutes to a few hours.

After the conversion is finished, log into your ESXi box and verify the new virtual machine is functional. Once you’ve verified functionality, delete the original from disk.

The Teardown And Finally A Virtual Playground!


I finally decided to actually go through with the project of tearing down my two machines and reconfiguring. Almost all my watercooling gear has been sold, and I’ve downsized from the HAF932 to the HAF912. So, my new system? Pretty similar to what I was running about 2 years ago.



i5 750 w/ Noctua NH-U12P (Single fan)
16GB Mushkin Blackline DDR3 (Finally sent the bad RAM for RMA!)
EVGA P55 FTW
Gigabyte HD6850 Windforce
HT Omega Striker 7.1 sound card
120GB OCZ Vertex 2 Extended
640GB Western Digital Black
2x1TB Seagate Barracuda (RAID0)
Corsair TX650
HAF912


Not too shabby. I also moved away from my 2x Acer 23” LCDs to a single Samsung 21.5” LCD. The desk looks a little more empty now, but it’s nice to have the breathing room. I still kept my headphones and speakers, and my Blue Yeti.

Everything else went into creating an ESXI box. A bit of trading and selling left me with a fairly capable machine that I’m pretty happy with. The top picture is the new case, a Silverstone PS08B. It’s by far the nicest little 30 dollar case I’ve played with. Specs below:



i7 860 w/ Stock i5/i7 LGA1156 heatsink
16GB GSKILL DDR3
Gigabyte P55M-UD2
Intel Pro 1000 PT Gigabit NIC
150GB Velociraptor 10KRPM 2.5”
500GB Seagate 7200RPM 3.5”
320GB Toshiba 5400RPM 2.5”
256GB Crucial M4 SSD 2.5”
40GB Intel SSD 2.5”
Antec Earthwatts 430w
Silverstone PS08B


It was living in the HAF932 before, but that just looked foolish. At 30 dollars from Memory Express, the PS08B was really a no brainer. The 16GB of GSKILL DDR3 came from the HardwareCanucks forums, as well as the Pro 1000 PT, the 40GB SSD, and the 256GB Crucial M4. There’s a dual port HP NC360T on the way for another couple NICs. I’ll go over the configuration a bit more in a future post, but so far things are awesome.
Oh, and for a size comparison between the PS08 and the HAF932, just take a look below.

File Server Plans




Once again I feel the need... The need for more storage space. And with more storage space comes newer, better, more exciting hardware! So my current file server build is pretty basic, something just hobbled together from spare parts.

  • Intel Pentium Dual Core e5200

  • mATX Gigabyte LGA775 board

  • 2GB DDR2

  • 1x1TB+1x2TB in spanned volume (I know, I’m bad)

  • Couple of 2.5” drives for OS and download caching

  • 350w Sparkle power supply

I’m honestly surprised the thing has lasted this long without a drive failure. With my luck, a drive will fail while I’m typing this. So, we’re going to address the weak points in my current server build with the new one.

Power consumption

The current server consumes a fair bit of power. The processor really isn’t horrible power wise, but it is a 65w TDP part, and it’s running on an older, more power hungry chipset. This is going to be remedied by a much more powerful, more efficient processor: the Intel Core i3 2120T. It’s a dual core processor running at 2.6GHz, but it’s built on a newer, more energy efficient process, has a TDP of 35w, and by benchmarks is about twice as powerful. Part of that could easily be due to the included hyperthreading, but a lot of it just comes down to a better overall manufacturing process and more efficient transistors. The i3 2120T will find itself at home in my new file server build.

Expandability

The current file server is running on an mATX board which only has 2 DIMM slots, which would be alright if they took DDR3 memory. The cost of DDR2 is practically outrageous compared to DDR3, even ECC DDR3. Sure, I could populate it with 8GB of DDR2, but that’s as far as it would go. Not only that, but without ECC, one is looking at the possibility of running into errors while processing, which can lead to corrupted files. Another major limitation of the board is the minimal number of expansion slots; one can only do so much with a single PCI-e x16 slot. The board also has fairly limited I/O options, including the serviceable, though not ideal, Realtek NIC. The final major limitation is the limited number of SATA ports - the board only has 4, making an expansion card practically a necessity. Although an expansion card will be put to use in the new server, it’s not a necessity right off the bat, so I can hold off on that purchase until required.
For the new motherboard, I chose a SuperMicro X9SCL+-F. This board is extremely flexible, with some great features to boot. It’s an mATX format with 3 PCI-e x8 slots, 6 onboard SATA ports, IPMI for KVM over LAN, dual Intel gigabit NICs, an onboard USB port for OS installs, and 4 DIMM slots that accept only ECC DDR3. The board should be rock solid in this regard, serving up lots of usability niceties. Hell, with IPMI, I will only ever have to have ethernet and power hooked up. I may still go with a Tyan S5512WGM2NR due to the onboard LSI 2008 RAID controller, which, when flashed with IT firmware, would bring the server up to 14 usable SATA ports. It also includes triple Intel based LAN, providing even more interfaces for higher bandwidth applications. That, however, is probably more trouble than it’s worth in my case, so I will more than likely stick with the SuperMicro board.
For drives, the server will be running pairs of 3TB drives in RAID 1. It will start with one pair, giving 3TB of usable storage space, and when I add another pair, it will be striped with the existing pair for a RAID 10 equivalent. This should increase performance while still maintaining a much higher level of redundancy than my current spanned volume. Once I run out of onboard SATA ports, I will add a controller card, possibly a SuperMicro AOC-SAS2LP-MV8, which would allow a further 8 drives to be connected. Ideally, at some point I would also add an SLC SSD for the ZIL, along with more RAM for a larger ARC cache.
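In ZFS terms, that growth path is just adding mirror vdevs to the pool as pairs of drives come in; a sketch below (generic device names - on the actual OS they’ll look different):
zpool create tank mirror disk0 disk1   # first pair: 3TB usable
zpool add tank mirror disk2 disk3      # second pair striped in: RAID 10 equivalent, 6TB usable
zpool add tank mirror disk4 disk5      # and so on, two drives at a time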

That about covers how I’m going to make up for the downsides of the current server; as for the rest of the parts, I believe I have decided on them. For a case, I figure a Fractal Design Define R4 will be more than adequate for what I want to do for storage, however a Fractal Design Define XL may also be considered due to the extra 5.25” bays, along with the 2 extra 3.5” bays. The Fractal cases look absolutely beautiful, and are designed for silence and good cooling. The 3.5” bays all have rubber grommets attached to limit hard drive vibrations, and all vents either have the option of being blocked, or include a dust filter to keep things clean.
As for a power supply, I haven’t quite decided on what model, however I am leaning toward an Antec Earthwatts power supply. I have never had any trouble with Antec supplies, and it should be enough to provide the power required for the server. Ideally, it will be a platinum model, to further cut down on power usage.
For hard drives, I’m leaning toward Seagate 7200RPM drives for the cost to performance ratio, as the 3TB models normally go for around $130. I haven’t had any trouble with the Seagate drives that I have purchased in the past, so I have no reason to believe it will be any different this time around.

The final build should look like this:

  • Intel Core i3 2120T

  • SuperMicro X9SCL+-F

  • 16GB ECC DDR3

  • 8x3TB drives in RAID 10 for 12TB usable or 10x3TB drives in RAID 10 for 15TB usable

  • 20GB Intel SLC SSD for ZIL cache

  • 650w Antec Earthwatts modular

  • Fractal Design Define R4 or Fractal Design Define XL

  • OpenIndiana installed to 16GB USB thumb drive on internal USB header.

Over the course of the build, the server will basically grow in 3TB intervals. The bump to 6TB will also bump the RAM to 8GB, and the bump to 9TB will bump the RAM to 16GB. I’ll add the other drives as needed from there.

That about covers the file server. I’ll be sure to add any hardware updates and whatnot as more parts roll in. I’m hoping to have everything going by Christmas, and ideally at least a functional testbed without drives by the end of October. In future posts I’ll also be logging a virtual server build, and a possible desktop upgrade. And guess what? There’s some more reviews around the corner!

Stay tuned.

Finished! ...Well, sort of.

Alright! It took longer than expected to actually make another post, but hey, that's alright! I ended up finishing Water FTW 1.0 the night of the 16th when I returned home. I really didn't run into any trouble at all, other than the Swiftech micro reservoir being a bit hard to mount, especially with properly routing the tubing... I ended up getting it done, but the tubing wasn't quite as clean as I would have liked. Oh well, it was done. Anyway, here's the first phase, along with all the parts that had been received.



Above is the weekend haul. You can see here, I brought home not only a fair number of shiny fittings and water blocks, but I also have... WHAT'S THAT? 16GB OF DDR3? Oh my. Yeah, I upgraded to 16GB of DDR3, Mushkin Blacklines. All of the fittings were ordered from Dazmode/NCIX, and arrived by Friday! Service from NCIX is normally amazing, but they went above and beyond with these 3 orders. The only thing that didn't arrive was the package from Elwoodz, which I was initially disappointed about, but I got over it awful quick. Also pictured are the 72 K-Cups from Singlecup.ca, my new spot for my coffee fix. 2 boxes of dark roast, a box of medium roast, and a box of jet fuel.










Above are the pictures from Water FTW 1.0 that actually turned out okay. As you can see from the first one, it's a bit of a tight fit for the reservoir with the tubing. I only had one leak during the build, and it was actually the fill port of all things. The EK blocks are simply great. Amazing machining, even if the GPU blocks seem a bit rough. I guess seeing finishes like Zalman, or the base of the Supreme HF, kinda spoiled me... I was happy with this, but I wanted it to look even better... So off to Dazmode. The results are posted below...





As you can see above, with the Dazmode order I decided to add a lovely tube reservoir. This particular tube is the EK Multioption 150 x2 Advanced. It comes with 3 holes on the top, 5 on the bottom, 3 tubes for inside the reservoir to reduce cyclone formation, along with an anti cyclone attachment. I decided to go with the tubes. This really reduced the number of sharp turns, and actually shortened my tubing runs, which was my main goal. My second goal was a usable drain port. Because, holy crap, holding a full HAF 932 over a tub is NOT a fun experience. My drain port is right after the pump, and consists of a T block with a quick disconnect attached to it. The female end is attached to a length of tube, and I keep it for draining. This really does simplify things... A lot.



Picture of the drain port above. The Koolance quick disconnects are simply amazing - really nice build quality to them. You can also see my pump mounting choice here. I decided to zip tie it to the drive cage, with some neoprene from a cheap laptop case acting as a vibration dampener. I can't hear it, and I couldn't really even hear it when I was leak testing. Maybe it's just me, but the MCP355 isn't loud with proper mounting. Definitely not audible over the fans, which are pretty darn quiet as it is.



Upsettingly, I didn't quite have room for the above. This doesn't mean I won't try to fit it in at a later date, but I had to leave this wonderful single radiator out for now. I am, however, very happy with the temperatures I'm getting from this triple radiator. Sadly, the highest I can manage to push the processor with hyperthreading on, while still maintaining good temperatures, is 3.8GHz. I'm partially blaming this on the 16GB of DDR3. At 1.25V in the BIOS and 1.25V on the VTT, I can manage 3.8GHz with a maximum temperature of 63 degrees in LinX. That's a 25 pass run with all memory. 4GHz required over 0.1V more, and shot temperatures up another 10 degrees, if not more. My happy medium is 3.8GHz, as the extra "performance" isn't really worth the heat. I'm very happy with a processor that idles around 22-24 degrees, and has an average load temperature of around 28-30. Even gaming doesn't push it all that far. The maximum temperature I've seen during gaming wasn't even close to the 63 degree max in LinX, and the GPU doesn't even hit 50... Oh, and by the way, the GPU idles around 28 degrees. Not complaining there. On the heatsink, it would easily hit 35-40 degrees idle, and I don't think I had ever seen it go under 70 degrees with a gaming load on it.



Oh, as for that 16GB of RAM... This is what I've been doing. ESXi 4.1 running in Workstation 7, virtualizing 3 different operating systems! I plan on doing a lot more tests with it, but I'm a little limited by the single Western Digital Black... I think, however, I can use this as an excuse to set up a RAID array!

Well, that's enough for tonight... I'll definitely be back to post again. And, I'll leave this post with one more picture.



-Jon