No real plans this year to switch out any desktop or notebook hardware, but my WNDR3700N is getting a bit old, and my servers really aren't being used to their fullest. We'll start with the server side of things, as that's probably the easiest to cover, and I'm still unsure if my choice was right. I can always change things in the future, but I've got what I've got now.
For a long while I've wanted to migrate to an all-in-one virtualization/storage solution for the reduced power consumption, footprint, and noise. I really want to retire the power-hungry 95W i7-860 in my ESXi box for a solution that runs on my power-sipping i3-2120T. That rules out ESXi with FreeNAS as a guest, as the processor doesn't support VT-d. Proxmox VE was my next stop, as it supports the ZFS file system, but I primarily wanted a nice NAS GUI that also incorporated the virtualization management functionality, so that was a no-go. Hyper-V has no ZFS support, so that's out the door. unRAID has the features I want, but isn't ZFS and is also paid software. Getting frustrated in my search, I finally came across FreeNAS 10. Although in its infancy, it seemed really promising. Prior to 10, users were running services in BSD jails, but with the release of 10, FreeNAS adopted the BSD-based virtualization tech bhyve, along with built-in Docker support (this is just a boot2docker VM that's manageable from the GUI). On top of that, it's primarily a NAS OS with a fantastic-looking brand-new GUI. Yes, it's a .0 release, and yes, there's little to no documentation, but I'm a nerd - jumping into this stuff feet first is what I'm all about. FreeNAS was my final choice of NAS OS.
With the operating system picked out, it's on to the hardware. My file server is already running the optimal base (i3-2120T, SuperMicro X9SCL+-F, 4x3TB 7200RPM Toshiba drives, 40GB Intel SSD), but the 4GB of RAM wasn't going to cut it. If this was also going to do double duty as a host for virtual machines, it was getting a heat sink upgrade too. After a quick trip to the HCL and many internet searches, my wallet was considerably lighter and I had in my possession 32GB of 1600MHz Crucial ECC DDR3 UDIMMs and a trusty Arctic Cooling Freezer 7 Pro.
While waiting on gear to arrive, I took the opportunity to flash the latest BIOS for proper RAM support, and to make sure I had all the appropriate ISOs downloaded and on a flash drive. The day everything arrived I got to work, and let me tell you, frustration abounded. I was able to successfully install the new RAM and the heat sink, but when reconnecting the power supply, I got a flash of lights from everything on the board followed by nothing... Alright, I may have destroyed a power supply, not a big deal. Plugging in my spare and powering things on gives me 4 short beeps followed by a long beep - SuperMicro seems to indicate this means no RAM installed. At this point I think I've fried 300 dollars worth of RAM and a power supply, and I'm pretty dead inside. As a last-ditch attempt to resolve things, I connect the original power supply, plug it in, and boot successfully. Turns out SuperMicro boards are just picky about everything.
Onto the FreeNAS install - I've discovered that although flash drives are recommended for the install, you'll be 100% happier installing to an SSD. The install initially went fine, then for some reason FreeNAS couldn't correctly import the shares from my ZFS volume. I was able to redo all the permissions, then set up SMB shares and map them to my computer. Confirmed I could read files, great. Attempted to write files, and the NAS kernel panics. At this point it's 1AM, and I really don't feel like figuring things out and fixing it, so I throw a 2TB drive into my desktop and start copying over the important stuff. Once copied, I nuke the ZFS pool, create a new pool, set up new datasets and shares, and recopy the files. All in, this was done by 4AM and I'm definitely a bit more data-light now, but with a fully functional nightmare of a NAS.
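For reference, that rebuild boils down to a handful of ZFS operations. Here's a rough sketch of the equivalent shell commands - the pool name, dataset names, disk devices, and user/group are all hypothetical stand-ins, and FreeNAS normally drives this through the GUI with pools mounted under /mnt:

```shell
# WARNING: destroys the pool - only do this after the data is copied off!
zpool destroy tank

# Recreate a 4-disk RAIDZ1 pool from the drives
zpool create -m /mnt/tank tank raidz1 ada1 ada2 ada3 ada4

# Carve out fresh datasets for the SMB shares
zfs create tank/media
zfs create tank/documents

# Fix ownership so the SMB user can actually write
chown -R smbuser:smbgroup /mnt/tank/media /mnt/tank/documents
```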
Day 2, I decide to move the install from the flash drive over to the 40GB Intel SSD. Boot times on the flash drive were taking 15-20 minutes, which is abysmal. I pull the configuration backup and get to work. Apparently, SuperMicro boards are also very picky about boot options. After reinstalling numerous times and playing around with boot settings in the BIOS, I was able to get things successfully booted (much faster, as well) from the SSD. I import my pool and upload my configuration... Which fails. Not a big deal, that's fine, the shares are still there; I just need to reconfigure users, groups, and permissions, and re-enable all the services. This was finished in half an hour, partially thanks to the SSD, and partially thanks to the previous night's lessons. There were a few times when I had to chown a directory over SSH, but that's about the extent of the command line work I was doing.
Day 3, I get to work creating some virtual machines and playing with Docker. I have to say, Docker is an absolute pleasure to work with! The FreeNAS implementation is a virtual machine running boot2docker, along with a very nice interface for configuring boot options and paths to throw into the container. As long as you're running a container from the FreeNAS library, things "just work". Dockerfiles from other libraries require a bit more work to get running, as they're not tagged for the FreeNAS interface, but over time more and more things are getting converted to include the FreeNAS tags that make things just work. Currently I'm running Deluge and a Minecraft server in containers, and have played around with Pi-hole, the Unifi controller, and ddclient as well. The Minecraft server and ddclient took a bit of work to get functional, but Pi-hole, the Unifi controller, and Deluge were very simple to create and configure. I will likely start looking into converting existing containers into FreeNAS-tagged ones, but I just don't have the time right now.
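Under the hood, the GUI is just driving an ordinary Docker daemon inside the boot2docker VM, so each container can be reasoned about as a plain docker run invocation. A hypothetical equivalent for the Deluge container - the image is the community linuxserver one, and the volume path is an assumption for illustration:

```shell
# Roughly what the FreeNAS GUI sets up for Deluge:
# detached container, web UI exposed on 8112, downloads mapped to a dataset
docker run -d --name deluge \
  -p 8112:8112 \
  -v /mnt/tank/downloads:/downloads \
  linuxserver/deluge
```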
Virtual machines are a bit of another story. Although the interface is nice, there's no documentation for creating virtual machines that aren't in the FreeNAS library, so it took a lot of digging to mostly get things worked out; it's not so bad now. The initial issue was disk creation - everything pointed toward having to create zvols to house your virtual machines, but after looking through other install files, I determined you can just specify a *.img file when creating the virtual disk, and it will store the disk as a .img file, which feels easier to manage. The other issue I ran into was bhyve's GRUB implementation. With Linux installs such as Debian and Ubuntu, which use non-standard grub.cfg locations, you need to tell bhyve's GRUB to boot from that location specifically. For that, you need to create a grub.cfg file (via SSH) in /PathToVM/files/grub/, with the following contents (this is for Debian/Ubuntu and will differ for other operating systems, but you're pointing it at the grub.cfg location on the actual VM):
configfile (hd0,msdos1)/grub/grub.cfg

Followed by running the below command in the FreeNAS CLI:

vm %VMNAME set boot_directory=grub
I understand this is a .0 release, but still, I shouldn't HAVE to do this to get a virtual machine functional within an operating system that advertises virtualization as a feature. I hope they improve this in future releases, but as of right now I'm just glad I was able to figure things out.
On the plus side, at least the graphs are cool.
I'll update more on FreeNAS as I spend more time with it, but for the time being, it's time to look at network infrastructure and the upgrade I'm going through with that. My WNDR3700N was aging. It's a solid gigabit router that supports DD-WRT/Tomato, but it doesn't have the best range, or AC wireless support, which practically everything in my apartment is capable of now. Being a bit of a networking and infrastructure nerd, I craved something a bit more. My first thought was a pfSense box, but after reading into it further, for less money I could get everything I want and a more enterprise-esque experience out of an Ubiquiti setup. I decided to jump in full force on a full Unifi setup, and although I'm still waiting on my switch, I couldn't be happier so far.
The purchases for the replacement networking setup ended up being:
• Ubiquiti Unifi Security Gateway
• Ubiquiti Unifi AP AC Lite
• Ubiquiti Unifi Switch 8-150W
• TP-Link TL-SG108 (stand-in)
Well, that's all fine and dandy, but why Unifi over the regular EdgeRouter and a Unifi AP, or any other AP for that matter? Well, it's the ecosystem. These things perform great, and setup and management are a breeze. Unifi does site management via the "cloud". This can be either a local machine (Cloud Key appliance, virtual machine, or physical box) or a remotely hosted instance. Yeah, you can manage your network from a VPS. On top of that, you can manage multiple sites from the same VPS, so if you had multiple networks at multiple sites, they could all be managed by logging into a single web portal. My choice in VPS was an OVH SSD VPS instance at just around 5 dollars a month. A single vCore, 2GB of RAM, and 10GB of SSD storage is plenty for running a single site, and I can even throw other small services onto it as well. I'm so impressed with what you get for the money from OVH that I'm considering moving my web hosting from HostGator over to another VPS instance. But hey, this is more about the hardware, so let's look at the USG.
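Standing the controller up on a Debian/Ubuntu VPS is straightforward, since Ubiquiti ships apt packages. A rough sketch of the install - the repository line (and its signing key, omitted here) should be taken from Ubiquiti's current install guide rather than trusted from this post:

```shell
# Add Ubiquiti's apt repository and install the controller package
echo 'deb https://www.ui.com/downloads/unifi/debian stable ubiquiti' | \
  sudo tee /etc/apt/sources.list.d/100-ubnt-unifi.list
sudo apt-get update && sudo apt-get install -y unifi

# The controller web UI then lives at https://<vps-ip>:8443;
# devices phone home over 8080/tcp and STUN over 3478/udp,
# so those ports need to be open on the VPS firewall.
```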
I'll apologize for the cables, as I'll be moving things to a more permanent home once the 8-port PoE switch arrives, which should be soon. The USG is essentially an EdgeRouter Lite internally, but it requires the cloud controller for persistent configuration. It supports a single WAN interface, a LAN interface, a VoIP interface (this can be changed to a second WAN port through the config for failover support), and a console port. Most would think it's odd to see a router with such a low number of ports, but unlike consumer devices, switching is delegated to a separate, more powerful device that scales based on requirements. What does the USG bring that a consumer router doesn't? Higher reliability, higher throughput, more features. VLAN support? Check. Deep packet inspection? Why the heck not. Locally authenticated VPN? Well, it's coming to the GUI in the next release, but it's there. It's not a perfect product, but it's definitely getting closer with each controller release, and the ease of setup and management makes up for that in spades.
The access point I chose was the AP AC Lite. I didn't need the 1750Mbps offered by the AP AC Pro, as my network speeds generally top out at gigabit anyway, and the range is approximately the same between the two. It's 24V PoE-powered and comes with its own PoE injector, but once the Unifi 8-port switch is in, it'll be moved straight to that. A separate AP provides a much more stable and reliable wireless connection, especially in a 16-unit apartment building with a fairly saturated 2.4GHz band. In conjunction with the Unifi controller, I can offer a guest WiFi portal, some pretty neat band steering (basically "steering" devices onto the best possible band and channel), dynamic channel selection, band scanning to determine channel saturation, etc.
I'll be honest: I'm just scratching the surface of what this stuff is capable of, and I have a lot of plans to document it over the coming weeks and months. For the time being, I'll enjoy full WiFi coverage anywhere in my apartment, with all of my devices, and then some.