Adobe Creative Cloud Photography Plan




I've been using strictly Photoshop CS3 for a very long while, and while it's still functional, I really wanted to start shooting RAW on my A6000. I quickly discovered that I needed a solution that would work with Sony's RAW files (fun fact: CS3 does not support them) and also let me quickly categorize, cull, and adjust photos. I've used Lightroom before, but mostly just played with it. When I saw that I could get the latest Lightroom and Photoshop for 10 dollars (USD) a month, I decided to jump on it and give it a try. I have to say I'm really happy I did! Although I was initially after the new version of Photoshop, Lightroom has really impressed me and has become essential to my workflow. I can shoot a lot, import to my file server, cull what I don't like, and quickly edit and compare. Anything heavier can be brought into Photoshop (i.e. exposure blending, blemish removal, etc.). The basics like color correction, exposure, sharpness, noise reduction, lens correction, and cropping can all be completed in Lightroom, all while preserving the original image.

I have a lot of learning to do still, but I pick up new things every time I use the software. My only real complaint is that both pieces of software are pretty big resource hogs. I understand working with RAW files is a bit memory intensive, but CS3 was never this heavy, even with large projects. I never thought I'd need more than 16GB of RAM in my desktop at this point in time, but Creative Cloud is proving me wrong. I look forward to seeing how my notebook handles it, considering it's a lightweight compared to my desktop.

I still have some exploring to do - Lightroom Mobile is seemingly powerful, the 20GB of online backup feels like it could disappear quickly with RAW files, and I'm not sure about Behance yet - but even with just Photoshop and Lightroom, I'm happy with the money spent.

Creative Cloud Photography Plan

FreeNAS Corral - Short Lived

I'll just start out with this link...

So, Corral was pretty garbage from a back end/development standpoint, and they decided to axe it. If you're interested in the whole story, I'd take a read through the thread, but TL;DR - They're rolling all the features of Corral into 9.10 with a newer UI.

What does that mean for my install though? Well, Corral isn't a production release anymore, and honestly, it's shaken whatever faith I had in FreeNAS. I'm adopting Proxmox VE as my all in one solution. I know I ragged on it in the previous post, but I decided to roll a VM install to test it out, and after killing and reinstalling it a few times, I found it seemed really stable. Documentation on existing all in one setups was few and far between, but I'm pretty comfortable with Debian, and I'm up for a much more pleasing challenge after BSD. So this afternoon, I nuked my FreeNAS install and installed Proxmox VE.




Here's the fun part about Proxmox. There was no struggle. The install was seamless. The import of my ZFS pool was literally a single command, and everything just worked. Creating shares was an Ubuntu container and a mount point away. It actually took me less than an hour to configure the sharing I wanted and get a headless Deluge instance running. Fine tuning took a bit longer, but was considerably less painful than FreeNAS. Virtual machines JUST WORK. There's no messing around with config files and setting GRUB boot points. There are no GUI errors regarding "This virtual machine doesn't exist" that disappear after logging out and logging back in. Settings save when you save them, instead of having to apply them multiple times over. It's going to take a while to get it all to the point I want it at, and to be fully confident in managing it, but it's definitely a treat so far.
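
For reference, that single command is just ZFS's native import. Assuming a pool named tank (yours will obviously differ):

# -f is needed because the pool was last in use by another system (the old FreeNAS install)
zpool import -f tank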

A quick rundown on the setup - The Proxmox host has the ZFS pool mounted directly on it, much like FreeNAS would. Instead of installing Samba on Proxmox directly (which likely would have been fine), I've installed it in an Ubuntu 16.04 container and bind mounted the media directory (the only thing that should be shared) into the container. From there, Samba is installed in the container, users are set up, and file and Samba permissions are changed. My headless Deluge instance is also running in a container, with the /Media/Downloads directory bind mounted, and its user/group set up to match the authenticated users group on the Samba server. This way I can still openly manage files (delete, edit, etc.) from my authenticated account, and guests can still read files. As a trial run I'm pretty happy, though I may implement LDAP on all of my servers for easier permissions management of both files and shares.
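
To make that rundown a bit more concrete, here's roughly what the container plumbing looks like from the Proxmox host's shell. The container ID, pool path, and share name are hypothetical stand-ins for mine, and an unprivileged container would also need UID/GID mapping, which I'm glossing over here:

# bind mount the host's media dataset into container 101
pct set 101 -mp0 /tank/Media,mp=/mnt/media

# then, inside the container: install Samba and define the share
apt install samba
cat >> /etc/samba/smb.conf <<'EOF'
[media]
   path = /mnt/media
   read only = no
   valid users = @authusers
EOF
systemctl restart smbd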

This is just a short post to advise of my fun detour, but I intend to have more posts about the migration in the near future. In my opinion, for those looking for an easy to manage hypervisor/file server all in one solution akin to the ESXi/FreeNAS setups you usually see, Proxmox is promising.

2017 Infrastructure Update - Networking and Servers

No real plans this year to switch out any desktop or notebook hardware, but my WNDR3700N is getting a bit old, and my servers really aren't being used to their fullest. We'll start with the server side of things, as that's probably the easiest to cover, and I'm still unsure if my choice was right. I can always change things in the future, but I've got what I've got now.

For a long while I've wanted to migrate to an all in one virtualization/storage solution for the reduced power consumption, footprint, and noise. I really want to retire the power hungry 95W i7 860 in my ESXi box for a solution that runs on my power sipping i3 2120T. This rules out ESXi with FreeNAS as a guest, as the processor doesn't support VT-d. Proxmox VE was my next stop, as it supports the ZFS file system, but I primarily wanted a nice NAS GUI that also incorporated the virtualization management functionality, so that was a no go. Hyper-V has no ZFS support, so that's out the door. UnRAID has the features I want, but doesn't use ZFS and is also paid software. Getting frustrated in my search, I finally came across FreeNAS 10. Although in its infancy, it seemed really promising. Prior to 10, users were running services in BSD jails, but with the release of 10, FreeNAS adopted the BSD based virtualization tech bhyve, along with built in Docker support (this is just a boot2docker VM that is manageable from the GUI). On top of that, it's primarily a NAS OS with a fantastic looking, brand new GUI. Yes, it's a .0 release, and yes, there's little to no documentation, but I'm a nerd - jumping into this stuff feet first is what I'm all about. FreeNAS was my final choice of NAS OS.

With the operating system picked out, it's onto the hardware. My file server is already running the optimal base (i3-2120T, SuperMicro X9SCL+-F, 4x3TB 7200RPM Toshiba drives, 40GB Intel SSD), but the 4GB of RAM wasn't going to cut it. If this was also going to be doing double duty as a host for virtual machines, it was getting a heat sink upgrade too. After a quick trip to the HCL and many internet searches, my wallet was considerably lighter and I had in my possession 32GB of 1600MHz Crucial ECC DDR3 UDIMMs and a trusty Arctic Cooling Freezer 7 Pro.



While waiting on gear to arrive, I took the opportunity to flash the latest BIOS for proper RAM support, and ensure I had all the appropriate ISOs downloaded and on a flash drive. The day everything arrived I got to work, and let me tell you, frustration abounded. I was able to successfully install the new RAM and the heat sink, but when reconnecting the power supply, I got a flash of lights from everything on the board followed by nothing... Alright, I may have destroyed a power supply, not a big deal. Plugging in my spare and powering things on gives me 4 short beeps followed by a long beep - SuperMicro seems to indicate this means no RAM installed. At this point I think I've fried 300 dollars worth of RAM and a power supply, so I'm pretty dead inside. As a last ditch attempt, I connect the original power supply, plug it in, and boot successfully. Turns out SuperMicro boards are just picky about everything.

Onto the FreeNAS install - I've discovered that although flash drives are recommended for the install, you'll be 100% happier installing to an SSD. The install initially went fine, then for some reason FreeNAS couldn't correctly import the shares from my ZFS volume. I was able to redo all the permissions, set up SMB shares, and map them to my computer. Confirmed I could read files, great. Attempting to write files kernel panicked the NAS. At this point, it's 1AM, and I really don't feel like figuring things out and fixing it, so I throw a 2TB drive into my desktop, and start copying over the important stuff. Once copied, I nuked the ZFS pool, created a new pool, set up new datasets and shares, and recopied the files. All in, this was done by 4AM and I'm definitely a bit more data-light now, but with a fully functional nightmare of a NAS.
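
For the curious, FreeNAS handles the nuke and recreate through the GUI (and partitions the disks first), but the underlying operations boil down to something like the following. Pool name, layout, and device names are assumptions based on my 4x3TB setup:

# destroys all data - that was the point here, but be careful
zpool destroy tank
# recreate as a single raidz vdev across the four disks
zpool create tank raidz ada0 ada1 ada2 ada3
# fresh datasets to hang the shares off of
zfs create tank/Media
zfs create tank/Media/Downloads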

Day 2, I decide to move the install from a flash drive over to the 40GB Intel SSD. Boot times on the flash drive were taking 15-20 minutes, which is abysmal. I pull the configuration backup, and get to work. Apparently, SuperMicro boards are also very picky about boot options. After reinstalling numerous times and playing around with boot settings in the BIOS, I was able to get things booting successfully (and much faster) from the SSD. I import my pool, and upload my configuration... Which fails. Not a big deal, that's fine, the shares are still there, I just need to reconfigure users, groups, and permissions, and re-enable all the services. This was finished in half an hour, partially thanks to the SSD, and partially thanks to the previous night's lessons. There were a few times when I had to chown a directory from SSH, but that's about the extent of the command line work I was doing.
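
Those one-offs were nothing exotic, just along the lines of the following (user, group, and path are hypothetical):

chown -R myuser:authusers /mnt/tank/Media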

Day 3, I get to work creating some virtual machines and playing with Docker. I have to say, Docker is an absolute pleasure to work with! The FreeNAS implementation is a virtual machine running boot2docker, along with a very nice interface for configuring boot options and the paths to pass into each container. As long as you're running a container from the FreeNAS library, things "just work". Images from other sources require a bit more work to get running, as they're not tagged for the FreeNAS interface, but over time more and more of them are being converted to include the FreeNAS tags. Currently I'm running Deluge and a Minecraft server in containers, and have also played around with PiHole, the Unifi controller, and ddclient. The Minecraft server and ddclient took a bit of work to get functional, but PiHole, the Unifi controller, and Deluge were very simple to create and configure. I will likely start looking into converting existing containers into FreeNAS tagged ones, but I just don't currently have the time.
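
For a sense of what the FreeNAS interface is assembling behind the scenes, it's an ordinary docker run. A hand-rolled Deluge container would look roughly like this - the image choice and host paths are my assumptions, not necessarily what FreeNAS ships:

# headless Deluge with its web UI exposed and downloads kept on the pool
docker run -d --name deluge \
  -p 8112:8112 \
  -v /mnt/tank/Media/Downloads:/downloads \
  -v /mnt/tank/docker/deluge:/config \
  linuxserver/deluge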

Virtual machines are a bit of another story. Although the interface is nice, there's no documentation for creating virtual machines that aren't in the FreeNAS library, so after a lot of digging I mostly got things worked out, and it's not so bad now. The initial issue was with disk creation - everything pointed toward having to create ZVOLs to house your virtual machines, but after looking into other install files, I determined you could just specify a *.img file when creating the virtual disk, which feels easier to manage. The other issue I ran into was bhyve's GRUB implementation. With Linux installs such as Debian and Ubuntu that use non-standard grub.cfg locations, you need to tell bhyve's GRUB to boot from that location specifically. For that, you need to create a grub.cfg file (via SSH) in /PathToVM/files/grub/, with the contents below (this is for Debian/Ubuntu and will differ for other operating systems, but you're pointing it at the grub.cfg location inside the actual VM):
configfile (hd0,msdos1)/grub/grub.cfg

Followed by running the below command in the FreeNAS CLI:
vm %VMNAME set boot_directory=grub

I understand this is a .0 release, but still, I shouldn't HAVE to do this to get a virtual machine functional within an operating system that advertises virtualization as a feature. I hope they improve this in future releases, but as of right now I'm just glad I was able to figure things out.

On the plus side, at least the graphs are cool.



I'll update more on FreeNAS as I spend more time with it, however for the time being, it's time to look at network infrastructure and the upgrade I'm going through with that. My WNDR3700N was aging. It's a solid gigabit router that supports DD-WRT/Tomato, however it doesn't have the best range, or the AC wireless support that practically everything in my apartment is capable of now. Being a bit of a networking and infrastructure nerd, I craved something a bit more. My first thought was a pfSense box, but after reading into it further, for less money I could get everything I want and a more enterprise-esque experience out of a Ubiquiti setup. I decided to jump in full force on a full Unifi setup, and although I'm still waiting on my switch, I couldn't be happier so far.



The purchases for the replacement networking setup ended up being:
• Ubiquiti Unifi Security Gateway
• Ubiquiti Unifi AP AC Lite
• Ubiquiti Unifi Switch 8-150W
• TP Link TL-SG108 (Stand in)

Well, that's all fine and dandy, but why Unifi over the regular EdgeRouter and a Unifi AP, or any other AP for that matter? Well, it's the ecosystem. These things perform great, but setup and management is a breeze. Unifi does site management via the "cloud". This can be either a local machine (Cloud Key appliance, virtual machine, or physical box), or a remotely hosted instance. Yeah, you can manage your network from a VPS. On top of that, you can manage multiple sites from the same VPS, so if you had multiple networks across multiple sites, they could all be managed by logging into a single web portal. My choice in VPS was an OVH SSD VPS instance at just around 5 dollars a month. A single vCore, 2GB of RAM, and 10GB of SSD storage is plenty for running a single site, and I can even throw other small services onto it as well. I'm so impressed with what you get for the money from OVH that I'm considering moving my web hosting from HostGator over to another VPS instance. But hey, this is more about the hardware, so let's look at the USG.
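
Getting the controller onto the VPS is simple; on a Debian/Ubuntu instance it's roughly the following (this was the repo line Ubiquiti documented at the time, so double check the current one before copying):

echo 'deb http://www.ubnt.com/downloads/unifi/debian stable ubiquiti' > /etc/apt/sources.list.d/100-ubnt.list
apt-key adv --keyserver keyserver.ubuntu.com --recv 06E85760C0A52C50
apt update && apt install unifi

After that, the controller answers on port 8443 of the VPS, and devices get pointed at it during adoption.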



I'll apologize for the cables, as I'll be moving things to a more permanent home once the 8 port POE switch arrives, which should be soon. The USG is essentially an EdgeRouter Lite internally, however it requires the cloud controller for persistent configuration. It supports a single WAN interface, a LAN interface, a VOIP interface (this can be changed to a second WAN port through the config for failover support), and a console port. Most would think it's odd to see a router with such a low number of ports, but unlike consumer devices, switching is delegated to a separate, more powerful device which scales based on requirements. What does the USG bring that a consumer router doesn't? Higher reliability, higher throughput, more features. VLAN support? Check. Deep packet inspection? Why the heck not. Locally authenticated VPN? Well, it's coming to the GUI in the next release, but it's there. It's not a perfect product, but it's definitely getting closer with each controller release, and the ease of setup and management makes up for that in spades.



The access point I chose was the AC AP Lite. I didn't need the 1750Mbps offered by the AC AP Pro, as my network speeds generally top out at gigabit anyway, and the range is approximately the same between the two. It's 24V PoE powered and comes with its own PoE injector, but once the Unifi 8 port switch is in, it'll be moved straight to that. A separate AP provides a much more stable and reliable wireless connection, especially in a 16 unit apartment building with a fairly saturated 2.4GHz band. In conjunction with the Unifi controller, I can offer a guest WiFi portal, some pretty neat band steering (basically "steering" devices onto the best possible band and channel), dynamic channel selection, band scanning to determine channel saturation, etc.

I'll be honest, I'm just scratching the surface of what this stuff is capable of, and I have a lot of plans to document it over the coming weeks and months. For the time being, I'll enjoy full WiFi coverage anywhere in my apartment, with all of my devices, and then some.

Monitors!



Okay, the 144Hz idea was good for a bit, then I played BF4 on a TN panel and wasn't overly happy. So, I ordered 2 Acer G257HUs! 2560x1440, 25" S-IPS LCDs with DVI, HDMI, and DisplayPort inputs. I really did not expect the resolution bump to be this awe inspiring, but wow, am I ever shocked. I'm very much wishing I'd done WQHD a long time ago, even if the hardware wasn't quite up to snuff.

The ultrawide is now mounted to the right on a monitor arm, and wow does it ever look foolish in portrait! There's no contest, the Acers are the better screens too. I feel I'll eventually be getting another one of these to replace the ultrawide, but until then, it'll be nice for chats and reading long forum threads.



Other new things include a plethora of Aukey branded stuff from Amazon - A 5 port charger for the bedroom, a 3 port charger for the backpack, and half a dozen 4 foot USB cables which seem to be of very good quality. An Aukey mouse mat was also purchased, and man, it's huge! See the Amazon link here. I'm actually really pleased with the quality, it's very thick, it has a nice rubber non-slip bottom, and all the stitching is very well done. It also covers that nasty missing finish on my desk. You'll see some of it in the image above.

Other than that, all that was ordered was a 64GB Lexar P20 flash drive, which is surprisingly spry for a flash drive, and carries a limited lifetime warranty. It's very well built, and very recommended so far. Time will tell if it holds up! Hopefully I don't have to use that warranty.

Upgrade Plans: 2016 - Final Build Complete!








It's done! Well, at least as done as I want it right now. There are a few more tweaks that could be made, but in the end, this is pretty representative of the final configuration. Onto component choices! First up is the EVGA SuperNOVA 650W P2. It's platinum rated, fully modular, with a fanless eco mode. It's also based on the Super Flower Leadex platform, which is known for its rock solid stability and high performance. This model is great! In testing by reviewers, the fan didn't even spin up until around 500W load, something I don't think this system could achieve currently. The 90% efficiency is great, and with the current power consumption of Nvidia GPUs, it should give me plenty of headroom for SLI if I feel it's necessary.



The video card I chose was of course the GTX 970 STRIX as outlined in the previous post. It matches the motherboard, and has some really impressive build quality and cooling capabilities. The huge heatpipes and low power consumption of Nvidia GPUs let the thing run without fans until it hits about 67 degrees under load, at which point they start ramping up.



It's not longer than the GTX 670, but it is wider to accommodate Asus' design changes with the heatsink and their custom power delivery system. Overall, in doing some playtesting in Battlefield 4, Planetside 2, and Borderlands: The Pre-Sequel, I can say I'm very happy with the performance jump. 2560x1080 seems like it still might be a bit much for the card to drive at full ultra settings in some games, but I assume that'll be solved with a bit of overclocking.

Of course, here's the Mushkin Reactor 1TB mounted to the rear of the motherboard tray, with the 850 Evo. It's taken the place of my games drive. I've decided I'll be doing VMs exclusively on the virtual server, and large format media/game recordings will go to the file server. Cloning my 640GB Western Digital Black was simple, as I still had a copy of Acronis True Image HD kicking around from Karyn's SSD upgrade. The whole process took a few reboots and about half an hour, from installing the new drive to removing the old one. I'm very pleased with the speed difference between old and new too! Games take no time to load now, especially long loading titles like Battlefield 4 and Planetside 2. To top it off, no more hard drive noise! I feel I could have done better with the cable management in the back, but there's still lots of room.

And the finished build shot. It's extremely clean with the hard drive cages removed, and only having 4 of the included cables plugged into the power supply makes cable management a breeze. I'm also very pleased with the even further improvement in acoustics. I actually have to have my ear on the case to hear anything at idle, and even then it's just a mild vibration. Under full gaming load (~1 hour or so of BF4) temperatures on the GPU hit a maximum of 67 degrees, which caused the video card fans to ramp up to 35% or so, and the CPU hit a maximum of 59 degrees on the package (~59-61 on the cores). All the while the case and processor heatsink fans continued spinning at ~500RPM. Of course, the problem now is the loudest things in the room are my servers, but I'll be outlining plans on what I want to do with those in another post.

So, in the end, I've achieved my silent build. Ultimately this is hitting the performance targets I want while being practically inaudible. But, what's next for upgrades? As a hardware enthusiast, there's always an upgrade path. My immediate thoughts go to moving from 16GB to 32GB of RAM, which can easily be done for under 200 dollars, though with all of my virtualization and RAM heavy tasks offloaded to a virtual host, I don't think it's necessary. The other thought would be getting an NH-D15 with 2 NF-A15s to replace the NH-U12P, which I think would only be necessary if I'm going to be pushing the processor pretty hard in the future. I guess if I was to do anything right away, it'd be to install the Intel gigabit NIC I have sitting on my shelf for a direct link to the file server, to allow for better latency when recording with Shadowplay.

For now, my concentration is going to be on a new desk, and new monitors. I've pretty much decided that 2560x1440 is a bit much for the GTX970, so I'll be sticking to 1920x1080 for the immediate future. I've also decided I want to see what all the hype about 144Hz monitors is, so I think I'll be grabbing at least one LG 24GM77 to begin with, and possibly another to replace my 21.5". The unit seems to be the best of the bunch for accuracy when it comes to TN panels, and has the best motion blur reduction implementation. Top that off with an ergonomic stand (height adjustable, pivot, tilt, etc.), a whole host of inputs, and even a USB3 hub. I'll likely only need one 144Hz display, but if push comes to shove, I may end up with three. The option is there! I'll be outlining more in another post when I talk about desk ideas.

Upgrade Plans: 2016 - Initial Build Complete!

DSC04953

Everything was in on Wednesday! What really surprised me was how Canada Post managed to deliver my processor the day before the motherboard and SSD, even though it shipped a day later. Same shipping method, same place of departure. I don't get it. Oh well, waiting on the motherboard and SSD was a good thing, as the case and fans arrived at the same time. The build went very smoothly. I wish I'd taken more pictures, but the ones I did pick out were the best lit.

DSC04955

Install was smooth, and the build was very reminiscent of socket 1156, which was to be expected. Lots of cleanup needed to be done. My NH-U12P needed the toothbrush treatment, and I burned through 2 cans of compressed air cleaning that, the video card, RAM, and hard drives. The power supply was actually fairly dust free, but it did have a mesh filter intake. I gotta say, I was super impressed with the build of the Asus board, but I did miss having a Q-Connector for the front panel headers. Lots of clearance around the socket for larger heatsinks too, which is great. IO is minimal (6 USB ports on the back panel total), but acceptable, as it's what I had on my P55 FTW. I do get 4 USB3 ports however, so it's definitely an improvement over the old IO. The onboard sound is top notch: everything is isolated, and the Realtek ALC1150 itself is EMI shielded. I don't really take notice of the lights on the board, but they're there. I guess it'd be nice if I had a case window. Slot selection is perfect for what I need. I'm only using a single PCIe x16 right now, but there's slot availability for additional graphics cards, and more importantly, additional IO like USB3.1 or network adapters. Fan header layout is good, with 2 CPU fan headers (CPU and CPU_OPT), along with 3 chassis fan headers spread along the bottom, left, and right of the board. All headers are 4 pin PWM compatible.

DSC04954

I gotta say, I loved Noctua fans from when I was buying NF-P12s on the regular, but man, they've really stepped up their game. The NF-F12s and NF-A14s are engineering marvels. I thought the P12s were well built, but these are a step above. I won't go over everything here, but you can see the features of the F12 here and the A14 here. The basics are that the F12 is designed to focus its airflow directly behind the fan, instead of letting it spill out everywhere. These are great for radiator and heatsink use due to higher static pressure too. The A14 is kind of a jack of all trades - it has some pretty good throw, but it also performs well in radiator tasks where it needs to push or pull in confined areas. In this case, I used the F12s mounted push/pull on the NH-U12P, and the A14s took up the front intakes and rear exhaust of the Define R5.

DSC04963

Noctua included a pretty killer accessory pack too. You get a PWM splitter, a 4 pin extension cable, a 4 pin low noise adapter, and their standard mounting gear. All very well sleeved and worth the extra premium paid per fan. I've used a few of these accessories to have the fans share some PWM headers for easier fan control.

DSC04956

The final build in the R5 was very clean considering the parts I was working with. I opted to remove the 2x5.25" and 5x3.5" drive cages in favor of additional airflow. The NF-A14s in the front are sharing a PWM header, as are the NF-F12s mounted to the heatsink. I mounted the SSD to one of the removable trays behind the motherboard to keep things a bit cleaner. The Define R5 is a top notch case, and from a builder's perspective, hit all of the major notes. If I had any real complaints, it'd be that the front fan mounting was a bit difficult with the extended screws, but that may be in part due to the silicone vibration dampening on the Noctuas. If I was to list my favorite features, it'd be the center standoff for motherboard alignment, the fully removable drive bays, the latched side panel, and the easily removable dust filters.

DSC04957

Windows installed in about 10 minutes thanks to USB3 and the 850 EVO. I took the extra time after installing Windows to format my old SSD (I created a VHD out of it late last week) and drive test a few other hard drives that are being sold. I also ended up setting up my fan control with Fan Xpert 3 from Asus. Though I'm not a huge fan of the interface, it worked well to identify the lowest RPMs the fans are capable of, along with letting me set up custom fan curves for each header. My current curves are set up to keep all the Noctuas at about 500RPM until the processor hits 50 degrees, then slowly ramp up to max speed at about 80. Fan Xpert can control spin up and spin down times as well, making the sound curve a bit smoother. My GPU fan curve was set up once again with EVGA's Precision X, and is pretty much identical to what it was before. At idle and medium loads, the computer is spooky silent. I opened up Planetside 2, played for about half an hour, and didn't hear any fans except the GPU's. The processor hit a maximum of 60 degrees, and the GPU about 65. Borderlands: The Pre-Sequel pushed things a bit further, and I ended up hearing the GPU spin up a lot more, but the rest of the case remained eerily quiet. There will be more testing, but for right now I'm pretty satisfied.

Now that the initial build is complete, I can discuss my ideas for the rest of the upgrade path.

Asus GTX 970 Strix: The GTX670 is 2 generations removed at this point, and with plans for higher resolutions and newer games, a GTX 970 seems to hit the price to performance sweet spot for resolutions up to 2560x1440. The Asus Strix model works well in my build due to its fanless operation under no/light load, its quiet fans under heavy load, and its matching of my board (what can I say, color/brand coordinating parts is nice). Once Pascal (Nvidia's new architecture) releases, I'll be prepared to evaluate and upgrade again at that time if necessary. If I do deem an upgrade necessary, the 970 should retain a lot of its value for resale.

Mushkin Enhanced Reactor 1TB SSD: I can hear the hard drives. Actually, they're probably the only things I can hear in the computer now, until the video card spins up. I run a really old 640GB Western Digital Black as my games/larger programs drive, and two 2TB Seagate Barracuda 7200.12s striped with Windows' built in RAID features, which I generally use for scratch files and recording game footage. Replacing all of those drives with another back mounted SSD should give me loads of room for games, lots of speed, and virtually no noise. I can also remove the drive cage, which should improve airflow even further. If I want to record game footage, I figure Shadowplay has fairly low write speeds, and could probably be handled by the file server over the network, but I'll need to test this. I'm choosing the Reactor mainly because of its price point - 1TB of flash memory for under 300 dollars is almost unheard of in Canada. Topping off the great price, it couples that with MLC NAND (generally more durable and better performing than the newer standard TLC NAND) and a proven, problem free Silicon Motion controller. If I do choose to record any footage to the Reactor, it shouldn't be an issue. Doing the math, it's good for about 131GB of writes a day, for 3 years. I don't think I'd be worried about that kind of volume.
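
Working that figure backward: 131GB/day × 365 days × 3 years ≈ 143TB of total writes, so the implied endurance rating is somewhere around 140TBW - believable for MLC NAND at this capacity.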

Modular/Semi Modular PSU: This one is a tough choice, but I'm pretty sure I have it narrowed down to a final few. The non modular TX650W I currently have is an absolute champ, and it's actually pretty darn quiet, but it's getting old, is only bronze rated for efficiency, and is non modular. There's an absolute mess of cables behind the motherboard tray, and I'd love to cut down on that a lot. I've narrowed my choices down to a 650W-750W model from the Corsair RMi or RMx series, or a 650W-750W model from the EVGA GQ or G2 series. All power supplies considered offer a semi-fanless mode at low/medium power consumption, and are 80Plus Gold rated. The current leader of the race is the FSP built EVGA GQ series, considering it's a bit cheaper due to being semi-modular (hard wired 24 pin ATX), but includes all ribbon style modular cables. The RMi/RMx/G2 series are all very nice as well, but only really have the advantage of being fully modular, with slightly better voltage regulation/ripple suppression. The RMi series also includes Corsair Link, which I don't think would be overly useful.

Matching QHD Monitors: This one is a pretty big maybe. The 970 seems to benchmark pretty well in QHD, so I was considering trying to find a good deal on a trio of QHD displays. The current front runner seems to be the Acer 25" H257HU, offering a QHD IPS matte panel, DVI, HDMI, and DisplayPort inputs, and really slim bezels. The only real downside is they're pretty spartan - no VESA mounting and a pretty average stand. With 3 of those, I'd end up getting rid of my 19" and my 21.5", and I'd mount the ultrawide over the center monitor, likely with an arm. This setup would give me a lot more screen real estate, matching IPS LCDs, and the option to play in Nvidia Surround if I wanted to.

Here's hoping for another update soon!

EVGA ACX Cooling Fan Curve - How I Learned I've Been Punishing My Ears

I initially set up my fan curve to be extremely aggressive. The GTX670 was a pretty speedy card, and coming from watercooling, I knew that low temperatures meant a happy, and more importantly, non-throttled video card. I think I set the thing initially to start ramping up at 40 degrees, hitting around 100% fan speed at 65 degrees. Kepler needed to be kept under 70 degrees, or it started stepping down its boost clock to maintain thermals. Of course, this aggressive fan curve led to an absolutely roaring fan, even under fairly light loads (2 EVE clients displayed on screen). In my quest for quiet computing, along with installing low noise adapters on the fans in the file server, and some ghetto sound dampening in the ESXi box (duct tape and corrugated cardboard for the win!), I decided to try a much less aggressive fan curve on the GTX670.



Reading further into ACX unit reviews, especially with newer cards, it's not uncommon for cards to run with their fans off, even under light to medium load. Why not try to replicate that with what I have now? With the above fan curve, the thing pretty much constantly runs at 30% fan speed while on the desktop doing regular stuff like watching videos or browsing the web. Light gaming like Minecraft, AOEII HD, and EVE doesn't really even cause the fans to spin up further. Loading Planetside 2 and playing for about an hour saw loads under 70 degrees, and a much quieter case. I think this, coupled with the Define R5's noise dampening panels, should be perfect for a low noise solution while still maintaining awesome gaming performance. And who knows, maybe one of those fancy GTX970s with the "0 decibel" feature will make its way into my hands. GTX980 even? Asus Strix, I'm looking at you.

Upgrade Plans: 2016 Continued

Alright, to continue on from the last post, I've finalized and ordered the upgrade parts! Everything should be here Tuesday next week (yay holiday weekends...). The final list changed a bit, but it's not too different:
Intel Core i5 4690K
Asus Z97-PRO GAMER
Samsung 850 EVO 250GB SATAIII SSD

I decided on the Asus over the Gigabyte board in my previous post as I felt it was technically superior for a similar price point. After reviewing the specifications, it seemed to have better reviews, newer Intel gigabit LAN, and a better onboard audio setup utilizing an isolated section of the PCB for audio, better capacitors, and an EM shielded audio chip. I plan on dropping my HT Omega Striker, so I'm trying for the best onboard audio in my price range. The processor in the order remains the same - I'm used to having a Core i5, and performance-wise it's very similar to the i7 4790K in gaming and day to day usage, so I don't see a point in hyperthreading. I have an ESXi lab box for anything that's massively threaded anyway. I also decided to drop the M.2 SSD in favor of a SATAIII model, mainly because the M.2 would disable 2 SATA ports, and the unit I wanted was back ordered. The 850 EVO SATAIII has great reviews, and performance seems to be solid. I'm also going to try operating without a dual gigabit NIC in my desktop to try streamlining my network a bit. I've since removed my poor man's VLAN management switch, thrown my ESXi management onto the main network, and direct connected the file server's second NIC to the ESXi box. This should cut down on cabling tremendously.

Next step in the upgrade train will be a case overhaul, along with a new set of fans. I've decided on:
Fractal Design Define R5 Windowless
3x Noctua NF-A14 PWM
2x Noctua NF-F12 PWM

I'll be keeping my NH-U12P, and replacing the P12 that's currently running on it with dual NF-F12s. This is primarily for PWM control, but the F12 is also a bit of a higher performance model as well. I unfortunately lost the second set of fan clips for it, but a quick message to Noctua with the invoice for the NH-U12P got a set of them shipped to me at no charge! Can't complain about that level of support for an 8 year old heat sink. The Define R5 is a quiet case, which is a bit of a departure from what I'm generally used to, but I don't really need the extreme levels of cooling or the gamer looks afforded by my history of Coolermaster cases. I want to start prioritizing noise in computing, and the Define R5 is one of the best options for silent cases at its price point on the market. Coupling this with the amazing performance and sound levels provided by Noctua fans, all the PWM headers on the Z97-PRO GAMER, and Asus' great fan control options, I should be able to have a quiet system that can really push some air when the load starts to get a bit heavier.

Once everything with the case and initial upgrade is completed, I'll evaluate and determine what might be next. I believe my GTX670 is going to be plenty of video card for my current needs, but if I find myself gaming more, I may look into a GPU upgrade - The Asus Strix cards have really caught my attention with their "0 decibel technology" which basically doesn't spin up the fans until a certain temperature is hit, allowing for silent operation. The GTX970 would be a very good stepping stone from the GTX670, and would definitely fall in line with all of my past GPU purchases (Value enthusiast FTW!). I may also consider replacing all of my mechanical storage in the desktop with solid state stuff. 1TB SSDs are coming down considerably in price, and the file server generally handles any large storage requirements like virtual machines or video storage. Time will tell. Upgrades have been a long time coming, and considering how long of a life I generally get out of my hardware, I don't mind splashing out a bit of money for good stuff.

Anyway, another boring text post, but I do hope to have a lot of pictures of the upgrade.

Rest In Peace, DFI LANParty




I know the brand has been dead for many years now, but my upcoming hardware upgrade made me think of how awesome the LANParty series of boards was from DFI. I only really have experience with the P35-T2RS and the Blood Iron, but man, the overclocking fun I had with that T2RS gives me a fuzzy feeling. If I could go back and redo things, would I choose a LANParty P55 board over my EVGA? I really can't say. I can say that if LANParty was to ever make a return, it would probably re-spark a bit of the enthusiast in me.

2016 Mobile Setup



My first taste of good notebooks, and my first taste of thin and light, was with my Acer Aspire Timeline X 3820TG, which I feel was essentially the precursor to the Ultrabook. Extremely slim (for its time), packed with a Core i5, loads of RAM and storage, a switchable GPU for extra performance, lots of connectivity, and absolutely killer battery life for the time, it was a mobile workhorse and carried me handily through my 2nd year in college with lots of back and forth travelling. I fondly remember using it all day in class untethered from the wall running virtual machines, web browsing, and writing, only to hop on the bus and blog for an hour during my weekend trips back and forth to Amherst. I'd get home to Amherst and connect it to an HDMI monitor and a wireless mouse, and it was like I never left my rig at my apartment.

Of course, with notebooks from that era (2010! Six years ago!), build quality, although pretty alright, was mostly plastic. Only business models like Thinkpads, or Apple products like the Macbook Pro, were built to an extremely high standard. The Timeline was falling apart by the end of its life in 2014. Had it still been in my hands at the time, it may have lasted longer, but that's in the past now. What we do see now, thanks to the Macbook Air, is a big push for higher quality thin and light devices, all backed by Intel and their Ultrabook format. I decided in 2016, I wanted to finally get a notebook that would meet my needs for a high quality travel companion. My requirements are below:
Good screen: 13.3" or lower IPS LCD screen, 1920x1080 minimum.
Thin and light: Under 2cm thick.
Good build quality: Aluminum unibody, or very high quality plastic build.
Good keyboard: Typing shouldn't be a chore.
Adequate daily performance: It doesn't need to be a monster, but being able to handle my workload is a must.
Killer battery life: Seriously, I want that feeling the Timeline X gave me.

The choices very much came down to the Dell XPS 13 Skylake Edition, a loaded Thinkpad X260, or the above pictured device, the Asus UX305CA. I went with the Asus.
13.3" 1920x1080 matte IPS LCD
Intel Core M3-6Y30 @ 0.9GHz
8GB DDR3 1866MHz
256GB SSD
Intel dual band 7265 AC wireless
3x USB3.0, MicroHDMI, SD card slot, combo headphone jack
45 Wh battery

Build quality is extremely good. The device is 12.3mm at its thickest point, and constructed of aluminum. This makes it thinner than the 12 inch Macbook! It's not quite as light however, weighing in at 2.6 pounds. There's no real flex in the keyboard tray or the screen, and when open, the screen doesn't wobble. It's amazing what 6 years can do for differences in build quality and engineering. The aluminum build also helps a lot with heat dissipation, which is necessary because this is a fanless computer. For usability, the keyboard has a surprising amount of key travel for such a thin device, but there is no backlighting. This isn't a dealbreaker for me, so I won't complain too much about it. The layout of my model is the bilingual version, so it is a bit different than my keyboard at home, but I'm used to it. The trackpad is large and responsive, but the clicks can be a bit loud. Not a dealbreaker though.

For performance, the Core M3-6Y30 is a chip designed for low TDP devices like tablets and fanless notebooks. You may think this automatically makes this device slower than one with a ULV CPU or a regular CPU, but it's surprisingly zippy! From my research, it's not that far off burst performance-wise from current generation Skylake ULV CPUs, helped along by the fact that this is a hyperthreaded dual core with a 2GHz turbo. Considering the aluminum build is great for heat dissipation, extended operation at 2GHz isn't an issue. I would easily put its daily performance on par with the Timeline X I had, but with the added benefit of solid state storage and more RAM. I won't lie, I was skeptical going into this, but after spending some time doing my daily stuff such as installing and running virtual machines, web browsing, listening to music, and even playing some games (AOEII HD and Minecraft), I can say I've been very pleased with the performance overall! I still have some stuff to try out like basic photo editing, but I'm confident this is going to handle any mobile blogging needs necessary.

The screen is very impressive. The UX305CA also comes in a QHD touchscreen flavor, but the hit on battery life and system performance wasn't worth it in my opinion. On top of that, the matte screen on the 1920x1080 model makes this pretty usable outdoors too! Color accuracy should be good considering the IPS display, and although I haven't professionally measured it, I can say nothing looks off. Viewing angles are great, as expected. The adaptive brightness does a good job regulating, and maximum brightness is pretty eye-searing, which should be awesome if I do any outdoor work with it. I find 1920x1080 on a 13" screen is a bit much at 100% scaling, but I did find that 125% was readable while still feeling like it was displaying a lot of information. 150% felt a bit too cramped in comparison. UI scaling in Windows has a ways to go, but it's mainly developers coding static UIs. At 125%, there are some applications that look fuzzy, but I can live with it. Anything I tend to use daily looks great.



For battery life, after testing for a few weeks, I can expect anywhere from 6-11 hours on a single charge depending on workload. If I'm just browsing the web in bed, it's closer to 11 hours, whereas if I'm actually doing something like playing games, working with virtual machines, or watching a lot of YouTube, it's closer to 6 hours. This is very much on par with, and even exceeding, my Timeline X's battery performance. I'd love to see even longer battery life, but heck, the thing has a larger screen and still has better battery life than my tablet! Color me impressed.

If I was to list a weakness of the thing, it'd probably be the speakers. They're not horrible, but they're downward firing and a bit uninspiring. They're loud enough to fill a small room, but don't expect any real depth or richness from them, although clarity is on point. I would have also loved to see the M7-6Y75 model more readily available in my current configuration, but the M3-6Y30 is still plenty fast for my use case.



For accessories, I grabbed an MX Anywhere 2 from Logitech. I used to have a VX Revolution, but I recently discovered it's dead. I did what I could to clean the battery leads, and I even opened it up to ensure everything was connected correctly and there was no corrosion on any of the cables, but alas, it just would not power up. The MX Anywhere 2 drops a few buttons, and there's no middle click, but it does offer what I want in a mouse: hyperscroll, side scrolling, forward and back mouse buttons, USB recharging, and a Bluetooth connectivity option. I've been pleased with the performance of the MX Anywhere 2, however my unit does have a defect. To switch between regular and hyperscrolling, you need to press down on the scroll wheel. This doesn't work 50% of the time, and requires fiddling with the wheel to get it to function. I would have liked to replace it through Best Buy, but they're currently out of stock. I'll keep my eyes open, and if they don't get any stock anytime soon, I'll just RMA through Logitech, which has always had great support in my past dealings with them.

I also ended up grabbing a notebook sleeve; although the build is robust on the UX305, I'd still like to keep it separated from other things in my backpack. I don't have any current pictures of the sleeve, but I found a Kapsule branded one on sale at The Source which also includes a few nice deep zippered pockets on the sides to store stuff like the tablet, mouse, charger, and any extra cables/drives I might need to take along. It's half decent looking too, so I may have photos up eventually.

Overall, this is a fairly impressive thin and light mobile setup that meets my needs for daily use! I'm pretty happy with my choices, and I hope they keep impressing me as I use them further.

Upgrade Plans: 2016

It's 2016. I've had my current processor and motherboard since 2010. That processor and motherboard released in 2009. I feel like a total scrub, but I'm using SEVEN YEAR OLD HARDWARE. And you know what? It's actually not that bad. You can definitely feel a bit of a performance hit in modern games, but day to day usage isn't hindered by speed. If anything, I really want to upgrade for newer standards, like USB3, SATA3, and M.2. The SSD is still plenty fast, but 120GB of storage is feeling cramped. So, I've set out with a few requirements in mind, and I think I hit most of them with my choices.
Larger SSD
Core i5 or better
USB3/SATA3/M.2
Dual Intel Gigabit LAN
SLI Functionality

I believe my final choices will be:
Intel Core i5 4690K
Gigabyte Z97X-UD3H-BK
Samsung 850 EVO M.2 250GB SSD
HP NC360T Dual Gigabit PCIe x4 NIC

Intel Core i5 4690K: This processor is a 4th gen Intel Core i5, which is considerably faster than my first gen i5. I'm opting for a K series unlocked CPU in case I choose to overclock in the future. I'm currently running my processor stock, so even at stock clocks it should be leaps and bounds ahead. Another great advantage of 4th gen processors is they still fully support DDR3. Although it would be nice to grab a Skylake CPU, I don't really want to drop the extra money for a RAM upgrade. The 16GB of Mushkin Blacklines I have now will be plenty of RAM for the time being, though I may see myself expanding to 32GB over time.

Gigabyte Z97X-UD3H-BK: After a lot of searching, this seems to be the board of choice. It supports USB3 (4 USB3 and 4 USB2 ports on board), SATA3 and M.2 (6 SATA3 ports, with 4 usable while the M.2 slot is in use), and has onboard Intel gigabit LAN, even though it's only a single port. Rounding that out, it fully supports SLI with 2 PCIe x16 slots (running at x8 with 2 cards installed) and a 3rd PCIe x16 slot that runs at x4. Reviews are favorable toward the onboard sound too, with custom capacitors and a decently loud built in amp. I'll likely retire the HT Omega Striker in favor of the onboard audio.

Samsung 850 EVO M.2 250GB SSD: Affordable, fast storage in the M.2 2280 format. It should save space, be a bit quicker than regular storage, and give me plenty of breathing room for games that benefit from faster loading times. I may decide to go with a regular SATA3 SSD if it's priced any cheaper, but right now this M.2 drive is in a pretty sweet spot.

HP NC360T Dual Gigabit PCIe x4 NIC: Since I couldn't find an affordable Z97 board with dual gigabit Intel NICs, I think the NC360T is a great choice for an add in NIC. Not only is it basically an Intel Pro 1000 PT dual port, it's also freaking affordable, regularly selling on eBay in the 30-40 dollar range. This is due to them regularly being pulled from off lease/EOL servers. I'm very pleased with its performance in my ESXi box, so I can't see it being disappointing in my regular box. I could go with a single port, but for 5-10 dollars more, it makes sense to just grab a dual port.

At the moment, all in, this upgrade should come in under 800 dollars after taxes and shipping, and I'll be able to sell my existing gear for a bit of cash to offset it. If this lasts for another 6-7 years, I think I'll be pretty happy. The only upgrades I can see occurring after this are video card related, and it may just end up with me grabbing another GTX670 for SLI, or migrating to something completely different once the performance gap between my card and something newer is a bit wider. For right now, I can run 3 EVE clients at max settings pretty easily, and I can max out any Source engine game at 2560x1080. There isn't much that I play that would currently benefit from a new video card. If there are any more hardware upgrades, it's gonna be more monitors!

2016 Peripheral Update

I've decided to update my mouse and keyboard for 2016, as both the G9 and G15 were getting to be a bit long in the tooth. Specifically the G15: of all the keys that could die, F7 kicked the bucket. This isn't so bad, but I do use it pretty regularly for EVE. I decided with a new board, I wanted to try mechanical switches. I also decided that macro keys and screens weren't a huge deal, and I wanted to keep it under 125 bucks (Amazon credit FTW). I also wanted a full sized keyboard with a pretty normal layout and normal key caps.

My pick was the Coolermaster Quickfire XT with Cherry MX Brown switches. The Browns are a good compromise between tactile feedback and quietness. I personally didn't notice a lot of difference initially, but switching between my work keyboard (membrane) and the Quickfire XT really showed me the difference. I find typing emails for extended periods of time at work tends to feel a bit more fatiguing, and the key presses just aren't as good. Gaming wise, the keyboard layout, although very similar to my G15, has taken some time to get used to. My guess is it might be due to the lack of a wrist rest. Other than that, performance wise, it feels the same for gaming. Overall I'm happy with the keyboard. If I was to go back and re-buy with a bit more of a budget, I'd probably look at a model with backlighting, but everything else is perfect. This may come in the future anyway, and I may end up transitioning the Quickfire XT to work in favor of a Ducky brand board with backlighting.


For my mouse, I swapped the G9 for a G502 Proteus Core. With Logitech updating the Core to the Spectrum, replacing the LEDs with fully customizable RGB ones, the Core got a price cut! I was able to score a pretty sweet deal, and got the Core for 30 dollars off retail. The G502 is pretty much the G9 evolved, and the evolution is perfect. The shape is very familiar, except extended in all the right places. The sensor is just as accurate. There are more programmable buttons. The hyperscroll switch is in a much better spot. I've heard complaints about the scroll wheel design, but I actually like the feel better than the G9's. I'm happy with the replacement, and I feel it's gonna last just as long.

Nexus Player, Ultrawide, G Watch?



We've had a Chromecast in the house for a while, and it's been awesome. Really easy to put stuff on the TV, and cheap too! When I noticed the Nexus Players received a price cut though, I couldn't resist. My one real complaint about the Chromecast was the fact that it only had 2.4GHz wireless. Wireless N helped a little bit, but unfortunately we live in a 16 unit apartment building, which has a 2.4GHz wireless router in each unit... For something as sensitive as streaming media, that's a recipe for disaster. Although most of the time the Chromecast was good, you'd get hiccups with HD content in the evenings when everyone was online. The TV we have has 5GHz wireless N, but it was still really slow and not at all a pleasure to operate. The Nexus Player however supports up to wireless AC (2x2 MIMO), which is fast as hell, provided we have an AC router. (At the moment we don't, but we'll take advantage of the 5GHz wireless N anyway.) So, the Nexus Player is basically a beefy Chromecast, with built in and downloadable apps and games, and voice search. It's a nice replacement: looks good, feels good, operates quickly. Let's just hope that Google doesn't plan to wrap up the idea of Android TV.



Also back to 3 monitors! I ended up getting a 25" LG Ultrawide for ~200 dollars as a birthday gift for myself. My GTX670 still pushes 2560x1080 about as well as 1920x1080, judging by game performance in Battlefield and Borderlands, and the extra screen real estate is absolutely awesome for EVE online. I'd love to add an additional one, but I really don't have the desk space. I'll likely end up just switching my 19" for another 21.5" and keep things like that.

Last but not least, I ended up scouring the internet over vacation for a G Watch. Yes, it's one of the first Android Wear devices released, but guess what? The internal specs really haven't changed, even with the newer, higher end watches. They all run on the same chipset, with similar battery sizes and similar sized screens. The only real differences are the quality of the screen and watch case, and whether or not it has a GPS/heart rate sensor built in. Considering the 100 dollar price tag, and the fact that Wear has been receiving pretty regular updates, I figured it's time to see what it's all about. I hope to have some pictures and a quick review up sometime after it's arrived.

Long Time, No Update?



It sure has been a while since posting anything. I figured since I'm on vacation I may as well update a few things here. First off - The Nexus 5 is still an absolute monster of a phone. I fall deeper in love with it every day. It's now ~2 years old and I'd still consider it a flagship. Running 5.1.1, stock to the bone, it's absolute butter when it comes to the interface and day to day use. I can see myself being happy with this phone for at least another year or so. Battery is meh, but it does last a day. We'll see what Android M brings! By the looks of things, the Nexus 5 should be one of the first devices to receive it, and battery life improvements seem positive already.

The Shield tablet received Lollipop 5.1, which improved performance a ton - No more random lags in Chrome, or when switching apps. I likened the Shield to a truck prior to 5.1 - Slow to get up and running, but a beast once it started moving. Now it's more like a sports car - Super quick and super powerful. I can say the tablet is a pleasure to use, a pleasure to hold in hand, and it fills the hole that the Nexus 7 left when I gave it away. Maybe one day I'll get the controller and cover for it as well, but for now it's been great as just a media consumption device/web browser/kitchen assistant.

I've also started playing EVE Online. I think at some point I ranted about how I detest the idea of pay to play titles, but I can see the advantages now. EVE is fantastically complex and absolutely hilarious - The entire game is essentially PVP. It doesn't matter where you're at, there's always a risk of getting killed by another player. I do have a bit of an interesting take on the game as I got a fairly well developed character from a friend who stopped playing a while ago. You can check out my adventures by clicking this link here - I'll be updating it from time to time, hopefully with pretty screenshots. (Now dead)

My EDC hasn't really changed a lot. I've switched from the mechanic's ring to a dangler-type system, as I was finally able to find some cheap ones on eBay. I love supporting communities, but 20 dollars for two P7 suspension clips? I'll have to pass. The cheaper knockoffs seem just as well built, and I was able to get 5 for about 10 dollars, which is probably more than I'll ever use. The nice thing about the dangler is that it keeps everything from turning into a ball in your pocket and looking awkward. It's a lot easier to pull out your keychain as well!

I was also considering switching out my MiniChamp for an Alox version, but thinking about it now, the most commonly used tools on it are the blade, the file, and the scissors... all of which exist on the Classic SD. So I might end up just grabbing a Classic SD in plain silver Alox and retiring the MiniChamp to a first aid kit or something, where it might see a bit more use. Silver is definitely a must though! It seems silly, but most of the other stuff I carry in my pockets is silver (everything else on my keychain), and so is my Cadet. There's a lot else I'd like to add or change in my carry, but it's really not a necessity.

Anyway, that's about it for right now... I'm going to get back to enjoying my vacation, maybe play some EVE.

Linux Server Golden Image



As stated in my previous posts, my lab and internal network infrastructure runs mostly on Linux-based servers. They're pretty reliable, low maintenance, low footprint, and they do the job without a GUI. Of course, when I'm spinning up a new server every day or two for testing purposes, configuration can get a bit repetitive. To make my life easier, I've decided to create a "golden image" of my currently preferred server distro, Ubuntu Server 12.04 64-bit.

To create this base server, I threw together a virtual machine in VMware Workstation: a standard version 9 virtual machine with 1 core, 512MB of RAM, and a 20GB hard drive. The next snapshot will include the removal of non-essentials like the sound card, USB controller, etc. Once the machine was created, I installed Ubuntu Server 12.04.3 64-bit and configured it with a default username and password, along with the SSH service. I don't set a static IP at this point; instead, I've created a script to take care of that and placed it in the home directory, along with a script to check memory usage.
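
The static IP script itself is nothing fancy. I won't reproduce mine exactly, but a minimal sketch of the idea on 12.04 looks something like this (the interface name, subnet, gateway, and DNS server below are placeholders, not my actual network):

    #!/bin/bash
    # Hypothetical sketch of a static IP helper for Ubuntu Server 12.04.
    # Usage: sudo ./static-ip.sh 192.168.1.50
    # eth0 and all the addresses here are assumptions for illustration.
    IP="${1:?Usage: $0 <ip-address>}"
    cat > /etc/network/interfaces <<EOF
    auto lo
    iface lo inet loopback
    auto eth0
    iface eth0 inet static
    address ${IP}
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1
    EOF
    # Bounce networking so the new address takes effect.
    /etc/init.d/networking restart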

Once the base install was complete, I installed VMware Tools and Webmin, and called it a day. After shutting it down, I created a snapshot in Workstation and made notes on what had been done to the virtual machine, which basically prepped it for upload or cloning. To actually upload to my ESXi box, I just use the VMware Standalone Converter, making sure to adjust the RAM amount depending on the tasks the VM will be handling, and setting the disk to thin provisioned depending on which disk it will be sitting on.
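
For reference, the Webmin install just follows their documented apt repository steps; it boils down to roughly this (worth double checking against Webmin's own docs for the current repo and key):

    # Add Webmin's apt repository and signing key, then install (Ubuntu 12.04).
    echo "deb http://download.webmin.com/download/repository sarge contrib" | \
        sudo tee /etc/apt/sources.list.d/webmin.list
    wget -qO - http://www.webmin.com/jcameron-key.asc | sudo apt-key add -
    sudo apt-get update
    sudo apt-get install -y webmin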

This whole process takes a lot of the work out of creating and deploying scenarios and new labs, which is great. Between this and my golden image of Server 2012 R2, I can have labs up within half an hour. Minus the configuration, of course.

YouTube Uploads

http://www.youtube.com/watch?v=-gBLrcBrC6s

Thanks to the magic of Nvidia's ShadowPlay, I've been recording most of my rounds of Battlefield 4. The file size is nice and small thanks to the H.264 MP4 render, and playback is generally pretty smooth. Editing and uploading to YouTube is another story though. After playing with Sony Vegas for hours, I finally found my ideal render settings. The first step is disabling smart resampling on the base footage. After I finish my edits, I add a bit of sharpening and some brightness/contrast adjustments, then render. MainConcept H.264 wasn't working with CUDA all that well, so I tried Sony AVC, which worked great. I set my bit rate to 16 Mbps and my frame rate to 30, and made sure the render quality was set to best. After a (slow) upload to YouTube, you get the above! You can possibly expect more uploads, perhaps with some commentary!
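
For anyone who'd rather skip Vegas for the final render, something like ffmpeg can hit the same targets from the command line. This isn't my workflow (Vegas handles the editing), just a rough equivalent with placeholder file names:

    # Rough command-line approximation of the render targets above:
    # H.264 video at 16 Mbps, 30 fps, plus AAC audio. File names are placeholders.
    ffmpeg -i round.mp4 -r 30 -c:v libx264 -b:v 16M -preset slow \
        -c:a aac -b:a 192k round-final.mp4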

Desktop Update!



If you're in touch with the gaming world, you'll know that Battlefield 4 was released at the end of October. If you know me at all, you'll know I'm a pretty big fan of the Battlefield series. From previous blog posts, you can see that the release of a new Battlefield title almost always means a hardware update. This release was no exception.

In my post on my ESXi host, I listed my desktop specifications. The HD6850 I had was an absolute trooper. I was able to play Skyrim and BF3 with few issues, and the overall desktop experience with the Catalyst drivers was actually really good. Honestly, both Nvidia and AMD have very mature drivers with few issues (that I can see). I figured I'd be able to carry the HD6850 over into Battlefield 4 and maintain a similar performance level without having to upgrade. How wrong I was...

I fired up the open beta and was instantly disappointed. I was running at 1920x1080, ALL low, and barely maintaining 45 frames per second on average. Even playing with resolution scaling didn't help much. I struggled through maybe 2 or 3 rounds before deciding to set it aside and revisit things on launch day. (Silly me, I should have updated my drivers.) I preordered, updated my video drivers the night before, and was up at 6AM for some launch day fun.

The performance difference was astounding. It's almost like that short beta worked out a lot of the frame rate issues; suddenly I could play on all low, with 95% resolution scaling, and maintain a fluid, playable 60 frames per second. However, on the larger maps I was getting frame drops during big "Levolution" events, or when there was a ton of action on screen. That really wouldn't do for a more performance oriented player, and dropping the resolution scaling any further would result in an extremely poor picture and put me at a huge disadvantage. So, naturally, I decided to upgrade.


My processor was fine, my RAM and hard drives were fine; it was just the video card. Buying new was out of the question, and I didn't need a cutting-edge R9 or 700 series card, so I took to the Hardware Canucks forums as usual. After browsing for a few days, I settled on a really good deal on an EVGA GTX 670 FTW Signature 2. The install was easy, and after a clean driver install, I was up and running and good to go.

I started Battlefield 4, jumped in game, and pushed my settings to a mix of high/ultra. The frame rate was definitely better, but I was still getting those really stupid frame drops. I tried pushing my processor from stock to 3.6GHz. Same results. Tried running on low, vsync'd, etc. Same problem. I tried practically every fix I could find on the internet, still with the same results. I wasn't pleased. My last-ditch option was Windows 8.1. So, Sunday afternoon, I spent 2-3 hours pushing the update to my desktop. And you know what? Problem solved.

I currently run a mix of ultra/medium settings with my frame rate capped at 70. I see occasional dips into the 60s, but beyond that it's almost always pinned at 70 frames per second. And to be quite honest, Windows 8.1 is pretty awesome too. Resource usage is lower, the task manager is considerably better, being able to pause file transfers is a nice feature, the built-in Hyper-V will be awesome to play with, and the interface feels a bit more mature. Everything can be configured so you rarely have to see "Metro" apps, too. I'm really glad I made the jump.

Lab Update

Just dropping a quick post to say everything has been running great! A few hiccups, but it's been a learning process!

The base infrastructure of my network consists of Linux virtual machines, all running Ubuntu Server 12.04 LTS. These lovely little virtual machines let me do more without tying up resources on my main rig. My current, everyday virtual machines are:

  • One BIND based DNS server for internal name resolution (a minimal zone file sketch follows this list)

  • One Serviio streaming server with a web interface (I can also control this from an app on my phone/tablet - Sweet!)

  • One web server running a LAMP stack. This currently hosts my MediaWiki install, where I keep track of any configuration I do for future reference.

  • One server running the Deluge daemon for downloads, with a 500GB virtual drive dedicated to it. I access this via either the web client or the desktop client. (The desktop client actually feels completely local.)

  • One Minecraft server running McMyAdmin. This can be a bit flaky, but I learned that a custom-built instance was much better than the turnkey appliance I downloaded initially.

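To give an idea of how little it takes, a minimal internal zone for the BIND server looks something like this (the domain, hostnames, and addresses are made up for illustration, not my actual config):

    ; Sketch of a minimal internal zone file, e.g. /etc/bind/db.home.lan.
    ; Domain names and addresses here are placeholders.
    $TTL 604800
    @       IN  SOA ns1.home.lan. admin.home.lan. (
                2014010101 ; serial
                604800     ; refresh
                86400      ; retry
                2419200    ; expire
                604800 )   ; negative cache TTL
    @       IN  NS  ns1.home.lan.
    ns1     IN  A   192.168.1.10
    wiki    IN  A   192.168.1.20
    deluge  IN  A   192.168.1.30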

I also have two other resource pools dedicated to testing and labbing. In the test pool, I'm just playing around with Server 2012 R2 as a home domain controller (thinking of moving both my DNS and DHCP to it), along with a couple of other random virtual machines. In my lab pool, I have a full suite of Server 2012 R2 machines running various roles in the same domain. All of the Server 2012 virtual machines run off the file server, which has provided exceptional performance.

As for the hiccups, I ran into occasional instability with Backtrack and USB passthrough. The host would go completely unresponsive from time to time, with nothing in the logs. After moving Backtrack to my desktop and running it with USB passthrough in Workstation 9, the instability went away completely. There was also an issue with Deluge constantly crashing a few minutes into downloading, but that turned out to be a bad file.

The Minecraft server was another issue entirely. I ran it as a turnkey virtual appliance for a while, as I didn't want to bother with the configuration at the time. That worked until it broke. Luckily, I was able to mount the drive in another virtual machine and recover the data. Once the data was recovered, I hand-built the next instance of the server, which has so far been a lot more stable. (There was an issue this morning with a McMyAdmin update, but it was resolved rather quickly by killing and rerunning the process and accepting the upgrade.)

In the future I plan on implementing Puppet or Chef, though that won't be for a little while. If I do, I hope to document it here!

Initial File Server Build Complete!



I got drives on drives! The 3TB drives arrived on a Wednesday, two days earlier than expected. I didn't even have to walk down to the post office to get them, as they were waiting in the mailbox. After a quick install, including moving the 40GB Intel SSD from the ESXi host to the file server, we were up and running OpenIndiana. I configured a static IP (after struggling with the Solaris way of things) and installed Napp-It. Once that was done, I logged into the web console and started configuring the disks and setting up the shares, which took all of 10 minutes. My single pool currently consists of two striped mirrors, with half the SSD used as an L2ARC (basically a read cache). A quick test of transferring an ISO from my desktop to the server showed that I could definitely pin gigabit speeds with sequential writes, which was what I was looking for.
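
In ZFS terms, that layout boils down to just a couple of commands. The pool name and device names below are illustrative Solaris-style IDs rather than my actual disks:

    # Two striped mirrors, plus an SSD slice as L2ARC (read cache).
    # Pool and device names are assumptions for illustration.
    zpool create tank mirror c3t0d0 c3t1d0 mirror c3t2d0 c3t3d0
    zpool add tank cache c3t4d0s0
    zpool status tank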

The fun part was moving the data from the old file server to the new one. Although it was on a gigabit line/NIC, it still struggled, since the NIC was Realtek based and the disks were a Western Digital Green JBOD. It took 7 hours to move 2TB of data, but it's finally done. I quickly decommissioned the old server, recycling the 1TB and 2TB Western Digital Greens for use in the ESXi host. I know, I should have fast local storage for the ESXi box, but these two drives will make good datastores for low IO/mass storage virtual machines, and they'll do until the next phase is eventually rolled out. Honestly, I'm surprised the old file server was still working. I should have taken it offline a couple of times to get it dusted out. It was an old mATX Acer case with a single 120mm fan jury-rigged onto the side panel. That intake brought more dust than necessary into the case, and it was pretty evident; the heatsink was absolutely caked. I feel it would have kept running for a while longer, but I'm glad our data is off the JBOD and onto something a bit safer.
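
The copy itself was nothing clever; something along these lines, left to run for the evening, covers it (the paths and hostname here are hypothetical, not my actual layout):

    # Hypothetical migration command - source path and destination host are
    # placeholders. -a preserves permissions and timestamps.
    rsync -a --progress /mnt/oldshare/ root@fileserver:/tank/data/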



After finishing with the file transfer, I quickly rewired the home network, taking advantage of the multiple NICs on my many devices to segregate traffic until I can get a managed switch. A quick trip to the dollar store downtown netted me four 25 foot lengths of CAT5e cable with ends for $3.50 a roll. They had longer rolls too, which I'm definitely keeping in mind for future projects. I used my existing gigabit switch for the management and storage network. This gives my workstation and the ESXi host direct access to a single gigabit port on the file server, and keeps management off the general traffic network.

I ran into a few problems with the file server along the way. The first was installing the drives. I really should have used right-angle cables and thinner SATA power adapters, but I unfortunately didn't have enough on hand, which made it a bit of a pain to close the side panel. The cable management inside the case isn't great either, as the SATA cables are too long. My plan is to eventually swap the current SATA data/power cables for better options. The other issue came when the file server was powered on right after rewiring the network: the static configuration didn't stick, and my brand new pool was missing. I redid the static configuration and was able to import the existing zpool in Napp-It. All was well, but not before a mild panic attack. Overall, OpenIndiana and ZFS are a learning experience, and so far it's been fun learning along the way.
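
The recovery itself was a non-event once I realized what had happened; re-importing a pool is a one-liner (the pool name is assumed):

    # List pools visible on the attached disks, then import by name.
    # 'tank' is an assumed pool name.
    zpool import
    zpool import tank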

The next few phases are going to be both easy and difficult. We should have enough remaining storage to get us through the rest of the year, but beyond that we're looking a bit cramped. With storage currently being a non-issue, though, I'll more than likely be investing in speed and infrastructure. For the file server, I plan on adding a dedicated ZIL device in the near future, most likely a 20+GB SLC flash based SSD, along with more RAM and a dual port Intel NIC. That will fill the remaining SATA port and pave the way for the next set of drives and the 8 port HBA. Eventually I plan to add four more 3TB drives and four 2.5" drives, most likely SSDs or 1TB+ HDDs. As for infrastructure, I think I'm going to consolidate my networking with a nice managed switch. By then I should have another dual gigabit NIC in the ESXi host, which should allow me to aggregate the links on the desktop, file server, and ESXi host.
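
Back on the ZIL plans for a second: attaching the log device when it arrives should be as painless as the L2ARC was, something like this (the device name is an assumption):

    # Sketch: adding a dedicated log device (SLOG) for the ZIL.
    # The device name is an assumption.
    zpool add tank log c3t5d0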

I'll be sure to keep the blog updated on any changes I make or issues I run into.

File Server Update!



The case for the file server finally arrived, which let me get it up and running in a test capacity. Still waiting on the hard drives, but hopefully not for long; 4x 3TB Toshibas are on their way as of today. The case I ordered is a Lian Li PC-A04B: an mATX case with loads of drive bays, 3 included 120mm fans, and overall really good build quality. I had everything installed the night it arrived, along with a 320GB hard drive to test out ZFS and Napp-It.



As you can see in the picture above, the test system is a bit messy. Unfortunately it's going to be difficult to make it look pretty, as the 24-pin power connector is placed in a really bad spot. It doesn't really interfere with anything, but it does make hiding that one cable a bit tough. I removed the USB/eSATA/audio panel from the top, as the cables were super long and I wasn't going to use those features anyway. Once the new drives arrive, I'll be swapping the hard drive cages around and tidying the cables. Hopefully I'll have some new pictures to show off, as I'm really proud of this little machine. Also, after resetting it back to defaults, IPMI is amazing. I only have 2 network cables and a power cable attached, but I don't even need to touch the physical machine for anything. Power up/down, KVM, etc... all handled by the IPMI chip (a quick ipmitool sketch follows the specs). Anyway, final initial specs below!
Intel Core i3 2120T
SuperMicro X9SCL+-F
4GB Kingston ECC DDR3
Lian Li PC-A04B
Corsair CX430 430w PSU
4x Toshiba DT01ABA300 3TB drives in RAID10 equivalent (Striped mirrors, 6TB usable)
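
Speaking of IPMI, a hypothetical ipmitool session shows the kind of remote control the BMC gives you (the address and credentials below are placeholders, not mine):

    # Remote power control over the LAN via IPMI.
    # BMC address, username, and password are placeholders.
    ipmitool -I lanplus -H 192.168.2.40 -U ADMIN -P secret chassis power status
    ipmitool -I lanplus -H 192.168.2.40 -U ADMIN -P secret chassis power cycle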

I'm considering adding a 40GB Intel SSD as a ZIL, but I'm not sure how the read/write performance would be with it. The main purpose of this machine is to be a file server, even though the ESXi box will have a direct link to it and will probably use some of the storage space for low I/O virtual machines. Some will find it a little weird that I'm using striped mirrors for a basic file server instead of RAIDZ or RAIDZ2, but I have my reasons. First off, it makes adding drives slightly cheaper: although I get less total storage space, I only have to purchase 2 drives at a time to increase capacity instead of 3. Even if I were to fill the server to capacity (including three 3.5" drives in the 5.25" bays), I'd be limited to 15TB of usable space, which is a considerable chunk, and I'm happy with that. The other reason is raw performance: RAIDZ and RAIDZ2 have limited random IO compared to mirrors. That will be great if I decide to host some more intensive virtual machines on it, or stream multiple things from it at once.
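
And that two-drives-at-a-time expansion really is a single step; a sketch with assumed pool and device names:

    # Growing a striped-mirror pool: each new mirror vdev adds capacity
    # and random IO in one shot. Pool and device names are assumptions.
    zpool add tank mirror c3t5d0 c3t6d0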