I got drives on drives! The 3TB drives arrived on a Wednesday, two days earlier than expected, and I didn't even have to walk down to the post office to get them; they were waiting in the mailbox. After a quick install, including moving the 40GB Intel SSD from the ESXi host to the file server, we were up and running on OpenIndiana. I configured a static IP (after struggling with the BSD way of doing things) and installed Napp-It. Once that was done, I logged into the web console and started configuring the disks and setting up the shares, which took all of 10 minutes. My single pool currently consists of two striped mirrors, with half of the SSD set aside as an L2ARC (basically a read cache). A quick test of transferring an ISO from my desktop to the server showed that I could saturate the gigabit link with sequential writes, which was exactly what I was looking for.
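For reference, the pool layout works out to roughly the commands below. Napp-It did all of this through the web console, so this is just a sketch; the pool name, device names, and the SSD slice used for cache are placeholders, not my actual targets:

    # two striped mirrors (RAID10-style), then half of the SSD added as L2ARC
    zpool create tank mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0
    zpool add tank cache c2t4d0s1
    zpool status tank

Writes get striped across the two mirrors, so sequential throughput is roughly double a single disk while every block still lives on two drives.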
The fun part was moving the data from the old file server to the new one. Although the old box had a gigabit link, it still struggled, since the NIC was Realtek based and the disks were a Western Digital Green JBOD. It took 7 hours to move 2TB of data (roughly 80MB/s on average), but it's finally done. I quickly decommissioned the old server, recycling the 1TB and 2TB Western Digital Greens for use in the ESXi host. I know, I should have fast local storage for the ESXi box, but these two drives will make good datastores for low-IO, mass-storage virtual machines. They'll do until the next phase eventually rolls out. Honestly, I'm surprised the old file server was still working. I should have taken it offline a couple of times to get it dusted out. It was an old mATX Acer case with a single 120mm fan jury-rigged onto the side panel. That intake pulled more dust into the case than necessary, and it showed: the heatsink was absolutely caked. I feel it would have kept running for a while longer, but I'm glad our data is off the JBOD and onto something a bit safer.
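The copy itself was nothing fancy. With the old server's share mounted on the new box, it boiled down to one long rsync; the mount points here are made up for illustration:

    # archive mode preserves permissions and timestamps; --progress to watch the slow crawl
    rsync -a --progress /mnt/oldserver/ /tank/data/

Running the same command a second time at the end is a cheap way to catch anything that changed mid-transfer.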
After finishing the file transfer, I quickly rewired the home network and took advantage of the multiple NICs in my various devices to segregate traffic until I can get a managed switch. A quick trip to the dollar store downtown netted me four 25-foot lengths of CAT5e cable with ends for $3.50 a roll. They had longer rolls too, which I'm definitely keeping in mind for future projects. I used my existing gigabit switch for the management and storage network. This gives my workstation and the ESXi host direct access to a dedicated gigabit port on the file server, and keeps the management interfaces unreachable from the general network.
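Until the managed switch shows up, the segregation is just a second subnet on the spare NICs. On the file server side that's one extra address; the interface name and addressing below are examples, not my actual layout:

    # spare NIC dedicated to the storage/management subnet, no default route on it
    # (assumes the interface is already created and plumbed)
    ipadm create-addr -T static -a 10.0.10.10/24 e1000g1/v4

The workstation and ESXi host each get an address in the same range on their second NIC, so storage and management traffic stays on the small gigabit switch while everything else rides the regular LAN.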
I ran into a few problems with the file server along the way. The first was installing the drives. I really should have used right-angle cables and thinner SATA power adapters, but unfortunately I didn't have enough on hand, which made it a bit of a pain to close the side panel. The cable management inside the case isn't great either, as the SATA cables are too long; I plan to eventually swap the current SATA data and power cables for better options. The other issue came when the file server was powered on right after the network rewiring: the static configuration didn't stick, and my brand new pool was missing. I redid the static configuration and was able to import the existing zpool in Napp-It. All was well, but it caused a mild panic attack. Overall, OpenIndiana and ZFS have been a learning experience, and so far it's been fun along the way.
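For anyone hitting the same scare, this is roughly the command-line equivalent of what Napp-It did, and it's non-destructive; re-create the address, then import the pool by name. The interface, address, and pool name are placeholders:

    # redo the static address
    ipadm delete-addr e1000g0/v4                                 # clear the stale address if one exists
    ipadm create-addr -T static -a 192.168.1.10/24 e1000g0/v4
    # the pool data was never gone, just not imported
    zpool import          # lists pools found on the attached disks
    zpool import tank     # imports it; Napp-It's import button does the same

The pool metadata lives on the disks themselves, so a "missing" pool after a config hiccup usually just means it needs to be imported again.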
The next few phases are going to be both easy and difficult. We should have enough remaining storage to get us through the rest of the year, but beyond that we're looking a bit cramped. With storage currently a non-issue, however, I'll more than likely be investing in speed and infrastructure. For the file server, I plan on adding a dedicated ZIL device (a SLOG), most likely a 20+GB SLC flash-based SSD, along with more RAM and a dual-port Intel NIC in the near future. This will fill the remaining SATA port and pave the way for the next set of drives and the 8-port HBA. Eventually I plan to add four more 3TB drives and four 2.5" drives, most likely SSDs or 1TB+ HDDs. As for infrastructure, I think I'm going to consolidate my networking with a nice managed switch. By then I should have another dual-port gigabit NIC in the ESXi host, which should allow me to aggregate the links on the desktop, file server, and ESXi host.
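When the parts arrive, both upgrades should be one-liners on the illumos side; the device and link names here are hypothetical:

    # add the SLC SSD as a separate log (SLOG) device for synchronous writes
    zpool add tank log c2t5d0
    # bond two ports into an aggregate; LACP needs the managed switch configured to match
    dladm create-aggr -L active -l e1000g0 -l e1000g1 aggr0

The ESXi and desktop ends will need their own teaming setup, but that's a post for when the switch actually exists.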
I'll be sure to keep the blog updated on any changes I make or issues I run into.