I have been looking to refresh my home lab environment for some time now. Since moving into a rental townhouse here in NC, I have had power issues whenever my home lab is 100% powered on. A couple of times, I have even tripped the breaker entirely. Between my desktop machine, (2) HP ProLiant ML110 G5s, a Netgear ReadyNAS NVX, and (2) Netgear ReadyNAS Duos, it's just too much load for a single breaker to handle, to say nothing of the cost of powering the whole thing.
I have had my eye on the Apple Mac Mini 2011 w/Lion Server since it was released, as it would make the perfect low-power, low-noise option. This model comes with a quad-core CPU, supports up to 16GB of RAM, and draws 85W (peak) of power per machine. My current servers are dual core and all limited to 8GB of RAM. The idea was to replace the (4) 8GB servers that I have with (2) Apple Mac Minis running ESXi 5.0U1. I had some minor concerns about the fact that the Mini has only a single NIC, but I don't really foresee 100% utilization; if I ever do, I always have the older servers available for more capacity. Perhaps someone will come up with a Thunderbolt-to-Ethernet adapter to address this bottleneck.
When the new Mac Mini was released, someone tried to get ESXi installed and running but had issues getting the on-board Gigabit NIC recognized in ESXi. Apparently the driver for the Broadcom NIC that Apple uses didn't get included in the ESXi 5 release. As a result, I put my plans on the back burner, thinking that someone would eventually figure it out.
Well, it finally appears that someone got it working with ESXi by installing a custom VIB from VMware for the infamous Broadcom NIC (found here). This was posted on the following site back in January (I have been busy; what can I say). The blog post was pointed out to me on Google+ by my blogger friend and fellow Tech Field Day delegate, Shannon Snowden, over at Virtualization Information.
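For reference, installing a driver VIB on ESXi 5 is typically done from the ESXi shell (or over SSH) with esxcli. The datastore path and VIB filename below are placeholders, not the actual file from the linked post:

```shell
# Copy the driver VIB to a datastore first, then install it.
# Path and filename here are placeholders for the Broadcom NIC driver.
esxcli software vib install -v /vmfs/volumes/datastore1/net-broadcom-driver.vib --no-sig-check

# Reboot so the newly installed NIC driver gets loaded.
reboot
```

The `--no-sig-check` flag is needed if the VIB isn't signed at a trusted acceptance level.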
Since he was also successful in getting it working, I thought I would take a stab at it. I did, however, have a couple of prerequisites for what I wanted to accomplish.
I wanted the ability to host nested ESXi servers on the machine so that I could have an all-in-one ESXi lab cluster. To realistically accomplish this, I needed an SSD in the machine, one with high IOPS performance and enough capacity to hold all the VMs. I currently use an OCZ Vertex 2 in my desktop machine, and decided to do a quick search for current deals at Newegg, Amazon, etc. By coincidence, Newegg had a deal (which has since expired) on a 240GB OCZ Vertex 3 with a free 32GB OCZ Onyx for $249.99 (before a $20 rebate). They also had the 2x8GB Corsair SO-DIMMs that others have installed successfully in the Mac Mini for $99. I decided to pick up two of each in anticipation of a 2-node build.
The parts arrived two days later, so I took a lunch trip over to the Apple store and picked up the Mac Mini w/Lion Server ($939 with my NetApp discount ;) After returning, I spent 30 minutes disassembling the Mini and swapping out the RAM and the two HDDs. It was a bit tricky getting at the second HDD, but the process is well documented on iFixit. After getting the hardware in working order, I installed ESXi on the 32GB drive and reserved the 240GB drive as a local VMFS datastore. In addition, I added the required Broadcom NIC driver. It only took 1-2 hours total, including the hardware upgrade, to get everything working. Once it was proven that everything worked, I picked up another Mini last Friday and performed the same operation; this time it went much faster. By about 10PM Friday night, I had a working, silent, 2-node ESXi cluster.
I did run into one issue that I wanted to point out... Upon powering on the second machine and starting the ESXi installer, I noticed some pretty sluggish performance. The installer was taking longer to get through the motions than it had for the initial build. I thought maybe I had a memory issue or another problem. As I was moving the machine around on my desk, I noticed an unreasonable amount of heat coming off the aluminum casing. I removed the cover/foot from the bottom and realized that the power connector for the fan wasn't fully seated, so the fan was not spinning. As this is the last component to be re-installed after replacing the HDDs, it is somewhat difficult to ensure that it is re-seated properly. It's also hard to hear whether the fan is spinning, as the Mini is so quiet. Please check it carefully before closing the access panel so you don't make the same mistake. Thankfully, it doesn't appear that any harm was done, and the system is running much better and much cooler now.
Once I had everything in working physical order, I started getting the virtual layer set up to suit my needs. I began by moving my AD, MSSQL, and vCenter VMs temporarily to the local SSD storage to get an idea of the performance. It was pretty good; however, I wanted to make sure I wasn't isolating those important VMs on local storage, so I moved them back to my iSCSI datastore on the ReadyNAS, where they now sit. This allowed me to use Update Manager to update the ESXi installs with all the latest patches, etc.
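For anyone reproducing this, pointing ESXi 5 at an iSCSI box like the ReadyNAS can also be done from the command line. The adapter name and target address below are made-up examples; check your own host:

```shell
# Enable the software iSCSI initiator.
esxcli iscsi software set --enabled=true

# Add the NAS as a dynamic discovery target. vmhba33 is a typical software
# iSCSI adapter name, and 192.168.1.50 is an example IP; verify yours with:
#   esxcli iscsi adapter list
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.50:3260

# Rescan so the LUN (and any existing VMFS datastore on it) shows up.
esxcli storage core adapter rescan --adapter=vmhba33
```

After the rescan, the datastore appears under the host's storage view and VMs can be migrated onto it.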
I can report that the Minis are working quite well and successfully completed a "burn-in" over the weekend. I haven't done any real load testing on them yet, but I do plan on getting the nested ESXi builds started this week. I will follow up with another blog post on the results of that testing.
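For the nested ESXi piece, ESXi 5.0 needs virtualized hardware virtualization turned on explicitly. The commonly documented approach is one setting on the physical host plus a few lines in each nested VM's .vmx file; this is a sketch, assuming an Intel host with VT-x:

```
# /etc/vmware/config on the physical ESXi 5.0 host:
vhv.allow = "TRUE"

# In each nested ESXi VM's .vmx file:
guestOS = "vmkernel5"
monitor.virtual_exec = "hardware"
monitor.virtual_mmu = "hardware"
```

With these in place, the nested ESXi installer sees hardware-assisted virtualization and can itself run (32-bit) guest VMs.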
Finally, I decided that this is such a cool use case for nested ESXi, and such perfect hardware for a home lab, that I submitted an abstract for VMworld last week to talk about how to implement it and get the most out of it to learn ESXi, with a very portable VMware lab solution. Please stay tuned as the public voting becomes available. I would love to have your vote so that I can expose the importance and usefulness of a home lab to more current and potential VMware professionals. Also stay tuned for more on the Mac Mini as I perform additional testing.
Thanks, as always, for reading!