Channel: Sean Crookston

The Software Defined VMware Home Lab


Sorry, I figured it was my turn to throw out the latest tech jargon. I’ve recently gone through the process of building out a new home lab and wanted to share it for anyone else out there looking for something affordable and realistic for their home.

Last year I decided to pare down the home lab. Ultimately I ended up selling it completely and going back to using the lab we had at work. It is a pretty nice setup with 8 UCS blades, a ton of memory, and VNX storage. Recently, though, I decided I wanted something I had complete ownership of again, with the ability to make any changes I wanted without worrying about affecting a demo or another person’s setup.

The Old Lab

Last time around my lab consisted of the following:

  • 3 Dell T110 servers (4-core Xeon, I believe)
    • Extra 4-port NICs
    • FC HBAs
  • EMC AX150 FC array (12 x 500 GB 7.2k RPM disks)
  • Brocade FC switch
  • Netgear gigabit switch
  • DD-WRT-based router for routing

This setup served me well for a while, but I ended up running into a few issues that led me to get rid of it. First, the storage became a problem from a performance perspective when doing multiple disk operations at once. I was only using single-path 2 Gb FC, so that could have had a slight effect, but the disks themselves were definitely being saturated during these operations. This was more of a slight inconvenience than anything, but this time around I think my long-term plan will be to go with SSD storage. The price has dropped substantially.

Secondly, I was eating up a lot of power. It wasn’t unbearable, but I believe last I checked it was running me about $50/month for everything. At one time I had gotten a great deal on a Cisco MDS 9216 switch that I had planned on using, and that alone would have added another $30-50 a month, so I quickly resold it after I found out how much power it was drawing.
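For anyone sizing their own lab, the power math is easy to sketch. The wattage and electricity rate below are illustrative assumptions, not measurements from my setup:

```python
def monthly_power_cost(watts, rate_per_kwh, hours_per_day=24, days=30):
    """Estimate the monthly electricity cost of always-on lab gear."""
    kwh = watts * hours_per_day * days / 1000  # total kWh drawn over the month
    return kwh * rate_per_kwh

# An always-on ~500 W lab at an assumed $0.12/kWh:
print(round(monthly_power_cost(500, 0.12), 2))  # -> 43.2
```

At those assumed numbers, a few hundred watts of always-on gear lands right in the tens-of-dollars-per-month range, which is why a power-hungry FC switch can easily double the bill.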

Scoping Out the New Lab

As mentioned above, I wanted something with a little better storage performance that wouldn’t break the bank on power. Additionally, I did not want to drop anywhere close to what I spent last time around. I don’t have the exact costs, but I spent ~$2,500 from recollection. With that said, the resale value on everything was pretty good and I got almost all of that back. I actually made some money on the storage array too.

As I started exploring the various options, another thing I realized I wanted was less complexity. There was certainly a time when I wanted to mess with zoning on the switches, but I no longer required this. Additionally, vSphere has continued to improve its nested virtualization capabilities, which led me to ask: do I really even need a series of physical hosts in my lab?

Some people load up VMware Workstation on a system and run VMs from there as needed, but I didn’t want to worry about scaling issues. My intention is to be able to run as much as I need and keep it running without having to worry about it.

Once I got it in my head that I only needed a single host, I realized I no longer needed to purchase a fancy switch. To make it even simpler, I decided: why not just load this box up with storage and carve it out for the hosted VMs, nested hypervisors, and their VMs?

Once I got down to this level of simplicity, I realized I didn’t need to load this thing up with NICs and could get away with two NIC ports. Since it’s a lab I could probably get away with a single one, but I didn’t want to box myself into a corner.
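For reference, running nested ESXi on a single host mostly comes down to exposing hardware virtualization to the guest. The settings below are the commonly documented ones for ESXi 5.1 and later and come from community practice, not from this build; treat them as a sketch:

```ini
# Per-VM options in the nested host's .vmx file (ESXi 5.1+)
guestOS = "vmkernel5"    # identify the guest as ESXi 5.x
vhv.enable = "TRUE"      # expose Intel VT-x / AMD-V to the guest
```

On ESXi 5.0 the equivalent was the host-wide `vhv.allow = "TRUE"` in `/etc/vmware/config`. The port group the nested hosts attach to also needs promiscuous mode enabled in the vSwitch security policy, or VMs running on the nested hosts won’t have network connectivity.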

The New Lab

I found exactly the solution I was looking for when I came across a blog entry laying out the home lab config of Ed Grigson. To sidetrack a bit, Ed has a lot of really detailed, great posts, and if you aren’t following him already, you should be.

Last year Ed built a single host with scalability that should last some time for anyone’s home lab. I’ve used almost the exact same configuration as Ed, as I know it will run ESXi without issue. I did make a few minor changes and ended up getting a slightly higher-rev CPU and registered memory. Registered memory is required to reach the board’s full potential of 256 GB of RAM; otherwise it only supports 64 GB. Registered memory is not usually very cheap, but I found a supported 16 GB DIMM on Supermicro’s website that was being sold en masse on eBay for $99 used.

  • CPU: 1 x Intel Xeon E5-2620, 6 cores (12 threads)
  • Heatsink: Supermicro SNK-P0048AP4
  • Motherboard: Supermicro X9DRL-3F, dual CPU sockets, up to 256 GB RAM, onboard SAS & SATA (2x6 Gb, 4x3 Gb), IPMI/KVM, dual Gb NICs
  • RAM: 2 x 16 GB Samsung PC3-12800R registered ECC DDR3-1600 (M393B2G70BH0-CK0)
  • Power supply: Silverstone Strider Plus 600W (pretty important not to skimp here, as you need a power supply that supports a multi-processor system)

Additionally, I have some SATA drives and smaller SSDs I plan on using for the initial pieces, and I will replace them and add SSDs as needed in the near future. You can get some really great SSDs these days, and in terms of capacity I’ve recently noticed 1 TB SSDs have dropped in price, with Crucial offering one at $600.

In the future, if I outgrow the core count and memory, I will add a second processor and more physical memory. I also had some success with swapping to SSD in the lab, so I may explore that option before adding too many more DIMMs.

The gear is not yet in, so it will be a bit before I am up and running, but once it is I’ll have a lab setup that is completely encapsulated in one system, minus the routing, which will be handled by an old router I have. Everything will be carved out with virtual storage and virtual hypervisors. Truthfully, I could do the same with the routing too, but at the moment I don’t have a real need to. I plan on setting up virtual hosts under the physical one for View and vCloud test environments, along with all the usual suspects running directly on the physical host.

