Rebuilding my home lab: Part 1

I use my home lab setup extensively for testing and developing solutions for clients, as well as learning various aspects of installing and operating products within the Red Hat portfolio.  My lab hardware also hosts a handful of ‘production’ VMs that provide services such as DNS and authentication to the rest of the house.

The basic setup was two servers, both installed with Red Hat Enterprise Linux (RHEL) and local storage, using libvirt KVM to host several virtual machines per host. A single 802.1q trunk carried multiple VLANs to separate ‘core’ network traffic from two ‘lab’ networks, one of which is configured via firewall rules on the router to have no connectivity to the outside world, simulating an air-gapped disconnected environment.
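For anyone wanting to replicate that layout on a plain libvirt host: each tagged VLAN on the trunk can be exposed to guests by bridging a VLAN sub-interface and pointing a libvirt network at that bridge. The sketch below assumes a trunk NIC called eth0 and VLAN ID 20 (placeholder values, not my actual configuration):

# Host-side bridge with no IP, enslaving a tagged sub-interface of the trunk NIC
nmcli con add type bridge con-name br-lab20 ifname br-lab20 ipv4.method disabled ipv6.method ignore
nmcli con add type vlan con-name eth0.20 dev eth0 id 20 master br-lab20 slave-type bridge

# Present the bridge to libvirt guests as a named network
cat > lab20.xml <<'EOF'
<network>
  <name>lab20</name>
  <forward mode='bridge'/>
  <bridge name='br-lab20'/>
</network>
EOF
virsh net-define lab20.xml
virsh net-start lab20
virsh net-autostart lab20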

Server1: MSI 970 Gaming board with 32GB RAM, AMD FX-6300 with 6 cores, 120GB SSD, plus 2x 2TB drives configured as a RAID1 array

Server2: Supermicro X10SRF-i with 64GB RAM, Xeon E5-2620 CPU with 6 cores, 120GB SSD, plus 2x 3TB drives configured as a RAID1 array

Original Lab Configuration

This configuration was good, but it had several limitations that prevented me from making full use of what I had. I wanted to host my VMs on common shared storage, with the ability to move VMs between hosts for maintenance and load balancing – something that libvirt can’t really do on its own.

On a few occasions the lack of a remote console connection into Server1 has resulted in time-consuming ‘crash cart’ access (there’s got to be a spare monitor and keyboard around somewhere, right?) when the server has not come back up following upgrades or power failures – the IPMI remote KVM access on the Supermicro boards is a fantastic feature.
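Part of the attraction is that once both boards have IPMI, the ‘has it come back up?’ checks can be done remotely with ipmitool rather than with a crash cart. A rough sketch, using a made-up BMC address and the default ADMIN user:

# Check and control power state via the BMC (address and credentials are examples)
ipmitool -I lanplus -H 172.22.1.50 -U ADMIN -P 'changeme' chassis power status
ipmitool -I lanplus -H 172.22.1.50 -U ADMIN -P 'changeme' chassis power cycle

# Attach to the serial-over-LAN console (requires serial redirection enabled in the BIOS)
ipmitool -I lanplus -H 172.22.1.50 -U ADMIN -P 'changeme' sol activate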

Time for a redesign

My intention was to use Red Hat Virtualisation (RHV); however, for the two hosts to be members of the same cluster (and therefore allow migration) they need to have similar hardware capabilities. The CPU families of my existing hosts did not match, so this wasn’t an option for me.
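For reference, a quick way to see whether two hosts could share a cluster CPU type is to compare the CPU model that libvirt detects on each of them; as far as I understand, RHV bases cluster CPU compatibility on much the same information. For example:

# Compare the host CPU model reported by libvirt on each server
virsh capabilities | grep '<model>' | head -1

# The raw CPU model name is also visible via lscpu
lscpu | grep 'Model name'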

The second problem I needed to solve was storage. I only wanted to run the two servers, so standing up another box as a storage host wasn’t ideal. Although I also have a Synology NAS, it is our primary storage for everything important, and I did not want to clog it up with VM images for my lab.

New hardware

I ended up biting the bullet and procuring a second Supermicro board, building an identical server to my Server2 configuration.

First, I needed to cold-migrate all of the VMs hosted on Server2 over to Server1 (which had just enough storage to hold all of the images). Only the critical hosts are running – it will have to last until my new cluster is ready…
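For anyone doing the same, a cold migration with plain libvirt is nothing fancy: shut the guest down, copy its disk image and domain XML to the other host, then define and start it there. A sketch assuming the default image path and a guest called vm1 (a placeholder name):

# On Server2: stop the guest and export its definition
virsh shutdown vm1
virsh dumpxml vm1 > vm1.xml

# Copy the disk image and the definition across (default libvirt image path assumed)
scp /var/lib/libvirt/images/vm1.qcow2 server1:/var/lib/libvirt/images/
scp vm1.xml server1:/tmp/

# On Server1: register and start the guest
virsh define /tmp/vm1.xml
virsh start vm1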

Once everything was off Server2, I took the opportunity to install a quad-port NIC in both Supermicro boxes to give a total of six interfaces plus IPMI on each, and to reconfigure the storage as 3x 3TB drives in a RAID5 array. I also flashed the IPMI firmware and system BIOS to the latest releases – something I had been nervous about doing whilst the hosts were serving up production VMs.
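I won’t go into the exact storage layout here, but for anyone using Linux software RAID rather than a hardware controller, rebuilding the array looks roughly like this (device names are examples):

# Create a three-disk RAID5 array from the 3TB drives and record it in mdadm.conf
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
mdadm --detail --scan >> /etc/mdadm.conf

# Put a filesystem on the new array
mkfs.xfs /dev/md0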

The new setup also has a slightly different network configuration. In the original design, the servers were installed and then manually configured so that the network connection was a dot1q trunk. Since I prefer to have a reproducible deployment method using Satellite 6, I need to be able to kickstart the bare metal boxes into a known state. Kickstarting via a tagged VLAN interface doesn’t work, and my switch does not allow for a native VLAN to be present. So, in the redesign I have opted to have one NIC (eno1) as a dedicated management interface – this is my provisioning interface defined in Satellite.

The second NIC (eno2) is the 802.1q interface and carries my six tagged VLANs. None of these have an IP address configured on the host, as the VLANs will ultimately be presented to VMs via RHV. Using the new quad-port card, I now also have a dedicated storage network as well as an RHV migration network (simply a link between the two servers, no switch involved).
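The storage and migration interfaces only need static addresses with no gateway, and the trunk stays unaddressed. A sketch with nmcli, assuming ens5f1 carries the migration link on the 10.1.2.0/24 network shown in the output further down (the interface-to-network mapping here is my own assumption):

# Back-to-back migration link: static IP, no gateway, no DNS
nmcli con add type ethernet con-name migration ifname ens5f1 ipv4.method manual ipv4.addresses 10.1.2.21/24 ipv6.method ignore

# 802.1q trunk: up but unaddressed, RHV will layer its VLANs on top later
nmcli con add type ethernet con-name trunk ifname eno2 ipv4.method disabled ipv6.method ignore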

Updated Lab Configuration

When creating the bare metal hosts in Satellite, I am not defining any VLAN-tagged interfaces – the RHV installation will take care of creating those as required later on.

Interfaces defined in Satellite

Because all of the physical interfaces are defined here, they are configured on the system when it is kickstarted, which ticks one of the boxes for reproducible deployment.
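The same interface layout can also be driven from the hammer CLI rather than the web UI. A hedged sketch, using the eno1 and eno2 MAC addresses from the output below and a hypothetical host group name; the exact set of --interface key=value pairs varies a little between Satellite versions:

# eno1 is the primary provisioning interface; eno2 is recorded but left unmanaged
hammer host create --name baremetal1 --hostgroup "RHEL7-Hypervisor" \
  --interface "type=interface,identifier=eno1,mac=0c:c4:7a:a9:1f:08,ip=172.22.1.21,managed=true,primary=true,provision=true" \
  --interface "type=interface,identifier=eno2,mac=0c:c4:7a:a9:1f:09,managed=false"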

[root@baremetal1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens5f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether b4:96:91:09:96:c4 brd ff:ff:ff:ff:ff:ff
    inet 10.1.12.21/24 brd 10.1.12.255 scope global noprefixroute ens5f0
       valid_lft forever preferred_lft forever
3: ens5f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether b4:96:91:09:96:c5 brd ff:ff:ff:ff:ff:ff
    inet 10.1.2.21/24 brd 10.1.2.255 scope global ens5f1
       valid_lft forever preferred_lft forever
4: ens5f2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
    link/ether b4:96:91:09:96:c6 brd ff:ff:ff:ff:ff:ff
5: ens5f3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
    link/ether b4:96:91:09:96:c7 brd ff:ff:ff:ff:ff:ff
6: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 0c:c4:7a:a9:1f:08 brd ff:ff:ff:ff:ff:ff
    inet 172.22.1.21/24 brd 172.22.1.255 scope global eno1
       valid_lft forever preferred_lft forever
7: eno2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
    link/ether 0c:c4:7a:a9:1f:09 brd ff:ff:ff:ff:ff:ff

The two new hosts have now been kickstarted with RHEL 7.5, ready for the next phase of the project – figuring out what to do with the storage.

In the next part of this series we will look at some storage options.
