Home Lab – Wrap-up

Earlier this year I documented the rebuild of my home lab environment using Gluster and RHV. This post is a final wrap-up of that rebuild, as one or two things have changed significantly since the last part was written, and I have been intending to write this follow-up article for some time…

Issues with the original setup

If you recall from my series on the rebuild, I was using three nodes, each with an internal RAID set acting as a Gluster brick, giving three bricks in total per volume.
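For reference, each node's brick sat on top of a local mdadm array. A quick way to inspect that layout looks roughly like this (the device name and brick mount point here are hypothetical):

    cat /proc/mdstat                 # confirm the md array is assembled and clean
    mdadm --detail /dev/md0          # RAID level and member disks
    df -h /bricks/brick1             # the local filesystem exported as the Gluster brick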

Well, that setup worked really well – UNTIL one Sunday when both main nodes (baremetal1 and baremetal2) decided to run an mdadm scan on the soft-RAID volume at the same time (thanks, cron).

What happened was that the disk IO times went south big time, and the cluster pretty much ground to a halt. This resulted in RHV taking all the volumes offline, and the manager itself stopping. The two hosted-engine hypervisors then went into spasms trying to relaunch the engine, and I was spammed by a couple of hundred emails from the cluster over the space of several hours.
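In hindsight, the immediate trigger could have been avoided by staggering the weekly raid-check job that the mdadm package installs on RHEL. A minimal sketch, assuming the stock /etc/cron.d/raid-check file (exact contents vary by release):

    # /etc/cron.d/raid-check on baremetal1 - keep the default Sunday slot
    0 1 * * Sun root /usr/sbin/raid-check

    # /etc/cron.d/raid-check on baremetal2 - move the scan to Wednesday
    # so the two nodes never check their arrays simultaneously
    0 1 * * Wed root /usr/sbin/raid-check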

I was able to stabilise things once the mdadm scans had finished, but this was far from a usable solution for me. With the cluster stable, I stood up a temporary filestore on my NAS via iSCSI and relocated all VM images over to that with the exception of the ovirt-engine.
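RHV-M handles the iSCSI discovery and login itself when you add an iSCSI storage domain through the admin portal, but the command-line equivalent is worth knowing. A sketch, with a hypothetical portal address and target IQN:

    # Discover targets exported by the NAS
    iscsiadm -m discovery -t sendtargets -p nas.example.com

    # Log in to the discovered target
    iscsiadm -m node -T iqn.2000-01.com.example:nas.temp-vmstore \
        -p nas.example.com --login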

Then I trashed the cluster and rebuilt it a little differently.

(more…)

Rebuilding my home lab: Part 5

In the previous parts of this series, I have rebuilt my lab using Red Hat Gluster Storage (RHGS) and Red Hat Virtualisation (RHV), and added and configured the RHV hosts. We still have some tidying up to do in our environment before we are really ready to start hosting virtual machines.

Before we go into the next section of configuring some of the ancillary services, I will mention that in order to view remote consoles on another Fedora/CentOS/RHEL workstation, we need to install the spice-xpi package on that workstation. This allows your browser to open the downloaded SPICE metadata with the remote viewer application. I only mention this here because I needed to access the console of the RHV engine VM from my Fedora workstation, and was unable to until I had installed spice-xpi (a Firefox plugin).
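On Fedora this is a one-line install; a sketch (use yum instead of dnf on CentOS/RHEL):

    sudo dnf install spice-xpi       # Firefox plugin that hands SPICE consoles to remote-viewer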

(more…)

Rebuilding my home lab: Part 4

Configuring RHV

In part 3 of this series, we installed the Self-Hosted RHV Engine (RHV-M) on our baremetal1 server. This server is now an RHV hypervisor with a single VM; however, that VM has not yet been imported, so it cannot be seen from RHV-M.
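Before making any changes, it is worth confirming the state of the engine VM from the hypervisor itself; the hosted-engine tool provides a status command for this:

    # Run on baremetal1: shows the engine VM state, host HA scores and engine health
    hosted-engine --vm-status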

Add Storage Domains

Our next step in the build process is to add our second Gluster volume (vmstore) for general VM storage. We do this from the RHV-M administration portal as the admin user.
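Before adding the domain in the portal, it pays to check that the volume is healthy and to have the mount details ready. A sketch; the arbiter hostname below is hypothetical:

    # On one of the Gluster nodes: confirm the volume is started and all bricks are up
    gluster volume info vmstore
    gluster volume status vmstore

    # Values for the 'New Domain' dialog in the RHV-M admin portal:
    #   Storage Type : GlusterFS
    #   Path         : baremetal1:/vmstore
    #   Mount Options: backup-volfile-servers=baremetal2:rpi-arbiter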

(more…)

Rebuilding my home lab: Part 3

In part 1 of this series we looked at the hardware configuration for the new lab, and installed the base OS via kickstart from Satellite. In part 2 we installed and configured Gluster to provide shared storage between the hosts. In this part we will look at installing Red Hat Virtualisation and performing the initial configuration.

Red Hat Virtualisation can be installed in a number of ways. For my usage, the ideal way would be to make use of Red Hat Hyperconverged Infrastructure (RHHI), which provides a wizard to install and configure both Gluster and RHV. As I mentioned in part 2, this unfortunately wasn’t possible for me due to the architecture and version mismatch on my Raspberry Pi Gluster arbiter node.

Another method allows RHV to be installed using pre-existing resources, which we now have. The RHV Manager (RHV-M) is the first component that needs to be installed, and there are two methods for this as well. The first is to have the manager on a dedicated host; the second is to have the manager ‘self-hosted’. Self-hosted means that the RHV-M server runs as a guest on an RHV hypervisor, in a similar way to how VMware operates its vCenter Server Appliance (VCSA). I am going to use the self-hosted option, following the RHV 4.1 Self-Hosted Engine guide.
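The deployment itself is driven by a single interactive command on the first hypervisor. A sketch, with package names as they were for RHV 4.1:

    # On baremetal1, after attaching the RHV subscription and repos
    yum install ovirt-hosted-engine-setup rhvm-appliance

    # Interactive deployment: installs the RHV-M appliance VM on this host
    hosted-engine --deploy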

(more…)

Rebuilding my home lab: Part 2

In part 1 of this series we looked at the hardware configuration for the new lab, and installed the base OS via kickstart from Satellite. In this part we will look at how to solve the issue of presenting a shared storage domain from individual servers.
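To give a flavour of where this is heading, the heart of the solution is a replicated Gluster volume spanning the nodes, with a lightweight arbiter brick breaking the tie. A minimal sketch (hostnames and brick paths hypothetical):

    # Form the trusted pool from the first node
    gluster peer probe baremetal2
    gluster peer probe rpi-arbiter

    # Two full data bricks plus one metadata-only arbiter brick
    gluster volume create engine replica 3 arbiter 1 \
        baremetal1:/bricks/engine/brick \
        baremetal2:/bricks/engine/brick \
        rpi-arbiter:/bricks/engine/brick
    gluster volume start engine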

If you remember back from part 1, our new setup looks like this:

(more…)

Rebuilding my home lab: Part 1

I use my home lab setup extensively for testing and developing solutions for clients, as well as for learning various aspects of installing and operating products within the Red Hat portfolio. My lab hardware also hosts a handful of ‘production’ VMs that provide services such as DNS and authentication to the rest of the house.

The basic setup was two servers, both installed with Red Hat Enterprise Linux (RHEL) and local storage, using libvirt/KVM to host several virtual machines per host. A single 802.1Q trunk carried multiple VLANs to separate ‘core’ network traffic from two ‘lab’ networks, one of which was configured via firewall rules on the router to have no connectivity to the outside world, simulating an air-gapped disconnected environment.
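With NetworkManager, tagged interfaces for a trunk like this can be created per VLAN; a sketch with a hypothetical NIC name and VLAN IDs:

    # One tagged sub-interface per network on the 802.1Q trunk port
    nmcli connection add type vlan con-name core ifname eno1.10 dev eno1 id 10
    nmcli connection add type vlan con-name lab1 ifname eno1.20 dev eno1 id 20
    nmcli connection add type vlan con-name lab2 ifname eno1.30 dev eno1 id 30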

(more…)