Rebuilding my home lab: Part 4

Configuring RHV

In part 3 of this series, we installed the Self-Hosted RHV Engine (RHV-M) on our baremetal1 server. That server is now an RHV hypervisor running a single VM; however, that VM has not yet been imported, so it cannot be seen from RHV-M.

Add Storage Domains

Our next step in the build process is to add our second Gluster volume (vmstore) for general VM storage. We do this from the RHV-M administration portal as the admin user.

Navigate to Storage -> New Domain. Select Data as the domain function, and GlusterFS as the type. Since the storage is presented on our local baremetal1 host, select that as the host to use. Tick the ‘Use managed gluster volume’ checkbox so that we will be able to see the brick status in the RHV-M UI, and select our vmstore volume from the Gluster dropdown.

We also need to define our alternate gluster servers (baremetal2 and baremetal3) by entering backup-volfile-servers=baremetal2:baremetal3 in the Mount options field. This keeps the volume accessible should the primary baremetal1 node go down.
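
For reference, this is the same option we would use when mounting the volume by hand – a quick sketch, with the mount point being just an example:

[root@baremetal1 ~]# mount -t glusterfs -o backup-volfile-servers=baremetal2:baremetal3 baremetal1:/vmstore /mnt/test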

When this Data Domain has been added, RHV-M will automatically import the hosted_storage domain (the engine volume that we installed the RHV-M engine on) and also import the Hosted Engine VM.

Next we need to add an ISO domain, which is used to store installation media (ISO files) in case we need to install from disk. My ISO domain will be an existing NFS share on my Synology NAS that is already used to store ISO images.

For this to work, I needed to create the ‘vdsm’ user (uid 36) and ‘kvm’ group (gid 36) on the NAS, and set the permissions and ownerships on the ISO export directory on the NAS:

root@nas1:/volume1# id vdsm
uid=36(vdsm) gid=100(users) groups=100(users),36(kvm)
root@nas1:/volume1# grep kvm /etc/group
kvm:x:36:vdsm
root@nas1:/volume1# ls -ld ISO
drwxrwxr-x 1 vdsm kvm 182 Apr 14 19:34 ISO
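
Synology’s DSM has its own tooling for creating users and groups, but on a plain RHEL box the equivalent setup would look something like the sketch below (the export path is the one from my NAS – adjust to suit):

# Create the kvm group and vdsm user with the fixed IDs that RHV expects (uid/gid 36)
groupadd -g 36 kvm
useradd -u 36 -g kvm -s /sbin/nologin vdsm
# Hand the ISO export directory to vdsm:kvm and make it group writable
chown -R 36:36 /volume1/ISO
chmod 0775 /volume1/ISO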

Again, select Storage -> New Domain. This time the Domain Function will be ISO and the type will be NFS.

With this in place, we now have three storage domains in our RHV setup.
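
As a quick sanity check from one of the hypervisors, we can confirm the NAS export is actually visible (the path is the same one that shows up in the df output below):

[root@baremetal1 ~]# showmount -e nas1.core.home.gatwards.org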

We can check where volumes are being mounted from on the baremetal hosts using df -TH – this will show us the currently mounted filesystems, including the glusterfs and NFS network mounts.

[root@baremetal1 ~]# df -TH
Filesystem                               Type            Size  Used Avail Use% Mounted on
/dev/mapper/vg_sys-root                  xfs              11G  3.3G  7.5G  31% /
devtmpfs                                 devtmpfs         34G     0   34G   0% /dev
tmpfs                                    tmpfs            34G  4.1k   34G   1% /dev/shm
tmpfs                                    tmpfs            34G   11M   34G   1% /run
tmpfs                                    tmpfs            34G     0   34G   0% /sys/fs/cgroup
/dev/mapper/vg_sys-var                   xfs              22G  735M   21G   4% /var
/dev/mapper/vg_sys-log                   xfs             4.3G   44M  4.3G   2% /var/log
/dev/mapper/vg_sys-audit                 xfs             1.1G   36M  1.1G   4% /var/log/audit
/dev/mapper/vg_sys-tmp                   xfs             6.5G   36M  6.4G   1% /tmp
/dev/sda1                                xfs             1.1G  151M  913M  15% /boot
tmpfs                                    tmpfs           6.8G     0  6.8G   0% /run/user/0
/dev/md126p1                             xfs             108G  3.6G  104G   4% /bricks/engine
/dev/md126p2                             xfs             5.6T   40M  5.6T   1% /bricks/vmstore
baremetal1:/engine                       fuse.glusterfs  108G  3.6G  104G   4% /rhev/data-center/mnt/glusterSD/baremetal1:_engine
baremetal1:/vmstore                      fuse.glusterfs  5.6T   40M  5.6T   1% /rhev/data-center/mnt/glusterSD/baremetal1:_vmstore
nas1.core.home.gatwards.org:/volume1/ISO nfs4            3.2T  1.6T  1.6T  50% /rhev/data-center/mnt/nas1.core.home.gatwards.org:_volume1_ISO

Add second host

Additional hosts can be added as needed – however to enable failover of the Hosted Engine, we need to perform an additional step to deploy the new host as a self-hosted engine node. This step will allow the shared storage domain (engine) to be made available on the new host.

Deployment of the new host is via the RHV-M console. Navigate to Hosts -> New. The Host Cluster and Data Center will be pre-filled, as there is only one option to choose. Enter the name and address of the new host, and enter the root password. We could opt to use SSH keys, but for this installation I have not done so – maybe if I re-install 🙂

In the Advanced parameters, UNCHECK Automatically configure host firewall. As we did for baremetal1, we have already set up the firewall and don’t want RHV to screw with it.

On the Hosted Engine tab, select Deploy from the dropdown list. If we don’t do this step, the new host will be an RHV hypervisor, but will not be able to run the RHV-M VM if we need to fail it over.

Because I want to be able to use nested virtualisation (i.e. run a hypervisor as a guest VM, for example an OpenStack Compute node), on the Kernel tab tick the Nested Virtualization checkbox. The use of this feature depends on your hardware capabilities.
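
It is worth confirming that the hardware actually exposes the virtualisation extensions before ticking the box – a non-zero count from the check below means the CPU advertises them (vmx for Intel, svm for AMD):

[root@baremetal2 ~]# grep -cE 'vmx|svm' /proc/cpuinfo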

Click OK, and the installation/deployment on the second host will commence. I get a warning that I have not configured power management yet – that’s OK, I will configure that part later on both hosts. The second host should appear with a package icon indicating deployment in progress.

When the deployment has completed successfully, the second host should change to UP, with a silver crown to indicate that this is a standby RHV-M host (The host currently running the self-hosted RHV-M VM will have the gold crown). The ! symbol indicates an alert for the host – this will be the fact that power management has not yet been configured.

The nested virtualisation option we selected can be checked in the host’s grub config – it will not take effect until the host is rebooted.

[root@baremetal2 ~]# grep nested /etc/grub2.cfg
	linux16 /vmlinuz-3.10.0-862.el7.x86_64 root=/dev/mapper/vg_sys-root ro nofb splash=quiet crashkernel=auto rd.lvm.lv=vg_sys/root rd.lvm.lv=vg_sys/swap rhgb quiet LANG=en_US.UTF-8 kvm-intel.nested=1

If we check our two gluster volumes now, we should also see two bricks marked as UP and no bricks DOWN.
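
The same brick state can be confirmed from the command line on either RHEL host if you prefer (using the volume names we mounted earlier):

[root@baremetal1 ~]# gluster volume status engine
[root@baremetal1 ~]# gluster volume status vmstore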

Add third host

Unfortunately we won’t see our third brick here (the Raspberry Pi arbiter), as RHV won’t be able to install the required packages on the Pi to manage that brick.

Since our third host is also running RHEL and gluster, we can now import it as well. However, in my case the HP Microserver is a different CPU architecture (AMD) from the main Intel-based boxes, so it cannot be part of the same cluster. It can, however, be imported into its own cluster as a standalone host. This will also allow us to see the status of (and manage) the bricks on that host.

First, we need to create a NEW cluster to house this different architecture host. Navigate to Clusters -> New. The important parameter is the CPU Architecture and Type. In my case the N54L is an AMD Opteron G3, so I am selecting that. Ensure that the Gluster Service is enabled, otherwise the brick on this host won’t be visible to us.

After creating the cluster, a wizard will be displayed allowing you to create the new host. Let’s do that, and add our third baremetal box to this new cluster.

For this third host, I am setting the SPM value to ‘Never’ so that it cannot manage the storage pools, and am NOT deploying the hosted engine to it – we won’t be able to migrate VMs to and from baremetal3.

After adding this third node, we now see all three bricks in our gluster volumes:

Add Networks

Now that we have a RHV cluster consisting of both of our hypervisors and the manager, we can add some networking. If you remember from part 1 of this series, we have several networks to add. We have a dedicated MIGRATION VLAN, which will be used for VM migration between hosts, and we have an 802.1q trunk with six tagged VLANs for various VM traffic. We also have a STORAGE VLAN over which our gluster traffic is being passed.

For this, we navigate to the Data Centers -> Default -> Logical Networks tab, and select New. Give the network a name (I’ll call this one Migration). Uncheck the VM network checkbox, as the migration network is internal to RHV and will never carry VM traffic. We are not using an external network provider (e.g. OVS) so we leave all those fields blank. Click OK to create the network.

Next we repeat this for our Storage network. This network also will NOT be a VM network – and I will set the MTU to 9000 to make use of jumbo frames for volume replication.
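
Once the MTU has been applied to the host NICs, a simple way to verify jumbo frames end-to-end is a don’t-fragment ping with an 8972-byte payload (9000 minus the 28 bytes of IP/ICMP headers) across the storage network – for example from baremetal1 to baremetal2’s storage address (storage2 in my naming, which appears again later in this post):

[root@baremetal1 ~]# ping -M do -s 8972 -c 3 storage2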

Next we repeat this for our VLAN interfaces, except these will all be VM networks, so leave that checkbox ticked. We also enable VLAN tagging and enter the VLAN ID.

Rinse and repeat for our other VLAN tagged interfaces. Don’t worry about the DOWN indication on the networks – we haven’t assigned them to a NIC yet.

Once all of the networks have been defined at the cluster level, they can be attached to each host on the associated NIC. Navigate to the Hosts tab, select the first host (baremetal1) and click on the Network Interfaces tab.

Click the Setup Host Networks option. A Drag and Drop type wizard will be launched, showing the physical interfaces on the left, and the logical networks you created on the right.

Simply drag the logical networks to the correct physical interface, and click OK to save. For VLAN-tagged interfaces, just drop all of the VLANs onto the same physical interface (eno2 in my case).

When the configuration has been saved, we should see all of the physical and logical networks shown correctly for the baremetal1 host.

Now, go ahead and repeat these steps on the second host (baremetal2).

After the networks on both hosts have been configured, the last step is to configure VM Migration and Storage traffic to use the migration and storage networks we configured. To do this, navigate to the Clusters entry in the tree and select the Default cluster. Select the Logical Networks tab.

Click on Manage Networks. The role of each network will be displayed. Select the radio button for ‘Migration Network’ for our actual Migration network, and ‘Gluster Network’ for our Storage network.

In my environment, the migration network is a crossover cable between the two baremetal hosts. Since there is no switch in the path, if one host goes down, the link on the surviving host also goes down. The default RHV behaviour is then to take that host offline as not all defined interfaces are up, resulting in the whole cluster going down. We can avoid this by marking the Migration network as NOT REQUIRED. By doing this, if the link goes down, it won’t take the surviving host offline as well. The loss of link on any other network however will still take that host offline.

After clicking OK, the status display of the networks in the summary page will update to show the migration icon on the migration network, and a storage icon for the storage network.

We can now go ahead and perform all of the network configuration again on our second cluster containing our third host. However, this time we need to enable access to the existing Gluster network from the Default cluster.

Enable nested virtualisation on host 1

We have added our second and third hosts with nested virtualisation enabled, but now we need to enable it on our original host. Navigate to Hosts, select our baremetal1 host and double-click it. The Edit Host screen will be displayed, and from there we can go to the Kernel tab. Note that everything is greyed out – we need to click the Reset button before we can enable the ‘Nested Virtualization’ checkbox and save the host record.

Next we need to perform a re-install of baremetal1 from within the RHV-M interface to get the kernel parameters to take effect. Navigate to Hosts, select the entry for baremetal1, and select Management -> Maintenance  (We don’t need to stop the Gluster service at this time).

By putting baremetal1 into maintenance, the hosted engine VM should migrate to baremetal2 – the crown icon should become gold to indicate this.
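
This can also be watched from the command line on either host – hosted-engine --vm-status reports which host currently holds the engine VM along with its health score:

[root@baremetal2 ~]# hosted-engine --vm-status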

Now, select Installation -> Reinstall.  On the host installation window that pops up, uncheck the host firewall option on the General tab, and on the Hosted Engine tab ensure that DEPLOY is selected.

Click OK.  The RHV components will be reinstalled, taking into account our updated kernel parameters for the host.  Once the reinstall has completed, the host will go back into service.

To complete the update, we need to place baremetal1 back into maintenance, this time with the Gluster service turned off. Once it is in maintenance, reboot the host and wait for it to come back up.  When the host has rebooted, remove it from maintenance and we’re all done.
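
After the reboot we can confirm that nested virtualisation is actually active on the Intel hosts by reading the kvm_intel module parameter – it should report Y (or 1 on newer kernels):

[root@baremetal1 ~]# cat /sys/module/kvm_intel/parameters/nested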

Emergency access and backup

One thing that bit me when my RHV-M VM would not start at one point was: how do we access the console of the manager if it is down? Easy. We need to run a command from the host running the hosted engine VM to set the console password. This appears to be a SINGLE-USE password – I needed to set it each time I wanted to access the remote console from outside the RHV environment.

[root@baremetal1 ~]# hosted-engine --add-console-password
Enter password: 

Now – remember that we set the hosted-engine remote console to VNC instead of spice?  This makes it easier for us to access it, as VNC is the native console display for KVM, and is the easiest to get working. Remember if we can’t get the hosted-engine up, we’re pretty much stuck.  To access the remote console from another Linux host (my workstation is a Fedora host):

geoff@shack[~/Downloads] $ remote-viewer vnc://baremetal1.core.home.gatwards.org:5900

We’ll be prompted for the password we defined on the manager console, and then we are in!

It also makes sense at this point to make a BACKUP of our hosted engine VM – if we manage to totally screw things up (I did this during failover testing) we can restore a known good copy of our hosted engine VM.

I followed the instructions at https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/self-hosted_engine_guide/chap-backing_up_and_restoring_a_rhel-based_self-hosted_environment#Backing_up_the_Self-Hosted_Engine_Manager_Virtual_Machine

Basically, we need to put our baremetal1 host into maintenance mode via the admin UI, wait for it to be fully in maintenance mode with all VMs and the SPM (Storage Pool Manager) migrated to baremetal2, and then run the backup command.
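
The backup command itself (from the linked documentation) is engine-backup, run on the RHV-M VM rather than on the hypervisor – a minimal sketch, with file and log names of my own choosing:

# Run on the RHV-M VM itself; the file and log names are just examples
engine-backup --mode=backup --scope=all --file=engine-backup.tar.gz --log=engine-backup.log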

That’s it for now. We should now have a working cluster with two hosts, Gluster VM and Engine storage, an NFS ISO store and all networking configured as needed.

Testing our availability

So – if we shut down one host, the other should survive, right? Let’s try it out.

If we look at the VMs tab we can see our Hosted Engine currently running on baremetal1.

We can also see that on both baremetal hosts our gluster storage is mounted from storage1 (baremetal1’s address on the storage network):

[root@baremetal1 ~]# df -TH | grep gluster
storage1:/rhvm-dom                       fuse.glusterfs  108G  3.6G  104G   4% /rhev/data-center/mnt/glusterSD/storage1:_rhvm-dom
storage1:/data-dom                       fuse.glusterfs  5.6T   40M  5.6T   1% /rhev/data-center/mnt/glusterSD/storage1:_data-dom

[root@baremetal2 ~]# df -TH | grep gluster
storage1:/rhvm-dom                       fuse.glusterfs  108G  3.6G  104G   4% /rhev/data-center/mnt/glusterSD/storage1:_rhvm-dom
storage1:/data-dom                       fuse.glusterfs  5.6T   40M  5.6T   1% /rhev/data-center/mnt/glusterSD/storage1:_data-dom

So what happens if baremetal1 (storage1) goes down?  Let’s find out.  I killed the power on baremetal1 abruptly.

On both baremetal2 and our Raspberry Pi arbiter we can see that the gluster volume is now missing one node:

[root@baremetal2 ~]# gluster peer status
Number of Peers: 2

Hostname: storage3
Uuid: 6b347e19-e595-49e2-a55f-87a6a4c815e5
State: Peer in Cluster (Connected)

Hostname: storage1
Uuid: 53a66e48-bd61-4164-8c7e-7bec2a479123
State: Peer in Cluster (Disconnected)
Other names:
10.1.12.21

[root@baremetal2 ~]# gluster volume status
Status of volume: data-dom
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick storage2:/bricks/brick5/brick         49153     0          Y       12289
Brick storage3:/bricks/brick6/brick         49156     0          Y       773  
Self-heal Daemon on localhost               N/A       N/A        Y       1983 
Self-heal Daemon on storage3                N/A       N/A        Y       917  
 
Task Status of Volume data-dom
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: rhvm-dom
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick storage2:/bricks/brick2/brick         49152     0          Y       12241
Brick storage3:/bricks/brick3/brick         49157     0          Y       780  
Self-heal Daemon on localhost               N/A       N/A        Y       1983 
Self-heal Daemon on storage3                N/A       N/A        Y       917  
 
Task Status of Volume rhvm-dom
------------------------------------------------------------------------------
There are no active volume tasks

After a short time, we are kicked out of our web session – this is expected, as the hosted_engine VM needs to be restarted on an available host. Logging in again, we can see that the Hosted Engine has moved over to baremetal2, and baremetal1 is marked as DOWN.

Looking at our Volumes, we can see that one brick is now marked DOWN, whilst one is still UP.

Checking our storage pools in this condition, we see that they are still UP.

Checking the gluster heal information for our volumes, we can see that there are a number of files that now require healing, or syncing over to baremetal1 when it comes back up.

root@glusterpi:~# gluster volume heal rhvm-dom info
Brick storage1:/bricks/brick1/brick
Status: Transport endpoint is not connected
Number of entries: -

Brick storage2:/bricks/brick2/brick
/.shard/d5d94c42-7604-4dcb-a454-f6e0f567cad0.920 
/022d2af1-a84a-4510-a1c6-96ba73e618eb/dom_md/ids 
/.shard/d5d94c42-7604-4dcb-a454-f6e0f567cad0.272 
/.shard/d5d94c42-7604-4dcb-a454-f6e0f567cad0.142 
/.shard/d5d94c42-7604-4dcb-a454-f6e0f567cad0.440 
/022d2af1-a84a-4510-a1c6-96ba73e618eb/images/07fa4c23-208e-45ef-a70c-b35a66505532/f45e4a7f-8883-440b-a95e-a98bfde3b77d 
/022d2af1-a84a-4510-a1c6-96ba73e618eb/images/92fe9cec-f4d5-47d3-99d7-3c15dac265cd/e39c21c3-6e0b-4638-8296-9a1e427568ce 
/.shard/d5d94c42-7604-4dcb-a454-f6e0f567cad0.165 
/.shard/d5d94c42-7604-4dcb-a454-f6e0f567cad0.176 
/.shard/d5d94c42-7604-4dcb-a454-f6e0f567cad0.800 
/.shard/d5d94c42-7604-4dcb-a454-f6e0f567cad0.164 
/022d2af1-a84a-4510-a1c6-96ba73e618eb/images/d208a502-2959-42fe-a487-a1fd3a053651/cfe29c2d-1e69-4c16-9942-9a5f41c3185b 
/.shard/d5d94c42-7604-4dcb-a454-f6e0f567cad0.824 
/.shard/d5d94c42-7604-4dcb-a454-f6e0f567cad0.872 
/.shard/d5d94c42-7604-4dcb-a454-f6e0f567cad0.137 
/.shard/d5d94c42-7604-4dcb-a454-f6e0f567cad0.183 
/.shard/d5d94c42-7604-4dcb-a454-f6e0f567cad0.344 
/.shard/d5d94c42-7604-4dcb-a454-f6e0f567cad0.152 
/.shard/d5d94c42-7604-4dcb-a454-f6e0f567cad0.128 
Status: Connected
Number of entries: 19

Brick storage3:/bricks/brick3/brick
/.shard/d5d94c42-7604-4dcb-a454-f6e0f567cad0.920 
/022d2af1-a84a-4510-a1c6-96ba73e618eb/dom_md/ids 
/.shard/d5d94c42-7604-4dcb-a454-f6e0f567cad0.272 
/.shard/d5d94c42-7604-4dcb-a454-f6e0f567cad0.142 
/.shard/d5d94c42-7604-4dcb-a454-f6e0f567cad0.440 
/022d2af1-a84a-4510-a1c6-96ba73e618eb/images/07fa4c23-208e-45ef-a70c-b35a66505532/f45e4a7f-8883-440b-a95e-a98bfde3b77d 
/022d2af1-a84a-4510-a1c6-96ba73e618eb/images/92fe9cec-f4d5-47d3-99d7-3c15dac265cd/e39c21c3-6e0b-4638-8296-9a1e427568ce 
/.shard/d5d94c42-7604-4dcb-a454-f6e0f567cad0.165 
/.shard/d5d94c42-7604-4dcb-a454-f6e0f567cad0.176 
/.shard/d5d94c42-7604-4dcb-a454-f6e0f567cad0.800 
/.shard/d5d94c42-7604-4dcb-a454-f6e0f567cad0.164 
/022d2af1-a84a-4510-a1c6-96ba73e618eb/images/d208a502-2959-42fe-a487-a1fd3a053651/cfe29c2d-1e69-4c16-9942-9a5f41c3185b 
/.shard/d5d94c42-7604-4dcb-a454-f6e0f567cad0.824 
/.shard/d5d94c42-7604-4dcb-a454-f6e0f567cad0.872 
/.shard/d5d94c42-7604-4dcb-a454-f6e0f567cad0.137 
/.shard/d5d94c42-7604-4dcb-a454-f6e0f567cad0.183 
/.shard/d5d94c42-7604-4dcb-a454-f6e0f567cad0.344 
/.shard/d5d94c42-7604-4dcb-a454-f6e0f567cad0.152 
/.shard/d5d94c42-7604-4dcb-a454-f6e0f567cad0.128 
Status: Connected
Number of entries: 19

Powering on our baremetal1 host again, the host changes state to UP; however, the gluster bricks in the volumes do not change state to UP until all of the outstanding heal tasks have been completed.
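
While waiting, the heal backlog can be watched from the surviving node – the per-brick heal count should trend down to zero before the bricks show as UP again:

[root@baremetal2 ~]# gluster volume heal rhvm-dom statistics heal-count
[root@baremetal2 ~]# gluster volume heal data-dom statistics heal-count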

Frustration…. !!

After writing and testing these articles, I am abandoning the Raspberry Pi arbiter for my home setup. Whilst the concept was good, the reliability just wasn’t there. I trashed the gluster volumes on numerous occasions just testing failover scenarios, due to the Pi dropping connections to the RHEL nodes seemingly at random.

This just isn’t reliable enough to go into production in my lab unfortunately.

I have performed some initial testing using a spare HP ProLiant N54L Microserver, and so far this is looking MUCH better. Even though it is still an arbiter node, since it runs the same RHEL 7.5 installation I can install the SAME version of gluster, AND add it as a third host to RHV – so I can see the status of all three bricks. And even though the N54L is an AMD processor (as opposed to the Intel in the SuperMicro), I can still use it as a standalone RHV host, still managed by the RHV-M engine.

I will document what the revised configuration looks like in a future post, however the installation and setup pretty much mirrors what has been done in parts 2-4 of this series.
