Rebuilding my home lab: Part 5

In the previous parts of this series, I rebuilt my lab using Red Hat Gluster Storage (RHGS) and Red Hat Virtualisation (RHV), and added and configured the RHV hosts. We still have some tidying up to do in our environment before we are really ready to start hosting virtual machines.

Before we go into the next section on configuring some of the ancillary services, I will mention that in order to view remote consoles from another Fedora/CentOS/RHEL workstation, we need to install the spice-xpi package on that workstation. This allows your browser to open the downloaded SPICE metadata with the remote viewer application. I only mention this here because I needed to access the console of the RHV engine VM from my Fedora workstation, and was unable to until I had installed the spice-xpi Firefox plugin.
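
For reference, on the Fedora workstation that is a one-line install (the prompt below is illustrative; virt-viewer provides the remote-viewer application that actually renders the console):

[geoff@fedora ~]$ sudo dnf -y install spice-xpi virt-viewer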

Additional configs for the engine VM

I had to change a couple of things in the hosted engine VM itself. These changes can be made by connecting to the engine VM via SSH as the root user.

Firstly, the time source for chrony was configured as the default RHEL NTP pool servers. I run my own NTP service with a GPS-locked stratum 1 server, so I needed to update /etc/chrony.conf to use my own servers as the time source.
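
As a rough sketch of the change (the NTP server name below is a placeholder, substitute your own), the default pool entries can be commented out, the local server appended, and chronyd restarted:

[root@rhvm ~]# sed -i 's/^\(server\|pool\) /#&/' /etc/chrony.conf
[root@rhvm ~]# echo "server ntp1.core.home.gatwards.org iburst" >> /etc/chrony.conf
[root@rhvm ~]# systemctl restart chronyd
[root@rhvm ~]# chronyc sources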

I also noticed that for some of the gluster polling, the engine was trying to reference the hosts as storage[1-3] to gather stats. I believe this may have been causing the RHV ‘Self-Heal Info’ for the gluster volumes to not match the heal status shown by the gluster commands run directly at the OS level. Since the engine VM resides on my core network and not the internal storage network, I needed to add three entries to /etc/hosts on the engine so that those names resolve to an interface of each storage node that the engine can actually reach.

[root@rhvm ~]# cat << EOF >> /etc/hosts
172.22.1.21   storage1
172.22.1.22   storage2
172.22.1.23   storage3
EOF

Power Management

The only alerts shown in my setup at this stage were one from each host about unconfigured power management. Configuring power management allows the engine to power cycle (fence) a physical host if a lockup or other condition requiring a reboot is detected. In my environment, baremetal1 and baremetal2 each have an IPMI interface, and baremetal3 has an HP remote access card that also provides an IPMI interface.

In each case, a user with administrator privileges needs to be created on the IPMI device. I have created a user ‘rhvadmin’ on all three IPMI devices.
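
For illustration only (the user ID and channel number vary between BMCs, so check with ‘ipmitool user list 1’ first), creating such a user locally with ipmitool looks roughly like this:

[root@baremetal1 ~]# ipmitool user set name 3 rhvadmin
[root@baremetal1 ~]# ipmitool user set password 3
[root@baremetal1 ~]# ipmitool channel setaccess 1 3 ipmi=on privilege=4
[root@baremetal1 ~]# ipmitool user enable 3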

In the manager UI, navigate to our cluster and select the host we want to enable power management on. Click ‘Edit’, then go to the ‘Power Management’ tab and tick ‘Enable Power Management’. I am turning off Kdump integration, as this is my home environment and I’m not that interested in kdump collection or reporting.

Next, we need to add a fence agent. Fence agents are the software methods that allow power control of a host. There are many options available to us, including ilo for HP ProLiant servers, drac for Dell servers, and ipmilan for generic IPMI, which is what we will be using. Click on ‘Add Fence Agent’ and fill in the details for your fencing device. In my case, I am using the ipmilan agent type with the rhvadmin user I created on that device.

Click the ‘Test’ button to confirm that the address can be reached and the credentials are valid. In my case, the response to the test was:
“Test successful: power on”
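
If the UI test fails, the same fence agent can be exercised by hand from one of the hypervisors (the fence-agents packages should already be present on an RHV host, as fencing is proxied through the hosts); the IPMI address and password below are placeholders:

[root@baremetal1 ~]# fence_ipmilan --ip=<ipmi address of baremetal2> \
                       --username=rhvadmin --password='xxxxxxxx' \
                       --lanplus --action=status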

After enabling the power management, I needed to reinstall each RHV host as described in part 4 for the change to become effective.

Satellite Registration – Errata

If we click on the ‘Errata’ portion of the system tree, we would expect to see any outstanding errata, right?  Well in my case we don’t, as we have not used Satellite (or Foreman) to deploy the engine – all we see is the following:

I have a Satellite installed here, so how do we fix this?

First we need to create a service account in our Satellite for RHV-M to use. Again, I have used the rhvadmin account. This account has administrator privilege in Satellite – at a later stage I will narrow down the exact permissions required.
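
If you prefer the CLI to the Satellite web UI, the service account can be created with hammer along these lines (a sketch only: the names, mail address and password are placeholders, the auth source ID is assumed to be the internal source, and as noted the admin flag is broader than strictly required):

[root@sat62 ~]# hammer user create --login rhvadmin \
                  --firstname RHV --lastname Admin \
                  --mail rhvadmin@home.gatwards.org \
                  --password 'xxxxxxxx' \
                  --auth-source-id 1 --admin true \
                  --organizations GatwardIT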

With the Satellite account created, we can add Satellite as an External Provider to RHV. We do this from the ‘External Providers’ section of the system tree by selecting ‘New’. Fill in the details of our Satellite and click Test. We are prompted to import the SSL certificate from Satellite; after clicking Yes we should see a successful test connection.

Click OK, and we should see our Satellite listed as an external provider.

Now we can go to each of our hypervisor hosts and select ‘Edit’. Enable the ‘Use Foreman/Satellite’ checkbox and save the host. Note that you will only be able to see host errata if the hosts are registered with Satellite. I built mine from Satellite kickstarts, so they are already registered. Note that if we create the Satellite provider before adding our additional hosts, we can deploy this configuration at the same time as the hosts.

At this point, we can now see any outstanding errata for our hypervisors through the RHV-M interface for each host:

But what about the manager itself? At no point during the engine installation did we register with Satellite. We need to do this manually by logging in to the manager as the root user via SSH, then registering with our Satellite using an activation key that gives access to the RHV manager repositories. We also need to install the Katello tools so that Satellite can calculate any outstanding errata.

[root@rhvm ~]# rpm -ivh http://sat62.core.home.gatwards.org/pub/katello-ca-consumer-latest.noarch.rpm

[root@rhvm ~]# subscription-manager register --org GatwardIT --activationkey rhel7-lib-rhv

[root@rhvm ~]# subscription-manager repos --enable rhel-7-server-rhv-4.1-manager-rpms \
                                          --enable jb-eap-7-for-rhel-7-server-rpms \
                                          --enable rhel-7-server-rpms \
                                          --enable rhel-7-server-satellite-tools-6.3-rpms \
                                          --enable rhel-7-server-optional-rpms

[root@rhvm ~]# yum -y install katello-host-tools
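
The katello-host-tools package includes the katello-package-upload tool; running it once pushes the current package profile to Satellite immediately, rather than waiting for the next yum transaction to trigger the upload:

[root@rhvm ~]# katello-package-upload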

Now, we can navigate to the engine VM in the RHV manager interface and select ‘Edit’. Click on ‘Show Advanced Options’ to reveal the Foreman/Satellite tab. From that tab, select our Satellite provider and save the VM record.

Now, we should be able to see any outstanding errata in the ‘Errata’ tab within our engine VM in the RHV-M interface.

Clicking on any of the errata fields shows us the details:

At this point, we can run an update of the RHV-M engine as the root user via SSH. Before doing so, however, we need to put the cluster into ‘Global Maintenance’ mode. We can do this easily from the command line on one of our baremetal engine hosts:

[root@baremetal2 ~]# hosted-engine --set-maintenance --mode=global
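
The maintenance state can be confirmed from the same host before proceeding; hosted-engine --vm-status flags global maintenance in its output:

[root@baremetal2 ~]# hosted-engine --vm-status | grep -i maintenance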

Next, we can run the upgrade check from the manager VM.

[root@rhvm ~]# engine-upgrade-check 
VERB: queue package ovirt-engine-setup for update
VERB: Downloading: jb-eap-7-for-rhel-7-server-rpms/x86_64 (0%)
VERB: Downloading: jb-eap-7-for-rhel-7-server-rpms/x86_64 2.3 k(100%)
VERB: Downloading: jb-eap-7-for-rhel-7-server-rpms/x86_64/updateinfo (0%)
VERB: Downloading: jb-eap-7-for-rhel-7-server-rpms/x86_64/updateinfo 116 k(100%)
VERB: Downloading: jb-eap-7-for-rhel-7-server-rpms/x86_64/primary (0%)
VERB: Downloading: jb-eap-7-for-rhel-7-server-rpms/x86_64/primary 287 k(100%)
VERB: Downloading: rhel-7-server-optional-rpms/x86_64 (0%)
VERB: Downloading: rhel-7-server-optional-rpms/x86_64 2.0 k(100%)
VERB: Downloading: rhel-7-server-optional-rpms/x86_64/updateinfo (0%)
VERB: Downloading: rhel-7-server-optional-rpms/x86_64/updateinfo 1.9 M(100%)
VERB: Downloading: rhel-7-server-optional-rpms/x86_64/primary (0%)
VERB: Downloading: rhel-7-server-optional-rpms/x86_64/primary 4.3 M(100%)
VERB: Downloading: rhel-7-server-rhv-4.1-manager-rpms/x86_64 (0%)
VERB: Downloading: rhel-7-server-rhv-4.1-manager-rpms/x86_64 2.3 k(100%)
VERB: Downloading: rhel-7-server-rhv-4.1-manager-rpms/x86_64/updateinfo (0%)
VERB: Downloading: rhel-7-server-rhv-4.1-manager-rpms/x86_64/updateinfo 46 k(100%)
VERB: Downloading: rhel-7-server-rhv-4.1-manager-rpms/x86_64/primary (0%)
VERB: Downloading: rhel-7-server-rhv-4.1-manager-rpms/x86_64/primary 165 k(100%)
VERB: Downloading: rhel-7-server-rpms/x86_64 (0%)
VERB: Downloading: rhel-7-server-rpms/x86_64 2.0 k(100%)
VERB: Downloading: rhel-7-server-rpms/x86_64/updateinfo (0%)
VERB: Downloading: rhel-7-server-rpms/x86_64/updateinfo 2.7 M(100%)
VERB: Downloading: rhel-7-server-rpms/x86_64/primary (0%)
VERB: Downloading: rhel-7-server-rpms/x86_64/primary 16 M(57%)
VERB: Downloading: rhel-7-server-rpms/x86_64/primary 28 M(100%)
VERB: Downloading: rhel-7-server-satellite-tools-6.3-rpms/x86_64 (0%)
VERB: Downloading: rhel-7-server-satellite-tools-6.3-rpms/x86_64 2.1 k(100%)
VERB: Downloading: rhel-7-server-satellite-tools-6.3-rpms/x86_64/updateinfo (0%)
VERB: Downloading: rhel-7-server-satellite-tools-6.3-rpms/x86_64/updateinfo 7.1 k(100%)
VERB: Downloading: rhel-7-server-satellite-tools-6.3-rpms/x86_64/primary (0%)
VERB: Downloading: rhel-7-server-satellite-tools-6.3-rpms/x86_64/primary 20 k(100%)
VERB: processing package ovirt-engine-setup-4.1.11.2-0.1.el7.noarch for update
VERB: package ovirt-engine-setup-4.1.11.2-0.1.el7.noarch queued
VERB: Building transaction
VERB: Transaction built
VERB: Transaction Summary:
VERB:     update     - ovirt-engine-lib-4.1.10.3-0.1.el7.noarch
VERB:     update     - ovirt-engine-lib-4.1.11.2-0.1.el7.noarch
VERB:     update     - ovirt-engine-setup-4.1.10.3-0.1.el7.noarch
VERB:     update     - ovirt-engine-setup-4.1.11.2-0.1.el7.noarch
VERB:     update     - ovirt-engine-setup-base-4.1.10.3-0.1.el7.noarch
VERB:     update     - ovirt-engine-setup-base-4.1.11.2-0.1.el7.noarch
VERB:     update     - ovirt-engine-setup-plugin-ovirt-engine-4.1.10.3-0.1.el7.noarch
VERB:     update     - ovirt-engine-setup-plugin-ovirt-engine-4.1.11.2-0.1.el7.noarch
VERB:     update     - ovirt-engine-setup-plugin-ovirt-engine-common-4.1.10.3-0.1.el7.noarch
VERB:     update     - ovirt-engine-setup-plugin-ovirt-engine-common-4.1.11.2-0.1.el7.noarch
VERB:     update     - ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.1.10.3-0.1.el7.noarch
VERB:     update     - ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.1.11.2-0.1.el7.noarch
VERB:     update     - ovirt-engine-setup-plugin-websocket-proxy-4.1.10.3-0.1.el7.noarch
VERB:     update     - ovirt-engine-setup-plugin-websocket-proxy-4.1.11.2-0.1.el7.noarch
Upgrade available.

In this case the RHV-M manager can be updated from 4.1.9 to 4.1.11. We now need to run the engine-setup command to perform the upgrade (simply running yum update will not perform all of the required upgrade steps):

[root@rhvm ~]# yum -y update ovirt\*setup\*
[root@rhvm ~]# engine-setup 
  Would you like to proceed? (Yes, No) [Yes]: 
  Setup has found updates for some packages:
...
  do you wish to update them now? (Yes, No) [Yes]: 
  Do you want Setup to configure the firewall? (Yes, No) [Yes]: 
 Would you like to backup the existing database before upgrading it? (Yes, No) [Yes]: 
 Perform full vacuum on the engine database engine@localhost? (Yes, No) [No]: 
  During execution engine service will be stopped (OK, Cancel) [OK]: 
...
 Please confirm installation settings (OK, Cancel) [OK]: 
...
[ INFO ] Execution of setup completed successfully

I found that running the update this way did upgrade the manager, but left outstanding errata for the JBoss EAP packages.  I resolved these by running a standard yum update AFTER the engine-setup script had updated the engine. As a precaution I then restarted the ovirt-engine service on the manager.

[root@rhvm ~]# yum -y update

[root@rhvm ~]# systemctl restart ovirt-engine

Finally, we can take the cluster out of maintenance mode from our baremetal host:

[root@baremetal2 ~]# hosted-engine --set-maintenance --mode=none

After the update of the manager, we can now also see a summary of outstanding errata in the main ‘Errata’ section of the system tree:

When updates are available for our hypervisor hosts, we can see this in the manager UI as well, with the package icon displayed next to each host, and a link to perform the upgrade in the status page for each host.

Clicking on the ‘upgrade’ link automagically places the host in maintenance and updates the packages – neat!

Replacing the SSL Certificates

The RHV-M engine is installed with self-signed certificates. In my environment, I have my own root CA and sub-CA configured, and I want to use these to simulate what would be normal practice in an enterprise environment.

We need to have available to us at this point our CA chain, SSL key and signed SSL certificate, all in PEM format. I will cover the creation and signing of SSL certificates in another post one day – for the purposes of this post these have already been created.
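
As a hedged sketch of how the engine key and signing request might be produced with OpenSSL (the actual signing step depends entirely on your CA tooling, so it is not shown here):

[root@rhvm ~]# openssl req -new -newkey rsa:2048 -nodes \
                 -keyout rhvm.core.home.gatwards.org.key \
                 -out rhvm.core.home.gatwards.org.csr \
                 -subj "/CN=rhvm.core.home.gatwards.org"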

[root@rhvm ~]# ls -ltr
total 48
-rw-------. 1 root root 3483 Apr 30 21:39 GatwardIT-CA2.pem
-r--------. 1 root root 6276 Apr 30 21:39 GatwardIT-chain.pem
-r--------. 1 root root 2793 Apr 30 21:39 GatwardIT-TLS.pem
-rw-------. 1 root root 1766 May  2 19:18 rhvm.core.home.gatwards.org.key
-rw-------. 1 root root 2155 May  2 19:19 rhvm.core.home.gatwards.org.pem

First we need to install our ‘enterprise’ CA chain in the RHV-M engine VM so that we can automatically trust certificates that are signed by it. To do this, I have copied my root and sub CA pem files to /etc/pki/ca-trust/source/anchors on the rhvm host. We then update the system CA trust.

[root@rhvm ~]# ls -l /etc/pki/ca-trust/source/anchors/Gat*.pem
-rw-------. 1 root root 3483 Apr 30 21:39 GatwardIT-CA2.pem
-r--------. 1 root root 6276 Apr 30 21:39 GatwardIT-chain.pem
-r--------. 1 root root 2793 Apr 30 21:39 GatwardIT-TLS.pem

[root@rhvm ~]# update-ca-trust

Next, we need to replace the existing certificate and key with our signed certificate and the key used to generate it, and then restart the httpd service:

[root@rhvm ~]# rm -f /etc/pki/ovirt-engine/apache-ca.pem
[root@rhvm ~]# cp /etc/pki/ca-trust/source/anchors/GatwardIT-CA2.pem /etc/pki/ovirt-engine/apache-ca.pem
[root@rhvm ~]# cp rhvm.core.home.gatwards.org.nopass.key /etc/pki/ovirt-engine/keys/apache.key.nopass
[root@rhvm ~]# cp rhvm.core.home.gatwards.org.pem /etc/pki/ovirt-engine/certs/apache.cer

[root@rhvm ~]# systemctl restart httpd
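
Note that the copy above references rhvm.core.home.gatwards.org.nopass.key rather than the key listed earlier: Apache needs a key without a passphrase. If your key is encrypted, a passphrase-free copy can be produced with OpenSSL, for example:

[root@rhvm ~]# openssl rsa -in rhvm.core.home.gatwards.org.key \
                 -out rhvm.core.home.gatwards.org.nopass.key
[root@rhvm ~]# chmod 600 rhvm.core.home.gatwards.org.nopass.key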

Now we need to update three configuration files for the ovirt-engine service and restart it.

[root@rhvm ~]# cat << EOF > /etc/ovirt-engine/engine.conf.d/99-custom-truststore.conf
ENGINE_HTTPS_PKI_TRUST_STORE="/etc/pki/java/cacerts"
ENGINE_HTTPS_PKI_TRUST_STORE_PASSWORD=""
EOF

[root@rhvm ~]# cat << EOF > /etc/ovirt-engine/ovirt-websocket-proxy.conf.d/10-setup.conf
PROXY_PORT=6100
SSL_CERTIFICATE=/etc/pki/ovirt-engine/apache-ca.pem
SSL_KEY=/etc/pki/ovirt-engine/keys/apache.key.nopass
CERT_FOR_DATA_VERIFICATION=/etc/pki/ovirt-engine/certs/engine.cer
SSL_ONLY=True
EOF

[root@rhvm ~]# cat << EOF > /etc/ovirt-engine/logcollector.conf.d/99-custom-ca-cert.conf
[LogCollector]
cert-file=/etc/pki/ovirt-engine/apache-ca.pem
EOF

[root@rhvm ~]# systemctl restart ovirt-engine

Now when we browse to the manager web interface we should see that our SSL session is trusted, as it is signed by our ‘enterprise’ CA.
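
The served certificate can also be checked from the command line (assuming the OpenSSL client is available on the workstation):

[geoff@fedora ~]$ echo | openssl s_client -connect rhvm.core.home.gatwards.org:443 2>/dev/null \
                    | openssl x509 -noout -issuer -subject -dates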

Integration with Authentication Provider

So how do we now use an external authentication source to log in to the RHV-M engine? I have a pair of IPA servers running in my environment with a trust to an Active Directory domain, and I want to be able to use my standard AD credentials to connect to the manager, rather than using the internal ‘admin@internal’ account. When I installed the baremetal hosts, my kickstart enrolled them into the IPA realm – but that only covers login to the hypervisors, not the manager.

RHV-M supports both IPA and AD integration. Since my users exist in AD, I am going to bind directly to AD; I could just as easily bind to IPA in the same way. These methods are detailed in section 16.3 of the RHV 4.1 Administration Guide.

Again, we need to SSH into the engine VM as the root user in order to run these commands.

The ‘enterprise’ CA chain also needs to be available to the LDAP extension so that the StartTLS connection to AD is trusted. My root and sub-CA PEM files are already in /etc/pki/ca-trust/source/anchors on the rhvm host (copied there in the SSL certificate section above); we now import the sub-CA into a Java keystore for the aaa-ldap extension to use.

[root@rhvm ~]# ls -l /etc/pki/ca-trust/source/anchors/Gat*.pem
-rw-------. 1 root root 3483 Apr 30 21:39 GatwardIT-CA2.pem
-r--------. 1 root root 6276 Apr 30 21:39 GatwardIT-chain.pem
-r--------. 1 root root 2793 Apr 30 21:39 GatwardIT-TLS.pem

[root@rhvm ~]# keytool -importcert -noprompt -trustcacerts \
    -alias GatwardIT-CA2 -file /etc/pki/ca-trust/source/anchors/GatwardIT-CA2.pem \
    -keystore /etc/ovirt-engine/aaa/GatwardIT-CA2.jks \
    -storepass s0m3pa55w0rd!
Certificate was added to keystore

[root@rhvm ~]# cat << EOF > /etc/ovirt-engine/aaa/profile1.properties
pool.default.ssl.startTLS = true
pool.default.ssl.truststore.file = ${local:_basedir}/GatwardIT-CA2.jks
pool.default.ssl.truststore.password = s0m3pa55w0rd!
EOF

With the certificates in place, we can run the ldap-setup tool. I am ensuring that we use StartTLS so that all LDAP queries are encrypted with TLS. Make sure that the test lookup of a valid AD user is successful. If the ldap-setup script completes successfully, the last step is to restart the engine.

[root@rhvm ~]#  yum -y install ovirt-engine-extension-aaa-ldap-setup

[root@rhvm ~]# ovirt-engine-extension-aaa-ldap-setup
 Welcome to LDAP extension configuration program
 Available LDAP implementations:
 1 - 389ds
 2 - 389ds RFC-2307 Schema
 3 - Active Directory
 4 - IBM Security Directory Server
 5 - IBM Security Directory Server RFC-2307 Schema
 6 - IPA
 7 - Novell eDirectory RFC-2307 Schema
 8 - OpenLDAP RFC-2307 Schema
 9 - OpenLDAP Standard Schema
 10 - Oracle Unified Directory RFC-2307 Schema
 11 - RFC-2307 Schema (Generic)
 12 - RHDS
 13 - RHDS RFC-2307 Schema
 14 - iPlanet
 Please select: 3
 Please enter Active Directory Forest name: ad.home.gatwards.org
 Please select protocol to use (startTLS, ldaps, plain) [startTLS]: startTLS
 Please select method to obtain PEM encoded CA certificate (File, URL, Inline, System, Insecure): File
 File path: /etc/pki/ca-trust/source/anchors/GatwardIT-CA2.pem
 Enter search user DN (for example uid=username,dc=example,dc=com or leave empty for anonymous): cn=rhv admin,ou=Service Accounts,ou=Accounts,dc=ad,dc=home,dc=gatwards,dc=org
 Enter search user password: 
 Are you going to use Single Sign-On for Virtual Machines (Yes, No) [Yes]: No
 Please specify profile name that will be visible to users [ad.home.gatwards.org]: AD
 Please provide credentials to test login flow:
 Enter user name: geoff
 Enter user password: 
[ INFO ] Login sequence executed successfully
 Select test sequence to execute (Done, Abort, Login, Search) [Done]: Done

[root@rhvm ~]# systemctl restart ovirt-engine

At this point, we should be able to select our AD profile from the login dropdown and log in to RHV-M as an AD user.

BUT – we have not granted this user any access on the RHV side yet, so although we can successfully log in, we can’t actually do anything!

Setting user permissions

I am going to cheat and add my AD user (geoff) to the manager with SuperUser access. In the real world we would create users with roles that provide granular access.

To add an AD user, navigate to Configure -> System Permissions. This lists the current users and their roles.

Click on Add and a search dialog will open. Ensure that we are searching for a User in the AD profile we created earlier, enter the user to search for, and click Go. A list of matching users will be displayed. Select the appropriate user, make sure the correct role is listed, and click OK. We could also select a Group here to assign privileges to all users within the specified AD group.

We should see our new user listed as we defined.

Close the window and we are done. This AD user should now be able to log in with the privileges assigned by the selected role.
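
As a quick sanity check (the password below is obviously a placeholder, the profile name must match the one we defined earlier, and the CA chain file is assumed to have been copied to the workstation – otherwise add -k), the REST API accepts the same user@profile credentials, so a successful authenticated call confirms both the LDAP binding and the new permissions:

[geoff@fedora ~]$ curl -s --cacert GatwardIT-chain.pem \
                    -u 'geoff@AD:xxxxxxxx' \
                    https://rhvm.core.home.gatwards.org/ovirt-engine/api | grep -A2 product_info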

Rebuilding my home lab: Part 4

Configuring RHV

In part 3 of this series, we installed the self-hosted RHV engine (RHV-M) on our baremetal1 server. This server is now an RHV hypervisor with a single VM; however, that VM has not yet been imported, so it cannot be seen from RHV-M.

Add Storage Domains

Our next step in the build process is to add our second Gluster volume (vmstore) for general VM storage. We do this from the RHV-M administration portal as the admin user.

(more…)

Rebuilding my home lab: Part 3

In part 1 of this series we looked at the hardware configuration for the new lab, and installed the base OS via kickstart from Satellite. In part 2 we installed and configured Gluster to provide shared storage between the hosts. In this part we will look at installing Red Hat Virtualisation and performing the initial configuration.

Red Hat Virtualisation can be installed in a number of ways. For my usage, the ideal way would be to make use of Red Hat Hyperconverged Infrastructure (RHHI), which uses an installation method that provides a wizard to install and configure both Gluster and RHV. As I mentioned in part 2, this unfortunately wasn’t possible for me due to the architecture and version mismatch on my Raspberry Pi Gluster arbiter node.

Another method allows RHV to be installed using pre-existing resources, which we now have. The RHV Manager (RHV-M) is the first component that needs to be installed, and there are two methods for this as well. The first is to have the manager on a dedicated host; the second is to have the manager ‘self-hosted’. Self-hosted means that the RHV-M server runs as a guest on an RHV hypervisor, in a similar way to how VMware operates its vCenter Server Appliance (VCSA). I am going to use the self-hosted option, following the RHV 4.1 Self Hosted Engine guide.

(more…)

Rebuilding my home lab: Part 2

In part 1 of this series we looked at the hardware configuration for the new lab, and installed the base OS via kickstart from Satellite. In this part we will look at how to solve the issue of presenting a shared storage domain from individual servers.

If you remember back from part 1, our new setup looks like this:

(more…)

Rebuilding my home lab: Part 1

I use my home lab setup extensively for testing and developing solutions for clients, as well as learning various aspects of installing and operating products within the Red Hat portfolio.  My lab hardware also hosts a handful of ‘production’ VMs that provide services such as DNS and authentication to the rest of the house.

The basic setup was two servers, both installed with Red Hat Enterprise Linux (RHEL) and local storage, using libvirt KVM to host several virtual machines per host. A single 802.1q trunk carried multiple VLANs to separate ‘core’ network traffic from two ‘lab’ networks, one of which is configured via firewall rules on the router to have no connectivity to the outside world, simulating an air-gapped disconnected environment.

(more…)