Stop Satellite 6 from messing up host NIC definitions

Have you ever provisioned a host with a specific interface configuration in Satellite, only to have a later reprovision fail because some application reconfigured the NICs post-deployment, and Satellite is now polluted with virtual and/or invalid NIC information?

This happened to me whilst messing with RHV (oVirt). During the configuration of RHV, many additional interfaces were created on my hosts, and when I tried to re-kickstart them from Satellite it just complained about the 20+ virtual interfaces that RHV had created. Worse, my primary interface had been changed to the ovirtmgmt bridge, and kickstarting against a bridge doesn’t work so well… It turns out that if you create an interface in RHV called ‘traffic’, then you will have that as an interface in Satellite.
If it’s VLAN tagged, then you’ll have the tagged interfaces such as traffic.11, traffic.12, etc. as well. It gets messy real quick – thanks, facter 🙂

Since I used a common base kickstart to build my hosts, Puppet was installed even though I am not using it to deploy any configuration. Puppet has facter as a dependency, so when Puppet calls home to Satellite it also reports the facts that facter gathered, and those facts then update the host details in Satellite.

There are a couple of ways to get around this problem. The first is to simply not install Puppet on the hosts: no Puppet, no fact reports. But in many cases Puppet may actually be wanted or needed to deploy classes required by an enterprise for security.

Thankfully we can address this particular annoyance on the Satellite side. In Satellite, under Settings -> Provisioning, there is an option called ‘Ignore Puppet facts for provisioning’. The default is NO. Setting this to YES causes Satellite to ignore the interface facts sent by Puppet, so your provisioning configuration doesn’t get screwed up.
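For those who prefer the CLI, the same toggle can be flipped with hammer. A minimal sketch follows – the internal setting name should correspond to the UI label above, but it can differ between Satellite versions, so check the list output first:

# Find the setting and its current value (name assumed – verify on your install)
hammer settings list | grep -i ignore_puppet_facts

# Flip it so Puppet fact uploads no longer touch provisioning data
hammer settings set --name ignore_puppet_facts_for_provisioning --value true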

This also applies to the installed OS. Ever kickstarted a host with 7.3, had it update itself to 7.4, and then hit a weird OS version mismatch in Satellite when you tried to reprovision, because part of the host record says 7.4 while the provisioning record still says 7.3? Yep, this one had been annoying me for some time with various Satellite installations, so it was time to stop it for good. Again, under Settings -> Provisioning is the option to ‘Ignore facts for operating system‘. Once more the default is NO, but changing this will stop version updates on the host from changing the Satellite records, so re-provisioning will ‘just work’ without version conflict errors, provided the original OS version is still available for use. (You should, of course, update the host records in Satellite to use the latest versions.)
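The CLI equivalent follows the same pattern – again, treat the internal name as an assumption and double-check it against your version before relying on it:

# Confirm the setting name and current value
hammer settings list | grep -i ignore_facts_for_operatingsystem

# Stop OS facts from rewriting the host's operating system record
hammer settings set --name ignore_facts_for_operatingsystem --value true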

Alternatively, if you know in advance what the naming conventions for the virtual NICs are going to be (you are deploying into a development/QA environment first, right?), you can add them to the ‘Ignore interfaces with matching identifier‘ parameter, which already ships with interface prefixes for OpenStack (qvo/qbr/tap etc.). All you need to do is add the interface prefixes that you will be using in RHV to this list.
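This list is also just a setting, so it can be read and updated with hammer too. Treat the snippet below as a sketch only: the internal name and the exact value format for list-type settings vary between versions, so inspect the current value and hammer settings set --help before overwriting anything. The traffic* pattern is just an example matching my RHV network naming.

# Show the current list of ignored interface patterns
hammer settings list | grep -i ignored_interface

# Replace the list, keeping whatever defaults your version ships with
# (the patterns shown are illustrative, not exhaustive)
hammer settings set --name ignored_interface_identifiers \
  --value "[lo, usb*, vnet*, macvtap*, tap*, qbr*, qvo*, traffic*]"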

Note that these parameter changes are GLOBAL, so be careful – however, I can’t see too many instances where you would want your known provisioning state in Satellite to be modified without your knowledge…

Home Lab – Wrap up

Earlier this year I documented the rebuild of my home lab environment using Gluster and RHV. This post is a final wrap-up of that rebuild, as one or two things changed significantly since the last part was written, and I have been intending to write this follow-up article for some time…

Issues with the original setup

If you recall from my series on the rebuild, I was using three nodes, each with an internal RAID set acting as a Gluster brick, giving three bricks in total per volume.

Well, that setup worked really well – UNTIL one Sunday when both main nodes (baremetal1 and baremetal2) decided to run an mdadm scan on their soft-RAID volumes at the same time (thanks, cron).

What happened was that the disk IO times went south big time, and the cluster pretty much ground to a halt. This resulted in RHV taking all the volumes offline, and the manager itself stopping. The two hosted-engine hypervisors then went into spasms trying to relaunch the engine, and I was spammed by a couple of hundred emails from the cluster over the space of several hours.
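For reference, on RHEL and CentOS 7 that periodic scan is normally driven by the raid-check cron job shipped with mdadm. A quick, hedged sketch of where to look and how to stop two nodes running it simultaneously (exact paths may vary slightly by release):

# The schedule lives in a packaged cron fragment
cat /etc/cron.d/raid-check

# Which arrays get scanned, and whether it checks or repairs, is set here
cat /etc/sysconfig/raid-check

# Simplest mitigation: edit the hour field in /etc/cron.d/raid-check on ONE
# of the nodes so its weekly check runs a few hours after the other node's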

I was able to stabilise things once the mdadm scans had finished, but this was far from a usable solution for me. With the cluster stable, I stood up a temporary filestore on my NAS via iSCSI and relocated all VM images over to that with the exception of the ovirt-engine.

Then I trashed the cluster and rebuilt it a little differently.

(more…)

Rebuilding my home lab: Part 5

In the previous parts of this series, I have rebuilt my lab using Red Hat Gluster Storage (RHGS) and Red Hat Virtualisation (RHV), and added and configured the RHV hosts. We still have some tidying up to do to our environment before we are really ready to start hosting virtual machines.

Before we go into the next section on configuring some of the ancillary services, I will mention that in order to view remote consoles on another Fedora/CentOS/RHEL workstation, we need to install the spice-xpi package on that workstation. This allows your browser to open the downloaded SPICE console file with the remote viewer application. I only mention this here because I needed to access the console of the RHV engine VM from my Fedora workstation, and was unable to until I had installed the spice-xpi (Firefox plugin).
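Installing it is a one-liner; a sketch assuming the stock Fedora/RHEL repos (virt-viewer provides the remote-viewer binary that actually renders the console):

# Fedora workstation
sudo dnf install spice-xpi virt-viewer

# RHEL/CentOS workstation
sudo yum install spice-xpi virt-viewer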

(more…)

Rebuilding my home lab: Part 4

Configuring RHV

In part 3 of this series, we installed the Self Hosted RHV Engine (RHV-M) on our baremetal1 server. This server is now an RHV hypervisor with a single VM; however, that VM has not yet been imported, so it cannot be seen from RHV-M.

Add Storage Domains

Our next step in the build process is to add our second Gluster volume (vmstore) for general VM storage. We do this from the RHV-M administration portal as the admin user.
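Before heading into the portal, it is worth confirming from one of the Gluster nodes that the volume is started and all bricks are online – a quick sketch:

# Check the vmstore volume is created, started and healthy
gluster volume info vmstore
gluster volume status vmstore

# When adding the domain in RHV-M, the path takes the host:/volume form,
# e.g. baremetal1:/vmstore in the 'New Domain' dialog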

(more…)

Rebuilding my home lab: Part 3

In part 1 of this series we looked at the hardware configuration for the new lab, and installed the base OS via kickstart from Satellite. In part 2 we installed and configured Gluster to provide shared storage between the hosts. In this part we will look at installing Red Hat Virtualisation and performing the initial configuration.

Red Hat Virtualisation can be installed in a number of ways. For my usage, the ideal way would be to make use of Red Hat Hyperconverged Infrastructure (RHHI), which uses an installation method that provides a wizard to install and configure both Gluster and RHV. As I mentioned in part 2, this unfortunately wasn’t possible for me due to the architecture and version mismatch on my Raspberry Pi Gluster arbiter node.

Another method allows RHV to be installed using pre-existing resources, which we now have. The RHV Manager (RHV-M) is the first component that needs to be installed, and there are two ways to do that as well. The first is to run the manager on a dedicated host; the second is to run the manager ‘self-hosted’. Self-hosted means that the RHV-M server is hosted as a guest on an RHV hypervisor, in a similar way to how VMware runs its vCenter Server Appliance (vCSA). I am going to be using the self-hosted option, following the RHV 4.1 Self Hosted Engine guide.
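For orientation before the detail, the self-hosted deployment boils down to installing the setup tooling on the first hypervisor and running one interactive command. This is only a sketch of the shape of it – the package names are my best recollection for RHV 4.1 and may differ between releases, and the installer asks a long series of questions (storage, engine VM sizing, passwords) that I am not reproducing here:

# On baremetal1: setup tool plus the RHV-M appliance image
yum install ovirt-hosted-engine-setup rhvm-appliance

# Interactive deployment; point the storage question at the Gluster volume
# set aside for the engine (e.g. baremetal1:/engine – name is illustrative)
hosted-engine --deploy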

(more…)

Rebuilding my home lab: Part 2

In part 1 of this series we looked at the hardware configuration for the new lab, and installed the base OS via kickstart from Satellite. In this part we will look at how to solve the issue of presenting a shared storage domain from individual servers.
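To give a flavour of where this is heading (the full walk-through follows below), the end state is a replicated Gluster volume spanning the nodes, created with something like the sketch here – the brick paths and the arbiter host name are placeholders:

# On one node, once the peers have been probed into the trusted pool
gluster volume create vmstore replica 3 arbiter 1 \
  baremetal1:/bricks/vmstore/brick \
  baremetal2:/bricks/vmstore/brick \
  rpi-arbiter:/bricks/vmstore/brick

gluster volume start vmstore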

If you remember back from part 1, our new setup looks like this:

(more…)

Rebuilding my home lab: Part 1

I use my home lab setup extensively for testing and developing solutions for clients, as well as learning various aspects of installing and operating products within the Red Hat portfolio.  My lab hardware also hosts a handful of ‘production’ VMs that provide services such as DNS and authentication to the rest of the house.

The basic setup was two servers, both installed with Red Hat Enterprise Linux (RHEL) and local storage, using libvirt KVM to host several virtual machines per host. A single 802.1q trunk carried multiple VLANs to separate ‘core’ network traffic from two ‘lab’ networks, one of which is configured via firewall rules on the router to have no connectivity to the outside world, simulating an air-gapped disconnected environment.
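For anyone reproducing the trunk side of this, each VLAN is just a tagged sub-interface plus a bridge in the RHEL network-scripts. A minimal sketch with placeholder names (em1 for the physical NIC, VLAN 20 for one of the lab networks):

# /etc/sysconfig/network-scripts/ifcfg-em1.20 – tagged interface on the trunk
DEVICE=em1.20
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
BRIDGE=br-lab1

# /etc/sysconfig/network-scripts/ifcfg-br-lab1 – bridge the lab VMs attach to
DEVICE=br-lab1
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none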

(more…)