lcfg-xen-1.0.10

* Fixing bug #294. A new resource named ‘timer’ is defined for setting ‘timer_mode’ in the configuration file of a guest domain. The default value of this resource is 4.
* Fixing bug #289. Raw LVM partitions now use the xvd[X][Y] naming scheme, where X is the device letter and Y the numbering sequence. To make use of this, the disk type must be defined as “lvm” (a sketch of both settings follows below).
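
As a rough sketch of how these might look in a guest’s profile: the resource names and per-guest tagging below are indicative only and may not match the actual lcfg-xen schema, and ‘guest1’ is a hypothetical guest tag.

/** Hypothetical guest profile excerpt **/
/** 'timer' is written to the guest configuration file as timer_mode = 4 **/
!xen.timer_guest1        mSET(4)
/** defining the disk type as lvm exposes raw LVM partitions as xvd[X][Y] devices **/
!xen.disktype_guest1     mSET(lvm)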

RPM
Source RPM
Schema

lcfg-xen-1.0.9

A new lcfg-xen release is available, fixing some issues.

* Block device for additional virtual disks could overlap with that of the CD/DVD drive, i.e. hdc (bug #267). This has been fixed: the hdc block device is now allocated only to the CD/DVD drive of the VM and never to additional virtual disks.
* Correcting documentation for ‘vncunused’ resource.
* Updating the man page with all of the available resources.
* Renaming resource 'vncused' to 'vncunused' as this is the correct name. Code has been changed accordingly.
* Explanation for resources 'sdl', 'vnc' and 'vncdisplay' added to the man page.

lcfg-xen-1.0.9-1.noarch.rpm
lcfg-xen-1.0.9-1.src.rpm

The latest schema file is taken from lcfg-xen-1.0.7:
lcfg-xen-defaults-s1-1.0.7-1.noarch.rpm

Nagios remote resource monitoring using SSH (check_by_ssh)

Recently I have been setting up Nagios, as the increasing number of machines and services per machine can make it difficult to monitor everything and tell what is wrong and what is not, or when you should pay more attention to a system or a service.

Following the Nagios documentation, setting up the monitoring server is pretty much straightforward. Starting to monitor exposed services such as SSH, HTTP, FTP, MySQL and PostgreSQL is also straightforward. Plugins such as check_tcp and check_udp also provide an easy way to see if a service is actually running. For instance, for a CVS pserver you can use the check_tcp script to check whether port 2401 is open or not. It is not the best way to actually test a service, but it works OK when you just want a basic check.
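
For instance, a minimal service definition along these lines would run that port check using the stock check_tcp command; the host name and the generic-service template are placeholders from a typical Nagios install.

define service{
        host_name               cvs.example.org
        service_description     CVS pserver
        check_command           check_tcp!2401
        use                     generic-service
        }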

The systems whose local resources I had to monitor were of three types: LCFG Linux, self-managed Linux and self-managed Solaris. This differentiation brings a bit of complexity of its own, as each type needs a different way of setting up monitoring over SSH, while still using the same principles and techniques. The easiest are the LCFG systems, as a configuration header was created and “included” in every system that needed to be monitored. It looks something like the following:

/** Configuration for monitored remote hosts.
*   This header will allow Nagios server to monitor
*   services on remote systems that use this header by
*   running check_by_ssh.
**/

/** Nagios will fail to run remote commands if an SSH banner is displayed **/
!openssh.sshdopts       mREMOVE(Banner)

!tcpwrappers.allow_sshd mCONCATQ(" <Nagios_server_hostname_goes_here>")

!auth.extrapasswd       mADD(nagios)
auth.pwent_nagios       nagios:*:007:007:Nagios:/home/nagios:/bin/bash
!auth.extragroup        mADD(nagios)
auth.grpent_nagios      nagios:*:007:apache

/** You may add the "nagios" user to the user access list of the machine, depending on
the authentication method **/

/** Public key authentication for 'nagios' user **/
!file.files             mADD(nagiosKey)
file.file_nagiosKey     /localdisk/home/nagios/.ssh/authorized_keys
file.type_nagiosKey     literal
file.mode_nagiosKey     0644
!file.tmpl_nagiosKey    mCONCATQ("<key_goes_here>")

!profile.packages       mEXTRA(+nagios-plugins-1.4.13-4.el5)

/** List of plugins to be installed remotely **/
!profile.packages       mEXTRA(+nagios-plugins-disk-1.4.13-4.el5)
!profile.packages       mEXTRA(+nagios-plugins-load-1.4.13-4.el5)
!profile.packages       mEXTRA(+nagios-plugins-procs-1.4.13-4.el5)
!profile.packages       mEXTRA(+nagios-plugins-swap-1.4.13-4.el5)
!profile.packages       mEXTRA(+nagios-plugins-users-1.4.13-4.el5)

The self-managed systems would make use of either a local or a network “nagios” account with public key authentication, and each remote system would need its own set of required plugins installed manually. A single compile of the plugins in the NFS home directory of the network “nagios” account might not work when you have multiple different *NIX operating systems.
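
Before wiring anything into Nagios, it is worth testing the key-based login and a remote plugin by hand from the monitoring server using check_by_ssh. A sketch, assuming the plugins live under /usr/lib/nagios/plugins on both ends and the key sits in the local “nagios” home directory (as noted above, the paths will differ per system):

/usr/lib/nagios/plugins/check_by_ssh -H remotehost.example.org \
    -l nagios -i /home/nagios/.ssh/id_rsa \
    -C '/usr/lib/nagios/plugins/check_load -w 5,4,3 -c 10,8,6'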

I have configured the Nagios config files for remote services based on this *very* helpful and clear guide: http://wiki.nagios.org/index.php/Howtos:checkbyssh_RedHat

The key point with the remote commands is to define the right commands for Nagios, pointing at the correct remote location of the plugins and passing the correct arguments. So five remote services have been defined, matching the plugin RPMs listed above: check_disk, check_load, check_procs, check_swap, check_users.

To call each remote plugin, new command definitions need to be added to /etc/nagios/commands.cfg:

define command{
        command_name    check_remote_disk
        command_line    $USER1$/check_by_ssh -p $ARG1$ \
        -H $HOSTADDRESS$ -C '/usr/lib/nagios/plugins/check_disk \
        -w $ARG2$ -c $ARG3$ -p $ARG4$'
        }

define command{
        command_name    check_remote_users
        command_line    $USER1$/check_by_ssh -p $ARG1$ \
        -H $HOSTADDRESS$ -C '/usr/lib/nagios/plugins/check_users \
        -w $ARG2$ -c $ARG3$'
        }

define command{
        command_name    check_remote_load
        command_line    $USER1$/check_by_ssh -p $ARG1$ \
        -H $HOSTADDRESS$ -C '/usr/lib/nagios/plugins/check_load \
        -w $ARG2$ -c $ARG3$'
        }

define command{
        command_name    check_remote_procs
        command_line    $USER1$/check_by_ssh -p $ARG1$ \
        -H $HOSTADDRESS$ -C '/usr/lib/nagios/plugins/check_procs \
        -w $ARG2$ -c $ARG3$ -s $ARG4$'
        }

define command{
        command_name    check_remote_swap
        command_line    $USER1$/check_by_ssh -p $ARG1$ \
        -H $HOSTADDRESS$ -C '/usr/lib/nagios/plugins/check_swap \
        -w $ARG2$ -c $ARG3$'
        }

Depending on the setup, you might need to change the location of the plugins or use more options, such as the user to log in as, the location of the keys, IPv4 or IPv6 connections, use of SSH1 or SSH2, etc. Once the commands have been defined, they can be used to define services within the host configuration files.
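
As an illustration, a service using one of the commands above could look like the following; the host name and the generic-service template are placeholders, and the arguments are the SSH port, warning threshold, critical threshold and partition, matching the check_remote_disk definition.

define service{
        host_name               remotehost.example.org
        service_description     Disk usage /
        check_command           check_remote_disk!22!20%!10%!/
        use                     generic-service
        }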

The main reason I wanted to avoid using NRPE was that one more service would have to be exposed, even internally, on systems from which you want to expose only what is necessary. NRPE would be useful if Windows servers had to be monitored for their resources.

lcfg-xen and lcfg-libvirt updates

I have spent some time over the last couple of weeks making corrections to lcfg-xen and finalising the first release of lcfg-libvirt. Changelog notes follow. The new lcfg-xen release and the first lcfg-libvirt release should happen within a couple of weeks or so. Source can be accessed via Web SVN; you’ll need an Informatics iFriend account to access it.

lcfg-xen:

* Correcting a typo in a man page example that caused a conflict when applied to a machine’s profile.
* Code added for powering off all the guests in case they are still on when the physical host reboots or is being powered off and the ‘controldomains’ resource is set to ‘no’.

lcfg-libvirt:

* Check added when the BootVM() and ShutVM() methods are called. Each method will determine whether a guest is powered on or off and act accordingly.
* Runlevel check added. The ShutVM() method will be called to power off the guests only when the runlevel issued is ‘0’ or ‘6’.
* Information for resources added to the man page.
* New resource added, ‘logfilter’, for log filtering.
* New list of resources for defining storage pools.
* New method added, ‘ConfStoragePools’, for adding and configuring storage pools – still in development and not yet functional.

lcfg-libvirt

Over the last few months, whenever I could get free time off other projects, I have been working on a new LCFG component, lcfg-libvirt. The main idea behind lcfg-libvirt is not just to manage libVirt itself, but to use libVirt via the component to manage multiple virtualisation platforms without the need for multiple components.

At the first stage, the goal was to generalise the resources that could be used by both Xen and KVM guests, as well as by other platform candidates that are supported by libVirt.

The second stage was to migrate all the existing lcfg-xen functionality into the component, using the new resources, and to manage the Xen guests via libVirt.

At the third stage, KVM support was added at the same level as the pre-existing Xen support. At this stage, network management via virsh was implemented as well. To get networking sorted, I had to create a new patch for the lcfg-network component to support bridge interfaces at the OS level.

The man page is still missing; lcfg-xen(8) will be used as the basis for it as well.

The functionality so far can be summarised as below:

– Support for Xen hardware-virtualised guests (migrated from lcfg-xen).
– Support for Xen paravirtualised guests (migrated from lcfg-xen).
– Support for Xen-specific networking (migrated from lcfg-xen).
– Support for KVM guests on both Intel and AMD processors.
– Support for KVM-specific networking.
– Guest cloning for both Xen and KVM guests (migrated from lcfg-xen).
– Support for NAT, bridged and routed interfaces for both Xen and KVM.
– Use of virsh to manage guests and generic networking (see the sketch after this list).
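
For orientation, the same operations can be carried out by hand with virsh; the component automates this, so the file and guest/network names below are purely illustrative.

# guests
virsh define pe2900x1.xml
virsh start pe2900x1
virsh shutdown pe2900x1

# networks
virsh net-define routed.xml
virsh net-start routed
virsh net-autostart routed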

KVM guest example:

!libvirt.hosttype               mSET(kvm)

!libvirt.vms    mADD(pe2900x1)

!libvirt.name_pe2900x1                  mSET(pe2900x1)
!libvirt.type_pe2900x1                  mSET(hvm)
!libvirt.uuid_pe2900x1                  mSET(56bcea35-a598-4ce8-97f1-02cba34e3451)
!libvirt.disks_pe2900x1                 mADD(root test)
!libvirt.diskname_pe2900x1_root         mSET(pe2900x1)
!libvirt.disksize_pe2900x1_root         mSET(32)
!libvirt.diskpath_pe2900x1_root         mSET(/guests)
!libvirt.diskname_pe2900x1_test         mSET(test)
!libvirt.disksize_pe2900x1_test         mSET(10)
!libvirt.diskpath_pe2900x1_test         mSET(/guests)
!libvirt.boot_pe2900x1                  mSET(no)
!libvirt.opts_pe2900x1                  mADD(vnc monitor)
!libvirt.optvalue_pe2900x1_vnc          mSET(1)
!libvirt.optvalue_pe2900x1_monitor      mSET(pty)
!libvirt.nethost_pe2900x1               mADD(vif1 vif2)
!libvirt.hostmac_pe2900x1_vif1          mSET(12:28:aa:02:1e:4d)
!libvirt.bridge_pe2900x1_vif1           mSET(br0)
!libvirt.netmode_pe2900x1_vif1          mSET(bridge)
!libvirt.hostmac_pe2900x1_vif2          mSET(23:12:cb:af:1a:cf)
!libvirt.bridge_pe2900x1_vif2           mSET(default)
!libvirt.netmode_pe2900x1_vif2          mSET(network)

Xen guest example:

!libvirt.hosttype               mSET(xen)

!libvirt.vms    mADD(pe2900x1)

!libvirt.name_pe2900x1                  mSET(pe2900x1)
!libvirt.type_pe2900x1                  mSET(hvm)
!libvirt.uuid_pe2900x1                  mSET(56bcea35-a598-4ce8-89f8-02cba34e7205)
!libvirt.disks_pe2900x1                 mADD(root test)
!libvirt.diskname_pe2900x1_root         mSET(pe2900x1)
!libvirt.disksize_pe2900x1_root         mSET(32)
!libvirt.diskpath_pe2900x1_root         mSET(/guests)
!libvirt.diskname_pe2900x1_test         mSET(test)
!libvirt.disksize_pe2900x1_test         mSET(10)
!libvirt.diskpath_pe2900x1_test         mSET(/guests)
!libvirt.boot_pe2900x1                  mSET(no)
!libvirt.nethost_pe2900x1               mADD(vif1)
!libvirt.hostmac_pe2900x1_vif1          mSET(12:28:ad:12:ac:2a)
!libvirt.bridge_pe2900x1_vif1           mSET(xenbr0)
!libvirt.script_pe2900x1_vif1           mSET(vif-bridge)
!libvirt.netmode_pe2900x1_vif1          mSET(bridge)

Network configuration example:

!libvirt.networking             mADD(routed)
!libvirt.nettype_routed         mSET(interface)
!libvirt.netname_routed         mSET(routed)
!libvirt.netuuid_routed         mSET(56bcea35-a598-4ce8-97f1-02acd2456985)
!libvirt.bridgename_routed      mSET(virbr9)
!libvirt.mode_routed            mSET(route)
!libvirt.modedev_routed         mSET(eth0)
!libvirt.ipaddr_routed          mSET(192.168.1.0)
!libvirt.netmask_routed         mSET(255.255.255.0)
!libvirt.dhcpstart_routed       mSET(192.168.1.1)
!libvirt.dhcpend_routed         mSET(192.168.1.254)
!libvirt.nethost_routed         mSET(host1 host2)
!libvirt.hostname_routed_host1  mSET(test)
!libvirt.hostmac_routed_host1   mSET(00:1E:C9:53:29:AD)
!libvirt.hostip_routed_host1    mSET(1.1.1.1)
!libvirt.hostname_routed_host2  mSET(test2)
!libvirt.hostmac_routed_host2   mSET(00:1F:B9:65:12:AB)
!libvirt.hostip_routed_host2    mSET(2.2.2.2)

Source code available on LCFG SVN. You’ll need an Informatics iFriend account to see the contents. RPMs should follow sooner or later.