
High-Availability oVirt-Cluster with iSCSI-Storage


Benjamin Alfery <benjamin.alfery@linbit.com>, Philipp Richter <philipp.richter@linbit.com>
Copyright 2013 LINBIT HA-Solutions GmbH

Trademark notice
DRBD and LINBIT are trademarks or registered trademarks of LINBIT in Austria, the United States, and other countries. Other names mentioned in this document may be trademarks or registered trademarks of their respective owners.

License information
The text and illustrations in this document are licensed under a Creative Commons Attribution-Noncommercial-NoDerivs 3.0 Unported license ("CC BY-NC-ND"). A summary of CC BY-NC-ND is available at http://creativecommons.org/licenses/by-nc-nd/3.0/. The full license text is available at http://creativecommons.org/licenses/by-nc-nd/3.0/legalcode. In accordance with CC BY-NC-ND, if you distribute this document, you must provide the URL for the original version.

Table of Contents
1. Introduction
   1.1. Goal of this guide
   1.2. Limitations
   1.3. Conventions in this document
2. Software
   2.1. Software repositories
      2.1.1. LINBIT DRBD and pacemaker repositories
      2.1.2. Enable EPEL repository
   2.2. DRBD and pacemaker installation
   2.3. Optional: Install csync2
3. Backing devices
4. oVirt Manager preparation
   4.1. DRBD resource for the oVirt Manager virtual machine
   4.2. Create a network bridge
   4.3. Install libvirt/qemu
   4.4. KVM definition and system installation
5. Heartbeat configuration
6. Pacemaker rules for KVM-DRBD resource and virtual machine
7. iSCSI preparation
   7.1. DRBD resource for the iSCSI target
8. Pacemaker rules for iSCSI-DRBD resource, iSCSI target and iSCSI service IP address
   8.1. Portblock for the iSCSI target
   8.2. iSCSI resource group
   8.3. Constraints for the iSCSI resources
9. oVirt Manager, hypervisors and iSCSI storage
   9.1. oVirt Manager installation
      9.1.1. Reconfigure oVirt machinetypes
   9.2. Reconfigure LVM
   9.3. Extend udev rule for DRBD
   9.4. Hypervisor installation
      9.4.1. Adjust libvirt access
   9.5. Second node hypervisor installation
   9.6. Storage setup
10. Test, fence and backup
11. Further documentation and links
12. Appendix
   12.1. Configurations
      12.1.1. DRBD
      12.1.2. KVM
      12.1.3. Heartbeat
      12.1.4. Pacemaker
      12.1.5. Others


Chapter 1. Introduction
oVirt [1] is a management application for virtual machines that uses the libvirt interface. It consists of a web-based user interface (oVirt Manager), one or more hypervisors, and data storage for the virtual guests. DRBD [2] is a distributed storage system for the Linux OS; it has been included in the Linux kernel since 2.6.33 [3]. LINBIT [4], the authors of DRBD, actively supports and develops DRBD, the world's leading Open Source shared-nothing storage solution for the Linux ecosystem. LINBIT is a premier business partner of Red Hat, and DRBD is an accepted 3rd-party solution by Red Hat. This means that you won't lose your Red Hat support when you are using DRBD with a LINBIT support subscription.

1.1. Goal of this guide


To provide a highly available virtualization environment with oVirt, we are going to use two physical machines providing a bare KVM (hosting the oVirt Manager) and an iSCSI target as storage for the virtual guests. Furthermore, the two physical nodes will be used as oVirt hypervisors. We will use DRBD for data replication of the KVM and the iSCSI storage between the nodes. Pacemaker and heartbeat will serve as the cluster management system.

[Figure: cluster architecture overview. Both nodes, ovirt-hyp1 and ovirt-hyp2, carry a local disk with the LVs kvm_oVirtm and iscsi, replicated by the DRBD resources kvm-ovirtm and iscsi. The oVirtm VM runs on top of the kvm-ovirtm resource, the iSCSI target store1 on top of the iscsi resource, and the oVirt VMs reach store1 through the oVirt iSCSI initiators on both hypervisors.]

[1] http://www.ovirt.org
[2] http://www.drbd.org
[3] http://www.drbd.org/download/mainline/
[4] http://www.linbit.com/


1.2. Limitations
This guide covers only the important steps to set up a highly available oVirt-Cluster with iSCSI-storage using DRBD for data-replication. It does not cover additional important topics that should be considered in a production environment:

- Performance tuning of any kind (DRBD, iSCSI, oVirt, …)
- oVirt power management configuration
- Fencing

WARNING: This guide does not cover the configuration of your cluster's fencing strategy. This is vitally important in production environments. If you are uncertain of how to set up fencing in your environment, or about any other topic within this document, you may want to consult with the friendly experts at LINBIT beforehand.

1.3. Conventions in this document


This guide assumes two machines named ovirt-hyp1 and ovirt-hyp2. They are connected via a dedicated crossover Gigabit Ethernet link, using the IP addresses 192.168.10.10 and 192.168.10.11. DRBD will use the minor numbers 0 (resource name: kvm-ovirtm) and 1 (resource name: iscsi) for the replicated volumes. This document describes an oVirt/iSCSI/DRBD/Pacemaker installation on an x86_64 machine running Linux kernel version 2.6.32-358.14.1.el6.x86_64 with Scientific Linux 6.4 user space, up-to-date as of August 2013. The DRBD kernel module and user space version is 8.4.3. It is also assumed that logical volumes are used as backing devices. While not strictly needed, logical volumes are highly recommended for flexibility. This guide assumes basic Linux administration, DRBD and Pacemaker knowledge. All configuration files used are available in Chapter 12, Appendix.

Chapter 2. Software
It's assumed that the base system is already set up. Most of the needed packages are already installed on SL6.

Pacemaker is a cluster resource management framework which you will use to automatically start, stop, monitor, and migrate resources. This technical guide assumes that you are using at least pacemaker 1.1.6.

Heartbeat is the cluster messaging layer that pacemaker uses. This guide assumes at least heartbeat version 3.0.5. Using the LINBIT pacemaker repository, this should come bundled with pacemaker.

DRBD is a kernel block-level synchronous replication facility which serves as an important shared-nothing cluster building block. Pre-compiled packages are available in official repositories from LINBIT. You will install the drbd-utils and drbd-kmod packages, which comprise the DRBD administration utilities and kernel module.

libvirt is an open source management tool for virtualization. It provides a unified API to virtualization technologies such as KVM, QEMU, Xen and VMware ESX.

oVirt/oVirtm is a management application for virtual machines. This guide assumes oVirt engine version 3.2.

Csync2, while not strictly necessary, is a highly recommended tool to keep configuration files synchronized on multiple machines. Its sources can be downloaded from LINBIT's OSS pages [1]. A paper providing an overview and describing its use is available as well [2].

2.1. Software repositories
Assuming the operating system is fully installed and up to date, and the network interfaces (cross-link and service link) are configured and operational, the first step is to add some missing repositories to the system. (Be sure to add them on both nodes.)

2.1.1. LINBIT DRBD and pacemaker repositories


# cat > /etc/yum.repos.d/drbd-8.repo <<'EOF'
[drbd-8]
name=DRBD 8
baseurl=http://packages.linbit.com/<hash>/8.4/rhel6/$basearch
gpgcheck=0
enabled=1
EOF
# cat > /etc/yum.repos.d/pacemaker.repo <<'EOF'
[pacemaker]
name=pacemaker RHEL - $basearch
baseurl=http://packages.linbit.com/<hash>/pacemaker/rhel6/$basearch
enabled=1
gpgcheck=0
EOF

Make sure to replace the hashes with the ones provided to you by LINBIT. If you are not using the LINBIT repository, you have to handle the installation of pacemaker, heartbeat and DRBD yourself.

[1] http://oss.linbit.com/csync2/
[2] http://oss.linbit.com/csync2/paper.pdf


2.1.2. Enable EPEL repository


There may be some packages or dependencies that need the EPEL [3] repository.
# rpm -Uvh http://download.fedoraproject.org/pub/epel/6/<arch>/epel-release-6-8.noarch.rpm

Replace <arch> with your architecture (i386, x86_64, …).

2.2. DRBD and pacemaker installation


If you are using the LINBIT repositories, you can easily install DRBD and pacemaker by executing:
# yum -y install drbd kmod-drbd drbd-pacemaker pacemaker-hb pacemaker-hb-cli heartbeat

As we will use pacemaker to manage DRBD, we need to disable it on startup (on both nodes):
# chkconfig drbd off

2.3. Optional: Install csync2


If you want to install csync2 (this requires the EPEL repository), issue:
# yum install csync2

Configuration and usage of csync2 is not covered in this guide. Please consult the corresponding paper [4] for usage information.

[3] https://fedoraproject.org/wiki/EPEL
[4] http://oss.linbit.com/csync2/paper.pdf

Chapter 3. Backing devices
We will need two resources that will be replicated with DRBD: one for the KVM hosting the oVirt Manager, and one containing the iSCSI target. To create the backing devices for DRBD, create two logical volumes (size and naming may vary in your installation):
# lvcreate -L10G -n kvm_oVirtm system
# lvcreate -L30G -n iscsi system

Note: Be sure to create identical logical volumes on both nodes.

Chapter 4. oVirt Manager preparation


4.1. DRBD resource for the oVirt Manager virtual machine
Configure the DRBD resource on both nodes:
# cat > /etc/drbd.d/kvm-ovirtm.res <<EOF
resource kvm-ovirtm {
  net {
    protocol C;
  }
  volume 0 {
    device    minor 0;
    disk      /dev/system/kvm_oVirtm;
    meta-disk internal;
  }
  on ovirt-hyp1 {
    address 192.168.10.10:7788;
  }
  on ovirt-hyp2 {
    address 192.168.10.11:7788;
  }
}
EOF

Make sure to initialize the resource and make it Primary on one of the nodes.
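As a sketch, the initialization can be done with the standard DRBD 8.4 commands; run the first two on both nodes and the forced promotion on one node only:

(on both nodes)
# drbdadm create-md kvm-ovirtm
# drbdadm up kvm-ovirtm
(on one node only)
# drbdadm primary --force kvm-ovirtm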

4.2. Create a network bridge


(Do this on both nodes.) To give the virtual machine access to the outside network (as we want to access it from there), we need to configure a bridge. Therefore, we need to install the bridge-utils:
# yum install bridge-utils

As the hypervisor installation (see later in this document) is going to name its bridge ovirtmgmt anyway, we are going to use this name:
# brctl addbr ovirtmgmt

Make your outgoing (public) interface a slave of this bridge; a sketch of the corresponding interface configuration follows below.

WARNING: Be careful with this configuration, as you could easily lock yourself out if you are accessing the servers via a remote connection.
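To make the bridge persistent on Scientific Linux 6, it can also be defined via ifcfg files. The following is only a minimal sketch, assuming eth0 is the public interface and a static address is used; device names and addresses are illustrative and must be adapted to your environment:

/etc/sysconfig/network-scripts/ifcfg-ovirtmgmt:
DEVICE=ovirtmgmt
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.10       # example public address, adjust to your network
NETMASK=255.255.255.0
NM_CONTROLLED=no

/etc/sysconfig/network-scripts/ifcfg-eth0:
DEVICE=eth0
ONBOOT=yes
BRIDGE=ovirtmgmt
NM_CONTROLLED=no

After changing the files, restart the network (service network restart) from a local console, not over the interface you are reconfiguring.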

4.3. Install libvirt/qemu
To install our oVirt Manager KVM, we are going to need the user-space tools libvirt and qemu (on both nodes).
# yum install qemu-kvm libvirt


4.4. KVM definition and system installation


Prepare a KVM definition file (identical on both nodes) for your oVirt Manager KVM. This definition should contain the configured DRBD resource as the hard disk. It should look similar to this:
<domain type='kvm'>
  <name>oVirtm</name>
  <uuid>34ad3032-f68e-734a-8e84-47af69e7848a</uuid>
  <memory unit='KiB'>1572864</memory>
  <currentMemory unit='KiB'>1572864</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <os>
    <type arch='x86_64' machine='rhel6.4.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='/dev/drbd/by-res/kvm-ovirtm/0'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <controller type='usb' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:08:10:da'/>
      <source bridge='ovirtmgmt'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <input type='tablet' bus='usb'/>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes'/>
    <video>
      <model type='cirrus' vram='9216' heads='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </memballoon>
  </devices>
</domain>

You may now start your virtual machine on the node that holds the primary role for the used resource and install the base operating system. (We will come back to this later on.)
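For example, assuming the definition above was saved as /etc/libvirt/qemu/oVirtm.xml, the machine can be registered and started with virsh on the node where kvm-ovirtm is Primary (a sketch; the last command shows the VNC display where the installer can be reached):

# virsh define /etc/libvirt/qemu/oVirtm.xml
# virsh start oVirtm
# virsh vncdisplay oVirtm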

Chapter 5. Heartbeat configuration
To enable cluster communication, we need to configure heartbeat (again, on both nodes). In /etc/ha.d/, create the file ha.cf with the heartbeat parameters. It should contain something like:
autojoin none
node ovirt-hyp1
node ovirt-hyp2
bcast eth1
mcast ovirtmgmt 239.192.0.51 694 1 0
use_logd yes
initdead 120
deadtime 20
warntime 10
keepalive 1
compression bz2
crm respawn

Make sure to also create the authkeys file in this directory, containing something like:
auth 1
1 sha1 sdrsdfrgaqerbqerbq34bgaebaqejrbnSDFQ23Fwe

The string in the second line is the shared secret for cluster communication. Be sure to set the proper ownership and permissions for the authentication files on both nodes:
# chown root: /etc/ha.d/authkeys
# chmod 600 /etc/ha.d/authkeys

Set the heartbeat service to be started on system startup (on both nodes):
# chkconfig heartbeat on

Now start heartbeat:


# service heartbeat start

Check your firewall settings to allow the cluster communication.
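With a plain iptables setup, the heartbeat traffic (UDP port 694 on both the broadcast and multicast links configured above) could be allowed with rules along these lines; this is only a sketch and has to be adapted to your firewall management:

# iptables -I INPUT -i eth1 -p udp --dport 694 -j ACCEPT
# iptables -I INPUT -i ovirtmgmt -p udp --dport 694 -j ACCEPT
# service iptables save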

Chapter 6. Pacemaker rules for KVM-DRBD resource and virtual machine


Prepare your initial pacemaker configuration and add the primitive for the DRBD resource (these actions are done via the CRM shell):
$ primitive p_drbd_kvm-ovirtm ocf:linbit:drbd \
    params drbd_resource="kvm-ovirtm" \
    op monitor interval="29" role="Master" timeout="30" \
    op monitor interval="30" role="Slave" timeout="30" \
    op start interval="0" timeout="240" \
    op stop interval="0" timeout="100"

Add a master-slave statement, as this resource spans both cluster nodes:
$ ms ms_drbd_kvm-ovirtm p_drbd_kvm-ovirtm \
    meta clone-node-max="1" clone-max="2" master-max="1" master-node-max="1" notify="true"

For the virtual machine set the following primitive:


$ primitive p_kvm-ovirtm ocf:heartbeat:VirtualDomain \
    params config="/etc/libvirt/qemu/oVirtm.xml" \
    op start interval="0" timeout="180s" \
    op stop interval="0" timeout="300s" \
    op monitor interval="60s" timeout="60s"

To bind the two primitives together we need to set two constraints. The first is a colocation constraint to make the primary side of DRBD run with the virtual machine:
$ colocation co_kvm-ovirtm_with_drbd +inf: p_kvm-ovirtm:Started ms_drbd_kvm-ovirtm:Master

The second rule defines the order. The DRBD resource must be promoted before the KVM can start:
$ order o_drbd-kvm-ovirtm_before_kvm +inf: ms_drbd_kvm-ovirtm:promote p_kvm-ovirtm:start

Test and commit the changes.
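One way to verify the result (a sketch; the exact output depends on your cluster state) is to commit from the CRM shell and then watch the resources come up from a normal shell:

$ commit
# crm_mon -1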

Chapter 7. iSCSI preparation
To configure an iSCSI target, we need the iSCSI user space tools. As we are going to use a tgt target, this will be:
# yum install scsi-target-utils

Make sure it's started on system startup:


# chkconfig tgtd on

Then start the service:


# service tgtd start

(Again, do all these steps on both nodes.)

7.1. DRBD resource for the iSCSI target


Configure the DRBD resource on both nodes:
# cat > /etc/drbd.d/iscsi.res <<EOF
resource iscsi {
  net {
    protocol C;
  }
  volume 0 {
    device    minor 1;
    disk      /dev/system/iscsi;
    meta-disk internal;
  }
  on ovirt-hyp1 {
    address 192.168.10.10:7789;
  }
  on ovirt-hyp2 {
    address 192.168.10.11:7789;
  }
}
EOF

Make sure to initialize the resource and make it Primary on one of the nodes.
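As with the kvm-ovirtm resource, a sketch of the initialization with the standard DRBD 8.4 commands:

(on both nodes)
# drbdadm create-md iscsi
# drbdadm up iscsi
(on one node only)
# drbdadm primary --force iscsi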


Chapter 8. Pacemaker rules for iSCSI-DRBD resource, iSCSI target and iSCSI service IP address
In the CRM shell add a primitive for the iSCSI-DRBD resource:
$ primitive p_drbd_iscsi ocf:linbit:drbd \
    params drbd_resource="iscsi" \
    op monitor interval="29" role="Master" timeout="30" \
    op monitor interval="30" role="Slave" timeout="30" \
    op start interval="0" timeout="240" \
    op stop interval="0" timeout="100"

Add a master-slave statement, as this resource also spans both cluster nodes:
$ ms ms_drbd_iscsi p_drbd_iscsi \
    meta clone-node-max="1" clone-max="2" master-max="1" master-node-max="1" notify="true"

Add a primitive for the iSCSI target:


$ primitive p_iscsi_store1 ocf:heartbeat:iSCSITarget \
    params implementation="tgt" iqn="iqn.2013-08.linbit.ovirtiscsi:store1" tid="1" \
    op start interval="0" timeout="60" \
    op stop interval="0" timeout="60" \
    op monitor interval="30" timeout="60"

As we need a logical unit (LUN) for the target that refers to the backing device, we need to set another primitive:
$ primitive p_iscsi_store1_lun1 ocf:heartbeat:iSCSILogicalUnit \
    params implementation="tgt" target_iqn="iqn.2013-08.linbit.ovirtiscsi:store1" lun="1" \
    path="/dev/drbd/by-res/iscsi/0" \
    op start interval="0" timeout="60" \
    op stop interval="0" timeout="60" \
    op monitor interval="30" timeout="60"

To access the target independently of the node it is running on, we configure a service IP address for it:
$ primitive p_ip_iscsi ocf:heartbeat:IPaddr2 \
    params ip="192.168.10.50" \
    op start interval="0" timeout="20" \
    op stop interval="0" timeout="20" \
    op monitor interval="30" timeout="20"

8.1. Portblock for the iSCSI target


As some clients might have problems receiving a TCP reject from the iSCSI service during a switchover or failover, we are going to set a cluster-managed rule that applies a DROP policy in the firewall while the service moves from one node to the other. This is considered safer, as the clients don't receive a response from the server during this action and therefore do not drop their connection (but keep retrying for some time). This gives the cluster some time to start all the resources. The portblock resource agent is designed to meet this kind of requirement:
$ primitive p_portblock-store1-block ocf:heartbeat:portblock \
    params ip="192.168.10.50" portno="3260" protocol="tcp" action="block"


To unblock the port again, set the primitive:


$ primitive p_portblock-store1-unblock ocf:heartbeat:portblock \
    params ip="192.168.10.50" portno="3260" protocol="tcp" action="unblock" \
    op monitor interval="30s"

8.2. iSCSI resource group


As we configured some primitives that must always run together, we will define a group for them:
$ group g_iscsi p_portblock-store1-block p_ip_iscsi p_iscsi_store1 \
    p_iscsi_store1_lun1 p_portblock-store1-unblock

8.3. Constraints for the iSCSI resources


As the DRBD resource and the services in the iSCSI group have to run on the same node, we define a colocation constraint specifying that they always run together:
$ colocation co_g_iscsi_with_drbd +inf: g_iscsi:Started ms_drbd_iscsi:Master

Of course, the DRBD resource has to be promoted before the other services can access it. We must then set an order constraint to that effect:
$ order o_drbd_iscsi_before_g_iscsi +inf: ms_drbd_iscsi:promote g_iscsi:start

Test and commit the changes.
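Once the group is running, the target can be inspected on the active node, and the discovery can be tested from any machine with the iSCSI initiator tools (iscsi-initiator-utils) installed; this is only a sketch and the output varies with your setup:

# tgtadm --lld iscsi --mode target --op show
# iscsiadm -m discovery -t sendtargets -p 192.168.10.50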


Chapter 9. oVirt Manager, hypervisors and iSCSI storage


As pacemaker is already fully configured by now, it's time to install the oVirt Manager inside the virtual machine, install the hypervisor components on the physical nodes, and enable the iSCSI storage for use in oVirt.

9.1. oVirt Manager installation


Assuming the operating system inside the VM is already installed and up to date, connect to it and enable the oVirt repository:
# cat > /etc/yum.repos.d/ovirt.repo <<EOF
[ovirt]
name=oVirt 3.2 repo
baseurl=http://resources.ovirt.org/releases/3.2/rpm/EL/6/
enabled=1
gpgcheck=0
EOF

Also enable the EPEL repository (as shown in Section 2.1.2, Enable EPEL repository). To install the oVirt Manager, we need java-openjdk (which comes from EPEL):
# yum install java-1.7.0-openjdk

With this in place, we can install the ovirt-engine (and all its dependencies):
# yum install ovirt-engine

Note: Before starting the oVirt engine setup, make sure the virtual machine has a static IP address configured and that DNS (as well as reverse DNS) for it is in place. The oVirt Manager setup will not continue if the host is not resolvable. To start the oVirt Manager configuration, run:
# engine-setup

Walk through the setup accordingly. Make sure to answer the following questions as shown:
The engine can be configured to present the UI in three different application modes. virt [Manage virtualization only], gluster [Manage gluster storage only], and both [Manage virtualization as well as gluster storage] ['virt'| 'gluster'| 'both'] [both] : virt
The default storage type you will be using ['NFS'| 'FC'| 'ISCSI'| 'POSIXFS'] [NFS] : ISCSI

9.1.1. Reconfigure oVirt machinetypes


For some reason the setup might not set the correct machine types for virtual machines. If this is the case, set the machine type manually.


To get the current machine types, type:
# engine-config -g EmulatedMachine

If this shows something like:


EmulatedMachine: pc-0.14 version: 3.1
EmulatedMachine: pc-0.14 version: 3.2
EmulatedMachine: pc-0.14 version: 3.0

Then we need to set it manually. As we use hypervisor version 3.2, the EmulatedMachine should be rhel6.4.0.
# engine-config -s EmulatedMachine=rhel6.4.0 --cver=3.2

Restart the oVirt engine:


# service ovirt-engine restart
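To double-check that the new value is active, the query shown above can simply be run again (its output layout may differ slightly between oVirt versions):

# engine-config -g EmulatedMachine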

9.2. Reconfigure LVM
Before we can begin installing the hypervisors on the physical nodes, we need to adjust the LVM configuration there, as the hypervisor installation will use LVM as well. Do the following on both physical nodes. In /etc/lvm/lvm.conf, set:
write_cache_state = 0

Extend the preferred_names parameter with your volume group name; e.g., if your volume group is named system, this looks similar to the following:
preferred_names = [ "^/dev/mpath/", "^/dev/mapper/mpath", ... , "^/dev/system"]

Finally, extend (or set) a filter for the DRBD devices:


filter = [ "r|^/dev/drbd.*|" ]
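Taken together, the relevant part of the devices section in /etc/lvm/lvm.conf might end up looking roughly like this; treat it as a sketch and keep whatever other entries your distribution already ships:

devices {
    write_cache_state = 0
    # keep the preferred_names entries already present and append your VG:
    preferred_names = [ "^/dev/mpath/", "^/dev/mapper/mpath", "^/dev/system" ]
    filter = [ "r|^/dev/drbd.*|" ]
}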

9.3. Extend udev rule for DRBD


As the hypervisor installation changes the qemu configuration and pacemaker has to access the DRBD device, we need to extend the DRBD udev rule. In /etc/udev/rules.d/65-drbd.rules extend the existing rule (on both nodes) by setting:
OWNER="vdsm", GROUP="kvm"

The rule should now look something like this:


SUBSYSTEM=="block", KERNEL=="drbd*", IMPORT{program}="/sbin/drbdadm sh-udev minor-%m", NAME="$env{DEVICE}", SYMLINK="drbd/by-res/$env{RESOURCE} drbd/by-disk/$env{DISK}", OWNER="vdsm", GROUP="kvm"

9.4. Hypervisor installation
As we want our physical nodes to act as hypervisors, we need to install the corresponding components on them. The installation process itself is done via the oVirt Manager web front-end, except for minor adjustments, as we have a special setup.

First we have to enable the oVirt repositories on both physical nodes (as shown in Section 9.1, oVirt Manager installation). Second, we must set one node (the one that gets installed first) to "standby" in the pacemaker cluster manager, preferably the one that runs the virtual machine, as we then don't have to wait for it while it restarts. Wait until all resources have switched over to the remaining online node and log into the oVirt Manager web interface. In the left navigation tree click on "System", followed by the "Hosts" tab in the top navigation bar, and then on "New". Fill in the form with the information of the host that is currently in standby: "Name", "Address" and "Root Password". Uncheck "Automatically configure host firewall".

Click "OK". The hypervisor installation is now performed. You can watch the process in the logarea (grey bar at the bottom of the page). When the installation is finished the node gets rebooted (as part of the installation process). The hypervisor should now show up in the webinterface.

9.4.1. Adjust libvirt access


Because the hypervisor installation protects access to libvirt with a password, we need to enable access for the cluster again, as pacemaker has to manage the oVirt Manager KVM. This can be done by setting a password for a pacemaker (pcmk) user for the libvirt application:
# saslpasswd2 -a libvirt pcmk

In /etc/libvirt/auth.conf set:
[credentials-pcmk]
authname=pcmk
password=your-password-from-saslpasswd2

[auth-libvirt-ovirt-hyp1]
credentials=pcmk

[auth-libvirt-ovirt-hyp2]
credentials=pcmk

[auth-libvirt-localhost]
credentials=pcmk


Set the password accordingly and deploy this file on the other node as well. You can test whether the access works by running:
# virsh list

The above command should not ask for a password anymore.

9.5. Second node hypervisor installation


To install the second hypervisor on the remaining node, we need to set the standby node online again. Before setting the other node to standby, make sure that all DRBD resources are in sync again. Set the remaining node to standby and wait for the resources to switch over. Once the oVirt Manager KVM is back up, log into the web interface and perform the installation for the second node (as previously described). When the node has rebooted, don't forget to set a password via:
# saslpasswd2 -a libvirt pcmk

If /etc/libvirt/auth.conf is not already in place, create it as shown above. Set the standby node online again to ensure full cluster functionality.

9.6. Storage setup
We can now set up the iSCSI storage in the oVirt Manager. Log into the web interface. In the left navigation tree click on "System", in the top navigation bar click on the "Storage" tab and then on "New Domain". Fill in the "Name" of the storage and select a host ("Use Host"); it actually does not matter which host you select, as it is only used for the setup. In the "Discover Targets" area fill in the "Address" and the "Port". The address is the service IP address we configured in pacemaker, which should always point to the active iSCSI target. The port is "3260" unless specified otherwise.

Click on "Discover" to discover the iSCSI target. If its discovered correctly, click "Login". 16


Select the "LUN ID" you want to use (in this case there is only one available).

Click "OK". The new storage is being initialized and should go up after a while.


Now all the components should be in place. You should test the whole setup extensively and create test virtual machines.


Chapter 10. Test, fence and backup


A word of warning: As mentioned at the beginning of this document, this guide describes only the basic steps of how to enable a highly available oVirt-Cluster with iSCSI-Storage and DRBD. Testing and fencing (such as STONITH) are vitally important for production usage of this setup. Without extensive tests and fencing strategies in place, a failure can easily destroy your environment and the data on top of it. Also think of a proper backup policy, as DRBD is not a replacement for backups. If you are unsure about one or more of these topics (or any other within this document), consult with the friendly experts at LINBIT.


Chapter 11. Further documentation and links


oVirt documentation: The oVirt documentation page, http://www.ovirt.org/Documentation
Building oVirt engine: Building oVirt Engine from scratch, http://www.ovirt.org/Building_oVirt_engine
RHEL V2V Guide: Red Hat guide on how to import virtual machines from foreign hypervisors to Red Hat Enterprise Virtualization and KVM managed by libvirt, https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html-single/V2V_Guide/index.html
DRBD users guide: The reference guide for DRBD, http://www.drbd.org/docs/about/
LINBIT Tech Guides: LINBIT provides a lot of in-depth knowledge via the tech guides, http://www.linbit.com/en/education/tech-guides/


Chapter 12. Appendix

12.1. Configurations

12.1.1. DRBD
/etc/drbd.d/kvm-ovirtm.res
resource kvm-ovirtm {
  net {
    protocol C;
  }
  volume 0 {
    device    minor 0;
    disk      /dev/system/kvm_oVirtm;
    meta-disk internal;
  }
  on ovirt-hyp1 {
    address 192.168.10.10:7788;
  }
  on ovirt-hyp2 {
    address 192.168.10.11:7788;
  }
}

/etc/drbd.d/iscsi.res
resource iscsi {
  net {
    protocol C;
  }
  volume 0 {
    device    minor 1;
    disk      /dev/system/iscsi;
    meta-disk internal;
  }
  on ovirt-hyp1 {
    address 192.168.10.10:7789;
  }
  on ovirt-hyp2 {
    address 192.168.10.11:7789;
  }
}


12.1.2. KVM
/etc/libvirt/qemu/oVirtm.xml (be sure to take this configuration only as a guideline)
<domain type='kvm'>
  <name>oVirtm</name>
  <uuid>34ad3032-f68e-734a-8e84-47af69e7848a</uuid>
  <memory unit='KiB'>1572864</memory>
  <currentMemory unit='KiB'>1572864</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <os>
    <type arch='x86_64' machine='rhel6.4.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='/dev/drbd/by-res/kvm-ovirtm/0'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <controller type='usb' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:08:10:da'/>
      <source bridge='ovirtmgmt'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <input type='tablet' bus='usb'/>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes'/>
    <video>
      <model type='cirrus' vram='9216' heads='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </memballoon>
  </devices>
</domain>


12.1.3. Heartbeat
/etc/ha.d/ha.cf
autojoin none
node ovirt-hyp1
node ovirt-hyp2
bcast eth1
mcast ovirtmgmt 239.192.0.51 694 1 0
use_logd yes
initdead 120
deadtime 20
warntime 10
keepalive 1
compression bz2
crm respawn

/etc/ha.d/authkeys
auth 1
1 sha1 sdrsdfrgaqerbqerbq34bgaebaqejrbnSDFQ23Fwe

(adapt the key in this configuration)


12.1.4. Pacemaker
CRM config:
primitive p_drbd_iscsi ocf:linbit:drbd \
    params drbd_resource="iscsi" \
    op monitor interval="29" role="Master" timeout="30" \
    op monitor interval="30" role="Slave" timeout="30" \
    op start interval="0" timeout="240" \
    op stop interval="0" timeout="100"
primitive p_drbd_kvm-ovirtm ocf:linbit:drbd \
    params drbd_resource="kvm-ovirtm" \
    op monitor interval="29" role="Master" timeout="30" \
    op monitor interval="30" role="Slave" timeout="30" \
    op start interval="0" timeout="240" \
    op stop interval="0" timeout="100"
primitive p_ip_iscsi ocf:heartbeat:IPaddr2 \
    params ip="192.168.10.50" \
    op start interval="0" timeout="20" \
    op stop interval="0" timeout="20" \
    op monitor interval="30" timeout="20"
primitive p_iscsi_store1 ocf:heartbeat:iSCSITarget \
    params implementation="tgt" iqn="iqn.2013-08.linbit.ovirtiscsi:store1" tid="1" \
    op start interval="0" timeout="60" \
    op stop interval="0" timeout="60" \
    op monitor interval="30" timeout="60" \
    meta is-managed="true"
primitive p_iscsi_store1_lun1 ocf:heartbeat:iSCSILogicalUnit \
    params implementation="tgt" target_iqn="iqn.2013-08.linbit.ovirtiscsi:store1" \
    lun="1" path="/dev/drbd/by-res/iscsi/0" \
    op start interval="0" timeout="60" \
    op stop interval="0" timeout="60" \
    op monitor interval="30" timeout="60"
primitive p_kvm-ovirtm ocf:heartbeat:VirtualDomain \
    params config="/etc/libvirt/qemu/oVirtm.xml" \
    op start interval="0" timeout="180s" \
    op stop interval="0" timeout="300s" \
    op monitor interval="60s" timeout="60s"
primitive p_portblock-store1-block ocf:heartbeat:portblock \
    params ip="192.168.10.50" portno="3260" protocol="tcp" action="block"
primitive p_portblock-store1-unblock ocf:heartbeat:portblock \
    params ip="192.168.10.50" portno="3260" protocol="tcp" action="unblock" \
    op monitor interval="30s"
group g_iscsi p_portblock-store1-block p_ip_iscsi p_iscsi_store1 p_iscsi_store1_lun1 \
    p_portblock-store1-unblock
ms ms_drbd_iscsi p_drbd_iscsi \
    meta clone-node-max="1" clone-max="2" master-max="1" master-node-max="1" notify="true"
ms ms_drbd_kvm-ovirtm p_drbd_kvm-ovirtm \
    meta clone-node-max="1" clone-max="2" master-max="1" master-node-max="1" notify="true"
colocation co_g_iscsi_with_drbd +inf: g_iscsi:Started ms_drbd_iscsi:Master
colocation co_kvm-ovirtm_with_drbd +inf: p_kvm-ovirtm:Started ms_drbd_kvm-ovirtm:Master
order o_drbd-kvm-ovirtm_before_kvm +inf: ms_drbd_kvm-ovirtm:promote p_kvm-ovirtm:start
order o_drbd_iscsi_before_g_iscsi +inf: ms_drbd_iscsi:promote g_iscsi:start
property $id="cib-bootstrap-options" \
    dc-version="1.1.6-0c7312c689715e096b716419e2ebc12b57962052" \
    cluster-infrastructure="Heartbeat" \
    no-quorum-policy="ignore" \
    stonith-enabled="false" \
    default-resource-stickiness="200" \
    maintenance-mode="off"
rsc_defaults $id="rsc-options" \
    resource-stickiness="200"


12.1.5. Others
/etc/libvirt/auth.conf
[credentials-pcmk]
authname=pcmk
password=your-password-from-saslpasswd2

[auth-libvirt-ovirt-hyp1]
credentials=pcmk

[auth-libvirt-ovirt-hyp2]
credentials=pcmk

[auth-libvirt-localhost]
credentials=pcmk
