Cluster 20. Install & setup environment needed for clustered virtualization
KVM Installation & initial setup
Install packages needed for KVM:
yum install -y kvm virt-manager virt-install libvirt libvirt-python libguestfs-tools syslinux pciutils
Verify that the packages were installed correctly:
lsmod | grep kvm # to see if the kvm and kvm_intel modules are loaded
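If those modules are missing, first check that the CPU exposes hardware virtualization extensions at all (and that they are enabled in the BIOS). A minimal check, assuming an Intel or AMD x86 host:
egrep -c '(vmx|svm)' /proc/cpuinfo # non-zero means VT-x/AMD-V is present
modprobe kvm_intel # load the module manually if needed (kvm_amd on AMD CPUs); it pulls in kvm as a dependency
lsmod | grep kvm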
Packages described:
- kvm - hypervisor
- virt-manager / virt-install - these packages contain the GUI manager and several command-line utilities for building and installing new virtual machines (virt-install), and virt-clone for cloning existing virtual machines
- libvirt - is a C toolkit to interact with the virtualization capabilities of recent versions of Linux (and other OSes). The library aims at providing a long term stable C API for different virtualization mechanisms. It currently supports QEMU, KVM, XEN, OpenVZ, LXC, and VirtualBox.
- libvirt-python - package provides a module that permits applications written in the Python programming language to call the interface supplied by the libvirt library, to manage the virtualization capabilities of recent versions of Linux (and other OSes).
- libguestfs-tools - This package contains guestfish (shell and command-line tool for examining and modifying virtual machine filesystems) and various virtualization tools, including virt-cat, virt-df, virt-edit, virt-filesystems, virt-inspector, virt-ls, virt-make-fs, virt-rescue, virt-resize, virt-tar, and virt-win-reg
- syslinux - is a suite of bootloaders, currently supporting DOS FAT filesystems, Linux ext2/ext3 filesystems (EXTLINUX), PXE network boots (PXELINUX), or ISO 9660 CD-ROMs (ISOLINUX). It also includes a tool, MEMDISK, which loads legacy operating systems from these media.
- pciutils - The pciutils package contains various utilities for inspecting and setting up devices connected to the PCI bus.
Check and destroy the default libvirtd bridge (By default, VMs will only have network access to other VMs on the same server (and to the host itself) via private network 192.168.122.0. If you want the VMs to have access to your LAN, then you must create a network bridge on the host.):
systemctl start libvirtd
systemctl status libvirtd
ip route | grep virbr0
virsh net-destroy default
virsh net-autostart default --disable
virsh net-undefine default
ip route | grep virbr0
Check and disable libvirtd:
systemctl status libvirtd
systemctl stop libvirtd
systemctl disable libvirtd
Provision Planning
The servers I'm using to write this tutorial are a little modest in the RAM department with only 16 GiB of RAM. We need to subtract at least 2 GiB for the host nodes, leaving us with a total of 14 GiB. That needs to be divided up among all your servers. Now, nothing says you have to use it all, of course. It's perfectly fine to leave some RAM unallocated for future use. This is really up to you and your needs.
Let's put together a table with the RAM we plan to allocate and the LVs we're going to create for each server. The LVs will be named after the server they'll be assigned to, with the suffix _0. Later, if we add a second "hard drive" to a server, it will have the suffix _1 and so on.
Server | RAM (GiB) | Storage Pool (VG) | LV name | LV size |
---|---|---|---|---|
vm01-nagios | 2 | agrp-c01n01 | vm01-nagios_0 | 150 GB |
vm02-www | 4 | agrp-c01n01 | vm02-www_0 | 150 GB |
vm03-mysql | 3 | agrp-c01n02 | vm03-mysql_0 | 100 GB |
vm04-asterisk | 4 | agrp-c01n02 | vm04-asterisk_0 | 100 GB |
Total | 13 GiB | -------------- | --------------- | 500 GB |
As you can see, we'll use 13 GiB of RAM, so the remaining RAM will be 3 GiB (16-13=3). And we'll use 500 GB of storage, so the remaining VM-dedicated storage (DRBD r0+r1 = 1000 GB in total) will be 500 GB (1000-500=500).
The same approach can be used for CPU - read this blog-post - how-many-vCPU-per-pCPU
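Before creating anything, it doesn't hurt to sanity-check this plan against what the nodes actually have. A quick look at free memory and volume group free space (run on each node):
free -g # total and used RAM in GiB on the host
vgs # the VFree column shows unallocated space in each VG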
Provision Shared CentOS ISOs
Before we can install the OS, we need to copy the installation media and our driver disk, if needed, into /shared/files.
For our needs we'll install CentOS 6 & CentOS 7 machines (for Windows machines, please visit: AN!Cluster_Tutorial - alteeve.com). So download both CentOS 6 & 7 Minimal images and send them over to the nodes (I'll be using one of our office machines):
pcs cluster start --all # if not started previously
rsync -av --progress CentOS-7-x86_64-Minimal-1708.iso root@172.16.51.1:/shared/files/
rsync -av --progress CentOS-6.9-x86_64-minimal.iso root@172.16.3.235:/shared/files
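Optionally, verify that the ISOs weren't corrupted in transit by comparing checksums on the source machine and on a node (the host IP below is the same one used in the rsync above):
sha256sum CentOS-7-x86_64-Minimal-1708.iso CentOS-6.9-x86_64-minimal.iso
ssh root@172.16.51.1 'sha256sum /shared/files/CentOS-*.iso'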
Creating Storage for VMs
Earlier, we used parted to examine our free space and create our DRBD partitions. Unfortunately, parted shows sizes in GB (base 10), whereas LVM uses GiB (base 2). If we used LVM's "xxG" size notation, we would use more space than we expect, relative to our planning in the parted stage. LVM doesn't allow specifying new LV sizes in GB instead of GiB, so here we will specify sizes in MiB to help narrow the difference.
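To put the GB/GiB difference in concrete numbers (just arithmetic, not a required step):
echo $((150 * 1000**3)) # 150000000000 bytes = 150 GB, what parted reports
echo $((150 * 1024**3)) # 161061273600 bytes = 150 GiB, what lvcreate's "150G" would allocate (about 7% more)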
Creating storage is the same for all VMs, so I'll show only one LV creation:
lvcreate -L 150000M -n vm01-nagios_0 agrp-c01n01_vg0
or you can use a byte count (i.e. 150 GiB = 150*1024*1024*1024 bytes = 161061273600b):
lvcreate -L 161061273600b -n vm01-nagios_0 agrp-c01n01_vg0
lvdisplay /dev/agrp-c01n01_vg0/vm01-nagios_0
To remove lv:
lvremove /dev/agrp-c01n01_vg0/vm01-nagios_0
Creating OpenVSwitch group for VMs
Find the name of the bridge:
ovs-vsctl list Bridge | grep name
Add a port group to the file /shared/provision/ovs-network.xml (if more than one VLAN is needed, add a <portgroup>..</portgroup> element for every VLAN):
<network>
  <name>ovs-network</name>
  <forward mode='bridge'/>
  <bridge name='ovs_kvm_bridge'/>
  <virtualport type='openvswitch'/>
  <portgroup name='vlan-51'>
    <vlan>
      <tag id='51'/>
    </vlan>
  </portgroup>
</network>
To add the network to KVM (from both nodes):
systemctl start libvirtd
virsh net-define /shared/provision/ovs-network.xml
virsh net-list --all
virsh net-start ovs-network
virsh net-autostart ovs-network
virsh net-list
systemctl stop libvirtd
To delete network from KVM (if needed - from both nodes):
virsh net-list
virsh net-destroy ovs-network
virsh net-autostart --disable ovs-network
virsh net-undefine ovs-network
Virtio
So-called "full virtualization" is a nice feature because it allows you to run any operating system virtualized. However, it's slow because the hypervisor has to emulate actual physical devices such as RTL8139 network cards. This emulation is both complicated and inefficient.
Virtio is a virtualization standard for network and disk device drivers where just the guest's device driver "knows" it is running in a virtual environment, and cooperates with the hypervisor. This enables guests to get high performance network and disk operations, and gives most of the performance benefits of paravirtualization.
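Once a guest installed with virtio devices is running, you can confirm from inside the guest that the paravirtualized drivers are actually in use (a quick sanity check, not a required step):
lspci | grep -i virtio # should list virtio network and block devices
lsmod | grep virtio # virtio_net / virtio_blk (and friends) should be loaded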
Creating virt-install call
touch /shared/provision/vm01-nagios.sh
chmod 755 /shared/provision/vm01-nagios.sh
vim /shared/provision/vm01-nagios.sh
virt-install --connect qemu:///system \
--name=vm01-nagios \
--ram=2048 \
--arch=x86_64 \
--vcpus=2 \
--location=/shared/files/CentOS-6.9-x86_64-minimal.iso \
--os-variant=centos6.9 \
--network network=ovs-network,portgroup=vlan-51,model=virtio \
--disk path=/dev/agrp-c01n01_vg0/vm01-nagios_0,bus=virtio \
--graphics none \
--extra-args 'console=ttyS0'
Options Described:
- --connect qemu:///system - This tells virt-install to use the QEMU hardware emulator (as opposed to Xen, for example) and to install the server on the local node.
- --name vm01-nagios - This sets the name of the server. It is the name we will use in the cluster configuration and whenever we use the libvirtd tools, like virsh.
- --ram 2048 - This sets the amount of RAM, in MiB, to allocate to this server. Here, we're allocating 2 GiB, which is 2048 MiB.
- --arch x86_64 - the CPU architecture: i386 for old 32-bit CPUs, i686 for newer 32-bit CPUs, x86_64 for 64-bit CPUs.
- --vcpus 2 - This sets the number of CPU cores to allocate to this server. Here, we're allocating two CPUs.
- --location /shared/files/CentOS-6.9-x86_64-minimal.iso - Distribution tree installation source. virt-install can recognize certain distribution trees and fetches a bootable kernel/initrd pair to launch the install.
- --os-variant centos6.9 - This tweaks the virt-manager's initial method of running and tunes the hypervisor to try and get the best performance for the server. There are many possible values here for many, many different operating systems. If you run osinfo-query os on your node, you will get a full list of available operating systems. If you can't find your exact operating system, select the one that is the closest match.
- --network network=ovs-network,portgroup=vlan-51,model=virtio - This tells the hypervisor that we want to create a network card using the virtio "hardware" and that we want it plugged into the ovs-network bridge's vlan-51 portgroup. We only need one network card, but if you want two or more, simply repeat this option (see the sketch after this list). If you create two or more bridges, you can have different network devices connect to different bridges.
- --disk path=/dev/agrp-c01n01_vg0/vm01-nagios_0,bus=virtio - This tells the hypervisor what LV to use for the server's "hard drive". It also tells it to present the disk on the paravirtualized virtio bus.
- --graphics none - we'll use only CLI without any GUI (also for installation)
- --extra-args 'console=ttyS0' - this is needed to see the installation process on the serial console
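For reference, here is a sketch of the analogous call for the next server in our plan, vm02-www. The --vcpus value and --os-variant here are assumptions (check osinfo-query os for the exact variant name on your host), and a second NIC or disk would be added simply by repeating the --network or --disk option:
virt-install --connect qemu:///system \
--name=vm02-www \
--ram=4096 \
--arch=x86_64 \
--vcpus=2 \
--location=/shared/files/CentOS-7-x86_64-Minimal-1708.iso \
--os-variant=centos7.0 \
--network network=ovs-network,portgroup=vlan-51,model=virtio \
--disk path=/dev/agrp-c01n01_vg0/vm02-www_0,bus=virtio \
--graphics none \
--extra-args 'console=ttyS0'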
Installing VM on the node
We can install any server from either node. However, we know that each server has a preferred node, so it's sensible to use that host for the installation stage. In the case of vm01-nagios, the preferred host is agrp-c01n01, so we'll use it to start the installation.
- ssh to the agrp-c01n01
- systemctl start libvirtd
- /shared/provision/vm01-nagios.sh
- Go through steps of text-mode installation
- To exit the installed VM's console, hit Ctrl+5 (remote connect) or Ctrl+] (local connect)
- To connect to the installed VM: virsh console vm01-nagios
- To list installed VMs and their state: virsh list --all
- To start a shut-off VM: virsh start vm01-nagios
Steps to perform on a VM after installation (if you need them; see the verification sketch after these lists):
For CentOS6:
- chkconfig ip6tables off
- service ip6tables stop
- cat /etc/sysconfig/network
- NETWORKING=yes
- NETWORKING_IPV6=no
- vi /etc/sysctl.conf
- net.ipv6.conf.all.disable_ipv6 = 1
- net.ipv6.conf.default.disable_ipv6 = 1
- kernel.panic = 5 # self-reboot in 5 seconds when panicking
- sysctl -p
- vi /etc/sysconfig/network-scripts/ifcfg-eth0
- NM_CONTROLLED=no
- ONBOOT=yes
- service network restart
- ip route
For CentOS7 (this version does a self-restart on kernel panic by default):
- systemctl stop NetworkManager
- systemctl disable NetworkManager
- chkconfig network on
- systemctl start network
- vi /etc/sysconfig/network-scripts/ifcfg-eth0
- ONBOOT=yes
- systemctl restart network
- ip route
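A quick way to confirm, from inside the guest, that the settings above took effect (these commands are standard on both CentOS 6 and 7):
ip a | grep inet6 # should print nothing once IPv6 is fully disabled
sysctl kernel.panic # on CentOS 6 should report the 5 we set above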
To list the network interfaces of all running VMs (run on a node):
for name in $(virsh list | awk '{print $2}' | grep -v '^$\|Name'); do echo $name; virsh domiflist $name; echo ""; done
vm02-www
Interface Type Source Model MAC
----------------------------------------------------------------------
vnet0 bridge ovs-network virtio 52:54:00:77:3a:a0
nagios
Interface Type Source Model MAC
----------------------------------------------------------------------
vnet1 bridge ovs-network virtio 52:54:00:77:3d:19
VM shutdown test
To test if the VM can be shut down cleanly:
virsh shutdown vm01-nagios
If the shutdown is not performed and the VM remains running (mostly this is a problem on CentOS 6), connect to the guest and install acpid:
virsh console vm01-nagios
yum -y install acpid
service acpid start
chkconfig --level 235 acpid on
chkconfig --list acpid
Test again:
virsh shutdown vm02-www
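To confirm the guest actually powered off after the test, check its state; it should be listed as "shut off":
virsh list --all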
ACPI (Advanced Configuration and Power Interface) is an enhanced interface for power management. It is a component of most modern computers and allows power to be managed programmatically and battery state and parameters to be queried. virsh shutdown sends an ACPI power-off request to the guest, which is why the guest needs acpid running to honor it.
These tutorials were used to understand and set up clustering: