Sunday, December 29, 2013

Slackware as a VM host with qemu

With this article I want to show you how to use Slackware as a host for virtual machines using qemu and kvm. The physical machine I'm using has an old single-core AMD 4000+ CPU, 1GB of RAM and a single 1GBit network interface, and runs Slackware64-14.0. I assume that you have basic Slackware and qemu knowledge because this article does not focus too much on either of them. The focus of this article is to get both working with each other.
This article covers the following topics:

1. Prepare your physical network interface for bridging
2. Load the kvm module
3. Install qemu
4. Setup a little Slackware sample VM
5. Start/Stop a VM during booting and shutting down the host

Alright let's roll!

1. Prepare your physical network interface for bridging

As mentioned at the beginning, the physical machine has a single network interface, eth0. Currently it is configured as follows:

Device: eth0
Static IP: 192.168.1.20
Netmask: 255.255.255.0
Gateway: 192.168.1.69

As you can see, a very simple configuration. All settings for eth0 are stored in /etc/rc.d/rc.inet1.conf. To get the eth0 device ready for bridging, the following tasks have to be performed:

- remove all configuration for eth0
- add or keep the information for the default gateway
- add the configuration for br0 which will be the bridging device

Luckily all changes can be done by editing a single file, /etc/rc.d/rc.inet1.conf:

# vi /etc/rc.d/rc.inet1.conf
...
# Config information for eth0:
IPADDR[0]=""
NETMASK[0]=""
USE_DHCP[0]=""
DHCP_HOSTNAME[0]=""
...
# Default gateway IP address:
GATEWAY="192.168.1.69"
...
# Example of how to configure a bridge:
# Note the added "BRNICS" variable which contains a space-separated list
# of the physical network interfaces you want to add to the bridge.
IFNAME[0]="br0"
BRNICS[0]="eth0"
IPADDR[0]="192.168.1.20"
NETMASK[0]="255.255.255.0"
USE_DHCP[0]=""
DHCP_HOSTNAME[0]=""
...


As you can see above, the configuration for eth0 was removed, the default gateway is 192.168.1.69 and the configuration for br0 was added. Next, reboot your machine to make sure that the configuration works. Before you reboot you should probably make sure that you can still log in locally, in case the above network configuration does not work for you:

# shutdown -r now
...


After reboot, login and run brctl:

# brctl show br0
bridge name     bridge id               STP enabled     interfaces
br0             8000.001bfcd285c9       no              eth0


The brctl command clearly shows the br0 device with the physical eth0 device attached. Run ifconfig:

# ifconfig
br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.20  netmask 255.255.255.0  broadcast 192.168.1.255
...

eth0: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500
...


And netstat for checking your default route:

# netstat -rn
Kernel IP routing table
Destination  Gateway       Genmask         Flags   MSS Window  irtt Iface
0.0.0.0      192.168.1.69  0.0.0.0         UG        0 0          0 br0
127.0.0.0    0.0.0.0       255.0.0.0       U         0 0          0 lo
192.168.1.0  0.0.0.0       255.255.255.0   U         0 0          0 br0


At this point the bridging device is ready. Next you need two extra scripts for qemu:
- qemu-ifup will be executed when a VM starts. It creates another network device, e.g. tap0, which will be added to the bridging device br0
- qemu-ifdown will be executed when a VM is halted. It removes the tap0 device from the bridge and finally removes tap0 from the system
Start with the first script, qemu-ifup:

# vi /etc/qemu-ifup
#!/bin/sh
# create the tap device handed over by qemu ($1), bring it up and attach it to the bridge
/usr/sbin/openvpn --mktun --dev $1 --user `id -un`
/sbin/ifconfig $1 0.0.0.0 promisc up
/usr/sbin/brctl addif br0 $1


And the second script qemu-ifdown:

# vi /etc/qemu-ifdown
#!/bin/sh
# detach the tap device ($1) from the bridge and remove it from the system
/usr/sbin/brctl delif br0 $1
/usr/sbin/openvpn --rmtun --dev $1


Both scripts need to be executable:

# chmod 755 /etc/qemu-ifup
# chmod 755 /etc/qemu-ifdown


All VM's will communicate via the br0 device now.
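
If you want to convince yourself that the scripts work before qemu ever calls them, you can run them by hand with a throwaway tap device (tap9 is just an arbitrary name for this test); brctl show br0 should list tap9 next to eth0 after the first command and drop it again after the last one:

# /etc/qemu-ifup tap9
# brctl show br0
# /etc/qemu-ifdown tap9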

2. Load the kvm module

First try to load the kvm module manually. If you have an AMD processor (like me) run:

# modprobe kvm-amd

If you have an Intel processor (unlike me) run:

# modprobe kvm-intel
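
Whether the module loads at all depends on the CPU offering hardware virtualization in the first place (the svm flag on AMD, vmx on Intel). If modprobe complains, a quick look at /proc/cpuinfo tells you whether your CPU supports it; no output means no hardware virtualization:

# grep -Eo 'svm|vmx' /proc/cpuinfo | sort -u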

After the kvm module has been loaded, run dmesg and look for output like this:

# dmesg
...
[61400.570649] kvm: Nested Virtualization enabled


To autoload the kvm module during boot edit /etc/rc.d/rc.modules-3.2.29 and append the following lines at the end of the rc script:

# vi /etc/rc.d/rc.modules-3.2.29
...
# KVM
/sbin/modprobe kvm-amd


Reboot your machine:

# shutdown -r now

And after the reboot check if the kvm module was loaded during boot:

# lsmod | grep kvm
kvm_amd                49306  0
kvm                   346407  1 kvm_amd
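

Another quick check is the /dev/kvm device node, which appears once the module is loaded and which qemu's -enable-kvm option needs:

# ls -l /dev/kvm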


3. Install qemu

Download the qemu source and store it under e.g. /usr/src. Then change to /usr/src and extract the source package:

# tar xf qemu-1.3.0.tar.bz2

Change into the new directory qemu-1.3.0:

# cd qemu-1.3.0

And run configure and make to compile the source and to install the binaries:

# ./configure --prefix=/opt/qemu/1.3.0
...
# make && make install
...
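

By default this builds qemu for every supported target architecture. If you only care about x86_64 guests you can shorten the build considerably by restricting the target list when running configure (optional, the full build works just as well):

# ./configure --prefix=/opt/qemu/1.3.0 --target-list=x86_64-softmmu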


Create a symbolic link named latest:

# ln -s /opt/qemu/1.3.0 /opt/qemu/latest
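
The latest link is what makes future upgrades painless: build a newer qemu under its own /opt/qemu/<version> directory and simply re-point the link, e.g. (the 1.4.0 path is only a hypothetical example):

# ln -sfn /opt/qemu/1.4.0 /opt/qemu/latest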

And change the PATH variable for root in /etc/profile (just add /opt/qemu/latest/bin):

# vi /etc/profile
...
# For root users, ensure that /usr/local/sbin, /usr/sbin, and /sbin are in
# the $PATH.  Some means of connection don't add these by default (sshd comes
# to mind).
if [ "`id -u`" = "0" ]; then
  echo $PATH | grep /usr/local/sbin 1> /dev/null 2> /dev/null
  if [ ! $? = 0 ]; then
    PATH=/usr/local/sbin:/usr/sbin:/sbin:/opt/qemu/latest/bin:$PATH
  fi
fi
...


Log off and log back in to the host machine and check if e.g. qemu-system-x86_64 is accessible in your PATH:

# which qemu-system-x86_64
/opt/qemu/latest/bin/qemu-system-x86_64
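

You can also ask the binary itself for its version, to make sure the freshly built one is picked up and not some older installation:

# qemu-system-x86_64 -version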


At this point qemu has been installed and is ready to create a VM!

4. Setup a little Slackware sample VM

Now it's time to create a little sample VM. As usual I'll install Slackware. Yeah - Slackware inside Slackware. Hope this sounds awesome!
First create a space for your VM. I like to use /local/qemu where all my VM's reside. Create the directory and change into it:

# mkdir /local/qemu && cd /local/qemu

Next create a virtual hard disk in which the VM can be installed. I just named the file nagios01 because the VM will become a nagios VM later and its hostname will be nagios01 (I won't install nagios in this article). You can name it whatever you want; a name based on the purpose of the VM or on its hostname should be fine for now:

# qemu-img create nagios01.qcow 20G
Formatting 'nagios01.qcow', fmt=raw size=21474836480
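

Note the fmt=raw in the output: without an explicit format qemu-img creates a raw image, despite the .qcow file name. That is perfectly fine for our purpose. If you would rather have a real qcow2 image (which supports features like internal snapshots), you can request the format explicitly instead:

# qemu-img create -f qcow2 nagios01.qcow 20G

The rest of this article sticks with the raw image created above.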


Run ls to check if the file was created:

# ls -lah
total 8.0K
drwxr-xr-x  2 root root 4.0K Dec 28 10:41 ./
drwxrwxr-x 11 root root 4.0K Dec 28 10:41 ../
-rw-r--r--  1 root root  20G Dec 28 10:41 nagios01.qcow
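

The 20G shown by ls is only the apparent size; qemu-img creates the raw file sparse, so it hardly occupies any real disk space until the guest starts writing to it. du shows the actual usage:

# du -h nagios01.qcow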


Now start qemu-system-x86_64 with the following options:

# qemu-system-x86_64 -cpu qemu64 -m 256 /local/qemu/nagios01.qcow -cdrom /local/qemu/slackware64-14.0-install-dvd.iso -boot d -net nic,vlan=0,model=i82551 -net tap,vlan=0,ifname=tap0 -enable-kvm -curses

The above command starts a VM with a qemu64 processor and 256MB RAM, uses /local/qemu/nagios01.qcow as hard disk, uses /local/qemu/slackware64-14.0-install-dvd.iso as DVD (adjust the path to your DVD iso), boots from DVD, sets up an i82551 network interface which will be attached as tap0 to br0 (the bridging device we created at the beginning), enables kvm support and uses curses as graphics mode (since the Slackware installer runs in text mode, curses is perfectly fine).
As the Slackware pro that you are (otherwise you wouldn't try to install Slackware inside Slackware or even read this article) you can now install Slackware as usual. Just a few hints for the installation of the VM:

- the menus look a little strange but that shouldn't matter (in most cases only the borders)
- if you have trouble with your keymap, try out qwerty/us.map - I had fewer issues with that, except for some special characters which luckily I didn't need
- when you configure the network, use any free IP to avoid duplicate IP's, plus the usual other settings for your network
- during lilo configuration and installation use the normal vga option (no framebuffer) because we will stay in text mode
- when setting the root password: if you had trouble with your keymap, use a very simple password like 12345678 and change it later via SSH

After the installation of the VM has finished, stop the VM by running halt:

# halt
...


The VM will be halted but qemu is still running. Open a second terminal and kill qemu:

# pkill qemu

We will make that much more elegant later.
Rerun the above qemu-system-x86_64 command without the DVD and boot from the hard disk:

# qemu-system-x86_64 -cpu qemu64 -m 256 /local/qemu/nagios01.qcow -boot c -net nic,vlan=0,model=i82551 -net tap,vlan=0,ifname=tap0 -enable-kvm -curses

Lilo will automatically load the background image and qemu responds with '640x480 Graphics mode'. Pressing enter is the safe choice here otherwise you'll have to wait 2 minutes until lilo begins to autoboot.
After the VM has started do the following inside the VM:

- log in to your VM, check your network settings and try to log in via ssh
- change the root password of the VM (if you had trouble with your keymap before)
- reconfigure lilo inside the VM: change the lilo timeout from 2 minutes to 3 seconds, remove the bitmap, reuse the message file and finally reinstall lilo, e.g.:

# vi /etc/lilo.conf
append=" vt.default_utf8=0"
boot = /dev/sda
compact
lba32
message = /boot/boot_message.txt
prompt
timeout = 30
change-rules
  reset
vga = normal
image = /boot/vmlinuz
  root = /dev/sda2
  label = Linux
  read-only

# lilo
Added Linux *


Again stop the VM by shutting down the operating system:

# shutdown -h now

And in another terminal kill qemu after the VM has shutdown:

# pkill qemu

A VM was created, but with the current method it always occupies the terminal and shutting it down is anything but comfortable. To change that, run the VM in the background with the following qemu-system-x86_64 command:

# qemu-system-x86_64 -cpu qemu64 -m 256 /local/qemu/nagios01.qcow -boot c -net nic,vlan=0,model=i82551 -net tap,vlan=0,ifname=tap0 -enable-kvm -nographic -daemonize -serial telnet:localhost:7000,server,nowait,nodelay -monitor telnet:localhost:7100,server,nowait,nodelay

The qemu monitor is now accessible via telnet. The VM can be controlled by connecting to the monitor on port 7100. One prerequisite: inside the VM /etc/rc.d/rc.acpid must be activated. Don't deactivate that rc script, because the monitor's system_powerdown command sends an ACPI event which acpid turns into a clean shutdown.
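On Slackware an rc script is active simply by being executable, so a quick check inside the VM could look like this (nothing qemu-specific, just the usual Slackware convention; the chmod is only needed if the script is not executable yet):

# ls -l /etc/rc.d/rc.acpid
# chmod 755 /etc/rc.d/rc.acpid

After the VM has booted, connect to the qemu monitor and shut down the VM one more time: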

# telnet localhost 7100
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
QEMU 1.3.0 monitor - type 'help' for more information
(qemu) system_powerdown
...


After a few seconds the VM shuts down and the qemu monitor for this VM is inaccessible again. The last step is to start the VM's when the host boots and to stop them when the host shuts down.

5. Start/Stop a VM during booting and shutting down the host

We already know that we can start VM's by executing the qemu-system-x86_64 command. That's so easy we could put it directly into /etc/rc.d/rc.local to start the VM's when the host is booting. For shutting down the VM's we need expect. expect is very handy if you need to control a program that expects user input, like telnet. So the first thing you need to do is to create an expect script that connects to localhost and sends the system_powerdown qemu command:

# vi /etc/rc.d/rc.qemu.exp
set port [lindex $argv 0]
spawn telnet localhost "$port"
expect "(qemu)"
send "system_powerdown\r"
expect "(qemu)"


You can test the above script like this:

# expect /etc/rc.d/rc.qemu.exp 7100
spawn telnet localhost 7100
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
QEMU 1.3.0 monitor - type 'help' for more information
(qemu) system_powerdown
(qemu)


The next thing you need is an rc.qemu script that starts and stops the VM's. Create one like this (I just copied the rc.ntpd script and made some changes):

# vi /etc/rc.d/rc.qemu
#!/bin/sh

qemu_start() {
  echo "Starting all VM's..."
  # one line per VM: each gets its own serial port (700x) and monitor port (710x)
  /opt/qemu/latest/bin/qemu-system-x86_64 -cpu qemu64 -m 256 /local/qemu/nagios01.qcow -boot c -net nic,vlan=0,model=i82551 -net tap,vlan=0,ifname=tap0 -enable-kvm -nographic -daemonize -serial telnet:localhost:7000,server,nowait,nodelay -monitor telnet:localhost:7100,server,nowait,nodelay > /dev/null 2>&1
  /opt/qemu/latest/bin/qemu-system-x86_64 -cpu qemu64 -m 256 /local/qemu/mail01.qcow -boot c -net nic,vlan=0,model=i82551 -net tap,vlan=0,ifname=tap1 -enable-kvm -nographic -daemonize -serial telnet:localhost:7001,server,nowait,nodelay -monitor telnet:localhost:7101,server,nowait,nodelay > /dev/null 2>&1
  /opt/qemu/latest/bin/qemu-system-x86_64 -cpu qemu64 -m 256 /local/qemu/db01.qcow -boot c -net nic,vlan=0,model=i82551 -net tap,vlan=0,ifname=tap2 -enable-kvm -nographic -daemonize -serial telnet:localhost:7002,server,nowait,nodelay -monitor telnet:localhost:7102,server,nowait,nodelay > /dev/null 2>&1
}

qemu_stop() {
  echo "Stopping all VM's..."
  expect /etc/rc.d/rc.qemu.exp 7100 > /dev/null 2>&1
  expect /etc/rc.d/rc.qemu.exp 7101 > /dev/null 2>&1
  expect /etc/rc.d/rc.qemu.exp 7102 > /dev/null 2>&1
  # wait until all VM's are really down
  while pgrep -f qemu-system- > /dev/null; do sleep 1; done
}

case "$1" in
'start')
  qemu_start
  ;;
'stop')
  qemu_stop
  ;;
*)
  echo "usage $0 start|stop"
esac


Don't forget to make it executable:

# chmod 755 /etc/rc.d/rc.qemu

Just a quick explanation: qemu_start() starts all VM's: nagios01 with monitor port 7100, mail01 with monitor port 7101 and db01 with monitor port 7102. qemu_stop() calls the rc.qemu.exp script and passes the ports to it so the script can reach the qemu monitor of each VM. Then expect sends the system_powerdown command to each VM. At the end a while loop waits until no more qemu-system- processes are running, preventing the rc.qemu script from finishing before all VM's are shut down.
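Before wiring it into the boot scripts you can test the whole cycle by hand (adjust qemu_start() first if some of the disk images referenced there do not exist yet). Once rc.qemu stop returns, pgrep should not find any qemu-system- processes anymore:

# /etc/rc.d/rc.qemu start
Starting all VM's...
# /etc/rc.d/rc.qemu stop
Stopping all VM's...
# pgrep -fl qemu-system-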
Finally add the start and stop commands to rc.local and rc.local_shutdown:

# echo "/etc/rc.d/rc.qemu start" >> /etc/rc.d/rc.local
# echo "/etc/rc.d/rc.qemu stop" >> /etc/rc.d/rc.local_shutdown
# chmod 755 /etc/rc.d/rc.local
# chmod 755 /etc/rc.d/rc.local_shutdown


Reboot the host to check that all VM's start and stop properly.
Enjoy your Slackware inside Slackware.
