As I started looking at KVM recently, I realised that Scientific Linux 5.2/5.3 x86_64 wouldn’t boot on the VM due to a kernel panic early in the boot process, while i386 would finish installation but would hang right after the installation (it wouldn’t even load Grub – I double-checked that I was using the right disk image and boot order). Fedora 8 worked fine, but I’d prefer to have the same OS version deployed on the different platforms.
I had a go with the existing Xen images I already had. In the first place, the Xen VMs were paravirtualised and therefore used the Xen kernel, something that doesn’t work with KVM (at least as far as I’m aware). The easy workaround was to install a normal Linux kernel on the Xen disk image and then use this disk image with a KVM virtual machine. I presume that if the Xen VMs had used a normal Linux kernel, there would be no need to do anything except use the existing image for the KVM VM.
The process is as follows:
First, create a device mapping for the disk image:
kpartx -a node1.img
You can check which device it is with ‘kpartx -l node1.img’:
# kpartx -l node1.img
loop0p1 : 0 8177022 /dev/loop0 63
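The map-and-mount steps can be wrapped in a small script; a hedged sketch, assuming the root filesystem is on the image’s first partition and using the paths from this post:

```shell
#!/bin/sh
# Hedged sketch: map the partitions of a Xen disk image and mount
# the first one. IMG and MNT follow the example paths in this post.
IMG=node1.img
MNT=/media/xenvm

kpartx -a "$IMG"                                   # creates /dev/mapper/loop0p1 etc.
DEV=$(kpartx -l "$IMG" | awk '{print $1; exit}')   # first mapping, e.g. loop0p1
mkdir -p "$MNT"
mount "/dev/mapper/$DEV" "$MNT"
```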
Then mount the filesystem:
mount /dev/mapper/loop0p1 /media/xenvm
Chroot into the Xen VM’s filesystem:
chroot /media/xenvm
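Depending on the package set, yum and rpm scriptlets inside the chroot may expect the pseudo-filesystems to be present; a minimal sketch, assuming the mount point used above:

```shell
# Hedged sketch: bind-mount pseudo-filesystems that yum/rpm scriptlets
# may expect, then enter the chroot. MNT is the mount point from above.
MNT=/media/xenvm
for fs in proc dev sys; do
    mount --bind "/$fs" "$MNT/$fs"
done
chroot "$MNT" /bin/bash
```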
I only had to edit resolv.conf and add the site’s domain resolver:
I performed an update on the existing packages to avoid any package conflicts during the kernel installation, and then installed the normal Linux kernel:
yum update
yum install kernel
One of the last touches on the VM’s filesystem was to configure Grub by adding the following as the default boot option:
title Scientific Linux 5
	root (hd0,0)
	kernel /boot/vmlinuz-2.6.18-128.1.10.el5 root=LABEL=/ ro quiet
	initrd /boot/initrd-2.6.18-128.1.10.el5.img
	boot
Then exit the chroot environment:
exit
Unmount the disk image and remove the mapping:
umount /media/xenvm
kpartx -d node1.img
The disk image could then be used with KVM with no problems at all, both via libvirt and a direct KVM/Qemu invocation.
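For a quick direct test, something like the following boots the converted image under KVM; note that the binary name varies by distro (qemu-kvm, kvm or qemu-system-x86_64) and the memory size is just an example:

```shell
# Hedged example: boot the converted image directly under KVM/QEMU.
# The binary name and memory size are assumptions, adjust as needed.
qemu-kvm -m 512 -hda node1.img -boot c
```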