Real hardware is full of bugs, so nothing will replace real-world testing. However, virtual machines (also called VMs) can be of great help. A virtual machine is an emulation of a computer. VMs that run inside an existing operating system are particularly useful when developing low-level software such as kernels and boot loaders. Among other things, virtual machines reduce the need for a physical machine dedicated to testing, they boot and reboot fast, and they allow non-intrusive debugging.
The software that emulates the hardware for virtual machines is sometimes called a "hypervisor" or a "virtual machine monitor". QEMU (open source), Bochs (open source), VirtualBox (mostly open source) and VMware (closed source) are a few examples of such software. These tools are modular and allow the user to attach several peripherals to the virtual machine. In order to run Syslinux, a hard drive and/or a network interface is required. With these hypervisors, the operating system that runs inside the virtual machine is called the guest operating system, and the one that runs the hypervisor itself is called the host operating system.
Since it is probably the most common free hypervisor at the time of writing, this page mainly focuses on using QEMU, also sometimes erroneously called KVM. Everything presented here can also be done with any other hypervisor or tool set. Reading the appropriate tool documentation is highly recommended.
The command to run a recent version of QEMU is either qemu-system-i386 or qemu-system-x86_64, depending on the desired processor architecture. With older versions of QEMU, the command might just be qemu. This document assumes the command to run is qemu-system-i386.
- 1 Disk image preparation
- 2 Networking
- 3 Special case for UEFI
- 4 User Contribution: Gene
- 5 Resources
Disk image preparation
A virtual disk attached to the VM can be used to test the disk-installed Syslinux versions.
Disk image creation
With QEMU, the hard disk content can be stored in and retrieved from a regular file. Most if not all hypervisors provide tools to create disk images. QEMU provides the tool qemu-img to work with disk images. The following command creates a 10MB disk image using the raw file format.
qemu-img create -f raw testing.img 10M
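If qemu-img is not at hand, a raw image is just a file of the right size, so it can also be created with standard tools. A sketch using GNU coreutils, with the same file name as the example above:

```shell
# Create a 10MB sparse file; it occupies almost no disk space
# until data is actually written to it.
truncate -s 10M testing.img

# Alternative with dd: writes actual zeros, producing a non-sparse file.
dd if=/dev/zero of=testing.img bs=1M count=10

# Verify the size: 10 * 1024 * 1024 = 10485760 bytes.
stat -c %s testing.img
```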
The raw file format is simple and usable by most hypervisors. Other formats providing more features are also available and documented in the qemu-img man page. The disk image file may or may not be partitioned. When not partitioned, the disk is sometimes called a superfloppy. Note that UEFI firmware requires a partition in which to look for the boot files. Most partitioning tools, like fdisk and parted, can operate directly on the image file. Here is a partitioning example that can be used for EFI: a GPT label (partition table) is created, then a 4MB partition named ESP, and finally a partition named linux spanning from 4MB to 10MB.
parted -s testing.img mklabel gpt
parted -s -a none testing.img mkpart ESP fat32 0 4M
parted -s -a none testing.img mkpart linux ext4 4M 10M
Without partitions, the file system can be created directly by the mkfs utility.
mkfs -t ext4 testing.img
With partitions, it can be done with the losetup and partx utilities, which require root privileges. losetup creates and manages loop devices. A loop device is a device node whose content is mapped onto a regular file. partx tells the kernel to create additional loop devices for the partitions of a given device node.
losetup /dev/loop0 testing.img
partx -a /dev/loop0
mkfs -t fat /dev/loop0p1
mkfs -t ext4 /dev/loop0p2
partx -d /dev/loop0
losetup -d /dev/loop0
The first two commands associate the device node /dev/loop0 to the file testing.img and create the additional device nodes /dev/loop0p1 and /dev/loop0p2 for the two partitions. Then both mkfs commands are the usual way to create a file system. Finally, the commands partx -d and losetup -d destroy the loop devices in reverse order of creation.
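A failure in the middle of this sequence would leave stale loop devices behind, so the whole sequence can be wrapped in a small script that cleans up on exit. This is only a sketch (to be run as root), assuming the same /dev/loop0 device and testing.img file as above:

```shell
#!/bin/sh -e
# Sketch: create file systems on a partitioned image, tearing the loop
# devices down even if one of the mkfs commands fails.
IMG=testing.img
LOOP=/dev/loop0

cleanup() {
    # Ignore errors here: the devices may already be gone.
    partx -d "$LOOP" 2>/dev/null || true
    losetup -d "$LOOP" 2>/dev/null || true
}
trap cleanup EXIT

losetup "$LOOP" "$IMG"
partx -a "$LOOP"
mkfs -t fat  "${LOOP}p1"
mkfs -t ext4 "${LOOP}p2"
```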
There are more complex ways to do these steps without requiring root privileges. Note that in partition-less mode there is no need for a loop device at all, as mkfs can operate on a regular file directly.
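For FAT file systems specifically, the mtools package can populate an image without root privileges, a loop device, or any mount at all. A sketch, assuming mtools is installed and the image is unpartitioned; the ::/ syntax refers to the root directory of the FAT image (for a partitioned image, mtools also accepts an @@offset suffix, e.g. testing.img@@1048576 for a partition starting 1MiB into the file):

```shell
# Create a FAT file system directly in the (unpartitioned) image file;
# mkfs.fat works on regular files without root privileges.
mkfs -t fat testing.img

# Copy a file in and list the contents, all without mounting anything.
mcopy -i testing.img syslinux.cfg ::/
mdir  -i testing.img ::/
```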
There are several ways to install Syslinux, depending on the host operating system and the variant to be installed. Some of them may require the file system to be mounted. Again, without partitions the file can be used directly instead of a special device file, but using the mount command requires root privileges.
mount testing.img /media/syslinux
extlinux --install /media/syslinux
It is possible to mount a file system without root privileges by using FUSE. However, as of 2015 the extlinux installer doesn't work correctly on a file system mounted with FUSE.
To mount the partitions of a virtual disk, use the same losetup and partx method as for the creation of the file systems, followed by the mount utility.
losetup /dev/loop0 testing.img
partx -a /dev/loop0
mount /dev/loop0p1 /media/syslinux
Then the files can be copied to the appropriate directory if needed. For instance, for a minimal EFI64 installation, syslinux.efi and ldlinux.e64 can be copied to the directory EFI/BOOT.
mkdir -p /media/syslinux/EFI/BOOT
cp efi64/efi/syslinux.efi /media/syslinux/EFI/BOOT/BOOTX64.EFI
cp efi64/com32/elflink/ldlinux/ldlinux.e64 /media/syslinux/EFI/BOOT
The file system should always be unmounted before running a VM using the virtual disk. Not doing so would likely result in a corrupted file system. The way to unmount the file system depends on the way it has been mounted. If it has been mounted as shown previously, it should be unmounted with the following commands in that specific order.
umount /media/syslinux
partx -d /dev/loop0
losetup -d /dev/loop0
The -hda option of QEMU can be used to specify the disk image. On more recent versions of QEMU, the -drive option can be used instead and avoids a warning message.
qemu-system-i386 -hda testing.img
qemu-system-i386 -drive file=testing.img,format=raw
Networking
A virtual network device can be attached to the VM. It can be used to boot with the PXE capability, or if the network is needed later on. QEMU has several ways of emulating the network. This page shows only two practical configuration examples. The QEMU networking documentation and the qemu man page contain more information about the supported features.
In any case, the addition of a network device can be specified with the -net option.
qemu-system-i386 -net nic
Several models of network interface are supported, and the MAC address can be chosen by adding arguments to the -net option. Note that to help debug the network setup, the -net dump option can be added; it generates a file holding all the exchanged data, named qemu-vlan0.pcap by default. Note also that the -net option is the old way to configure the network; newer versions of QEMU support the -device option.
The network device inside the virtual machine is really just a network device connected to the motherboard. By itself it is not connected to anything, as if no wire were plugged into its socket. There are several ways to connect it with QEMU; the following sections present two of them.
The user network in QEMU puts the guest system behind a NAT. As a result, the DHCP server (which can only operate on a LAN) is emulated by QEMU, and its configuration is very limited. The basic usage to just set up the network is the following option.
qemu-system-i386 -net nic -net user
In order to provide PXE booting, the DHCP server must send an option signaling which server to download the files from. However, the QEMU user network does not allow setting such an option manually; the way it is intended to work is by emulating a TFTP server. QEMU can take the content of a directory and serve it over the TFTP protocol. This is done by adding the tftp and bootfile arguments to the second -net option.
qemu-system-i386 -net nic -net user,tftp=~/tftp,bootfile=pxelinux.0
In this example, the directory ~/tftp must contain at least pxelinux.0, ldlinux.c32 and a configuration file in the directory pxelinux.cfg/. This is explained in greater detail in the page dedicated to PXELINUX.
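As an illustration, the served directory and a minimal configuration file could look like the following; vmlinuz and initrd.img are placeholder names for whatever kernel and initramfs the guest should boot:

```
# Layout of ~/tftp :
#   pxelinux.0
#   ldlinux.c32
#   pxelinux.cfg/default
#
# Content of pxelinux.cfg/default (minimal example):
DEFAULT linux

LABEL linux
    KERNEL vmlinuz
    APPEND initrd=initrd.img
```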
The simulated network has several limitations. The configuration of the simulated DHCP and TFTP servers is very limited. Having the guest operating system behind a gateway may also be an annoyance, as the LAN it belongs to is not directly accessible. One alternative is to use the Linux TUN/TAP infrastructure. The TUN/TAP system creates a virtual device, usually named tap0. QEMU then virtually connects a wire between that tap0 interface and the network device it created for the guest operating system. The following commands (to be run as root) create a tap0 interface owned by the user with uid 1000, then configure it with an IP address.
tunctl -t tap0 -u 1000
ifconfig tap0 up 10.0.0.254
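On systems where tunctl and ifconfig are not available (both come from older packages), the same setup can be done with the ip tool from iproute2. A sketch, to be run as root, with the same interface name, uid and address as above:

```shell
# Create a TAP device owned by uid 1000, assign it an address,
# then bring the interface up.
ip tuntap add dev tap0 mode tap user 1000
ip addr add 10.0.0.254/24 dev tap0
ip link set tap0 up
```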
Then the following QEMU options can be used to connect the virtual machine network interface to that tap0 interface.
qemu-system-i386 -net nic -net tap,ifname=tap0,script=no
Note 1: A helper program exists to do these things without actually requiring root privileges. It is documented in the man page of qemu or qemu-system. However, not all distributions include it, for security reasons. It is now shipped in Debian, but it doesn't have the suid bit set by default, which would be needed to avoid requiring root privileges. Should the suid bit be set manually, it would disappear on every update. So the most reliable way to configure the tap network is to do it manually as shown here.
Note 2: This TUN/TAP mechanism is most useful together with the so-called Linux bridges. Bridges are virtual network interfaces that act as a network switch for all the interfaces that belong to it. If, for instance, tap0, tap1 and eth0 are added to a bridge, they are now all part of the same LAN, thus effectively including the virtual machine using those TAP interfaces into the same LAN as eth0.
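Such a bridge can also be created with iproute2. A sketch (to be run as root), using the interface names from the note above; be aware that moving eth0 into a bridge drops any IP configuration it had:

```shell
# Create a bridge, attach the physical and TAP interfaces to it,
# then bring the bridge up. All attached interfaces now share one LAN.
ip link add name br0 type bridge
ip link set eth0 master br0
ip link set tap0 master br0
ip link set tap1 master br0
ip link set br0 up
```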
Special case for UEFI
Everything explained above is still valid for emulating a system with a UEFI firmware. If a disk image is used, care should be taken when creating the partitions to make a GPT label (partition table) instead of a DOS label. The main difference is that the UEFI firmware must be explicitly specified on the command line with the -bios option.
OVMF is the UEFI firmware implemented by the EDK2 project. Some Linux distributions may provide a build of OVMF, like the qemu-efi package in Debian. However, the Debian version is still broken as of November 2015. If present, it can be tested with the following command.
qemu-system-i386 -bios /usr/share/qemu-efi/QEMU_EFI.fd
If QEMU crashes, a newer version may be downloaded from the official website. The IA32 version should run in qemu-system-i386 and the X64 version in qemu-system-x86_64. A hybrid system (x86_64 + efi32) can be emulated as well.
If it still doesn't work or if additional features are needed (like PXE or Secure Boot), a recompilation from scratch may be needed. The compilation process of EDK2 is a bit exotic and not covered in this page. Here are a few links to help in this process.
- A tutorial for building OVMF for ubuntu https://wiki.ubuntu.com/UEFI/EDK2
- The main wiki page to build OVMF http://tianocore.sourceforge.net/wiki/How_to_build_OVMF
- The most useful page to actually build OVMF http://tianocore.sourceforge.net/wiki/Using_EDK_II_with_Native_GCC
Bugs with older iPXE
In order to boot from the network using PXE, QEMU uses iPXE as the option ROM for its virtual Ethernet devices. An option ROM is a kind of add-on to the firmware (BIOS or UEFI) which, in this case, provides the network features through a standard programming interface.
Unfortunately, the version of iPXE shipped with QEMU as of the end of 2015 has a bug with the UEFI firmware that triggers the following error message at the early boot stage of Syslinux.
Failed to read blocks: 0xC
This bug has been fixed in iPXE by commit 3376fa5 on the 1st of September 2015. The bug is due to iPXE replacing the existing network stack with its own, without providing the PxeBaseCode interface that Syslinux uses. There are two solutions: either recompile the iPXE option ROM, or use the virtio network device, for which OVMF has its own PxeBaseCode implementation.
Workaround using virtio
virtio is a kernel API for virtual machines designed to achieve better I/O performance. When using virtio for the network, OVMF provides several low-level protocol implementations directly, so the iPXE option ROM can be disabled and won't replace the default one.
Unfortunately, disabling the option ROM cannot be done with the old-fashioned -net option. The newer -device and -netdev options must be used.
qemu-system-x86_64 -bios path/to/OVMF.fd \
    -device virtio-net-pci,netdev=netdevname,romfile= \
    -netdev tap,id=netdevname,ifname=tap0,script=no
With these options, the netdev argument is mandatory for the -device option and must match the id argument of the -netdev option. This makes it possible to create several network devices linked to the outside world in different ways. The romfile argument with an empty value disables the option ROM.
In order to update the option ROM containing iPXE, it has to be rebuilt from sources with the following commands. This has been tested with the git revision ed0d7c4.
git clone git://git.ipxe.org/ipxe.git
cd ipxe/src
make bin-x86_64-efi/realtek.efirom
The make command may take some time; the -j 4 option may be added to parallelize the compilation. The resulting ROM can then be used with the romfile argument as shown previously.
qemu-system-x86_64 -bios path/to/OVMF.fd \
    -device virtio-net-pci,netdev=netdevname,romfile=path/to/bin-x86_64-efi/realtek.efirom \
    -netdev tap,id=netdevname,ifname=tap0,script=no
When iPXE starts, it should print something like iPXE 1.0.0+ (ed0d7) at the top of the screen. The string in parentheses is made of the first five digits of the commit id used, indicating whether the right version of iPXE is in use.
User Contribution: Gene
Currently, I'm utilizing VMware Server as a desktop virtualization platform. Unlike VMware Player, it allows one to create VMs from scratch. Unlike VMware Workstation, their desktop application targeted towards developers, it is limited to 1 snapshot and no 3D acceleration. On occasion, the remote management capabilities of VMware Server have proven useful.
This is merely a brief comparison based on my use.
- QEMU: Doesn't idle the processor
- BOCHS: Doesn't idle the processor; a nice GUI with buttons for controlling some characteristics (remove/install floppy, power off)
- DOSEMU: Idles the processor; not a full virtualization platform
- VMware Server: Doesn't idle the processor; compatible (depending on VMware hardware version) with the enterprise products (VMware ESX) I manage at my job; I've been using VMware products for over 8 years
I've also been using raw disk images but rather than creating a full hard drive, I've used standard sized floppy images (as floppies, of course). Most systems today understand both "1.44MB" (1440kiB, technically) and "2.88MB" floppy images. For most uses, I use "2.88MB" floppy images as they give you more space while still being standardized. I did notice some issues with a "2.88MB" image and DOSEMU but I'm not sure if DOSEMU can be utilized in this manner effectively.
Using a floppy disk image allows me to mount the disk in Linux, copy files in or edit existing files, unmount it, then use it with VMware Server, BOCHS or QEMU.