Xen PVH: Bringing Hardware to Paravirtualization
When Xen was first created, it wasn't possible to fully virtualize x86. Popek and Goldberg famously proposed the criteria for a system to support virtualization back in 1974, and x86 failed to meet them: a small number of privileged instructions would not trap if they were executed in unprivileged mode, so a hypervisor had no way to intercept them.
Part of the motivation for Xen was to work around this limitation by defining something sufficiently similar to x86 that it was a trivial porting target for operating systems that ran on x86, yet sufficiently different that virtualization was actually possible.
What is Paravirtualization?
Paravirtualization is almost virtualization. It's creating a virtual machine that is almost like the host, but not quite. In particular, in Xen all page table updates are made by explicit calls into the hypervisor (hypercalls) rather than by privileged instructions and direct page table writes.
There are a number of other differences. For example, a paravirtualized Xen kernel has an entry point that expects to already be in protected mode. All devices, including the programmable interrupt controller, are replaced by abstract equivalents.
Paravirtualization was the only option for the first versions of Xen because the hardware didn't support native virtualization. Because you needed to make invasive changes to the guest kernel's virtual memory subsystem to get it running on Xen, making some device driver changes at the same time seemed sensible.
Typically, devices are exposed to software via memory mapped registers. x86 also has a separate I/O space, but we'll ignore this for now. When you want to send a command to a device, you write a value to a specific device register by writing to the address where it's been mapped into the physical address space. The value is then sent across the bus.
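Concretely, such a register write looks something like the following from the driver's side. This is only a minimal sketch: the base address and register offset are hypothetical, not taken from any real device.

```c
#include <stdint.h>

/* Hypothetical register block: assume the device's registers have been
 * mapped (uncached) at DEV_BASE in the physical address space. */
#define DEV_BASE     0xFED00000UL   /* illustrative address only */
#define DEV_CMD_REG  0x10           /* illustrative register offset */

/* The volatile qualifier stops the compiler from caching or reordering
 * the store, so the value really is written out toward the device. */
static inline void mmio_write32(uintptr_t base, uintptr_t off, uint32_t val)
{
    *(volatile uint32_t *)(base + off) = val;
}

void send_device_command(uint32_t cmd)
{
    mmio_write32(DEV_BASE, DEV_CMD_REG, cmd);
}
```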
This is fine for real hardware, where an uncached memory access is really just a message to the memory controller, and sending it to another device is trivial. It's a lot harder to emulate in software: You have to mark the page as not having write permission so that the store traps, then disassemble the faulting store instruction, extract the value being written, and pass it on to the device-emulation code.
This is somewhat annoying, because you have to take an interrupt, save an entire register set, and disassemble and emulate an instruction just to pass one word of data from the guest to the host. In the Xen case, you're passing it from the guest to the domain 0 guest, so you then also have transitions back in the other direction.
To avoid this overhead, Xen device drivers use a shared memory ring between the front and back halves of the driver in the domU and dom0 guests, respectively (see chapter 4 in my book The Definitive Guide to the Xen Hypervisor). Doing so enables communication without the need to go via the hypervisor and allows it to be asynchronous.
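To make the idea concrete, here is a much-simplified sketch of a split-driver shared ring. It captures the shape of the mechanism (shared producer/consumer indexes, no hypervisor involvement on the data path), but it is not Xen's real interface, which is macro-generated from public/io/ring.h; all names here are illustrative.

```c
#include <stdint.h>

/* Illustrative request/response formats for a toy block-style driver. */
struct demo_request  { uint64_t id; uint64_t sector; uint32_t op; };
struct demo_response { uint64_t id; int32_t  status; };

#define RING_SIZE 32  /* must be a power of two */

struct demo_ring {
    /* Indexes are shared between the domU front end and the dom0 back end;
     * each side only ever writes its own producer index. */
    volatile uint32_t req_prod, req_cons;
    volatile uint32_t rsp_prod, rsp_cons;
    struct demo_request  req[RING_SIZE];
    struct demo_response rsp[RING_SIZE];
};

/* Front end: queue a request without involving the hypervisor at all. */
static int push_request(struct demo_ring *r, const struct demo_request *q)
{
    if (r->req_prod - r->req_cons == RING_SIZE)
        return -1;                       /* ring full */
    r->req[r->req_prod % RING_SIZE] = *q;
    __sync_synchronize();                /* publish the payload first */
    r->req_prod++;                       /* then advance the producer index */
    return 0;
}
```

The back end polls (or is notified via an event) and consumes requests at its own pace, which is what makes the communication asynchronous.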
Why Hardware?
Hardware assistance for virtualization comes in a variety of forms. In particular, AMD and Intel both extended x86 to support virtualization in different (and, of course, incompatible) ways, although they're now slowly converging.
The most important piece of hardware assistance for virtualization is nested page table support, which allows the guest VM to maintain page tables mapping from virtual to pseudo-physical addresses while the hypervisor maintains the mapping from pseudo-physical to physical pages.
If the guest VM just reallocates a page from one virtual address space to another, and the pseudo-physical to physical mapping is still valid, the hypervisor doesn't need to be involved at all in the operation. This is significantly faster than having to do a hypercall to tell Xen to update page tables as PV guests are required to do. It's especially important on x86-64.
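For comparison, here is roughly what the PV path looks like. The sketch below follows the shape of Xen's mmu_update interface (struct mmu_update and the HYPERVISOR_mmu_update hypercall from the public headers); the wrapper declaration and the way the machine address of the page table entry is obtained are simplified for illustration.

```c
#include <stdint.h>

/* Shape of Xen's PV page-table update request: ptr holds the machine
 * address of the PTE to change (low bits encode the request type) and
 * val holds the new PTE contents. */
struct mmu_update {
    uint64_t ptr;
    uint64_t val;
};

/* Assumed hypercall wrapper; in a real PV kernel this goes through the
 * hypercall page rather than an ordinary external function. */
extern int HYPERVISOR_mmu_update(struct mmu_update *req, int count,
                                 int *success_count, unsigned int domid);

#define DOMID_SELF 0x7FF0U  /* "this domain" */

/* A PV guest cannot simply write the PTE: it submits one or more update
 * requests and asks Xen to validate and apply them. */
int pv_set_pte(uint64_t pte_machine_addr, uint64_t new_pte)
{
    struct mmu_update u = { .ptr = pte_machine_addr, .val = new_pte };
    int done = 0;

    return HYPERVISOR_mmu_update(&u, 1, &done, DOMID_SELF);
}
```

With nested page tables, the equivalent operation in the guest is just an ordinary store to its own page table, with no trap into Xen.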
On 32-bit x86, Xen demoted the PV guest kernel from ring 0 to ring 1 and used the segmentation mechanism to protect the hypervisor from the guests. x86-64 removed rings 1 and 2 (leaving ring 0 for kernels and ring 3 for userspace) and eliminated the segmentation mechanism.
This means that on x86-64, the guest kernel and userspace both run in ring 3 and so must be kept in separate address spaces, requiring a complete TLB flush for transitions between them. This is very expensive and is part of the reason why HVM can outperform PV in a number of workloads on x86-64.
There are some other advantages to the hardware mechanism. In particular, there is a hypervisor privilege mode, a conceptual ring -1, below the ring 0 where the OS lives, and a mechanism for fast calls from the guest OS to the hypervisor.
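As a rough sketch of the "fast call" idea: on Intel hardware the guest can trap straight into the hypervisor with a single VMCALL instruction (AMD uses VMMCALL). Real Xen guests call through a hypercall page that the hypervisor fills in with whichever instruction the CPU needs; the register convention shown below (call number in RAX, first argument in RDI) follows the x86-64 Xen hypercall ABI, but the wrapper itself is illustrative.

```c
/* Minimal sketch of a single-argument hypercall issued with VMCALL.
 * A real guest goes via the hypercall page instead, so the same binary
 * works on both Intel (VMCALL) and AMD (VMMCALL) hardware. */
static inline unsigned long hvm_hypercall1(unsigned long nr, unsigned long arg1)
{
    unsigned long ret;
    asm volatile("vmcall"
                 : "=a"(ret)            /* result returned in RAX */
                 : "a"(nr), "D"(arg1)   /* call number in RAX, arg1 in RDI */
                 : "memory");
    return ret;
}
```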
The downside of the pure HVM approach is that the hardware doesn't emulate everything, so some parts must be implemented using software emulation. This is typically provided by QEMU in dom0. It implements emulated network and block device drivers, as you'd expect, but it also implements a number of other standard devices that are part of the PC platform. This includes the APIC, which is responsible for interrupt delivery, and various other BIOS services. In fact, an HVM guest needs an entire emulated BIOS, as it can call into BIOS routines at any point. All of this incurs some overhead.
The Simple Hybrid and PVHVM
Most Xen guests that boot in HVM mode actually use a hybrid approach, in which the guest boots as it normally would, but then detects that it is running atop Xen and loads PV drivers. This provides some significant speedups because the devices get most of the performance of the fully paravirtualized versions.
There's still some overhead from the fact that interrupts (and timers) are emulated. There's also the issue that because the emulated BIOS and so on are still required, dom0 must run a QEMU instance for each HVM guest, even though it's not required for very much after the OS boots.
This QEMU instance is part of the reason for the "Windows tax" charged by several Xen hosting providers. It's much harder to do accurate accounting when some of the time is allocated directly to the guest VM, some to the host, and some is consumed within dom0 by the QEMU instance working on the guest's behalf. It's also much easier for the QEMU instances to become a bottleneck.
The PVHVM mode is a slight extension of this; emulated interrupts are replaced with the Xen callback mechanism, where each virtual CPU has an array of function pointers that can be invoked via the upcall mechanism to deliver events. This is, again, faster, but still has the overhead of the emulated BIOS and other motherboard devices.
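A conceptual sketch of that per-vCPU dispatch is shown below. The structure and names are illustrative only, standing in for Xen's real event-channel interface (the shared info page, event bitmaps, and the guest's upcall entry point).

```c
#include <stdint.h>

#define NR_EVENTS 64

typedef void (*event_handler_t)(unsigned port);

/* Per-vCPU event state, as a simplified model: the hypervisor sets a
 * pending bit and makes one upcall; the guest dispatches to whichever
 * handlers it registered for the pending events. */
struct vcpu_events {
    volatile uint64_t pending;            /* one bit per event channel */
    event_handler_t   handler[NR_EVENTS]; /* registered by the guest */
};

/* Called from the guest's upcall entry point for this vCPU. */
void handle_upcall(struct vcpu_events *ev)
{
    uint64_t bits = __atomic_exchange_n(&ev->pending, 0, __ATOMIC_ACQ_REL);
    while (bits) {
        unsigned port = __builtin_ctzll(bits);
        bits &= bits - 1;                 /* clear lowest set bit */
        if (ev->handler[port])
            ev->handler[port](port);
    }
}
```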
What PVH Brings
The first difference between PVH and existing HVM-related virtualization modes is the entry point. A PV guest starts running in a very similar way to an ELF binary on an operating system: It has an entry point at a specific address, it runs as a protected-mode program, and the function at the entry point is passed all the information it needs at boot as a parameter.
In contrast, HVM guests boot in the same way that operating systems have booted on x86 hardware since the original IBM PC: They start as real mode programs and must set up the protected mode environment themselves and query the (emulated) BIOS to find out about the system.
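To make the contrast concrete, here is a rough sketch of the PV-style entry contract: the kernel is entered at a known symbol, already in protected (or long) mode, with a pointer to a start-of-day information structure. The field names below are illustrative stand-ins for Xen's real start_info_t in the public headers.

```c
/* Illustrative stand-in for Xen's start_info_t; only a few representative
 * fields are shown, and the names are not the real ones. */
struct start_info_sketch {
    unsigned long nr_pages;      /* total pages given to this domain */
    unsigned long shared_info;   /* machine address of the shared info page */
    unsigned long store_page;    /* page used for the XenStore ring */
    unsigned long console_page;  /* page used for the console ring */
};

/* The hypervisor jumps here directly: no real mode, no BIOS calls.  The
 * kernel reads everything it needs from the structure it is handed. */
void xen_start_kernel(struct start_info_sketch *si)
{
    unsigned long memory_pages = si->nr_pages;  /* e.g. size physical memory */
    (void)memory_pages;
    /* ...continue with normal early-boot initialization... */
}
```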
Because PVH guests boot via the same mechanism as PV guests, they don't need the QEMU instance for boot. In fact, they don't need it for anything. They work almost exactly like PV guests, with one major exception: a PVH guest runs in ring 0 and has direct control over its page tables.
This has several advantages. One of the biggest efforts when porting an operating system to Xen is the page table management code. In FreeBSD, for example, the platform-specific page table management code lives in the pmap.c file. This is 5,551 lines of code for i386 and 4,496 lines for Xen. The Xen code is far less widely tested, yet is about as complex as the non-Xen version. This code is one of the hardest parts of the OS to get right, and small errors can cause big problems.
In contrast, there is no special pmap code for a PVH guest; it simply uses the normal x86-64 pmap. It does need some support code for interrupts and devices, but the difference in low-level code between native and PVH is closer to the difference between two ARM11 SoCs, whereas the difference between native and full PV is closer to the difference between x86-32 and x86-64.
A New dom0
As well as being easier to implement and faster, PVH brings one additional benefit: It supports running as domain 0. All the other HVM modes require QEMU to run in dom0 to provide some emulated services. Because dom0 (obviously) isn't yet running while dom0 itself boots, there is nowhere for that QEMU instance to run, which prevents those modes from being used in this capacity.
PVH, in contrast, requires no support from anything other than the hypervisor, so it can boot with no other guests running and can take on the responsibilities of dom0. This simplifies the process of bringing up a new dom0-capable VM on platforms that support hardware virtualization (most 64-bit x86 chips).
It doesn't give you dom0 support for free, of course. There's more to dom0 than just being the first VM to boot: You must also provide the XenStore, network, and disk back-end drivers and so on.