- The Partitioning Continuum at a Glance
- nPartitions (Electrically Isolated Hardware Partitions)
- Virtual Partitions (Peak Performance Virtualization)
- HP Integrity Virtual Machines (Fully Virtualized Partitioning)
- Secure Resource Partitions (Partitioning Inside a Single Copy of HP-UX)
- Summary
HP Integrity Virtual Machines (Fully Virtualized Partitioning)
The newest solution in HP's partitioning continuum is HP Integrity Virtual Machines, a fully virtualized environment for running applications. You run what is called the VM host on any Integrity system or nPartition. On top of the VM host, you run virtual machines, each of which presents itself to the operating system inside it as a physical server. However, all of the resources of that "server" are virtualized: the physical CPUs, memory, and I/O devices are managed by the VM host, and what the VMs see are virtual resources mapped onto the physical devices in the system. This allows the physical resources to be shared by multiple OS images.
The virtualization provided by Integrity VM is so complete that operating systems run inside the VMs without modification. This means that any operating system supported on Integrity hardware will run inside a VM: HP-UX is supported initially, and future versions will support unmodified versions of Linux, Windows, and OpenVMS.
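For a sense of how this looks in practice, the sketch below creates and boots a VM with the Integrity VM commands. The commands themselves (hpvmcreate, hpvmstart, hpvmconsole) are part of the product, but the VM name and resource sizes here are hypothetical, and exact flag syntax may vary between releases; check the man pages for your version.

```
# Create a virtual machine named vm1 with 2 virtual CPUs and 2GB of memory
# (the name and sizes are illustrative)
hpvmcreate -P vm1 -c 2 -r 2G

# Boot the VM; the guest OS then boots normally inside it
hpvmstart -P vm1

# Attach to the VM's virtual console to interact with the guest
hpvmconsole -P vm1
```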
Features
The major features of Integrity VM include:
- OS isolation: Each VM runs its own full copy of the operating system, which means the OS can be patched and tuned specifically for the applications that will run there.
- Sub-CPU or whole-CPU granularity: Since the system is virtualized, each virtual CPU inside a VM can represent a portion of a CPU or a whole CPU on the physical system.
- Differentiated CPU controls: You can give specific VMs differentiated access to the physical CPUs by defining a CPU entitlement for each VM. For example, you can assign one four-CPU VM 50% of four physical CPUs, another VM 25%, and a third 25%, as shown in the sketch after this list.
- I/O device sharing: Integrity VM provides fully virtualized I/O, which means multiple virtual SCSI cards can represent a single physical SCSI or fibre channel card.
- Supports HP-UX initially, and will eventually support Linux, Windows, and OpenVMS: Because the system is fully virtualized, it is possible to run any of the operating systems that are supported on the Integrity platform inside a VM.
- Support for the full line of Integrity servers: Systems from 1 to 128 processors are supported for use with Integrity VM.
- Software-fault isolation: Software faults in one VM can't impact other VMs.
- Security isolation: It is not possible for a process inside a VM to access the memory or devices allocated to another VM.
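As a sketch of the differentiated CPU controls mentioned above, the commands below create three VMs with the 50%/25%/25% entitlements from that example. The -e entitlement flag follows the hpvmcreate documentation, but the VM names and sizes are hypothetical and syntax may differ by release.

```
# Guarantee vm1 50% of the physical CPUs backing its virtual CPUs,
# and vm2/vm3 25% each (names and sizes are illustrative)
hpvmcreate -P vm1 -c 4 -r 4G -e 50
hpvmcreate -P vm2 -c 4 -r 4G -e 25
hpvmcreate -P vm3 -c 4 -r 4G -e 25
```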
High-Level Architecture
HP Integrity VM is implemented by running what is called the VM host on top of the hardware rather than running a standard operating system. Figure 2-13 shows the high-level architecture of a system running Integrity VM.
Figure 2-13 High-Level Architecture of Integrity VM
The VM host runs directly on the hardware and boots the various VMs. Each VM runs its own copy of an operating system, which boots normally and starts whatever application workloads are intended to run in that VM.
Resource Virtualization
Each VM has a set of virtual CPUs, a block of memory, and a set of virtual I/O interface cards. The VM host maps each of these resources to the physical resources available on the system.
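Pulling those three resource types together, a single hpvmcreate invocation can specify virtual CPUs, memory, and virtual I/O devices at once. In this hedged sketch the disk device path and virtual switch name are hypothetical; the -a resource syntax follows the pattern documented for Integrity VM, but verify it against your release.

```
# Create vm1 with two virtual CPUs, 4GB of guest memory, a virtual SCSI
# disk backed by a host disk, and a virtual NIC on virtual switch vsw1
# (device path and vswitch name are illustrative)
hpvmcreate -P vm1 -c 2 -r 4G \
    -a disk:scsi::disk:/dev/rdsk/c4t1d0 \
    -a network:lan::vswitch:vsw1
```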
CPU Virtualization
The physical CPUs on the system are shared by the VMs, each of which has one or more virtual CPUs. It is possible to have more virtual CPUs inside the VMs than there are physical CPUs in the system; in fact, this is desirable, because you get better utilization of the physical CPUs when several virtual CPUs share each physical CPU. However, a single VM cannot have more virtual CPUs than there are physical CPUs in the system. Figure 2-14 shows how the VM host manages physical CPU allocation to virtual CPUs.
Figure 2-14 CPU-Sharing by Integrity VMs
As you can see from Figure 2-14, a specialized VM scheduler is built into the VM host. It allows you to specify how much of a physical CPU each virtual CPU should be able to consume. In the example in the figure, each virtual CPU in the VMs on the right is guaranteed a minimum of half a physical CPU. A VM may get more if other VMs are not consuming the resources, but it can never get less than its entitlement as long as it has processes that can use it.
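Entitlements can also be adjusted on an existing VM. A minimal sketch, assuming hpvmmodify accepts the same -e entitlement flag as hpvmcreate (the VM name is illustrative):

```
# Raise vm1's guaranteed minimum to 50% of a physical CPU per virtual CPU
hpvmmodify -P vm1 -e 50

# Review the resulting configuration across all VMs
hpvmstatus
```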
Another interesting thing to note here is that there are five virtual CPUs in the VMs in this figure but only four physical CPUs, which means that some virtual CPUs must share physical CPUs while others could get dedicated ones. Left alone, this would produce an interesting phenomenon in a VM with multiple virtual CPUs: the various CPUs could run at dramatically different speeds. To prevent this, the scheduler in the VM host randomly shuffles the virtual CPUs across the physical CPUs to ensure that all the virtual CPUs get equal access to the physical CPU resources. The exception is when you have explicitly said that you want them to be different; in that case, the virtual CPUs are still shuffled, but the CPU ticks are allocated in the ratio you assigned. One reason this matters: if you are running multiple workloads inside a VM and want to encapsulate those workloads inside Secure Resource Partitions, the virtual CPUs must all run at approximately the same speed for the resource allocation to be accurate.
Memory Virtualization
Integrity VM provides virtualized physical memory: a block of physical memory is allocated to each VM and presented to the OS running inside the VM as if it were all the physical memory available. This is similar in concept to how CPUs are virtualized, where the OS sees four CPUs and doesn't realize they are virtual CPUs that may be sharing a set of physical CPUs. With memory, if you allocate 4GB to a VM, the OS appears to be running on a system with 4GB of physical memory, even though it may really be running on a system with 128GB.
In the first release of Integrity VM, memory is not shared by the different VMs. The reason is that allowing the VMs to share memory would, in effect, require a second layer of virtual memory and swapping in the VM host, on top of the swapping already being done by the OS images running inside the VMs. This double swapping could have a significant performance impact on the applications running in the VMs. The issue isn't insurmountable, but it is not a first-release feature. The bottom line is that, with the first release of Integrity VM, you will need to ensure that there is enough physical memory in the system to satisfy the needs of all the workloads.
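The practical consequence is simple arithmetic: the host's physical memory must cover the sum of all VM allocations plus the VM host's own overhead. A hedged sketch of such planning (the overhead figure is an assumption for illustration, not a documented number):

```
# On a host with 16GB of physical RAM, plan roughly like this:
#   vm1: 4GB + vm2: 4GB + vm3: 6GB = 14GB for guests,
#   leaving ~2GB for the VM host itself (overhead figure is an assumption)
hpvmcreate -P vm1 -c 2 -r 4G
hpvmcreate -P vm2 -c 2 -r 4G
hpvmcreate -P vm3 -c 4 -r 6G
```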
I/O Virtualization
Integrity VM provides I/O virtualization by allowing users to define virtual I/O interface cards inside each VM, which are then mapped to one or more physical cards managed by the VM host. Although the interface cards in a VM appear to the OS as standard SCSI interface cards, they can be mapped to any SCSI or fibre channel interface card on the system. There is no performance penalty for running fibre channel in the VM host behind virtual SCSI in the VM; the SCSI interfaces in the VMs just run really fast!
Figure 2-15 shows graphically how the I/O packet translation is done in the VM host. Each VM has at least one virtual SCSI interface; these are mapped to physical interface cards on the system.
Figure 2-15 Physical I/O Cards Can Be Shared or Dedicated to Virtual I/O Interfaces in the VMs
In this diagram, the two VMs on the left share one physical I/O interface card, while the VM on the right has a dedicated physical interface mapped to its virtual interface. This means you can dedicate all of the bandwidth of a physical interface card to a single VM if desired. The interface presented to the VM is still virtual, and packet translation is still done in the VM host, but there is no competition for the bandwidth of the physical interface, which can provide significant performance benefits.
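In configuration terms, sharing versus dedicating comes down to whether two VMs' virtual devices are backed by the same physical card. A hedged sketch with hypothetical device paths:

```
# vm1 and vm2 share the physical card behind the c4 devices
hpvmmodify -P vm1 -a disk:scsi::disk:/dev/rdsk/c4t0d0
hpvmmodify -P vm2 -a disk:scsi::disk:/dev/rdsk/c4t0d1

# vm3 gets a disk on a different card (the c6 path), so it has that
# card's bandwidth to itself
hpvmmodify -P vm3 -a disk:scsi::disk:/dev/rdsk/c6t0d0
```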
Integrity VM also supports technologies like Auto Port Aggregation (APA) in the VM host. APA allows you to configure multiple physical network interfaces to act as one, providing increased bandwidth as well as higher availability, because no single physical interface is a single point of failure. For example, you can build an aggregate of ethernet interfaces with enough bandwidth to support a large number of VMs and then give each VM a virtual LAN interface that shares the aggregate's bandwidth. Even if one of the physical interfaces fails, the others continue to function, ensuring that mission-critical applications can continue to run uninterrupted while the failing device is diagnosed and repaired.
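The hedged sketch below builds a virtual switch on top of an APA link aggregate and attaches each VM to it. APA aggregates typically appear as lan900-series interfaces on HP-UX, but the aggregate number, switch name, and VM names here are hypothetical, and hpvmnet option syntax should be verified against your release.

```
# Create a virtual switch over APA aggregate lan900, then start it
hpvmnet -c -S vsw1 -n 900
hpvmnet -b -S vsw1

# Give each VM a virtual NIC on the shared, highly available aggregate
hpvmmodify -P vm1 -a network:lan::vswitch:vsw1
hpvmmodify -P vm2 -a network:lan::vswitch:vsw1
```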
Security Isolation
In most operating systems, there are two modes of operation: the kernel runs in "privileged mode" and processes run in "unprivileged mode." This ensures that no process running on the system can execute privileged operations unless the kernel approves them. The Itanium architecture provides four privilege levels, with ring 0 being the most privileged and ring 3 the least privileged. Only code running in ring 0 can perform privileged operations such as enabling or disabling system interrupts or managing virtual memory translations. If a process attempts to execute a privileged operation (which requires ring 0 privilege), the kernel is notified and can respond appropriately, typically by shutting down the offending process. Figure 2-16 shows these privilege levels as concentric rings.
Figure 2-16 The Itanium Architecture Supports Four Privilege Rings
When a single operating system is running on an Itanium platform, the kernel typically runs in ring 0 and the remaining processes on the system run in ring 3. Rings 1 and 2 go unused, and this is the key Itanium feature that Integrity VM exploits. Figure 2-17 shows how Integrity VM is able to run multiple kernels and ensure that none of them can interfere with the others.
Figure 2-17 Integrity VM Runs the VM Host at Ring 0 and Each of the VM Kernels at Ring 1
With Integrity VM, the VM host is the only software that runs at ring 0. The VM kernels run at ring 1 but are tricked into thinking they are running at ring 0, a technique sometimes called "ring compression." Any privileged-mode operation executed by a kernel inside a VM is trapped and performed by the VM host on behalf of the VM. In this way, the host ensures that even if the security of one VM is compromised, nothing can be done on that VM to affect any of the other VMs.