Automating Virtualization
- 7.1 Oracle Enterprise Manager Ops Center
- 7.2 OpenStack
- 7.3 Summary
Learn how virtualization management tools Oracle Enterprise Manager Ops Center and OpenStack can facilitate the process of automating virtualization.
Early computers were expensive, prompting their owners to squeeze all possible value out of them. This drive led to the introduction of time-sharing operating systems, which allow many workloads to run at the same time. As per-unit costs dropped, single-user, single-workload operating systems became popular, but their adoption created the mindset of “one workload per computer,” even on servers. The result was an explosion of under-utilized servers. The high cost of maintaining so many servers led to the widespread embrace of virtualization, with the goal of reducing the number of servers an organization owns. Consolidation via virtualization may have reduced a company’s hardware acquisition costs, but it did nothing to reduce its maintenance costs: managing VEs one at a time is no easier than managing servers one at a time.
Many virtualization management tools on the market can facilitate the process of automating virtualization. This chapter discusses two of them: Oracle Enterprise Manager Ops Center and OpenStack.
7.1 Oracle Enterprise Manager Ops Center
Oracle Enterprise Manager Ops Center 12c is part of the broader Oracle Enterprise Manager product. Whereas Enterprise Manager Cloud Control focuses on the higher end of the stack (i.e., database, middleware, and applications), Ops Center addresses the lower end (i.e., storage, operating systems, hardware, and virtualization).
Ops Center is designed for full life-cycle management of the infrastructure layer, which includes both Oracle hardware and operating systems. From a hardware perspective, it is capable of functions such as the following:
Discovering new and existing hardware
Upgrading server firmware
Installing the “bare metal” operating system
Monitoring hardware components and opening service requests automatically if a hardware fault occurs
Providing console access to the system
Performing other management actions, such as powering the system on and off and setting locator lights
Paramount in Ops Center’s functionality portfolio is management of the two primary virtualization technologies: Oracle Solaris Zones (including Kernel Zones) and Oracle VM Server for SPARC. Provisioning virtual environments (VEs) of those types includes performing any required preparation of the hardware and operating system.
7.1.1 Architecture
The architecture of Ops Center consists of three main sections:
Enterprise Controller: The main server component of Ops Center. The enterprise controller delivers the user interface and stores the enterprise-wide configuration information. An organization that uses Ops Center will have at least one enterprise controller system, which provides communication back to Oracle for service requests, automated patch and firmware downloads, contract validation, and other activities. In addition, many disaster recovery sites include their own enterprise controller so that operations management can continue, if needed, during service outages that affect the rest of the environment.
Proxy Controller: The component that communicates with the managed assets, including hardware, operating system, storage, and virtualized assets. If all of the systems being managed by Ops Center are in one data center, only one proxy controller is needed, and it can run on the same server as the enterprise controller. Alternatively, you can install multiple proxy controllers per enterprise controller. Standard configurations use one or more proxy controllers per data center to extend the reach of the Ops Center environment to other data centers, networks, or DMZs.
Agent: A proxy controller typically manages deployed software components via a software agent installed on the managed system. When an agent is not appropriate, an operating system can be managed without one. The Ops Center agent supports Solaris 8, 9, 10, and 11.
Figure 7.1 depicts the Ops Center architecture.
Figure 7.1 Ops Center Architecture
7.1.2 Virtualization Controllers
The Ops Center administrator can choose from two types of virtual environments. One type uses Solaris Zones; this type is simply called a global zone. The other type is a control domain, whose name refers to its use of OVM Server for SPARC. All systems that can be managed by Ops Center can be the global zone type. On modern SPARC systems, you can choose either a control domain or a global zone.
After you make that choice, Ops Center deploys the appropriate type of agent software in the management space, either the computer’s control domain or global zone. This agent is called the virtualization controller (VC). Once its installation is complete, you can create the appropriate type of VEs on that server: logical domains for a control domain, or Solaris Zones for a global zone.
7.1.3 Control Domains
Control domains (CDoms) manage Oracle VM Server for SPARC logical domains (LDoms) on a computer. When you use Ops Center to provision a CDom, you choose the operating system, the CDom hardware configuration (RAM, cores, and I/O), and the names of the virtual services provided to other domains: virtual disk services, network services, and console services. You can also initialize advanced Solaris network features, such as link aggregation, for improved redundancy and performance. Advanced configurations, such as SR-IOV, service domains, and root complex domains, are also supported.
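To make the virtual-service names concrete, the sketch below shows the standard OVM Server for SPARC commands that create a disk service, a virtual switch, and a console service in the control domain. Ops Center issues the equivalent operations for you; the Python wrapper and its parameter names are a hypothetical illustration, not an Ops Center API.

```python
# Sketch: express a CDom profile's named virtual services as the standard
# `ldm` commands that create them in the control domain ("primary").
# The function and its defaults are illustrative assumptions.

def cdom_service_commands(disk_svc, net_svc, console_svc,
                          net_dev="net0", console_ports="5000-5100"):
    """Return the `ldm` commands that create the named virtual services."""
    return [
        f"ldm add-vds {disk_svc} primary",                        # virtual disk service
        f"ldm add-vsw net-dev={net_dev} {net_svc} primary",       # virtual switch
        f"ldm add-vcc port-range={console_ports} {console_svc} primary",  # consoles
    ]

for cmd in cdom_service_commands("primary-vds0", "primary-vsw0", "primary-vcc0"):
    print(cmd)
```

The service names (here `primary-vds0` and so on) are exactly what guest domains later reference when their virtual disks, network interfaces, and consoles are defined.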
Once the CDom is provisioned, the Ops Center user can begin building guests. Each guest must boot from either a virtual or a physical disk. Using virtual disks provides the greatest flexibility at a minimal performance cost. Virtual disks can reside on a number of physical media available to the CDom:
A local file system
A local disk
An iSCSI LUN
A NAS file system
A FibreChannel LUN
When creating LDom guests, the Ops Center user creates one or more logical domain profiles that define the make-up of the guest:
Name, CPU, core, and memory allocation
Full core or vCPU allocation
Architecture of the CPUs
Networks
Storage (local, iSCSI, NAS, FC)
When this information is combined with an operating system provisioning profile, the user can quickly create and provision one or more LDom guests by supplying only a small amount of information, such as an IP address.
Further, the user can create a deployment plan to create multiple LDom guests with a single flow through the Ops Center user interface. Once created, a deployment plan makes it easy to quickly create a large number of VEs, each ready to run a workload. Each of these guests includes all of the configuration details of the library image that was deployed, ensuring similarity for applications.
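The idea behind a deployment plan can be sketched as follows: the LDom profile and OS provisioning profile fix everything the guests have in common, so each new guest needs only its per-guest specifics. The data structures below are hypothetical simplifications, not Ops Center's actual plan format.

```python
# Illustrative sketch (not the Ops Center API): a deployment plan pairs a
# logical domain profile with an OS provisioning profile, so creating N
# similar guests needs only per-guest details such as a name and IP address.
from dataclasses import dataclass

@dataclass
class LDomProfile:
    cores: int
    memory_gb: int
    network: str
    storage: str           # e.g. "iSCSI", "NAS", "FC", or local

@dataclass
class OSProfile:
    image: str             # OS image held in a storage library

def deploy(ldom_profile, os_profile, guests):
    """Expand a plan into one fully specified guest per (name, ip) entry."""
    return [
        {"name": name, "ip": ip,
         "cores": ldom_profile.cores, "memory_gb": ldom_profile.memory_gb,
         "network": ldom_profile.network, "storage": ldom_profile.storage,
         "image": os_profile.image}
        for name, ip in guests
    ]

plan = deploy(LDomProfile(4, 32, "prod-net", "iSCSI"), OSProfile("sol11-image"),
              [("ldom1", "10.0.0.11"), ("ldom2", "10.0.0.12")])
```

Because every guest is expanded from the same pair of profiles, the resulting VEs are identical except for the per-guest fields, which is what ensures similarity for applications.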
7.1.4 Global Zones
Global zones can host applications, Solaris Zones, or any combination of the two. Within the context of Ops Center, for a logical domain to host zones, the “global zone” agent must be installed in the LDom.
The Ops Center user may create a Solaris Zone profile that defines how zones will be created. Configuration options include the following:
Dedicated or shared memory and CPU resources
Type of zone (native or branded)
Source of installation (e.g., operating system archive or network-based package source)
Storage configuration (FC, iSCSI, or local disk)
IP/Network configuration (exclusive or shared)
DNS/Naming Services
Time zone
Root and administration passwords
Again, the user can create a deployment plan, based on a zone profile, to create multiple similar zones.
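A few of the profile choices listed above map directly onto standard Solaris zone configuration properties. The sketch below translates a small, hypothetical profile dictionary into a `zonecfg` command script; the property names (`ip-type`, `dedicated-cpu`, `capped-memory`) are standard Solaris ones, while the profile structure itself is an assumption for illustration.

```python
# Sketch only: map a few Solaris Zone profile choices onto the zonecfg
# properties they correspond to. The profile dict is hypothetical.

def zonecfg_script(name, profile):
    lines = ["create", f"set zonepath=/zones/{name}"]
    # Exclusive vs. shared IP configuration
    lines.append(f"set ip-type={profile['ip_type']}")   # "exclusive" or "shared"
    if "dedicated_cpus" in profile:                     # dedicated CPU resources
        lines += ["add dedicated-cpu",
                  f"set ncpus={profile['dedicated_cpus']}", "end"]
    if "memory_cap" in profile:                         # capped (shared) memory
        lines += ["add capped-memory",
                  f"set physical={profile['memory_cap']}", "end"]
    return "; ".join(lines)

print(zonecfg_script("webzone",
                     {"ip_type": "exclusive", "dedicated_cpus": 4,
                      "memory_cap": "8g"}))
```

A deployment plan based on such a profile would apply the same settings to every zone it creates, varying only the zone name and network identity.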
7.1.5 Storage Libraries
Ops Center tracks which LUNs and file systems are allocated to which guests, and ensures that no two guests access the same LUN simultaneously. This constraint applies both to environments created with Ops Center and to existing environments that are discovered by, and integrated into, Ops Center.
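The exclusivity rule described above amounts to simple bookkeeping: record which guest holds each LUN and refuse a second simultaneous claim. Ops Center's real implementation is internal; this minimal sketch only illustrates the rule.

```python
# Minimal sketch of LUN exclusivity tracking (not Ops Center's implementation):
# each LUN may be held by at most one guest at a time.

class StorageLibrary:
    def __init__(self):
        self._owner = {}          # LUN id -> guest name

    def allocate(self, lun, guest):
        holder = self._owner.get(lun)
        if holder is not None and holder != guest:
            raise ValueError(f"LUN {lun} already in use by {holder}")
        self._owner[lun] = guest

    def release(self, lun):
        self._owner.pop(lun, None)

lib = StorageLibrary()
lib.allocate("lun0", "ldom1")
# A second claim, lib.allocate("lun0", "ldom2"), would raise ValueError
# until ldom1 releases the LUN.
```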
Ops Center manages this storage by using an underlying storage concept called storage libraries. Storage libraries are shared storage that is used for VEs, either for boot or data storage. Three types of storage can be used for storage libraries:
NAS
A static library, using LUNs created ahead of time:
  FibreChannel
  iSCSI
A dynamic library, using a ZFS storage appliance to create LUNs as needed
7.1.6 Server Pools
Ops Center includes another virtualization feature that greatly enhances the automation, mobility, and recoverability of both LDom and zone environments: the server pool. A server pool is a collection of similar virtualization hosts. It can be a group of global zones or of LDom hosts (CDoms), but not both types. A server pool of Solaris Zones must consist of servers with the same CPU technology, either SPARC or x86.
For a control domain server pool, Ops Center manages the placement of LDoms onto physical computers using its own rules, guided by configuration information that the user provides and by the current load on those computers. Ops Center can also periodically rebalance the load among the servers in the pool.
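The general shape of such a placement decision can be sketched as follows: among the pool members that can satisfy the guest's resource request, pick the least-loaded one. Ops Center's actual rules and load metrics are internal; every name and threshold below is an illustrative assumption.

```python
# Hypothetical placement sketch: choose the pool member with the most free
# capacity that can still satisfy the guest's resource request.

def place_guest(hosts, needed_cores, needed_gb):
    """hosts: list of dicts with name, free_cores, free_gb. Returns a name."""
    candidates = [h for h in hosts
                  if h["free_cores"] >= needed_cores and h["free_gb"] >= needed_gb]
    if not candidates:
        raise RuntimeError("no pool member can host this guest")
    # Prefer the least-loaded candidate: most free cores, then most free memory.
    best = max(candidates, key=lambda h: (h["free_cores"], h["free_gb"]))
    return best["name"]

pool = [{"name": "cdom1", "free_cores": 2, "free_gb": 16},
        {"name": "cdom2", "free_cores": 8, "free_gb": 64}]
```

Periodic rebalancing is then a matter of re-running a decision like this for guests on overloaded members and migrating them to the chosen destinations.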
A global zone server pool is treated the same way: Ops Center runs the zones in the servers, or LDoms, according to its rules and configuration information.
A server pool consists of the following components:
Similar virtualization hosts
Shared storage libraries (FC, NAS, iSCSI)
Shared networks
A small NAS share used to store guest metadata
The metadata comprises all of the information and resources for the guest. It is used for both migration and recovery scenarios.
Server pools enable two main mobility and recoverability features to be used in conjunction with virtualization—migration and automatic recovery.
7.1.7 Migration
Guests can migrate between hosts within a server pool. Depending on the underlying virtualization technology, this migration will be either “live” or “cold.” In a live migration, the guest VE is moved to the destination without any interruption of the VE’s operation. In contrast, a cold migration requires stopping the guest and restarting it on another host in the pool. Ops Center provides a simple way to automate the safe migration of guests from the central browser interface: before initiating the migration itself, it performs preflight checks to ensure that the guest can migrate and that the migration will succeed.
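The value of a preflight phase is that prerequisites are verified before the guest is touched. The specific checks below are illustrative assumptions, not Ops Center's actual list, but they show the pattern: collect every problem, and proceed only if none are found.

```python
# Sketch of the preflight idea: verify migration prerequisites up front.
# The checks shown are assumptions standing in for Ops Center's internal list.

def preflight(guest, src, dst):
    problems = []
    if dst["cpu_arch"] != src["cpu_arch"]:
        problems.append("CPU architecture mismatch")
    if dst["free_gb"] < guest["memory_gb"]:
        problems.append("destination lacks free memory")
    if not guest["storage_shared"]:
        problems.append("guest storage is not on a shared library")
    return problems            # an empty list means the migration may proceed

issues = preflight({"memory_gb": 16, "storage_shared": True},
                   {"cpu_arch": "sparc"},
                   {"cpu_arch": "sparc", "free_gb": 64})
```

Reporting every failed check at once, rather than stopping at the first, lets an administrator fix all of the blockers before retrying.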
7.1.8 Automatic Recovery
Automatic recovery resolves a software or hardware failure without any user interaction. In the event of a server failure, each guest that had been running on that member of the pool is automatically restarted on a remaining, healthy host in the pool.
For example, in a pool of five servers, imagine that Server 1 suffers a hardware fault and stops responding. Ops Center will restart the guests that had been running on Server 1 on the remaining servers in the pool. It uses internal algorithms to determine which hosts are healthy and have sufficient resources, and it uses placement rules provided when the pool was constructed to select the host on which each guest is restarted.
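The recovery flow in that example can be sketched as a loop over the failed host's guests, placing each one on a healthy member with room for it. The health checks and placement rules here are assumptions standing in for Ops Center's internals.

```python
# Sketch of automatic recovery (not Ops Center's implementation): restart each
# guest of a failed pool member on a healthy member with spare capacity.

def recover(failed_host, guests_by_host, healthy):
    """Reassign guests of failed_host to healthy hosts; returns {guest: host}."""
    placements = {}
    for guest, cores in guests_by_host.get(failed_host, {}).items():
        # Pick the healthy host with the most spare cores that fits the guest.
        host = max((h for h in healthy if h["free_cores"] >= cores),
                   key=lambda h: h["free_cores"], default=None)
        if host is None:
            continue                  # guest stays down: no capacity anywhere
        host["free_cores"] -= cores   # reserve capacity for this guest
        placements[guest] = host["name"]
    return placements
```

Note that capacity is debited as each guest is placed, so a burst of restarts cannot oversubscribe a single surviving host.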
7.1.9 Layered Virtualization
Ops Center supports, and helps automate, a popular “layered” virtualization configuration, in which one layer of virtualization runs on top of another.
The pool administrator can create a CDom server pool in which multiple LDoms are part of the pool. You can then use Ops Center to create multiple zones in one or more LDoms (see Figure 7.2). With layered virtualization, migration and automatic recovery are handled at the LDom layer rather than at the zone level.
Figure 7.2 Layered Virtualization
When you live migrate an LDom that has zones, the zones migrate with the LDom and do not experience any downtime. When an LDom is automatically recovered, its zones are also recovered and restarted automatically.
7.1.10 Summary
Virtualization technologies enable efficient consolidation, but require efficient management tools. Data center staff can use Oracle Enterprise Manager Ops Center to easily manage hundreds or thousands of VEs in multiple data centers, leveraging its efficient architecture to provision, monitor, and manage those guest VEs.