7.2 OpenStack
A structured implementation of a private cloud would benefit from well-defined services, which are consumed by the virtual environments that self-service users deploy. One popular implementation of those services, along with the management tools necessary to deploy and use a private cloud, is OpenStack. The following subsections describe OpenStack briefly, and then discuss the integration of Oracle Solaris and OpenStack.
7.2.1 What Is OpenStack?
OpenStack is a community-based open-source project that forms a comprehensive management layer for creating and managing private clouds. The project started in 2010 as a joint effort of Rackspace and NASA and is now driven by the OpenStack Foundation. Since then it has been one of the fastest-growing open-source projects worldwide, with thousands of commercial and individual contributors across the globe. The community issues two OpenStack releases per year.
OpenStack can be considered an operating system for cloud environments. It provides the foundation for Infrastructure as a Service (IaaS) clouds. Some new modules add features required in Platform as a Service (PaaS) clouds. OpenStack should not be viewed as layered software, however, but rather as an integrated infrastructure component. Thus, although the OpenStack community launches OpenStack releases, infrastructure vendors must integrate the open-source components into their own platforms to deliver the OpenStack functionality. Several operating system, network, and storage vendors offer OpenStack-enabled products.
OpenStack abstracts compute, network, and storage resources for the user and exposes them through a web portal with a single management pane. This integrated approach enables administrators to easily manage a variety of storage devices and hypervisors. The cloud services are provided by a series of OpenStack modules, which communicate with one another through well-defined RESTful APIs.
If a vendor plans to offer support for certain OpenStack services in its products, it must implement the functionality of those services and provide access to the functionality through the REST APIs. This can be done by delivering a service plugin, specialized for the product, that fills the gap between the REST API definition and the existing product feature.
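As a brief illustration of these REST interfaces, the following sketch requests an authentication token directly from the Keystone identity service with curl. The controller host name, port, user, and password are placeholders, and the API version offered depends on the Keystone configuration; on success the token is returned in the X-Subject-Token response header.

$ curl -si http://controller:5000/v3/auth/tokens \
    -H "Content-Type: application/json" \
    -d '{"auth": {"identity": {"methods": ["password"],
         "password": {"user": {"name": "admin",
           "domain": {"name": "Default"},
           "password": "secret"}}}}}'
# (host, user, and password above are placeholder values)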
7.2.2 The OpenStack General Architecture
Figure 7.3 depicts the general architecture of an OpenStack deployment. It consists of services provided by the OpenStack framework, and compute nodes that consume those services. This section describes those services.
Figure 7.3 OpenStack Architecture
Several OpenStack services are used to form an OpenStack-based private cloud. The services are interconnected via the REST APIs and depend on each other. Not all services are needed to form a cloud, however, and not every vendor delivers all services. Some services have a special purpose and are configured only when appropriate; others are always needed when setting up a private cloud.
Because of the clearly defined REST APIs, services are extensible. The following list summarizes the core service modules; a short command-line tour follows the list.
Cinder (block storage): Provides block storage for OpenStack compute instances and manages the creation, attaching, and detaching of block devices to OpenStack instances.
Glance (images): Provides discovery, registration, and delivery services for disk and server images. The stored images can be used as templates for the deployment of VEs.
Heat (orchestration): Enables the orchestration of complete application stacks, based on heat templates.
Horizon (dashboard): Provides the dashboard management tool to access and provision cloud-based resources from a browser-based interface.
Ironic (bare-metal provisioning): Used to provision OpenStack instances on bare metal, that is, directly on physical nodes.
Keystone (authentication and authorization): Provides authentication and high-level authorization for the cloud and between cloud services. It consists of a central directory of users mapped to those cloud services they can access.
Manila (shared file system): Allows the OpenStack instances to access shared file systems in the cloud.
Neutron (network): Manages software-defined network services such as networks, routers, switches, and IP addresses to support multitenancy.
Nova (compute): The primary service that provisions virtual compute environments based on user requirements and available resources.
Swift (object storage): A redundant and scalable storage system, with objects and files stored and managed on disk drives across multiple servers.
Trove (database as a service): Allows users to quickly provision and manage multiple database instances without the burden of handling complex administrative tasks.
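Each of these services ships with a command-line client that talks to its REST API. The following commands give a typical quick tour with the Kilo-era clients; they assume that the usual OpenStack credential variables (OS_AUTH_URL, OS_USERNAME, OS_PASSWORD, OS_TENANT_NAME) are set in the environment.

$ nova list            # compute instances of the current project
$ cinder list          # block storage volumes
$ glance image-list    # images registered with the image service
$ neutron net-list     # virtual networks
$ swift list           # object storage containers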
7.2.3 Oracle Solaris and OpenStack
Oracle Solaris 11 includes a full distribution of OpenStack as a standard, supported part of the platform. The first such release was Oracle Solaris 11.2, which integrated the Havana OpenStack release. The Juno release was integrated into Oracle Solaris 11.2 Support Repository Update (SRU) 6. In Solaris 11.3 SRU 9, the integrated OpenStack software was updated to the Kilo release.
OpenStack services have been tightly integrated into the technology foundations of Oracle Solaris. The integration of OpenStack and Solaris leveraged many new Solaris features that had been designed specifically for cloud environments. Some of the Solaris features integrated into OpenStack include:
Solaris Zones driver integration with Nova to deploy Oracle Solaris Zones and Solaris Kernel Zones
Neutron driver integration with Oracle Solaris network virtualization, including Elastic Virtual Switch
Cinder driver integration with the ZFS file system
Unified Archives integration with Glance image management and Heat orchestration
Bare-metal provisioning implementation using the Oracle Solaris Automated Installer (AI)
Figure 7.4 shows the OpenStack services implemented in Oracle Solaris and the related supporting Oracle Solaris features.
Figure 7.4 OpenStack Services in Oracle Solaris
All services have been integrated into the Solaris Service Management Framework (SMF) to ensure service reliability, automatic service restart, and service dependency management. SMF properties enable additional configuration options. Oracle Solaris Role-Based Access Control (RBAC) ensures that the OpenStack services, represented by their corresponding SMF services, run with minimal privileges.
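The OpenStack services can therefore be inspected and controlled with the usual SMF tools. The abbreviated service names shown below follow the form delivered by the Solaris OpenStack packages; the exact FMRIs may vary between releases.

# svcs -a | grep openstack          # list the OpenStack SMF services and their states
# svcadm restart nova-compute       # restart a single service; SMF handles its dependents
# svccfg -s nova-compute listprop   # show the SMF properties used for additional configuration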
The OpenStack modules are delivered in separate Oracle Solaris packages, as shown in this example generated in Solaris 11.3:
# pkg list -af | grep openstack
cloud/openstack                    0.2015.2.2-0.175.3.9.0.2.0   i--
cloud/openstack/cinder             0.2015.2.2-0.175.3.9.0.2.0   i--
cloud/openstack/glance             0.2015.2.2-0.175.3.9.0.2.0   i--
cloud/openstack/heat               0.2015.2.2-0.175.3.9.0.2.0   i--
cloud/openstack/horizon            0.2015.2.2-0.175.3.9.0.2.0   i--
cloud/openstack/ironic             0.2015.2.1-0.175.3.9.0.2.0   i--
cloud/openstack/keystone           0.2015.2.2-0.175.3.9.0.2.0   i--
cloud/openstack/neutron            0.2015.2.2-0.175.3.9.0.2.0   i--
cloud/openstack/nova               0.2015.2.2-0.175.3.9.0.2.0   i--
cloud/openstack/openstack-common   0.2015.2.2-0.175.3.9.0.2.0   i--
cloud/openstack/swift              2.3.2-0.175.3.9.0.2.0        i--
To easily install the whole OpenStack distribution on a system, the cloud/openstack group package may be installed. It automatically installs all of the dependent OpenStack modules and libraries, plus additional packages such as rad, rabbitmq, and mysql.
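For example, a single command installs the complete distribution on a node; IPS then resolves and installs all of the dependencies described above.

# pkg install cloud/openstack    # group package; pulls in all OpenStack modules plus rad, rabbitmq, and mysql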
The integration of OpenStack with the Solaris Image Packaging System (IPS) greatly simplifies updates of OpenStack on a cloud node, through the use of full package dependency checking and rollback. This was accomplished through integration with ZFS boot environments. Through a single update mechanism, an administrator can easily apply the latest software fixes to a system, including the virtual environments.
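A minimal sketch of such an update, assuming the system's publisher already points at a repository containing the newer packages: pkg update typically creates a new boot environment, which is activated at the next reboot and can be rolled back if necessary.

# pkg update          # updates all packages; typically creates a new boot environment
# beadm list          # verify that the new BE is marked active on reboot
# init 6              # reboot into the updated boot environment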
7.2.4 Compute Virtualization with Solaris Zones and Solaris Kernel Zones
Oracle Solaris Zones and Oracle Solaris Kernel Zones are used for OpenStack compute functionality. They provide excellent environments for application workloads and are fast and easy to provision in a cloud environment.
The life cycle of Solaris Zones as compute instances in an OpenStack cloud is controlled by the Solaris Nova driver for Solaris Zones. The instances are deployed by using the Nova command-line interface or by using the Horizon dashboard. To launch an instance, the cloud user selects a flavor, a Glance image, and a Neutron network. Figures 7.5 and 7.6 show the flavors available with Oracle Solaris OpenStack and the launch screen for an OpenStack instance.
Figure 7.5 OpenStack Flavors
Figure 7.6 OpenStack Instance Launch Screen
Oracle Solaris-specific flavor options determine whether a Solaris native zone or a Solaris kernel zone is created. These properties are assigned as extra_specs, which are typically set through the command line. Their keys correspond to zone properties that are normally configured with the zonecfg command and that are supported in OpenStack.
The following keys are supported in both kernel zones and non-global zone flavors:
zonecfg:bootargs
zonecfg:brand
zonecfg:hostid
zonecfg:cpu-arch
The following keys are supported only in non-global zone flavors:
zonecfg:file-mac-profile
zonecfg:fs-allowed
zonecfg:limitpriv
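For example, a custom flavor carrying one of the keys listed above might be created as follows. The flavor name, ID, and sizing values are arbitrary examples.

$ nova flavor-create solaris-kz-custom 20 8192 40 4    # name, ID, RAM (MB), disk (GB), VCPUs
$ nova flavor-key 20 set zonecfg:brand=solaris-kz      # instances of this flavor become kernel zones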
The list of current flavors can be displayed on the command line:
+----+-----------------------------------------+-----------------------------------+
| ID | Name                                    | extra_specs                       |
+----+-----------------------------------------+-----------------------------------+
| 1  | Oracle Solaris kernel zone - tiny       | {u'zonecfg:brand': u'solaris-kz'} |
| 10 | Oracle Solaris non-global zone - xlarge | {u'zonecfg:brand': u'solaris'}    |
| 2  | Oracle Solaris kernel zone - small      | {u'zonecfg:brand': u'solaris-kz'} |
| 3  | Oracle Solaris kernel zone - medium     | {u'zonecfg:brand': u'solaris-kz'} |
| 4  | Oracle Solaris kernel zone - large      | {u'zonecfg:brand': u'solaris-kz'} |
| 5  | Oracle Solaris kernel zone - xlarge     | {u'zonecfg:brand': u'solaris-kz'} |
| 6  | Oracle Solaris non-global zone - tiny   | {u'zonecfg:brand': u'solaris'}    |
| 7  | Oracle Solaris non-global zone - small  | {u'zonecfg:brand': u'solaris'}    |
| 8  | Oracle Solaris non-global zone - medium | {u'zonecfg:brand': u'solaris'}    |
| 9  | Oracle Solaris non-global zone - large  | {u'zonecfg:brand': u'solaris'}    |
+----+-----------------------------------------+-----------------------------------+
The sc_profile key can be modified only from the command line. It specifies a system configuration profile for the flavor, for example to preassign DNS or other system configuration settings to every instance created from that flavor. The following command sets a specific system configuration file for flavor 4 in the list above ("Oracle Solaris kernel zone - large"):
$ nova flavor-key 4 set sc_profile=/system/volatile/profile/sc_profile.xml
Launching an instance initiates the following actions in an OpenStack environment (a sample launch command follows the list):
The Nova scheduler selects a compute node in the cloud that meets the flavor's requirements for hypervisor type, architecture, number of VCPUs, and RAM.
On the chosen compute node, the Solaris Nova implementation will send a request to Cinder to find suitable storage in the cloud that can be used for the new instance’s root file system. It then triggers the creation of a volume in that storage. Additionally, Nova obtains networking information and a network port in the selected network for an instance, by communicating with the Neutron service.
The Cinder volume service delegates the volume creation to the storage device, receives the related storage URI (SURI), and communicates that SURI back to the selected compute node. Typically this volume will reside on a different system from the compute node and will be accessed by the instance using shared storage such as FibreChannel, iSCSI, or NFS.
The Neutron service assigns a Neutron network port to the instance, based on the cloud networking configuration. All instances instantiated by the compute service use an exclusive IP stack instance. Each instance includes an anet resource with its configure-allowed-address property set to false, and its evs and vport properties set to UUIDs supplied by Neutron that represent a particular virtualized switch segment and port.
After the Solaris Zone and OpenStack resources have been configured, the zone is installed and booted, based on the assigned Glance image. This uses Solaris Unified Archives.
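As a sketch of the command that triggers this sequence, the following launches an instance of flavor 2 ("Oracle Solaris kernel zone - small") from the flavor list shown earlier; the image name and network UUID are placeholders for values from your Glance and Neutron setup.

$ neutron net-list                                    # look up the UUID of the tenant network
$ nova boot --flavor 2 --image <solaris-uar-image> \
    --nic net-id=<network-uuid> myinstance01
$ nova list                                           # watch the instance go from BUILD to ACTIVE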
The following example shows a Solaris Zones configuration file, created by OpenStack for an iSCSI Cinder volume as boot volume:
compute-node # zonecfg -z instance-00000008 info
zonename: instance-00000008
brand: solaris
tenant: 740885068ed745c492e55c9e1c688472
anet:
        linkname: net0
        configure-allowed-address: false
        evs: a6365a98-7be1-42ec-88af-b84fa151b5a0
        vport: 8292e26a-5063-4bbb-87aa-7f3d51ff75c0
rootzpool:
        storage: iscsi://st01-sn:3260/target.iqn.1986-03.com.sun:02:...
capped-cpu:
        [ncpus: 1.00]
capped-memory:
        [swap: 1G]
rctl:
        name: zone.cpu-cap
        value: (priv=privileged,limit=100,action=deny)
rctl:
        name: zone.max-swap
        value: (priv=privileged,limit=1073741824,action=deny)
7.2.5 Cloud Networking with Elastic Virtual Switch
OpenStack networking creates virtual networks that interconnect VEs instantiated by the OpenStack compute service (Nova). It also connects these VEs to network services in the cloud, such as DHCP and routing. Neutron provides APIs to create and use multiple networks and to assign multiple VEs to networks, which are themselves assigned to different tenants. Each tenant network is represented in the network layer by an isolated Layer 2 network segment, comparable to VLANs in physical networks. Figure 7.7 shows the relationships among these components.
Figure 7.7 OpenStack Virtual Networking
Subnets are blocks of IPv4 or IPv6 addresses with associated properties such as the default router or name servers. Neutron creates ports in these subnets and assigns them, together with several properties, to virtual machines. The L3 router functionality of Neutron interconnects tenant networks with external networks and enables VEs to access the Internet through source NAT. Floating IP addresses create a static one-to-one mapping from a public IP address on the external network to a private IP address of one VE in the cloud.
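A minimal sketch of how a cloud user builds such a topology with the Kilo-era Neutron client follows; the network names, address range, and the name of the external network are examples.

$ neutron net-create tenant-net
$ neutron subnet-create tenant-net 192.168.66.0/24 --name tenant-subnet \
    --dns-nameserver 192.168.1.1
$ neutron router-create tenant-router
$ neutron router-interface-add tenant-router tenant-subnet
$ neutron router-gateway-set tenant-router external-net   # uplink to the external network (source NAT)
$ neutron floatingip-create external-net                  # allocate a public address for one instance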
Oracle Solaris Zones and Oracle Solaris Kernel Zones, as OpenStack instances, use the Solaris VNIC technology to connect to the tenant networks. All VNICs are connected to physical network interfaces through virtual network switches. If multiple tenants use one physical interface, then multiple virtual switches are created above that physical interface.
If multiple compute nodes have been deployed in one cloud and multiple tenants are used, virtual switches from the same tenant are spread over multiple compute nodes, as shown in Figure 7.8.
Figure 7.8 Virtual Switches
A technology is needed to manage these distributed switches as a single switch. The virtual networks themselves can be built with, for example, VXLAN or VLAN. On Oracle Solaris, the Elastic Virtual Switch (EVS) feature controls the distributed virtual switches: on each compute node, the local virtual switches are managed by EVS so that, across nodes, they form one distributed switch per tenant. EVS in turn is driven by a Neutron plugin, which exposes this functionality to the cloud through the Neutron API.
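On a Solaris node, the administrator can inspect the resulting EVS state directly with the evsadm utility. The subcommands below are a hedged sketch; consult evsadm(1M) for the exact syntax in your release.

# evsadm show-evs      # elastic virtual switches known to the EVS controller
# evsadm show-vport    # virtual ports and the VNICs/instances bound to them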
7.2.6 Cloud Storage with ZFS and COMSTAR
The OpenStack Cinder service provides central management for block storage volumes as boot storage and for application data. To create a volume, the Cinder scheduler selects a storage back-end, based on storage size and storage type requirements, and the Cinder volume service controls the volume creation. The Cinder API then sends the necessary access information back to the cloud.
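From the cloud user's point of view, that flow reduces to a few commands. The volume name and size are examples, and the UUIDs are placeholders.

$ cinder create --display-name data-vol01 10         # request a 10 GB volume; the scheduler picks a back-end
$ cinder list                                         # wait until the volume becomes "available"
$ nova volume-attach <instance-uuid> <volume-uuid>    # attach the volume to a running instance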
Different types of storage can be used to provide storage to the cloud, such as FibreChannel, iSCSI, NFS, or the local disks of the compute nodes. The type used depends on the storage requirements, which include capacity, throughput, latency, and availability, as well as whether local or shared storage is needed. Shared storage is required if migration of OpenStack instances between compute nodes is needed. Local storage may often be sufficient for short-term, ephemeral data. The cloud user is not aware of the storage technology that has been chosen, because the Cinder volume service represents the storage simply as a type of storage, not as a specific storage product.
The Cinder volume service is configured to use an OpenStack storage plugin, which knows the specifics of a storage device, such as how to create a Cinder volume on it and how to access the data.
Multiple Cinder storage plugins are available for Oracle Solaris, which are based on ZFS to provide volumes to the OpenStack instances:
The ZFSVolumeDriver supports the creation of local volumes for use by Nova on the same node as the Cinder volume service. This method is typically applied when using the local disks in compute nodes.
The ZFSISCSIDriver and the ZFSFCDriver support the creation and export of iSCSI and FC targets, respectively, for use by remote Nova compute nodes. COMSTAR allows any Oracle Solaris host to become a storage server, serving block storage via iSCSI or FC.
The ZFSSAISCSIDriver supports the creation and export of iSCSI targets from a remote Oracle ZFS Storage Appliance for use by remote Nova compute nodes.
In addition, other storage plugins can be configured in the Cinder volume service, if the storage vendor has provided the appropriate Cinder storage plugin. For example, the OracleFSFibreChannelDriver enables Oracle FS1 storage to be used in OpenStack clouds to provide FibreChannel volumes.
7.2.7 Sample Deployment Options
The functional enablement of Oracle Solaris for OpenStack rests on two main elements. The first is the availability and support of the OpenStack APIs with various software libraries and plugins in Oracle Solaris. The second is the creation and integration of OpenStack plugins that expose specific Oracle Solaris functions in OpenStack. As discussed earlier, such plugins have been developed and provided for Cinder, Neutron, and Nova, as well as for Ironic.
Deploying an OpenStack-based private cloud with OpenStack for Oracle Solaris is similar to the setup of other OpenStack-based platforms.
The design and setup of the hardware platform (server systems, network and storage) for the cloud are very important. Careful design pays off during the configuration and production phases for the cloud.
Oracle Solaris must be installed on the server systems. The installation of Oracle Solaris OpenStack packages can occur with installation of Solaris—a process that can be automated with the Solaris Automated Installer.
After a storage option has been chosen, the storage node is installed and integrated into the cloud.
The various OpenStack modules must then be configured through their configuration files, yielding a fully functional IaaS private cloud with OpenStack. The OpenStack configuration files are located in the /etc/[cinder, neutron, nova, ..] directories. The final step is the activation of the related SMF services with their dependencies.
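A sketch of that final step, assuming the Cinder volume service and the Nova compute service are to run on this node; the abbreviated service names are examples and depend on which services a node is configured for.

# svcadm enable -r cinder-volume     # -r also enables the services it depends on
# svcadm enable -r nova-compute
# svcs -a | grep openstack           # verify that the services reach the "online" state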
The design of the hardware platform is also very important. Besides the OpenStack software itself, a general cloud architecture managed by OpenStack includes these required parts:
One or multiple compute nodes for the workload.
A cloud network to host the logical networks internal to the cloud. Those networks link together the network ports of the instances, which together form one network broadcast domain. This internal logical network is typically built with VXLAN or tagged VLAN technology.
Storage resources to boot the OpenStack instances and keep application data persistent.
A storage network, if shared storage is used, to connect the shared storage with the compute nodes.
An internal control network, used for OpenStack's internal API messages and to drive the compute, network, and storage parts of the cloud; this network can also be used to manage, install, and monitor all cloud nodes.
A cloud control part, which runs the various OpenStack control services for the cloud, such as the Nova and Cinder schedulers, the Cinder volume service, the MySQL management database, and the RabbitMQ messaging service.
Figure 7.9 shows a general OpenStack cloud, based on a multinode architecture with multiple compute nodes, shared storage, isolated networks and controlled cloud access through a centralized network node.
Figure 7.9 Single Public Network Connection
7.2.8 Single-System Prototype Environment
You can demonstrate an OpenStack environment on a single system. In this case, a single network is used, or multiple networks are created using etherstubs, to form the internal network of the cloud. "Compute nodes" can then be instantiated as kernel zones. However, if you use kernel zones as compute nodes, then OpenStack instances can only be non-global zones, which rules out several features, including Nova migration. This single-node setup can be implemented very easily with Oracle Solaris, using a Unified Archive of a comprehensive OpenStack installation.
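A hedged sketch of the building blocks of such a prototype: an etherstub as the internal cloud "wire" and a kernel zone, installed from a Unified Archive, acting as a compute node. The zone name, etherstub name, and archive path are placeholders, and the zone's anet resource still has to be tied to the etherstub with zonecfg.

# dladm create-etherstub stub0                             # internal wire for the cloud networks
# zonecfg -z compute1 create -t SYSsolaris-kz              # a kernel zone acting as a compute node
# zoneadm -z compute1 install -a /path/to/openstack.uar    # install from a Unified Archive
# zoneadm -z compute1 boot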
Such single-system setups are typically implemented so that users can become familiar with OpenStack or to create very small prototypes. Almost all production deployments will use multiple computers to achieve the availability goals of a cloud.
There is one exception to this guideline: A SPARC system running Oracle Solaris (e.g., SPARC T7-4) can be configured as a multinode environment, using multiple logical domains, connected with internal virtual networks. The result is still a single physical system, which includes multiple isolated Solaris instances, but is represented like a multinode cloud.
7.2.9 Simple Multinode Environment
Creating a multinode OpenStack cloud increases the choices available in all parts of the general cloud architecture. The architect makes the decision between one unified network or separate networks when choosing the design for the cloud network, the internal network, and the storage network. Alternatively, those networks might not be single networks, but rather networks with redundancy features such as IPMP, DLMP, LACP, or MPXIO. All of these technologies are part of Oracle Solaris and can be selected to create the network architecture of the cloud.
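All of these redundancy options map to standard Solaris datalink and IP commands. The following lines are a sketch with example interface names; MPXIO is configured at the storage-multipathing layer and is not shown.

# dladm create-aggr -m trunk -L active -l net0 -l net1 aggr0   # LACP trunk aggregation
# dladm create-aggr -m dlmp -l net2 -l net3 aggr1              # DLMP aggregation, no switch support required
# ipadm create-ip net4
# ipadm create-ip net5
# ipadm create-ipmp -i net4 -i net5 ipmp0                      # IP multipathing group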
Another important decision is how to connect the cloud to the public or corporate network. The general architecture described earlier shows controlled cloud access through a centralized network node. While this setup enforces centralized access to the cloud via a network node, it can also introduce availability and throughput limitations. An alternative setup is a flat cloud, shown in Figure 7.10, in which the compute nodes are directly connected to the public network, so that no single access point limits throughput or availability. It is the responsibility of the cloud architect to decide which option is the most appropriate.
Figure 7.10 Multiple Public Network Connections
For the compute nodes, the choice is between SPARC nodes (SPARC T5, T7, S7, M7, or M10 servers), x86_64 nodes, or a mixed-node cloud that combines both architectures. Oracle Solaris OpenStack handles both processor architectures in one cloud. Typically, compute nodes with one or two sockets and medium memory capacity (for example, 512 GB) are chosen. More generally, by using SPARC systems, compute nodes ranging from very small to very large can be combined in one cloud without any special configuration effort.
The cloud storage is typically shared storage. In a shared storage architecture, the disks storing the running instances are located outside the compute nodes. Cloud instances can then be easily recovered with migration or evacuation in case of compute node downtime. Using shared storage is operationally simple because separating compute hosts from storage makes the compute nodes "stateless": if no instances are running on a compute node, that node can be taken offline and its contents erased completely without affecting the rest of the cloud. This type of storage can be scaled to very large capacities. Storage decisions can be made based on performance, cost, and availability. Among the choices are an Oracle ZFS Storage Appliance, shared storage through a Solaris node acting as an iSCSI or FC target server, or shared storage through a FibreChannel SAN storage system.
With local storage, each compute node's internal disks store all data of the instances that the node hosts. Direct access to local disks is very cost-effective, because there is no need to maintain a separate storage network. The disk performance of each compute node is directly related to the number and performance of its local disks, and the chassis size limits the number of spindles that can be used in a node. However, if a compute node fails, the instances on it cannot be recovered, and there is no way to migrate instances. This can be a major issue for cloud services that create persistent data. Other cloud services, however, only perform processing without storing any local data, in which case no local persistent data is created.
The cloud control plane, implemented as an OpenStack controller, can consist of one or more systems. With Oracle Solaris, the OpenStack controller is typically created in kernel zones to keep the setup modular; scalability on the controller side can then be achieved simply by adding another kernel zone. All OpenStack control services can be combined in one kernel zone or, for scalability and reliability reasons, grouped into separate kernel zones providing the following services (a sample zone creation sketch follows the list):
RabbitMQ
MySQL management database
EVS Controller
Network Node
The remaining OpenStack services
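A sketch of how one such controller zone could be created; the zone name is an example, and each zone would afterwards be configured and have its OpenStack or infrastructure services (for example, RabbitMQ) installed and enabled.

# zonecfg -z osctl-mq create -t SYSsolaris-kz    # dedicated kernel zone, here intended for RabbitMQ
# zoneadm -z osctl-mq install
# zoneadm -z osctl-mq boot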
7.2.10 OpenStack Summary
Running OpenStack on Oracle Solaris provides many advantages. A complete OpenStack distribution is part of the Oracle Solaris repository and is therefore available for Oracle Solaris at no additional cost. The tight integration of the comprehensive Solaris virtualization features for compute and networking (Solaris Zones, virtual NICs and switches, and the Elastic Virtual Switch) provides significant value not found in other OpenStack implementations. The integration of OpenStack with Oracle Solaris also leverages the Image Packaging System, ZFS boot environments, and the Service Management Facility. As a consequence, an administrator can quickly start an update of the cloud environment and update each service and node in a single operation.