- Challenges of Software Integration
- Integrated Software Stacks
- Terminology
- Stacks and System Architectures
- Requirements of Software Integration Architectures
- Software Integration Architectures
- Software Stack Management and Deployment Frameworks
- About the Author
- Acknowledgements
- Ordering Sun Documents
- Accessing Sun Documentation Online
Software Integration Architectures
A software integration architecture, or stack factory, is useful for assembling, maintaining, and deploying complex integrated software stacks. The stack factory is responsible for creating or provisioning the software stack, as well as deploying and binding a software stack onto a target set of hardware components. The hardware target might be taken from a free pool, or appropriated from a project or task that is underutilizing its hardware resources.
The following subsections describe the components and technology that are necessary to the architecture of a stack factory.
Standardized Software Installation Mechanism
It is certainly preferable to have all slabs or software applications provided in a single installation format, such as the Solaris or System V Release 4 (SVR4) package format or the Red Hat Linux package format. In reality, legislating this requirement is nearly useless, and a stack factory must be able to handle multiple package formats, as well as all commonly used installation formats such as tarballs, cpio archives, and shell script installers.
Standardized, Documented, and Recreatable Installations
To meet the requirements of reproducible and standardized installations, the stack factory must be built on an automated installation mechanism. An automated installation mechanism such as JumpStart technology for the Solaris OE or Kickstart for Linux provides the necessary reproducibility and standardization while being self-documenting.
Creating Rapidly Deployable Software Stacks
A mechanism to create and rapidly deploy software stacks must be at the heart of any integration architecture. Solaris Flash technology provides a mechanism by which you can easily clone a reference installation of the Solaris OE. You can then deploy that software stack by a mechanism described in Section 4, "Software Stack Management and Deployment Frameworks," on page 14.
Software stacks are created by cloning the on-disk Solaris OE, including all installed software. The system that the software stack is cloned from is designated as the master machine. The reference installation can be a Solaris OE installed by any means, including JumpStart software, CD, or interactive installation.
To create a cloned software stack, identify the master machine and capture the reference installation in a Solaris Flash archive using the flarcreate(1M) command on the master machine. A central feature of Solaris Flash software, this archive is essentially a point-in-time snapshot of the Solaris OE, software patches, and applications on the master machine.
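For example, a minimal archive creation on the master machine might look like the following sketch (the archive name and output path are placeholders):

```shell
# Create a compressed Solaris Flash archive of the master machine.
# -n sets the archive name, -c compresses the archive, and -x excludes
# the output directory so the archive does not contain itself.
flarcreate -n "ws-stack-v1.0" -c \
    -x /export/flash \
    /export/flash/ws-stack-v1.0.flar
```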
Solaris Flash extensions enable you to install an archive from a Network File System (NFS) server, a Hypertext Transfer Protocol (HTTP) server, or a traditional JumpStart server. Additionally, you can access the archive from a disk device (including CD-ROM) or from a tape device that is local to the installation client. When you install an archive, it is transmitted over the network to the installation client and is written to the disk. After the archive is written to the installation client's disk, any necessary archive modifications are performed. For example, configuration files on the installation client, such as the /etc/nsswitch.conf file, might need to vary from the file on the master machine. The Solaris Flash mechanism enables you to automate modifications and allows for differences in kernel architecture or device differences between the master machine and the installation client.
Additionally, Solaris Flash software enables the automatic resolution of partitioning differences between the master machine and the installation client. For example, if an archive was created on a system with a single root (/) partition, and the installation client has separate / and /var partitions, the archive automatically customizes itself to the installation client. Remember, installation client partitioning must be correctly specified in the JumpStart software profile.
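For example, a JumpStart profile that installs an archive from an NFS server and explicitly specifies the installation client's partitioning might look like the following sketch (the server name, archive path, and slice sizes are illustrative):

```
install_type     flash_install
archive_location nfs jumpstart-server:/export/flash/ws-stack-v1.0.flar
partitioning     explicit
filesys          c0t0d0s0 6144 /
filesys          c0t0d0s1 1024 swap
filesys          c0t0d0s5 4096 /var
```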
A flash archive is a snapshot of a system and, as such, includes all specified files on a system. If an archive is created from a system that is in use, you will need to clean up or zero out some files after the flash archive is installed. Examples of these types of files include log files, such as those found in /var/adm, and any files in the /var/tmp directory.
Modify the finish script to zero out log files after installing the Solaris Flash archive. To exclude temporary directories, such as the /var/tmp directory, exclude the directory when you create the flash archive.
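For example, you might exclude /var/tmp at archive creation time and clear log files from a JumpStart finish script, as in this sketch (the archive name, paths, and log file list are illustrative):

```shell
# On the master machine: exclude temporary directories from the archive.
flarcreate -n "ws-stack-v1.0" -c -x /var/tmp \
    /export/flash/ws-stack-v1.0.flar

# In the JumpStart finish script: zero out log files after the archive
# is installed. During a JumpStart installation, the installation
# client's root file system is mounted on /a.
for log in /a/var/adm/messages /a/var/adm/wtmpx /a/var/adm/utmpx; do
    cp /dev/null "$log"
done
```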
Create the flash archive after installing all software, but before placing the system into production. Depending on the software installed and the intended use for the system, you might need to create the flash archive after installing the software, but before configuring it. For example, you should create archives for database servers or Lightweight Directory Access Protocol (LDAP) servers after installing the database management software, but before creating and populating the databases.
Installing the Solaris OE with a flash archive can be dramatically faster than with other mechanisms, depending on factors such as network traffic and disk speeds.
When selecting a system to be used as the master system and when building the software stack, pay attention to the types of hardware where the stack will be deployed. All software that may be necessary on the installation clients must be contained in the software stack.
For example, consider a Peripheral Component Interconnect (PCI)-based system selected as a master system. Depending on the choices made at installation time, the SBus driver software may not have been installed. Consequently, any resulting software stacks created from this system will not have the SBus drivers available and any SBus hardware will be unavailable to the installation client.
As another example, consider a flash archive created on a Sun Fire™ 15K domain. In most instances, the domain will not have a graphics frame buffer installed, and consequently, no drivers for any graphic frame buffers. This will not prohibit the flash archive from being correctly deployed onto a wide range of platforms. However, if one of those platforms is a Sun Blade™ 1000 workstation, the graphics monitor and frame buffer will not be available due to the lack of frame buffer drivers in the flash archive. To avoid this issue, use one of the following recommendations:
- Ensure that all possible drivers and Solaris OE software that may be needed on any potential client are on the master system (and in the flash archive) or that any missing software is installed from a JumpStart finish script after the flash archive is installed.
- Deploy the software stack to only those systems that are appropriate for that stack.
The first approach, installing all Solaris OE software, is the recommended solution and is easily performed by installing the Entire Distribution plus OEM Software (SUNWCXall) package metacluster, as well as any third-party or specialized device drivers, on the master system.
Further details on the use of Solaris Flash software can be found in the Solaris 9 OE Advanced Installation Guide and the Sun BluePrints book, JumpStart Technology: Effective Use in the Solaris Operating Environment by John S. Howard and Alex Noordergraaf (ISBN 0-13-062154-4).
Safe Upgrades, Including Roll-Back Plans
Live Upgrade (LU) software provides a mechanism for upgrading and managing multiple on-disk copies of the Solaris OE. Using LU, you can upgrade an environment without taking a system down. LU enables you to upgrade and work within multiple on-disk environments and then reboot into the new Solaris OE after you complete the changes to the on-disk software images.
You can also use LU to provide a safe "fall-back" environment for quickly recovering from problems or failures that occur following an upgrade. This fall-back environment helps minimize the risk associated with a software upgrade. Additionally, you can use LU for patch testing and rollout, as well as sidegrades (the large scale reorganization of on-disk OEs).
LU 2.0 was introduced with the Solaris 8 10/01 OE (Update 6). LU 2.0 works with, and can be installed on, all releases of the Solaris OE versions 2.6, 7, and 8.
Upgrade Philosophy
To upgrade to a particular release of the Solaris OE, you must install the version of LU that is bundled with the release of the Solaris OE to which you want to upgrade. Then, use that version of LU to upgrade to the desired release of the Solaris OE.
For example, if you are running the Solaris 2.6 OE and you want to upgrade to the Solaris 9 09/02 OE, you should install LU from the Solaris 9 09/02 OE distribution onto the Solaris 2.6 OE system, and then use the Solaris 9 09/02 OE version of LU to upgrade the Solaris 2.6 system to the Solaris 9 09/02 OE.
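Concretely, this typically means removing any older LU packages and adding the LU packages from the target release's media before upgrading. The package names below (SUNWlur and SUNWluu) are those used on Solaris distribution media, and the mount point is illustrative:

```shell
# Remove any previously installed LU packages.
pkgrm SUNWluu SUNWlur

# Add the LU packages from the Solaris 9 09/02 OE distribution
# (mounted here on /cdrom/cdrom0).
pkgadd -d /cdrom/cdrom0/Solaris_9/Product SUNWlur SUNWluu
```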
The following information explains how to upgrade specific platforms and versions of the Solaris OE:
- For the SPARC™ platform edition of the Solaris OE, you can use LU 2.0 to upgrade from Solaris 2.6 OE and later versions.
- For the Intel platform edition of the Solaris OE, you can use LU 2.0 to upgrade from Solaris 2.7 OE Intel platform edition and later versions.
NOTE
For both the SPARC and Intel platform editions, the minimum supported version of the Solaris OE to which you can upgrade is Solaris 8 01/01 OE (Update 3).
- To upgrade to the Solaris 8 01/01 OE (Update 3), the Solaris 8 04/01 OE (Update 4), or the Solaris 8 07/01 OE (Update 5), install and use the LU 2.0 08/01 Web Release at http://www.sun.com/solaris/liveupgrade.
- To upgrade to the Solaris 8 10/01 OE (Update 6), install and use the LU 2.0 10/01 software located in the Early Access (EA) area of the Solaris 8 10/01 OE distribution.
- To upgrade to the Solaris 8 01/02 OE (Update 7) or later versions, install and use the LU 2.0 software that is integrated into the operating system package area.
Boot Environments
The concept of a boot environment (BE) is central to the operation and implementation of LU. A BE is a group of file systems and their associated mount points. LU uses the term "boot environment" instead of "boot disk" because a BE can be contained on one disk or can be spread over several disks. LU provides a command-line interface and a character-based user interface (CUI) to create, populate, manipulate, and activate BEs.
The CUI has a few restrictions. The CUI is neither localized nor internationalized. Also, the existing CUI does not provide access to the full functionality of LU.
You can create BEs on separate disks or you can create them on the same disk; however, a single root (/) file system is the recommended layout for the Solaris OE.
The active BE is the one that is currently booted and active; all other defined BEs are considered inactive. Inactive BEs are also referred to as alternate boot environments (ABEs).
BEs can be completely self-contained, or they can share file systems. Only file systems that do not contain any OE-specific data and that must be available in any OE should be shared among BEs. For example, users' home directories on the /export/home file system would be good candidates to share among several BEs.
If you used multiple file systems for the Solaris OE, such as separate file systems for /kernel, /usr, /etc, /, and the like, do not share these OE-specific file systems among BEs. In addition, do not split certain file systems from / (such as /kernel, /etc, /dev, or /devices). If you split them off onto a separate file system from /, the BE that is created might not be bootable.
Additionally, LU provides a mechanism to synchronize individual files among several BEs. This feature is especially useful for maintaining files such as /etc/passwd in one BE and then propagating changes to all BEs.
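The files to synchronize are listed in /etc/lu/synclist; an excerpt propagating account files might look like the following sketch (the specific entries shown are illustrative):

```
# /etc/lu/synclist excerpt: copy these files from the previously
# active BE into the newly activated BE at first boot.
/etc/passwd     OVERWRITE
/etc/shadow     OVERWRITE
```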
To back up BEs created with LU, use the ufsdump or fssnap commands. Consult the man pages for information about the uses of these commands.
Implementing and Using Live Upgrade
To appreciate the value of using LU to upgrade a system, consider the common situation of having to upgrade a production server from the Solaris 8 OE to the Solaris 9 OE. Most likely, you could not take the server down to do the upgrade. Additionally, the site change control procedures likely require you to provide a back-out plan to restore the initial Solaris 8 OE in the case of any unforeseen upgrade failures or software incompatibilities. Using LU, you can complete this type of upgrade while the Solaris 8 OE is up and "live," while maintaining the Solaris 8 OE as a fallback in case a failure occurs during the upgrade procedure.
The following tasks outline the process required to upgrade a system using LU:
1. Create and populate a new BE by cloning the current OE.
2. Upgrade the new BE.
3. Install (or upgrade) unbundled software, patching as necessary, in the new BE.
4. When you are ready to cut over to the new version of the OE, activate the new BE and reboot into it.
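Assuming a second disk slice is available for the new BE, the preceding tasks might be sketched as follows (the BE names, device, and media path are placeholders):

```shell
# 1. Clone the current OE into a new BE named "s9be" on c0t1d0s0.
lucreate -c s8be -n s9be -m /:/dev/dsk/c0t1d0s0:ufs

# 2. Upgrade the new BE from the Solaris 9 media.
luupgrade -u -n s9be -s /cdrom/cdrom0

# 3. Install or patch unbundled software in the new BE.
#    lumount makes the inactive BE's file systems available under /mnt.
lumount s9be /mnt
# ... install or patch software under /mnt ...
luumount s9be

# 4. Activate the new BE and reboot into it.
luactivate s9be
init 6
```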
When using Solaris Flash with LU 2.0 software, the specified ABE is not upgraded. Instead, the contents of the flash archive are extracted and installed in the specified ABE.
Beyond Upgrades: Performing Sidegrades
The ability to create multiple BEs and populate them with live OE data provides greater flexibility for reacting to changing system or user needs with minimal downtime.
LU enables you to perform sidegrades (the large scale reorganization of the OE) with minimal impact to the user. This section details methods for using LU to perform sidegrades.
Over the course of time, the on-disk data of systems and OEs tends toward a state of greater disorder, as workarounds and special cases are implemented and the architecture is never brought back to the site standard. Workarounds and special cases are usually left in place because the downtime needed to resolve them is not available. Using LU, you can reinforce a site standard for BEs on systems that have suffered at the hands of entropy and workarounds.
For example, consider a system installed with an undersized root (/) file system. Even if / is large enough for the initial installation of the OE, over the course of time, as several patches are installed, the disk space requirements of the patches (and the space needed to save previous versions of the files) might cause / to become 100 percent full. To alleviate space constraints on /, it is common practice to move the contents of /var/sadm to a different file system (for example, /opt2/var/sadm), and then either create a symbolic link from /var/sadm to /opt2/var/sadm or mount /opt2/var/sadm on /var/sadm.
LU can be used to consolidate these separate file systems back onto a single file system. Using the lucreate command, clone the current BE with a / that is large enough for future patch needs. Then, using the luactivate command, select the new BE and reboot when it is convenient.
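A sketch of such a consolidation follows (the BE names, device, and assumption that c0t1d0s0 is a larger slice are illustrative):

```shell
# Clone the current BE onto a larger root slice; file systems that
# were split off are copied back into the single / specified here.
lucreate -c oldbe -n newbe -m /:/dev/dsk/c0t1d0s0:ufs

# When convenient, activate the consolidated BE and reboot into it.
luactivate newbe
init 6
```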