- Overview of Tru64
- 64-Bit Computing
- Clustering
- Justifying the Migration
- Architecting the Migration
- Implementing the Migration to the Solaris Environment
- Managing the New Solaris Environment
- Related Resources
- About the Authors
- Ordering Sun Documents
- Accessing Sun Documentation Online
Implementing the Migration to the Solaris Environment
During the previous phase of the project, you produced a work breakdown structure and used it to develop a plan for this phase. Implementation activities can be broken into tasks that relate to hardware and tasks that relate to software. Where appropriate, independent tasks can overlap in the schedule, depending on the required completion date, the budget, and the available skill sets.
In the following sections, we examine the tasks involved with implementing the solution in a Solaris environment.
Modifying the Facilities
Typically, the facilities must be modified before hardware is installed. Depending on the modifications, considerable lead time might be needed. In the example, you need to reroute a power cable, install the appropriate receptacle, and clear space for the new hardware. These activities are coordinated to reduce their impact on the existing environment.
Creating the Networking Infrastructure
This task involves preparing the network for the addition of the new platform. Here, you determine IP addresses, routing, and network masks. If warranted, new load balancers, switches, hubs, cable drops, and the like are deployed and tested. Care must be taken to minimize the impact of these activities on the existing environment.
In the example, the networking infrastructure requires minimal change: only a cable must be routed from the switch to the machine location. However, if routers or load balancers were required, significantly more time and effort would go into this task.
Deploying the Compute and Storage Platforms
Because this activity requires specialized skills, it is typically performed by the service organizations of the hardware vendors (compute and storage). Before the platforms are installed, the supporting infrastructure (for example, facilities and networking) should be in place. If the compute and storage platforms are provided by different vendors, their activities must be coordinated.
In the example, the compute hardware and storage platforms are provided by the same vendor. Once the power and networking are in place, you can arrange for the equipment to be delivered and installed.
Implementing the Application Infrastructure
When the development environment is in place, you can begin transforming the application source code and any of the third-party scripts that support the application.
With the production hardware platform in place, you can begin creating the application infrastructure. The activities in this task include:
- Installing third-party products used at runtime. These can include the database and any tools or scripts used to administer the database or to produce reports.
- Installing the modified scripts that manage the application.
- Configuring the platform to support the application.
Implement the Build Environment
The analysis of the build log identified the tools and utilities that were used to create the application executable. Where possible, you should acquire and install the same tools. In certain cases (most likely, the compiler), you might have to acquire a different product with similar functionality.
When installing these build tools, examine the old build log to determine where the tools were located in the old environment. Putting your tools in the same location will minimize the changes that have to be made to the make files.
Modify the make files to use the new tools, utilities, and libraries. Translate the tool options, when required, such that they provide the same functionality. Our usual and preferred methodology is to port and redesign the entire build environment before the application source is modified. If the build environment is well designed, only a few key make files and setup scripts need to be changed. The general approach is as follows:
- Understand the key files that affect the whole build system, and port those first.
- Do a global search for hard-coded values in make files, and change them to take advantage of the new design where applicable.
- Port any disconnected hard-coded instances that remain.
- Release the build environment for code-porting work.
These steps might have to be performed on a per-module basis because of the project schedule, code availability, and resources.
Distributing this work across a development team can shorten the project schedule. However, if many code-porting specialists change the make files of the modules they are porting, the make files can become inconsistent, accumulating hard-coded values that conflict with one another.
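For illustration, a central setup script, sourced before make is run, is one way to keep tool locations out of individual make files. The following ksh sketch is an assumption-laden example: every path, compiler location, and variable name is a placeholder that should be replaced with the locations recorded in your build log.

#!/bin/ksh
# Sketch of a central build-setup script, sourced before running make.
# All locations below are placeholders taken from a hypothetical build log.
BUILD_OS=`uname -s`                  # "SunOS" on Solaris, "OSF1" on Tru64
if [ "${BUILD_OS}" = "SunOS" ]; then
    CC=/opt/SUNWspro/bin/cc          # assumed location of the Sun C compiler
    MAKE=/usr/ccs/bin/make
    BUILD_ROOT=/opt/app/build        # hypothetical staging area for build output
else
    CC=/usr/bin/cc                   # original Tru64 compiler
    MAKE=/usr/bin/make
    BUILD_ROOT=/usr/app/build
fi
export CC MAKE BUILD_ROOT

Because environment variables are visible to make as macros, the make files can then refer to $(CC), $(MAKE), and $(BUILD_ROOT) instead of absolute paths, so the hard-coded values the team must track live in a single file.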
Translate Scripts
During the assessment process, you identified a number of issues that make shell scripts originally developed for the Tru64 environment incompatible with the Solaris environment. These differences usually involve the location of a command or the options it accepts. The following example presents the types of changes that have to be made.
#! /bin/ksh
echo "Generated on `date`" >> $longreport
echo "Hostname : ", `hostname` >> $longreport
# __sun: change tr 'a-z' 'A-Z' to tr '[a-z]' '[A-Z]'
# NEW_TARGET=`echo ${TARGET_NAME} | tr 'a-z' 'A-Z'`
NEW_TARGET=`echo ${TARGET_NAME} | tr '[a-z]' '[A-Z]'`
echo "New target: ${NEW_TARGET}" >> $longreport
echo "Current environment" >> $longreport
# __sun: add path
#printenv >> $longreport
/usr/ucb/printenv >> $longreport
echo "Print who's logged in" >> $longreport
#__sun: 'f' is 'finger' on Solaris
#f \@`hostname` >> $longreport
finger \@`hostname` >> $longreport
echo "Check if $filename is a link?" >> $longreport
#__sun: change -h to -L
#if [ -h $filename ]; then
if [ -L $filename ]; then
    echo "$filename is a link" >> $longreport
fi
echo "Extract all names from $filename " >> $longreport
#__sun: add path because of option -F
#grep -F "^NAME^" $filename >> $longreport
/usr/xpg4/bin/grep -F "^NAME^" $filename >> $longreport
echo "Extract all tasks from $filename " >> $longreport
grep "TASK:" $filename >> $longreport
echo "Extract all TOTALs from $filename " >> $longreport
grep "^TOTAL:" $filename >> $longreport
#__sun: change -w to -m. Send mail on completion.
#lp -w -d ${laser_printer} $longreport
lp -m -d ${laser_printer} $longreport
mv $longreport $LOG/${longreport}.old
exit
Go through each shell script and make the appropriate changes. As you can see from the preceding example, comments should be inserted to provide history and context.
Integrate Databases
As with the shell scripts, the application might also depend on third-party products for support. In the example, the only third-party products involved in the migration relate to the database technology, so you will be replacing a Sybase implementation with Oracle technology, as shown in the following figure.
FIGURE 2 Replacing Sybase With Oracle
This assessment indicated that you should address issues associated with the following components:
- Stored procedures and triggers (box 15).
- C programs that use embedded SQL (box 2).
- Report programs that use third-party reporting tools (box 11).
- DBA maintenance scripts (box 12).
As illustrated in the preceding figure, the replacement environment is almost a one-to-one mapping of component technology. When implementing the Oracle environment on the Sun platform, you must acquire and install the appropriate products with their respective licenses as follows:
- The database communication layer (box 14). This is supplied by the database vendor and is usually composed of database client and server libraries.
- The embedded SQL precompiler (box 3). This is supplied by the database vendor, in this case, Oracle.
- The C compiler and linker (box 5). This is usually supplied by the hardware vendor, in this case, Sun.
- The database engine (box 17). This is supplied by the database vendor, Oracle.
Other third-party products you should acquire and install include database reporting tools (box 11) and database management tools (box 13). Versions of these tools exist for the Solaris platform, reducing the amount of change that must be introduced. However, if the products issue SQL statements, changes will most likely have to be made, and in some cases the products might have to be rewritten altogether. Additional configuration might also be needed to reflect different environment variables and path names.
Any API changes introduced in the database technology will have to be reflected in the source code of the applications. At this stage, you are only addressing issues with the database components of the application. The outcome of this stage of the process is new embedded SQL programs that will be compiled with the application source after it has been modified to conform to the new operating environment. These changes are described later in this chapter.
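As a sketch of how one of those embedded SQL programs might later be rebuilt (the file names are hypothetical, and ${CC} and ${ORACLE_HOME} are assumed to be set in the build environment), the Oracle Pro*C precompiler replaces the Sybase precompiler in the build steps:

# Sketch: precompile, compile, and link one hypothetical embedded SQL program.
# proc is the Oracle Pro*C precompiler (box 3); names and paths are examples.
proc iname=orders_report.pc oname=orders_report.c
${CC} -c orders_report.c -I${ORACLE_HOME}/precomp/public -o orders_report.o
${CC} orders_report.o -L${ORACLE_HOME}/lib -lclntsh -o orders_report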
Data Extraction, Transformation, and Loading (ETL)
After you install the supporting database environment, you can create the database objects to accept the data. You can then extract the data from the old Sybase system by using vendor-provided utilities. Finally, you can load the data into the Oracle database, using utilities that are provided by the vendor.
There are times when database vendor-provided tools can be used. This is usually the case when the database structure is simple and data types can be mapped one-to-one. In such cases, the bulk copy (bcp) utility from Sybase can be used to extract data from Sybase. The output of this command can be an ASCII delimited file. You can then feed this file to an Oracle utility called SQL*Loader.
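The following ksh sketch shows this simple path for a hypothetical orders table; the database, server, login, delimiter, column list, and date format are all assumptions and must be adjusted to the real schema.

#!/bin/ksh
# Sketch only: database, table, server, and login details are placeholders.

# Extract the table in character mode (-c) with a pipe field terminator (-t).
bcp salesdb..orders out orders.dat -c -t '|' -S SYBASE_SERVER -U sa -P "${SYB_PASSWORD}"

# Minimal SQL*Loader control file describing the same delimited layout.
cat > orders.ctl <<'EOF'
LOAD DATA
INFILE 'orders.dat'
INTO TABLE orders
FIELDS TERMINATED BY '|' TRAILING NULLCOLS
(
  order_id,
  customer_name,
  order_date DATE 'YYYY-MM-DD HH24:MI:SS'
)
EOF

# Load the delimited file into the Oracle schema and keep the log for review.
sqlldr userid=appowner/"${ORA_PASSWORD}" control=orders.ctl log=orders.log

After each load, compare row counts between the two databases and check the SQL*Loader log and bad file for rejected rows.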
For more complicated scenarios, the use of third-party extraction, transformation, and loading (ETL) tools such as Hummingbird's ETL or Embarcadero's DT/Studio might be appropriate. Of course, you can also write scripts to perform these tasks.
Although our example requires no data translation, data types that existed in the Sybase implementation but cannot be reproduced in the new Oracle implementation might require that the data be transformed or translated to conform to the new data type. Depending on the data type in question and the extent to which it is used throughout the application, this seemingly simple change can add significant complexity when you modify the application source code: every reference to that data type might require change, and in certain cases the application logic might also have to change.
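Although the example needs no translation, the following ksh sketch shows the general shape such a transformation might take on the extracted file itself: a hypothetical character flag in the third field of the delimited extract is mapped to the NUMBER(1) representation chosen for the Oracle schema. The field position, values, and file names are assumptions.

# Sketch: map a hypothetical 'Y'/'N' flag in field 3 to 1/0 before loading.
nawk -F'|' 'BEGIN { OFS = "|" }
{
    if ($3 == "Y") $3 = 1
    else if ($3 == "N") $3 = 0
    print
}' orders.dat > orders.transformed.dat

More involved mappings (merged columns, recoded keys, changed precision) are where the ETL tools mentioned above earn their place.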
Transforming Source Code
In the assessment process, you identified a number of APIs that were incompatible with the target Solaris environment. Rather than modifying the source code in-line to effect these changes, you should create a compatibility library that implements whatever is needed to rectify the incompatibilities. You can then limit source code modifications to conditional compilation directives that ensure backward compatibility.
The following example shows how to use conditional compilation to ensure backward compatibility.
#ifdef __sun
    SunVersion();
#else
    OriginalVersion();
#endif
}
The SunVersion() function, which emulates the Tru64 functionality, will exist in a compatibility library.
In the example, you create a compatibility library (box 18) that is linked in when the application executable (box 6) is created. You then modify the source code for the application (box 1) to conform to the application infrastructure and to use the functions provided by the compatibility library, using conditional compilation as discussed above.
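As a sketch, assuming the compatibility functions live in a single source file packaged as a static archive (the file and library names are illustrative, and ${CC} is the compiler selected in the build setup), the pieces come together as follows. The Sun compiler predefines __sun, so the conditional code selects SunVersion() without additional flags.

# Sketch: build the compatibility library (box 18) and link it into the
# application executable (box 6). File and library names are illustrative.
${CC} -c compat_tru64.c -o compat_tru64.o     # SunVersion() and other emulation routines
ar -r libcompat.a compat_tru64.o              # package them as a static archive
${CC} -c app_main.c -o app_main.o             # application source with #ifdef __sun blocks
${CC} app_main.o -L. -lcompat -o app          # link the compatibility library into the executable

Keeping the emulation code in its own archive makes it easy to retire functions as the application is gradually rewritten to use native Solaris interfaces.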