- Objectives
- Prerequisites
- Introduction
- Enterprise Installation Services: Standard Installation Practices
- Hardware Configuration
- Solaris Configuration (clustadm)
- Install SUNWccon package on SC 3.0 Admin Workstation
- Patch Installation - Administration Workstation
- Configure Management Server for Administering Cluster Nodes
- Configure the Terminal Concentrator
- Configure Cluster Control Panel
- Configure Solaris OE (Each Cluster Node)
- Solaris OE —Post Installation and Configuration
- References
Section 1.7: Configure Solaris OE (Each Cluster Node)
This section describes the steps necessary to configure the Solaris OE on each cluster node.
Key Practice: Ensure all firmware is at the most recent (supported) version for all systems and subsystems, including servers, disk arrays, controllers, and terminal concentrators. For example, on each node in the SunPlex platform, ensure the system EEPROM contains the most recent (supported) version of the OpenBoot PROM (OBP), such as OpenBoot 3.15.
Step 1.7.1
For local (manual) installations, it is a good idea to ensure that each cluster node is configured with the most recent version of the OBP. Information about downloading can be obtained from SunSolve Online, at http://sunsolve.sun.com.
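To check the currently installed OBP revision before deciding whether an upgrade is needed, you can query it either at the ok prompt or from a running Solaris OE; the following is a minimal sketch, and the output shown is illustrative only.
ok .version
Release 3.15 Version ...   {{sample output only}}

# /usr/sbin/prtconf -V
OBP 3.15 ...               {{sample output only}}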
NOTE
After upgrading the OBP successfully, you must manually turn system power off, then on. If you have just upgraded the OBP on each cluster node, you may skip the next step.
Step 1.7.2
For local (manual) installations, ensure that no previous EEPROM settings remain by resetting the EEPROM to a known state (that is, factory defaults). Perform this only once, at this point in the procedure, before customizing the system EEPROM to meet cluster requirements, and BEFORE installing any software. Reset the system EEPROM to its factory defaults.
For local (manual) installations, enter the following OBP command on each cluster node:
ok set-defaults
Using the set-defaults command at this step establishes a consistent, known (default) state of all OBP variables prior to customizing the OBP environment.
CAUTION
Resetting the system EEPROM should only be performed at this time, during the initial preparation for the Solaris OE installation. This command resets all EEPROM (OBP) variables to their factory default values. All subsequent steps assume the EEPROM has been reset (at this point in the exercise). During the next few steps, the EEPROM will be modified (customized).
Key Practice: Ensure a consistent state on each cluster node before proceeding to configure site-specific (customized) OBP settings. Prior to implementing any configuration changes, and as part of initial Solaris OE installation preparations, reset the EEPROM to the factory defaults. This is done only once, and at this point in the procedure, and will easily and quickly ensure that a consistent state is achieved before further customization occurs.
NOTE
For local (manual) installations, prior to installing Solaris, we will reconfigure the OBP settings for each cluster node. This is done by executing commands at the OBP 'ok' prompt, which should be viewable through the Cluster Control Panel windows.
Step 1.7.3
On each cluster node, execute the OBP banner command to verify system information, such as the system model number, OBP version, Ethernet address, hostid, and serial number.
ok banner
Each node will respond with configuration information.
Document system information of each cluster node.
Key Practice: Until the EEPROM configuration has been completed, disable the auto-boot EEPROM feature on each cluster node. Disabling auto-boot prevents problems that could arise if both systems attempted to boot their Solaris OEs while still configured with the same, and therefore conflicting, SCSI-initiator ID settings.
We temporarily disable auto-boot? on each cluster node during this phase of the installation because the systems have not yet been configured. If a node is accidentally rebooted while the auto-boot? variable is set to false, it stops at the OBP prompt instead of attempting to boot from disk. At this phase, an attempt to boot from disk could require an administrator to manually return the system to the OBP for further configuration changes.
NOTE
You will be instructed to re-enable auto-boot? at the end of this procedure.
Step 1.7.4
Disable auto-boot? by entering the following command into each cluster node:
ok setenv auto-boot? false
auto-boot? = false
Step 1.7.5
On each cluster node, set the following OBP variables, as indicated:
ok setenv local-mac-address? false
local-mac-address? = false
ok setenv diag-level min
diag-level = min
ok setenv diag-switch? false
diag-switch? = false
Step 1.7.6
For this two-node cluster, set the global SCSI-initiator ID on the second cluster node only. On clustnode2, set scsi-initiator-id to a value of '6'. Changing the global SCSI-initiator-ID on clustnode2 affects every device attached to clustnode2's internal SCSI controller; specifically, setting it to 6 creates a conflict between the internal SCSI controller and the internal CD-ROM.
NOTE
To resolve this conflict, in the NEXT step we will explicitly set the SCSI-initiator-ID of clustnode2's internal SCSI controller back to a value of '7', by entering a simple script into clustnode2's non-volatile RAM, or 'nvramrc'.
At this time, enter the following command into the cconsole: host clustnode2 window:
ok setenv scsi-initiator-id 6
scsi-initiator-id = 6
NOTE
SCSI-initiator-ID modification. Refer to Figure 1 and Table 1 through Table 5, specifically noting the disk subsystem cabling and configuration. Because two cluster nodes (both Sun Enterprise Model 220R servers) are connected to the same pair of Sun StorEdge D1000s, the OBP settings require modification. We will set the SCSI-initiator-ID on one of the cluster nodes (clustnode2 in this exercise) to a value of 6 and insert a script into clustnode2's nvramrc (non-volatile memory) to maintain a SCSI-initiator-id of 7 on the clustnode2 internal SCSI controller. Setting the clustnode2 global SCSI-initiator-id to 6 will prevent a conflict on the shared SCSI bus that connects both Sun Enterprise 220Rs to the Sun StorEdge D1000s.
Use the OBP nvedit command in the following procedure. The nvramrc editor always operates in insert mode. Use the keystrokes in the following table when editing.
Using nvedit: Keystrokes

Keystroke   Action
Ctrl+B      Move backward one character.
Ctrl+C      Exit the nvramrc editor, returning to the OpenBoot PROM command interpreter. The temporary buffer is preserved, but is not written back to the nvramrc. (Use nvstore afterwards to write it back.)
Delete      Delete the previous character.
Ctrl+F      Move forward one character.
Ctrl+K      From the current position in a line, delete all text after the cursor and join the next line to the current line (that is, delete the new line).
Ctrl+L      List all lines.
Ctrl+N      Move to the next line of the nvramrc editing buffer.
Ctrl+O      Insert a new line at the cursor position and stay on the current line.
Ctrl+P      Move to the previous line of the nvramrc editing buffer.
<CR>        Insert a new line at the cursor position and advance to the next line.
NOTE
Using nvedit can be tricky because there is no delete-line command. To delete a line, you must delete all of its characters, then use Ctrl+K to join the resulting empty line with the subsequent line.
Step 1.7.7
On clustnode2, set the internal SCSI controller (example: /pci@1f,4000/scsi@3) SCSI-initiator-ID value to 7 by using the nvedit command. Enter the following commands into the cconsole: host clustnode2 window:
ok printenv nvramrc
nvramrc =                  {{ensure that no previous commands/entries exist in nvram, before proceeding}}
ok nvedit                  {{invoke the nvram editor}}
0: probe-all
1: cd /pci@1f,4000/scsi@3
2: 7 " scsi-initiator-id" integer-property
                           {{note the space before AND after the first quotation mark preceding 'scsi' in this line}}
3: device-end
4: install-console
5: banner
6:                         {{at this point, use Ctrl+C to exit nvedit}}
ok nvstore
ok printenv nvramrc        {{verify/compare: this should exactly match your screen's output}}
nvramrc = probe-all
cd /pci@1f,4000/scsi@3
7 " scsi-initiator-id" integer-property
device-end
install-console
banner
Step 1.7.8
Enter the following command into the cconsole: host clustnode2 window to enable the nvramrc:
ok setenv use-nvramrc? true
use-nvramrc? = true
Step 1.7.9
Verify the nvramrc script works by performing the following steps on clustnode2:
On clustnode2, reset the system by entering the reset-all command at the OBP ok prompt.
After clustnode2 resets, enter the printenv scsi-initiator-id command at the OBP ok prompt to confirm that the global SCSI-initiator ID is set to 6.
On clustnode2, use the cd command to navigate to the device node (that is, directory) that represents the internal SCSI controller, /pci@1f,4000/scsi@3.
At the OBP ok prompt, enter the .properties command to verify that clustnode2's internal SCSI bus SCSI-initiator ID is set back to 7, as indicated in the next code box.
ok reset-all
Resetting ...

ok printenv scsi-initiator-id
scsi-initiator-id = 6
ok cd /pci@1f,4000/scsi@3
ok .properties
scsi-initiator-id 00000007
. . .
Step 1.7.10
For local (manual) installations, when dual-hosted SCSI devices are configured (that is, cluster-pair configuration using dual-hosted D1000 arrays), verify that the probe-scsi-all command completes successfully on both cluster nodes. This is performed after the reset-all command succeeds.
For local (manual) installations, enter the following OBP command simultaneously on both cluster nodes, and verify that it completes successfully:
ok probe-scsi-all
Step 1.7.11
After you have verified all settings are correct, reset auto-boot? to "true" on each cluster node:
ok setenv auto-boot? true
auto-boot? = true
Step 1.7.12
For local (manual) installations, install and configure Solaris according to the recommendations listed on the EIS Installation Checklist for Sun Cluster 3.0 Systems. Specifically, for each cluster node, implement all guidelines listed for the EIS Basic Cluster Configuration, and the EIS WorkGroup Server (WGS) Checklist.
Checklist recommendations include installing the Entire Distribution (+OEM, as required by third-party applications). Select the proper locale (English) and root disk layout.
EIS Installation Checklists are located on EIS CD1, under: <eiscd>...sun/docs/EISchecklists/pdf
NOTE
For new cluster installations, obtain all site-specific information by examining the appropriate EIS installation documentation, and completing all EIS pre-installation procedures for running the EISdocV2 tool.
Key Practice: Implement Solaris JumpStart to maintain consistency and fully automate the installation of the Solaris OE and additional software packages for each node in the SunPlex platform. Solaris JumpStart improves installations and minimizes the operator errors that occur during a manual process. Combining Solaris JumpStart with Flash archives also enables quick and consistent disaster recovery operations.
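As an illustration only, a minimal JumpStart profile for a cluster node might resemble the following sketch; the software cluster keyword, disk device, and slice sizes are assumptions and must be replaced with the values from your EIS installation documentation.
install_type    initial_install
system_type     standalone
cluster         SUNWCXall              {{Entire Distribution + OEM; confirm against the EIS checklist}}
partitioning    explicit
filesys         c0t0d0s0 10240 /       {{sizes in Mbytes; sample values only}}
filesys         c0t0d0s1 2048  swap
filesys         c0t0d0s6 100   /globaldevices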
Step 1.7.13
For local (manual) installations, reboot each cluster node after installing Solaris.
Step 1.7.14
At this phase of the installation, verify that all Solaris site specific data is correct on each cluster node:
Example Solaris OE Configuration - clustnode1

Host name:        clustnode1
IP Address:       129.153.xx.xxx
Name Service:     None (local files)
Set Subnet:       Yes
Subnet Mask:      255.255.255.0
Default Gateway:  None
NOTE
The values quoted in the previous table are sample values. In a live installation, substitute the appropriate site-specific values (for example, as provided on the EIS installation documentation).
Step 1.7.15
Verify the primary boot disk (c0t0) is partitioned correctly on each cluster node.
Key Practice: Follow EIS recommendations for disk data layout. When possible, configure a standard, flexible disk partitioning scheme; a consistent scheme allows the use of either SVM or VxVM. Implement the standard boot disk partitioning shown below. Partitioning each disk spindle identically also saves time, provides flexibility, and maintains consistency across nodes.
NOTE
It is often easier to use the Solaris format command to calculate the exact number of whole cylinders to be configured, for example when determining the size of the root file system. Under these guidelines, the size of the root file system depends on the actual size of the disk.
On each cluster node, verify that the primary boot disk (c0t0) is partitioned as per EIS Installation Checklist recommendations, or site-specific requirements.
Slice 0 = assigned to "/" (10 GB)
Slice 1 = assigned to swap (2 GB)
Slice 2 = assigned as "backup" (full extent)
Slice 6 = assigned to /globaldevices (100 MB)
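One way to verify the resulting layout after the Solaris OE installation (a sketch; c0t0d0 is the example boot disk used in this section) is to print the volume table of contents on each node:
# prtvtoc /dev/rdsk/c0t0d0s2
{{..... output omitted ....  confirm slice assignments and sizes match the EIS checklist values above}}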
NOTE
Boot disk partitioning must adhere to all EIS recommendations, as well as meet site-specific requirements. For each cluster node, ensure that the following boot (root) disk partitioning guidelines are met:
- Combine /, /usr, and /opt (recommended), and add an additional 100 Mbytes to the size of the root (/) file system. If /usr is a separate file system, include an extra 40 Mbytes. If /var is a separate file system, size it appropriately to ensure core dumps can be saved.
- 750 Mbytes of swap is the minimum; size swap at 2 Gbytes or 2x RAM, whichever is greater.
- Configure /globaldevices (100 Mbytes).
- Leave 20 Mbytes for SDS, and assign it to slice 7.
- For VxVM, reserve the first two cylinders for the private region and encapsulation area, and ensure slices 3 and 4 are unassigned.
- Customer applications may require one slice. Live Upgrade requires one slice.
CAUTION
Our disk data layout places /var under the root file system (/). Consider an alternate approach that places /var on a separate disk slice. When /var is combined with the root file system, as in our configuration, consider disabling sendmail. Alternatively, if sendmail is required by your applications, limit the amount of free space available to the file system by explicitly setting the sendmail MinFreeBlocks variable. Upon reaching this limit, sendmail rejects incoming messages rather than allowing the /var file system to run out of space, removing the basis for this type of denial-of-service attack.
By default, /tmp is mounted as a tmpfs file system, which can potentially consume all system virtual memory and ultimately cause the system to hang. Avoid this by explicitly setting the size= option for mount_tmpfs(1M) to remove the basis for this type of denial-of-service attack by any login user. Alternatively, consider converting /tmp to use real disk space, though the performance of some applications would suffer.
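As a sketch only (the 512m cap is an illustrative value, not an EIS requirement), the tmpfs size limit is applied through the /etc/vfstab mount options for /tmp:
# more /etc/vfstab
{{..... output omitted ....}}
swap   -   /tmp   tmpfs   -   yes   size=512m
{{..... output omitted ....}}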
Step 1.7.16
Examine the installation logs, ensuring that any Solaris OE installation errors do not go undetected or unresolved. On each cluster node, enter the following commands:
# cd /var/sadm/system/logs
# pwd
/var/sadm/system/logs
# ls
begin.log               finish.log               install.log
begin.log_2000_04_13    finish.log_2000_04_13    sysidtool.log      {{sample dates only}}
# more *
{{It is important to resolve any installation error messages noted in the logs.
Example: pkgadd: ERROR: postinstall script did not complete successfully}}
Key Practice: Verify that the Solaris OE installation was successful and that any errors reported are fully resolved before proceeding. Review the /var/sadm/README file to determine the location of the most recent installation logs (for example, /var/sadm/system/logs). Examine the current Solaris OE installation log files for potential errors (that is, begin.log, sysidtool.log, or install.log). Confirm the cause of any installation error messages that occur, resolving failures before proceeding further. Refer to the SC3.0 U3 Error Messages Manual for error message translations.
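A quick way to scan the logs for obvious failures (a sketch; adjust the search patterns to your needs) is to search for common error strings:
# cd /var/sadm/system/logs
# grep -i "error" *        {{also consider searching for "fail" and "warning"}}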
Step 1.7.17
On each node in the SunPlex platform, examine the /etc/inet/hosts file, verifying that IP addresses and host names are configured correctly.
NOTE
Prior to using a text editor (such as vi), set your TERM environment variable as appropriate (for example, TERM=ansi or TERM=vt100) for proper video display.
On each cluster node, configure this file to include entries for each cluster node (that is, clustnode1 and clustnode2), the SC3.0 Admin Workstation (clustadm), our single terminal concentrator (tc), and the logical host (lh-hanfs). For each cluster node, append 'hostname.some.com' to that node's own IP address entry, as in the example below, to eliminate sendmail messages. Verify that each cluster node is configured correctly, as indicated in the code boxes below:
clustnode1# more /etc/inet/hosts
... {{output omitted}} . . .
xxx.xxx.xx.xxx   clustnode1   loghost   clustnode1.some.com
xxx.xxx.xx.xxx   clustadm
xxx.xxx.xx.xxx   clustnode2
xxx.xxx.xx.xxx   tc   tc-nhl
... {{output omitted}}
xxx.xxx.xx.xxx   lh-hanfs
... {{output omitted}}
clustnode2# more /etc/inet/hosts
... {{output omitted}} . . .
xxx.xxx.xx.xxx   clustnode2   loghost   clustnode2.some.com
xxx.xxx.xx.xxx   clustadm
xxx.xxx.xx.xxx   clustnode1
xxx.xxx.xx.xxx   tc   tc-nhl
... {{output omitted}}
xxx.xxx.xx.xxx   lh-hanfs
... {{output omitted}}
NOTE
Each example shows standard host name entries for the two-node cluster. On each cluster node, verify that your configuration matches the actual site specific installation requirements. This file will be modified further, during SC3.0U3 software installation procedures.
Step 1.7.18
For local (manual) installations, insert the first EIS CD into the clustadm workstation. Note that the vold(1M) daemon will automatically mount it in the /cdrom directory.
To make the contents of the CD-ROM available to the cluster nodes over the network, enter the following command on the clustadm workstation:
root@clustadm# share -F nfs -o ro,anon=0 /cdrom/cdrom0
Step 1.7.19
For local (manual) installations only, on clustadm, verify /cdrom has been shared correctly by entering the following command:
root@clustadm# share
-               /cdrom/    ro,anon=0    ""
root@clustadm#
Step 1.7.20
For local (manual) installations only, enter the following command into each cluster node:
# mkdir /cdrom
# mount -F nfs -o ro clustadm:/cdrom/eis-cd /cdrom
This example assumes the SC3.0 Admin Workstation is clustadm, which has successfully shared the contents of the CD-ROM drive and is accessible from each cluster node, as indicated.
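To confirm that the NFS mount succeeded on each node before proceeding, a quick check such as the following can be used (a sketch; the output will vary by site):
# df -k /cdrom        {{the mounted filesystem should be clustadm:/cdrom/eis-cd}}
# ls /cdrom           {{verify that the EIS CD contents are visible}}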
Step 1.7.21
On each cluster node, run the setup-standard.sh script, as per EIS instructions. This configures the root shell environment (/.profile) and installs both the ACT and Explorer software. Refer to the EIS installation documentation for site-specific information when responding to each prompt.
At this time, run setup-standard.sh on each cluster node, responding to each prompt, as indicated below:
# cd /cdrom/eis-cd/sun/install
# sh setup-standard.sh
Many prompts provide a default value. Accept all default values, except for the following queries:
Enter "n", when asked if you want to "enable the email panic facility?" during ACT installation (into the /opt/CTEact directory).
When prompted for SUNWexplo, enter all site-specific information, as appropriate.
Enter a single "-", when asked "Would you like explorer output to be sent to alternate email addresses at the completion of explorer?"
Enter "n", when asked if you "wish to run Explorer once a week?"
When prompted, DO NOT run /opt/SUNWexplo/bin/explorer -q -e at this time. We will use this tool to gather important cluster configuration data after the installation has been completed.
For /opt/SUNWexplo, enter "y" when asked if you "want this directory created now?"
Enter "y" to proceed with the installation of Explorer.
Upon completion, you should see a message indicating the "/.profile was created/modified", along with notification that the "Installation of <SUNWexplo> was successful".
Step 1.7.22
On each cluster node, ensure that all shell environment variables and path names are configured, as required.
Example: Environment Variables (settings) - clustnode1
Variable                                   Setting
TERM                                       ansi or vt220
stty                                       istrip
set prompt                                 <hostname># {{e.g., clustnode1}}
PATH (ensure the following settings)       /usr/bin:/usr/ucb:/etc:/sbin:/usr/sbin:
MANPATH (ensure the following settings)    /usr/dt/man:/usr/openwin/share/man:/opt/VRTS/man:/usr/cluster/dtk/man:
The EIS setup-standard.sh script configures /.profile to ensure the correct EIS installation environment. Refer to this script for further information.
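To spot-check these settings after logging in again on each node, something like the following can be used (a sketch; the expected values are those listed in the table above):
# echo $TERM
# echo $PATH
# echo $MANPATH
# stty -a | grep istrip      {{confirm istrip, not -istrip, appears in the output}}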
Step 1.7.23
On each cluster node, configure the EIS installation environment. For example, you will modify /.profile to uncomment the DISPLAY settings located in the "clusters" section (end of file). For VxVM, you must also correct the default entry for LD_LIBRARY_PATH on each cluster node.
NOTE
In some circumstances, setting LD_LIBRARY_PATH in this manner can result in performance penalties, and its use should be restricted. The EIS recommendation is presented here in accordance with that standard.
At this time, edit the /.profile file on each cluster node, making the changes referenced in the following code box. After making changes, verify that the entries are correct, as indicated below.
# more /.profile
# Initial settings for user root
# Version . . .
{{..... output omitted....}}
# Modify the following entry, per EIS
LD_LIBRARY_PATH=/usr/lib:$LD_LIBRARY_PATH:/usr/openwin/lib
{{..... output omitted....}}
# Uncomment next section for cluster node . . .
LOGINFROM=`who am i | cut -f2 -d"(" | cut -f1 -d")"`
DISPLAY=${LOGINFROM}:0.0
export LOGINFROM DISPLAY
echo ""
echo "DISPLAY=$DISPLAY"
echo ""
# Add the following entries, per EIS
# Note: BPLAB's recommendation is to use TERM=ansi,
# instead of vt100
if [ "`tty`" = "/dev/console" ]; then
TERM=vt100; export TERM
fi
Step 1.7.24
If not already done, verify that remote root login is enabled for the duration of the installation. This change should only be temporary; it is useful while the installation is in progress.
Verify the /etc/default/login file has been modified, as indicated:
# more /etc/default/login
...{{output omitted}} . . .
# Comment this line out to allow remote root login
# CONSOLE=/dev/console
Remote root login should NOT be allowed (that is, it should be DISABLED) after the cluster installation has been completed successfully, and before running Explorer to capture configuration data.
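When the time comes to disable remote root login again, the CONSOLE line is simply restored (uncommented) in /etc/default/login, for example:
# more /etc/default/login
...{{output omitted}} . . .
CONSOLE=/dev/console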
Step 1.7.26
On each cluster node, verify that all changes made during the previous step(s) have taken effect. For example, as the root user, log in remotely (rlogin) to the clustadm workstation, and from there log in remotely to each cluster node. On each node, verify that the environment variables are correct after logging in remotely.
Step 1.7.27
At this time, from the cconsole window of each cluster node, first log off, then log in again as the root user. Next, from the Cluster Control Panel, choose ctelnet, which can be used during the installation when entering commands on each cluster node.
NOTE
When rebooting cluster nodes, or examining the console windows for error messages, always refer back to the cconsole window.
Step 1.7.28
Prior to installing VxVM, install the SUNWsan package on each cluster node, and add the additional patches during the next step.
For local (manual) installations, refer to the EIS-CD and locate the appropriate version of this package. SUNWsan is included as part of a compressed archive, named "SFKpackages.tar.Z". This archive can be located by examining the subdirectories (follow the subdirectory for the required version), under: <eis-cd1>.../sun/progs/SAN
On each cluster node, add the SUNWsan package, as indicated in the following code box:
Several "WARNING..." messages may appear, indicating a "possible conflict" in /etc/init.d/rc3.d. These messages may be ignored.
# cd /cdrom/eis-cd/sun/progs/SAN/3.0
# pkgadd -d . SUNWsan
{{..... output omitted....}}
Step 1.7.29
Add patches supplied on the most recent EIS-CD. Enter the following commands on each cluster node:
# cd /cdrom/eis-cd/sun/patch/8
# which unpack-patches
/opt/sun/bin/unpack-patches
# unpack-patches
Are you ready to unpack patches into /tmp/8? [y/n] y
{{..... output omitted....}}
# cd /tmp/8
# install_all_patches
Are you ready to continue with the install? [y/n] y
{{..... output omitted....}}
NOTE
Verify that patches install correctly. Do not reboot at this time; instead, proceed immediately to the next step, and install any additional patches.
Step 1.7.30
Install all additional patches required for the configuration. For example, for Solaris 8 with VxVM, patch 111413-xx must be installed before installing the VxVM 3.2 software. At this time, enter the following commands on each cluster node:
# cd /cdrom/PATCHES/VERITAS_3.2
# patchadd 111413-08
checking installed patches...
. . . {{ output omitted }} . . .
Patch packages installed:
  SUNWluxop
  SUNWluxox
#
NOTE
Ensure that installation errors do not go undetected or unresolved before continuing the installation. Refer to the log files under the /var/sadm/patch/... directory for each patch installed. Note the /var/sadm/README file when navigating the directory structure.
Step 1.7.31
For local (manual) installations, verify that all patches are installed correctly. For example, to list all patches applied, enter the following command into each cluster node.
# patchadd -p | more
NOTE
Both the /usr/sbin/patchadd -p and /usr/bin/showrev -p commands will display a list of patches that have been added to the system. We will use /usr/sbin/patchadd -p | grep <patch#>, where <patch#> is the number of the patch you are checking.
# patchadd -p | grep <patch#>
Step 1.7.32
For local (manual) installations, modify the /etc/nsswitch.conf file as indicated in the code box below, adding [SUCCESS=return] after 'files' on the hosts: entry. After editing, verify that the changes are correct:
# more /etc/nsswitch.conf
. . . {{output omitted}} . . .
group:     files
hosts:     files [SUCCESS=return]
services:  files
netmasks:  files
Key Practice: The cluster environment requires that local (/etc) files supporting network services are searched ahead of any naming services. This increases availability by not having to rely on an outside agent. To do this, always put 'files' first (ahead of dns, nis, etc.), for hosts, netmasks, group, and services.
Step 1.7.33
Create the required /etc/system file entries, such as the shared memory settings for Oracle. After making changes, verify the settings are correct, as indicated.
CAUTION
Before editing, always take precautions, and reverify that each entry is correct before proceeding to the next step. Also, the EIS installation checklists indicate the setting rpcmod:svc_default_stksize=0x8000. Note that we DO NOT set this variable at this time; SUNWscr will create an entry later (SUNWscr sets it as rpcmod:svc_default_stksize=0x6000). We will modify this file again during subsequent modules.
Verify that the changes are correct, as indicated, on each cluster node:
# more /etc/system
{{..... output omitted....}}
* added per EIS installation cookbook
exclude: lofs
set ip:ip_enable_group_ifs=0
forceload: misc/obpsym
set nopanicdebug=1
set lwp_default_stksize=0x6000
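If the cluster will host Oracle, the required /etc/system entries typically also include shared memory and semaphore tunables. The values below are illustrative placeholders only; use the values specified by your database and EIS documentation.
* Oracle shared memory / semaphore settings   {{sample values only}}
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmni=100
set semsys:seminfo_semmni=100
set semsys:seminfo_semmsl=256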
Step 1.7.34
For the root file system, add the 'logging' option by editing the /etc/vfstab file. On each cluster node, verify that the file has been modified, as indicated:
# more /etc/vfstab
{{..... output omitted....}}
/dev/dsk/c1t0d0s0   /dev/rdsk/c1t0d0s0   /   ufs   1   no   logging
{{..... output omitted....}}
NOTE
This file will be further modified by SUNWscr, in a future module.
Step 1.7.35
For local (manual) installations, it is a good idea to verify system diagnostics run successfully before installing SunCluster software.
For example, at this time, follow EIS recommendations for using SunVTS to run CPU/MEMORY stress tests, and verify that no failures occur. At this point in the installation, however, do NOT execute any tests which overwrite disk data, unless you plan to reload software again.
Prior to further customization, verify system diagnostics are executed successfully on each cluster node. For example, EIS Installation (WGS) procedures require running SunVTS for a minimum of two hours to verify that each Solaris node can successfully complete diagnostics.
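As a sketch only, assuming SunVTS is installed in its default /opt/SUNWvts location and that you are working from the console, SunVTS can be started as shown below; select the CPU and memory test groups and avoid any disk tests that write to the media.
# /opt/SUNWvts/bin/sunvts        {{launches the SunVTS user interface}}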
NOTE
At this time, it is important to resolve any diagnostic errors that occur before performing the next step in the cluster installation.