Configuring Solaris Volume Manager Software in the Sun Cluster 3.0 Environment
In this section, we present a sample configuration that consists of two nodes, phys-node-1 and phys-node-2. Each node has two internal drives and is connected to two hardware RAID devices. On each disk array, two logical unit numbers (LUNs) are created. Sun Cluster 3.0 software is installed on both nodes, and a quorum device is chosen. The two LUNs on the first array are known in the cluster as /dev/did/dsk/d3 and /dev/did/dsk/d4. The LUNs on the second array are /dev/did/dsk/d5 and /dev/did/dsk/d6. We create a diskset, nfsds, that contains one mirror with two submirrors. On top of the mirror, we create four soft partitions that can be used to create file systems. Because this setup has two storage arrays, we must configure mediators.
To Configure Solaris Volume Manager Software on All Cluster Nodes
Create local replicas.
Because Solaris Volume Manager software is part of the Solaris 9 OE distribution, you do not have to install it separately. However, for the software to operate, it must have at least one local replica. The preferred minimum is three replicas. Use the following commands to create the local state database replicas:
# metadb -af -c 2 c1t0d0s7
# metadb -a -c 2 c2t0d0s7
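To confirm that the replicas were created, you can list them with the metadb command; the -i option adds a legend that explains the status flags. (Output is not shown here because it varies by system.)
# metadb -i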
Document the commands you use to create the local state database replicas and keep the information offline so you can refer to it if you have to recreate the configuration.
Mirror the boot disk.
We highly recommend that you mirror boot disks on all nodes. The procedure to do this is the same on a cluster node as it is on a noncluster node, with the following exceptions:
After installing the cluster, the device to mount in /etc/vfstab for the global devices file system changes from a logical Solaris OE name (c#t#d#s#) to a DID name. However, you should use the logical Solaris OE name (c#t#d#s#) as the building block for the first submirror.
The volume containing the /global/.devices/node@nodeid file system should have a unique name on each cluster node.
For example, if /dev/md/dsk/d91 on node1 is mounted globally under /global/.devices/node@node1, create a volume /dev/md/dsk/d92 on node2 to be mounted under /global/.devices/node@node2. (The device to mount should be unique in the mnttab maintained by the Solaris OE kernel. Because the global devices file systems on each node are mounted globally and, therefore, appear in the mnttab on both nodes, they should have different devices to mount in /etc/vfstab.)
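As an illustration only, the resulting /etc/vfstab entries might look like the following. The metadevice names and node IDs are taken from the example above, and the mount-at-boot and mount-option fields are typical values; keep whatever values the scinstall utility placed in your /etc/vfstab.
On node1 (node ID 1):
/dev/md/dsk/d91 /dev/md/rdsk/d91 /global/.devices/node@1 ufs 2 no global
On node2 (node ID 2):
/dev/md/dsk/d92 /dev/md/rdsk/d92 /global/.devices/node@2 ufs 2 no global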
The sdmsetup script, described in the Sun BluePrints OnLine article "Configuring Boot Disks With Solaris Volume Manager Software," is available from the Sun BluePrints OnLine Web site at:
http://www.sun.com/solutions/blueprints/online.html
This script is cluster-aware and helps automate the mirroring and cloning of boot disks of cluster nodes.
[Optional] Change the interval of the mdmonitord daemon on all the nodes. Edit the /etc/rc2.d/S95svm.sync script to add a time interval (in seconds) for periodic checking, as follows:
if [ -x $MDMONITORD ]; then
        $MDMONITORD -t 3600
        error=$?
        case $error in
        0)      ;;
        *)      echo "Could not start $MDMONITORD. Error $error."
                ;;
        esac
fi
Restart the mdmonitord daemon to make the changes effective, as follows:
# /etc/rc2.d/S95svm.sync stop
# /etc/rc2.d/S95svm.sync start
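You can verify that the daemon restarted, for example with the pgrep command. (The process ID shown in the output will differ on your system.)
# pgrep -l mdmonitord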
To Create Disksets
Document the commands you use to create the disksets and to add the disks, and keep the information offline.
Define the diskset and the nodes that can master the diskset as follows:
# metaset -s nfsds -a -h phys-node-1 phys-node-2
Run the metaset command on one node only. The metaset command issues RPC calls to the other node to create an entry for this diskset in the other node's local replicas.
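To confirm that both nodes now know about the diskset, you can display it from either node. The output, omitted here because it varies, lists the hosts that can master the set and shows which node, if any, currently owns it.
# metaset -s nfsds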
Add drives to the diskset using their fully qualified DID names as follows:
# metaset -s nfsds -a /dev/did/rdsk/d3 /dev/did/rdsk/d4 /dev/did/rdsk/d5 /dev/did/rdsk/d6
Again, RPC calls are made to the other node to ensure that it also has the necessary information in its local state database replicas.
When Solaris Volume Manager software creates disksets, it automatically tells the cluster software to create an entry in the cluster database for the diskset. This entry is called a device group, which you can check using the following command:
# scstat -D
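If you later need the device group to be primaried on a particular node (for example, when you capture the md.tab entries on the current primary later in this procedure), you can switch it over with the scswitch command. The node name below is simply the one from this example; see the scswitch(1M) man page for the exact options on your release.
# scswitch -z -D nfsds -h phys-node-1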
Configure both nodes as mediator hosts for the diskset, as follows:
# metaset -s nfsds -a -m phys-node-1
# metaset -s nfsds -a -m phys-node-2
Check the status of the mediators as follows:
# medstat -s nfsds
To Create Volumes and File Systems
Create the following volumes.
A mirrored volume composed of two striped submirrors. Each stripe is composed of slice 0 of the two LUNs in one storage array.
Four soft partitions on top of this mirror.
# metainit nfsds/d1 1 2 /dev/did/rdsk/d3s0 /dev/did/rdsk/d4s0
# metainit nfsds/d2 1 2 /dev/did/rdsk/d5s0 /dev/did/rdsk/d6s0
# metainit nfsds/d0 -m nfsds/d1
# metattach nfsds/d0 nfsds/d2
# metainit nfsds/d10 -p nfsds/d0 200m
# metainit nfsds/d11 -p nfsds/d0 200m
# metainit nfsds/d12 -p nfsds/d0 200m
# metainit nfsds/d13 -p nfsds/d0 200m
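After the metattach command, the second submirror begins resynchronizing with the first. You can watch the resync progress, and review the soft partitions, before creating file systems. (Output is omitted because it varies.)
# metastat -s nfsds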
Create an /etc/lvm/md.tab file on both nodes. Keep a copy of this file offline so you can use it to recreate the configuration, if it becomes necessary to do so.
On the node that is the current primary of the diskset, run the following command:
# metastat -s nfsds -p >> /etc/lvm/md.tab
On the other node, run the following command:
# metaset -s nfsds -p | tail +2 >> /etc/lvm/md.tab
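Because metastat -p emits lines in md.tab format, the entries captured on the primary node look similar to the following. The soft-partition offsets and block counts shown here are purely illustrative, and additional options such as the stripe interlace may also appear; use whatever your own output reports.
nfsds/d0 -m nfsds/d1 nfsds/d2 1
nfsds/d1 1 2 /dev/did/rdsk/d3s0 /dev/did/rdsk/d4s0
nfsds/d2 1 2 /dev/did/rdsk/d5s0 /dev/did/rdsk/d6s0
nfsds/d10 -p nfsds/d0 -o 32 -b 409600
nfsds/d11 -p nfsds/d0 -o 409664 -b 409600
nfsds/d12 -p nfsds/d0 -o 819296 -b 409600
nfsds/d13 -p nfsds/d0 -o 1228928 -b 409600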
Create file systems on the soft partitions and mount them globally.
On one node, create the file systems as shown here:
# newfs /dev/md/nfsds/rdsk/d10
# newfs /dev/md/nfsds/rdsk/d11
# newfs /dev/md/nfsds/rdsk/d12
# newfs /dev/md/nfsds/rdsk/d13
On all nodes, create mount points as follows:
# mkdir -p /global/nfsd/fs1 /global/nfsd/fs2 /global/nfsd/fs3 /global/nfsd/fs4
On all nodes, add the following entries to /etc/vfstab:
/dev/md/nfsds/dsk/d10 /dev/md/nfsds/rdsk/d10 /global/nfsd/fs1 ufs 2 yes global
/dev/md/nfsds/dsk/d11 /dev/md/nfsds/rdsk/d11 /global/nfsd/fs2 ufs 2 yes global
/dev/md/nfsds/dsk/d12 /dev/md/nfsds/rdsk/d12 /global/nfsd/fs3 ufs 2 yes global
/dev/md/nfsds/dsk/d13 /dev/md/nfsds/rdsk/d13 /global/nfsd/fs4 ufs 2 yes global
On one node, mount the global file systems as follows:
# mount /global/nfsd/fs1
# mount /global/nfsd/fs2
# mount /global/nfsd/fs3
# mount /global/nfsd/fs4
Test the configuration.
On one node, run the following commands:
# metaset
# scstat -D
On all nodes, run the following command:
# df -k