Example 2: Logical Device With VxVM
To illustrate what has been presented, the following paragraphs describe a method to access a replicated set of data created with an enterprise-class subsystem: the StorEdge SE99x0 systems. ShadowImage is used to replicate the LUNs. The data to access is configured as a volume under VERITAS Volume Manager 3.5 with the latest patches applied.
The primary volume is called labvol. It is part of the disk group labdg, and is a RAID 0 stripe of five LUNs:
VxVM disk group             labdg
VxVM volumes                labvol, RAID 0
VxVM device names (LUNs)    HDS99100_0, HDS99100_1, HDS99100_2, HDS99100_3, HDS99100_4
This example covers the trickiest situation: accessing the primary and the secondary data from the same host. In this situation, the primary and secondary LUNs are both visible from the same host. Because they are physically different, VxVM assigns them distinct device names:
VxVM device name    VxVM disk name
HDS99100_0          9910_0
HDS99100_1          9910_1
HDS99100_2          9910_2
HDS99100_3          9910_3
HDS99100_4          9910_4
HDS99100_10         9910_0_S
HDS99100_11         9910_1_S
HDS99100_12         9910_2_S
HDS99100_13         9910_3_S
HDS99100_14         9910_4_S
ShadowImage is set up so that LUNs are paired, under the consistency group LAB, as follows:
Primary Disk       Secondary Disk
HDS99100_0    -->  HDS99100_10
HDS99100_1    -->  HDS99100_11
HDS99100_2    -->  HDS99100_12
HDS99100_3    -->  HDS99100_13
HDS99100_4    -->  HDS99100_14
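With the CCI instance configured, this pairing can be checked directly from the host. A minimal sketch using the same group name (the exact output columns depend on the CCI version):

root@storage103 # pairdisplay -g LAB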
As described in the previous section, to access the replicated data, you must ensure that every layer of the I/O stack is correctly set up. In this example, the steps are:
To Ensure Consistent and Accessible Replicated Disks in the Physical Layer
Suspend the replication:
Before accessing the replicated LUNs, stop (suspend) the replication and make sure every LUN is in a suspended (psus) state:
root@storage103 # pairsplit -g LAB
root@storage103 # pairevtwait -g LAB -s psus -t 1800
Of course, pairsplit must be issued only when all the LUNs are already synchronized (in the PAIR state). Failing to do so results in corrupted secondary devices.
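If there is any doubt about the current state, you can wait for full synchronization before splitting. A minimal sketch reusing the same CCI commands shown above (the 1800-second timeout is kept from the example):

root@storage103 # pairevtwait -g LAB -s pair -t 1800
root@storage103 # pairsplit -g LAB
root@storage103 # pairevtwait -g LAB -s psus -t 1800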
To Detect the Replicated Disks in the Driver Layer
Scan the disks and verify that they are all visible and accessible from the host.
This is achieved using the VxVM command vxdiskconfig, which scans the I/O buses for new devices and reads the private region of every disk:
root@storage103 # vxdiskconfig
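On Solaris, vxdiskconfig essentially rebuilds the device tree and then forces VxVM to rescan it. If the command is not available, the following sequence is assumed to be roughly equivalent:

root@storage103 # devfsadm
root@storage103 # vxdctl enable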
At this stage, the private region of each cloned disk contains the same entries as that of its primary disk, in particular the diskname and diskid parameters.
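You can observe this duplication by dumping the private region information of a primary disk and of its clone. A minimal sketch (the egrep pattern is an assumption about the vxdisk list output format):

root@storage103 # vxdisk list HDS99100_0 | egrep 'disk:|group:'
root@storage103 # vxdisk list HDS99100_10 | egrep 'disk:|group:'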
To Reconstruct the Replicated Logical Groups and Volumes in the LVM Layer
Modify the primary disk group configuration to reflect the new devices, and apply the modified configuration to a newly created disk group.
Save the disk group configuration of the primary data:

root@storage103 # vxprint -g labdg -m > labdg.vxprint

This command dumps the whole configuration of the disk group labdg into a file. The file contains more information than necessary; the records of concern are the subdisks, plexes, and volumes, which can be extracted by running:

root@storage103 # i=`grep -n "sd " labdg.vxprint | cut -d: -f1 | head -1`
root@storage103 # more +$i labdg.vxprint > labdg_S.vxprint.tmp
root@storage103 # mv labdg_S.vxprint.tmp labdg_S.vxprint

The file labdg_S.vxprint is the file to push into the disk group configuration of the replicated disks to re-create the volumes.

Clean the cloned disks to re-initialize the disk ID of each of them:

root@storage103 # vxdisksetup -f -i HDS99100_10
root@storage103 # vxdisksetup -f -i HDS99100_11
root@storage103 # vxdisksetup -f -i HDS99100_12
root@storage103 # vxdisksetup -f -i HDS99100_13
root@storage103 # vxdisksetup -f -i HDS99100_14

Create the new disk group and populate it with the cloned disks, assigning the secondary disk names:

root@storage103 # vxdg init labdg_S \
9910_0_S=HDS99100_10 \
9910_1_S=HDS99100_11 \
9910_2_S=HDS99100_12 \
9910_3_S=HDS99100_13 \
9910_4_S=HDS99100_14

Modify labdg_S.vxprint to reflect the disk configuration of the replicated disk group by replacing every occurrence of a primary diskname and device name with the corresponding diskname and device name of the secondary disks.

To ease this process, first create a file containing the device names of the primary and secondary disks:

root@storage103 # cat paires.txt
HDS99100_0 HDS99100_10
HDS99100_1 HDS99100_11
HDS99100_2 HDS99100_12
HDS99100_3 HDS99100_13
HDS99100_4 HDS99100_14

The string replacement is done with the following Bourne shell loop:

root@storage103 # while read pdev sdev
do
pname=`vxdisk list | grep -w $pdev | awk '{print $3}'`
sname=`vxdisk list | grep -w $sdev | awk '{print $3}'`
cat labdg_S.vxprint | sed -e "s/$pname/$sname/g" -e "s/$pdev/$sdev/g" \
> labdg_S.vxprint.tmp
mv labdg_S.vxprint.tmp labdg_S.vxprint
done < paires.txt

At this stage, the file labdg_S.vxprint reflects the configuration of the new disk group.

Apply the volume configuration file to the new disk group:

root@storage103 # vxmake -g labdg_S -d labdg_S.vxprint

Start the volume in the new disk group:

root@storage103 # vxvol -g labdg_S init active labvol
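Before moving up to the file system layer, it can be useful to verify that the re-created volume matches the layout of the primary. A minimal check with the standard vxprint listing flags:

root@storage103 # vxprint -g labdg_S -ht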
To Make the Replicated File System Consistent in the File System Layer
Check the consistency of the file system:

root@storage103 # fsck /dev/vx/rdsk/labdg_S/labvol
The fsck command lists any corrupted files, and action must be taken to recover them. This check matters mostly after a crash: files that were being created or modified at the moment of the crash might be corrupted.
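If the volume hosts a VxFS file system (an assumption; the file system type is not specified in this example), a full structural check can be forced explicitly:

root@storage103 # fsck -F vxfs -o full -y /dev/vx/rdsk/labdg_S/labvol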
Mount the file system:

root@storage103 # mount /dev/vx/dsk/labdg_S/labvol /mnt/LAB_S
To Make the Data Ready for the Application in the Application Layer
At this stage, you can consider the replicated data accessible. Some application-specific actions might still be required, such as modifying configuration files or links, or running other cleanup or recovery procedures.
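As an illustration only, a hypothetical cleanup of this kind might rewrite the mount point referenced by an application configuration file (the file name app.conf and the original path /mnt/LAB are assumptions, not part of the setup above):

# Hypothetical example: app.conf and /mnt/LAB are illustrative names.
root@storage103 # sed -e 's|/mnt/LAB/|/mnt/LAB_S/|g' /mnt/LAB_S/app.conf \
> /tmp/app.conf.new
root@storage103 # mv /tmp/app.conf.new /mnt/LAB_S/app.conf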