Modifying PSA Plug-ins Using the CLI
The CLI provides a range of options to configure, customize, and modify PSA plug-in settings. I cover the various configurable options and their use cases as we go.
Available CLI Tools and Their Options
New in vSphere 5.0 is the expanded role of esxcli as the main CLI utility for managing ESXi 5.0. The same binary is used whether you log on to the host locally or remotely via SSH, and it is also used by vMA and vCLI. This simplifies administrative tasks and improves the portability of scripts written to use esxcli.
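As a quick illustration of that portability, the same claim rule listing can be run locally on the host or remotely through vCLI/vMA by adding connection options. This is only a sketch; the host name and user are placeholders, and you are prompted for a password if one is not supplied:
esxcli storage core claimrule list
esxcli --server esxi01.example.com --username root storage core claimrule list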
ESXCLI Namespace
Figure 5.39 shows the command-line help for esxcli.
Figure 5.39. Listing esxcli namespace
The relevant namespace for this chapter is storage. This is what most of the examples use. Figure 5.40 shows the command-line help for the storage namespace:
esxcli storage
Figure 5.40. Listing esxcli storage namespace
Table 5.11 lists the namespaces available under the storage namespace and their usage; a representative command from each follows the table.
Table 5.11. Available Namespaces in the storage Namespace
Name Space | Usage
---------- | -----
core | Use this for anything on the PSA level, such as other MPPs, PSA claim rules, and so on.
nmp | Use this for NMP and its “children,” such as SATP and PSP.
vmfs | Use this for handling VMFS volumes on snapshot LUNs, managing extents, and upgrading VMFS manually.
filesystem | Use this for listing, mounting, and unmounting supported datastores.
nfs | Use this to mount, unmount, and list NFS datastores.
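For orientation, here is one representative command from each of these namespaces. This is only a sketch: output is omitted, and the vmfs snapshot subcommand returns entries only when unresolved snapshot volumes are present on the host.
esxcli storage core claimrule list
esxcli storage nmp satp list
esxcli storage vmfs snapshot list
esxcli storage filesystem list
esxcli storage nfs list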
Adding a PSA Claim Rule
PSA claim rules can be for MP, Filter, and VAAI classes. I cover the latter two in Chapter 6.
Following are a few examples of claim rules for the MP class.
Adding a Rule to Change Certain LUNs to Be Claimed by a Different MPP
In general, most arrays function properly using the default PSA claim rules. In certain configurations, you might need to specify a different PSA MPP.
A good example is the following scenario:
You installed PowerPath/VE on your ESXi 5.0 host but then later realized that you have some MSCS cluster nodes running on that host and these nodes use Passthrough RDMs (Physical compatibility mode RDM). Because VMware does not support third-party MPPs with MSCS, you must exclude the LUNs from being managed by PowerPath/VE.
You need to identify the device ID (NAA ID) of each of the RDM LUNs and then identify the paths to each LUN. You use these paths to create the claim rule.
Here is the full procedure:
Power off one of the MSCS cluster nodes and locate its home directory. If you cannot power off the VM, skip to Step 6.
Assuming that the cluster node is located on Clusters_Datastore in a directory named node1, the command and its output would look like Listing 5.1.
Listing 5.1. Locating the RDM Filename
#cd /vmfs/volumes/Clusters_datastore/node1
#fgrep scsi1 *.vmx |grep fileName
scsi1:0.fileName = "/vmfs/volumes/4d8008a2-9940968c-04df-001e4f1fbf2a/node1/quorum.vmdk"
scsi1:1.fileName = "/vmfs/volumes/4d8008a2-9940968c-04df-001e4f1fbf2a/node1/data.vmdk"
The last two lines are the output of the command. They show the RDM filenames for the node’s shared storage, which are attached to the virtual SCSI adapter named scsi1.
Using the RDM filenames, including the path to the datastore, you can identify the logical device name to which each RDM maps as shown in Listing 5.2.
Listing 5.2. Identifying RDM’s Logical Device Name Using the RDM Filename
#vmkfstools --queryrdm /vmfs/volumes/4d8008a2-9940968c-04df-001e4f1fbf2a/node1/quorum.vmdk
Disk /vmfs/volumes/4d8008a2-9940968c-04df-001e4f1fbf2a/node1/quorum.vmdk is a Passthrough Raw Device Mapping
Maps to: vml.02000100006006016055711d00cff95e65664ee011524149442035
You may also use the shorthand version using -q instead of --queryrdm.
This example is for the quorum.vmdk. Repeat the same process for the remaining RDMs. The device name is prefixed with vml and is highlighted.
Identify the NAA ID using the vml ID as shown in Listing 5.3.
Listing 5.3. Identifying NAA ID Using the Device vml ID
#esxcfg-scsidevs --list --device vml.02000100006006016055711d00cff95e65664ee011524149442035 |grep Display
   Display Name: DGC Fibre Channel Disk (naa.6006016055711d00cff95e65664ee011)
You may also use the shorthand version:
#esxcfg-scsidevs -l -d vml.02000100006006016055711d00cff95e65664ee011524149442035 |grep Display
Now, use the NAA ID (highlighted in Listing 5.3) to identify the paths to the RDM LUN.
Figure 5.41 shows the output of the following command:
esxcfg-mpath -m |grep naa.6006016055711d00cff95e65664ee011 | sed 's/ fc.*//'
Figure 5.41. Listing runtime pathnames to an RDM LUN
You may also use the verbose version of the command:
esxcfg-mpath --list-map |grep naa.6006016055711d00cff95e65664ee011 | sed 's/ fc.*//'
On each line, this truncates the output from “fc” to the end of the line. If the protocol in use is not FC, replace that with “iqn” for iSCSI or “fcoe” for FCoE.
The output shows that the LUN with the identified NAA ID is LUN 1 and has four paths shown in Listing 5.4.
Listing 5.4. RDM LUN’s Paths
vmhba3:C0:T1:L1
vmhba3:C0:T0:L1
vmhba2:C0:T1:L1
vmhba2:C0:T0:L1
If you cannot power off the VMs to run Steps 1–5, you may use the UI instead.
- Use the vSphere client to navigate to the MSCS node VM. Right-click the VM in the inventory pane and then select Edit Settings (see Figure 5.42).
Figure 5.42. Editing VM’s settings via the UI
- In the resulting dialog (see Figure 5.43), locate the RDM listed in the Hardware tab. You can identify this by the summary column showing Mapped Raw LUN. On the top right-hand side you can locate the Logical Device Name, which is prefixed with vml in the field labeled Physical LUN and Datastore Mapping File.
Figure 5.43. Virtual machine properties dialog
- Double-click the text in that field. Right-click the selected text and click Copy as shown in Figure 5.44.
Figure 5.44. Copying RDM’s VML ID (Logical Device Name) via the UI
- You may use the copied text to follow Steps 4 and 5. Otherwise, you may instead get the list of paths to the LUN using the Manage Paths button in the dialog shown in Figure 5.44.
- In the Manage Paths dialog (see Figure 5.45), click the Runtime Name column to sort it. Write down the list of paths shown there.
Figure 5.45. Listing the runtime pathnames via the UI
- The list of paths shown in Figure 5.45 are
vmhba1:C0:T0:L1
vmhba1:C0:T1:L1
vmhba2:C0:T0:L1
vmhba2:C0:T1:L1
- Create the claim rule.
I use the list of paths obtained in Step 5 to create the rule on the ESXi host from which the list was obtained.
The Ground Rules for Creating the Rule
- The rule number must be lower than any of the rules created by PowerPath/VE installation. By default, they are assigned rules 250–320 (refer to Figure 5.26 for the list of PowerPath claim rules).
Figure 5.46. Adding new MP claim rules
- The rule number must be higher than 101 because this is used by the Dell Mask Path rule. This prevents claiming devices masked by that rule.
- If you have created other claim rules on this host in the past, choose rule numbers that do not conflict with those earlier rules.
- If you must place the new rules in an order earlier than an existing rule but there are no rule numbers available, you may have to move one of the lower-numbered rules higher by the number of rules you plan on creating.
For example, suppose you previously created rules numbered 102–110, and rule 109 must not be listed before the new rules you are creating. If you are creating four new rules, you need to assign them rule numbers 109–112. To do that, you must first move rules 109 and 110 to numbers 113 and 114. To avoid having to do this in the future, consider leaving gaps in the rule numbers between sections.
An example of moving a rule is:
esxcli storage core claimrule move --rule 109 --new-rule 113
esxcli storage core claimrule move --rule 110 --new-rule 114
You may also use the shorthand version:
esxcli storage core claimrule move -r 109 -n 113
esxcli storage core claimrule move -r 110 -n 114
Now, let’s proceed with adding the new claim rules:
- The set of four commands shown in Figure 5.46 creates rules numbered 102–105; a sketch of these commands appears after this list. The rule criteria are as follows:
- The claim rule type is “location” (-t location).
- The location is specified using each path to the same LUN in the format:
- -A or --adapter vmhbaX, where X is the vmhba number associated with the path.
- -C or --channel Y, where Y is the channel number associated with the path.
- -T or --target Z, where Z is the target number associated with the path.
- -L or --lun N, where N is the LUN number.
- The plug-in name is NMP, which means that this claim rule is for NMP to claim the paths listed in each rule created.
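Figure 5.46 itself is not reproduced here; based on the criteria above and the paths in Listing 5.4, the four commands would take roughly this form (a sketch, with rule numbers 102–105 taken from the text):
esxcli storage core claimrule add --rule 102 --type location --adapter vmhba2 --channel 0 --target 0 --lun 1 --plugin NMP
esxcli storage core claimrule add --rule 103 --type location --adapter vmhba2 --channel 0 --target 1 --lun 1 --plugin NMP
esxcli storage core claimrule add --rule 104 --type location --adapter vmhba3 --channel 0 --target 0 --lun 1 --plugin NMP
esxcli storage core claimrule add --rule 105 --type location --adapter vmhba3 --channel 0 --target 1 --lun 1 --plugin NMP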
- Repeat Step 1 for each LUN you want to reconfigure.
- Verify that the rules were added successfully. To list the current set of claim rules, run the command shown in Figure 5.47:
esxcli storage core claimrule list
Figure 5.47. Listing added claim rules
Notice that the four new rules are now listed, but the Class column shows them as file. This means that the configuration files were updated successfully but the rules were not loaded into memory yet.
Figure 5.48 shows a sample command line that implements a wildcard for the target. Notice that this results in creating two rules instead of four and the “target” match is *.
Figure 5.48. Adding MP claim rules using a wildcard
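As a sketch of the wildcard variant shown in Figure 5.48, omitting the --target option leaves the target as a wildcard (*), so the four rules collapse into two (the rule numbers here are assumptions for illustration):
esxcli storage core claimrule add --rule 102 --type location --adapter vmhba2 --channel 0 --lun 1 --plugin NMP
esxcli storage core claimrule add --rule 103 --type location --adapter vmhba3 --channel 0 --lun 1 --plugin NMP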
- Before loading the new rules, you must first unclaim the paths to the LUN specified in that rule set. You use the NAA ID as the device ID:
esxcli storage core claiming unclaim --type device --device naa.6006016055711d00cff95e65664ee011
You may also use the shorthand version:
esxcli storage core claiming unclaim -t device -d naa.6006016055711d00cff95e65664ee011
- Load the new claim rules so that the paths to the LUN get claimed by NMP:
esxcli storage core claimrule load
- Use the following command to list the claim rules to verify that they were successfully loaded:
esxcli storage core claimrule list
Now you see that each of the new rules is listed twice—once with file class and once with runtime class—as shown in Figure 5.49.
Figure 5.49. Listing MP claim rules
How to Delete a Claim Rule
Deleting a claim rule must be done with extreme caution. Make sure that you are deleting the rule you intend to delete. Prior to doing so, make sure to collect a “vm-support” dump by running vm-support from a command line at the host or via SSH. Alternatively, you can select the menu option Collect Diagnostics Data via the vSphere client.
To delete a claim rule, follow this procedure via the CLI (locally, via SSH, vCLI, or vMA):
- List the current claim rules set and identify the claim rule or rules you want to delete. The command to list the claim rules is similar to what you ran in Step 6 and is shown in Figure 5.49.
- For this procedure, I am going to use the previous example and delete the four claim rules I added earlier, which are rules 102–105. The command for doing so is shown in Figure 5.50.
Figure 5.50. Removing claim rules via the CLI
You may also run the verbose command:
esxcli storage core claimrule remove --rule <rule-number>
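Applied to this example, removing the four rules would look roughly like this (a sketch using the shorthand -r option):
esxcli storage core claimrule remove -r 102
esxcli storage core claimrule remove -r 103
esxcli storage core claimrule remove -r 104
esxcli storage core claimrule remove -r 105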
- Running the claimrule list command now results in output similar to Figure 5.51. Observe that even though I just deleted the claim rules, they still show up in the list. The reason is that I have not yet loaded the modified claim rules; that is why the deleted rules still show runtime in their Class column.
Figure 5.51. Listing MP claim rules
- Because I know from the previous procedure the device ID (NAA ID) of the LUN whose claim rules I deleted, I ran the unclaim command with the -t or --type option set to device and then specified the -d or --device option with the NAA ID. I then loaded the claim rules using the load option. Notice that the deleted claim rules are no longer listed (see Figure 5.52).
Figure 5.52. Unclaiming a device using its NAA ID and then loading the claim rules
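The sequence in Figure 5.52 amounts to roughly the following (a sketch using the NAA ID from the earlier example):
esxcli storage core claiming unclaim -t device -d naa.6006016055711d00cff95e65664ee011
esxcli storage core claimrule load
esxcli storage core claimrule list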
You may also use the verbose command options:
esxcli storage core claiming unclaim --type device --device <Device-ID>
You may need to claim the device after loading the claim rules by repeating the claiming command with the claim option instead of the unclaim option:
esxcli storage core claiming claim -t device -d <device-ID>
How to Mask Paths to a Certain LUN
Masking a LUN is a similar process to that of adding claim rules to claim certain paths to a LUN. The main difference is that the plug-in name is MASK_PATH instead of NMP as used in the previous example. The end result is that the masked LUNs are no longer visible to the host.
- Assume that you want to mask LUN 1 used in the previous example and that it still has the same NAA ID. I first run a command to list the LUN as seen by the ESXi host to show the before state (see Figure 5.53).
Figure 5.53. Listing LUN properties using its NAA ID via the CLI
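Figure 5.53 is not reproduced here; the before-state listing can be produced with a command along these lines (a sketch using the same NAA ID; the figure may use a different listing command):
esxcli storage core device list -d naa.6006016055711d00cff95e65664ee011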
You may also use the verbose command option --device instead of -d.
- Add the MASK_PATH claim rules, as shown in Figure 5.54.
Figure 5.54. Adding Mask Path claim rules
As you see in Figure 5.54, I added rule numbers 110 and 111 to have the MASK_PATH plug-in claim all targets to LUN 1 via vmhba2 and vmhba3; a sketch of these commands follows. The claim rules are not yet loaded, hence the file class listings and the absence of runtime class listings.
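Based on that description and the unclaim syntax shown later in this section, the two masking rules would be added roughly as follows (a sketch; the figure itself is not reproduced):
esxcli storage core claimrule add --rule 110 --type location --adapter vmhba2 --channel 0 --lun 1 --plugin MASK_PATH
esxcli storage core claimrule add --rule 111 --type location --adapter vmhba3 --channel 0 --lun 1 --plugin MASK_PATH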
- Load and then list the claim rules (see Figure 5.55).
Figure 5.55. Loading and listing claim rules after adding Mask Path rules
Now you see the claim rules listed with both file and runtime classes.
- Use the reclaim option to unclaim and then claim the LUN using its NAA ID. Check if it is still visible (see Figure 5.56).
Figure 5.56. Reclaiming the paths after loading the Mask Path rules
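The reclaim command in Figure 5.56 takes roughly this form (a sketch using the same NAA ID):
esxcli storage core claiming reclaim -d naa.6006016055711d00cff95e65664ee011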
You may also use the verbose command option --device instead of -d.
Notice that after reclaiming the LUN, it is now an Unknown device.
How to Unmask a LUN
To unmask this LUN, reverse the preceding steps and then reclaim the LUN as follows:
- Remove the MASK_PATH claim rules (numbers 110 and 111) as shown in Figure 5.57.
Figure 5.57. Removing the Mask Path claim rules
You may also use the verbose command options:
esxcli storage core claimrule remove --rule <rule-number>
- Unclaim the paths to the LUN in the same fashion used when adding the MASK_PATH claim rules, that is, using -t location and omitting the -T option so that the target is a wildcard.
- Rescan using both HBA names.
- Verify that the LUN is now visible by running the list command.
Figure 5.58 shows the outputs of Steps 2–4; a sketch of the full sequence appears after the verbose example below.
Figure 5.58. Unclaiming the Masked Paths
You may also use the verbose command options:
esxcli storage core claiming unclaim --type location --adapter vmhba2 --channel 0 --lun 1 --plugin MASK_PATH
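Putting Steps 1–4 together, the unmasking sequence looks roughly like the following. This is a sketch only: it assumes the modified rules are reloaded after removal, and it uses esxcfg-rescan for the per-HBA rescan.
esxcli storage core claimrule remove -r 110
esxcli storage core claimrule remove -r 111
esxcli storage core claimrule load
esxcli storage core claiming unclaim --type location --adapter vmhba2 --channel 0 --lun 1 --plugin MASK_PATH
esxcli storage core claiming unclaim --type location --adapter vmhba3 --channel 0 --lun 1 --plugin MASK_PATH
esxcfg-rescan vmhba2
esxcfg-rescan vmhba3
esxcli storage core device list -d naa.6006016055711d00cff95e65664ee011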
Changing PSP Assignment via the CLI
The CLI enables you to modify the PSP assignment per device. It also enables you to change the default PSP for a specific storage array or family of arrays. I cover the former use case first because it is similar to what you did via the UI in the previous section. I follow with the latter use case.
Changing PSP Assignment for a Device
To change the PSP assignment for a given device, you may follow this procedure:
- Log on to the ESXi 5 host locally or via SSH as root or using vMA 5.0 as vi-admin.
- Identify the device ID for each LUN you want to reconfigure:
esxcfg-mpath -b |grep -B1 "fc Adapter"| grep -v -e "--" |sed 's/ Adapter.*//'
You may also use the verbose version of this command:
esxcfg-mpath --list-paths | grep -B1 "fc Adapter" | grep -v -e "--" | sed 's/ Adapter.*//'
Listing 5.5 shows the output of this command.
Listing 5.5. Listing Device ID and Its Paths
naa.60060e8005275100000027510000011a : HITACHI Fibre Channel Disk (naa.60060e8005275100000027510000011a)
   vmhba2:C0:T0:L1 LUN:1 state:active fc
   vmhba2:C0:T1:L1 LUN:1 state:active fc
   vmhba3:C0:T0:L1 LUN:1 state:active fc
   vmhba3:C0:T1:L1 LUN:1 state:active fc
From there, you can identify the device ID (in this case, it is the NAA ID). Note that this output was collected from a host attached to a Hitachi Universal Storage Platform V (USP V), USP VM, or Virtual Storage Platform (VSP) array.
This output means that LUN1 has device ID naa.60060e8005275100000027510000011a.
- Using the device ID you identified, run this command:
esxcli storage nmp device set -d <device-id> --psp=<psp-name>
You may also use the verbose version of this command:
esxcli storage nmp device set --device <device-id> --psp=<psp-name>
For example:
esxcli storage nmp device set -d naa.60060e8005275100000027510000011a --psp=VMW_PSP_FIXED
This command sets the device with ID naa.60060e8005275100000027510000011a to be claimed by the PSP named VMW_PSP_FIXED.
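To verify the change, you can list the device's NMP configuration afterward. A sketch of such a check, using the device ID from the example:
esxcli storage nmp device list -d naa.60060e8005275100000027510000011a
The Path Selection Policy field in the output should now show VMW_PSP_FIXED.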
Changing the Default PSP for a Storage Array
There is no simple way to change the default PSP for a specific storage array unless that array is claimed by an SATP that is specific for it. In other words, if it is claimed by an SATP that also claims other brands of storage arrays, changing the default PSP affects all storage arrays claimed by the SATP. However, you may add an SATP claim rule that uses a specific PSP based on your storage array’s Vendor and Model strings:
- Identify the array’s Vendor and Model strings. You can identify these strings by running
esxcli storage core device list -d <device ID> |grep 'Vendor\|Model'
Listing 5.6 shows an example for a device on an HP P6400 Storage Array.
Listing 5.6. Listing Device’s Vendor and Model Strings
esxcli storage core device list -d naa.600508b4000f02cb0001000001660000 |grep 'Model\|Vendor'
   Vendor: HP
   Model: HSV340
- In this example, the Vendor String is HP and the Model is HSV340.
- Use the identified values in the following command:
esxcli storage nmp satp rule add --satp <current-SATP-USED> --vendor <Vendor string> --model <Model string> --psp <PSP-name> --description <Description>
In this example, the command would be like this:
esxcli storage nmp satp rule add --satp VMW_SATP_EVA --vendor HP --model HSV340 --psp VMW_PSP_FIXED --description "Manually added to use FIXED"
The command runs silently if it succeeds and returns an error if it fails.
An example of an error:
"Error adding SATP user rule: Duplicate user rule found for SATP VMW_SATP_EVA matching vendor HP model HSV340 claim Options PSP VMW_PSP_FIXED and PSP Options"
This error means that a rule already exists with these options. I simulated this error by first adding the rule and then rerunning the same command. To view the existing SATP claim rules for all HP storage arrays, you may run the following command:
esxcli storage nmp satp rule list |grep 'Name\|---\|HP' |less -S
Figure 5.59 shows the output of this command (I cropped some blank columns, including Device, for readability):
Figure 5.59. Listing SATP rule list for HP devices
You can easily identify non-system rules where the Rule Group column value is user. Such rules were added by a third-party MPIO installer or manually added by an ESXi 5 administrator. The rule in this example shows that I had already added VMW_PSP_FIXED as the default PSP for VMW_SATP_EVA when the matching vendor is HP and Model is HSV340.
I don't mean to imply by this example that HP EVA arrays with the HSV340 controller model should be claimed by this specific PSP. I am only using it for demonstration purposes. You must verify with the array vendor which PSP is supported by and certified for your specific storage array.
As a matter of fact, this HP EVA model happens to be an ALUA array, and the SATP must be VMW_SATP_ALUA (see Chapter 6). How did I know that? Let me explain.
- Look at the output in Figures 5.29–5.32. There you should notice that there are no listings of HP EVA arrays with a Claim Options value of tpgs_on. This means that they were not explicitly claimed by a specific SATP.
- To filter out some clutter from the output, run the following command to list all claim rules with a match on Claim Options value of tpgs_on.
esxcli storage nmp satp rule list |grep 'Name\|---\|tpgs_on' |less -S
Listing 5.7 shows the output of that command:
Listing 5.7. Listing SATP Claim Rules List
Name              Device  Vendor  Model    Rule Group  Claim Options
----------------  ------  ------  -------  ----------  -------------
VMW_SATP_ALUA             NETAPP           system      tpgs_on
VMW_SATP_ALUA             IBM     2810XIV  system      tpgs_on
VMW_SATP_ALUA                              system      tpgs_on
VMW_SATP_ALUA_CX          DGC              system      tpgs_on
- I cropped some blank columns for readability.
- Here you see that there is a claim rule with a blank vendor and the Claim Options is tpgs_on. This claim rule claims any device with any vendor string as long as its Claim Options is tpgs_on.
- Based on this rule, VMW_SATP_ALUA claims all ALUA-capable arrays including HP storage arrays based on a match on the Claim Options value of tpgs_on.
What does this mean anyway?
It means that the claim rule I added for the HSV340 is wrong because it forces the device to be claimed by an SATP that does not handle ALUA. I must remove the rule that I added and then create another rule that does not violate the default SATP assignment:
- To remove the SATP claim rule, use the same command used to add it, substituting remove for the add option:
esxcli storage nmp satp rule remove --satp VMW_SATP_EVA --vendor HP --model HSV340 --psp VMW_PSP_FIXED
- Add a new claim rule to have VMW_SATP_ALUA claim the HP EVA HSV340 when it reports Claim Options value as tpgs_on:
esxcli storage nmp satp rule add --satp VMW_SATP_ALUA --vendor HP --model HSV340 --psp VMW_PSP_FIXED --claim-option tpgs_on --description "Re-added manually for HP HSV340"
- Verify that the rule was created correctly. Run the same command used in Step 2 in the last procedure:
esxcli storage nmp satp rule list |grep 'Name\|---\|tpgs_on' |less -S
Figure 5.60 shows the output.
Figure 5.60. SATP rule list after adding rule
Notice that the claim rule has been added in a position prior to the catch-all rule described earlier. This means that this HP EVA HSV340 model will be claimed by VMW_SATP_ALUA when the Claim Options value is tpgs_on.