This procedure contains instructions for adding new storage array partner groups to a running cluster. The following procedures contain instructions for other array-installation situations:
How to Install a Storage Array in a New Cluster, Using a Partner-Group Configuration
How to Install a Storage Array in a New Cluster Using a Single-Controller Configuration
How to Add a Storage Array to an Existing Cluster, Using a Single-Controller Configuration
This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.
Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.
Set up a Reverse Address Resolution Protocol (RARP) server on the network on which you want the new storage arrays to reside. Afterward, assign an IP address to the new storage arrays.
Assign an IP address to the master controller unit only. The master controller unit is the storage array that has the interconnect cables attached to the second port of each interconnect card, as shown in Figure 1–6.
This RARP server enables you to assign an IP address to the new storage arrays. Assign an IP address by using the storage array's unique MAC address. To set up a RARP server, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
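As a sketch of the RARP setup on a Solaris host: map the array's MAC address to a host name in /etc/ethers, map that host name to the IP address you are assigning in /etc/hosts, and start in.rarpd. The MAC address, host name, and IP address below are placeholders; substitute the values for your array and network.

```shell
# Hypothetical values: replace the MAC address, host name, and IP address
# with those of your master controller unit.

# Map the array's MAC address to a host name (/etc/ethers).
echo "00:20:f2:00:aa:bb  t3-master" >> /etc/ethers

# Map the host name to the IP address you are assigning (/etc/hosts).
echo "192.168.1.50  t3-master" >> /etc/hosts

# Start the RARP daemon on all interfaces so the array can obtain its address.
/usr/sbin/in.rarpd -a
```

See the in.rarpd(1M) and ethers(4) man pages for the exact daemon options on your Solaris release.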
Install the Ethernet cable between the storage arrays and the local area network (LAN), as shown in Figure 1–6.
If they are not already installed, install interconnect cables between the two storage arrays of each partner group, as shown in Figure 1–6.
To install interconnect cables, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
Power on the storage arrays.
The storage arrays might require several minutes to boot.
To power on storage arrays, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
Administer the storage arrays' network addresses and settings.
Use the telnet command to access the master controller unit and to administer the storage arrays.
To administer the network address and the settings of a storage array, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
Install any required storage array controller firmware upgrades.
For partner-group configurations, use the telnet command to connect to the master controller unit. If necessary, install the required controller firmware for the storage array.
For the required revision number of the storage array controller firmware, see the Sun StorEdge T3 Disk Tray Release Notes.
Ensure that each storage array has a unique target address.
To verify and assign a target address to a storage array, see the Sun StorEdge T3 and T3+ Array Configuration Guide.
Ensure that the cache and mirror settings for each storage array are set to auto.
For more information, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.
Ensure that the mp_support parameter for each storage array is set to mpxio.
For more information, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.
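The cache, mirror, and mp_support settings in the two preceding steps can typically be verified and changed from the telnet session on the master controller unit. The following transcript is a sketch that uses the T3 sys command syntax; verify the exact commands against the Administrator's Guide for your firmware revision.

```
t3:/:<1> sys list                # display current settings, including cache, mirror, and mp_support
t3:/:<2> sys cache auto          # set the cache mode to auto
t3:/:<3> sys mirror auto         # set cache mirroring to auto
t3:/:<4> sys mp_support mpxio    # enable multipathing support for Sun StorEdge Traffic Manager
```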
Configure the new storage arrays with the desired logical volumes.
To create and initialize a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide. To mount a logical volume, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
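As an illustrative sketch only: the T3 vol command creates, initializes, and mounts a logical volume from the telnet session. The volume name, drive range, and RAID level below are hypothetical; choose a layout that matches your configuration.

```
t3:/:<1> vol add v0 data u1d1-9 raid 5   # example: create volume v0 on drives 1-9 as RAID 5
t3:/:<2> vol init v0 data                # initialize the volume (this can take some time)
t3:/:<3> vol mount v0                    # mount the volume so hosts can access it
```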
Reset the storage arrays.
To reboot or reset a storage array, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
If you are adding Sun StorEdge T3+ arrays, install the media interface adapter (MIA) in the Sun StorEdge T3 arrays that you are adding, as shown in Figure 1–6.
To install an MIA, see the Sun StorEdge T3 and T3+ Array Configuration Guide.
If necessary, install GBICs or SFPs in the FC switches, as shown in Figure 1–6.
To install a GBIC or an SFP to an FC switch, see the documentation that shipped with your FC switch hardware.
Install a fiber-optic cable between each FC switch and both new storage arrays of the partner group, as shown in Figure 1–6.
To install a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
If you are using two FC switches and Sun SAN software to create a storage area network (SAN), see SAN Considerations for more information.
Determine the resource groups and device groups that are running on all nodes.
Record this information because you use it in Step 30 of this procedure to return resource groups and device groups to these nodes.
Move all resource groups and device groups off each node in the cluster.
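As a sketch, using the Sun Cluster 3.2 commands, the two preceding steps can be performed as follows; nodename is a placeholder for the node you are servicing. On Sun Cluster 3.1, the equivalent commands are scstat and scswitch -S -h nodename.

```shell
# Record which nodes currently master each resource group and device group.
clresourcegroup status
cldevicegroup status

# Move all resource groups and device groups off the node being serviced.
clnode evacuate nodename
```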
If you need to install host adapters in the node, and if the host adapter you are installing is the first adapter on the node, determine whether the required support packages are already installed on this node.
The following packages are required.
# pkginfo | egrep Wlux
system  SUNWluxd   Sun Enterprise Network Array sf Device Driver
system  SUNWluxdx  Sun Enterprise Network Array sf Device Driver (64-bit)
system  SUNWluxl   Sun Enterprise Network Array socal Device Driver
system  SUNWluxlx  Sun Enterprise Network Array socal Device Driver (64-bit)
system  SUNWluxop  Sun Enterprise Network Array firmware and utilities
system  SUNWluxox  Sun Enterprise Network Array libraries (64-bit)
If this is not the first host adapter on the node, skip to Step 20.
If the required support packages are not present, install them.
The support packages are located in the Product directory of the Solaris DVD. Add any missing packages.
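A minimal sketch of adding the missing packages with pkgadd; the mount point and Product directory path below are assumptions that depend on your Solaris release and media, so adjust them to match your DVD.

```shell
# Assumed media path; the directory name varies by Solaris release.
cd /cdrom/cdrom0/Solaris_10/Product

# Add only the packages that pkginfo showed as missing.
pkgadd -d . SUNWluxd SUNWluxdx SUNWluxl SUNWluxlx SUNWluxop SUNWluxox
```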
If you need to install host adapters in the node, shut down and power off the node, and then install them in the node.
To shut down and power off a node, see your Sun Cluster system administration documentation.
To install host adapters, see the documentation that shipped with your host adapters and nodes.
If you installed host adapters in the node, power on and boot the node into noncluster mode.
To boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.
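For example, from the OpenBoot PROM prompt, the -x option boots a node into noncluster mode:

```
ok boot -x
```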
If you installed host adapters in the node, and if necessary, upgrade the host adapter firmware on the node.
If necessary, install the required Solaris patches for storage array support on the node.
If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.
You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.
Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.
If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.
For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.
For required firmware, see the Sun System Handbook.
If you installed host adapters in the node, reboot the node in cluster mode.
Connect fiber-optic cables between the node and the FC switches, as shown in Figure 1–7.
To install a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
If you are using two FC switches and Sun SAN software to create a storage area network (SAN), see SAN Considerations for more information.
On the current node, update the /devices and /dev entries.
# devfsadm
From any node in the cluster, update the global device namespace.
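As a sketch, the command that updates the global device namespace depends on your Sun Cluster release:

```shell
# Sun Cluster 3.2:
cldevice populate

# Sun Cluster 3.1:
scgdevs
```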
Label the new storage array logical volume.
To label a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.
(Optional) On the current node, verify that the device IDs (DIDs) are assigned to the new LUNs.
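For example, list the device ID instances and their full device paths, then confirm that entries exist for the new LUNs:

```shell
# Sun Cluster 3.2: list DID instances with their device paths.
cldevice list -v

# Sun Cluster 3.1:
scdidadm -L
```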
(Optional) Restore the resource groups to the original node.
Do the following for each resource group that you want to return to the original node.
If you are using Sun Cluster 3.2, use the following command:
# clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
nodename
    For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.
resourcegroup1[ resourcegroup2 …]
    The resource group or groups that you are returning to the node or nodes.
If you are using Sun Cluster 3.1, use the following command:
# scswitch -z -g resourcegroup -h nodename
For each of the other nodes in the cluster, repeat Step 17 through Step 30.
Perform volume management administration to incorporate the new logical volumes into the cluster.
For more information, see your Solstice DiskSuite or Veritas Volume Manager documentation.