This procedure relies on the following prerequisites and assumptions.
Your cluster is operational.
You have an existing storage array that is installed and configured.
If you are installing a storage array in a running cluster that does not yet have one installed, use the procedure in How to Add the First Storage Array to an Existing Cluster.
Configure the new storage array.
Each storage array in the loop must have a unique box ID. If necessary, use the front-panel module (FPM) to change the box ID for the new storage array that you are adding. For more information about loops and general configuration, see the Sun StorEdge A5000 Configuration Guide and the Sun StorEdge A5000 Installation and Service Manual.
On both nodes, use the luxadm insert_device command to insert the new storage array into the cluster and add paths to its disk drives.
# luxadm insert_device
Please hit <RETURN> when you have finished adding Fibre Channel Enclosure(s)/Device(s):
Do not press the Return key until you complete Step 3.
Cable the new storage array to a spare port in the existing hub, switch, or host adapter in your cluster.
For cabling diagrams, see Appendix A, Cabling Diagrams.
You must use FC switches when installing storage arrays in a partner-group configuration. If you want to create a storage area network (SAN) by using two FC switches and Sun SAN software, see SAN Solutions in a Sun Cluster Environment in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS for more information.
After you cable the new storage array, press the Return key to complete the luxadm insert_device operation.
Waiting for Loop Initialization to complete...
New Logical Nodes under /dev/dsk and /dev/rdsk :
  c4t98d0s0
  c4t98d0s1
  c4t98d0s2
  c4t98d0s3
  c4t98d0s4
  c4t98d0s5
  c4t98d0s6
...
New Logical Nodes under /dev/es:
  ses12
  ses13
On both nodes, verify that the new storage array is visible.
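One way to perform this check is with the luxadm probe command, which lists the enclosures that a node can see; run it on each node and confirm that the new storage array appears in the list. You can then use luxadm display to show detailed status for the array. The enclosure name BOX_NAME below is a placeholder for the box name of your new array.

```shell
# luxadm probe
# luxadm display BOX_NAME
```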
On one node, use the scgdevs command to update the DID database.
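As a sketch, the DID update and a follow-up check might look like the following. The scdidadm -L command lists the device ID mappings for all cluster nodes, so you can confirm that the new drives were assigned DID instance numbers.

```shell
# scgdevs
# scdidadm -L
```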