This section contains instructions for installing storage arrays in both new clusters and operational clusters.
Table 1–1 Task Map: Installing Storage Arrays
| Task | Information |
|---|---|
| Install a storage array in a new cluster, before the OS and Sun Cluster software are installed. | |
| Add a storage array to an operational cluster. | How to Add the First Storage Array to an Existing Cluster; How to Add a Subsequent Storage Array to an Existing Cluster |
This procedure assumes you are installing one or more storage arrays at initial installation of a cluster.
Install host adapters in the nodes that are to be connected to the storage array.
For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.
To ensure maximum redundancy, put each host adapter on a separate I/O board, if possible.
Cable the storage arrays to the nodes.
For cabling diagrams, see Appendix A, Cabling Diagrams.
Check the revision number for the storage array's controller firmware. If necessary, install the most recent firmware.
For more information, see your storage documentation. For a list of storage documentation, see Related Documentation.
This procedure relies on the following prerequisites and assumptions.
- Your cluster is operational.
- You do not have an existing storage array that is installed and configured.
If you are installing a storage array in a running cluster that already has storage arrays installed and configured, use the procedure in How to Add a Subsequent Storage Array to an Existing Cluster.
Determine whether the storage array packages need to be installed on the nodes to which you are connecting the storage array. This product requires the following packages.
```
# pkginfo | egrep Wlux
system  SUNWluxd   Sun Enterprise Network Array sf Device Driver
system  SUNWluxdx  Sun Enterprise Network Array sf Device Driver (64-bit)
system  SUNWluxl   Sun Enterprise Network Array socal Device Driver
system  SUNWluxlx  Sun Enterprise Network Array socal Device Driver (64-bit)
system  SUNWluxop  Sun Enterprise Network Array firmware and utilities
system  SUNWluxox  Sun Enterprise Network Array libraries (64 bit)
```
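As a sketch, this check can be scripted. The package list is taken from the `pkginfo` output above; `pkginfo` is the standard Solaris package query tool, and the way the installed list is gathered here is one illustrative approach, not the only one.

```shell
#!/bin/sh
# Sketch: report which of the required storage array packages are missing.
# Run on each node that is to be connected to the storage array.
REQUIRED="SUNWluxd SUNWluxdx SUNWluxl SUNWluxlx SUNWluxop SUNWluxox"

# missing_from INSTALLED_LIST -- prints each required package not in the list.
missing_from() {
    for pkg in $REQUIRED; do
        case " $1 " in
            *" $pkg "*) ;;        # package present; nothing to report
            *) echo "$pkg" ;;     # package missing
        esac
    done
}

# On a live node, build the installed list from pkginfo's second column.
INSTALLED=$(pkginfo 2>/dev/null | awk '{print $2}')
missing_from "$INSTALLED"
```

Any package the script prints must be added with `pkgadd` as described in the next step.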
On each node, install any necessary packages for the Solaris Operating System.
The storage array packages are located in the Product directory of the CD-ROM. Use the pkgadd command to add any necessary packages.
The -G option applies only if you are using the Solaris 10 OS. Omit this option if you are using Solaris 8 or 9 OS.
```
# pkgadd -G -d path_to_Solaris/Product Pkg1 Pkg2 Pkg3 ... PkgN
```
- `-G`: Adds packages to the current zone only. When used in the global zone, the packages are added to the global zone only and are not propagated to any existing or yet-to-be-created non-global zones. When used in a non-global zone, the packages are added to that non-global zone only.
- `path_to_Solaris`: Path to the Solaris Operating System.
- `Pkg1 Pkg2 ... PkgN`: The packages to be added.
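The Solaris-version distinction above can be captured in a small wrapper. This is a sketch: the release strings come from `uname -r` (Solaris 8, 9, and 10 report `5.8`, `5.9`, and `5.10`), while the media path and package names are the placeholders from the step above.

```shell
#!/bin/sh
# Sketch: emit the -G flag only on the Solaris 10 OS; the Solaris 8 and 9
# versions of pkgadd do not accept -G.
zone_flag() {
    case "$1" in
        5.10) echo "-G" ;;    # Solaris 10: add to the current zone only
        *)    echo ""   ;;    # Solaris 8 (5.8) or 9 (5.9): omit the option
    esac
}

REL=$(uname -r)
FLAG=$(zone_flag "$REL")
# path_to_Solaris and the package names are placeholders from the step above.
echo "pkgadd $FLAG -d path_to_Solaris/Product Pkg1 Pkg2"
```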
Shut down and power off any node that is connected to the storage array.
For the procedure about how to shut down and power off a node, see Sun Cluster system administration documentation.
Install host adapters in the node that is to be connected to the storage array.
For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.
Cable, configure, and power on the storage array.
For cabling diagrams, see Appendix A, Cabling Diagrams.
Perform a reconfiguration boot to create the new Solaris device files and links.
```
ok boot -r
```
Determine if any patches need to be installed on nodes that are to be connected to the storage array.
For a list of patches specific to Sun Cluster, see your Sun Cluster release notes documentation.
Obtain and install any necessary patches on the nodes that are to be connected to the storage array.
For procedures about how to apply patches, see your Sun Cluster system administration documentation.
Read any README files that accompany the patches before you begin this installation. Some patches must be installed in a specific order.
If required by the patch README instructions, shut down and reboot the node.
For the procedure about how to shut down and power off a node, see Sun Cluster system administration documentation.
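The patch steps above can be sketched as a dry run. The patch IDs and directory below are hypothetical examples; `patchadd` is the standard Solaris patch tool, and the order of the list is the order mandated by the patch README files.

```shell
#!/bin/sh
# Sketch (dry run): print patchadd commands in README-mandated order.
# The directory and patch IDs are hypothetical; substitute your own.
patch_cmds() {
    dir=$1; shift
    for id in "$@"; do
        echo "patchadd $dir/$id"
    done
}

# Echoes the commands only; review them, then run each as root to apply.
patch_cmds /var/tmp/patches 111111-01 222222-02
```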
Perform Step 3 through Step 9 for each node that is attached to the storage array.
Perform volume management administration to add the disk drives in the storage array to the volume management configuration.
For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.
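As one hedged illustration of this step for Solstice DiskSuite/Solaris Volume Manager, the shared disk set commands follow the pattern below. The disk set, node, and DID device names are hypothetical; see the volume manager documentation cited above for the authoritative procedure.

```shell
#!/bin/sh
# Sketch (dry run): the metaset command pattern for adding the array's
# drives to a shared disk set. All names below are hypothetical examples.
metaset_cmds() {
    set_name=$1; hosts=$2; drives=$3
    echo "metaset -s $set_name -a -h $hosts"   # create the set and add its hosts
    echo "metaset -s $set_name -a $drives"     # add the array's DID drives
}

# Echoes the commands only; run them as root on one node to apply.
metaset_cmds diskset1 "node1 node2" "/dev/did/rdsk/d4 /dev/did/rdsk/d5"
```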
This procedure relies on the following prerequisites and assumptions.
- Your cluster is operational.
- You have an existing storage array that is installed and configured.
If you are installing a storage array in a running cluster that does not yet have a storage array that is installed, use the procedure in How to Add the First Storage Array to an Existing Cluster.
Configure the new storage array.
Each storage array in the loop must have a unique box ID. If necessary, use the front-panel module (FPM) to change the box ID for the new storage array that you are adding. For more information about loops and general configuration, see the Sun StorEdge A5000 Configuration Guide and the Sun StorEdge A5000 Installation and Service Manual.
On both nodes, insert the new storage array into the cluster. Add paths to the disk drives.
```
# luxadm insert_device
Please hit <RETURN> when you have finished adding Fibre Channel Enclosure(s)/Device(s):
```
Do not press the Return key until you complete Step 3.
Cable the new storage array to a spare port in the existing hub, switch, or host adapter in your cluster.
For cabling diagrams, see Appendix A, Cabling Diagrams.
You must use FC switches when installing storage arrays in a partner-group configuration. If you want to create a storage area network (SAN) by using two FC switches and Sun SAN software, see SAN Solutions in a Sun Cluster Environment in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS for more information.
After you cable the new storage array, press the Return key to complete the luxadm insert_device operation.
```
Waiting for Loop Initialization to complete...
New Logical Nodes under /dev/dsk and /dev/rdsk :
c4t98d0s0
c4t98d0s1
c4t98d0s2
c4t98d0s3
c4t98d0s4
c4t98d0s5
c4t98d0s6
...
New Logical Nodes under /dev/es:
ses12
ses13
```
On each node, verify that the new storage array is visible.
```
# luxadm probe
```
On one node, use the scgdevs command to update the DID database.
```
# scgdevs
```
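After scgdevs completes, the DID mappings can be reviewed with the Sun Cluster scdidadm -L command. The filtering sketch below uses hypothetical sample output: the exact output format and the controller number (`c4`, taken from the luxadm example above) are assumptions for illustration.

```shell
#!/bin/sh
# Sketch: pick out DID mappings for the new array's controller (c4 in the
# earlier luxadm example). SAMPLE is hypothetical scdidadm -L output; on a
# live cluster, use:  scdidadm -L | grep c4
SAMPLE='1   node1:/dev/rdsk/c0t0d0   /dev/did/rdsk/d1
15  node1:/dev/rdsk/c4t98d0  /dev/did/rdsk/d15
15  node2:/dev/rdsk/c4t98d0  /dev/did/rdsk/d15'

# Keep only the lines for the new controller.
new_paths=$(printf '%s\n' "$SAMPLE" | grep c4)
printf '%s\n' "$new_paths"
```

Both nodes should report a path to each new DID device; a device visible from only one node indicates a cabling or configuration problem.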