Oracle Solaris Cluster 3.3 With Fibre Channel JBOD Storage Device Manual, SPARC Platform Edition
1. Installing and Maintaining a Fibre Channel JBOD Storage Device
How to Install a Storage Array in a New Cluster
How to Add the First Storage Array to an Existing Cluster
How to Add a Subsequent Storage Array to an Existing Cluster
FRUs That Do Not Require Oracle Solaris Cluster Maintenance Procedures
This section contains instructions for installing storage arrays in both new clusters and operational clusters.
Table 1-1 Task Map: Installing Storage Arrays
This procedure assumes you are installing one or more storage arrays at initial installation of a cluster.
For the procedure about how to install host adapters, see the documentation that shipped with your network adapters and nodes.
Note - To ensure maximum redundancy, put each host adapter on a separate I/O board, if possible.
For cabling diagrams, see Appendix A, Cabling Diagrams.
For more information, see your storage documentation. For a list of storage documentation, see Related Documentation.
Before You Begin
This procedure relies on the following prerequisites and assumptions.
Your cluster is operational.
You do not have an existing storage array that is installed and configured.
If you are installing a storage array in a running cluster that already has storage arrays installed and configured, use the procedure in How to Add a Subsequent Storage Array to an Existing Cluster.
# pkginfo | egrep Wlux
system  SUNWluxd    Sun Enterprise Network Array sf Device Driver
system  SUNWluxdx   Sun Enterprise Network Array sf Device Driver (64-bit)
system  SUNWluxl    Sun Enterprise Network Array socal Device Driver
system  SUNWluxlx   Sun Enterprise Network Array socal Device Driver (64-bit)
system  SUNWluxop   Sun Enterprise Network Array firmware and utilities
system  SUNWluxox   Sun Enterprise Network Array libraries (64-bit)
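The package check above can be scripted. The following is a minimal sketch; the sample value of installed stands in for live pkginfo output so the sketch is self-contained, and on a real node you would populate it from the pkginfo command as shown in the comment.

```shell
# Required Sun Enterprise Network Array (lux) driver packages.
required="SUNWluxd SUNWluxdx SUNWluxl SUNWluxlx SUNWluxop SUNWluxox"

# On a live node: installed=$(pkginfo | awk '{print $2}')
# A captured sample stands in here so the sketch runs anywhere.
installed="SUNWluxd SUNWluxdx SUNWluxl SUNWluxlx SUNWluxop SUNWluxox"

missing=""
for pkg in $required; do
  case " $installed " in
    *" $pkg "*) ;;                    # package present
    *) missing="$missing $pkg" ;;     # package absent, remember it
  esac
done

if [ -z "$missing" ]; then
  echo "All lux packages present."
else
  echo "Missing packages:$missing"
fi
```

Any packages reported missing are the ones to supply to the pkgadd command described next.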
The storage array packages are located in the Product directory of the CD-ROM. Use the pkgadd command to add any necessary packages.
# pkgadd -G -d path_to_Solaris/Product Pkg1 Pkg2 Pkg3 ... PkgN
-G
    Adds packages in the current zone only. When used in the global zone, the package is added to the global zone only and is not propagated to any existing or yet-to-be-created non-global zone. When used in a non-global zone, the packages are added to the non-global zone only.
path_to_Solaris
    Path to the Oracle Solaris Operating System.
Pkg1 Pkg2 ... PkgN
    The packages to be added.
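A concrete invocation can be assembled from these operands. In the sketch below, the media path and package set are illustrative assumptions (the Product directory location varies with the Solaris release and media layout); the command is echoed rather than executed so the sketch is safe to run anywhere.

```shell
# Hypothetical media mount point; adjust for your site.
MEDIA=/cdrom/cdrom0/s0/Solaris_10

# Example set of packages reported missing by the earlier pkginfo check.
PKGS="SUNWluxd SUNWluxdx"

# -G restricts the add to the current zone only.
cmd="pkgadd -G -d $MEDIA/Product $PKGS"
echo "$cmd"
```

On a live node you would run the assembled command as superuser instead of echoing it.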
For the procedure about how to shut down and power off a node, see Oracle Solaris Cluster system administration documentation.
For the procedure about how to install host adapters, see the documentation that shipped with your network adapters and nodes.
For cabling diagrams, see Appendix A, Cabling Diagrams.
For more information about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.
For a list of patches specific to Oracle Solaris Cluster, see your Oracle Solaris Cluster release notes documentation.
For procedures about how to apply patches, see your Oracle Solaris Cluster system administration documentation.
Note - Read any README files that accompany the patches before you begin this installation. Some patches must be installed in a specific order.
For the procedure about how to shut down and power off a node, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
Before You Begin
This procedure relies on the following prerequisites and assumptions.
Your cluster is operational.
You have an existing storage array that is installed and configured.
If you are installing a storage array in a running cluster that does not yet have a storage array that is installed, use the procedure in How to Add the First Storage Array to an Existing Cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.
To perform this procedure, become superuser or assume a role that provides solaris.cluster.modify RBAC (role-based access control) authorization.
Note - Each storage array in the loop must have a unique box ID. If necessary, use the front-panel module (FPM) to change the box ID for the new storage array that you are adding. For more information about loops and general configuration, see the Sun StorEdge A5000 Configuration Guide and the Sun StorEdge A5000 Installation and Service Manual.
# luxadm insert_device
Please hit <RETURN> when you have finished adding Fibre Channel Enclosure(s)/Device(s):
For cabling diagrams, see Appendix A, Cabling Diagrams.
Note - You must use FC switches when installing storage arrays in a partner-group configuration. If you want to create a storage area network (SAN) by using two FC switches and Sun SAN software, see SAN Solutions in an Oracle Solaris Cluster Environment in Oracle Solaris Cluster 3.3 Hardware Administration Manual.
Waiting for Loop Initialization to complete...
New Logical Nodes under /dev/dsk and /dev/rdsk :
  c4t98d0s0
  c4t98d0s1
  c4t98d0s2
  c4t98d0s3
  c4t98d0s4
  c4t98d0s5
  c4t98d0s6
  ...
New Logical Nodes under /dev/es:
  ses12
  ses13
# luxadm probe
# cldevice populate
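After cldevice populate completes, you can confirm that device ID (DID) paths now exist for the new disks from each node. The sketch below parses a sample listing in the style of cldevice list -v output; the DID numbers and node names are illustrative assumptions, and on a live cluster you would pipe the command's real output into the same check.

```shell
# Sample listing in the style of 'cldevice list -v' output (illustrative);
# on a live cluster: listing=$(cldevice list -v)
listing="DID Device  Full Device Path
d1          node1:/dev/rdsk/c0t0d0
d4          node1:/dev/rdsk/c4t98d0
d4          node2:/dev/rdsk/c4t98d0"

# Count the paths to the new storage array disk (c4t98d0 in the
# earlier luxadm example); two paths means both nodes see it.
paths=$(printf '%s\n' "$listing" | grep -c 'c4t98d0')
echo "Paths to new disk: $paths"
```

A count lower than the number of attached nodes suggests cabling or device-discovery problems on the missing node.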