Oracle Solaris Cluster 3.3 With Sun StorEdge 6320 System Manual
1. Installing and Maintaining a Sun StorEdge 6320 System
How to Install Storage Systems in a New Cluster
Adding Storage Systems to an Existing Cluster
How to Perform Initial Configuration Tasks on the Storage Array
How to Create a Logical Volume
How to Remove a Logical Volume
How to Upgrade Storage Array Firmware
How to Remove a Storage System
Replacing a Node-to-Switch Component
How to Replace a Node-to-Switch Component in a Cluster That Uses Multipathing
How to Replace a Node-to-Switch Component in a Cluster Without Multipathing
This section contains the procedures listed in Table 1-1.
Table 1-1 Task Map: Installing Storage Systems
Task | Information
Install a storage system in a new cluster. | How to Install Storage Systems in a New Cluster
Add a storage system to an existing cluster. | Adding Storage Systems to an Existing Cluster
You can install your storage system in several different configurations. Evaluate your needs and determine which configuration is best for your situation. See the Sun StorEdge 6320 System Installation Guide and Installing Storage Arrays in Oracle Solaris Cluster 3.3 With Sun StorEdge 6120 Array Manual.
Use this procedure to install a storage system before you install the Oracle Solaris operating environment and Oracle Solaris Cluster software on your nodes. To add a storage system to an existing cluster, use the procedure in Adding Storage Systems to an Existing Cluster.
For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.
Note - In a StorEdge 6320SL storage system, the customer provides the switch.
For the procedure about how to install an FC switch, see the documentation that shipped with your FC switch hardware.
For instructions, see the Sun StorEdge 6320 System Installation Guide.
Caution - Do not connect the switch's Ethernet port to the storage system's private LAN.
For the procedure about how to cable the storage system, see the Sun StorEdge 6320 System Installation Guide.
For instructions about how to power on the storage system, see the Sun StorEdge 6320 System Installation Guide. For instructions about how to power on a node, see the documentation that shipped with your node hardware.
For more information, see the Sun StorEdge 6320 System Installation Guide.
For the procedure about how to create a volume, see the Sun StorEdge 6320 System Reference and Service Manual.
For the procedure about how to specify initiator groups, see the Sun StorEdge 6320 System Reference and Service Manual.
The following configurations might prevent some nodes from accessing each storage array in the cluster:
Zone configuration
Multiple clusters that use the same switch
Unconfigured ports or misconfigured ports
For the procedure about how to install the Oracle Solaris operating environment, see How to Install Solaris Software in Oracle Solaris Cluster Software Installation Guide.
If you are using Solaris I/O multipathing (MPxIO) for the Oracle Solaris 10 OS, previously called Sun StorEdge Traffic Manager in the Solaris 9 OS, verify that the paths to the storage device are functioning. To configure multipathing for the Oracle Solaris 10 OS, see the Solaris Fibre Channel Storage Configuration and Multipathing Support Guide.
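For example, on the Oracle Solaris 10 OS you can list the multipathed logical units to confirm that MPxIO sees the paths to the storage device. This is an illustrative check only; the logical units that appear in the output depend on your configuration.
# mpathadm list lu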
# devfsadm
Note - You can wait for the devfsadm daemon to automatically update the Oracle Solaris device files and links, or you can run the devfsadm command to immediately update the Oracle Solaris device files and links.
# luxadm probe
See Also
To continue with Oracle Solaris Cluster software installation tasks, see your Oracle Solaris Cluster software installation documentation.
Use this procedure to add a new storage system to a running cluster. To install systems to a new Oracle Solaris Cluster configuration that is not running, use the procedure in How to Install Storage Systems in a New Cluster.
This procedure defines Node N as the node to be connected to the storage system you are adding and the node with which you begin working.
Note - In a StorEdge 6320SL storage system, the customer provides the switch.
For the procedure about how to install an FC switch, see the documentation that shipped with your FC switch hardware.
For more information, see the Sun StorEdge 6320 System Installation Guide.
For the procedure about how to create a volume, see the Sun StorEdge 6320 System Reference and Service Manual.
For the procedure about how to specify initiator groups, see the Sun StorEdge 6320 System Reference and Service Manual.
For instructions, see the Sun StorEdge 6320 System Installation Guide.
Caution - Do not connect the switch's Ethernet port to the storage system's private LAN.
For the procedure about how to cable the storage system, see the Sun StorEdge 6320 System Installation Guide.
Note - The storage arrays in your system might require several minutes to boot.
For the procedure about how to power on the storage system, see the Sun StorEdge 6320 System Installation Guide.
The following configurations might prevent some nodes from accessing each storage array in the cluster:
Zone configuration
Multiple clusters that use the same switch
Unconfigured ports or misconfigured ports
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.
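For example, the short form clrg is equivalent to clresourcegroup, and cldg is equivalent to cldevicegroup, so the following commands produce the same output as their long forms:
# clrg status +
# cldg status +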
To perform this procedure, become superuser or assume a role that provides solaris.cluster.modify role-based access control (RBAC) authorization.
Record this information because you will use it in Step 19 and Step 20 of this procedure to return resource groups and device groups to these nodes.
Use the following commands:
# clresourcegroup status +
# cldevicegroup status +
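If you want to keep a record of this output for later reference, you can redirect it to files. The file names here are examples only:
# clresourcegroup status + > /var/tmp/rg-before.txt
# cldevicegroup status + > /var/tmp/dg-before.txt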
# clnode evacuate nodename
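For example, if the node that you are taking out of service is named phys-schost-2 (a hypothetical node name):
# clnode evacuate phys-schost-2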
To install host adapters, proceed to Step 4.
For the required packages, see the documentation that shipped with your host adapters.
If this is not the first host adapter, skip to Step 6.
The support packages are located in the Product directory of the Oracle Solaris CD-ROM.
For the procedure about how to shut down and power off a node, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.
For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.
For more information about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.
The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets. It helps improve operational efficiency and ensures that you have the latest software patches for your software. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.
Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration on the Oracle Technology Network. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.
If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.
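As an illustration, on a SPARC based node you would typically boot into noncluster mode from the OpenBoot PROM prompt by adding the -x option, assuming the default boot device:
ok boot -x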
For required firmware, see the Oracle Technology Network.
For the procedure about how to install a GBIC or an SFP, see the documentation that shipped with your FC switch hardware or the Sun StorEdge 6320 System Installation Guide.
For the procedure about how to install a fiber-optic cable, see the Sun StorEdge 6320 System Installation Guide.
The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets. It helps improve operational efficiency and ensures that you have the latest software patches for your software. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.
Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration on the Oracle Technology Network. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.
If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.
For required firmware, see the Oracle Technology Network.
For more information about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.
# devfsadm
Note - You can wait for the devfsadm daemon to automatically update the Oracle Solaris device files and links, or you can run the devfsadm command to immediately update the Oracle Solaris device files and links.
# cldevice populate
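The cldevice populate command executes remotely on all nodes. To confirm that it has completed processing before you proceed, you can verify on each node that the scgdevs command is no longer running, a check drawn from the cluster administration documentation:
# ps -ef | grep scgdevs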
For the procedure about how to label a logical volume, see the Sun StorEdge 6320 System Reference and Service Manual.
# cldevice list -n NodeN -v
Perform the following step for each device group you want to return to the original node.
# cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
nodename
The node to which you are restoring device groups.
devicegroup1[ devicegroup2 ...]
The device group or groups that you are restoring to the node.
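For example, to return a device group named dg-schost-1 (a hypothetical name) to the node phys-schost-1:
# cldevicegroup switch -n phys-schost-1 dg-schost-1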
Perform the following step for each resource group you want to return to the original node.
Use the following command:
# clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
nodename
For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.
resourcegroup1[ resourcegroup2 …]
The resource group or groups that you are returning to the node or nodes.
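For example, to return a resource group named rg-schost-1 (a hypothetical name) to the node phys-schost-1:
# clresourcegroup switch -n phys-schost-1 rg-schost-1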
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.