Oracle Solaris Cluster 3.3 With Sun StorEdge 9900 Series Storage Device Manual |
The initial installation of a storage array in a new cluster must be performed by your Oracle service provider.
Use this procedure to add a new storage array to a running cluster.
This procedure defines Node A as the node with which you begin working. Node B is another node in the cluster.
If you need to add a storage array to more than two nodes, repeat Step 18 through Step 31 for each additional node that connects to the storage array.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.
To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify role-based access control (RBAC) authorization.
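On the Solaris OS, the `auths` command lists the authorizations granted to the current user as a comma-separated string. A quick sketch for confirming that the required cluster authorizations are present (exact output format may vary by release):

```shell
# List the current user's authorizations, one per line,
# and keep only the Oracle Solaris Cluster entries.
auths | tr ',' '\n' | grep 'solaris.cluster'
```

If solaris.cluster.read and solaris.cluster.modify do not appear, assume a role that provides them before continuing.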
Note - The storage array requires approximately 10 minutes to boot.
Contact your service provider to power on the storage array.
Contact your service provider to verify that the storage array is configured for multipathing.
Contact your service provider to create the desired logical volumes.
Note - If you use multipathing software, each node requires two paths to the same set of LUNs.
If you do not need to install a host adapter, skip to Step 11.
For the procedure about how to install a host adapter, see the documentation that shipped with your host adapter or updated information on the manufacturer's web site.
If your node is not enabled for dynamic reconfiguration (DR), you must shut down this node to install the host adapter. Proceed to Step 6.
Record this information because you will use it in Step 30 and Step 31 of this procedure to return resource groups and device groups to these nodes.
Use the following commands:
# clresourcegroup status -n NodeA[ NodeB ...]
# cldevicegroup status -n NodeA[ NodeB ...]
-n NodeA[ NodeB ...]  The node or nodes for which you are determining resource groups and device groups.
For more information, see your Oracle Solaris Cluster system administration documentation.
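As a concrete sketch, you might capture the current placement to files before making changes. The node name phys-schost-1 and the file paths here are placeholders, not values from this procedure:

```shell
# Record which node currently masters each resource group
# and each device group, so that they can be returned to
# this node after the storage array is added.
clresourcegroup status -n phys-schost-1 > /var/tmp/rg-before.txt
cldevicegroup status -n phys-schost-1 > /var/tmp/dg-before.txt
```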
For the procedure about how to shut down and power off a node, see your Oracle Solaris Cluster system administration documentation.
For the procedure about how to install a host adapter, see the documentation that shipped with your host adapter or updated information on the manufacturer's web site.
For the procedure about how to boot a node in noncluster mode, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.
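On SPARC based systems, a cluster node is booted into noncluster mode from the OpenBoot PROM by adding the -x flag; on x86 based systems, the -x option is added to the kernel boot command instead. A minimal SPARC sketch:

```shell
# At the OpenBoot PROM (ok) prompt, boot the node without
# having it rejoin the cluster:
ok boot -x
```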
The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets, improving operational efficiency and ensuring that your software has the latest patches. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.
Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.
If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.
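With the node in noncluster mode, patches are applied with the standard Solaris patchadd utility. The patch ID below is a placeholder for illustration only, not a real patch number:

```shell
# Apply a downloaded patch (placeholder ID) while the node
# is in noncluster mode, then reboot the node so that it
# rejoins the cluster.
patchadd /var/tmp/123456-01
init 6
```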
For required firmware, see the Sun System Handbook.
Contact your service provider to install a fiber-optic cable between the storage array and your node.
Contact your service provider to configure the storage array.
For the procedure about how to boot a cluster node, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.
# cldevice populate
# cldevice list -n NodeA -v
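Taken together, the two commands above first update the global-devices namespace and then list the device IDs visible from the node, so you can confirm that the new LUNs appear. With the hypothetical node name phys-schost-1:

```shell
# Create device IDs (DIDs) for the newly added LUNs, then
# verify that the new storage array's devices are listed
# for this node.
cldevice populate
cldevice list -n phys-schost-1 -v
```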
Note - If you use multipathing software, each node requires two paths to the same set of LUNs.
For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.
If you do not need to install host adapters, skip to Step 24.
For the procedure about how to install a host adapter, see the documentation that shipped with your host adapter or updated information on the manufacturer's web site.
For the procedure about how to shut down and power off a node, see your Oracle Solaris Cluster system administration documentation.
For the procedure about how to install a host adapter, see the documentation that shipped with your host adapter or updated information on the manufacturer's web site.
The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets, improving operational efficiency and ensuring that your software has the latest patches. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.
Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.
If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.
For required firmware, see the Sun System Handbook.
For the procedure about how to boot a node in noncluster mode, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.
Contact your service provider to install a fiber-optic cable between the storage array and your node.
The Oracle Solaris 10 OS automatically installs Solaris I/O multipathing (MPxIO); this feature was previously called Sun StorEdge Traffic Manager in the Solaris 9 OS. For more information, see the Solaris Fibre Channel Storage Configuration and Multipathing Support Guide.
For the procedure about how to boot a cluster node, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.
# cldevice populate
# cldevice show
Perform the following step for each device group you want to return to the original node.
# cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
-n nodename  The node to which you are restoring device groups.
devicegroup1[ devicegroup2 ...]  The device group or groups that you are restoring to the node.
Perform the following step for each resource group you want to return to the original node.
# clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 ...]
-n nodename  For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.
resourcegroup1[ resourcegroup2 ...]  The resource group or groups that you are returning to the node or nodes.
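As a concrete sketch of the two restore steps, with hypothetical names dsk/d4 for a device group, oracle-rg for a resource group, and phys-schost-1 for the original node:

```shell
# Return the device group, then the resource group, to the
# node that mastered them before the storage array was added
# (as recorded earlier in this procedure).
cldevicegroup switch -n phys-schost-1 dsk/d4
clresourcegroup switch -n phys-schost-1 oracle-rg
```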
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.