Oracle Solaris Cluster 3.3 With Sun StorEdge A3500FC System Manual, SPARC Platform Edition
1. Installing and Maintaining a Sun StorEdge A3500FC System
How to Install a Storage System in a New Cluster
How to Add a Storage System to an Existing Cluster
How to Remove a Storage System
How to Replace a Failed Controller or Restore an Offline Controller
How to Upgrade Controller Module Firmware in a Running Cluster
How to Add a Disk Drive in a Running Cluster
How to Replace a Failed Disk Drive in a Running Cluster
How to Remove a Disk Drive From a Running Cluster
This section contains the procedures about how to configure a storage system after you install Oracle Solaris Cluster software. Table 1-2 lists these procedures.
To configure a storage system before you install Oracle Solaris Cluster software, use the same procedures you use in a noncluster environment. For the procedures about how to configure a storage system before you install Oracle Solaris Cluster software, see the Sun StorEdge RAID Manager User’s Guide.
Note - When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you perform a check of the device ID configuration by running the cldevice check command, the following error message appears on your console if the device ID changed unexpectedly.
device id for nodename:/dev/rdsk/cXtYdZsN does not match physical device's id for ddecimalnumber, device may have been replaced.
To fix device IDs that report this error, run the cldevice repair command for each affected device.
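For example, if the affected device is /dev/rdsk/c1t4d0 (an illustrative path; substitute the device that reported the error), run the following command, then rerun cldevice check to confirm that the error no longer appears.

# cldevice repair /dev/rdsk/c1t4d0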
Table 1-2 Task Map: Configuring a Storage System
The following is a list of administrative tasks that require no cluster-specific procedures. See the Sun StorEdge RAID Manager User’s Guide and the Sun StorEdge RAID Manager Release Notes for the following procedures.
Use this procedure to create a logical unit number (LUN) from unassigned disk drives or remaining capacity. See the Sun StorEdge RAID Manager Release Notes for the latest information about LUN administration.
This product supports the use of hardware RAID and host-based software RAID. For host-based software RAID, this product supports RAID levels 0+1 and 1+0.
Note - You must use hardware RAID for Oracle RAC data stored on the storage array. Do not place RAC data under volume management control. You must place all non-RAC data that is stored on the storage arrays under volume management control. Use either hardware RAID, host-based software RAID, or both types of RAID to manage your non-RAC data.
Hardware RAID uses the storage array's or storage system's hardware redundancy to ensure that independent hardware failures do not impact data availability. If you mirror across separate storage arrays, host-based software RAID ensures that independent hardware failures do not impact data availability when an entire storage array is offline. Although you can use hardware RAID and host-based software RAID concurrently, you need only one RAID solution to maintain a high degree of data availability.
Note - When you use host-based software RAID with hardware RAID, the hardware RAID levels you use affect hardware maintenance. If you use hardware RAID level 1, 3, or 5, you can perform most maintenance procedures without volume management disruptions. If you use hardware RAID level 0, some maintenance procedures require additional volume management administration because the availability of the LUNs is impacted.
Caution - Do not configure LUNs as quorum devices. The use of LUNs as quorum devices is not supported.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.
Before You Begin
To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.
After the LUN formatting completes, a logical name for the new LUN appears in /dev/rdsk on all nodes that are attached to the storage system.
For the procedure about how to create a LUN, see the Sun StorEdge RAID Manager User’s Guide.
If the following warning message is displayed, ignore it and continue with the next step.
scsi: WARNING: /sbus@40,0/SUNW,socal@0,0/sf@1,0/ssd@w200200a0b80740db,4 (ssd0): corrupt label - wrong magic number
Note - Use the format(1M) command to verify Oracle Solaris logical device names.
# /etc/raid/bin/hot_add
# cldevice populate
# cldevice show
If the device ID numbers are the same, proceed to Step 7.
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
Use this procedure to delete one or more LUNs. See the Sun StorEdge RAID Manager Release Notes for the latest information about LUN administration.
Caution - This procedure removes all data on the LUN that you delete.
Caution - Do not delete LUN 0.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.
Before You Begin
To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.
# format
AVAILABLE DISK SELECTIONS:
     0. c0t5d0 <SYMBIOS-StorEdgeA3500FCr-0301 cyl 3 alt 2 hd 64 sec 64>
        /pseudo/rdnexus@0/rdriver@5,0
     1. c0t5d1 <SYMBIOS-StorEdgeA3500FCr-0301 cyl 2025 alt 2 hd 64 sec 64>
        /pseudo/rdnexus@0/rdriver@5,1
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
You must remove LUNs that were managed by Veritas Volume Manager from Veritas Volume Manager control before you can delete the LUNs. After you delete the LUN from any disk group, use the following commands to remove it from Veritas Volume Manager control.
# vxdisk offline cNtXdY
# vxdisk rm cNtXdY
For the procedure about how to delete a LUN, see the Sun StorEdge RAID Manager User’s Guide.
# rm /dev/rdsk/cNtXdY*
# rm /dev/dsk/cNtXdY*
# rm /dev/osa/dev/dsk/cNtXdY*
# rm /dev/osa/dev/rdsk/cNtXdY*
The RAID Manager software creates two paths to the LUN in the /dev/osa/dev/rdsk directory. Substitute the cNtXdY number from the other controller module in the storage array to determine the alternate path.
For example, consider the following configuration:
# lad
c0t5d0 1T93600714 LUNS: 0 1
c1t4d0 1T93500595 LUNS: 2
The alternate paths would be:
/dev/osa/dev/dsk/c1t4d1*
/dev/osa/dev/rdsk/c1t4d1*
# rm /dev/osa/dev/dsk/cNtXdY*
# rm /dev/osa/dev/rdsk/cNtXdY*
# cldevice clear
# clnode evacuate nodename
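For example, to move all resource groups and device groups off the node phys-schost-1 (an illustrative node name; substitute the name of the node you are evacuating), run:

# clnode evacuate phys-schost-1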
For the procedure about how to shut down and power off a node, see your Oracle Solaris Cluster system administration documentation.
For more information about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.
Use this procedure to completely remove and reset the LUN configuration.
Caution - If you reset a LUN configuration, a new device ID number is assigned to LUN 0. This change occurs because the software assigns a new world wide name (WWN) to the new LUN.
Before You Begin
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.
To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.
# format
For example:
phys-schost-1# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
     0. c0t5d0 <SYMBIOS-StorEdgeA3500FCr-0301 cyl 3 alt 2 hd 64 sec 64>
        /pseudo/rdnexus@0/rdriver@5,0
     1. c0t5d1 <SYMBIOS-StorEdgeA3500FCr-0301 cyl 2025 alt 2 hd 64 sec 64>
        /pseudo/rdnexus@0/rdriver@5,1
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
You must completely remove LUNs that were managed by Veritas Volume Manager from Veritas Volume Manager control before you can delete the LUNs.
# vxdisk offline cNtXdY
# vxdisk rm cNtXdY
For the procedure about how to reset the LUN configuration, see the Sun StorEdge RAID Manager User’s Guide.
# rm /dev/rdsk/cNtXdY*
# rm /dev/dsk/cNtXdY*
# rm /dev/osa/dev/dsk/cNtXdY*
# rm /dev/osa/dev/rdsk/cNtXdY*
# devfsadm -C
# cldevice clear
# clnode evacuate from-node
For the procedure about how to shut down and power off a node, see your Oracle Solaris Cluster system administration documentation.
If an error message like the following appears, ignore it. Continue with the next step.
device id for '/dev/rdsk/c0t5d0' does not match physical disk's id.
The device ID number for the original LUN 0 is removed. A new device ID is assigned to LUN 0.
Use this section to correct mismatched device ID numbers that might appear during the creation of A3500FC LUNs. You correct the mismatch by deleting the Oracle Solaris OS and Oracle Solaris Cluster paths to the LUNs whose device ID numbers differ. After you reboot, the paths are corrected.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.
Before You Begin
To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.
# format
# rm /dev/rdsk/cNtXdY*
# rm /dev/dsk/cNtXdY*
# rm /dev/osa/dev/dsk/cNtXdY*
# rm /dev/osa/dev/rdsk/cNtXdY*
The RAID Manager software creates two paths to the LUN in the /dev/osa/dev/rdsk directory. Substitute the cNtXdY number from the other controller module in the storage array to determine the alternate path.
For example, consider the following configuration:
# lad
c0t5d0 1T93600714 LUNS: 0 1
c1t4d0 1T93500595 LUNS: 2
The alternate paths would be as follows.
/dev/osa/dev/dsk/c1t4d1*
/dev/osa/dev/rdsk/c1t4d1*
# rm /dev/osa/dev/dsk/cNtXdY*
# rm /dev/osa/dev/rdsk/cNtXdY*
# cldevice clear
# clnode evacuate nodename
For the procedure about how to shut down and power off a node, see your Oracle Solaris Cluster system administration documentation.
For more information about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.