Oracle Solaris Cluster 3.3 With StorEdge A1000 Array, Netra st A1000 Array, or StorEdge A3500 System Manual For Solaris OS (SPARC Platform Edition)
1. Installing and Maintaining a SCSI RAID Storage Device
How to Install a Storage Array in a New Cluster
How to Add a Storage Array to an Existing Cluster
FRUs That Do Not Require Oracle Solaris Cluster Maintenance Procedures
Sun StorEdge A1000 Array and Netra st A1000 Array FRUs
Sun StorEdge A3500 System FRUs
How to Replace a Failed Controller or Restore an Offline Controller
How to Upgrade Controller Module Firmware
This section contains the procedures for configuring a storage array or storage system after you install Oracle Solaris Cluster software. Table 1-2 lists these procedures.
To configure a storage array or storage system before you install Oracle Solaris Cluster software, use the same procedures that you use in a noncluster environment. For the procedures for configuring a storage system before you install Oracle Solaris Cluster software, see the Sun StorEdge RAID Manager User's Guide.
Note - When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you perform a check of the device ID configuration by running the cldevice check command, the following error message appears on your console if the device ID changed unexpectedly.
device id for nodename:/dev/rdsk/cXtYdZsN does not match physical device's id for d<decimalnumber>, device may have been replaced.
To fix device IDs that report this error, run the cldevice repair command for each affected device.
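As an illustration, the following hedged sketch parses saved cldevice check error output and prints the matching cldevice repair command for each affected device. The node names, device paths, and error lines are hypothetical samples of the message format, and the repair commands are printed for review rather than executed.

```shell
# Illustrative only: the error lines below are a made-up sample of the
# "cldevice check" message format; node and device names are hypothetical.
errors="device id for phys-schost-1:/dev/rdsk/c1t3d0s2 does not match physical device's id for d3, device may have been replaced.
device id for phys-schost-2:/dev/rdsk/c1t4d0s2 does not match physical device's id for d4, device may have been replaced."

# Pull nodename:/dev/rdsk/cXtYdZsN out of each message (field 4) and
# print the repair command you would run for that device.
repairs=$(printf '%s\n' "$errors" |
  awk '{split($4, a, ":"); print "cldevice repair " a[2]}')
printf '%s\n' "$repairs"
```

Running the printed cldevice repair commands on the cluster then updates the device ID for each affected path.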
Table 1-2 Task Map: Configuring Disk Drives
The following is a list of administrative tasks that require no cluster-specific procedures. See the Sun StorEdge RAID Manager User’s Guide for these procedures.
Use this procedure to create a logical unit number (LUN) from unassigned disk drives or remaining capacity. For information about LUN administration, see the Sun StorEdge RAID Manager Release Notes.
This product supports the use of hardware RAID and host-based software RAID. For host-based software RAID, this product supports RAID levels 0+1 and 1+0.
Note - You must use hardware RAID for Oracle RAC data stored on the storage array. Do not place RAC data under volume management control. You must place all non-RAC data that is stored on the storage arrays under volume management control. Use either hardware RAID, host-based software RAID, or both types of RAID to manage your non-RAC data.
Hardware RAID uses the storage array's or storage system's hardware redundancy to ensure that independent hardware failures do not impact data availability. If you mirror across separate storage arrays, host-based software RAID ensures that independent hardware failures do not impact data availability when an entire storage array is offline. Although you can use hardware RAID and host-based software RAID concurrently, you need only one RAID solution to maintain a high degree of data availability.
Note - When you use host-based software RAID with hardware RAID, the hardware RAID levels you use affect hardware maintenance. If you use hardware RAID level 1, 3, or 5, you can perform most maintenance procedures without volume management disruptions. If you use hardware RAID level 0, some maintenance procedures require additional volume management administration because the availability of the LUNs is impacted.
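For the host-based software RAID case, a mirror across two storage arrays might look like the following Solaris Volume Manager sketch. The slice names and metadevice names (d0, d10, d20) are hypothetical, and the command sequence is printed for review rather than executed.

```shell
# Hypothetical sketch: host-based mirroring (RAID 1) across two storage
# arrays with Solaris Volume Manager. Slice and metadevice names are
# made up; the commands are printed, not run.
side_a=c1t0d0s0     # slice on the first storage array
side_b=c2t0d0s0     # matching slice on the second storage array

cmds=$(cat <<EOF
metainit d10 1 1 $side_a
metainit d20 1 1 $side_b
metainit d0 -m d10
metattach d0 d20
EOF
)
printf '%s\n' "$cmds"
```

Because each submirror lives on a different storage array, the mirror d0 survives the loss of an entire array, which is the availability property described above.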
Before You Begin
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.
To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.
After the LUN formatting completes, a logical name for the new LUN appears in /dev/rdsk on all nodes that are attached to the storage array or storage system.
If the following SCSI warning is displayed, ignore the message. Continue with the next step.
... corrupt label - wrong magic number
For the procedure about how to create a LUN, refer to your storage device's documentation. Use the format(1M) command to verify Solaris logical device names.
# /etc/raid/bin/hot_add
# cldevice populate
For an example of output indicating that the ID numbers have not been properly updated, see Example 1-1.
Run the following command:
# cldevice show
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
Note - The StorEdge A3500 system does not support using LUNs as quorum devices.
Example 1-1 Verifying the Device IDs
Step 5 in the preceding procedure directs you to verify that the device ID numbers for the LUNs are the same on both nodes. In the sample output that follows, the device ID numbers are different.
# cldevice show
...
DID Device Name:            /dev/did/rdsk/d3
  Full Device Path:           phys-schost-2:/dev/rdsk/c1t3d0
  Full Device Path:           phys-schost-1:/dev/rdsk/c1t3d1
  Replication:                none
  default_fencing:            global
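One way to spot such a mismatch mechanically is to compare the disk names in the Full Device Path lines, as in this hedged sketch. The saved excerpt below simply mirrors the mismatched sample in Example 1-1; a healthy DID device shows the same cXtYdZ name on every node.

```shell
# Illustrative only: saved "cldevice show" excerpt for one DID device;
# node and disk names are the sample ones from Example 1-1.
show='DID Device Name:            /dev/did/rdsk/d3
  Full Device Path:           phys-schost-2:/dev/rdsk/c1t3d0
  Full Device Path:           phys-schost-1:/dev/rdsk/c1t3d1'

# Strip each path down to its trailing cXtYdZ name, then count how many
# distinct names appear across the nodes.
unique=$(printf '%s\n' "$show" |
  awk '/Full Device Path/ {sub(/.*\//, ""); print}' | sort -u | wc -l)
if [ "$unique" -eq 1 ]; then
  status="device IDs match"
else
  status="device IDs differ"
fi
echo "$status"
```

With the sample above, the check reports that the device IDs differ, which signals the correction procedure later in this chapter.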
Use this procedure to delete one or more LUNs. You might need to delete a LUN to free up or reallocate resources, or to use the disks for other purposes. See the Sun StorEdge RAID Manager Release Notes for the latest information about LUN administration.
Before You Begin
This procedure relies on the following prerequisites and assumptions.
All data on the LUN that you delete will be removed.
You are not deleting LUN 0.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.
To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.
# format
For example:
phys-schost-1# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0t5d0 <SYMBIOS-StorEdgeA3500FCr-0301 cyl 3 alt 2 hd 64 sec 64>
          /pseudo/rdnexus@0/rdriver@5,0
       1. c0t5d1 <SYMBIOS-StorEdgeA3500FCr-0301 cyl 2025 alt 2 hd 64 sec 64>
          /pseudo/rdnexus@0/rdriver@5,1
Note - Your storage array or storage system might not support LUNs as quorum devices. To determine if this restriction applies to your storage array or storage system, see Restrictions and Requirements.
To determine whether this LUN is a quorum device, use the following command.
# clquorum show
For procedures about how to add and remove quorum devices, see your Oracle Solaris Cluster system administration documentation. For a list of Oracle Solaris Cluster documentation, see Related Documentation.
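The quorum check can be scripted against saved clquorum show output, as in this hedged sketch. The quorum listing and the DID device name d3 are hypothetical.

```shell
# Illustrative only: saved "clquorum show" excerpt with a made-up quorum
# device; check it before deleting the LUN behind DID device d3.
quorum='=== Quorum Devices ===

Quorum Device Name:                             d3
  Enabled:                                      yes
  Votes:                                        1'

lun_did=d3    # DID device that maps to the LUN you plan to delete
if printf '%s\n' "$quorum" | grep -q "Quorum Device Name:[[:space:]]*$lun_did\$"; then
  verdict="$lun_did is a quorum device; remove it first"
else
  verdict="$lun_did is not a quorum device"
fi
echo "$verdict"
```

If the LUN is a quorum device, remove it (for example, with clquorum remove) before you delete the LUN, then add a replacement quorum device afterward.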
Before you can delete a LUN that Veritas Volume Manager manages, you must remove the LUN from Veritas Volume Manager control. After you delete the LUN from any disk group, use the following commands to release it.
# vxdisk offline cNtXdY
# vxdisk rm cNtXdY
For the procedure about how to delete a LUN, refer to your storage device's documentation.
# rm /dev/rdsk/cNtXdY*
# rm /dev/dsk/cNtXdY*
# rm /dev/osa/dev/dsk/cNtXdY*
# rm /dev/osa/dev/rdsk/cNtXdY*
The RAID Manager software creates two paths to the LUN in the /dev/osa/dev/rdsk directory. Substitute the cNtXdY number from the other controller module in the storage array to determine the alternate path.
For example:
# lad
c0t5d0 1T93600714 LUNS: 0 1
c1t4d0 1T93500595 LUNS: 2
Therefore, the alternate paths are as follows:
/dev/osa/dev/dsk/c1t4d1*
/dev/osa/dev/rdsk/c1t4d1*
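The substitution can also be expressed as a small hedged sketch that derives the alternate path from saved lad output. The controller names and serial numbers are the illustrative ones above, not values from a real storage system.

```shell
# Illustrative only: derive the alternate /dev/osa path for a LUN from a
# saved "lad" listing; controllers and serials match the sample above.
lad='c0t5d0 1T93600714 LUNS: 0 1
c1t4d0 1T93500595 LUNS: 2'

primary=c0t5d1             # path to the LUN being deleted
lun=${primary##*d}         # LUN number: 1
this_ctrl=${primary%d*}    # controller-side prefix: c0t5

# Take the first device that sits on the other controller and reuse its
# cNtX prefix with the same LUN number.
other=$(printf '%s\n' "$lad" |
  awk -v skip="$this_ctrl" 'index($1, skip) != 1 {sub(/d[0-9]+$/, "", $1); print $1; exit}')
alt="${other}d${lun}"
echo "/dev/osa/dev/dsk/${alt}* /dev/osa/dev/rdsk/${alt}*"
```

For the sample listing, the sketch derives c1t4d1, which matches the alternate paths shown above.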
# rm /dev/osa/dev/dsk/cNtXdY*
# rm /dev/osa/dev/rdsk/cNtXdY*
# cldevice clear
# clnode evacuate from-node
For the procedure about how to shut down and power off a node, see your Oracle Solaris Cluster system administration documentation. For a list of Oracle Solaris Cluster documentation, see Related Documentation.
Use this procedure to completely remove and reset the LUN configuration.
Caution - If you reset a LUN configuration, a new device ID number is assigned to LUN 0. This change occurs because the software assigns a new world wide name (WWN) to the new LUN.
Before You Begin
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.
To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.
# format
For example:
phys-schost-1# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0t5d0 <SYMBIOS-StorEdgeA3500FCr-0301 cyl 3 alt 2 hd 64 sec 64>
          /pseudo/rdnexus@0/rdriver@5,0
       1. c0t5d1 <SYMBIOS-StorEdgeA3500FCr-0301 cyl 2025 alt 2 hd 64 sec 64>
          /pseudo/rdnexus@0/rdriver@5,1
To determine whether this LUN is a quorum device, use the following command.
# clquorum show
For procedures about how to add and remove quorum devices, see your Oracle Solaris Cluster system administration documentation. For a list of Oracle Solaris Cluster documentation, see Related Documentation.
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
Before you can delete LUNs that Veritas Volume Manager manages, you must completely remove them from Veritas Volume Manager control.
# vxdisk offline cNtXdY
# vxdisk rm cNtXdY
For the procedure about how to reset the LUN configuration, see the Sun StorEdge RAID Manager User’s Guide.
For more information about controller modes, see the Sun StorEdge RAID Manager Installation and Support Guide and the Sun StorEdge RAID Manager User’s Guide.
# rm /dev/rdsk/cNtXdY*
# rm /dev/dsk/cNtXdY*
# rm /dev/osa/dev/dsk/cNtXdY*
# rm /dev/osa/dev/rdsk/cNtXdY*
The RAID Manager software creates two paths to the LUN in the /dev/osa/dev/rdsk directory. Substitute the cNtXdY number from the other controller module in the storage array to determine the alternate path.
For example:
# lad
c0t5d0 1T93600714 LUNS: 0 1
c1t4d0 1T93500595 LUNS: 2
Therefore, the alternate paths are as follows:
/dev/osa/dev/dsk/c1t4d1*
/dev/osa/dev/rdsk/c1t4d1*
# rm /dev/osa/dev/dsk/cNtXdY*
# rm /dev/osa/dev/rdsk/cNtXdY*
# devfsadm -C
# cldevice clear
# clnode evacuate from-node
For the procedure about how to shut down and power off a node, see your Oracle Solaris Cluster system administration documentation. For a list of Oracle Solaris Cluster documentation, see Related Documentation.
If an error message like the following appears, ignore it. Continue with the next step.
device id for '/dev/rdsk/c0t5d0' does not match physical disk's id.
The device ID number for the original LUN 0 is removed. A new device ID is assigned to LUN 0.
Use this section to correct mismatched device ID numbers that might appear during the creation of LUNs. You correct the mismatch by deleting the Solaris and Oracle Solaris Cluster paths to the LUNs whose device ID numbers differ. After you reboot, the paths are corrected.
Before You Begin
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.
To perform this procedure, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.
# format
# rm /dev/rdsk/cNtXdY*
# rm /dev/dsk/cNtXdY*
# rm /dev/osa/dev/dsk/cNtXdY*
# rm /dev/osa/dev/rdsk/cNtXdY*
The RAID Manager software creates two paths to the LUN in the /dev/osa/dev/rdsk directory. Substitute the cNtXdY number from the other controller module in the storage array to determine the alternate path.
For example:
# lad
c0t5d0 1T93600714 LUNS: 0 1
c1t4d0 1T93500595 LUNS: 2
Therefore, the alternate paths are as follows:
/dev/osa/dev/dsk/c1t4d1*
/dev/osa/dev/rdsk/c1t4d1*
# rm /dev/osa/dev/dsk/cNtXdY*
# rm /dev/osa/dev/rdsk/cNtXdY*
# cldevice clear
# clnode evacuate from-node
For the procedure about how to shut down and power off a node, see your Oracle Solaris Cluster system administration documentation. For a list of Oracle Solaris Cluster documentation, see Related Documentation.