1. Installing and Maintaining a SCSI RAID Storage Device

This chapter contains the following procedures and topics:
How to Reset the LUN Configuration
How to Correct Mismatched Device ID Numbers
FRUs That Do Not Require Oracle Solaris Cluster Maintenance Procedures
Sun StorEdge A1000 Array and Netra st A1000 Array FRUs
Sun StorEdge A3500 System FRUs
How to Replace a Failed Controller or Restore an Offline Controller
How to Upgrade Controller Module Firmware
This section contains instructions for installing storage arrays in both new and existing clusters.
Table 1-1 Task Map: Installing Storage Arrays

Task                                           Information
Install a storage array in a new cluster.      How to Install a Storage Array in a New Cluster
Add a storage array to an existing cluster.    How to Add a Storage Array to an Existing Cluster
This procedure assumes that you are installing one or more storage arrays during the initial installation of a cluster.
This procedure uses an updated method for setting the scsi-initiator-id. The method that was published in earlier documentation is still applicable. However, if your cluster configuration uses a Sun StorEdge PCI Dual Ultra3 SCSI host adapter to connect to any other shared storage, you need to update your nvramrc script and set the scsi-initiator-id by following this procedure.
Before You Begin
This procedure relies on the following prerequisites and assumptions.
You have read the entire procedure.
You can access necessary patches, drivers, software packages, and hardware.
Your nodes are powered off or are at the OpenBoot PROM.
Your arrays are powered off.
Your interconnect hardware is connected to the nodes in your cluster.
No software is installed.
For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.
For cabling diagrams, see Appendix A, Cabling Diagrams.
To avoid SCSI-chain conflicts, the following steps instruct you to reserve SCSI address 7 for one host adapter in the SCSI chain and change the other host adapter's global scsi-initiator-id to an available SCSI address. Then the steps instruct you to change the scsi-initiator-id for local devices back to 7.
Note - A slot in the storage array might not be in use. However, do not set the scsi-initiator-id to a SCSI address for that disk slot. This precaution minimizes future complications if you install additional disk drives.
Note - If necessary, halt the nodes so that you can perform OpenBoot PROM (OBP) Monitor tasks at the ok prompt.
For the procedure about powering on a storage device, see the service manual that shipped with your storage device.
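If a node is running the Solaris OS, you can bring it to the ok prompt and inspect the current initiator ID before you change anything. The following sketch uses standard Solaris and OpenBoot commands; the exact printenv output layout varies by OpenBoot version.

# shutdown -y -g0 -i0
...
{0} ok printenv scsi-initiator-id
scsi-initiator-id = 7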
{1} ok setenv scsi-initiator-id 6
scsi-initiator-id = 6
{0} ok show-disks
Use this information to change the SCSI addresses in the nvramrc script. Do not include the /sd directories in the device paths.
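For example, using the sample show-disks output that appears later in this chapter, trim the trailing /sd component before you enter the path in the nvramrc script:

/sbus@6,0/QLGC,isp@2,10000/sd...   (path as displayed by show-disks)
/sbus@6,0/QLGC,isp@2,10000         (path as entered in the nvramrc script)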
For a full list of commands, see the OpenBoot 2.x Command Reference Manual.
Caution - Insert exactly one space after the first double quote and before scsi-initiator-id.
{0} ok nvedit
0: probe-all
1: cd /pci@1f,4000/scsi@2
2: 7 " scsi-initiator-id" integer-property
3: device-end
4: cd /pci@1f,4000/scsi@3
5: 7 " scsi-initiator-id" integer-property
6: device-end
7: install-console
8: banner[Control C]
{0} ok
The changes you make through the nvedit command are recorded on a temporary copy of the nvramrc script. You can continue to edit this copy without risk. After you complete your edits, save the changes. If you are not sure about the changes, discard them.
{0} ok nvstore
{1} ok
{0} ok nvquit
{1} ok
If the contents of the nvramrc script are incorrect, use the nvedit command to make corrections.
{0} ok printenv nvramrc
nvramrc = probe-all
          cd /pci@1f,4000/scsi@2
          7 " scsi-initiator-id" integer-property
          device-end
          cd /pci@1f,4000/scsi@3
          7 " scsi-initiator-id" integer-property
          device-end
          install-console
          banner
{1} ok
{0} ok setenv use-nvramrc? true
use-nvramrc? = true
{1} ok
Use the show-disks command to find the paths to the host adapters that are connected to these enclosures. Select each host adapter's device tree node, and display the node's properties to confirm that the scsi-initiator-id for each host adapter is set to 7.
{0} ok cd /pci@6,4000/pci@3/scsi@5
{0} ok .properties
scsi-initiator-id        00000007
...
For the most current list of patches, see My Oracle Support.
# reboot
For the procedure about how to install the RAID Manager software, see the Sun StorEdge RAID Manager User’s Guide.
For the required version of the RAID Manager software that Oracle Solaris Cluster software supports, see Restrictions and Requirements.
For the most current list of patches, see My Oracle Support.
For the NVSRAM file revision number, boot level, and procedure about how to upgrade the NVSRAM file, see the Sun StorEdge RAID Manager Release Notes.
For the firmware revision number and boot level, see the Sun StorEdge RAID Manager Release Notes. For the procedure about how to upgrade the firmware, see the Sun StorEdge RAID Manager User’s Guide.
Ensure that the following Rdac parameters are set in the /etc/osa/rmparams file:

Rdac_RetryCount=1
Rdac_NoAltOffline=TRUE
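After you edit the file, a quick way to confirm that both parameters are set is to search for them; this check is a convenience, not a required step:

# egrep 'Rdac_RetryCount|Rdac_NoAltOffline' /etc/osa/rmparams
Rdac_RetryCount=1
Rdac_NoAltOffline=TRUE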
For more information about controller modes, see the Sun StorEdge RAID Manager Installation and Support Guide and the Sun StorEdge RAID Manager User’s Guide.
For the procedure about how to set up the storage array with LUNs and hot spares, see the Sun StorEdge RAID Manager User’s Guide.
Note - Use the format command to verify Solaris logical device names.
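For example, a newly created LUN appears in the format disk list alongside the local disks. The following entry is illustrative only; the controller numbers, targets, device paths, and geometry depend on your configuration:

# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0t0d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
          /pci@1f,4000/scsi@3/disk@0,0
       1. c1t5d0 <Symbios-StorEdgeA1000 cyl 2025 alt 2 hd 64 sec 64>
          /pseudo/rdnexus@1/rdriver@5,0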
# /etc/raid/bin/hot_add
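The hot_add utility creates the Solaris device files for the new LUNs without a reconfiguration reboot. One way to confirm that the LUNs are visible afterward is a non-interactive format listing; redirecting from /dev/null makes format print the disk list and exit:

# format < /dev/null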
See Also
To continue with Oracle Solaris Cluster software and data services installation tasks, see your Oracle Solaris Cluster software installation documentation and the Oracle Solaris Cluster data services developer's documentation. For a list of Oracle Solaris Cluster documentation, see Related Documentation.
Use this procedure to add a storage device to an existing cluster. If you need to install a storage device in a new cluster, use the procedure in How to Install a Storage Array in a New Cluster.
You might want to perform this procedure in the following scenarios.
You need to increase available storage.
You need to upgrade to a higher-quality or larger storage array.
To upgrade storage arrays, remove the old storage array and then add the new storage array.
To replace a storage array with the same type of storage array, use this procedure.
Before You Begin
This procedure relies on the following prerequisites and assumptions.
Your cluster is operational.
This procedure defines Node A as the node with which you begin working. Node B is the remaining node.
This procedure uses an updated method for setting the scsi-initiator-id. For this storage array, the method that was published in earlier documentation is still applicable. However, if your cluster configuration uses a Sun StorEdge PCI Dual Ultra3 SCSI host adapter to connect to any other shared storage, you need to update your nvramrc script and set the scsi-initiator-id by using this procedure.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical (see the example that follows this list).
To perform this procedure, become superuser or assume a role that provides solaris.cluster.read RBAC (role-based access control) authorization.
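As an illustration of the long and short command forms noted above, the following two commands are equivalent; cldev is the short form of cldevice:

# cldevice show
# cldev show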
For the required version of the RAID Manager software that Oracle Solaris Cluster software supports, see Restrictions and Requirements.
For the procedure about how to install RAID Manager software, see the Sun StorEdge RAID Manager Installation and Support Guide.
For the most current list of software, firmware, and patches that your storage array or storage system requires, refer to the appropriate EarlyNotifier that is outlined in Related Documentation. This document is available online to Oracle service providers and to customers with service contracts at My Oracle Support.
For the location of patches and installation instructions, see your Oracle Solaris Cluster release notes documentation. For a list of Oracle Solaris Cluster documentation, see Related Documentation.
Ensure that the following Rdac parameters are set in the /etc/osa/rmparams file:

Rdac_RetryCount=1
Rdac_NoAltOffline=TRUE
For the procedure about how to power on the storage array or storage system, see your storage documentation. For a list of storage documentation, see Related Documentation.
For the procedure about how to shut down and power off a node, see your Oracle Solaris Cluster system administration documentation. For a list of Oracle Solaris Cluster documentation, see Related Documentation.
For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.
For cabling diagrams, see Appendix A, Cabling Diagrams.
To avoid SCSI-chain conflicts, the following steps instruct you to reserve SCSI address 7 for one host adapter in the SCSI chain and change the other host adapter's global scsi-initiator-id to an available SCSI address. Then the steps instruct you to change the scsi-initiator-id for local devices back to 7.
Note - A slot in the storage array might not be in use. However, do not set the scsi-initiator-id to a SCSI address for that disk slot. This precaution minimizes future complications if you install additional disk drives.
Note - If necessary, halt the nodes so that you can perform OpenBoot PROM (OBP) Monitor tasks at the ok prompt.
For the procedure about powering on a storage device, see the service manual that shipped with your storage device.
{1} ok setenv scsi-initiator-id 6
scsi-initiator-id = 6
{0} ok show-disks
Use this information to change the SCSI addresses in the nvramrc script. Do not include the /sd directories in the device paths.
For a full list of commands, see the OpenBoot 2.x Command Reference Manual.
Caution - Insert exactly one space after the first double quote and before scsi-initiator-id.
{0} ok nvedit
0: probe-all
1: cd /pci@1f,4000/scsi@2
2: 7 " scsi-initiator-id" integer-property
3: device-end
4: cd /pci@1f,4000/scsi@3
5: 7 " scsi-initiator-id" integer-property
6: device-end
7: install-console
8: banner[Control C]
{0} ok
The changes you make through the nvedit command are recorded on a temporary copy of the nvramrc script. You can continue to edit this copy without risk. After you complete your edits, save the changes. If you are not sure about the changes, discard them.
{0} ok nvstore
{1} ok
{0} ok nvquit
{1} ok
If the contents of the nvramrc script are incorrect, use the nvedit command to make corrections.
{0} ok printenv nvramrc
nvramrc = probe-all
          cd /pci@1f,4000/scsi@2
          7 " scsi-initiator-id" integer-property
          device-end
          cd /pci@1f,4000/scsi@3
          7 " scsi-initiator-id" integer-property
          device-end
          install-console
          banner
{1} ok
{0} ok setenv use-nvramrc? true
use-nvramrc? = true
{1} ok
For the procedure about how to shut down and power off a node, see your Oracle Solaris Cluster system administration documentation. For a list of Oracle Solaris Cluster documentation, see Related Documentation.
For the procedure about how to install host adapters, see the documentation that shipped with your nodes.
For cabling diagrams, see Adding a Sun StorEdge A3500 Storage System.
Do not enable the node to boot. If necessary, halt the system to continue with OpenBoot PROM (OBP) Monitor tasks.
If the node does not recognize the new hardware, check all hardware connections and repeat the installation steps you performed in Step 12.
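If you want to confirm from the ok prompt that the host adapters and attached disks are visible, the OpenBoot probe-scsi-all command lists the devices on each SCSI bus. On many systems you should first set auto-boot? to false and issue reset-all so that probing does not hang the system; see your platform's OpenBoot documentation. A sketch:

{0} ok setenv auto-boot? false
auto-boot? = false
{0} ok reset-all
{0} ok probe-scsi-all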
{0} ok show-disks
...
b) /sbus@6,0/QLGC,isp@2,10000/sd...
d) /sbus@2,0/QLGC,isp@2,10000/sd...
{0} ok
Use the show-disks command to find the paths to the host adapters that are connected to these enclosures. Select each host adapter's device tree node, and display the node's properties to confirm that the scsi-initiator-id for each host adapter is set to 7.
{0} ok cd /pci@6,4000/pci@3/scsi@5
{0} ok .properties
scsi-initiator-id        00000007
...
For the NVSRAM file revision number and boot level, see the Sun StorEdge RAID Manager Release Notes. For the procedure about how to upgrade the NVSRAM file, see the Sun StorEdge RAID Manager User’s Guide.
For the revision number and boot level of the controller module firmware, see the Sun StorEdge RAID Manager Release Notes. For the procedure about how to upgrade the controller firmware, see How to Upgrade Controller Module Firmware.
# reboot
# cldevice show
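The cldevice show output lists each DID instance together with its device path on every node, which is how you spot mismatched device ID numbers. The following fragment is illustrative only; your DID numbers, node names, and controller paths will differ:

=== DID Device Instances ===

DID Device Name:        /dev/did/rdsk/d4
  Full Device Path:     node1:/dev/rdsk/c1t5d0
  Full Device Path:     node2:/dev/rdsk/c1t5d0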
For more information about controller modes, see the Sun StorEdge RAID Manager Installation and Support Guide and the Sun StorEdge RAID Manager User’s Guide.
See Also
To create a LUN from disk drives that are unassigned, see How to Create a LUN.
To upgrade controller module firmware, see How to Upgrade Controller Module Firmware.