Oracle Solaris Cluster 3.3 With SCSI JBOD Storage Device Manual
1. Installing a SCSI JBOD Storage Device
SPARC: How to Install a Storage Array in a New SPARC Based Cluster
x86: How to Install a Storage Array in a New x86 Based Cluster
SPARC: How to Add a Storage Array to an Existing SPARC Based Cluster
x86: How to Add a Storage Array to an Existing x86 Based Cluster
This section contains procedures for installing storage arrays in new clusters and adding them to existing clusters.
If your storage array uses single-ended SCSI specifications, ensure that your bus lengths comply with the following guidelines.
The single-ended SCSI specifications specify that bus lengths are based on speed and number of devices. The bus lengths in the following table outline a typical implementation of the single-ended SCSI specifications for Oracle hardware. For details, see your host adapter and storage documentation.
Table 1-1 Typical Single-Ended, Wide SCSI Bus Lengths
¹ Devices include both targets and initiators.
If you exceed these specifications, you might experience SCSI errors. The host adapter or the driver might recover from these errors by retrying the request. If this action does not succeed, the host adapter or the driver might recover by renegotiating to a less demanding mode of operation. In some cases, the host adapter or the driver might not be able to recover from these errors, and I/O might fail. You experience delays in I/O if the host adapter or the driver needs to perform this recovery.
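The total bus length is the sum of the external cables plus the internal bus lengths in the arrays, nodes, and host adapters. The following sketch shows the arithmetic; all figures, including the limit, are illustrative assumptions, not values from Table 1-1.

```shell
# Illustrative SCSI bus-length check. Every figure below is an
# assumption for the sake of example; use the lengths and limit that
# apply to your hardware (see Table 1-1 and your documentation).
cable_to_node_a=120   # cm, external cable to Node A
cable_to_node_b=120   # cm, external cable to Node B
internal_array=60     # cm, bus length internal to the storage array
internal_nodes=30     # cm, combined internal length in both nodes
limit=300             # cm, assumed single-ended limit for this speed

total=$(( cable_to_node_a + cable_to_node_b + internal_array + internal_nodes ))
if [ "$total" -le "$limit" ]; then
  echo "within limit: ${total} cm"
else
  echo "exceeds limit: ${total} cm"
fi
```

In this example the total (330 cm) exceeds the assumed limit, the situation that produces the SCSI errors and retries described above.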
If your configuration uses UltraSCSI and requires the 6-meter bus length, use the host adapter driver's scsi-options property to limit the speed negotiation to FastSCSI operation. Use the following /kernel/drv/glm.conf file as an example to set the scsi-options property.
name="glm" parent="/pci@1f,4000" unit-address="2" scsi-options=0x3f8;
This example uses specific hardware. Change this example to accommodate the hardware in your configuration. In this example, the scsi-options property sets the following support.
Disconnect/reconnect
Synchronous transfer
Parity
Fast SCSI
Wide SCSI
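The 0x3f8 value is a bit mask combining these options. The bit assignments below are modeled on the Solaris SCSI_OPTIONS_* flags and are stated here as an assumption; verify them against the scsi-options documentation for your release. The mask also includes bits for tagged command queuing and linked commands in addition to the options listed above.

```shell
# Assumed bit values modeled on the Solaris SCSI_OPTIONS_* flags;
# confirm against your release before relying on them.
DR=0x8        # disconnect/reconnect
LINK=0x10     # linked commands
SYNC=0x20     # synchronous transfer
PARITY=0x40   # parity checking
TAG=0x80      # tagged command queuing
FAST=0x100    # Fast SCSI
WIDE=0x200    # Wide SCSI

mask=$(( DR | LINK | SYNC | PARITY | TAG | FAST | WIDE ))
printf '0x%x\n' "$mask"    # prints 0x3f8
```

Omitting the faster-speed bits from the mask is what limits negotiation to FastSCSI operation.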
For more information, see the isp(7D), glm(7D), or other host adapter driver man page and the documentation on http://sunsolve.sun.com.
Table 1-2 Task Map: Installing Storage Arrays
When your cluster configuration contains shared SCSI JBOD arrays, you must ensure that the two nodes that are connected to a shared SCSI JBOD array have unique SCSI initiator IDs.
The installation procedures in this section describe the steps for setting SCSI initiator IDs in a two-node cluster. If your cluster has additional nodes connected to shared SCSI JBOD arrays, apply these steps as appropriate. Some topologies, for example, clustered pairs, use these procedures unchanged. Others, for example, the N+1 topology, might require minor changes.
x86 only - On x86 based systems, setting SCSI initiator IDs is a two-step process. You first set the IDs in the BIOS and then in a configuration file. Until both steps are complete, the IDs are not set and the systems might be unable to boot or the nodes might panic. Set the IDs on one node at a time, as instructed in the procedure.
This procedure assumes that you are installing one or more storage arrays at initial installation of a SPARC based cluster. If you are adding arrays to a running cluster, use the procedure in SPARC: How to Add a Storage Array to an Existing SPARC Based Cluster.
Multihost storage in clusters uses the multi-initiator capability of the small computer system interface (SCSI) specification. When installing arrays in your cluster, you must ensure that each device in each SCSI chain has a unique SCSI address. The procedure that follows has specific instructions for achieving this requirement. For additional information about multi-initiator capability, see Multi-Initiator SCSI in Oracle Solaris Cluster Concepts Guide.
Note - This procedure uses an updated method for setting the scsi-initiator-id. The method that was published in earlier documentation is still applicable. However, the method changes if your cluster configuration uses a Sun StorEdge PCI Dual Ultra3 SCSI host adapter to connect to any other shared storage. You then must update your nvramrc script and set the scsi-initiator-id by following the steps in this procedure.
Before You Begin
This procedure relies on the following prerequisites and assumptions; ensure that they are met before you begin.
You have read the entire procedure.
You can access necessary patches, drivers, software packages, and hardware.
The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets. Oracle Enterprise Manager Ops Center 2.5 helps improve operational efficiency and ensures that you have the latest software patches for your software. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.
Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.
If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.
For required firmware, see the Sun System Handbook.
You have planned the SCSI address assignments for your host adapters.
Your nodes are powered off or are at the OpenBoot PROM.
Your arrays are powered off.
Your cluster interconnect hardware is connected to the nodes in your cluster.
No software is installed.
If possible, put each host adapter on a separate I/O board to ensure maximum redundancy.
For cabling diagrams, see Chapter 3, Cabling Diagrams.
Note - Ensure that the SCSI bus length does not exceed bus-length specifications. This measurement includes the cables to both nodes, as well as the bus length that is internal to each storage array, node, and the host adapter. For more information about bus-length limitations, see your hardware documentation.
If your storage array has redundant power inputs, connect each power cord from the storage array to a different power source. If the arrays are not mirrors of each other, the arrays can share power sources.
To avoid SCSI-chain conflicts, the following steps instruct you to reserve SCSI address 7 for one host adapter in the SCSI chain and change the other host adapter's global scsi-initiator-id to an available SCSI address. Then the steps instruct you to change the scsi-initiator-id for local devices back to 7.
Note - A slot in the storage array might not be in use. However, do not set the scsi-initiator-id to a SCSI address for that disk slot. This precaution minimizes future complications if you install additional disk drives.
Note - If necessary, halt the nodes so that you can perform OpenBoot PROM (OBP) Monitor tasks at the ok prompt.
For the procedure about powering on a storage device, see the service manual that shipped with your storage device.
{1} ok setenv scsi-initiator-id 6
scsi-initiator-id = 6
{0} ok show-disks
Use this information to change the SCSI addresses in the nvramrc script. Do not include the /sd directories in the device paths.
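As a quick illustration of trimming the /sd component, the shell parameter expansion below strips the trailing disk node from an example show-disks path (the path itself is the one used elsewhere in this procedure; substitute your own):

```shell
# Trim the trailing /sd component from a show-disks device path so
# that only the host adapter (controller) path remains for use in
# the nvramrc script. The path is an example from this procedure.
disk_path='/pci@1f,4000/scsi@2/sd'
controller_path=${disk_path%/sd*}
echo "$controller_path"    # prints /pci@1f,4000/scsi@2
```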
For a full list of commands, see the OpenBoot 2.x Command Reference Manual.
Caution - Insert exactly one space after the first double quote and before scsi-initiator-id.
{0} ok nvedit
0: probe-all
1: cd /pci@1f,4000/scsi@2
2: 7 encode-int " scsi-initiator-id" property
3: device-end
4: cd /pci@1f,4000/scsi@3
5: 7 encode-int " scsi-initiator-id" property
6: device-end
7: install-console
8: banner [Control-C]
{0} ok
The changes you make through the nvedit command are recorded on a temporary copy of the nvramrc script. You can continue to edit this copy without risk. After you complete your edits, save the changes. If you are not sure about the changes, discard them.
{0} ok nvstore
{1} ok
{0} ok nvquit
{1} ok
If the contents of the nvramrc script are incorrect, use the nvedit command to make corrections.
{0} ok printenv nvramrc
nvramrc = probe-all
          cd /pci@1f,4000/scsi@2
          7 " scsi-initiator-id" integer-property
          device-end
          cd /pci@1f,4000/scsi@3
          7 " scsi-initiator-id" integer-property
          device-end
          install-console
          banner
{1} ok
{0} ok setenv use-nvramrc? true
use-nvramrc? = true
{1} ok
Use the show-disks command to find the paths to the host adapters that are connected to these enclosures. Select each host adapter's device tree node, and display the node's properties to confirm that the scsi-initiator-id for each host adapter is set to 7.
{0} ok cd /pci@6,4000/pci@3/scsi@5
{0} ok .properties
scsi-initiator-id        00000007
...
See your Oracle Solaris Cluster installation documentation for instructions.
See the host adapter documentation for driver installation procedures.
The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets. Oracle Enterprise Manager Ops Center 2.5 helps improve operational efficiency and ensures that you have the latest software patches for your software. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.
Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.
If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.
For required firmware, see the Sun System Handbook.
scsi-options=0x1ff8;
For software installation procedures, see the Oracle Solaris Cluster installation documentation.
Caution - Do not save disk-partitioning information in /tmp because you will lose this file when you reboot. Instead, save this file in /usr/tmp.
You might need disk-partitioning information if you replace a failed disk drive in the future.
This procedure assumes that you are installing one or more storage arrays at initial installation of an x86 based cluster. If you are adding arrays to a running cluster, use the procedure in x86: How to Add a Storage Array to an Existing x86 Based Cluster.
Multihost storage in clusters uses the multi-initiator capability of the small computer system interface (SCSI) specification. When installing arrays in your cluster, you must ensure that each device in each SCSI chain has a unique SCSI address. The procedure that follows has specific instructions for achieving this requirement. For additional information about multi-initiator capability, see Multi-Initiator SCSI in Oracle Solaris Cluster Concepts Guide.
Note - On x86 based systems, setting SCSI initiator IDs is a two-step process. You first set the IDs in the BIOS and then in a configuration file. Until both steps are complete, the IDs are not set and the systems might not boot or the nodes might panic. Set the IDs on one node at a time, as instructed in the procedure.
Before You Begin
This procedure relies on the following prerequisites and assumptions; ensure that they are met before you begin.
You have read the entire procedure.
You can access necessary patches, drivers, software packages, and hardware.
The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets. Oracle Enterprise Manager Ops Center 2.5 helps improve operational efficiency and ensures that you have the latest software patches for your software. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.
Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.
If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.
For required firmware, see the Sun System Handbook.
You have planned the SCSI address assignments for your host adapters.
Your nodes and arrays are powered off.
Your cluster interconnect hardware is connected to the nodes in your cluster.
No software is installed.
If possible, put each host adapter on a separate bus to ensure maximum redundancy.
To avoid SCSI-chain conflicts, perform the following steps.
Note - Perform these steps on only one cluster node.
To access the BIOS on the V40z server with SG-XCPI2SCSI-LM320 Sun StorEdge PCI/PCI-X Dual Ultra320 SCSI Host Adapter cards, press Ctrl-C when prompted during reboot.
For cabling diagrams, see Chapter 3, Cabling Diagrams.
Note - Ensure that the bus length does not exceed SCSI-bus-length specifications. This measurement includes the cables to both nodes, as well as the bus length that is internal to each storage array, node, and the host adapter. For more information about SCSI-bus-length limitations, see your hardware documentation.
If your storage array has redundant power inputs, connect each power cord from the storage array to a different power source. If the arrays are not mirrors of each other, the arrays can share power sources.
For the procedure about powering on a storage device, see the service manual that shipped with your storage device.
See your Oracle Solaris Cluster installation documentation for instructions.
For driver installation procedures, see the host adapter documentation.
The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets. Oracle Enterprise Manager Ops Center 2.5 helps improve operational efficiency and ensures that you have the latest software patches for your software. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.
Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.
If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.
For required firmware, see the Sun System Handbook.
To create the mpt.conf entries, you need the path to your boot disk and the SCSI unit address.
To find this information on X4000 series servers with SG-XCPI2SCSI-LM320 Sun StorEdge PCI/PCI-X Dual Ultra320 SCSI Host Adapter cards, use the following command:
# echo | format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
     0. c1t0d0 <DEFAULT cyl 8938 alt 2 hd 255 sec 63>
        /pci@0,0/pci1022,7450@a/pci17c2,10@4/sd@0,0
scsi-initiator-id=6;
name="mpt" parent="/pci@0,0/pci1022,7450@a"
        unit-address="4"
        scsi-initiator-id=7;
Note - These entries are based on the foregoing example output of the format command. Your entries must include the values output from your format command. Also, note that the parent and unit-address values are strings. The quotation marks are required to form correct values in the mpt.conf file.
The entries in this example have the following meanings:
Matches your setting in the BIOS for the host adapter ports.
Indicates that these settings are for the mpt driver.
Specifies the path to your local drive, which you discovered in Step a.
Specifies the unit address of the local drive. In the example in Step a, this information derives from the pci17c2,10@4 portion of the output.
Sets your node's local drive back to the default SCSI setting of 7.
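The parent and unit-address values can be read straight off the device path that format prints. The shell sketch below derives both from the example path in this procedure; substitute the path from your own format output.

```shell
# Derive the mpt.conf parent and unit-address values from a boot-disk
# device path as printed by format(1M). The path below is the example
# from this procedure; use your own output in practice.
path='/pci@0,0/pci1022,7450@a/pci17c2,10@4/sd@0,0'

disk_node=${path%/*}          # strip the trailing /sd@0,0 component
parent=${disk_node%/*}        # everything above the host adapter node
adapter=${disk_node##*/}      # host adapter component, e.g. pci17c2,10@4
unit_address=${adapter##*@}   # text after the @, e.g. 4

echo "parent=\"$parent\" unit-address=\"$unit_address\""
```

For the example path this prints parent="/pci@0,0/pci1022,7450@a" unit-address="4", matching the mpt.conf entries shown above.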
For the procedure about booting cluster nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.
For more information, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.
Run the update_drv -f sd command and then the devfsadm command.
See your Oracle Solaris Cluster installation documentation for instructions.
For driver installation procedures, see the host adapter documentation.
The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets. Oracle Enterprise Manager Ops Center 2.5 helps improve operational efficiency and ensures that you have the latest software patches for your software. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.
Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.
If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.
For required firmware, see the Sun System Handbook.
For more information, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.
Run the update_drv -f sd command and then the devfsadm command.
scsi-options=0x1ff8;
For more information, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.
Run the update_drv -f sd command and then the devfsadm command.
For software installation procedures, see the Oracle Solaris Cluster installation documentation.
Example 1-1 x86: A Completed mpt.conf File When Using StorEdge 3320 Arrays
The following mpt.conf file shows all entries, assuming the following:
The output of the format command is that shown in Step a.
You are using Sun StorEdge 3320 JBOD arrays with the Sun StorEdge PCI/PCI-X Dual Ultra320 SCSI host adapter.
# more /kernel/drv/mpt.conf
scsi-initiator-id=6;
name="mpt" parent="/pci@0,0/pci1022,7450@a"
        unit-address="4"
        scsi-initiator-id=7;
Example 1-2 x86: A Completed mpt.conf File When Using StorEdge 3310 Arrays
The following mpt.conf file shows all entries, assuming the following:
The output of the format command is that shown in Step a.
You are using Sun StorEdge 3310 JBOD arrays with the Sun StorEdge PCI/PCI-X Dual Ultra320 SCSI host adapter.
# more /kernel/drv/mpt.conf
scsi-initiator-id=6;
name="mpt" parent="/pci@0,0/pci1022,7450@a"
        unit-address="4"
        scsi-initiator-id=7;
scsi-options=0x1ff8;
Next Steps
If needed, finish setting up your storage arrays, including partitions. If you are using Solaris Volume Manager as your volume manager, save the disk-partitioning information. You might need disk-partitioning information if you replace a failed disk drive in the future.
Caution - Do not save disk-partitioning information in /tmp because you will lose this file when you reboot. Instead, save this file in /usr/tmp.
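One way to capture the partitioning is to save each shared disk's VTOC with prtvtoc. The disk name below is an example; list your own devices with format. This is a sketch only, and prtvtoc(1M) is Solaris-specific, so the command itself is shown commented out.

```shell
# Save a shared disk's partition table where it survives a reboot
# (/usr/tmp, not /tmp). The disk name is an example; substitute yours.
disk=c1t0d0
save_file="/usr/tmp/${disk}.vtoc"

# On the cluster node, as superuser (Solaris-specific command):
# prtvtoc "/dev/rdsk/${disk}s2" > "$save_file"

echo "$save_file"    # prints /usr/tmp/c1t0d0.vtoc
```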
This procedure contains instructions for adding storage arrays to an operational cluster. If you need to install storage arrays to a new cluster, use the procedure in SPARC: How to Install a Storage Array in a New SPARC Based Cluster or x86: How to Install a Storage Array in a New x86 Based Cluster.
Adding a storage array enables you to alter your storage pool. You might want to perform this procedure in the following scenarios.
You need to increase your storage pool.
You need to upgrade to a higher-quality or to a larger storage array.
To upgrade storage arrays, remove the old storage array and then add the new storage array.
To replace a storage array with the same type of storage array, see How to Replace the Chassis.
This procedure defines Node A as the node with which you begin working. Node B is the remaining node.
Before You Begin
This procedure relies on the following prerequisites and assumptions.
Your cluster is operational and all nodes are powered on.
Your nodes are not configured with dynamic reconfiguration functionality.
If your nodes are configured for dynamic reconfiguration, see the Oracle Solaris Cluster system administration documentation, and skip steps that instruct you to shut down the node.
To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify role-based access control (RBAC) authorization.
Install the software and patches to all nodes that will connect to the new storage array.
The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets. Oracle Enterprise Manager Ops Center 2.5 helps improve operational efficiency and ensures that you have the latest software patches for your software. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.
Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.
If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.
For required firmware, see the Sun System Handbook.
# clnode evacuate NodeA
If necessary, terminate the ports that will connect to Node B.
If you have a Netra D130 array, always terminate the ports that connect to Node B.
If you have a StorEdge 3310 or 3320 SCSI array, terminate the ports that connect to Node B when using a split-bus configuration.
If your storage array is a StorEdge 3310 or 3320 SCSI array, do not power on the storage array until the storage array is cabled to Node A.
For cabling diagrams, see Chapter 3, Cabling Diagrams.
Note - Ensure that the bus length does not exceed SCSI bus-length specifications. This measurement includes the cables to both nodes, as well as the bus length that is internal to each storage array, node, and the host adapter. For more information about SCSI bus-length limitations, see your hardware documentation.
To avoid SCSI-chain conflicts, the following steps instruct you to reserve SCSI address 7 for one host adapter in the SCSI chain and change the other host adapter's global scsi-initiator-id to an available SCSI address. Then the steps instruct you to change the scsi-initiator-id for local devices back to 7.
Note - A slot in the storage array might not be in use. However, do not set the scsi-initiator-id to a SCSI address for that disk slot. This precaution minimizes future complications if you install additional disk drives.
For the procedure about powering on a storage device, see the service manual that shipped with your storage device.
Note - If necessary, halt the nodes so that you can perform OpenBoot PROM (OBP) Monitor tasks at the ok prompt.
{1} ok setenv scsi-initiator-id 6
scsi-initiator-id = 6
{0} ok show-disks
Use this information to change the SCSI addresses in the nvramrc script. Do not include the /sd directories in the device paths.
For a full list of commands, see the OpenBoot 2.x Command Reference Manual.
Caution - Insert exactly one space after the first double quote and before scsi-initiator-id.
{0} ok nvedit
0: probe-all
1: cd /pci@1f,4000/scsi@2
2: 7 encode-int " scsi-initiator-id" property
3: device-end
4: cd /pci@1f,4000/scsi@3
5: 7 encode-int " scsi-initiator-id" property
6: device-end
7: install-console
8: banner [Control-C]
{0} ok
The changes you make through the nvedit command are recorded on a temporary copy of the nvramrc script. You can continue to edit this copy without risk. After you complete your edits, save the changes. If you are not sure about the changes, discard them.
{0} ok nvstore
{1} ok
{0} ok nvquit
{1} ok
If the contents of the nvramrc script are incorrect, use the nvedit command to make corrections.
{0} ok printenv nvramrc
nvramrc = probe-all
          cd /pci@1f,4000/scsi@2
          7 " scsi-initiator-id" integer-property
          device-end
          cd /pci@1f,4000/scsi@3
          7 " scsi-initiator-id" integer-property
          device-end
          install-console
          banner
{1} ok
{0} ok setenv use-nvramrc? true
use-nvramrc? = true
{1} ok
# cldevice list -n NodeA -v
# clnode evacuate NodeB
For the procedure about how to shut down and power off a node, see the Oracle Solaris Cluster system administration documentation.
For the procedure about how to install a host adapter, see your host adapter and server documentation.
For the procedure about booting cluster nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.
If you added port terminators in Step 4, remove the port terminators and connect the storage array to Node B.
For cabling diagrams, see Chapter 3, Cabling Diagrams.
Note - Ensure that the bus length does not exceed bus-length specifications. This measurement includes the cables to both nodes, as well as the bus length that is internal to each storage array, node, and the host adapter. For more information about bus-length limitations, see your hardware documentation.
Use the show-disks command to find the paths to the host adapters that are connected to these enclosures. Select each host adapter's device tree node, and display the node's properties to confirm that the scsi-initiator-id for each host adapter is set to 7.
{0} ok cd /pci@6,4000/pci@3/scsi@5
{0} ok .properties
scsi-initiator-id        00000007
...
For the procedure about booting cluster nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.
# cldevice list -n NodeB -v
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
Next Steps
If needed, finish setting up your storage arrays, including partitions. If you are using Solaris Volume Manager as your volume manager, save the disk-partitioning information. You might need disk-partitioning information if you replace a failed disk drive in the future.
Caution - Do not save disk-partitioning information in /tmp because you will lose this file when you reboot. Instead, save this file in /usr/tmp.
This procedure contains instructions for adding storage arrays to an operational cluster. If you need to install storage arrays to a new cluster, use the procedure in SPARC: How to Install a Storage Array in a New SPARC Based Cluster or x86: How to Install a Storage Array in a New x86 Based Cluster.
Adding a storage array enables you to alter your storage pool. You might want to perform this procedure in the following scenarios.
You need to increase your storage pool.
You need to upgrade to a higher-quality or to a larger storage array.
To upgrade storage arrays, remove the old storage array and then add the new storage array.
To replace a storage array with the same type of storage array, see How to Replace the Chassis.
This procedure defines Node A as the node with which you begin working. Node B is the remaining node.
Before You Begin
This procedure relies on the following prerequisites and assumptions.
Your cluster is operational and all nodes are powered on.
Your nodes are not configured with dynamic reconfiguration functionality.
If your nodes are configured for dynamic reconfiguration, see the Oracle Solaris Cluster system administration documentation, and skip steps that instruct you to shut down the node.
To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.
Install the software and patches to all nodes that will connect to the new storage array.
The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets. Oracle Enterprise Manager Ops Center 2.5 helps improve operational efficiency and ensures that you have the latest software patches for your software. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.
Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.
If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.
For required firmware, see the Sun System Handbook.
# clnode evacuate NodeA
Note - Perform these steps on only one cluster node: the node on which you previously configured the SCSI initiator IDs for the cluster.
To access the BIOS on the V40z server with X4422A Sun Dual Gigabit Ethernet and Dual SCSI Adapter cards, press Ctrl-C when prompted during reboot.
If necessary, terminate the ports that will connect to Node B.
If you have a Netra D130 array, always terminate the ports that connect to Node B.
If you have a StorEdge 3310 or 3320 SCSI array, terminate the ports that connect to Node B when using a split-bus configuration.
If your storage array is a StorEdge 3310 or 3320 SCSI array, do not power on the storage array until the storage array is cabled to Node A.
For cabling diagrams, see Chapter 3, Cabling Diagrams.
Note - Ensure that the bus length does not exceed SCSI bus-length specifications. This measurement includes the cables to both nodes, as well as the bus length that is internal to each storage array, node, and the host adapter. For more information about SCSI bus-length limitations, see your hardware documentation.
To create the mpt.conf entries, you need the path to your boot disk and the SCSI unit address.
To find this information on X4000 series servers with SG-XCPI2SCSI-LM320 Sun StorEdge PCI/PCI-X Dual Ultra320 SCSI Host Adapter cards, use the following command:
# echo | format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
     0. c1t0d0 <DEFAULT cyl 8938 alt 2 hd 255 sec 63>
        /pci@0,0/pci1022,7450@a/pci17c2,10@4/sd@0,0
scsi-initiator-id=6;
name="mpt" parent="/pci@0,0/pci1022,7450@a"
        unit-address="4"
        scsi-initiator-id=7;
Note - These entries are based on the foregoing example output of the format command. Your entries must include the values output from your format command. Also, note that the parent and unit-address values are strings. The quotation marks are required to form correct values in the mpt.conf file.
The entries in this example have the following meanings:
Matches your setting in the BIOS for the host adapter ports.
Indicates that these settings are for the mpt driver.
Specifies the path to your local drive, which you discovered in Step a.
Specifies the unit address of the local drive. In the example in Step a, this information derives from the pci17c2,10@4 portion of the output.
Sets your node's local drive back to the default SCSI setting of 7.
For the procedure about booting cluster nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.
# cldevice show -v
# clnode evacuate NodeB
For the procedure about how to shut down and power off a node, see the Oracle Solaris Cluster system administration documentation.
For the procedure about how to install a host adapter, see your host adapter and server documentation.
For the procedure about booting cluster nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.
If you added port terminators in Step 5, remove the port terminators and connect the storage array to Node B.
For cabling diagrams, see Chapter 3, Cabling Diagrams.
Note - Ensure that the bus length does not exceed bus-length specifications. This measurement includes the cables to both nodes, as well as the bus length that is internal to each storage array, node, and the host adapter. For more information about bus-length limitations, see your hardware documentation.
To access the BIOS on X4000 series servers with SG-XCPI2SCSI-LM320 Sun StorEdge PCI/PCI-X Dual Ultra320 SCSI Host Adapter cards, press Ctrl-C when prompted during reboot.
# cldevice show -v
For more information, see your Solaris Volume Manager documentation.
Next Steps
If needed, finish setting up your storage arrays, including partitions. If you are using Solaris Volume Manager as your volume manager, save the disk-partitioning information. You might need disk-partitioning information if you replace a failed disk drive in the future.
Caution - Do not save disk-partitioning information in /tmp because you will lose this file when you reboot. Instead, save this file in /usr/tmp.