Oracle Solaris Cluster 3.3 With SCSI JBOD Storage Device Manual
Installing a Storage Array

This section contains procedures for installing storage arrays in new clusters and adding them to existing clusters.

If your storage array uses single-ended SCSI specifications, ensure that your bus lengths comply with the following guidelines.

The single-ended SCSI specifications base the maximum bus length on the bus speed and the number of devices. The bus lengths in the following table outline a typical implementation of the single-ended SCSI specifications for Oracle hardware. For details, see your host adapter and storage documentation.

Table 1-1 Typical Single-Ended, Wide SCSI Bus Lengths

Bus Type                              Number of Devices [1]   Maximum Length
UltraSCSI (40 MB/s: FAST-20, Wide)    Up to 4                 3 meters
                                      Up to 8                 1.5 meters
FastSCSI (20 MB/s: FAST-10, Wide)     Up to 16                6 meters

[1] Devices include both targets and initiators.

If you exceed these specifications, you might experience SCSI errors. The host adapter or the driver might recover from these errors by retrying the request. If this action does not succeed, the host adapter or the driver might recover by renegotiating to a less demanding mode of operation. In some cases, the host adapter or the driver might not be able to recover from these errors, and I/O might fail. You experience delays in I/O if the host adapter or the driver needs to perform this recovery.

If your configuration uses UltraSCSI and requires the 6-meter bus length, use the host adapter driver's scsi-options property to limit the speed negotiation to FastSCSI operation. Use the following /kernel/drv/glm.conf file as an example to set the scsi-options property.

name="glm" parent="/pci@1f,4000" unit-address="2" scsi-options=0x3f8;

This example uses specific hardware. Change this example to accommodate the hardware in your configuration. In this example, the scsi-options property limits the host adapter's speed negotiation to FastSCSI operation.

For more information, see the isp(7D), glm(7D), or other host adapter driver man page and the documentation at http://sunsolve.sun.com.
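As a hedged illustration only, the scsi-options mask can be read as a bitwise OR of individual option bits. The values in the following annotated sketch are the scsi_options settings commonly documented for Solaris host adapter drivers; confirm them against your driver's man page before relying on them.

# Annotated sketch of the example glm.conf entry (assumed option-bit values):
#   0x008 disconnect/reconnect    0x010 linked commands    0x020 synchronous transfers
#   0x040 parity                  0x080 tagged queuing     0x100 FAST-10 (FastSCSI)
#   0x200 wide transfers
# The FAST-20 (UltraSCSI) bit, 0x400, is deliberately left clear, so negotiation
# stops at FastSCSI even on UltraSCSI-capable hardware.
name="glm" parent="/pci@1f,4000" unit-address="2" scsi-options=0x3f8;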

Table 1-2 Task Map: Installing Storage Arrays

Task: Install a storage array in a new cluster, before the OS and Oracle Solaris Cluster software are installed.
Information: SPARC: How to Install a Storage Array in a New SPARC Based Cluster or x86: How to Install a Storage Array in a New x86 Based Cluster

Task: Add a storage array to an existing cluster.
Information: SPARC: How to Add a Storage Array to an Existing SPARC Based Cluster or x86: How to Add a Storage Array to an Existing x86 Based Cluster

Setting SCSI Initiator IDs

When your cluster configuration contains shared SCSI JBOD arrays, you must ensure that the two nodes that are connected to a shared SCSI JBOD array have unique SCSI initiator IDs.

The installation procedures in this section describe the steps for setting SCSI initiator IDs in a two-node cluster. If your cluster has additional nodes connected to shared SCSI JBOD arrays, apply these steps as appropriate. Some topologies, for example clustered pairs, use these procedures unchanged. Others, such as the N+1 topology, might require minor changes.


x86 only - On x86 based systems, setting SCSI initiator IDs is a two-step process. You first set the IDs in the BIOS and then in a configuration file. Until both steps are complete, the IDs are not set and the systems might be unable to boot or the nodes might panic. Set the IDs on one node at a time, as instructed in the procedure.


SPARC: How to Install a Storage Array in a New SPARC Based Cluster

This procedure assumes that you are installing one or more storage arrays at initial installation of a SPARC based cluster. If you are adding arrays to a running cluster, use the procedure in SPARC: How to Add a Storage Array to an Existing SPARC Based Cluster.

Multihost storage in clusters uses the multi-initiator capability of the small computer system interface (SCSI) specification. When installing arrays in your cluster, you must ensure that each device in each SCSI chain has a unique SCSI address. The procedure that follows has specific instructions for achieving this requirement. For additional information about multi-initiator capability, see Multi-Initiator SCSI in Oracle Solaris Cluster Concepts Guide.


Note - This procedure uses an updated method for setting the scsi-initiator-id. The method that was published in earlier documentation is still applicable. However, the method changes if your cluster configuration uses a Sun StorEdge PCI Dual Ultra3 SCSI host adapter to connect to any other shared storage. You then must update your nvramrc script and set the scsi-initiator-id by following the steps in this procedure.


Before You Begin

Before performing this procedure, ensure that you have met the following prerequisites and assumptions.

  1. Verify that the storage arrays are set up correctly for your planned configuration.
  2. If necessary, install the host adapters in the nodes that you plan to connect to the storage array.

    If possible, put each host adapter on a separate I/O board to ensure maximum redundancy.

  3. Cable the storage arrays.

    For cabling diagrams, see Chapter 3, Cabling Diagrams.


    Note - Ensure that the SCSI bus length does not exceed bus-length specifications. This measurement includes the cables to both nodes, as well as the bus length that is internal to each storage array, node, and the host adapter. For more information about bus-length limitations, see your hardware documentation.


  4. Connect the AC or DC power cords for each storage array to a different power source.

    If your storage array has redundant power inputs, connect each power cord from the storage array to a different power source. If the arrays are not mirrors of each other, the arrays can share power sources.

  5. Ensure that each device in the SCSI chain has a unique SCSI address.

    To avoid SCSI-chain conflicts, the following steps instruct you to reserve SCSI address 7 for one host adapter in the SCSI chain and change the other host adapter's global scsi-initiator-id to an available SCSI address. Then the steps instruct you to change the scsi-initiator-id for local devices back to 7.


    Note - A slot in the storage array might not be in use. However, do not set the scsi-initiator-id to a SCSI address for that disk slot. This precaution minimizes future complications if you install additional disk drives.


    1. If necessary, power on the storage devices.

      Note - If necessary, halt the nodes so that you can perform OpenBoot PROM (OBP) Monitor tasks at the ok prompt.


      For the procedure about powering on a storage device, see the service manual that shipped with your storage device.

    2. If necessary, power on a node, but do not allow it to boot. If necessary, halt the system to continue with OBP Monitor tasks.
    3. Set the scsi-initiator-id for one node to 6.
      {1} ok setenv scsi-initiator-id 6
      scsi-initiator-id = 6
    4. Find the paths to the host adapters that connect to the local disk drives.
      {0} ok show-disks

      Use this information to change the SCSI addresses in the nvramrc script. Do not include the /sd directories in the device paths.

    5. Edit the nvramrc script to set the scsi-initiator-id for the local devices on the first node to 7.

      For a full list of commands, see the OpenBoot 2.x Command Reference Manual.


      Caution

      Caution - Insert exactly one space after the first double quote and before scsi-initiator-id.


      {0} ok nvedit
       0: probe-all 
       1: cd /pci@1f,4000/scsi@2
       2: 7 encode-int " scsi-initiator-id" property
       3: device-end 
       4: cd /pci@1f,4000/scsi@3 
       5: 7 encode-int " scsi-initiator-id" property 
       6: device-end
       7: install-console
       8: banner[Control C] 
      {0} ok
    6. Store the changes.

      The changes you make through the nvedit command are recorded on a temporary copy of the nvramrc script. You can continue to edit this copy without risk. After you complete your edits, save the changes. If you are not sure about the changes, discard them.

      • To store the changes, type the following command:
        {0} ok nvstore
        {1} ok 
      • To discard the changes, type the following command:
        {0} ok nvquit
        {1} ok 
    7. Verify the contents of the nvramrc script that you created, as shown in the following example.

      If the contents of the nvramrc script are incorrect, use the nvedit command to make corrections.

      {0} ok printenv nvramrc
      nvramrc =             probe-all
                            cd /pci@1f,4000/scsi@2 
                            7 " scsi-initiator-id" integer-property
                            device-end 
                            cd /pci@1f,4000/scsi@3
                            7 " scsi-initiator-id" integer-property
                            device-end 
                            install-console
                            banner
      {1} ok
    8. Instruct the OpenBoot PROM (OBP) Monitor to use the nvramrc script, as shown in the following example.
      {0} ok setenv use-nvramrc? true
      use-nvramrc? = true
      {1} ok 
  6. Verify that the scsi-initiator-id is set correctly on the second node.
    1. If necessary, power on the second node, but do not allow it to boot. If necessary, halt the system to continue with OBP Monitor tasks.
    2. Verify that the scsi-initiator-id for each host adapter on the second node is set to 7.

      Use the show-disks command to find the paths to the host adapters that are connected to these enclosures. Select each host adapter's device tree node, and display the node's properties to confirm that the scsi-initiator-id for each host adapter is set to 7.

      {0} ok cd /pci@6,4000/pci@3/scsi@5
      {0} ok .properties
      scsi-initiator-id        00000007 
      ...
  7. Install the operating system software.
    1. Install the Oracle Solaris operating system.

      See your Oracle Solaris Cluster installation documentation for instructions.

    2. Install any unbundled drivers required by your cluster configuration.

      See the host adapter documentation for driver installation procedures.

    3. Apply any required Oracle Solaris patches.

      The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets. Oracle Enterprise Manager Ops Center 2.5 helps improve operational efficiency and ensures that you have the latest software patches for your software. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.

      Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.

      If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

      For required firmware, see the Sun System Handbook.

  8. If you are using Sun StorEdge 3310 JBOD arrays with the Sun StorEdge PCI/PCI-X Dual Ultra320 SCSI host adapter, you must throttle down the speed of the adapter to U160. Add the following entry to your /kernel/drv/mpt.conf file:
    scsi-options=0x1ff8;
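    The resulting file can be as simple as the following sketch. The comment decodes the mask on the assumption that the option-bit values match the scsi_options settings documented for Solaris host adapter drivers; verify them against the mpt(7D) man page before relying on them.

      # Sketch of /kernel/drv/mpt.conf for a StorEdge 3310 JBOD configuration.
      # 0x1ff8 enables the standard options up through U160 operation but leaves
      # the Ultra320 speed bit clear, so the adapter negotiates no faster than U160.
      scsi-options=0x1ff8;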
  9. Install the Oracle Solaris Cluster software and volume management software.

    For software installation procedures, see the Oracle Solaris Cluster installation documentation.

  10. If you are using Solaris Volume Manager as your volume manager, save the disk-partitioning information.

    Caution

    Caution - Do not save disk-partitioning information in /tmp because you will lose this file when you reboot. Instead, save this file in /usr/tmp.


    You might need disk-partitioning information if you replace a failed disk drive in the future.
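    One hedged way to capture this information, assuming a shared disk named c1t0d0 (substitute your own device names and repeat for each disk), is to save the disk's VTOC with the prtvtoc command:

      # prtvtoc /dev/rdsk/c1t0d0s2 > /usr/tmp/c1t0d0.vtoc

    If you later replace the disk, you can reapply the saved partitioning with the fmthard -s command.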

x86: How to Install a Storage Array in a New x86 Based Cluster

This procedure assumes that you are installing one or more storage arrays at initial installation of an x86 based cluster. If you are adding arrays to a running cluster, use the procedure in x86: How to Add a Storage Array to an Existing x86 Based Cluster.

Multihost storage in clusters uses the multi-initiator capability of the small computer system interface (SCSI) specification. When installing arrays in your cluster, you must ensure that each device in each SCSI chain has a unique SCSI address. The procedure that follows has specific instructions for achieving this requirement. For additional information about multi-initiator capability, see Multi-Initiator SCSI in Oracle Solaris Cluster Concepts Guide.


Note - On x86 based systems, setting SCSI initiator IDs is a two-step process. You first set the IDs in the BIOS and then in a configuration file. Until both steps are complete, the IDs are not set and the systems might not boot or the nodes might panic. Set the IDs on one node at a time, as instructed in the procedure.


Before You Begin

Before performing this procedure, ensure that you have met the following prerequisites and assumptions.

  1. Verify that the storage arrays are set up correctly for your planned configuration.
  2. If necessary, install the host adapters in the nodes that you plan to connect to the storage array.

    If possible, put each host adapter on a separate bus to ensure maximum redundancy.

  3. Power on one node.
  4. On the first node, ensure that each device in the SCSI chain has a unique SCSI address by configuring the initiator IDs in the BIOS.

    To avoid SCSI-chain conflicts, perform the following steps.


    Note - Perform these steps on only one cluster node.


    1. Access your BIOS settings.

      To access the BIOS on the V40z server with SG-XCPI2SCSI-LM320 Sun StorEdge PCI/PCI-X Dual Ultra320 SCSI Host Adapter cards, press Ctrl-C when prompted during reboot.

    2. Verify that the internal controller is set to the default value of 7.
    3. Set the new host adapter scsi-initiator-id to 6.
  5. Cable the storage arrays to all nodes.

    For cabling diagrams, see Chapter 3, Cabling Diagrams.


    Note - Ensure that the bus length does not exceed SCSI-bus-length specifications. This measurement includes the cables to both nodes, as well as the bus length that is internal to each storage array, node, and the host adapter. For more information about SCSI-bus-length limitations, see your hardware documentation.


  6. Connect the AC or DC power cords for each storage array to a different power source.

    If your storage array has redundant power inputs, connect each power cord from the storage array to a different power source. If the arrays are not mirrors of each other, the arrays can share power sources.

  7. Power on the storage array.

    For the procedure about powering on a storage device, see the service manual that shipped with your storage device.

  8. Install the operating system software on the node for which you configured the BIOS in Step 4.
    1. Install the Solaris operating system.

      See your Oracle Solaris Cluster installation documentation for instructions.

    2. Install any unbundled drivers required by your cluster configuration.

      For driver installation procedures, see the host adapter documentation.

    3. Apply any required Solaris patches.

      The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets. Oracle Enterprise Manager Ops Center 2.5 helps improve operational efficiency and ensures that you have the latest software patches for your software. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.

      Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.

      If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

      For required firmware, see the Sun System Handbook.

  9. On the node for which you configured the BIOS in Step 4, finish configuring the SCSI initiator IDs.
    1. Get the information required for the mpt.conf file.

      To create the mpt.conf entries, you need the path to your boot disk and the SCSI unit address.

      To find this information on X4000 series servers with SG-XCPI2SCSI-LM320 Sun StorEdge PCI/PCI-X Dual Ultra320 SCSI Host Adapter cards, use the following command:

      # echo | format
      Searching for disks...done
      
      AVAILABLE DISK SELECTIONS:
        0. c1t0d0 <DEFAULT cyl 8938 alt 2 hd 255 sec 63>
                  /pci@0,0/pci1022,7450@a/pci17c2,10@4/sd@0,0
    2. Create a /kernel/drv/mpt.conf file.
    3. Include the following entries:
      scsi-initiator-id=6;
      name="mpt" parent="/pci@0,0/pci1022,7450@a"
             unit-address="4"
             scsi-initiator-id=7;

      Note - These entries are based on the foregoing example output of the format command. Your entries must include the values output from your format command. Also, note that the parent and unit-address values are strings. The quotation marks are required to form correct values in the mpt.conf file.


      The entries in this example have the following meanings:

      scsi-initiator-id=6;

      Matches your setting in the BIOS for the host adapter ports.

      name="mpt";

      Indicates that these settings are for the mpt driver.

      parent

      Specifies the path to your local drive, which you discovered in Step a.

      unit-address

      Specifies the unit address of the local drive. In the example in Step a, this information derives from the pci17c2,10@4 portion of the output.

      scsi-initiator-id=7;

      Sets your node's local drive back to the default SCSI setting of 7.

    4. Reboot the node to activate the mpt.conf file changes.

      For the procedure about booting cluster nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.
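      For example, assuming the node can simply be rebooted at this point, one way to apply the change is a reconfiguration reboot:

      # reboot -- -r

      After the node comes back up, you can inspect the prtconf -v output for the scsi-initiator-id properties to confirm that the shared host adapter ports report 6 and the local drive's controller reports 7. The exact output format varies by platform and driver.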

  10. Ensure that each LUN has an associated entry in the /kernel/drv/sd.conf file.

    For more information, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

  11. To make the changes to the /kernel/drv/sd.conf file active, run the update_drv -f sd command and then the devfsadm command.
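    As a hedged sketch of the two preceding steps, with hypothetical target and LUN numbers (use the values that match your array configuration), additional LUNs receive entries such as the following in /kernel/drv/sd.conf:

      name="sd" class="scsi" target=2 lun=1;
      name="sd" class="scsi" target=2 lun=2;

    Then reload the sd driver configuration and rebuild the device nodes:

      # update_drv -f sd
      # devfsadm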

  12. Power on the remaining nodes and install the operating system software on them.
    1. Install the Oracle Solaris operating system.

      See your Oracle Solaris Cluster installation documentation for instructions.

    2. Install any unbundled drivers required by your cluster configuration.

      For driver installation procedures, see the host adapter documentation.

    3. Apply any required Oracle Solaris patches.

      The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets. Oracle Enterprise Manager Ops Center 2.5 helps improve operational efficiency and ensures that you have the latest software patches for your software. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.

      Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.

      If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

      For required firmware, see the Sun System Handbook.

  13. Ensure that each LUN has an associated entry in the /kernel/drv/sd.conf file.

    For more information, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

  14. To make the changes to the /kernel/drv/sd.conf file active, run the update_drv -f sd command and then the devfsadm command.

  15. If you are using Sun StorEdge 3310 JBOD arrays with the Sun StorEdge PCI/PCI-X Dual Ultra320 SCSI host adapter, you must throttle down the speed of the adapter to U160. Add the following entry to your /kernel/drv/mpt.conf file on each node:
    scsi-options=0x1ff8;
  16. Ensure that each LUN has an associated entry in the /kernel/drv/sd.conf file.

    For more information, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

  17. To make the changes to the /kernel/drv/sd.conf file active, run the update_drv -f sd command and then the devfsadm command.

  18. Install the Oracle Solaris Cluster software and volume management software on each node.

    For software installation procedures, see the Oracle Solaris Cluster installation documentation.

Example 1-1 x86: A Completed mpt.conf File When Using StorEdge 3320 Arrays

The following completed mpt.conf file shows all entries for a StorEdge 3320 array configuration like the one in the example earlier in this procedure:

# more /kernel/drv/mpt.conf
scsi-initiator-id=6;
name="mpt" parent="/pci@0,0/pci1022,7450@a"
       unit-address="4"
       scsi-initiator-id=7;

Example 1-2 x86: A Completed mpt.conf File When Using StorEdge 3310 Arrays

The following completed mpt.conf file shows all entries for a StorEdge 3310 array configuration like the one in the example earlier in this procedure, including the scsi-options entry that throttles the adapter to U160:

# more /kernel/drv/mpt.conf
scsi-initiator-id=6;
name="mpt" parent="/pci@0,0/pci1022,7450@a"
       unit-address="4"
       scsi-initiator-id=7;
scsi-options=0x1ff8;

Next Steps

If needed, finish setting up your storage arrays, including partitions. If you are using Solaris Volume Manager as your volume manager, save the disk-partitioning information. You might need disk-partitioning information if you replace a failed disk drive in the future.


Caution

Caution - Do not save disk-partitioning information in /tmp because you will lose this file when you reboot. Instead, save this file in /usr/tmp.


SPARC: How to Add a Storage Array to an Existing SPARC Based Cluster

This procedure contains instructions for adding storage arrays to an operational cluster. If you need to install storage arrays to a new cluster, use the procedure in SPARC: How to Install a Storage Array in a New SPARC Based Cluster or x86: How to Install a Storage Array in a New x86 Based Cluster.

Adding a storage array enables you to alter your storage pool.

This procedure defines Node A as the node with which you begin working. Node B is the remaining node.

Before You Begin

This procedure relies on the following prerequisites and assumptions.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify role-based access control (RBAC) authorization.

  1. Install all software that is specific to the storage array or to any new host adapters.

    Install the software and patches to all nodes that will connect to the new storage array.

    The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets. Oracle Enterprise Manager Ops Center 2.5 helps improve operational efficiency and ensures that you have the latest software patches for your software. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.

    Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For required firmware, see the Sun System Handbook.

  2. Move all resource groups and device groups off Node A.
    # clnode evacuate NodeA
  3. If you need to install host adapters in Node A, perform the following steps.
    1. Shut down and power off Node A.

      For the procedure about how to shut down and power off a node, see the Oracle Solaris Cluster system administration documentation.

    2. Install host adapters in Node A.

      For the procedure about how to install host adapters, see your host adapters and server documentation.

  4. Connect the storage array to the host adapters on Node A.
    • If necessary, terminate the ports that will connect to Node B.

      • If you have a Netra D130 array, always terminate the ports that connect to Node B.

      • If you have a StorEdge 3310 or 3320 SCSI array, terminate the ports that connect to Node B when using a split-bus configuration.

    • If your storage array is a StorEdge 3310 or 3320 SCSI array, do not power on the storage array until the storage array is cabled to Node A.

    For cabling diagrams, see Chapter 3, Cabling Diagrams.


    Note - Ensure that the bus length does not exceed SCSI bus-length specifications. This measurement includes the cables to both nodes, as well as the bus length that is internal to each storage array, node, and the host adapter. For more information about SCSI bus-length limitations, see your hardware documentation.


  5. Ensure that each device in the SCSI chain has a unique SCSI address.

    To avoid SCSI-chain conflicts, the following steps instruct you to reserve SCSI address 7 for one host adapter in the SCSI chain and change the other host adapter's global scsi-initiator-id to an available SCSI address. Then the steps instruct you to change the scsi-initiator-id for local devices back to 7.


    Note - A slot in the storage array might not be in use. However, do not set the scsi-initiator-id to a SCSI address for that disk slot. This precaution minimizes future complications if you install additional disk drives.


    1. If necessary, power on the storage devices.

      For the procedure about powering on a storage device, see the service manual that shipped with your storage device.


      Note - If necessary, halt the nodes so that you can perform OpenBoot PROM (OBP) Monitor tasks at the ok prompt.


    2. If necessary, power on Node A, but do not allow it to boot. If necessary, halt the system to continue with OBP Monitor tasks.
    3. Set the scsi-initiator-id for Node A to 6.
      {1} ok setenv scsi-initiator-id 6
      scsi-initiator-id = 6
    4. Find the paths to the host adapters that connect to the local disk drives.
      {0} ok show-disks

      Use this information to change the SCSI addresses in the nvramrc script. Do not include the /sd directories in the device paths.

    5. Edit the nvramrc script to set the scsi-initiator-id for the local devices on the first node to 7.

      For a full list of commands, see the OpenBoot 2.x Command Reference Manual.


      Caution

      Caution - Insert exactly one space after the first double quote and before scsi-initiator-id.


      {0} ok nvedit
       0: probe-all 
       1: cd /pci@1f,4000/scsi@2
       2: 7 encode-int " scsi-initiator-id" property
       3: device-end 
       4: cd /pci@1f,4000/scsi@3 
       5: 7 encode-int " scsi-initiator-id" property 
       6: device-end
       7: install-console
       8: banner[Control C] 
      {0} ok
    6. Store the changes.

      The changes you make through the nvedit command are recorded on a temporary copy of the nvramrc script. You can continue to edit this copy without risk. After you complete your edits, save the changes. If you are not sure about the changes, discard them.

      • To store the changes, type the following command:
        {0} ok nvstore
        {1} ok 
      • To discard the changes, type the following command:
        {0} ok nvquit
        {1} ok 
    7. Verify the contents of the nvramrc script that you created, as shown in the following example.

      If the contents of the nvramrc script are incorrect, use the nvedit command to make corrections.

      {0} ok printenv nvramrc
      nvramrc =             probe-all
                            cd /pci@1f,4000/scsi@2 
                            7 " scsi-initiator-id" integer-property
                            device-end 
                            cd /pci@1f,4000/scsi@3
                            7 " scsi-initiator-id" integer-property
                            device-end 
                            install-console
                            banner
      {1} ok
    8. Instruct the OpenBoot PROM (OBP) Monitor to use the nvramrc script, as shown in the following example.
      {0} ok setenv use-nvramrc? true
      use-nvramrc? = true
      {1} ok 
  6. To create the new Solaris device files and links, perform a reconfiguration boot on Node A by adding -r to your boot instruction.
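    For example, if Node A is halted at the OpenBoot PROM ok prompt, the reconfiguration boot is typically:

      {0} ok boot -r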
  7. If necessary, format and label the disks.
  8. On Node A, verify that the device IDs have been assigned to the disk drives in the storage array.
    # cldevice list -n NodeA -v
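    The output resembles the following hypothetical listing; the DID instance numbers and controller paths will differ in your configuration:

      DID Device          Full Device Path
      ----------          ----------------
      d4                  NodeA:/dev/rdsk/c1t0d0
      d5                  NodeA:/dev/rdsk/c1t1d0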
  9. Move all resource groups and device groups off Node B.
    # clnode evacuate NodeB
  10. If you need to install host adapters in Node B, perform the following steps.
    1. Shut down Node B.

      For the procedure about how to shut down and power off a node, see the Oracle Solaris Cluster system administration documentation.

    2. Install the host adapters in Node B.

      For the procedure about how to install a host adapter, see your host adapter and server documentation.

    3. Power on and boot Node B.

      For the procedure about booting cluster nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.

  11. Connect the storage array to the host adapters on Node B.

    If you added port terminators in Step 4, remove the terminators and connect the storage array to Node B.

    For cabling diagrams, see Chapter 3, Cabling Diagrams.


    Note - Ensure that the bus length does not exceed bus-length specifications. This measurement includes the cables to both nodes, as well as the bus length that is internal to each storage array, node, and the host adapter. For more information about bus-length limitations, see your hardware documentation.


  12. Verify that the scsi-initiator-id is set correctly on the second node.
    1. If necessary, power on the second node, but do not allow it to boot. If necessary, halt the system to continue with OBP Monitor tasks.
    2. Verify that the scsi-initiator-id for each host adapter on the second node is set to 7.

      Use the show-disks command to find the paths to the host adapters that are connected to these enclosures. Select each host adapter's device tree node, and display the node's properties to confirm that the scsi-initiator-id for each host adapter is set to 7.

      {0} ok cd /pci@6,4000/pci@3/scsi@5
      {0} ok .properties
      scsi-initiator-id        00000007 
      ...
  13. To create the new Oracle Solaris device files and links, perform a reconfiguration boot by adding -r to your boot instruction.

    For the procedure about booting cluster nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.

  14. On Node B, verify that the device IDs have been assigned to the disk drives in the storage array.
    # cldevice list -n NodeB -v
  15. Perform volume management administration to add the disk drives in the storage array to the volume management configuration.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

Next Steps

If needed, finish setting up your storage arrays, including partitions. If you are using Solaris Volume Manager as your volume manager, save the disk-partitioning information. You might need disk-partitioning information if you replace a failed disk drive in the future.


Caution

Caution - Do not save disk-partitioning information in /tmp because you will lose this file when you reboot. Instead, save this file in /usr/tmp.


x86: How to Add a Storage Array to an Existing x86 Based Cluster

This procedure contains instructions for adding storage arrays to an operational cluster. If you need to install storage arrays to a new cluster, use the procedure in SPARC: How to Install a Storage Array in a New SPARC Based Cluster or x86: How to Install a Storage Array in a New x86 Based Cluster.

Adding a storage array enables you to alter your storage pool.

This procedure defines Node A as the node with which you begin working. Node B is the remaining node.

Before You Begin

This procedure relies on the following prerequisites and assumptions.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  1. Install all software that is specific to the storage array or to any new host adapters.

    Install the software and patches to all nodes that will connect to the new storage array.

    The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets. Oracle Enterprise Manager Ops Center 2.5 helps improve operational efficiency and ensures that you have the latest software patches for your software. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.

    Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For required firmware, see the Sun System Handbook.

  2. Move all resource groups and device groups off Node A.
    # clnode evacuate NodeA
  3. If you need to install host adapters in Node A, perform the following steps.
    1. Shut down and power off Node A.

      For the procedure about how to shut down and power off a node, see the Oracle Solaris Cluster system administration documentation.

    2. Install host adapters in Node A.

      For the procedure about how to install host adapters, see your host adapters and server documentation.

  4. If you installed host adapters in Step 3, or if you intend to use previously unconfigured host adapters, ensure that each device in the SCSI chain has a unique SCSI address by configuring the scsi-initiator-id in the BIOS.

    Note - Perform these steps on one cluster node, the node on which you have configured SCSI initiator IDs for the cluster in the past.


    1. Access your host adapter's BIOS settings.

      To access the BIOS on the V40z server with X4422A Sun Dual Gigabit Ethernet and Dual SCSI Adapter cards, press Ctrl-C when prompted during reboot.

    2. Verify that the internal controller is set to the default value of 7.
    3. Select a unique value for each of the new host adapter's ports.
    4. Set each controller's scsi-initiator-id to that value.
  5. Connect the storage array to the host adapters on Node A.
    • If necessary, terminate the ports that will connect to Node B.

      • If you have a Netra D130 array, always terminate the ports that connect to Node B.

      • If you have a StorEdge 3310 or 3320 SCSI array, terminate the ports that connect to Node B when using a split-bus configuration.

    • If your storage array is a StorEdge 3310 or 3320 SCSI array, do not power on the storage array until the storage array is cabled to Node A.

    For cabling diagrams, see Chapter 3, Cabling Diagrams.


    Note - Ensure that the bus length does not exceed SCSI bus-length specifications. This measurement includes the cables to both nodes, as well as the bus length that is internal to each storage array, node, and the host adapter. For more information about SCSI bus-length limitations, see your hardware documentation.


  6. If you installed host adapters in Step 3, or if you intend to use previously unconfigured host adapters, finish configuring the SCSI initiator IDs on the same node on which you configured the BIOS in Step 4.
    1. Get the information required for the mpt.conf file.

      To create the mpt.conf entries, you need the path to your boot disk and the SCSI unit address.

      To find this information on X4000 series servers with SG-XCPI2SCSI-LM320 Sun StorEdge PCI/PCI-X Dual Ultra320 SCSI Host Adapter cards, use the following command:

      # echo | format
      Searching for disks...done
      
      AVAILABLE DISK SELECTIONS:
        0. c1t0d0 <DEFAULT cyl 8938 alt 2 hd 255 sec 63>
                  /pci@0,0/pci1022,7450@a/pci17c2,10@4/sd@0,0
    2. Create or edit your /kernel/drv/mpt.conf file.
    3. Include the following entries:
      scsi-initiator-id=6;
      name="mpt" parent="/pci@0,0/pci1022,7450@a"
             unit-address="4"
             scsi-initiator-id=7;

      Note - These entries are based on the foregoing example output of the format command. Your entries must include the values output from your format command. Also, note that the parent and unit-address values are strings. The quotation marks are required to form correct values in the mpt.conf file.


      The entries in this example have the following meanings:

      scsi-initiator-id=6;

      Matches your setting in the BIOS for the host adapter ports.

      name="mpt"

      Indicates that these settings are for the mpt driver.

      parent

      Is set to the path to your local drive.

      unit-address

      Specifies the unit address of the local drive. In the example in Step a, this information derives from the pci17c2,10@4 portion of the output.

      scsi-initiator-id=7;

      Sets your node's local drive back to the default SCSI setting of 7.

  7. To create the new Solaris device files and links, perform a reconfiguration boot on Node A by adding -r to your boot instruction.

    For the procedure about booting cluster nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.

  8. If necessary, format and label the disks.
  9. On Node A, verify that the device IDs have been assigned to the disk drives in the storage array.
    # cldevice show -v
  10. Move all resource groups and device groups off Node B.
    # clnode evacuate NodeB
  11. If you need to install host adapters in Node B, perform the following steps.
    1. Shut down Node B.

      For the procedure about how to shut down and power off a node, see the Oracle Solaris Cluster system administration documentation.

    2. Install the host adapters in Node B.

      For the procedure about how to install a host adapter, see your host adapter and server documentation.

    3. Power on and boot Node B.

      For the procedure about booting cluster nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.

  12. Connect the storage array to the host adapters on Node B.

    If you added port terminators in Step 5, remove the terminators and connect the storage array to Node B.

    For cabling diagrams, see Chapter 3, Cabling Diagrams.


    Note - Ensure that the bus length does not exceed bus-length specifications. This measurement includes the cables to both nodes, as well as the bus length that is internal to each storage array, node, and the host adapter. For more information about bus-length limitations, see your hardware documentation.


  13. Verify that the scsi-initiator-id is set correctly on the second node.
    1. Access your BIOS settings.

      To access the BIOS on X4000 series servers with SG-XCPI2SCSI-LM320 Sun StorEdge PCI/PCI-X Dual Ultra320 SCSI Host Adapter cards, press Ctrl-C when prompted during reboot.

    2. Verify that the scsi-initiator-id for each host adapter on the second node is set to 7.
  14. To create the new Solaris device files and links, perform a reconfiguration boot by adding -r to your boot instruction.
  15. On Node B, verify that the device IDs have been assigned to the disk drives in the storage array.
    # cldevice show -v
  16. Perform volume management administration to add the disk drives in the storage array to the volume management configuration.

    For more information, see your Solaris Volume Manager documentation.

Next Steps

If needed, finish setting up your storage arrays, including partitions. If you are using Solaris Volume Manager as your volume manager, save the disk-partitioning information. You might need disk-partitioning information if you replace a failed disk drive in the future.


Caution

Caution - Do not save disk-partitioning information in /tmp because you will lose this file when you reboot. Instead, save this file in /usr/tmp.