Sun Cluster 3.0-3.1 With SCSI JBOD Storage Device Manual for Solaris OS

Chapter 1 Installing a SCSI JBOD Storage Device

This chapter describes how to install SCSI JBOD storage devices in a SunTM Cluster environment.

The procedures in this chapter apply to the following SCSI JBOD storage devices:

Installing a Storage Array

This section contains procedures for installing storage arrays in new clusters and adding them to existing clusters.

If your storage array uses single-ended SCSI specifications, ensure that your bus lengths comply with the following guidelines.

The single-ended SCSI specifications base the maximum bus length on the bus speed and the number of devices. The bus lengths in the following table outline a typical implementation of the single-ended SCSI specifications for Sun hardware. For details, see your host adapter and storage documentation.

Table 1–1 Typical Single-Ended, Wide SCSI Bus Lengths

Bus Speed                             Number of Devices     Maximum Bus Length
UltraSCSI (40 MB/s: FAST-20, Wide)    Up to 4               3 meters
UltraSCSI (40 MB/s: FAST-20, Wide)    Up to 8               1.5 meters
FastSCSI (20 MB/s: FAST-10, Wide)     Up to 16              6 meters

Devices include both targets and initiators.

If you exceed these specifications, you might experience SCSI errors. The host adapter or the driver might recover from these errors by retrying the request. If this action does not succeed, the host adapter or the driver might recover by renegotiating to a less demanding mode of operation. In some cases, the host adapter or the driver might not be able to recover from these errors, and I/O might fail. You experience delays in I/O if the host adapter or the driver needs to perform this recovery.

If your configuration uses UltraSCSI and requires the 6-meter bus length, use the host adapter driver's scsi-options property to limit the speed negotiation to FastSCSI operation. Use the following /kernel/drv/glm.conf file as an example to set the scsi-options property.


name="glm" parent="/pci@1f,4000" unit-address="2" scsi-options=0x3f8;

This example uses specific hardware. Change this example to accommodate the hardware in your configuration. In this example, the scsi-options property limits the host adapter's speed negotiation to FastSCSI (wide) operation.

For more information, see the isp(7D), glm(7D), or other host adapter driver man pages, and the documentation at http://sunsolve.ebay.sun.com.
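
After the node reboots with this file in place, you can check whether the driver picked up the property. A minimal check, assuming the property was set in the driver's .conf file as shown above, is to search the properties that prtconf reports:

# prtconf -v | grep scsi-options

If the property does not appear, verify that the name, parent, and unit-address values in the .conf file match your hardware paths.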

Table 1–2 Task Map: Installing Storage Arrays

Task: Install a storage array in a new cluster, before the OS and Sun Cluster software are installed.
Information:
  SPARC: How to Install a Storage Array in a New SPARC Based Cluster
  x86: How to Install a Storage Array in a New X86 Based Cluster

Task: Add a storage array to an existing cluster.
Information:
  SPARC: How to Add a Storage Array to an Existing SPARC Based Cluster
  x86: How to Add a Storage Array to an Existing X86 Based Cluster

Setting SCSI Initiator IDs

When your cluster configuration contains shared SCSI JBOD arrays, you must ensure that the two nodes that are connected to a shared SCSI JBOD array have unique SCSI initiator IDs.

The installation procedures in this section describe the steps for setting SCSI initiator IDs in a two-node cluster. If your cluster has additional nodes connected to shared SCSI JBOD arrays, apply these steps as appropriate. Some topologies, for example, clustered pairs, use these procedures unchanged. Others, for example, the N+1 topology, might require minor changes.


x86 only –

On x86 based systems, setting SCSI initiator IDs is a two-step process. You first set the IDs in the BIOS and then in a configuration file. Until both steps are complete, the IDs are not set and the systems might be unable to boot or the nodes might panic. Set the IDs on one node at a time, as instructed in the procedure.


SPARC: How to Install a Storage Array in a New SPARC Based Cluster

This procedure assumes that you are installing one or more storage arrays at initial installation of a SPARC based cluster. If you are adding arrays to a running cluster, use the procedure in SPARC: How to Add a Storage Array to an Existing SPARC Based Cluster.

Multihost storage in clusters uses the multi-initiator capability of the small computer system interface (SCSI) specification. When installing arrays in your cluster, you must ensure that each device in each SCSI chain has a unique SCSI address. The procedure that follows has specific instructions for achieving this requirement. For additional information about multi-initiator capability, see Multi-Initiator SCSI in Sun Cluster Concepts Guide for Solaris OS.


Note –

This procedure uses an updated method for setting the scsi-initiator-id. The method that was published in earlier documentation is still applicable. However, if your cluster configuration uses a Sun StorEdge PCI Dual Ultra3 SCSI host adapter to connect to any other shared storage, you must update your nvramrc script and set the scsi-initiator-id by following the steps in this procedure.


Before You Begin

Before you perform this procedure, ensure that you have met the following prerequisites and assumptions.

Steps
  1. Verify that the storage arrays are set up correctly for your planned configuration.

  2. If necessary, install the host adapters in the nodes that you plan to connect to the storage array.

    If possible, put each host adapter on a separate I/O board to ensure maximum redundancy.

  3. Cable the storage arrays.

    For cabling diagrams, see Chapter 3, Cabling Diagrams.


    Note –

    Ensure that the SCSI bus length does not exceed bus-length specifications. This measurement includes the cables to both nodes, as well as the bus length that is internal to each storage array, node, and the host adapter. For more information about bus-length limitations, see your hardware documentation.


  4. Connect the AC or DC power cords for each storage array to a different power source.

    If your storage array has redundant power inputs, connect each power cord from the storage array to a different power source. If the arrays are not mirrors of each other, the arrays can share power sources.

  5. Ensure that each device in the SCSI chain has a unique SCSI address.

    To avoid SCSI-chain conflicts, the following steps instruct you to reserve SCSI address 7 for one host adapter in the SCSI chain and change the other host adapter's global scsi-initiator-id to an available SCSI address. Then the steps instruct you to change the scsi-initiator-id for local devices back to 7.
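
    Before you change any IDs, you can display a node's current global scsi-initiator-id at the ok prompt with the printenv command. On most Sun systems the default value is 7.

      {0} ok printenv scsi-initiator-id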


    Note –

    A slot in the storage array might not be in use. However, do not set the scsi-initiator-id to a SCSI address for that disk slot. This precaution minimizes future complications if you install additional disk drives.


    1. If necessary, power on the storage devices.


      Note –

      If necessary, halt the nodes so that you can perform OpenBootTM PROM (OBP) Monitor tasks at the ok prompt.


      For the procedure about powering on a storage device, see the service manual that shipped with your storage device.

    2. If necessary, power on a node, but do not allow it to boot. If necessary, halt the system to continue with OBP Monitor tasks.

    3. Set the scsi-initiator-id for one node to 6.


      {1} ok setenv scsi-initiator-id 6
      scsi-initiator-id = 6
    4. Find the paths to the host adapters that connect to the local disk drives.


      {0} ok show-disks
      

      Use this information to change the SCSI addresses in the nvramrc script. Do not include the /sd directories in the device paths.
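
      The output of show-disks depends on your hardware. A hypothetical listing for the adapters in this example might look like the following; you would record /pci@1f,4000/scsi@2 and /pci@1f,4000/scsi@3 and drop the trailing /sd component.

      {0} ok show-disks
      a) /pci@1f,4000/scsi@2/sd
      b) /pci@1f,4000/scsi@3/sd
      q) NO SELECTION
      Enter Selection, q to quit: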

    5. Edit the nvramrc script to set the scsi-initiator-id for the local devices on the first node to 7.

      For a full list of commands, see the OpenBoot 2.x Command Reference Manual.


      Caution –

      Insert exactly one space after the first double quote and before scsi-initiator-id.



      {0} ok nvedit
       0: probe-all 
       1: cd /pci@1f,4000/scsi@2
       2: 7 encode-int " scsi-initiator-id" property
       3: device-end 
       4: cd /pci@1f,4000/scsi@3 
       5: 7 encode-int " scsi-initiator-id" property 
       6: device-end
       7: install-console
       8: banner[Control C] 
      {0} ok
    6. Store the changes.

      The changes you make through the nvedit command are recorded on a temporary copy of the nvramrc script. You can continue to edit this copy without risk. After you complete your edits, save the changes. If you are not sure about the changes, discard them.

      • To store the changes, type the following command:


        {0} ok nvstore
        {1} ok 
      • To discard the changes, type the following command:


        {0} ok nvquit
        {1} ok 
    7. Verify the contents of the nvramrc script that you created, as shown in the following example.

      If the contents of the nvramrc script are incorrect, use the nvedit command to make corrections.


      {0} ok printenv nvramrc
      nvramrc =             probe-all
                            cd /pci@1f,4000/scsi@2 
                            7 " scsi-initiator-id" integer-property
                            device-end 
                            cd /pci@1f,4000/scsi@3
                            7 " scsi-initiator-id" integer-property
                            device-end 
                            install-console
                            banner
      {1} ok
    8. Instruct the OpenBoot PROM (OBP) Monitor to use the nvramrc script, as shown in the following example.


      {0} ok setenv use-nvramrc? true
      use-nvramrc? = true
      {1} ok 
  6. Verify that the scsi-initiator-id is set correctly on the second node.

    1. If necessary, power on the second node, but do not allow it to boot. If necessary, halt the system to continue with OBP Monitor tasks.

    2. Verify that the scsi-initiator-id for each host adapter on the second node is set to 7.

      Use the show-disks command to find the paths to the host adapters that are connected to these enclosures. Select each host adapter's device tree node, and display the node's properties to confirm that the scsi-initiator-id for each host adapter is set to 7.


      {0} ok cd /pci@6,4000/pci@3/scsi@5
      {0} ok .properties
      scsi-initiator-id        00000007 
      ...
  7. Install the operating system software.

    1. Install the Solaris operating system.

      See your Sun Cluster installation documentation for instructions.

    2. Install any unbundled drivers required by your cluster configuration.

      For driver installation procedures, see the host adapter documentation.

    3. Apply any required Solaris patches.

      PatchPro is a patch-management tool that eases the selection and download of patches required for installation or maintenance of Sun Cluster software. PatchPro provides an Interactive Mode tool especially for Sun Cluster. The Interactive Tool makes the installation of patches easier. PatchPro's Expert Mode tool helps you to maintain your configuration with the latest set of patches. Expert Mode is especially useful for obtaining all of the latest patches, not just the high availability and security patches.

      To access the PatchPro tool for Sun Cluster software, go to http://www.sun.com/PatchPro/, click Sun Cluster, then choose either Interactive Mode or Expert Mode. Follow the instructions in the PatchPro tool to describe your cluster configuration and download the patches.

      For third-party firmware patches, see the SunSolveSM Online site at http://sunsolve.ebay.sun.com.

  8. If you are using Sun StorEdge 3310 JBOD arrays with the Sun StorEdge PCI/PCI-X Dual Ultra320 SCSI host adapter, you must throttle down the speed of the adapter to U160. Add the following entry to your /kernel/drv/mpt.conf file:


    scsi-options=0x1ff8;
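
    The mpt driver reads this file only when the driver attaches, so the new setting takes effect at the next boot. One way to reboot with device reconfiguration, for example, is:

    # reboot -- -r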
  9. Install the Sun Cluster software and volume management software.

    For software installation procedures, see the Sun Cluster installation documentation.

  10. If you are using Solstice DiskSuiteTM/Solaris Volume Manager as your volume manager, save the disk-partitioning information.


    Caution –

    Do not save disk-partitioning information in /tmp because you will lose this file when you reboot. Instead, save this file in /usr/tmp.


    You might need disk-partitioning information if you replace a failed disk drive in the future.
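
    For example, you can capture the partition table of each disk with the prtvtoc command and save the output under /usr/tmp, as the preceding caution advises. The device name below is a placeholder; repeat the command for each of your disks. The second command shows how such a file could later be written to a replacement disk with fmthard.

    # prtvtoc /dev/rdsk/c1t0d0s2 > /usr/tmp/c1t0d0s2.vtoc
    # fmthard -s /usr/tmp/c1t0d0s2.vtoc /dev/rdsk/c1t0d0s2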

x86: How to Install a Storage Array in a New X86 Based Cluster

This procedure assumes that you are installing one or more storage arrays at initial installation of an x86 based cluster. If you are adding arrays to a running cluster, use the procedure in x86: How to Add a Storage Array to an Existing X86 Based Cluster.

Multi-host storage in clusters uses the multi-initiator capability of the small computer system interface (SCSI) specification. When installing arrays in your cluster, you must ensure that each device in each SCSI chain has a unique SCSI address. The procedure that follows has specific instructions for achieving this requirement. For additional information about multi-initiator capability, see Multi-Initiator SCSI in Sun Cluster Concepts Guide for Solaris OS.


Note –

On x86 based systems, setting SCSI initiator IDs is a two-step process. You first set the IDs in the BIOS and then in a configuration file. Until both steps are complete, the IDs are not set and the systems might not boot or the nodes might panic. Set the IDs on one node at a time, as instructed in the procedure.


Before You Begin

Before you perform this procedure, ensure that you have met the following prerequisites and assumptions.

Steps
  1. Verify that the storage arrays are set up correctly for your planned configuration.

  2. If necessary, install the host adapters in the nodes that you plan to connect to the storage array.

    If possible, put each host adapter on a separate bus to ensure maximum redundancy.

  3. Power on one node.

  4. On the first node, ensure that each device in the SCSI chain has a unique SCSI address by configuring the initiator IDs in the BIOS.

    To avoid SCSI-chain conflicts, perform the following steps.


    Note –

    Perform these steps on only one cluster node.


    1. Access your BIOS settings.

      To access the BIOS on the V40z server with SG-XCPI2SCSI-LM320 Sun StorEdge PCI/PCI-X Dual Ultra320 SCSI Host Adapter cards, press Ctrl-C when prompted during reboot.

    2. Verify that the internal controller is set to the default value of 7.

    3. Set the new host adapter scsi-initiator-id to 6.

  5. Cable the storage arrays to all nodes.

    For cabling diagrams, see Chapter 3, Cabling Diagrams.


    Note –

    Ensure that the bus length does not exceed SCSI-bus-length specifications. This measurement includes the cables to both nodes, as well as the bus length that is internal to each storage array, node, and the host adapter. For more information about SCSI-bus-length limitations, see your hardware documentation.


  6. Connect the AC or DC power cords for each storage array to a different power source.

    If your storage array has redundant power inputs, connect each power cord from the storage array to a different power source. If the arrays are not mirrors of each other, the arrays can share power sources.

  7. Power on the storage array.

    For the procedure about powering on a storage device, see the service manual that shipped with your storage device.

  8. Install the operating system software on the node for which you configured the BIOS in Step 4.

    1. Install the Solaris operating system.

      See your Sun Cluster installation documentation for instructions.

    2. Install any unbundled drivers required by your cluster configuration.

      For driver installation procedures, see the host adapter documentation.

    3. Apply any required Solaris patches.

      PatchPro is a patch-management tool that eases the selection and download of patches required for installation or maintenance of Sun Cluster software. PatchPro provides an Interactive Mode tool especially for Sun Cluster. The Interactive Tool makes the installation of patches easier. PatchPro's Expert Mode tool helps you to maintain your configuration with the latest set of patches. Expert Mode is especially useful for obtaining all of the latest patches, not just the high availability and security patches.

      To access the PatchPro tool for Sun Cluster software, go to http://www.sun.com/PatchPro/, click Sun Cluster, then choose either Interactive Mode or Expert Mode. Follow the instructions in the PatchPro tool to describe your cluster configuration and download the patches.

      For third-party firmware patches, see the SunSolveSM Online site at http://sunsolve.ebay.sun.com.

  9. On the node for which you configured the BIOS in Step 4, finish configuring the SCSI initiator IDs.

    1. Get the information required for the mpt.conf file.

      To create the mpt.conf entries, you need the path to your boot disk and the SCSI unit address.

      To find this information on the V40z server with SG-XCPI2SCSI-LM320 Sun StorEdge PCI/PCI-X Dual Ultra320 SCSI Host Adapter cards, use the following command:


      # echo | format
      Searching for disks...done
      
      AVAILABLE DISK SELECTIONS:
      		0. c1t0d0 <DEFAULT cyl 8938 alt 2 hd 255 sec 63>
      			/pci@0,0/pci1022,7450@a/pci17c2,10@4/sd@0,0
    2. Create a /kernel/drv/mpt.conf file.

    3. Include the following entries:


      scsi-initiator-id=6;
      name="mpt" parent="/pci@0,0/pci1022,7450@a"
             unit-address="4"
             scsi-initiator-id=7;

      Note –

      These entries are based on the foregoing example output of the format command. Your entries must include the values output from your format command. Also, note that the parent and unit-address values are strings. The quotation marks are required to form correct values in the mpt.conf file.


      The entries in this example have the following meanings:

      scsi-initiator-id=6;

      Matches your setting in the BIOS for the host adapter ports.

      name="mpt";

      Indicates that these settings are for the mpt driver.

      parent

      Specifies the path to your local drive, which you discovered in Step a.

      unit-address

      Specifies the unit address of the local drive. In the example in Step a, this information derives from the pci17c2,10@4 portion of the output.

      scsi-initiator-id=7;

      Sets your node's local drive back to the default SCSI setting of 7.

    4. Reboot the node to activate the mpt.conf file changes.


      # reboot
      
  10. Ensure that each LUN has an associated entry in the /kernel/drv/sd.conf file.

    For more information, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.
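
    The required entries depend on how your storage array presents its LUNs. A hypothetical sd.conf fragment for two additional LUNs on SCSI target 2 might look like the following; adjust the target and lun values to match your configuration.

    name="sd" class="scsi" target=2 lun=1;
    name="sd" class="scsi" target=2 lun=2;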

  11. To make the changes to the /kernel/drv/sd.conf file active, perform one of the following options.

    • On systems that run Solaris 8 Update 7 or below, perform a reconfiguration boot.

    • For Solaris 9 and above, run the update_drv -f sd command and then the devfsadm command.
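
    For example, on a node that runs Solaris 9 or later:

    # update_drv -f sd
    # devfsadm

    On Solaris 8, a reconfiguration boot (for example, reboot -- -r) achieves the same result. The same commands apply wherever this step is repeated later in this procedure.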

  12. Power on the remaining nodes and install the operating system software on them.

    1. Install the Solaris operating system.

      See your Sun Cluster installation documentation for instructions.

    2. Install any unbundled drivers required by your cluster configuration.

      For driver installation procedures, see the host adapter documentation.

    3. Apply any required Solaris patches.

      PatchPro is a patch-management tool that eases the selection and download of patches required for installation or maintenance of Sun Cluster software. PatchPro provides an Interactive Mode tool especially for Sun Cluster. The Interactive Tool makes the installation of patches easier. PatchPro's Expert Mode tool helps you to maintain your configuration with the latest set of patches. Expert Mode is especially useful for obtaining all of the latest patches, not just the high availability and security patches.

      To access the PatchPro tool for Sun Cluster software, go to http://www.sun.com/PatchPro/, click Sun Cluster, then choose either Interactive Mode or Expert Mode. Follow the instructions in the PatchPro tool to describe your cluster configuration and download the patches.

      For third-party firmware patches, see the SunSolveSM Online site at http://sunsolve.ebay.sun.com.

  13. Ensure that each LUN has an associated entry in the /kernel/drv/sd.conf file.

    For more information, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

  14. To make the changes to the /kernel/drv/sd.conf file active, perform one of the following options.

    • On systems that run Solaris 8 Update 7 or below, perform a reconfiguration boot.

    • For Solaris 9 and above, run the update_drv -f sd command and then the devfsadm command.

  15. If you are using Sun StorEdge 3310 JBOD arrays with the Sun StorEdge PCI/PCI-X Dual Ultra320 SCSI host adapter, you must throttle down the speed of the adapter to U160. Add the following entry to your /kernel/drv/mpt.conf file on each node:


    scsi-options=0x1ff8;
  16. Ensure that each LUN has an associated entry in the /kernel/drv/sd.conf file.

    For more information, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.

  17. To make the changes to the /kernel/drv/sd.conf file active, perform one of the following options.

    • On systems that run Solaris 8 Update 7 or below, perform a reconfiguration boot.

    • For Solaris 9 and above, run the update_drv -f sd command and then the devfsadm command.

  18. Install the Sun Cluster software and volume management software on each node.

    For software installation procedures, see the Sun Cluster installation documentation.


Example 1–1 A Completed mpt.conf File

The following completed mpt.conf file combines the SCSI initiator ID entries from the example in this procedure with the scsi-options entry that throttles the adapter to Ultra160.


# more /kernel/drv/mpt.conf
scsi-initiator-id=6;
name="mpt" parent="/pci@0,0/pci1022,7450@a"
       unit-address="4"
       scsi-initiator-id=7;
scsi-options=0x1ff8;

Next Steps

If needed, finish setting up your storage arrays, including partitions. If you are using Solstice DiskSuiteTM/Solaris Volume Manager as your volume manager, save the disk-partitioning information. You might need disk-partitioning information if you replace a failed disk drive in the future.


Caution –

Do not save disk-partitioning information in /tmp because you will lose this file when you reboot. Instead, save this file in /usr/tmp.


SPARC: How to Add a Storage Array to an Existing SPARC Based Cluster

This procedure contains instructions for adding storage arrays to an operational cluster. If you need to install storage arrays to a new cluster, use the procedure in SPARC: How to Install a Storage Array in a New SPARC Based Cluster or x86: How to Install a Storage Array in a New X86 Based Cluster.

Adding a storage array enables you to alter your storage pool. You might want to perform this procedure in the following scenarios.

This procedure defines Node A as the node with which you begin working. Node B is the remaining node.

Before You Begin

This procedure relies on the following prerequisites and assumptions.

Steps
  1. Install all software that is specific to the storage array or to any new host adapters.

    Install the software and patches to all nodes that will connect to the new storage array.

    PatchPro is a patch-management tool that eases the selection and download of patches required for installation or maintenance of Sun Cluster software. PatchPro provides an Interactive Mode tool especially for Sun Cluster. The Interactive Tool makes the installation of patches easier. PatchPro's Expert Mode tool helps you to maintain your configuration with the latest set of patches. Expert Mode is especially useful for obtaining all of the latest patches, not just the high availability and security patches.

    To access the PatchPro tool for Sun Cluster software, go to http://www.sun.com/PatchPro/, click Sun Cluster, then choose either Interactive Mode or Expert Mode. Follow the instructions in the PatchPro tool to describe your cluster configuration and download the patches.

    For third-party firmware patches, see the SunSolveSM Online site at http://sunsolve.ebay.sun.com.

  2. Move all resource groups and device groups off Node A.


    # scswitch -S -h from-node
    
  3. If you need to install host adapters in Node A, perform the following steps.

    1. Shut down and power off Node A.

      For the procedure about how to shut down and power off a node, see the Sun Cluster system administration documentation.

    2. Install host adapters in Node A.

      For the procedure about how to install host adapters, see your host adapters and server documentation.

  4. Connect the storage array to the host adapters on Node A.

    • If necessary, terminate the ports that will connect to Node B.

      • If you have a NetraTM D130 array, always terminate the ports that connect to Node B.

      • If you have a StorEdge 3310 or 3320 SCSI array, terminate the ports that connect to Node B when using a split-bus configuration.

    • If your storage array is a StorEdge 3310 or 3320 SCSI array, do not power on the storage array until the storage array is cabled to Node A.

    For cabling diagrams, see Chapter 3, Cabling Diagrams.


    Note –

    Ensure that the bus length does not exceed SCSI bus-length specifications. This measurement includes the cables to both nodes, as well as the bus length that is internal to each storage array, node, and the host adapter. For more information about SCSI bus-length limitations, see your hardware documentation.


  5. Ensure that each device in the SCSI chain has a unique SCSI address.

    To avoid SCSI-chain conflicts, the following steps instruct you to reserve SCSI address 7 for one host adapter in the SCSI chain and change the other host adapter's global scsi-initiator-id to an available SCSI address. Then the steps instruct you to change the scsi-initiator-id for local devices back to 7.


    Note –

    A slot in the storage array might not be in use. However, do not set the scsi-initiator-id to a SCSI address for that disk slot. This precaution minimizes future complications if you install additional disk drives.


    1. If necessary, power on the storage devices.

      For the procedure about powering on a storage device, see the service manual that shipped with your storage device.


      Note –

      If necessary, halt the nodes so that you can perform OpenBootTM PROM (OBP) Monitor tasks at the ok prompt.


    2. If necessary, power on Node A, but do not allow it to boot. If necessary, halt the system to continue with OBP Monitor tasks.

    3. Set the scsi-initiator-id for Node A to 6.


      {1} ok setenv scsi-initiator-id 6
      scsi-initiator-id = 6
    4. Find the paths to the host adapters that connect to the local disk drives.


      {0} ok show-disks
      

      Use this information to change the SCSI addresses in the nvramrc script. Do not include the /sd directories in the device paths.

    5. Edit the nvramrc script to set the scsi-initiator-id for the local devices on the first node to 7.

      For a full list of commands, see the OpenBoot 2.x Command Reference Manual.


      Caution –

      Insert exactly one space after the first double quote and before scsi-initiator-id.



      {0} ok nvedit
       0: probe-all 
       1: cd /pci@1f,4000/scsi@2
       2: 7 encode-int " scsi-initiator-id" property
       3: device-end 
       4: cd /pci@1f,4000/scsi@3 
       5: 7 encode-int " scsi-initiator-id" property 
       6: device-end
       7: install-console
       8: banner[Control C] 
      {0} ok
    6. Store the changes.

      The changes you make through the nvedit command are recorded on a temporary copy of the nvramrc script. You can continue to edit this copy without risk. After you complete your edits, save the changes. If you are not sure about the changes, discard them.

      • To store the changes, type the following command:


        {0} ok nvstore
        {1} ok 
      • To discard the changes, type the following command:


        {0} ok nvquit
        {1} ok 
    7. Verify the contents of the nvramrc script that you created, as shown in the following example.

      If the contents of the nvramrc script are incorrect, use the nvedit command to make corrections.


      {0} ok printenv nvramrc
      nvramrc =             probe-all
                            cd /pci@1f,4000/scsi@2 
                            7 " scsi-initiator-id" integer-property
                            device-end 
                            cd /pci@1f,4000/scsi@3
                            7 " scsi-initiator-id" integer-property
                            device-end 
                            install-console
                            banner
      {1} ok
    8. Instruct the OpenBoot PROM (OBP) Monitor to use the nvramrc script, as shown in the following example.


      {0} ok setenv use-nvramrc? true
      use-nvramrc? = true
      {1} ok 
  6. Perform a reconfiguration boot on Node A to create the new Solaris device files and links.
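
    If Node A is still at the OpenBoot PROM ok prompt after the preceding steps, you can, for example, boot with the reconfiguration option:

      {0} ok boot -r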

  7. If necessary, format and label the disks.

  8. On Node A, verify that the device IDs have been assigned to the disk drives in the storage array.

    # scdidadm -l

  9. Move all resource groups and device groups off Node B.


    # scswitch -S -h from-node
    
  10. If you need to install host adapters in Node B, perform the following steps.

    1. Shut down Node B.

      For the procedure about how to shut down and power off a node, see the Sun Cluster system administration documentation.

    2. Install the host adapters in Node B.

      For the procedure about how to install a host adapter, see your host adapter and server documentation.

    3. Power on and boot Node B.

  11. Connect the storage array to the host adapters on Node B.

    If you added port terminators in Step 4, remove the terminators and connect the storage array to Node B.

    For cabling diagrams, see Chapter 3, Cabling Diagrams.


    Note –

    Ensure that the bus length does not exceed bus-length specifications. This measurement includes the cables to both nodes, as well as the bus length that is internal to each storage array, node, and the host adapter. For more information about bus-length limitations, see your hardware documentation.


  12. Verify that the scsi-initiator-id is set correctly on the second node.

    1. If necessary, power on the second node, but do not allow it to boot. If necessary, halt the system to continue with OBP Monitor tasks.

    2. Verify that the scsi-initiator-id for each host adapter on the second node is set to 7.

      Use the show-disks command to find the paths to the host adapters that are connected to these enclosures. Select each host adapter's device tree node, and display the node's properties to confirm that the scsi-initiator-id for each host adapter is set to 7.


      {0} ok cd /pci@6,4000/pci@3/scsi@5
      {0} ok .properties
      scsi-initiator-id        00000007 
      ...
  13. Perform a reconfiguration boot to create the new Solaris device files and links.

  14. On Node B, verify that the device IDs have been assigned to the disk drives in the storage array.


    # scdidadm -L
    
  15. Perform volume management administration to add the disk drives in the storage array to the volume management configuration.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.
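
    For example, with Solstice DiskSuite/Solaris Volume Manager you might add the new drives to an existing disk set by their DID device names. The disk set name and DID numbers below are placeholders; use the values that scdidadm reported for your new drives.

    # metaset -s setname -a /dev/did/rdsk/d9 /dev/did/rdsk/d10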

Next Steps

If needed, finish setting up your storage arrays, including partitions. If you are using Solstice DiskSuiteTM/Solaris Volume Manager as your volume manager, save the disk-partitioning information. You might need disk-partitioning information if you replace a failed disk drive in the future.


Caution –

Do not save disk-partitioning information in /tmp because you will lose this file when you reboot. Instead, save this file in /usr/tmp.


x86: How to Add a Storage Array to an Existing X86 Based Cluster

This procedure contains instructions for adding storage arrays to an operational cluster. If you need to install storage arrays to a new cluster, use the procedure in SPARC: How to Install a Storage Array in a New SPARC Based Cluster or x86: How to Install a Storage Array in a New X86 Based Cluster.

Adding a storage array enables you to alter your storage pool. You might want to perform this procedure in the following scenarios.

This procedure defines Node A as the node with which you begin working. Node B is the remaining node.

Before You Begin

This procedure relies on the following prerequisites and assumptions.

Steps
  1. Install all software that is specific to the storage array or to any new host adapters.

    Install the software and patches to all nodes that will connect to the new storage array.

    PatchPro is a patch-management tool that eases the selection and download of patches required for installation or maintenance of Sun Cluster software. PatchPro provides an Interactive Mode tool especially for Sun Cluster. The Interactive Tool makes the installation of patches easier. PatchPro's Expert Mode tool helps you to maintain your configuration with the latest set of patches. Expert Mode is especially useful for obtaining all of the latest patches, not just the high availability and security patches.

    To access the PatchPro tool for Sun Cluster software, go to http://www.sun.com/PatchPro/, click Sun Cluster, then choose either Interactive Mode or Expert Mode. Follow the instructions in the PatchPro tool to describe your cluster configuration and download the patches.

    For third-party firmware patches, see the SunSolveSM Online site at http://sunsolve.ebay.sun.com.

  2. Move all resource groups and device groups off Node A.


    # scswitch -S -h from-node
    
  3. If you need to install host adapters in Node A, perform the following steps.

    1. Shut down and power off Node A.

      For the procedure about how to shut down and power off a node, see the Sun Cluster system administration documentation.

    2. Install host adapters in Node A.

      For the procedure about how to install host adapters, see your host adapters and server documentation.

  4. If you installed host adapters in Step 3, or if you intend to use previously unconfigured host adapters, ensure that each device in the SCSI chain has a unique SCSI address by configuring the scsi-initiator-id in the BIOS.


    Note –

    Perform these steps on one cluster node, the node on which you have configured SCSI initiator IDs for the cluster in the past.


    1. Access your host adapter's BIOS settings.

      To access the BIOS on the V40z server with X4422A Sun Dual Gigabit Ethernet and Dual SCSI Adapter cards, press Ctrl-C when prompted during reboot.

    2. Verify that the internal controller is set to the default value of 7.

    3. Select a unique value for each of the new host adapter's ports.

    4. Set each controller's scsi-initiator-id to that value.

  5. Connect the storage array to the host adapters on Node A.

    • If necessary, terminate the ports that will connect to Node B.

      • If you have a NetraTM D130 array, always terminate the ports that connect to Node B.

      • If you have a StorEdge 3310 or 3320 SCSI array, terminate the ports that connect to Node B when using a split-bus configuration.

    • If your storage array is a StorEdge 3310 or 3320 SCSI array, do not power on the storage array until the storage array is cabled to Node A.

    For cabling diagrams, see Chapter 3, Cabling Diagrams.


    Note –

    Ensure that the bus length does not exceed SCSI bus-length specifications. This measurement includes the cables to both nodes, as well as the bus length that is internal to each storage array, node, and the host adapter. For more information about SCSI bus-length limitations, see your hardware documentation.


  6. If you installed host adapters in Step 3, or if you intend to use previously unconfigured host adapters, finish configuring the SCSI initiator IDs on the same node on which you configured the BIOS in Step 4.

    1. Get the information required for the mpt.conf file.

      To create the mpt.conf entries, you need the path to your boot disk and the SCSI unit address.

      To find this information on the V40z server with SG-XCPI2SCSI-LM320 Sun StorEdge PCI/PCI-X Dual Ultra320 SCSI Host Adapter cards, use the following command:


      # echo | format
      Searching for disks...done
      
      AVAILABLE DISK SELECTIONS:
      		0. c1t0d0 <DEFAULT cyl 8938 alt 2 hd 255 sec 63>
      			/pci@0,0/pci1022,7450@a/pci17c2,10@4/sd@0,0
    2. Create or edit your /kernel/drv/mpt.conf file.

    3. Include the following entries:


      scsi-initiator-id=6;
      name="mpt" parent="/pci@0,0/pci1022,7450@a"
             unit-address="4"
             scsi-initiator-id=7;

      Note –

      These entries are based on the foregoing example output of the format command. Your entries must include the values output from your format command. Also, note that the parent and unit-address values are strings. The quotation marks are required to form correct values in the mpt.conf file.


      The entries in this example have the following meanings:

      scsi-initiator-id=6;

      Matches your setting in the BIOS for the host adapter ports.

      name="mpt"

      Indicates that these settings are for the mpt driver.

      parent

      Is set to the path to your local drive.

      unit-address

      Specifies the unit address of the local drive. In the example in Step a, this information derives from the pci17c2,10@4 portion of the output.

      scsi-initiator-id=7;

      Sets your node's local drive back to the default SCSI setting of 7.

  7. Perform a reconfiguration boot on Node A to create the new Solaris device files and links.
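
    On an x86 based node, one way to perform the reconfiguration boot, for example, is:

    # touch /reconfigure
    # reboot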

  8. If necessary, format and label the disks.

  9. On Node A, verify that the device IDs have been assigned to the disk drives in the storage array.

    # scdidadm -l

  10. Move all resource groups and device groups off Node B.


    # scswitch -S -h from-node
    
  11. If you need to install host adapters in Node B, perform the following steps.

    1. Shut down Node B.

      For the procedure about how to shut down and power off a node, see the Sun Cluster system administration documentation.

    2. Install the host adapters in Node B.

      For the procedure about how to install a host adapter, see your host adapter and server documentation.

    3. Power on and boot Node B.

  12. Connect the storage array to the host adapters on Node B.

    If you added port terminators in Step 5, remove the terminators and connect the storage array to Node B.

    For cabling diagrams, see Chapter 3, Cabling Diagrams.


    Note –

    Ensure that the bus length does not exceed bus-length specifications. This measurement includes the cables to both nodes, as well as the bus length that is internal to each storage array, node, and the host adapter. For more information about bus-length limitations, see your hardware documentation.


  13. Verify that the scsi-initiator-id is set correctly on the second node.

    1. Access your BIOS settings.

      To access the BIOS on the V40z server with SG-XCPI2SCSI-LM320 Sun StorEdge PCI/PCI-X Dual Ultra320 SCSI Host Adapter cards, press Ctrl-C when prompted during reboot.

    2. Verify that the scsi-initiator-id for each host adapter on the second node is set to 7.

  14. Perform a reconfiguration boot to create the new Solaris device files and links.

  15. On Node B, verify that the device IDs have been assigned to the disk drives in the storage array.


    # scdidadm -L
    
  16. Perform volume management administration to add the disk drives in the storage array to the volume management configuration.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.

Next Steps

If needed, finish setting up your storage arrays, including partitions. If you are using Solstice DiskSuiteTM/Solaris Volume Manager as your volume manager, save the disk-partitioning information. You might need disk-partitioning information if you replace a failed disk drive in the future.


Caution –

Do not save disk-partitioning information in /tmp because you will lose this file when you reboot. Instead, save this file in /usr/tmp.