Sun Cluster 3.0 12/01 Hardware Guide

Installing Netra D130/StorEdge S1 Enclosures

This section describes the procedure for an initial installation of Netra D130/StorEdge S1 storage enclosures.

How to Install a Netra D130/StorEdge S1 Enclosure

Use this procedure for an initial installation of Netra D130/StorEdge S1 enclosures, prior to installing the Solaris operating environment and Sun Cluster software. Perform this procedure in conjunction with the procedures in the Sun Cluster 3.0 12/01 Software Installation Guide and your server hardware manual.

Multihost storage in clusters uses the multi-initiator capability of the SCSI (Small Computer System Interface) specification. For conceptual information on multi-initiator capability, see the Sun Cluster 3.0 12/01 Concepts document.

  1. Ensure that each device in the SCSI chain has a unique SCSI address.

    The default SCSI address for host adapters is 7. Reserve SCSI address 7 for one host adapter in the SCSI chain. This procedure refers to the node that has SCSI address 7 as the "second node," and to the node that has an available SCSI address as the "first node." An example of checking the addresses on the bus follows the note below.


    Note -

    Even though a slot in a Netra D130/StorEdge S1 enclosure might not be in use, do not set the scsi-initiator-id for the first node to the SCSI address for that disk slot. This precaution minimizes future complications if you install additional disk drives.
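
    After the enclosures are cabled and powered on later in this procedure, you can confirm from the OpenBoot PROM Monitor that each device on the bus reports a distinct address. The following listing is a hypothetical sketch; your device paths, disk models, and target numbers will differ.


    {0} ok probe-scsi-all
    /pci@1f,4000/pci@4/SUNW,isptwo@4
    Target 0
      Unit 0   Disk     SEAGATE ST336704LSUN36G
    Target 1
      Unit 0   Disk     SEAGATE ST336704LSUN36G
    {0} ok

    Each occupied disk slot claims one target address. Addresses 6 and 7 must remain reserved for the two host adapters.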


  2. Install the host adapters and (if used) Netra E1 Expanders in the nodes that will be connected to the Netra D130/StorEdge S1 enclosures.

    For the procedure on installing a host adapter, see the documentation that shipped with your host adapter, Netra E1, and node hardware.


    Note -

    If your host has only one SCSI port, see "Single SCSI Port Hosts". If your host has two SCSI ports, see "Dual SCSI Port Hosts".


Single SCSI Port Hosts

When installing the Netra D130/StorEdge S1 storage enclosures on single SCSI port hosts, use the Netra E1 PCI Expander to provide the second host SCSI port. Figure 10-1 shows an overview of the installation. The storage devices are cabled so that there is no single point of failure in the cluster. Netra E1 PCI Expanders provide the second SCSI port for 1RU form factor hosts such as the Netra t1, X1, or t1 200.

Figure 10-1 Overview Example of an Enclosure Mirrored Pair Using E1 Expanders

Graphic

  1. Connect the cables to the Netra D130/StorEdge S1 enclosures, as shown in Figure 10-2.

    Make sure that the entire SCSI bus length to each Netra D130 enclosure is less than 6 m. The maximum SCSI bus length for the StorEdge S1 enclosure is 12 m. This measurement includes the cables to both nodes, as well as the bus length internal to each enclosure, node, and host adapter. For example, on a Netra D130 bus, two 2 m cables leave less than 2 m for the combined internal bus lengths of the enclosure, nodes, and host adapters. Refer to the documentation that shipped with the enclosures for other restrictions regarding SCSI operation.

    Figure 10-2 Example of SCSI Cabling for an Enclosure Mirrored Pair

    Graphic

  2. Connect the Ethernet cables between the host enclosures, Netra E1 PCI Expanders, and Ethernet switches, as shown in Figure 10-3.

    Figure 10-3 Example of Ethernet Cabling for a Mirrored Pair Using E1 Expanders

    Graphic

Dual SCSI Port Hosts

  1. Connect the cables to the Netra D130/StorEdge S1 enclosures, as shown in Figure 10-4.

    Make sure that the entire SCSI bus length to each Netra D130 enclosure is less than 6 m (12 m for the StorEdge S1). This measurement includes the cables to both nodes, as well as the bus length internal to each Netra D130/StorEdge S1 enclosure, node, and host adapter. Refer to the documentation that shipped with the Netra D130/StorEdge S1 enclosures for other restrictions regarding SCSI operation.

    Figure 10-4 Example of SCSI Cabling for an Enclosure Mirrored Pair

    Graphic

  2. Connect the AC or DC power cord for each Netra D130/StorEdge S1 enclosure of the mirrored pair to a different power source.

  3. Power on the first node but do not allow it to boot. If necessary, halt the node to continue with OpenBoot™ PROM (OBP) Monitor tasks (the first node is the node with an available SCSI address).
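
    How you stop at the ok prompt depends on your console setup. One approach (a sketch, assuming a serial console through a tip session or a Sun keyboard) is to send a break (Stop-A or the BREAK signal), then keep the node from booting across resets:


    {0} ok setenv auto-boot? false
    auto-boot? = false
    {0} ok

    Restore auto-boot? to true after the software installation tasks are complete.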

  4. Find the paths to the host adapters.


    {0} ok show-disks
    a) /pci@1f,4000/pci@4/SUNW,isptwo@4/sd
    b) /pci@1f,4000/pci@2/SUNW,isptwo@4/sd

    Identify the two controllers that will be connected to the storage devices, and record their paths. Use this information to change the SCSI addresses of these controllers in the nvramrc script in Step 5. Do not include the /sd directories in the device paths.
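
    The change is made per-adapter in the nvramrc script, rather than through the global scsi-initiator-id configuration variable, because the global setting would apply to every SCSI bus on the node, including any internal bus. You can display the global default, which is normally 7, before editing (a sketch; the output format varies with the OpenBoot version):


    {0} ok printenv scsi-initiator-id
    scsi-initiator-id =       7
    {0} ok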

  5. Edit the nvramrc script to set the scsi-initiator-id for the host adapters on the first node.

    For a full list of nvramrc editor and nvedit keystroke commands, see the OpenBoot 3.x Command Reference Manual.

    The following example sets the scsi-initiator-id to 6. The OpenBoot PROM Monitor prints the line numbers (0:, 1:, and so on). The script begins with probe-all so that the device tree is built before the cd commands run; because probe-all is called from nvramrc, the script must also run install-console and banner to complete the normal startup sequence.


    Note -

    Insert exactly one space after the first quotation mark and before scsi-initiator-id.



    {0} ok nvedit 
    0: probe-all
    1: cd /pci@1f,4000/pci@4/SUNW,isptwo@4
    2: 6 " scsi-initiator-id" integer-property 
    3: device-end 
    4: cd /pci@1f,4000/pci@2/SUNW,isptwo@4 
    5: 6 " scsi-initiator-id" integer-property 
    6: device-end 
    7: install-console 
    8: banner <Control C> 
    {0} ok
  6. Store the changes.

    The changes you make through the nvedit command are done on a temporary copy of the nvramrc script. You can continue to edit this copy without risk. After you complete your edits, save the changes. If you are not sure about the changes, discard them. A note on recovering a lost script follows these examples.

    • To store the changes, type:


      {0} ok nvstore
      {0} ok 

    • To discard the changes, type:


      {0} ok nvquit
      {0} ok 
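
    If a stored script is later lost (for example, after a set-defaults command resets the configuration variables), the nvrecover command attempts to recover the contents of the nvramrc script and places you in the nvedit editor. See the OpenBoot 3.x Command Reference Manual for details.


    {0} ok nvrecover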
  7. Verify the contents of the nvramrc script you created in Step 5, as shown in the following example.

    If the contents of the nvramrc script are incorrect, use the nvedit command to make corrections.


    {0} ok printenv nvramrc 
    nvramrc =             probe-all
                          cd /pci@1f,4000/pci@4/SUNW,isptwo@4
                          6 " scsi-initiator-id" integer-property 
                          device-end 
                          cd /pci@1f,4000/pci@2/SUNW,isptwo@4
                          6 " scsi-initiator-id" integer-property 
                          device-end 
                          install-console
                          banner
    {0} ok
  8. Instruct the OpenBoot PROM Monitor to use the nvramrc script.


    {0} ok setenv use-nvramrc? true
    use-nvramrc? = true
    {0} ok 
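
    The nvramrc script runs the next time the node is reset. As an optional check that the script executes without errors (assuming auto-boot? is false so that the node stops at the ok prompt), you can reset the node now:


    {0} ok reset-all

    After the reset, the banner from the end of the script is displayed and the node returns to the ok prompt.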
  9. Power on the second node but do not allow it to boot. If necessary, halt the node to continue with OpenBoot PROM Monitor tasks (the second node is the node that has SCSI address 7).

  10. Verify that the scsi-initiator-id for the host adapter on the second node is set to 7.

    Use the show-disks command to find the paths to the host adapters connected to these enclosures (as in Step 4). Select each host adapter's device tree node, and display the node's properties to confirm that the scsi-initiator-id for each host adapter is set to 7, as shown in the following example.


    {0} ok cd /pci@1f,4000/pci@4/SUNW,isptwo@4
    {0} ok .properties
    ...
    scsi-initiator-id        00000007
    ...
    {0} ok cd /pci@1f,4000/pci@2/SUNW,isptwo@4
    {0} ok .properties
    ...
    scsi-initiator-id        00000007
  11. Continue with the Solaris operating environment, Sun Cluster software, and volume management software installation tasks.

    For software installation procedures, see the Sun Cluster 3.0 12/01 Software Installation Guide.