Sun Cluster 3.0 5/02 Supplement

Configuring a Sun StorEdge/Netra st A1000 Array

This section describes the procedures for configuring a StorEdge/Netra st A1000 array after installing Sun Cluster software. Table D-1 lists these procedures.

Configuring a StorEdge/Netra st A1000 array before installing Sun Cluster software is the same as doing so in a non-cluster environment. For procedures on configuring StorEdge/Netra st A1000 arrays before installing Sun Cluster, see the Sun StorEdge RAID Manager User's Guide.

Table D-1 Task Map: Configuring StorEdge/Netra st A1000 Disk Drives

  Task                                             For Instructions, Go To
  -----------------------------------------------  -----------------------------------------------
  Create a logical unit (LUN).                     "How to Create a LUN"

  Remove a LUN.                                    "How to Delete a LUN"

  Reset the StorEdge/Netra st A1000                "How to Reset a StorEdge/Netra st A1000 LUN
  configuration.                                   Configuration"

  Create a hot spare. Follow the same              Sun StorEdge RAID Manager User's Guide
  procedure that is used in a non-cluster          Sun StorEdge RAID Manager Release Notes
  environment.

  Delete a hot spare. Follow the same              Sun StorEdge RAID Manager User's Guide
  procedure that is used in a non-cluster          Sun StorEdge RAID Manager Release Notes
  environment.

  Increase the size of a drive group. Follow       Sun StorEdge RAID Manager User's Guide
  the same procedure that is used in a             Sun StorEdge RAID Manager Release Notes
  non-cluster environment.

How to Create a LUN

Use this procedure to create a logical unit (LUN) from unassigned disk drives or remaining capacity. See the Sun StorEdge RAID Manager Release Notes for the latest information about LUN administration.

This product supports the use of hardware RAID and host-based software RAID. For host-based software RAID, this product supports RAID levels 0+1 and 1+0.


Note -

When you use host-based software RAID with hardware RAID, the hardware RAID levels you use affect the hardware maintenance procedures because they affect volume management administration. If you use hardware RAID level 1, 3, or 5, you can perform most maintenance procedures in "Maintaining a StorEdge/Netra st A1000 Array" without volume management disruptions. If you use hardware RAID level 0, some maintenance procedures in "Maintaining a StorEdge/Netra st A1000 Array" require additional volume management administration because the availability of the LUNs is impacted.


  1. With all cluster nodes booted and attached to the StorEdge/Netra st A1000 array, create the LUN on one node.

    Shortly after the LUN formatting completes, a logical name for the new LUN appears in /dev/rdsk on all cluster nodes that are attached to the StorEdge/Netra st A1000 array.

    For the procedure on creating a LUN, see the Sun StorEdge RAID Manager User's Guide.
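
    If your site uses the RAID Manager command-line interface rather than the GUI, LUNs can also be created with the raidutil utility. The following line is a sketch only: the controller name, LUN number, RAID level, and size are example values, so verify the exact raidutil options in the Sun StorEdge RAID Manager User's Guide before you run it.


    # raidutil -c c1t5d0 -n 1 -l 5 -s 2000   # example values only; verify options first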

    If the following warning message is displayed, ignore it and continue with the next step:


    scsi: WARNING: /sbus@e,0/QLGC,isp@1,10000/sd@2,1 (sd153):corrupt label - wrong magic number


    Note -

    Use the format(1M) command to verify Solaris logical device names and label the LUN if necessary.


  2. Ensure that the new logical name for the LUN you created in Step 1 appears in the /dev/rdsk directory on both nodes by running the hot_add command on both nodes:


    # /etc/raid/bin/hot_add
    

  3. On one node, update the global device namespace:


    # scgdevs
    

  4. Use the scdidadm command to verify that the DID numbers for the LUNs are the same on both nodes. In the sample output that follows, the DID numbers are the same: both nodes' paths map to /dev/did/rdsk/d33.


    # scdidadm -L
    ... 
    33       e07a:/dev/rdsk/c1t4d2          /dev/did/rdsk/d33
    33       e07c:/dev/rdsk/c0t4d2          /dev/did/rdsk/d33

  5. Are the DID numbers that you received from running the scdidadm command in Step 4 the same for both of your nodes?

    • If yes, proceed to Step 6.

    • If no, go to "How to Correct Mismatched DID Numbers" before you proceed.

  6. If you want a volume manager to manage the new LUN you created in Step 1, run the appropriate Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager commands to incorporate the new LUN into a diskset or disk group.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.
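
    As a hedged illustration, with placeholder names throughout (the diskset dg-schost-1, DID device d33, and disk c1t4d2 are examples, not values from this procedure), the commands typically resemble the following:


    # metaset -s dg-schost-1 -a /dev/did/rdsk/d33    # SDS/SVM: add the DID device to a diskset
    # vxdiskadd c1t4d2                               # VxVM: initialize the disk; prompts for a disk group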

  7. If you want the new LUN to be a quorum device, add the quorum device.

    For the procedure on adding a quorum device, see the Sun Cluster 3.0 U2 System Administration Guide.
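
    For example, assuming that the new LUN corresponds to DID device d33 (a placeholder), the scconf command adds it as a quorum device:


    # scconf -a -q globaldev=d33   # d33 is a placeholder DID device name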

How to Delete a LUN

Use this procedure to delete one or more LUNs. See the Sun StorEdge RAID Manager Release Notes for the latest information about LUN administration.


Caution -

This procedure removes all data on the LUN you delete.



Caution -

Do not delete LUN 0.


  1. From one node that is connected to the StorEdge/Netra st A1000 array, use the format command to determine the paths to the LUN you are deleting (sample output follows).


    f28c# format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
           0. c0t10d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
              /sbus@3,0/SUNW,fas@3,8800000/sd@a,0
           1. c1t5d0 <Symbios-StorEDGEA1000-0301 cyl 12160 alt 2 hd 64 sec 64>
              /pseudo/rdnexus@1/rdriver@5,0
           2. c2t2d0 <Symbios-StorEDGEA1000-0301 cyl 12160 alt 2 hd 64 sec 64>
              /pseudo/rdnexus@2/rdriver@2,0

  2. Determine if the LUN that you plan to remove is configured as a quorum device.


    # scstat -q
    
    • If the LUN is not a quorum device, go to Step 3.

    • If the LUN is configured as a quorum device, choose and configure another device to be the new quorum device. Then remove the old quorum device.
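
    For the second case, a minimal sketch follows, where dN is the replacement quorum device and dM is the LUN that you plan to delete (both placeholders):


    # scconf -a -q globaldev=dN   # add the replacement quorum device
    # scconf -r -q globaldev=dM   # remove the old quorum device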

  3. Remove the LUN from disksets or disk groups.

    Run the appropriate Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager commands to remove the LUN from any diskset or disk group. For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation. A Solstice DiskSuite/Solaris Volume Manager sketch and the additional VERITAS Volume Manager commands that are required follow below.
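
    For Solstice DiskSuite/Solaris Volume Manager, removal is typically a single command; in the following sketch, the diskset name and the DID device are placeholders:


    # metaset -s dg-schost-1 -d /dev/did/rdsk/d33   # placeholder diskset and DID names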

    LUNs that were managed by VERITAS Volume Manager must be completely removed from VERITAS Volume Manager control before you can delete them. To remove the LUNs, after you delete the LUN from any disk group, use the following commands:


    # vxdisk offline cNtXdY
    # vxdisk rm cNtXdY
    

  4. From one node, delete the LUN.

    For the procedure on deleting a LUN, see the Sun StorEdge RAID Manager User's Guide.
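
    If you use the RAID Manager command-line interface, the raidutil utility can also delete a LUN. The following line is a sketch only; the controller and LUN number are example values, so confirm the option syntax in the Sun StorEdge RAID Manager User's Guide first. Remember not to delete LUN 0.


    # raidutil -c c1t5d0 -D 2   # example controller and LUN number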

  5. From the same node, remove the paths to the LUN(s) you are deleting.


    # rm /dev/rdsk/cNtXdY*
    # rm /dev/dsk/cNtXdY*
    # rm /dev/osa/dev/dsk/cNtXdY*
    # rm /dev/osa/dev/rdsk/cNtXdY*
    

  6. From the same node, remove all obsolete device IDs (DIDs).


    # scdidadm -C
    

  7. From the same node, switch resources and device groups off the node.


    # scswitch -Sh nodename
    

  8. Shut down the node.


    # shutdown -y -g0 -i0
    

  9. Boot the node and wait for it to rejoin the cluster:


    ok boot -r
    

  10. Repeat Step 5 through Step 9 on the other node that is attached to the StorEdge/Netra st A1000 array.

How to Reset a StorEdge/Netra st A1000 LUN Configuration

Use this procedure to reset a StorEdge/Netra st A1000 LUN configuration.


Caution -

Resetting LUN configuration results in a new DID number being assigned to LUN 0. This is because the software assigns a new worldwide number (WWN) to the new LUN.


  1. From one node that is connected to the StorEdge/Netra st A1000 array, use the format command to determine the paths to the LUN(s) you are resetting, as shown in the following sample output.


    f28c# format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
           0. c0t10d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
              /sbus@3,0/SUNW,fas@3,8800000/sd@a,0
           1. c1t5d0 <Symbios-StorEDGEA1000-0301 cyl 12160 alt 2 hd 64 sec 64>
              /pseudo/rdnexus@1/rdriver@5,0
           2. c2t2d0 <Symbios-StorEDGEA1000-0301 cyl 12160 alt 2 hd 64 sec 64>
              /pseudo/rdnexus@2/rdriver@2,0

  2. Determine if the LUN that you plan to reset is configured as a quorum device.


    # scstat -q
    
    • If the LUN is not a quorum device, go to Step 3.

    • If the LUN is configured as a quorum device, choose and configure another device to be the new quorum device. Then remove the old quorum device.

  3. Remove the LUN from disksets or disk groups.

    Run the appropriate Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager commands to remove the LUN from any diskset or disk group. For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation. See the following paragraph for additional VERITAS Volume Manager commands that are required.

    LUNs that were managed by VERITAS Volume Manager must be completely removed from VERITAS Volume Manager control before you can delete them. To remove the LUNs, after you delete the LUN from any disk group, use the following commands:


    # vxdisk offline cNtXdY
    # vxdisk rm cNtXdY
    

  4. On one node, reset the LUN configuration.

    For the procedure for resetting StorEdge/Netra st A1000 LUN configuration, see the Sun StorEdge RAID Manager User's Guide.


    Note -

    Use the format command to verify Solaris logical device names.


  5. Use the format command to label the new LUN 0.
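
    A minimal interactive sketch follows; the disk selection number is an example taken from output like that in Step 1:


    # format
    Specify disk (enter its number): 1
    format> label
    Ready to label disk, continue? y
    format> quit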

  6. Remove the paths to the old LUN(s) you reset:


    # rm /dev/rdsk/cNtXdY*
    # rm /dev/dsk/cNtXdY*
    
    # rm /dev/osa/dev/dsk/cNtXdY*
    # rm /dev/osa/dev/rdsk/cNtXdY*
    

  7. Update device namespaces on both nodes:


    # devfsadm -C
    

  8. Remove all obsolete DIDs on both nodes:


    # scdidadm -C
    

  9. Switch resources and device groups off the node:


    # scswitch -Sh nodename
    

  10. Shut down the node:


    # shutdown -y -g0 -i0
    

  11. Boot the node and wait for it to rejoin the cluster:


    ok boot -r
    

    If the following error message appears, ignore it and continue with the next step. The DID will be updated when the procedure is complete.


    device id for '/dev/rdsk/c0t5d0' does not match physical disk's id.

  12. After the node has rebooted and joined the cluster, repeat Step 6 through Step 11 on the other node that is attached to the StorEdge/Netra st A1000 array.

    The DID number for the original LUN 0 is removed and a new DID is assigned to LUN 0.

How to Correct Mismatched DID Numbers

Use this section to correct mismatched device ID (DID) numbers that might appear during the creation of A1000 LUNs. You correct the mismatch by deleting the Solaris and Sun Cluster paths to the LUNs whose DID numbers differ; after the nodes reboot, the paths are corrected.


Note -

Use this procedure only if you are directed to do so from "How to Create a LUN".


  1. From one node that is connected to the StorEdge/Netra st A1000 array, use the format command to determine the paths to the LUN(s) that have different DID numbers:


    # format
    

  2. Remove the paths to the LUN(s) that have different DID numbers:


    # rm /dev/rdsk/cNtXdY*
    # rm /dev/dsk/cNtXdY*
    
    # rm /dev/osa/dev/dsk/cNtXdY*
    # rm /dev/osa/dev/rdsk/cNtXdY*
    

  3. Use the lad command to determine the alternate paths to the LUN(s) that have different DID numbers.

    The RAID Manager software creates two paths to the LUN in the /dev/osa/dev/rdsk directory. Substitute the cNtXdY number from the other controller in the disk array to determine the alternate path.

    For example, with this configuration:


    # lad
    c0t5d0 1T93600714 LUNS: 0 1
    c1t4d0 1T93500595 LUNS: 2

    The alternate paths would be as follows.


    /dev/osa/dev/dsk/c1t4d1*
    /dev/osa/dev/rdsk/c1t4d1*

  4. Remove the alternate paths to the LUN(s) that have different DID numbers:


    # rm /dev/osa/dev/dsk/cNtXdY*
    # rm /dev/osa/dev/rdsk/cNtXdY*
    

  5. On both nodes, remove all obsolete DIDs:


    # scdidadm -C
    

  6. Switch resources and device groups off the node:


    # scswitch -Sh nodename
    

  7. Shut down the node:


    # shutdown -y -g0 -i0
    

  8. Boot the node and wait for it to rejoin the cluster:


    ok boot -r
    

  9. Repeat Step 1 through Step 8 on the other node that is attached to the StorEdge/Netra st A1000 array.

  10. Return to "How to Create a LUN".