Sun Cluster 3.0-3.1 With Fibre Channel JBOD Storage Device Manual

Maintaining Storage Arrays

The maintenance procedures listed in FRUs That Do Not Require Sun Cluster Maintenance Procedures are performed the same way as in a noncluster environment. Table 1–2 lists the procedures that require cluster-specific steps.

Table 1–2 Task Map: Maintaining a Storage Array

Task                        Information
Remove a storage array      How to Remove a Storage Array
Replace a storage array     How to Replace a Storage Array
Add a disk drive            How to Add a Disk Drive
Remove a disk drive         How to Remove a Disk Drive
Replace a disk drive        How to Replace a Disk Drive

FRUs That Do Not Require Sun Cluster Maintenance Procedures

Each storage device has a different set of FRUs that do not require cluster-specific procedures.

Sun StorEdge A5x00 FRUs

Administrative tasks for these FRUs require no cluster-specific procedures. See the Sun StorEdge A5000 Installation and Service Manual for these procedures.

Procedure: How to Replace a Storage Array

Before You Begin

This procedure assumes that your cluster is operational.

Example 1–1 shows you how to apply this procedure.

Steps
  1. If possible, back up the metadevices or volumes that reside in the storage array.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.

  2. Perform volume management administration to remove the storage array from the configuration.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation. For an illustrative Solstice DiskSuite/Solaris Volume Manager command sequence, see the sketch that follows Step 12.

  3. On all nodes that are connected to the storage array, run the luxadm remove_device command.


    # luxadm remove_device -F boxname
    

    See Example 1–1 for an example of this command and its use.

  4. Disconnect the fiber-optic cables from the storage array.

  5. Power off and disconnect the storage array from the AC power source.

    For more information, see your storage documentation. For a list of storage documentation, see Related Documentation.

  6. Connect the fiber-optic cables to the new storage array.

  7. Connect the new storage array to an AC power source.

  8. Moving one disk drive at a time, remove each disk drive from the old storage array and insert it into the same slot in the new storage array.

  9. Power on the storage array.

  10. Use the luxadm insert_device command to find the new storage array.

    Repeat this step for each node that is connected to the storage array.


    # luxadm insert_device
    

    See Example 1–1 for an example of this command and its use.

  11. On all nodes that are connected to the new storage array, upload the new information to the DID driver.

    If a volume management daemon such as vold is running on your node, and you have a CD-ROM drive that is connected to the node, a device busy error might be returned even if no disk is in the drive. This error is expected behavior.


    # scgdevs
    
  12. Perform volume management administration to add the new storage array to the configuration.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.
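
The volume management steps in this procedure (Step 1, Step 2, and Step 12) depend on your volume manager. The following sketch shows one way to perform them with Solstice DiskSuite/Solaris Volume Manager. The diskset name oradg and the DID drives d4 and d5 are hypothetical placeholders, not names from this manual; substitute the names from your own configuration, and treat your volume manager documentation as the authoritative reference.

Capture the metadevice configuration and the diskset membership before you remove the storage array (Step 1 and Step 2).


    # metastat -s oradg -p > /var/tmp/oradg.md.tab
    # metaset -s oradg > /var/tmp/oradg.metaset.out

Remove the DID drives that reside in the storage array from the diskset (Step 2).


    # metaset -s oradg -d -f /dev/did/rdsk/d4 /dev/did/rdsk/d5

After the new storage array has been recognized and scgdevs has run (Step 11), return the drives to the diskset and rebuild or resynchronize the affected metadevices (Step 12).


    # metaset -s oradg -a /dev/did/rdsk/d4 /dev/did/rdsk/d5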


Example 1–1 Replacing a Sun StorEdge A5x00 Storage Array

The following example shows how to replace a Sun StorEdge A5x00 storage array. The storage array to be replaced is venus1.


# luxadm remove_device -F venus1

WARNING!!! Please ensure that no filesystems are mounted on these device(s).
 All data on these devices should have been backed up.

The list of devices that will be removed is:
  1: Box name:    venus1
     Node WWN:    123456789abcdeff
     Device Type: SENA (SES device)
     SES Paths:
			/devices/nodes@1/sbus@1f,0/SUNW,socal@1,0/sf@0,0/   \
				ses@w123456789abcdf03,0:0
			/devices/nodes@1/sbus@1f,0/SUNW,socal@1,0/sf@1,0/   \
				ses@w123456789abcdf00,0:0

Please verify the above list of devices and
then enter 'c' or <CR> to Continue or 'q' to Quit. [Default: c]: 
<Return>
Hit <Return> after removing the device(s). <Return>

# luxadm insert_device
Please hit <RETURN> when you have finished adding Fibre Channel 
Enclosure(s)/Device(s): <Return>
# scgdevs

Procedure: How to Remove a Storage Array

Use this procedure to remove a storage array from a cluster. Example 1–2 shows you how to apply this procedure. Use the procedures in your server hardware manual to identify the storage array.

Steps
  1. Perform volume management administration to remove the storage array from the configuration.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation. For an illustrative command sequence, see the sketch that follows these steps.

  2. On all nodes that are connected to the storage array, run the luxadm remove_device command.


    # luxadm remove_device -F boxname
    
  3. Remove the storage array and the fiber-optic cables that are connected to the storage array.

    For more information, see your storage documentation. For a list of storage documentation, see Related Documentation.


    Note –

    If you are using your storage arrays in a SAN-configured cluster, you must keep two FC switches configured in parallel. This configuration maintains cluster availability. See SAN Solutions in a Sun Cluster Environment in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS for more information.


  4. On all nodes, remove references to the storage array.


    # devfsadm -C
    # scdidadm -C
    
  5. If necessary, remove any unused host adapters from the nodes.

    For the procedure about how to remove host adapters, see the documentation that shipped with your nodes.
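
Step 1 depends on your volume manager, and it helps to know which Solaris device names and DID instances belong to the storage array before you remove them. The following sketch assumes the enclosure name venus1 from Example 1–2, a hypothetical diskset named oradg, hypothetical DID drives d4 and d5, and a hypothetical controller path that begins with c1t; substitute the names from your own configuration.

List the disks in the storage array, and map their Solaris device names to DID instances.


    # luxadm display venus1
    # scdidadm -L | grep c1t

If you use Solstice DiskSuite/Solaris Volume Manager, remove those DID drives from their diskset before you remove the storage array. (VERITAS Volume Manager uses its own disk group commands for the equivalent task.)


    # metaset -s oradg -d -f /dev/did/rdsk/d4 /dev/did/rdsk/d5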


Example 1–2 Removing a Sun StorEdge A5x00 Storage Array

The following example shows how to remove a Sun StorEdge A5x00 storage array. The storage array to be removed is venus1.


# luxadm remove_device -F venus1
WARNING!!! Please ensure that no file systems are mounted on these device(s).
 All data on these devices should have been backed up.

The list of devices that will be removed is:
  1: Box name:    venus1
     Node WWN:    123456789abcdeff
     Device Type: SENA (SES device)
     SES Paths:
      /devices/nodes@1/sbus@1f,0/SUNW,socal@1,0/sf@0,0/	\
				ses@w123456789abcdf03,0:0
      /devices/nodes@1/sbus@1f,0/SUNW,socal@1,0/sf@1,0/	\
				ses@w123456789abcdf00,0:0


Please verify the above list of devices and
then enter 'c' or <CR> to Continue or 'q' to Quit. [Default: c]: <Return>

Hit <Return> after removing the device(s). <Return>

# devfsadm -C
# scdidadm -C

Procedure: How to Add a Disk Drive

For conceptual information about quorum, quorum devices, global devices, and device IDs, see your Sun Cluster concepts documentation. For a list of Sun Cluster documentation, see Related Documentation.

Before You Begin

This procedure assumes that your cluster is operational.

Steps
  1. On one node that is connected to the storage array, install the new disk drive.

    When prompted by the luxadm insert_device command, insert the new disk drive and then press the Return key. You can insert multiple disk drives at the same time.


    # luxadm insert_device enclosure,slot
    
  2. On all other nodes that are attached to the storage array, probe all devices and write the new disk drive to the /dev/rdsk directory.

    The amount of time that the devfsadm command requires to complete its processing depends on the number of devices that are connected to the node. Expect at least five minutes.


    # devfsadm -C
    
  3. Ensure that entries for the disk drive have been added to the /dev/rdsk directory.


    # ls -l /dev/rdsk
    
  4. If necessary, partition the disk drive.

    You can either use the format(1M) command or copy the partitioning from another disk drive in the storage array, as shown in the sketch that follows these steps.

  5. From any node in the cluster, update the global device namespace.

    If a volume management daemon such as vold is running on your node, and you have a CD-ROM drive connected to the node, a device busy error might be returned even if no disk is in the drive. This error is expected behavior.


    # scgdevs
    
  6. Verify that a device ID (DID) has been assigned to the disk drive.


    # scdidadm -l
    

    Note –

    The DID that was assigned to the new disk drive might not be in sequential order in the storage array.


  7. Perform necessary volume management administration actions on the new disk drive.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.
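
The following sketch illustrates Step 4 and Step 7 for a Solstice DiskSuite/Solaris Volume Manager configuration. The device name c1t33d0 (an existing drive in the storage array), the device name c1t34d0 (the new drive), the diskset name oradg, and the DID instance d20 are hypothetical placeholders; substitute the names from your own configuration, and see your volume manager documentation for the authoritative procedures.

Copy the partitioning from an existing disk drive in the storage array to the new disk drive (Step 4).


    # prtvtoc /dev/rdsk/c1t33d0s2 | fmthard -s - /dev/rdsk/c1t34d0s2

Add the new disk drive, by its DID name, to a diskset so that you can build metadevices on it (Step 7).


    # metaset -s oradg -a /dev/did/rdsk/d20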

Procedure: How to Remove a Disk Drive

For conceptual information about quorum, quorum devices, global devices, and device IDs, see your Sun Cluster concepts documentation. For a list of Sun Cluster documentation, see Related Documentation.

Example 1–3 shows you how to apply this procedure.

Before You Begin

This procedure assumes that your cluster is operational.

Steps
  1. Is the disk drive that you want to remove a quorum device?


    # scstat -q
    
    • If no, proceed to Step 2.

    • If yes, choose and configure another device to be the new quorum device. Then remove the old quorum device.

      For procedures about how to add and remove quorum devices, see Sun Cluster system administration documentation. For an illustrative scconf command sequence, see the sketch that follows Example 1–3.

  2. If possible, back up the metadevice or volume.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.

  3. Perform volume management administration to remove the disk drive from the configuration.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.

  4. Identify the disk drive that needs to be removed.

    If the disk error message reports the drive problem by DID, determine the Solaris device name.


    # scdidadm -l deviceID
    
  5. On any node that is connected to the storage array, run the luxadm remove_device command.

    When prompted, remove the disk drive and then press the Return key.


    # luxadm remove_device -F /dev/rdsk/cNtXdYsZ
    
  6. On all connected nodes, remove references to the disk drive.


    # devfsadm -C
    # scdidadm -C
    

Example 1–3 Removing a Disk Drive in a Sun StorEdge A5x00 Storage Array

The following example shows how to remove a disk drive from a Sun StorEdge A5x00 storage array. The disk drive to be removed is d4.


# scdidadm -l d4
4        phys-schost-2:/dev/rdsk/c1t32d0 /dev/did/rdsk/d4
# luxadm remove_device -F /dev/rdsk/c1t32d0s2

WARNING!!! Please ensure that no file systems are mounted on these device(s).
All data on these devices should have been backed up.

The list of devices that will be removed is: 
			1: Box Name "venus1" front slot 0

Please enter 'q' to Quit or <Return> to Continue: <Return>

stopping:  Drive in "venus1" front slot 0....Done
offlining: Drive in "venus1" front slot 0....Done

Hit <Return> after removing the device(s). <Return>

Drive in Box Name "venus1" front slot 0
Logical Nodes being removed under /dev/dsk/ and /dev/rdsk:
        c1t32d0s0
        c1t32d0s1
        c1t32d0s2
        c1t32d0s3
        c1t32d0s4
        c1t32d0s5
        c1t32d0s6
        c1t32d0s7
# devfsadm -C
# scdidadm -C
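
If the disk drive that you remove is a quorum device (Step 1 of this procedure, and Step 2 of the next procedure), you must configure a replacement quorum device first. The following sketch uses the scconf command directly; the DID names d12 (a replacement quorum device on a different storage array) and d4 (the old quorum device) are hypothetical placeholders. You can perform the same task interactively with the scsetup utility. For the authoritative procedures, see Sun Cluster system administration documentation.


    # scstat -q
    # scconf -a -q globaldev=d12
    # scconf -r -q globaldev=d4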

Procedure: How to Replace a Disk Drive

For conceptual information about quorum, quorum devices, global devices, and device IDs, see your Sun Cluster concepts documentation.

Before You Begin

This procedure assumes that your cluster is operational.

Steps
  1. Identify the disk drive that needs replacement.

    If the disk error message reports the drive problem by device ID (DID), determine the Solaris logical device name. If the disk error message reports the drive problem by the Solaris physical device name, use your Solaris documentation to map the Solaris physical device name to the Solaris logical device name. Use this Solaris logical device name and DID throughout this procedure.


    # scdidadm -l deviceID
    
  2. Is the disk drive you are replacing a quorum device?


    # scstat -q
    
    • If no, proceed to Step 3.

    • If yes, add a new quorum device on a different storage array. Remove the old quorum device.

      For procedures about how to add and remove quorum devices, see Sun Cluster system administration documentation.

  3. If possible, back up the metadevice or volume.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.

  4. Identify the failed disk drive's physical DID.

    Use this physical DID in Step 11 to verify that the failed disk drive has been replaced with a new disk drive. The DID and the world wide name (WWN) for the disk drive are the same.


    # scdidadm -o diskid -l cNtXdY
    
  5. Which volume manager are you using?

    • If VERITAS Volume Manager, proceed to Step 6.

    • If Solstice DiskSuite/Solaris Volume Manager, save the disk partitioning information to partition the new disk drive.


      # prtvtoc /dev/rdsk/cNtXdYs2 > filename
      

      Note –

      You can also use the format utility to save the disk's partition information.


  6. On any node that is connected to the storage array, run the luxadm remove_device command. Remove the disk drive when prompted.


    # luxadm remove_device -F /dev/rdsk/cNtXdYs2
    

    After you run the command, warning messages might be displayed. You can ignore these messages.

  7. On any node that is connected to the storage array, run the luxadm insert_device command. Add the new disk drive when prompted.


    # luxadm insert_device boxname,fslotnumber
    

    or


    # luxadm insert_device boxname,rslotnumber
    

    If you are inserting a front disk drive, use the fslotnumber parameter. If you are inserting a rear disk drive, use the rslotnumber parameter.

  8. On all other nodes that are attached to the storage array, probe all devices and write the new disk drive to the /dev/rdsk directory.

    The amount of time that the devfsadm command requires to complete depends on the number of devices that are connected to the node. Expect at least five minutes.


    # devfsadm -C
    
  9. Which volume manager are you using?

    • If VERITAS Volume Manager, proceed to Step 10.

    • If Solstice DiskSuite/Solaris Volume Manager, on one node that is connected to the storage array, partition the new disk drive. Use the partitioning information you saved in Step 5.


      # fmthard -s filename /dev/rdsk/cNtXdYs2
      

      Note –

      You can also use the format utility to partition the new disk drive.


  10. From all nodes that are connected to the storage array, update the DID database and driver.


    # scdidadm -R deviceID
    

    Note –

    After you run scdidadm -R on the first node, each subsequent node on which you run the command might display the warning: device id for the device matches the database. You can ignore this warning.


  11. On any node, confirm that the failed disk drive has been replaced. Compare the following physical DID to the physical DID in Step 4.

    If the following physical DID is different from the physical DID in Step 4, you successfully replaced the failed disk drive with a new disk drive.


    # scdidadm -o diskid -l cNtXdY
    
  12. Perform volume management administration to add the disk drive back to its diskset or disk group.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation. For an illustrative Solstice DiskSuite/Solaris Volume Manager sequence, see the sketch that follows Step 13.

  13. If you want this new disk drive to be a quorum device, add the quorum device.

    For the procedure about how to add a quorum device, see Sun Cluster system administration documentation.
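
The following sketch shows one way to complete Step 12 and Step 13 in a Solstice DiskSuite/Solaris Volume Manager configuration. The diskset name oradg, the mirror d10, the replaced component c1t32d0s0, and the quorum device DID name d4 are hypothetical placeholders; substitute the names from your own configuration, and see your volume manager and Sun Cluster system administration documentation for the authoritative procedures.

Re-enable the replaced component so that its submirror resynchronizes (Step 12).


    # metareplace -s oradg -e d10 c1t32d0s0

If you want the replaced disk drive to serve as a quorum device again, add it as a quorum device by its DID name (Step 13).


    # scconf -a -q globaldev=d4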