Sun Cluster 3.0-3.1 With Fibre Channel JBOD Storage Device Manual

Chapter 1 Installing and Maintaining a Fibre Channel JBOD Storage Device

This chapter describes how to install, configure, and maintain Fibre Channel (FC) JBOD storage devices in a Sun Cluster environment.

The procedures in this chapter apply to the Sun StorEdge A5x00.

This chapter contains the following main sections:

  • Installing Storage Arrays

  • Maintaining Storage Arrays

For information about how to use storage arrays in a storage area network (SAN), see SAN Solutions in a Sun Cluster Environment in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS.

Installing Storage Arrays

This section contains instructions for installing storage arrays in both new clusters and operational clusters.

Table 1–1 Task Map: Installing Storage Arrays

Task: Install a storage array in a new cluster, before the OS and Sun Cluster software are installed.
Information: How to Install a Storage Array in a New Cluster

Task: Add a storage array to an operational cluster.
Information: How to Add the First Storage Array to an Existing Cluster
             How to Add a Subsequent Storage Array to an Existing Cluster

How to Install a Storage Array in a New Cluster

This procedure assumes you are installing one or more storage arrays at initial installation of a cluster.

Steps
  1. Install host adapters in the nodes that are to be connected to the storage array.

    For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.


    Note –

    To ensure maximum redundancy, put each host adapter on a separate I/O board, if possible.


  2. Cable the storage arrays to the nodes.

    For cabling diagrams, see Appendix A, Cabling Diagrams.

  3. Check the revision number for the storage array's controller firmware. If necessary, install the most recent firmware.

    For more information, see your storage documentation. For a list of storage documentation, see Related Documentation.
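
    For example, you can use the luxadm utility to check what the storage array reports. The following hedged sketch lists the enclosures that the node can see and then displays detailed status, including firmware revision information, for one of them; the enclosure name venus1 is hypothetical.

    # luxadm probe
    # luxadm display venus1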

How to Add the First Storage Array to an Existing Cluster

Before You Begin

This procedure relies on the following prerequisites and assumptions.

Steps
  1. Determine whether the storage array packages need to be installed on the nodes to which you are connecting the storage array. This product requires the following packages.


    # pkginfo | egrep Wlux
    system  SUNWluxd    Sun Enterprise Network Array sf Device Driver
    system  SUNWluxdx   Sun Enterprise Network Array sf Device Driver (64-bit)
    system  SUNWluxl    Sun Enterprise Network Array socal Device Driver
    system  SUNWluxlx   Sun Enterprise Network Array socal Device Driver (64-bit)
    system  SUNWluxop   Sun Enterprise Network Array firmware and utilities
    system  SUNWluxox   Sun Enterprise Network Array libraries (64-bit)
  2. On each node, install any necessary packages for the Solaris Operating System.

    The storage array packages are located in the Product directory of the CD-ROM. Use the pkgadd command to add any necessary packages.


    Note –

    The -G option applies only if you are using the Solaris 10 OS. Omit this option if you are using Solaris 8 or 9 OS.



    # pkgadd -G -d path_to_Solaris/Product Pkg1 Pkg2 Pkg3 ... PkgN
    
    -G

    Adds packages in the current zone only. When used in the global zone, the package is added to the global zone only and is not propagated to any existing or yet-to-be-created non-global zones. When used in a non-global zone, the packages are added to the non-global zone only.

    path_to_Solaris

    Path to the Solaris Operating System

    Pkg1 Pkg2

    The packages to be added
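
    The following hedged example illustrates the command with the packages that are listed in Step 1, using a hypothetical CD-ROM mount point for the Solaris 9 OS. If you are using the Solaris 10 OS, include the -G option as described above.

    # pkgadd -d /cdrom/cdrom0/Solaris_9/Product SUNWluxd SUNWluxdx SUNWluxl \
      SUNWluxlx SUNWluxop SUNWluxox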

  3. Shut down and power off any node that is connected to the storage array.

    For the procedure about how to shut down and power off a node, see Sun Cluster system administration documentation.
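
    As a hedged sketch only, on a Sun Cluster 3.0-3.1 node you might first evacuate all resource groups and device groups from the node and then halt it; the node name phys-schost-1 is hypothetical, and your Sun Cluster system administration documentation remains the authoritative reference.

    # scswitch -S -h phys-schost-1
    # shutdown -y -g0 -i0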

  4. Install host adapters in the node that is to be connected to the storage array.

    For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.

  5. Cable, configure, and power on the storage array.

    For cabling diagrams, see Appendix A, Cabling Diagrams.

  6. Perform a reconfiguration boot to create the new Solaris device files and links.


    ok boot -r
    
  7. Determine if any patches need to be installed on nodes that are to be connected to the storage array.

    For a list of patches specific to Sun Cluster, see your Sun Cluster release notes documentation.

  8. Obtain and install any necessary patches on the nodes that are to be connected to the storage array.

    For procedures about how to apply patches, see your Sun Cluster system administration documentation.


    Note –

    Read any README files that accompany the patches before you begin this installation. Some patches must be installed in a specific order.
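
    As a hedged sketch of Step 7 and Step 8, you might check whether a patch is already installed on a node and then apply it; the patch ID 123456-01 and its download location are hypothetical.

    # showrev -p | grep 123456
    # patchadd /var/tmp/123456-01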


  9. If required by the patch README instructions, shut down and reboot the node.

    For the procedure about how to shut down and power off a node, see Sun Cluster system administration documentation.

  10. Perform Step 3 through Step 9 for each node that is attached to the storage array.

  11. Perform volume management administration to add the disk drives in the storage array to the volume management configuration.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.
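
    For example, with Solstice DiskSuite/Solaris Volume Manager you might add the storage array's new DID devices to an existing diskset. The following sketch uses a hypothetical diskset name and hypothetical DID device numbers.

    # metaset -s demoset -a /dev/did/rdsk/d13 /dev/did/rdsk/d14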

How to Add a Subsequent Storage Array to an Existing Cluster

Before You Begin

This procedure relies on the following prerequisites and assumptions.

Steps
  1. Configure the new storage array.


    Note –

    Each storage array in the loop must have a unique box ID. If necessary, use the front-panel module (FPM) to change the box ID for the new storage array that you are adding. For more information about loops and general configuration, see the Sun StorEdge A5000 Configuration Guide and the Sun StorEdge A5000 Installation and Service Manual.


  2. On both nodes, run the luxadm insert_device command to insert the new storage array into the cluster and to add paths to its disk drives.


    # luxadm insert_device
    Please hit <RETURN> when you have finished adding
    Fibre Channel Enclosure(s)/Device(s):
    

    Note –

    Do not press the Return key until you complete Step 3.


  3. Cable the new storage array to a spare port in the existing hub, switch, or host adapter in your cluster.

    For cabling diagrams, see Appendix A, Cabling Diagrams.


    Note –

    You must use FC switches when installing storage arrays in a partner-group configuration. If you want to create a storage area network (SAN) by using two FC switches and Sun SAN software, see SAN Solutions in a Sun Cluster Environment in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS for more information.


  4. After you cable the new storage array, press the Return key to complete the luxadm insert_device operation.


    Waiting for Loop Initialization to complete...
    New Logical Nodes under /dev/dsk and /dev/rdsk :
    c4t98d0s0
    c4t98d0s1
    c4t98d0s2
    c4t98d0s3
    c4t98d0s4
    c4t98d0s5
    c4t98d0s6
    ...
    New Logical Nodes under /dev/es:
    ses12
    ses13
    
  5. On both nodes, verify that the new storage array is visible to both nodes.


    # luxadm probe
    
  6. On one node, use the scgdevs command to update the DID database.


    # scgdevs
    

Maintaining Storage Arrays

The maintenance procedures in FRUs That Do Not Require Sun Cluster Maintenance Procedures are performed in the same way as in a noncluster environment. Table 1–2 lists the procedures that require cluster-specific steps.

Table 1–2 Task Map: Maintaining a Storage Array

Task: Remove a storage array.
Information: How to Remove a Storage Array

Task: Replace a storage array.
Information: How to Replace a Storage Array

Task: Add a disk drive.
Information: How to Add a Disk Drive

Task: Remove a disk drive.
Information: How to Remove a Disk Drive

Task: Replace a disk drive.
Information: How to Replace a Disk Drive

FRUs That Do Not Require Sun Cluster Maintenance Procedures

Each storage device has a different set of FRUs that do not require cluster-specific procedures.

Sun StorEdge A5x00 FRUs

The following is a list of administrative tasks that require no cluster-specific procedures. See the Sun StorEdge A5000 Installation and Service Manual for the following procedures.

How to Replace a Storage Array

Before You Begin

This procedure relies on the following prerequisites and assumptions.

Example 1–1 shows you how to apply this procedure.

Steps
  1. If possible, back up the metadevices or volumes that reside in the storage array.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.

  2. Perform volume management administration to remove the storage array from the configuration.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.
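
    For example, with Solstice DiskSuite/Solaris Volume Manager you might remove the storage array's drives from their diskset before you remove the array itself. The diskset name and DID device number in this sketch are hypothetical.

    # metaset -s demoset -d /dev/did/rdsk/d13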

  3. On all nodes that are connected to the storage array, run the luxadm remove_device command.


    # luxadm remove_device -F boxname
    

    See Example 1–1 for an example of this command and its use.

  4. Disconnect the fiber-optic cables from the storage array.

  5. Power off and disconnect the storage array from the AC power source.

    For more information, see your storage documentation. For a list of storage documentation, see Related Documentation.

  6. Connect the fiber optic cables to the new storage array.

  7. Connect the new storage array to an AC power source.

  8. One disk drive at a time, remove the disk drives from the old storage array. Insert the disk drives into the same slots in the new storage array.

  9. Power on the storage array.

  10. Use the luxadm insert_device command to find the new storage array.

    Repeat this step for each node that is connected to the storage array.


    # luxadm insert_device
    

    See Example 1–1 for an example of this command and its use.

  11. On all nodes that are connected to the new storage array, upload the new information to the DID driver.

    If a volume management daemon such as vold is running on your node, and you have a CD-ROM drive that is connected to the node, a device busy error might be returned even if no disk is in the drive. This error is expected behavior.


    # scgdevs
    
  12. Perform volume management administration to add the new storage array to the configuration.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.


Example 1–1 Replacing a Sun StorEdge A5x00 Storage Array

The following example shows how to replace a Sun StorEdge A5x00 storage array. The storage array to be replaced is venus1.


# luxadm remove_device -F venus1

WARNING!!! Please ensure that no filesystems are mounted on these device(s).
 All data on these devices should have been backed up.

The list of devices that will be removed is:
  1: Box name:    venus1
     Node WWN:    123456789abcdeff
     Device Type: SENA (SES device)
     SES Paths:
			/devices/nodes@1/sbus@1f,0/SUNW,socal@1,0/sf@0,0/   \
				ses@w123456789abcdf03,0:0
			/devices/nodes@1/sbus@1f,0/SUNW,socal@1,0/sf@1,0/   \
				ses@w123456789abcdf00,0:0

Please verify the above list of devices and
then enter 'c' or <CR> to Continue or 'q' to Quit. [Default: c]: 
<Return>
Hit <Return> after removing the device(s). <Return>

# luxadm insert_device
Please hit <RETURN> when you have finished adding Fibre Channel 
Enclosure(s)/Device(s): <Return>
# scgdevs

How to Remove a Storage Array

Use this procedure to remove a storage array from a cluster. Example 1–2 shows you how to apply this procedure. Use the procedures in your server hardware manual to identify the storage array.

Steps
  1. Perform volume management administration to remove the storage array from the configuration.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.

  2. On all nodes that are connected to the storage array, run the luxadm remove_device command.


    # luxadm remove_device -F boxname
    
  3. Remove the storage array and the fiber-optic cables that are connected to the storage array.

    For more information, see your storage documentation. For a list of storage documentation, see Related Documentation.


    Note –

    If you are using your storage arrays in a SAN-configured cluster, you must keep two FC switches configured in parallel. This configuration maintains cluster availability. See SAN Solutions in a Sun Cluster Environment in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS for more information.


  4. On all nodes, remove references to the storage array.


    # devfsadm -C
    # scdidadm -C
    
  5. If necessary, remove any unused host adapters from the nodes.

    For the procedure about how to remove host adapters, see the documentation that shipped with your nodes.


Example 1–2 Removing a Sun StorEdge A5x00 Storage Array

The following example shows how to remove a Sun StorEdge A5x00 storage array. The storage array to be removed is venus1.


# luxadm remove_device -F venus1
WARNING!!! Please ensure that no file systems are mounted on these device(s).
 All data on these devices should have been backed up.

The list of devices that will be removed is:
  1: Box name:    venus1
     Node WWN:    123456789abcdeff
     Device Type: SENA (SES device)
     SES Paths:
      /devices/nodes@1/sbus@1f,0/SUNW,socal@1,0/sf@0,0/	\
				ses@w123456789abcdf03,0:0
      /devices/nodes@1/sbus@1f,0/SUNW,socal@1,0/sf@1,0/	\
				ses@w123456789abcdf00,0:0


Please verify the above list of devices and
then enter 'c' or <CR> to Continue or 'q' to Quit. [Default: c]: <Return>

Hit <Return> after removing the device(s). <Return>

# devfsadm -C
# scdidadm -C

How to Add a Disk Drive

For conceptual information about quorum, quorum devices, global devices, and device IDs, see your Sun Cluster concepts documentation. For a list of Sun Cluster documentation, see Related Documentation.

Before You Begin

This procedure assumes that your cluster is operational.

Steps
  1. On one node that is connected to the storage array, install the new disk drive.

    Use the luxadm insert_device command. Press the Return key when prompted. You can insert multiple disk drives at the same time.


    # luxadm insert_device enclosure,slot
    
  2. On all other nodes that are attached to the storage array, probe all devices. Write the new disk drive to the /dev/rdsk directory.

    The amount of time that the devfsadm command requires to complete its processing depends on the number of devices that are connected to the node. Expect at least five minutes.


    # devfsadm -C
    
  3. Ensure that entries for the disk drive have been added to the /dev/rdsk directory.


    # ls -l /dev/rdsk
    
  4. If necessary, partition the disk drive.

    You can either use the format(1M) command or copy the partitioning from another disk drive in the storage array.
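
    For example, to copy the partitioning from another disk drive in the storage array, you can pipe the output of prtvtoc into fmthard. The device names in this sketch are hypothetical: c1t33d0 is the existing disk drive and c1t32d0 is the new disk drive.

    # prtvtoc /dev/rdsk/c1t33d0s2 | fmthard -s - /dev/rdsk/c1t32d0s2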

  5. From any node in the cluster, update the global device namespace.

    If a volume management daemon such as vold is running on your node, and you have a CD-ROM drive connected to the node, a device busy error might be returned even if no disk is in the drive. This error is expected behavior.


    # scgdevs
    
  6. Verify that a device ID (DID) has been assigned to the disk drive.


    # scdidadm -l
    

    Note –

    The DID that was assigned to the new disk drive might not be in sequential order in the storage array.


  7. Perform necessary volume management administration actions on the new disk drive.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.

How to Remove a Disk Drive

For conceptual information about quorum, quorum devices, global devices, and device IDs, see your Sun Cluster concepts documentation. For a list of Sun Cluster documentation, see Related Documentation.

Example 1–3 shows you how to apply this procedure.

Before You Begin

This procedure assumes that your cluster is operational.

Steps
  1. Is the disk drive that you want to remove a quorum device?


    # scstat -q
    
    • If no, proceed to Step 2.

    • If yes, choose and configure another device to be the new quorum device. Then remove the old quorum device.

      For procedures about how to add and remove quorum devices, see Sun Cluster system administration documentation.
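
      As a hedged sketch only, you might configure a replacement quorum device and then remove the old quorum device by using the scconf command (or the interactive scsetup utility). The DID device names d20 and d4 in this sketch are hypothetical.

      # scconf -a -q globaldev=d20
      # scconf -r -q globaldev=d4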

  2. If possible, back up the metadevice or volume.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.

  3. Perform volume management administration to remove the disk drive from the configuration.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.

  4. Identify the disk drive that needs to be removed.

    If the disk error message reports the drive problem by DID, determine the Solaris device name.


    # scdidadm -l deviceID
    
  5. On any node that is connected to the storage array, run the luxadm remove_device command.

    Remove the disk drive. Press the Return key when prompted.


    # luxadm remove_device -F /dev/rdsk/cNtXdYsZ
    
  6. On all connected nodes, remove references to the disk drive.


    # devfsadm -C
    # scdidadm -C
    

Example 1–3 Removing a Disk Drive in a Sun StorEdge A5x00 Storage Array

The following example shows how to remove a disk drive from a Sun StorEdge A5x00 storage array. The disk drive to be removed is d4.


# scdidadm -l d4
4        phys-schost-2:/dev/rdsk/c1t32d0 /dev/did/rdsk/d4
# luxadm remove_device -F /dev/rdsk/c1t32d0s2

WARNING!!! Please ensure that no file systems are mounted on these device(s).
All data on these devices should have been backed up.

The list of devices that will be removed is: 
			1: Box Name "venus1" front slot 0

Please enter 'q' to Quit or <Return> to Continue: <Return>

stopping:  Drive in "venus1" front slot 0....Done
offlining: Drive in "venus1" front slot 0....Done

Hit <Return> after removing the device(s). <Return>

Drive in Box Name "venus1" front slot 0
Logical Nodes being removed under /dev/dsk/ and /dev/rdsk:
        c1t32d0s0
        c1t32d0s1
        c1t32d0s2
        c1t32d0s3
        c1t32d0s4
        c1t32d0s5
        c1t32d0s6
        c1t32d0s7
# devfsadm -C
# scdidadm -C

How to Replace a Disk Drive

For conceptual information about quorum, quorum devices, global devices, and device IDs, see your Sun Cluster concepts documentation.

Before You Begin

This procedure assumes that your cluster is operational.

Steps
  1. Identify the disk drive that needs replacement.

    If the disk error message reports the drive problem by device ID (DID), determine the Solaris logical device name. If the disk error message reports the drive problem by the Solaris physical device name, use your Solaris documentation to map the Solaris physical device name to the Solaris logical device name. Use this Solaris logical device name and DID throughout this procedure.


    # scdidadm -l deviceID
    
  2. Is the disk drive you are replacing a quorum device?


    # scstat -q
    
    • If no, proceed to Step 3.

    • If yes, add a new quorum device on a different storage array. Remove the old quorum device.

      For procedures about how to add and remove quorum devices, see Sun Cluster system administration documentation.

  3. If possible, back up the metadevice or volume.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.

  4. Identify the failed disk drive's physical DID.

    Use this physical DID in Step 11 to verify that the failed disk drive has been replaced with a new disk drive. The DID and the world wide name (WWN) for the disk drive are the same.


    # scdidadm -o diskid -l cNtXdY
    
  5. Which volume manager are you using?

    • If VERITAS Volume Manager, proceed to Step 6.

    • If Solstice DiskSuite/Solaris Volume Manager, save the disk partitioning information to partition the new disk drive.


      # prtvtoc /dev/rdsk/cNtXdYs2 > filename
      

      Note –

      You can also use the format utility to save the disk's partition information.


  6. On any node that is connected to the storage array, remove the disk drive when prompted.


    # luxadm remove_device -F /dev/rdsk/cNtXdYs2
    

    After you run the command, warning messages might be displayed. You can ignore these messages.

  7. On any node that is connected to the storage array, run the luxadm insert_device command. Add the new disk drive when prompted.


    # luxadm insert_device boxname,fslotnumber
    

    or


    # luxadm insert_device boxname,rslotnumber
    

    If you are inserting a front disk drive, use the fslotnumber parameter. If you are inserting a rear disk drive, use the rslotnumber parameter.

  8. On all other nodes that are attached to the storage array, probe all devices. Write the new disk drive to the /dev/rdsk directory.

    The amount of time that the devfsadm command requires to complete depends on the number of devices that are connected to the node. Expect at least five minutes.


    # devfsadm -C
    
  9. Which volume manager are you using?

    • If VERITAS Volume Manager, proceed to Step 10.

    • If Solstice DiskSuite/Solaris Volume Manager, on one node that is connected to the storage array, partition the new disk drive. Use the partitioning information you saved in Step 5.


      # fmthard -s filename /dev/rdsk/cNtXdYs2
      

      Note –

      You can also use the format utility to partition the new disk drive.


  10. From all nodes that are connected to the storage array, update the DID database and driver.


    # scdidadm -R deviceID
    

    Note –

    After you run scdidadm -R on the first node, each subsequent node on which you run the command might display the warning message device id for the device matches the database. Ignore this warning.


  11. On any node, confirm that the failed disk drive has been replaced. Compare the following physical DID to the physical DID in Step 4.

    If the following physical DID is different from the physical DID in Step 4, you successfully replaced the failed disk drive with a new disk drive.


    # scdidadm -o diskid -l cNtXdY
    
  12. Perform volume management administration to add the disk drive back to its diskset or disk group.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.
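
    For example, with Solstice DiskSuite/Solaris Volume Manager you might re-enable the replaced components of the metadevices that use the new disk drive. The diskset name, metadevice name, and slice name in this sketch are hypothetical.

    # metareplace -s demoset -e d10 c1t32d0s0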

  13. If you want this new disk drive to be a quorum device, add the quorum device.

    For the procedure about how to add a quorum device, see Sun Cluster system administration documentation.