Sun Cluster 3.1 - 3.2 with StorageTek RAID Arrays Manual for Solaris OS

Configuring Storage Arrays

This section contains the procedures to configure a storage array in a running cluster. Table 2–2 lists these procedures.

Table 2–2 Task Map: Configuring a Storage Array

Task                         Information
Create a logical volume      How to Create a Logical Volume
Remove a logical volume      How to Remove a Logical Volume

Some administrative tasks do not require cluster-specific procedures. For those tasks, see the storage array's documentation, which is listed in Related Documentation.

How to Create a Logical Volume

Use this procedure to create a logical volume from unassigned storage capacity.


Note –

Sun storage documentation uses the following terms:

• Logical volume

• Logical device

• Logical unit number (LUN)

This manual uses logical volume to refer to all such logical constructs.


Before You Begin

This procedure relies on the following prerequisites and assumptions.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

  1. Become superuser or assume a role that provides solaris.cluster.modify role-based access control (RBAC) authorization.

  2. Follow the instructions in your storage device's documentation to create and map the logical volume. For a URL to this storage documentation, see Related Documentation.

    • Completely set up the logical volume. When you are finished, the volume must be created, mapped, mounted, and initialized.

    • If necessary, partition the volume.

    • To allow multiple clusters and nonclustered nodes to access the storage device, create initiator groups by using LUN masking.

  3. If you are not using multipathing, skip to Step 5.

  4. If you are using multipathing, and if any devices that are associated with the volume you created are in an unconfigured state, configure the multipathing paths on each node that is connected to the storage device.

    To determine whether any devices that are associated with the volume you created are in an unconfigured state, use the following command.


    # cfgadm -al | grep disk
    

    Note –

    To configure the Solaris I/O multipathing paths on each node that is connected to the storage device, use the following command.


    # cfgadm -o force_update -c configure controller_instance
    

    To configure the Traffic Manager for the Solaris 9 OS, see the Sun StorEdge Traffic Manager Installation and Configuration Guide. To configure multipathing for the Solaris 10 OS, see the Solaris Fibre Channel Storage Configuration and Multipathing Support Guide.
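
    For example, if the controller instance that hosts the unconfigured paths is c3 (a hypothetical name), you would run the following command on each connected node:

    # cfgadm -o force_update -c configure c3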

  5. On one node that is connected to the storage device, use the format command to label the new logical volume.
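
    For example, assuming the new volume appears as c3t0d0 (a hypothetical device name), a labeling session might look like the following. The format utility is interactive: select the disk by its number, then label it.

    # format
    Specify disk (enter its number): (type the number that corresponds to c3t0d0)
    format> label
    format> quit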

  6. From any node in the cluster, update the global device namespace.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice populate
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scgdevs
      

    Note –

    You might have a volume management daemon such as vold running on your node, and have a DVD drive connected to the node. Under these conditions, a device busy error might be returned even if no disk is inserted in the drive. This error is expected behavior. You can safely ignore this error message.


  7. To manage this volume with volume management software, use Solaris Volume Manager or Veritas Volume Manager commands to update the list of devices on all nodes that are attached to the new volume that you created.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
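
    The exact commands depend on your volume manager. As one illustration, on a node that runs Veritas Volume Manager you might force the configuration daemon to rescan for the new device:

    # vxdctl enable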

How to Remove a Logical Volume

Use this procedure to remove a logical volume. This procedure defines Node A as the node with which you begin working.


Note –

Sun storage documentation uses the following terms:

• Logical volume

• Logical device

• Logical unit number (LUN)

This manual uses logical volume to refer to all such logical constructs.


Before You Begin

This procedure relies on the following prerequisites and assumptions.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  2. Identify the logical volume that you are removing.

    Refer to your Solaris Volume Manager or Veritas Volume Manager documentation for more information.

  3. (Optional) Migrate all data off the logical volume that you are removing. Alternatively, back up that data.

  4. If the LUN that you are removing is configured as a quorum device, choose and configure another device as the quorum device. Then remove the old quorum device.

    To determine whether the LUN is configured as a quorum device, use one of the following commands.

    • If you are using Sun Cluster 3.2, use the following command:


      # clquorum show
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat -q
      

    For procedures about how to add and remove quorum devices, see Chapter 6, Administering Quorum, in Sun Cluster System Administration Guide for Solaris OS.
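
    For example, with Sun Cluster 3.2, assuming that d20 is a new shared device and d4 is the LUN that you are removing (both hypothetical DID device names), you might run:

    # clquorum add d20
    # clquorum remove d4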

  5. If you are using volume management software, use that software to update the list of devices on all nodes that are attached to the logical volume that you are removing.

    For instructions about how to update the list of devices, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  6. If you are using volume management software, run the appropriate Solaris Volume Manager or Veritas Volume Manager commands to remove the logical volume from any diskset or disk group.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.


    Note –

    Volumes that were managed by Veritas Volume Manager must be completely removed from Veritas Volume Manager control before you can delete them from the Sun Cluster environment. After you delete the volume from any disk group, use the following commands on both nodes to remove the volume from Veritas Volume Manager control.


    # vxdisk offline Accessname
    # vxdisk rm Accessname
    
    Accessname

    Disk access name
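
    For example, if the disk access name is c1t3d0 (a hypothetical name):

    # vxdisk offline c1t3d0
    # vxdisk rm c1t3d0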


  7. If you are using multipathing, unconfigure the volume in Solaris I/O multipathing.


    # cfgadm -o force_update -c unconfigure Logical_Volume
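
    For example, for a fabric device whose attachment point ID is c3::50020f2300004921 (a hypothetical Ap_Id):

    # cfgadm -o force_update -c unconfigure c3::50020f2300004921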
    
  8. Access the storage device and remove the logical volume.

    To remove the volume, see your storage documentation. For a list of storage documentation, see Related Documentation.

  9. Determine the resource groups and device groups that are running on all nodes.

    Record this information because you use it in Step 14 and Step 15 of this procedure to return resource groups and device groups to these nodes.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status + 
      # cldevicegroup status +
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      
  10. Move all resource groups and device groups off Node A.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnode evacuate nodename
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -S -h nodename
      
  11. Shut down and reboot Node A.

    To shut down and boot a node, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.
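
    For example, you might reboot Node A immediately with the following command (-i6 specifies the reboot run level):

    # shutdown -y -g0 -i6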

  12. On Node A, remove the paths to the logical volume that you removed. Remove obsolete device IDs.

    • If you are using Sun Cluster 3.2, use the following commands:


      # devfsadm -C
      # cldevice clear
      
    • If you are using Sun Cluster 3.1, use the following commands:


      # devfsadm -C
      # scdidadm -C
      
  13. For each additional node that is connected to the shared storage that hosted the logical volume, repeat Step 9 to Step 12.

  14. (Optional) Restore the device groups to the original node.

    Do the following for each device group that you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 …]
      
      -n nodename

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 …]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h nodename
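
    For example, on Sun Cluster 3.2, to return a hypothetical device group dg-schost-1 to a hypothetical node phys-schost-1:

    # cldevicegroup switch -n phys-schost-1 dg-schost-1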
      
  15. (Optional) Restore the resource groups to the original node.

    Do the following for each resource group that you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
      
      -n nodename

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 …]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -g resourcegroup -h nodename
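
    For example, on Sun Cluster 3.2, to return a hypothetical resource group rg-schost-1 to a hypothetical node phys-schost-1:

    # clresourcegroup switch -n phys-schost-1 rg-schost-1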