Sun Cluster 3.0 - 3.1 with StorageTek Fibre Channel RAID Arrays Manual for Solaris OS

Chapter 2 Installing and Configuring a StorageTek Array

This chapter contains the procedures for installing and configuring StorageTek FC RAID arrays. These procedures are specific to a Sun Cluster environment.

This chapter contains the following main topics:

For detailed information about storage array architecture, features, configuration utilities, and installation, see the StorageTek documentation listed in Related Documentation.

For a URL to this storage documentation, see Related Documentation.

Installing Storage Arrays

This section contains the procedures listed in Table 2–1.

Table 2–1 Task Map: Installing Storage Arrays

Task: Install a storage array in a new cluster, before the OS and Sun Cluster software are installed.
Information: How to Install Storage Arrays in a New Cluster

Task: Add a storage array to an existing cluster.
Information: How to Add Storage Arrays to an Existing Cluster

Storage Array Cabling Configurations

You can install your storage array in several different configurations; Figure 2–1 and Figure 2–2 show two examples.

Figure 2–1 SPARC: StorageTek Array Direct-Connect Configuration

Illustration: Each node has 2 connections to the service panel. These 2 connections reside on both I/O boards.


Note –

The StorageTek 6140 array houses two controllers, each with four host ports. The cabling approach is the same as shown in Figure 2–1, but the 6140 can support up to four nodes in a direct-attach configuration.


Figure 2–2 StorageTek Array Switched Configuration

Illustration: Each node connects to 2 switches. Each switch has 2 connections to the service panel. Switch connections reside on both I/O boards.

How to Install Storage Arrays in a New Cluster

Use this procedure to install a storage array in a new cluster. To add a storage array to an existing cluster, use the procedure in How to Add Storage Arrays to an Existing Cluster.

This procedure relies on the following assumptions:

Procedure: Install and Cable the Hardware

  1. Unpack, place, and level the storage array.

    For instructions, see the StorageTek online documentation listed in Table P–2.

  2. (Optional) Install the Fibre Channel (FC) switch for the storage array if you do not have a switch installed.

    For the procedure about how to install an FC switch, see the documentation that shipped with your FC switch hardware.

  3. Connect the nodes to the service processor panel.

    • (SAN Configuration) Connect the FC switches to the service processor panel.

    • (Direct-Attached Configuration) Connect each node to the service processor panel directly.

    For instructions, see the StorageTek documentation listed in Related Documentation.

  4. Install the storage array.

    For instructions, see the StorageTek documentation listed in Related Documentation.

  5. Power on the storage array and the nodes.

    For instructions, see the StorageTek documentation listed in Related Documentation.

  6. Configure the service processor.

    For instructions, see the StorageTek documentation listed in Related Documentation.

Procedure: Install the Solaris Operating System and Configure Multipathing

  1. On all nodes, install the Solaris operating system and apply the required Solaris patches for Sun Cluster software and storage array support.

    For the procedure about how to install the Solaris operating environment, see How to Install Solaris Software in Sun Cluster Software Installation Guide for Solaris OS.

  2. On all nodes, install any required patches or software for Sun StorEdge Traffic Manager software support, and enable multipathing. A brief sketch of one way to enable multipathing follows this step.

    For the procedure about how to install the Sun StorEdge Traffic Manager software, see How to Install Sun Multipathing Software in Sun Cluster Software Installation Guide for Solaris OS.
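
    The following is a minimal sketch only, not the documented procedure, and it assumes a Solaris release that provides the stmsboot utility (for example, Solaris 10). On such a release, multipathing can typically be enabled with:


    # stmsboot -e
    

    On earlier releases that use the Traffic Manager packages, multipathing is instead enabled by setting mpxio-disable="no" in /kernel/drv/scsi_vhci.conf and rebooting. Verify the method that applies to your configuration against the Traffic Manager documentation.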

See Also

Procedure: How to Add Storage Arrays to an Existing Cluster

Use this procedure to add a new storage array to a running cluster. To install a new storage array in a Sun Cluster configuration that is not running, use the procedure in How to Install Storage Arrays in a New Cluster.

Before You Begin

This procedure relies on the following assumptions:

  1. Unpack, place, and level the storage array.

    For instructions, see the StorageTek online documentation listed in Table P–2.

  2. (Optional) Install the Fibre Channel (FC) switch for the storage array if you do not have a switch installed.

    For the procedure about how to install an FC switch, see the documentation that shipped with your FC switch hardware.

  3. Connect the nodes to the service processor panel.

    • (SAN Configuration) Connect the FC switches to the service processor panel.

    • (Direct-Attached Configuration) Connect each node to the service processor panel directly.

    For instructions, see the StorageTek documentation listed in Related Documentation.

  4. Install the storage array.

    For instructions, see the StorageTek documentation listed in Related Documentation.

  5. Power on the storage array and the nodes.

    For instructions, see the StorageTek documentation listed in Related Documentation.

  6. Configure the service processor.

    For instructions, see the StorageTek documentation listed in Related Documentation.

See Also

Configuring Storage Arrays

This section contains the procedures about how to configure a storage array in a running cluster. Table 2–2 lists these procedures.

Table 2–2 Task Map: Configuring a Storage Array

Task: Create a logical volume.
Information: How to Create a Logical Volume

Task: Remove a logical volume.
Information: How to Remove a Logical Volume

The following administrative tasks require no cluster-specific procedures. See the storage array's online help for these procedures.

Procedure: How to Create a Logical Volume

Use this procedure to create a logical volume from unassigned storage capacity.


Note –

Sun storage documentation uses the following terms:

This manual uses logical volume to refer to all such logical constructs.


Before You Begin

This procedure relies on the following prerequisites and assumptions.

  1. Follow the instructions in your storage device's documentation to create and map the logical volume. For a URL to this storage documentation, see Related Documentation.

    • Completely set up the logical volume. When you are finished, the volume must be created, mapped, mounted, and initialized.

    • If necessary, partition the volume.

    • To allow multiple clusters and nonclustered nodes to access the storage device, create initiator groups by using LUN masking.

  2. Are you using multipathing?

    • If no, proceed to Step 4.

    • If yes, continue with Step 3.

  3. Are any devices that are associated with the volume that you created in an unconfigured state? (An illustrative listing follows this step.)


    # cfgadm -al | grep disk
    
    • If no, proceed to Step 4.

    • If yes, configure the Traffic Manager paths on each node that is connected to the storage device.


      # cfgadm -o force_update -c configure controllerinstance
      

      For the procedure about how to configure Traffic Manager paths, see the Sun StorEdge Traffic Manager Installation and Configuration Guide.
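
    For illustration only, a hypothetical listing (the attachment-point IDs and world-wide names below are made up) might show one device still unconfigured:


    # cfgadm -al | grep disk
    c4::50020f23000063a9           disk         connected    unconfigured unknown
    c5::50020f2300005f24           disk         connected    configured   unknown
    

    A device that reports unconfigured in the Occupant column is a candidate for the cfgadm -c configure command shown above.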

  4. On one node that is connected to the storage device, use the format command to label the new logical volume.
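
    For example, an interactive format session might resemble the following sketch. The disk number 5 is hypothetical; select the entry that corresponds to the new logical volume.


    # format
    Searching for disks...done
    Specify disk (enter its number): 5
    format> label
    Ready to label disk, continue? y
    format> quit
    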

  5. From any node in the cluster, update the global device namespace.


    # scgdevs
    

    Note –

    You might have a volume management daemon such as vold running on your node, and have a CD-ROM drive connected to the node. Under these conditions, a device busy error might be returned even if no disk is in the drive. This error is expected behavior. You can safely ignore this error message.


  6. To manage this volume with volume management software, use the appropriate Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager commands to update the list of devices on all nodes that are attached to the new volume that you created.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.
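
    As a hedged example, with VERITAS Volume Manager you can have each attached node rescan for the new device; with Solstice DiskSuite/Solaris Volume Manager, the device is typically picked up when you add it to a diskset. The command below is illustrative, not a substitute for the product documentation.


    # vxdctl enable
    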

See Also

Procedure: How to Remove a Logical Volume

Use this procedure to remove a logical volume. This procedure defines Node A as the node with which you begin working.


Note –

Sun storage documentation uses the following terms:

This manual uses logical volume to refer to all such logical constructs.


Before You Begin

This procedure relies on the following prerequisites and assumptions.

  1. Identify the logical volume that you are removing.

    Refer to your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation for more information.

  2. (Optional) Migrate all data off the logical volume that you are removing. Alternatively, back up that data.

  3. Check if the logical volume that you are removing is a quorum device.


    # scstat -q
    

    If yes, choose and configure another device as the quorum device. Then remove the old quorum device.

    For procedures about how to add and remove quorum devices, see Chapter 5, Administering Quorum, in Sun Cluster System Administration Guide for Solaris OS.
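
    For illustration only (the device names d20 and d12 are hypothetical), the scconf command can add the replacement quorum device and then remove the old one:


    # scconf -a -q globaldev=d20
    # scconf -r -q globaldev=d12
    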

  4. If you are using volume management software, use that software to update the list of devices on all nodes that are attached to the logical volume that you are removing.

    For instructions about how to update the list of devices, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.

  5. If you are using volume management software, run the appropriate Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager commands to remove the logical volume from any diskset or disk group.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.


    Note –

    Volumes that were managed by VERITAS Volume Manager must be completely removed from VERITAS Volume Manager control before you can delete them from the Sun Cluster environment. After you delete the volume from any disk group, use the following commands on both nodes to remove the volume from VERITAS Volume Manager control.


    # vxdisk offline Accessname
    # vxdisk rm Accessname
    
    Accessname

    Disk access name
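
    As a hedged sketch (the disk group name oradg, volume name vol01, disk media name disk01, and disk access name c4t1d0s2 are hypothetical), the complete removal sequence on a node might resemble the following; consult the VERITAS Volume Manager documentation for the authoritative procedure.


    # vxedit -g oradg -rf rm vol01
    # vxdg -g oradg rmdisk disk01
    # vxdisk offline c4t1d0s2
    # vxdisk rm c4t1d0s2
    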


  6. If you are using multipathing, unconfigure the volume in Sun StorEdge Traffic Manager.


    # cfgadm -o force_update -c unconfigure Logical_Volume
    
  7. Access the storage device and remove the logical volume.

    For the procedure about how to remove the volume, see your storage documentation. For a list of storage documentation, see Related Documentation.

  8. Determine the resource groups and device groups that are running on all nodes.

    Record this information because you use it in Step 13 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    
  9. Move all resource groups and device groups off Node A.


    # scswitch -S -h from-node
    
  10. Shut down and reboot Node A.

    For the procedure about how to shut down and power off a node, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.
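
    For example, one common way to reboot a Solaris node is shown below; confirm the appropriate procedure for your configuration in the referenced chapter.


    # shutdown -y -g0 -i6
    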

  11. On Node A, remove the paths to the logical volume that you removed. Remove obsolete device IDs.


    # devfsadm -C
    # scdidadm -C
    
  12. For each additional node that is connected to the shared storage that hosted the logical volume, repeat Step 8 to Step 11.

  13. (Optional) Return the resource groups and device groups that you identified in Step 8 to all cluster nodes.
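
    For illustration only (the resource-group, device-group, and node names are placeholders), the groups can be returned with scswitch:


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    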