Sun Cluster 3.0-3.1 With Sun StorEdge 6120 Array Manual for Solaris OS

Chapter 1 Installing and Maintaining a Sun StorEdge 6120 Array

This chapter contains the procedures about how to install, configure, and maintain a Sun StorEdge 6120 array in dual-controller configurations and in single-controller configurations. These procedures are specific to a Sun StorEdge 6120 array in a Sun Cluster environment.

This chapter contains the following main topics:

Installing Storage Arrays

This section contains the procedures for installing single-controller and dual-controller storage array configurations in new and existing Sun Cluster configurations. The following table lists these procedures.

Table 1–1 Task Map: Installing a Storage Array

Task 

Information 

Installing arrays in a new cluster, using a single-controller configuration 

How to Install a Single-Controller Configuration in a New Cluster

Installing arrays in a new cluster, using a dual-controller configuration 

How to Install a Dual-Controller Configuration in a New Cluster

Adding arrays to an existing cluster, using a single-controller configuration 

How to Add a Single-Controller Configuration to an Existing Cluster

Adding arrays to an existing cluster, using a dual-controller configuration 

How to Add a Dual-Controller Configuration to an Existing Cluster

Storage Array Cabling Configurations

You can install your storage array in several different configurations. Use the Sun StorEdge 6120 Array Installation Guide to evaluate your needs and determine which configuration is best for your situation.

The following figures illustrate example configurations.

Figure 1–1 shows two storage arrays, each of which has a controller. The storage arrays connect to a 2-node cluster through two switches. Single-controller configurations require software RAID-1 (host-based mirroring).

Figure 1–1 Installing a 1x1 Configuration With Software RAID-1

Illustration: The preceding context describes the graphic.

Figure 1–2 shows four storage arrays, two of which have controllers. The first storage array without a controller connects to the second storage array, which has a controller. The third storage array without a controller connects to the fourth storage array, which has a controller. The two storage arrays with controllers connect to a 2-node cluster through two switches. Single-controller configurations require software RAID-1 (host-based mirroring).

Figure 1–2 Installing a 1x2 Configuration With Software RAID-1

Illustration: The preceding context describes the graphic.

Figure 1–3 shows two storage arrays, both of which have controllers. The two storage arrays are daisy-chained and connect to a 2-node cluster through two switches.

Figure 1–3 Installing a 2x2 Configuration

Illustration: The preceding context describes the graphic.

Figure 1–4 shows four storage arrays, two of which have controllers. All storage arrays are daisy-chained in the following order: alternate master, master, alternate master, and master. The two storage arrays with controllers connect to a 2-node cluster through two switches.

Figure 1–4 Installing a 2x4 Configuration

Illustration: The preceding context describes the graphic.

Procedure: How to Install a Single-Controller Configuration in a New Cluster

Use this procedure to install a storage array in a single-controller configuration before you install the Solaris operating environment and Sun Cluster software on your nodes. For other array-installation situations, use the procedures that are listed in Table 1–1.

Steps
  1. Install the host adapters in the nodes that are to be connected to the storage array.

    For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.

  2. Install the Fibre Channel (FC) switches.

    For the procedure about how to install FC switches, see the documentation that shipped with your FC switch hardware.

  3. Set up a Reverse Address Resolution Protocol (RARP) server on the network on which the new storage array is to reside.

    Use the RARP server to set up the following network settings.

    • IP address

    • gateway (if necessary)

    • netmask (if necessary)

    • hostname

    For the procedure about how to set up a RARP server, see the Sun StorEdge 6120 Array Installation Guide.

  4. Cable the storage arrays.

    For the procedures on how to connect your storage array, see the Sun StorEdge 6120 Array Installation Guide.

    1. Connect the storage arrays to the FC switches by using fiber-optic cables.

    2. Connect the Ethernet cables from each storage array to the Local Area Network (LAN).

    3. If necessary, install the interconnect cables between storage arrays.

    4. Connect the power cords to each storage array.

  5. Power on the storage array.

    Verify that all components are powered on and functional.


    Note –

    The storage array might require a few minutes to boot.


    For the procedure about how to power on a storage array, see the Sun StorEdge 6120 Array Installation Guide.

  6. Install any required controller firmware for the storage arrays.

    For the required controller firmware for the storage array, see the Sun StorEdge 6120 Array Release Notes.

    PatchPro is a patch-management tool that eases the selection and download of patches required for installation or maintenance of Sun Cluster software. PatchPro provides an Interactive Mode tool especially for Sun Cluster. The Interactive Tool makes the installation of patches easier. PatchPro's Expert Mode tool helps you to maintain your configuration with the latest set of patches. Expert Mode is especially useful for obtaining all of the latest patches, not just the high availability and security patches.

    To access the PatchPro tool for Sun Cluster software, go to http://www.sun.com/PatchPro/, click Sun Cluster, then choose either Interactive Mode or Expert Mode. Follow the instructions in the PatchPro tool to describe your cluster configuration and download the patches.

    For third-party firmware patches, see the SunSolve Online site at http://sunsolve.ebay.sun.com.

  7. Ensure that the cache and mirror settings for each storage array are set to auto.

    For more information, see the Sun StorEdge 6020 and 6120 Array System Manual.

  8. Ensure that the mp_support parameter for each storage array is set to none.

    For more information, see the Sun StorEdge 6020 and 6120 Array System Manual.
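
    A minimal sketch of verifying and setting the parameters in Step 7 and Step 8 from the storage array's Telnet administration session follows. The array name 6120-1 and the session prompt are examples only, and command syntax can vary by firmware release.

    # telnet 6120-1
    6120-1:/:<1> sys list
    6120-1:/:<2> sys cache auto
    6120-1:/:<3> sys mirror auto
    6120-1:/:<4> sys mp_support none

    The sys list output reports the current cache, mirror, and mp_support values. The remaining commands set the values that a single-controller configuration requires.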

  9. Ensure that all storage array controllers are ONLINE.

    For more information about how to bring controllers online, see the Sun StorEdge 6020 and 6120 Array System Manual.
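
    Assuming the same Telnet administration session, the sys stat command typically reports the state and role of each controller unit; every controller should report ONLINE.

    6120-1:/:<1> sys stat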

  10. Reset the storage array to update the network settings and system settings that you changed.

    For the procedure about how to reboot or reset a storage array, see the Sun StorEdge 6020 and 6120 Array System Manual.
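
    From the array's administration session, a reset reboots the array so that the changed settings take effect. If your firmware release supports the -y option, it suppresses the confirmation prompt; otherwise, confirm the reset when prompted.

    6120-1:/:<1> reset -y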

  11. On all nodes, install the Solaris operating environment. Apply any required Solaris patches for Sun Cluster software and storage array support.

    For the procedure about how to install the Solaris operating environment, see your Sun Cluster software installation documentation.

    PatchPro is a patch-management tool that eases the selection and download of patches required for installation or maintenance of Sun Cluster software. PatchPro provides an Interactive Mode tool especially for Sun Cluster. The Interactive Tool makes the installation of patches easier. PatchPro's Expert Mode tool helps you to maintain your configuration with the latest set of patches. Expert Mode is especially useful for obtaining all of the latest patches, not just the high availability and security patches.

    To access the PatchPro tool for Sun Cluster software, go to http://www.sun.com/PatchPro/, click Sun Cluster, then choose either Interactive Mode or Expert Mode. Follow the instructions in the PatchPro tool to describe your cluster configuration and download the patches.

    For third-party firmware patches, see the SunSolve Online site at http://sunsolve.ebay.sun.com.

  12. On each node, ensure that the mpxio-disable parameter is set to yes in the /kernel/drv/scsi_vhci.conf file.
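
    The entry in the /kernel/drv/scsi_vhci.conf file should look similar to the following sketch. Edit the file on each node, then reboot the node for the change to take effect.

    # Disable Sun StorEdge Traffic Manager (MPxIO) support globally
    mpxio-disable="yes";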

See Also

To continue with Sun Cluster software installation tasks, see your Sun Cluster software installation documentation.

Procedure: How to Install a Dual-Controller Configuration in a New Cluster

Use this procedure to install a storage array in a dual-controller configuration before you install the Solaris operating environment and Sun Cluster software on your nodes. For other array-installation situations, use the procedures that are listed in Table 1–1.

Steps
  1. Install the host adapters in the nodes to be connected to the storage arrays.

    For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.

  2. Install the Fibre Channel (FC) switches.

    For the procedure about how to install FC switches, see the documentation that shipped with your FC switch hardware.

  3. If necessary, install GBICs or SFPs in the FC switches.

    For the procedure about how to install a GBIC or an SFP, see the documentation that shipped with your FC switch hardware.

  4. Set up a Reverse Address Resolution Protocol (RARP) server on the network on which the new storage array is to reside.

    Use the RARP server to set up the following network settings.

    • IP address

    • gateway (if necessary)

    • netmask (if necessary)

    • hostname

    For the procedure about how to set up a RARP server, see the Sun StorEdge 6120 Array Installation Guide.

  5. Cable the storage arrays.

    For the procedures on how to connect your storage array, see the Sun StorEdge 6120 Array Installation Guide.

    1. Connect the storage arrays to the FC switches by using fiber-optic cables.

    2. Connect the Ethernet cables from each storage array to the LAN.

    3. If necessary, install the interconnect cables between storage arrays.

    4. Connect the power cords to each storage array.

    For the procedure about how to install fiber-optic, Ethernet, and interconnect cables, see the Sun StorEdge 6120 Array Installation Guide.

  6. Power on the storage arrays.

    Verify that all components are powered on and functional.

    For the procedure about how to power on the storage arrays, see the Sun StorEdge 6120 Array Installation Guide.

  7. Install any required controller firmware for the storage arrays.

    Access the master controller unit and administer the storage arrays. The master controller unit is the storage array that has the interconnect cables attached to the second port of each interconnect card, when viewed from the rear of the storage arrays.

    For the required controller firmware for the storage array, see the Sun StorEdge 6120 Array Release Notes.

    PatchPro is a patch-management tool that eases the selection and download of patches required for installation or maintenance of Sun Cluster software. PatchPro provides an Interactive Mode tool especially for Sun Cluster. The Interactive Tool makes the installation of patches easier. PatchPro's Expert Mode tool helps you to maintain your configuration with the latest set of patches. Expert Mode is especially useful for obtaining all of the latest patches, not just the high availability and security patches.

    To access the PatchPro tool for Sun Cluster software, go to http://www.sun.com/PatchPro/, click Sun Cluster, then choose either Interactive Mode or Expert Mode. Follow the instructions in the PatchPro tool to describe your cluster configuration and download the patches.

    For third-party firmware patches, see the SunSolve Online site at http://sunsolve.ebay.sun.com.

  8. Ensure that each storage array has a unique target address.

    For the procedure about how to assign a target address to a storage array, see the Sun StorEdge 6020 and 6120 Array System Manual.

  9. Ensure that the cache and mirror settings for each storage array are set to auto.

    For more information, see the Sun StorEdge 6020 and 6120 Array System Manual.

  10. Ensure that the mp_support parameter for each storage array is set to mpxio.

    For more information, see the Sun StorEdge 6020 and 6120 Array System Manual.
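
    As in the single-controller procedure, this parameter is typically set from the storage array's Telnet administration session. The array name and prompt below are examples only.

    6120-1:/:<1> sys mp_support mpxio
    6120-1:/:<2> sys list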

  11. Ensure that all storage array controllers are ONLINE.

    For more information about how to bring controllers online, see the Sun StorEdge 6020 and 6120 Array System Manual.

  12. Reset the storage array to update the network settings and system settings that you changed.

    For the procedure about how to reboot or reset a storage array, see the Sun StorEdge 6020 and 6120 Array System Manual.

  13. On all nodes, install the Solaris operating system and apply the required Solaris patches for Sun Cluster software and storage array support.

    For the procedure about how to install the Solaris operating environment, see How to Install Solaris Software in Sun Cluster Software Installation Guide for Solaris OS.

  14. Install any required patches or software for Sun StorEdge Traffic Manager software support on the nodes, and enable multipathing.

    For the procedure about how to install the Sun StorEdge Traffic Manager software, see How to Install Sun Multipathing Software in Sun Cluster Software Installation Guide for Solaris OS.

  15. Confirm that all storage arrays that you installed are visible to all nodes.


    # luxadm probe 
    
See Also

To continue with Sun Cluster software installation tasks, see your Sun Cluster software installation documentation.

How to Add a Single-Controller Configuration to an Existing Cluster

Use this procedure to add a single-controller configuration to a running cluster. For other array-installation situations, use the procedures that are listed in Table 1–1.

This procedure defines Node N as the node with which you begin working.

Procedure: How to Perform Initial Configuration Tasks on the Storage Array

Steps
  1. Power on the storage array.


    Note –

    The storage array might require a few minutes to boot.


    For the procedure about how to power on a storage array, see the Sun StorEdge 6120 Array Installation Guide.

  2. Administer the storage array's network settings. These settings include the following.

    • IP address

    • gateway (if necessary)

    • netmask (if necessary)

    • hostname

    For the procedure about how to set up an IP address, gateway, netmask, and hostname on a storage array, see the Sun StorEdge 6020 and 6120 Array System Manual.
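
    If you administer these settings from the storage array's serial console or Telnet session, the set command typically covers all four parameters. The values below are placeholders; substitute the addresses and hostname for your site.

    6120-1:/:<1> set ip 192.168.10.50
    6120-1:/:<2> set netmask 255.255.255.0
    6120-1:/:<3> set gateway 192.168.10.1
    6120-1:/:<4> set hostname 6120-1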

  3. Ensure that the cache and mirror settings for each storage array are set to auto.

    For more information, see the Sun StorEdge 6020 and 6120 Array System Manual.

  4. Ensure that the mp_support parameter for each storage array is set to none.

    For more information, see the Sun StorEdge 6020 and 6120 Array System Manual.

  5. Install any required controller firmware for the storage arrays you are adding.

    For the required controller firmware for the storage array, see the Sun StorEdge 6120 Array Release Notes.

    PatchPro is a patch-management tool that eases the selection and download of patches required for installation or maintenance of Sun Cluster software. PatchPro provides an Interactive Mode tool especially for Sun Cluster. The Interactive Tool makes the installation of patches easier. PatchPro's Expert Mode tool helps you to maintain your configuration with the latest set of patches. Expert Mode is especially useful for obtaining all of the latest patches, not just the high availability and security patches.

    To access the PatchPro tool for Sun Cluster software, go to http://www.sun.com/PatchPro/, click Sun Cluster, then choose either Interactive Mode or Expert Mode. Follow the instructions in the PatchPro tool to describe your cluster configuration and download the patches.

    For third-party firmware patches, see the SunSolve Online site at http://sunsolve.ebay.sun.com.

  6. Reset the storage array to update the network settings and system settings that you changed.

    For the procedure about how to reboot or reset a storage array, see the Sun StorEdge 6020 and 6120 Array System Manual.

  7. Confirm that all storage arrays that you upgraded are visible to all nodes.


    # luxadm probe 
    

Procedure: How to Connect the Storage Array to FC Switches

Steps
  1. Install the GBICs or SFPs in the storage array that you plan to add.

    For the procedure about how to install GBICs or SFPs, see the Sun StorEdge 6120 Array Installation Guide.

  2. If necessary, install GBICs or SFPs in the FC switches.

    For the procedure about how to install a GBIC or an SFP, see the documentation that shipped with your FC switch hardware.

  3. Install the Ethernet cable between the storage array and the Local Area Network (LAN).

  4. If necessary, daisy-chain or interconnect the storage arrays.

    For the procedure about how to install interconnect cables, see the Sun StorEdge 6120 Array Installation Guide.

  5. Install a fiber-optic cable between the FC switch and the storage array.

    For the procedure about how to install a fiber-optic cable, see the Sun StorEdge 6120 Array Installation Guide.

Procedure: How to Connect the Node to the FC Switches or the Storage Array

Steps
  1. Determine the resource groups and device groups that are running on all nodes.

    Record this information because you use this information in Step 19 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    
  2. Move all resource groups and device groups off Node N.


    # scswitch -S -h from-node
    
  3. Do you need to install a host adapter in Node N?

  4. Is the host adapter that you are installing the first FC host adapter on Node N?

    • If no, skip to Step 6.

    • If yes, determine whether the required drivers for the host adapter are already installed on this node. For the required packages, see the documentation that shipped with your host adapters.

  5. Are the Fibre Channel support packages installed?

    • If yes, proceed to Step 6.

    • If no, install the packages.

    The storage array packages are located in the Product directory of the Solaris CD-ROM. Add any necessary packages.
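
    The following is a minimal sketch of adding packages from the Solaris CD-ROM. The Solaris_9 directory and package-name are placeholders; add the packages that your host adapter documentation lists.

    # pkgadd -d /cdrom/cdrom0/Solaris_9/Product package-name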

  6. Shut down and power off Node N.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

  7. Install the host adapter in Node N.

    For the procedure about how to install a host adapter, see the documentation that shipped with your host adapter and node.

  8. Power on and boot Node N into noncluster mode.

    For the procedure about how to boot a node in noncluster mode, see your Sun Cluster system administration documentation.
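
    On SPARC based nodes, for example, you can boot into noncluster mode from the OpenBoot PROM prompt by using the -x option.

    ok boot -x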

  9. If necessary, upgrade the host adapter firmware on Node N.

    PatchPro is a patch-management tool that eases the selection and download of patches required for installation or maintenance of Sun Cluster software. PatchPro provides an Interactive Mode tool especially for Sun Cluster. The Interactive Tool makes the installation of patches easier. PatchPro's Expert Mode tool helps you to maintain your configuration with the latest set of patches. Expert Mode is especially useful for obtaining all of the latest patches, not just the high availability and security patches.

    To access the PatchPro tool for Sun Cluster software, go to http://www.sun.com/PatchPro/, click Sun Cluster, then choose either Interactive Mode or Expert Mode. Follow the instructions in the PatchPro tool to describe your cluster configuration and download the patches.

    For third-party firmware patches, see the SunSolve Online site at http://sunsolve.ebay.sun.com.

  10. If necessary, install a GBIC or an SFP in the FC switch or the storage array.

    For the procedure about how to install a GBIC or an SFP in an FC switch, see the documentation that shipped with your FC switch hardware. For the procedure about how to install a GBIC or an SFP in a storage array, see the Sun StorEdge 6120 Array Installation Guide.

  11. Connect a fiber-optic cable between the FC switch and Node N.

    For the procedure about how to install a fiber-optic cable, see the Sun StorEdge 6120 Array Installation Guide.

  12. If necessary, install the required Solaris patches for storage array support on Node N.

    For a list of required Solaris patches for storage array support, see the Sun StorEdge 6120 Array Release Notes.

  13. On the node, update the /devices and /dev entries.


    # devfsadm -C 
    
  14. Boot the node into cluster mode.

  15. On the node, update the paths to the DID instances.


    # scgdevs
    
  16. If necessary, label the new logical volume.

    For the procedure about how to label a logical volume, see the Sun StorEdge 6020 and 6120 Array System Manual.

  17. (Optional) On Node N, verify that the device IDs (DIDs) are assigned to the new LUNs.


    # scdidadm -C
    # scdidadm -l
    
  18. Repeat Step 2 through Step 17 for each remaining node that you plan to connect to the storage array.

  19. (Optional) Return the resource groups and device groups that you identified in Step 1 to the original nodes.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see your Sun Cluster system administration documentation.

  20. Perform volume management administration to incorporate the new logical volumes into the cluster.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.

How to Add a Dual-Controller Configuration to an Existing Cluster

Use this procedure to add a dual-controller configuration to a running cluster. For other array-installation situations, use the procedures that are listed in Table 1–1.

This procedure defines Node N as the node with which you begin working.

Procedure: How to Perform Initial Configuration Tasks on the Storage Array

Steps
  1. Power on the storage arrays.


    Note –

    The storage arrays might require several minutes to boot.


    For the procedure about how to power on storage arrays, see the Sun StorEdge 6120 Array Installation Guide.

  2. Administer the storage array's network settings. These settings include the following.

    • IP address

    • gateway (if necessary)

    • netmask (if necessary)

    • hostname

    Assign an IP address to the master controller unit only. The master controller unit is the storage array that has the interconnect cables attached to the second port of each interconnect card.

    For the procedure about how to set up an IP address, gateway, netmask, and hostname on a storage array, see the Sun StorEdge 6020 and 6120 Array System Manual.

  3. Ensure that each storage array has a unique target address.

    For the procedure about how to assign a target address to a storage array, see the Sun StorEdge 6020 and 6120 Array System Manual.

  4. Ensure that the cache and mirror settings for each storage array are set to auto.

    For more information, see the Sun StorEdge 6020 and 6120 Array System Manual.

  5. Ensure that the mp_support parameter for each storage array is set to mpxio.

    For more information, see the Sun StorEdge 6020 and 6120 Array System Manual.

  6. Install any required controller firmware for the storage arrays you are adding.

    Access the master controller unit and administer the storage arrays. The master controller unit is the storage array that has the interconnect cables attached to the second port of each interconnect card, when viewed from the rear of the storage arrays.

    For the required controller firmware for the storage array, see the Sun StorEdge 6120 Array Release Notes.

  7. Reset the storage array to update the network settings and system settings that you changed.

    For the procedure about how to reboot or reset a storage array, see the Sun StorEdge 6020 and 6120 Array System Manual.

Procedure: How to Connect the Storage Array to FC Switches

Steps
  1. Install the GBICs or SFPs in the storage array that you plan to add.

    For the procedure about how to install GBICs or SFPs, see the Sun StorEdge 6120 Array Installation Guide.

  2. If necessary, install GBICs or SFPs in the FC switches.

    For the procedure about how to install a GBIC or an SFP, see the documentation that shipped with your FC switch hardware.

  3. Install the Ethernet cable between the storage arrays and the local area network (LAN).

  4. If necessary, daisy-chain or interconnect the storage arrays.

    For the procedure about how to install interconnect cables, see the Sun StorEdge 6120 Array Installation Guide.

  5. Install a fiber-optic cable between each FC switch and both new storage arrays of the partner group.

    For the procedure about how to install a fiber-optic cable, see the Sun StorEdge 6120 Array Installation Guide.

Procedure: How to Connect the Node to the FC Switches or the Storage Array

Steps
  1. Determine the resource groups and device groups that are running on all nodes.

    Record this information because you use this information in Step 18 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    
  2. Move all resource groups and device groups off Node N.


    # scswitch -S -h from-node
    
  3. Do you need to install host adapters in Node N?

  4. Is the host adapter that you are installing the first host adapter on Node N?

    • If no, skip to Step 6.

    • If yes, determine whether the required drivers for the host adapter are already installed on this node. For the required packages, see the documentation that shipped with your host adapters.

  5. Are the required support packages already installed?

    • If yes, skip to Step 6.

    • If no, install the packages.

    The support packages are located in the Product directory of the Solaris CD-ROM.

  6. Shut down and power off Node N.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

  7. Install the host adapters in Node N.

    For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.

  8. Power on and boot Node N into noncluster mode.

    For more information about how to boot nodes, see your Sun Cluster system administration documentation.

  9. If necessary, upgrade the host adapter firmware on Node N.

    PatchPro is a patch-management tool that eases the selection and download of patches required for installation or maintenance of Sun Cluster software. PatchPro provides an Interactive Mode tool especially for Sun Cluster. The Interactive Tool makes the installation of patches easier. PatchPro's Expert Mode tool helps you to maintain your configuration with the latest set of patches. Expert Mode is especially useful for obtaining all of the latest patches, not just the high availability and security patches.

    To access the PatchPro tool for Sun Cluster software, go to http://www.sun.com/PatchPro/, click Sun Cluster, then choose either Interactive Mode or Expert Mode. Follow the instructions in the PatchPro tool to describe your cluster configuration and download the patches.

    For third-party firmware patches, see the SunSolve Online site at http://sunsolve.ebay.sun.com.

  10. If necessary, install a GBIC or an SFP in the FC switch or the storage array.

    For the procedure about how to install a GBIC or an SFP in an FC switch, see the documentation that shipped with your FC switch hardware. For the procedure about how to install a GBIC or an SFP in a storage array, see the Sun StorEdge 6120 Array Installation Guide.

  11. Connect a fiber-optic cable between the FC switch and Node N.

    For the procedure about how to install a fiber-optic cable, see the Sun StorEdge 6120 Array Installation Guide.

  12. Install the required Solaris patches for storage array support on Node N.

    PatchPro is a patch-management tool that eases the selection and download of patches required for installation or maintenance of Sun Cluster software. PatchPro provides an Interactive Mode tool especially for Sun Cluster. The Interactive Tool makes the installation of patches easier. PatchPro's Expert Mode tool helps you to maintain your configuration with the latest set of patches. Expert Mode is especially useful for obtaining all of the latest patches, not just the high availability and security patches.

    To access the PatchPro tool for Sun Cluster software, go to http://www.sun.com/PatchPro/, click Sun Cluster, then choose either Interactive Mode or Expert Mode. Follow the instructions in the PatchPro tool to describe your cluster configuration and download the patches.

    For third-party firmware patches, see the SunSolve Online site at http://sunsolve.ebay.sun.com.

  13. Perform a reconfiguration boot on Node N to create the new Solaris device files and links.


    # boot -r
    
  14. On Node N, update the paths to the DID instances.


    # scgdevs
    
  15. If necessary, label the new storage array logical volume.

    For the procedure about how to label a logical volume, see the Sun StorEdge 6020 and 6120 Array System Manual.

  16. (Optional) On Node N, verify that the device IDs (DIDs) are assigned to the new LUNs.


    # scdidadm -C
    # scdidadm -l
    
  17. Repeat Step 2 through Step 16 for each remaining node that you plan to connect to the storage array.

  18. (Optional) Return the resource groups and device groups that you identified in Step 1 to the original nodes.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see your Sun Cluster system administration documentation.

  19. Perform volume management administration to incorporate the new logical volumes into the cluster.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.

Next Steps

The best way to enable multipathing for a cluster is to install the multipathing software and enable multipathing before installing the Sun Cluster software and establishing the cluster. For this procedure, see How to Install Sun Multipathing Software in Sun Cluster Software Installation Guide for Solaris OS. If you need to add multipathing software to an established cluster, see How to Install Sun Multipathing Software in Sun Cluster Software Installation Guide for Solaris OS and follow the troubleshooting steps to clean up the device IDs.

Configuring Storage Arrays

This section contains the procedures about how to configure a storage array in a running cluster. Table 1–2 lists these procedures.

Table 1–2 Task Map: Configuring a Storage Array

Task 

Information 

Create a LUN 

How to Create a Logical Volume

Remove a LUN 

How to Remove a Logical Volume

The following is a list of administrative tasks that require no cluster-specific procedures. See the Sun StorEdge 6020 and 6120 Array System Manual for the following procedures.

Procedure: How to Create a Logical Volume

Use this procedure to create a logical volume from unassigned storage capacity.


Note –

Sun storage documentation uses the following terms:

This manual uses logical volume to refer to all such logical constructs.


Before You Begin

This procedure relies on the following prerequisites and assumptions.

Steps
  1. Follow the instructions in your storage device's documentation to create and map the logical volume. For a URL to this storage documentation, see Related Documentation.

    • Completely set up the logical volume. When you are finished, the volume must be created, mapped, mounted, and initialized.

    • If necessary, partition the volume.

    • To allow multiple clusters and nonclustered nodes to access the storage device, create initiator groups by using LUN masking.

  2. Are you using multipathing?

  3. Are any devices that are associated with the volume you created in an unconfigured state?


    # cfgadm -al | grep disk
    
    • If no, proceed to Step 4.

    • If yes, configure the Traffic Manager paths on each node that is connected to the storage device.


      cfgadm -o force_update -c configure controllerinstance
      

      For the procedure about how to configure Traffic Manager paths, see the Sun StorEdge Traffic Manager Installation and Configuration Guide.

  4. On one node that is connected to the storage device, use the format command to label the new logical volume.
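
    The following is a minimal interactive sketch. The menu details vary by Solaris release, and the disk that you select depends on your configuration.

    # format
    Searching for disks...done
    (select the new logical volume from the AVAILABLE DISK SELECTIONS list)
    format> label
    Ready to label disk, continue? y
    format> quit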

  5. From any node in the cluster, update the global device namespace.


    # scgdevs
    

    Note –

    You might have a volume management daemon such as vold running on your node, and have a CD-ROM drive connected to the node. Under these conditions, a device busy error might be returned even if no disk is in the drive. This error is expected behavior. You can safely ignore this error message.


  6. To manage this volume with volume management software, use the appropriate Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager commands to update the list of devices on all nodes that are attached to the new volume that you created.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.

See Also

Procedure: How to Remove a Logical Volume

Use this procedure to remove a logical volume. This procedure defines Node A as the node with which you begin working.


Note –

Sun storage documentation uses the following terms:

This manual uses logical volume to refer to all such logical constructs.


Before You Begin

This procedure relies on the following prerequisites and assumptions.

Steps
  1. Identify the logical volume that you are removing.

    Refer to your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation for more information.

  2. (Optional) Migrate all data off the logical volume that you are removing. Alternatively, back up that data.

  3. Check if the logical volume that you are removing is a quorum device.


    # scstat -q
    

    If yes, choose and configure another device as the quorum device. Then remove the old quorum device.

    For procedures about how to add and remove quorum devices, see Chapter 5, Administering Quorum, in Sun Cluster System Administration Guide for Solaris OS.
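
    The following is a sketch of replacing the quorum device with the scconf command; d20 and d12 are example DID device names, and the interactive scsetup utility performs the same tasks. Configure the new quorum device first, then remove the old quorum device, and verify the result.

    # scconf -a -q globaldev=d20
    # scconf -r -q globaldev=d12
    # scstat -q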

  4. If you are using volume management software, use that software to update the list of devices on all nodes that are attached to the logical volume that you are removing.

    For instructions about how to update the list of devices, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.

  5. If you are using volume management software, run the appropriate Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager commands to remove the logical volume from any diskset or disk group.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.


    Note –

    Volumes that were managed by VERITAS Volume Manager must be completely removed from VERITAS Volume Manager control before you can delete them from the Sun Cluster environment. After you delete the volume from any disk group, use the following commands on both nodes to remove the volume from VERITAS Volume Manager control.


    # vxdisk offline Accessname
    # vxdisk rm Accessname
    
    Accessname

    Disk access name


  6. If you are using multipathing, unconfigure the volume in Sun StorEdge Traffic Manager.


    # cfgadm -o force_update -c unconfigure Logical_Volume
    
  7. Access the storage device and remove the logical volume.

    For the procedure about how to remove the volume, see your storage documentation. For a list of storage documentation, see Related Documentation.

  8. Determine the resource groups and device groups that are running on all nodes.

    Record this information because you use it in Step 13 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    
  9. Move all resource groups and device groups off Node A.


    # scswitch -S -h from-node
    
  10. Shut down and reboot Node A.

    For the procedure about how to shut down and power off a node, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  11. On Node A, remove the paths to the logical volume that you removed. Remove obsolete device IDs.


    # devfsadm -C
    # scdidadm -C
    
  12. For each additional node that is connected to the shared storage that hosted the logical volume, repeat Step 8 to Step 11.

  13. (Optional) Return the resource groups and device groups that you identified in Step 8 to all cluster nodes.

Maintaining Storage Arrays

This section contains the procedures about how to maintain a storage array in a running cluster. Table 1–3 lists these procedures.


Note –

When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you perform a check of the device ID configuration by running the scdidadm -c command, the following error message appears on your console if the device ID changed unexpectedly.


device id for nodename:/dev/rdsk/cXtYdZsN does not match physical 
device's id for ddecimalnumber, device may have been replaced.

To fix device IDs that report this error, run the scdidadm -R command for each affected device.
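
For example, assuming that the affected device is DID instance d10 (a placeholder), you might run the following command from a node that is connected to the device. See the scdidadm(1M) man page for the argument forms that your Sun Cluster release accepts.

# scdidadm -R d10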


Table 1–3 Task Map: Maintaining a Storage Array

Task 

Information 

Upgrade storage array firmware. 

How to Upgrade Storage Array Firmware

Remove a storage array or partner group. 

How to Remove a Single-Controller Configuration

How to Remove a Dual-Controller Configuration

Replace a node-to-switch component. 

  • Node-to-switch fiber-optic cable

  • FC host adapter

  • FC switch

  • GBIC or SFP

Replacing a Node-to-Switch Component

Replace a node's host adapter. 

How to Replace a Host Adapter

Add a node to the storage array.

Sun Cluster system administration documentation 

Remove a node from the storage device.

Sun Cluster system administration documentation 

StorEdge 6120 Array FRUs

The following is a list of administrative tasks that require no cluster-specific procedures. See the Sun StorEdge 6020 and 6120 Array System Manual for the following procedures.

Procedure: How to Upgrade Storage Array Firmware

Use this procedure to upgrade storage array firmware in a running cluster. Storage array firmware includes controller firmware, unit interconnect card (UIC) firmware, EPROM firmware, and disk drive firmware.


Note –

When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you perform a check of the device ID configuration by running the scdidadm -c command, the following error message appears on your console if the device ID changed unexpectedly.


device id for nodename:/dev/rdsk/cXtYdZsN does not match physical 
device's id for ddecimalnumber, device may have been replaced.

To fix device IDs that report this error, run the scdidadm -R command for each affected device.


Steps
  1. Stop all I/O to the storage arrays you are upgrading.

  2. Apply the controller, disk drive, and loop-card firmware patches.

    For the list of required patches, see the Sun StorEdge 6120 Array Release Notes. For the procedure about how to apply firmware patches, see the firmware patch README file. For the procedure about how to verify the firmware level, see the Sun StorEdge 6020 and 6120 Array System Manual.

  3. Confirm that all storage arrays that you upgraded are visible to all nodes.


    # luxadm probe
    
  4. Restart all I/O to the storage arrays.

    You stopped I/O to these storage arrays in Step 1.

Procedure: How to Remove a Single-Controller Configuration

Use this procedure to permanently remove a storage array from a running cluster. This storage array resides in a single-controller configuration. This procedure provides the flexibility to remove the host adapters from the nodes for the storage array that you are removing.

This procedure defines Node A as the node with which you begin working. Node B is another node in the cluster.


Caution –

During this procedure, you lose access to the data that resides on the storage array that you are removing.


Steps
  1. Back up all database tables, data services, and volumes that are associated with the storage array that you are removing.

  2. Detach the submirrors from the storage array that you are removing. Detach the submirrors to stop all I/O activity to the storage array.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.

  3. Remove the references to the LUN(s) from any diskset or disk group.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.

  4. Determine the resource groups and device groups that are running on Node A.


    # scstat
    
  5. Shut down Node A.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

  6. Is the storage array that you are removing the last storage array that is connected to Node A?

    • If yes, disconnect the fiber-optic cable between Node A and the FC switch that is connected to this storage array. Afterward, disconnect the fiber-optic cable between the FC switch and this storage array.

    • If no, proceed to Step 7.

    For the procedure about how to remove a fiber-optic cable, see the Sun StorEdge 6020 and 6120 Array System Manual.

  7. Do you want to remove the host adapter from Node A?

    • If yes, power off Node A.

    • If no, skip to Step 10.

  8. Remove the host adapter from Node A.

    For the procedure about how to remove host adapters, see the documentation that shipped with your nodes.

  9. Without enabling the node to boot, power on Node A.

    For more information, see your Sun Cluster system administration documentation.

  10. Boot Node A into cluster mode.

    For the procedure about how to boot nodes, see your Sun Cluster system administration documentation.

  11. Shut down Node B.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

  12. Is the storage array that you are removing the last storage array that is connected to the FC switch?

    • If yes, disconnect the fiber-optic cable that connects this FC switch and Node B.

    • If no, proceed to Step 13.

    For the procedure about how to remove a fiber-optic cable, see the Sun StorEdge 6020 and 6120 Array System Manual.

  13. Do you want to remove the host adapter from Node B?

    • If yes, power off Node B.

    • If no, skip to Step 16.

  14. Remove the host adapter from Node B.

    For the procedure about how to remove host adapters, see the documentation that shipped with your nodes.

  15. Without enabling the node to boot, power on Node B.

    For more information, see your Sun Cluster system administration documentation.

  16. Boot Node B into cluster mode.

    For the procedure about how to boot nodes, see your Sun Cluster system administration documentation.

  17. On all nodes, update the /devices and /dev entries.


    # devfsadm -C
    # scdidadm -C
    
  18. Return the resource groups and device groups that you identified in Step 4 to Node A and Node B.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

Procedure: How to Remove a Dual-Controller Configuration

Use this procedure to remove a dual-controller configuration from a running cluster. This procedure defines Node A as the node with which you begin working. Node B is another node in the cluster.


Caution –

During this procedure, you lose access to the data that resides on each partner group that you are removing.


Steps
  1. If necessary, back up all database tables, data services, and volumes that are associated with each partner group that you are removing.

  2. If necessary, detach the submirrors from each storage array or partner group that you are removing. Detach the submirrors to stop all I/O activity to the storage array or partner group.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.

  3. Remove references to each LUN that belongs to the storage array or partner group that you are removing.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.

  4. Determine the resource groups and device groups that are running on all nodes.

    Record this information because you use this information in Step 19 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    
  5. Shut down Node A.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

  6. Disconnect the fiber-optic cables that connect both storage arrays to the FC switches. Then disconnect the Ethernet cables.

  7. Is any storage array that you are removing the last storage array that is connected to an FC switch that is attached to Node A?

    • If no, skip to Step 11.

    • If yes, disconnect the fiber-optic cable between Node A and the FC switch that was connected to this storage array.

    For the procedure about how to remove a fiber-optic cable, see the Sun StorEdge 6020 and 6120 Array System Manual.

  8. Do you want to remove the host adapters from Node A?

    • If no, skip to Step 11.

    • If yes, power off Node A.

  9. Remove the host adapters from Node A.

    For the procedure about how to remove host adapters, see the documentation that shipped with your host adapter and nodes.

  10. Without enabling the node to boot, power on Node A.

    For more information, see your Sun Cluster system administration documentation.

  11. Boot Node A into cluster mode.

    For more information about how to boot nodes, see your Sun Cluster system administration documentation.

  12. Shut down Node B.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

  13. Is any storage array that you are removing the last storage array that is connected to an FC switch that is attached to Node B?

    • If no, proceed to Step 14.

    • If yes, disconnect the fiber-optic cable that connects this FC switch to Node B.

    For the procedure about how to remove a fiber-optic cable, see the Sun StorEdge 6020 and 6120 Array System Manual.

  14. Do you want to remove the host adapters from Node B?

    • If no, skip to Step 17.

    • If yes, power off Node B.

  15. Remove the host adapters from Node B.

    For the procedure about how to remove host adapters, see the documentation that shipped with your nodes.

  16. Without enabling the node to boot, power on Node B.

    For more information, see your Sun Cluster system administration documentation.

  17. Boot Node B into cluster mode.

    For more information about how to boot nodes, see your Sun Cluster system administration documentation.

  18. On all nodes, update the /devices and /dev entries.


    # devfsadm -C
    # scdidadm -C
    
  19. Return the resource groups and device groups that you identified in Step 4 to all nodes.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

Replacing a Node-to-Switch Component

Use this procedure to replace a node-to-switch component that has failed or that you suspect might be contributing to a problem.


Note –

Node-to-switch components that are covered by this procedure include the following components:

  • Node-to-switch fiber-optic cable

  • FC switch

  • GBIC or SFP

For the procedure about how to replace a host adapter, see How to Replace a Host Adapter.


This procedure defines Node A as the node that is connected to the node-to-switch component that you are replacing. This procedure assumes that, except for the component you are replacing, your cluster is operational.

Ensure that you are following the appropriate instructions:

Procedure: How to Replace a Node-to-Switch Component in a Cluster That Uses Multipathing

Steps
  1. Is your configuration active-passive?

    If yes, and the active path is the path that needs a component replaced, make that path passive.

  2. Replace the component.

    Refer to your hardware documentation for any component-specific instructions.

  3. (Optional) If your configuration is active-passive and you changed your configuration in Step 1, switch your original data path back to active.

Procedure: How to Replace a Node-to-Switch Component in a Cluster Without Multipathing

Steps
  1. Check if the physical data path failed.

    If no, proceed to Step 2.

    If yes:

    1. Replace the component.

      Refer to your hardware documentation for any component-specific instructions.

    2. Fix the volume manager error that was caused by the failed data path.

    3. (Optional) If necessary, return resource groups and device groups to this node.

    You have completed this procedure.

  2. Determine the resource groups and device groups that are running on Node A.


    # scstat
    
  3. Move all resource groups and device groups to another node.


    # scswitch -S -h from-node
    
  4. Replace the node-to-switch component.

    Refer to your hardware documentation for any component-specific instructions.

  5. (Optional) If necessary, return the resource groups and device groups that you identified in Step 2 to Node A.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group -h nodename
    

Procedure: How to Replace a Host Adapter

Use this procedure to replace a failed host adapter in a running cluster. This procedure defines Node A as the node with the failed host adapter that you are replacing.

Before You Begin

This procedure relies on the following prerequisites and assumptions.

Steps
  1. Determine the resource groups and device groups that are running on Node A.

    Record this information because you use this information in Step 9 of this procedure to return resource groups and device groups to Node A.


    # scstat
    
  2. Move all resource groups and device groups off Node A.


    # scswitch -S -h nodename
    
  3. Shut down Node A.

    For the full procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

  4. Power off Node A.

  5. Replace the failed host adapter.

    For the procedure about how to remove and add host adapters, see the documentation that shipped with your nodes.

  6. Do you need to upgrade the node's host adapter firmware?

    • If yes, boot Node A into noncluster mode. Proceed to Step 7.

      For more information about how to boot nodes, see your Sun Cluster system administration documentation.

    • If no, proceed to Step 8.

  7. Upgrade the host adapter firmware on Node A.

    PatchPro is a patch-management tool that eases the selection and download of patches required for installation or maintenance of Sun Cluster software. PatchPro provides an Interactive Mode tool especially for Sun Cluster. The Interactive Tool makes the installation of patches easier. PatchPro's Expert Mode tool helps you to maintain your configuration with the latest set of patches. Expert Mode is especially useful for obtaining all of the latest patches, not just the high availability and security patches.

    To access the PatchPro tool for Sun Cluster software, go to http://www.sun.com/PatchPro/, click Sun Cluster, then choose either Interactive Mode or Expert Mode. Follow the instructions in the PatchPro tool to describe your cluster configuration and download the patches.

    For third-party firmware patches, see the SunSolve Online site at http://sunsolve.ebay.sun.com.

  8. Boot Node A into cluster mode.

    For more information about how to boot nodes, see your Sun Cluster system administration documentation.

  9. Return the resource groups and device groups you identified in Step 1 to Node A.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see your Sun Cluster system administration documentation.