Sun Cluster 3.0 U1 Release Notes Supplement

How to Migrate From a Single-Controller Configuration to a Partner-Group Configuration

Use this procedure to migrate your StorEdge T3/T3+ disk trays from a single-controller (non-interconnected) configuration to a partner-group (interconnected) configuration.


Note -

Only trained, qualified Sun service providers should use this procedure. This procedure requires the Sun StorEdge T3 and T3+ Array Field Service Manual, which is available to trained Sun service providers only.


  1. Remove the non-interconnected disk trays that will be in your partner group from the cluster configuration.

    Follow the procedure in "How to Remove StorEdge T3/T3+ Disk Trays From a Running Cluster".


    Note -

    Back up all data on the disk trays before removing them from the cluster configuration.



    Note -

    This procedure assumes that the two disk trays that will be in the partner-group configuration are correctly isolated from each other on separate FC switches. You must use FC switches when installing disk trays in a partner-group configuration. Do not disconnect the cables from the FC switches or nodes.


  2. Connect the single disk trays to form a partner group.

    Follow the procedure in the Sun StorEdge T3 and T3+ Array Field Service Manual.

  3. Add the new partner group to the cluster configuration:

    1. At each disk tray's prompt, use the port list command to ensure that each disk tray has a unique target address:


      t3:/:<#> port list
      

      If the disk trays do not have unique target addresses, use the port set command to set the addresses. For the procedure on verifying and assigning a target address to a disk tray, see the Sun StorEdge T3 and T3+ Array Configuration Guide. For more information about the port command, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.
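
      For example, to assign a new target ID to one of the disk trays, you might enter a command such as the following; the port name u1p1 and target ID 2 are illustrative values, so use the port names and IDs that port list reports for your configuration:


      t3:/:<#> port set u1p1 targetid 2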

    2. At each disk tray's prompt, use the sys list command to verify that the cache and mirror settings for each disk tray are set to auto:


      t3:/:<#> sys list
      

      If the two settings are not already set to auto, set them using the following commands at each disk tray's prompt:


      t3:/:<#> sys cache auto
      t3:/:<#> sys mirror auto
      

    3. At each disk tray's prompt, use the sys list command to verify that the mp_support parameter for each disk tray is set to mpxio:


      t3:/:<#> sys list
      

      If mp_support is not already set to mpxio, set it using the following command at each disk tray's prompt:


      t3:/:<#> sys mp_support mpxio
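
      After making these changes, run sys list again to confirm the settings. The following abbreviated output is illustrative; your listing will include additional parameters:


      t3:/:<#> sys list
      cache           : auto
      mirror          : auto
      mp_support      : mpxio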
      

    4. If necessary, upgrade the host adapter firmware on Node A.

      See the Sun Cluster 3.0 U1 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.

    5. If necessary, install the required Solaris patches for StorEdge T3/T3+ disk tray support on Node A.

      See the Sun Cluster 3.0 U1 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download.

    6. Install any required patches or software for Sun StorEdge Traffic Manager software support on Node A from the Sun Download Center Web site, http://www.sun.com/storage/san/.

      For instructions on installing the software, see the information on the web site.

    7. Activate the Sun StorEdge Traffic Manager software functionality in the software you installed in Step 6.

      To activate the Sun StorEdge Traffic Manager software functionality, edit the installed /kernel/drv/scsi_vhci.conf file and set the mpxio-disable parameter to no:


      mpxio-disable="no"
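
      Assuming the default entries that ship with the scsi_vhci driver, the edited file might contain a fragment such as the following:


      name="scsi_vhci" class="root";
      load-balance="round-robin";
      mpxio-disable="no";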
      

    8. Shut down Node A.


      # shutdown -y -g0 -i0
      
    9. Perform a reconfiguration boot on Node A to create the new Solaris device files and links.


      {0} ok boot -r
      
    10. On Node A, update the /devices and /dev entries:


      # devfsadm -C 
      

    11. On Node A, update the paths to the DID instances:


      # scdidadm -C
      
    12. Configure the new disk trays with the desired logical volumes.

      For the procedure on creating and initializing a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide. For the procedure on mounting a logical volume, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
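
      For example, creating, initializing, and mounting a RAID 5 volume might look like the following session; the volume name v0, the drive range u1d1-8, and the standby drive u1d9 are illustrative values, so see the manuals cited above for the layouts supported in your configuration:


      t3:/:<#> vol add v0 data u1d1-8 raid 5 standby u1d9
      t3:/:<#> vol init v0 data
      t3:/:<#> vol mount v0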

    13. Label the new disk tray logical volumes.

      For the procedure on labeling a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.
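
      Labeling is typically done from one cluster node with the Solaris format(1M) utility. The following session is illustrative: select the disk that corresponds to the new logical volume from the menu, accept the auto-configured drive type, and then write the label:


      # format
      format> type
      format> label
      format> quit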

    14. If necessary, upgrade the host adapter firmware on Node B.

      See the Sun Cluster 3.0 U1 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.

    15. If necessary, install the required Solaris patches for StorEdge T3/T3+ disk tray support on Node B.

      For a list of required Solaris patches for StorEdge T3/T3+ disk tray support, see the Sun StorEdge T3 and T3+ Array Release Notes.

    16. Install any required patches or software for Sun StorEdge Traffic Manager software support on Node B from the Sun Download Center Web site, http://www.sun.com/storage/san/.

      For instructions on installing the software, see the information on the web site.

    17. Activate the Sun StorEdge Traffic Manager software functionality in the software you installed in Step 16.

      To activate the Sun StorEdge Traffic Manager software functionality, edit the installed /kernel/drv/scsi_vhci.conf file and set the mpxio-disable parameter to no:


      mpxio-disable="no"
      

    18. Shut down Node B.


      # shutdown -y -g0 -i0
      
    19. Perform a reconfiguration boot to create the new Solaris device files and links on Node B.


      {0} ok boot -r
      
    20. On Node B, update the /devices and /dev entries:


      # devfsadm -C 
      

    21. On Node B, update the paths to the DID instances:


      # scdidadm -C
      
    22. (Optional) On Node B, verify that the DIDs are assigned to the new disk trays:


      # scdidadm -l
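
      The following line of output is illustrative; the DID instance, node name, and device path will differ in your configuration:


      2        phys-node-b:/dev/rdsk/c1t1d0   /dev/did/rdsk/d2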
      

    23. On one node attached to the new disk trays, reset the SCSI reservation state:


      # scdidadm -R n
      

      Where n is the DID instance of a disk tray LUN you are adding to the cluster.
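
      For example, if the new disk tray LUNs were assigned DID instances 2 and 3 (illustrative values taken from the scdidadm -l output in the previous step), you would enter:


      # scdidadm -R 2
      # scdidadm -R 3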


      Note -

      Repeat this command on the same node for each disk tray LUN you are adding to the cluster.


    24. Perform volume management administration to incorporate the new logical volumes into the cluster.

      For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
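
      For example, with Solstice DiskSuite you might add a new DID device to an existing diskset; the diskset name nfs-set and the DID instance d2 are illustrative values:


      # metaset -s nfs-set -a /dev/did/rdsk/d2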