Sun Cluster Software Installation Guide for Solaris OS

Performing a Live Upgrade to Sun Cluster 3.2 Software

This section provides the information that you need to upgrade from Sun Cluster 3.1 software to Sun Cluster 3.2 software by using the live upgrade method.

The following table lists the tasks to perform to upgrade from Sun Cluster 3.1 software to Sun Cluster 3.2 software. You also perform these tasks to upgrade only the version of the Solaris OS. If you upgrade the Solaris OS from Solaris 9 to Solaris 10 software, you must also upgrade the Sun Cluster software and dependency software to the version that is compatible with the new version of the Solaris OS.

Table 8–3 Task Map: Performing a Live Upgrade to Sun Cluster 3.2 Software

In the following task map, each task is followed by the procedure or procedures that contain its instructions.

1. Read the upgrade requirements and restrictions. Determine the proper upgrade method for your configuration and needs. 

Upgrade Requirements and Software Support Guidelines

Choosing a Sun Cluster Upgrade Method

2. Remove the cluster from production, disable resources, and back up shared data and system disks. If the cluster uses dual-string mediators for Solaris Volume Manager software, unconfigure the mediators. 

How to Prepare the Cluster for Upgrade (Live Upgrade)

3. Upgrade the Solaris software, if necessary, to a supported Solaris update. Upgrade to Sun Cluster 3.2 framework and data-service software. If necessary, upgrade applications. If the cluster uses dual-string mediators, reconfigure the mediators. As needed, upgrade VERITAS Volume Manager (VxVM) software and disk groups and VERITAS File System (VxFS). 

How to Upgrade the Solaris OS and Sun Cluster 3.2 Software (Live Upgrade)

4. Verify successful completion of upgrade to Sun Cluster 3.2 software. 

How to Verify Upgrade of Sun Cluster 3.2 Software

5. Enable resources and bring resource groups online. Migrate existing resources to new resource types. 

How to Finish Upgrade to Sun Cluster 3.2 Software

6. (Optional) SPARC: Upgrade the Sun Cluster module for Sun Management Center.

SPARC: How to Upgrade Sun Cluster Module Software for Sun Management Center

How to Prepare the Cluster for Upgrade (Live Upgrade)

Perform this procedure to prepare a cluster for live upgrade.

Before You Begin

Perform the following tasks:

  1. Ensure that the cluster is functioning normally.

    1. View the current status of the cluster by running the following command from any node.


      phys-schost% scstat
      

      See the scstat(1M) man page for more information.

    2. Search the /var/adm/messages log on the same node for unresolved error messages or warning messages.
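
      For example, you might search the log for recent error and warning messages with a command similar to the following; the search pattern is only an illustration that you can adjust as needed.

      phys-schost% egrep -i 'warning|error' /var/adm/messages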

    3. Check the volume-manager status.
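
      For example, depending on which volume manager you use, you might check the status as superuser with a command similar to one of the following; the disk set and disk group names are placeholders.

      phys-schost# metastat -s setname
      phys-schost# vxprint -g diskgroup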

  2. If necessary, notify users that cluster services will be temporarily interrupted during the upgrade.

    Service interruption will be approximately the amount of time that your cluster normally takes to switch services to another node.

  3. Become superuser on a node of the cluster.

  4. If Sun Cluster Geographic Edition software is installed, uninstall it.

    For uninstallation procedures, see the documentation for your version of Sun Cluster Geographic Edition software.

  5. For a two-node cluster that uses Sun StorEdge Availability Suite software or Sun StorageTek Availability Suite software, ensure that the configuration data for availability services resides on the quorum disk.

    The configuration data must reside on a quorum disk to ensure the proper functioning of Availability Suite after you upgrade the cluster software.

    1. Become superuser on a node of the cluster that runs Availability Suite software.

    2. Identify the device ID and the slice that is used by the Availability Suite configuration file.


      phys-schost# /usr/opt/SUNWesm/sbin/dscfg
      /dev/did/rdsk/dNsS
      

      In this example output, N is the device ID and S the slice of device N.

    3. Identify the existing quorum device.


      phys-schost# scstat -q
      -- Quorum Votes by Device --
                           Device Name         Present Possible Status
                           -----------         ------- -------- ------
         Device votes:     /dev/did/rdsk/dQsS  1       1        Online

      In this example output, dQsS is the existing quorum device.

    4. If the quorum device is not the same as the Availability Suite configuration-data device, move the configuration data to an available slice on the quorum device.


      phys-schost# dd if=`/usr/opt/SUNWesm/sbin/dscfg` of=/dev/did/rdsk/dQsS
      

      Note –

      You must use the name of the raw DID device, /dev/did/rdsk/, not the block DID device, /dev/did/dsk/.


    5. If you moved the configuration data, configure Availability Suite software to use the new location.

      As superuser, issue the following command on each node that runs Availability Suite software.


      phys-schost# /usr/opt/SUNWesm/sbin/dscfg -s /dev/did/rdsk/dQsS
      
  6. Ensure that all shared data is backed up.

  7. Ensure that each system disk is backed up.
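
    For example, on a system that uses UFS, you might back up the root (/) file system to tape with a command similar to the following; the tape device shown is only an illustration.

    phys-schost# ufsdump 0ucf /dev/rmt/0 /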

Next Steps

Perform a live upgrade of the Solaris OS, Sun Cluster 3.2 software, and other software. Go to How to Upgrade the Solaris OS and Sun Cluster 3.2 Software (Live Upgrade).

How to Upgrade the Solaris OS and Sun Cluster 3.2 Software (Live Upgrade)

Perform this procedure to upgrade the Solaris OS, Java ES shared components, volume-manager software, and Sun Cluster software by using the live upgrade method. The Sun Cluster live upgrade method uses the Solaris Live Upgrade feature. For information about live upgrade of the Solaris OS, refer to the documentation for the Solaris version that you are using.


Note –

The cluster must already run on, or be upgraded to, at least the minimum required level of the Solaris OS to support upgrade to Sun Cluster 3.2 software. See Supported Products in Sun Cluster 3.2 Release Notes for Solaris OS for more information.


Perform this procedure on each node in the cluster.


Tip –

You can use the cconsole utility to perform this procedure on all nodes simultaneously. See How to Install Cluster Control Panel Software on an Administrative Console for more information.


Before You Begin

Ensure that all steps in How to Prepare the Cluster for Upgrade (Live Upgrade) are completed.

  1. Ensure that a supported version of Solaris Live Upgrade software is installed on each node.
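
    For example, you can display which Solaris Live Upgrade packages are currently installed, if any, with a command similar to the following.

    phys-schost# pkginfo -l SUNWlur SUNWluu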

    If your operating system is already upgraded to Solaris 9 9/05 software or Solaris 10 11/06 software, you have the correct Solaris Live Upgrade software. If your operating system is an older version, perform the following steps:

    1. Insert the Solaris 9 9/05 software or Solaris 10 11/06 software media.

    2. Become superuser.

    3. Install the SUNWluu and SUNWlur packages.


      phys-schost# pkgadd -d path SUNWluu SUNWlur
      
      path

      Specifies the absolute path to the software packages.

    4. Verify that the packages have been installed.


      phys-schost# pkgchk -v SUNWluu SUNWlur
      
  2. If you will upgrade the Solaris OS and your cluster uses dual-string mediators for Solaris Volume Manager software, unconfigure your mediators.

    See Configuring Dual-String Mediators for more information about mediators.

    1. Run the following command to verify that no mediator data problems exist.


      phys-schost# medstat -s setname
      
      -s setname

      Specifies the disk set name.

      If the value in the Status field is Bad, repair the affected mediator host. Follow the procedure How to Fix Bad Mediator Data.

    2. List all mediators.

      Save this information for when you restore the mediators during the procedure How to Finish Upgrade to Sun Cluster 3.2 Software.
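
      For example, the metaset command displays the configuration of a disk set, including its mediator hosts; setname is a placeholder for the disk set name.

      phys-schost# metaset -s setname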

    3. For a disk set that uses mediators, take ownership of the disk set if no node already has ownership.


      phys-schost# scswitch -z -D setname -h node
      
      -z

      Changes mastery.

      -D setname

      Specifies the name of the disk set.

      -h node

      Specifies the name of the node to become primary of the disk set.

    4. Unconfigure all mediators for the disk set.


      phys-schost# metaset -s setname -d -m mediator-host-list
      
      -s setname

      Specifies the disk set name.

      -d

      Deletes from the disk set.

      -m mediator-host-list

      Specifies the name of the node or nodes to remove as mediator hosts for the disk set.

      See the mediator(7D) man page for further information about mediator-specific options to the metaset command.

    5. Repeat Step c through Step d for each remaining disk set that uses mediators.

  3. Build an inactive boot environment (BE).


    phys-schost# lucreate options -n BE-name
    
    -n BE-name

    Specifies the name of the boot environment that is to be upgraded.

    For information about important options to the lucreate command, see Solaris 10 Installation Guide: Solaris Live Upgrade and Upgrade Planning and the lucreate(1M) man page.
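
    For example, a command similar to the following creates a new BE named sc32 on an unused disk slice; the slice name and BE name are only illustrations.

    phys-schost# lucreate -m /:/dev/dsk/c0t4d0s0:ufs -n sc32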

  4. If necessary, upgrade the Solaris OS software in your inactive BE.

    If the cluster already runs on a properly patched version of the Solaris OS that supports Sun Cluster 3.2 software, this step is optional.

    • If you use Solaris Volume Manager software, run the following command:


      phys-schost# luupgrade -u -n BE-name -s os-image-path
      
      -u

      Upgrades an operating system image on a boot environment.

      -s os-image-path

      Specifies the path name of a directory that contains an operating system image.

    • If you use VERITAS Volume Manager, follow live upgrade procedures in your VxVM installation documentation.

  5. Mount your inactive BE by using the lumount command.


    phys-schost# lumount -n BE-name -m BE-mount-point
    
    -m BE-mount-point

    Specifies the mount point of BE-name.

    For more information, see Solaris 10 Installation Guide: Solaris Live Upgrade and Upgrade Planning and the lumount(1M) man page.

  6. Ensure that the /BE-mount-point/usr/java/ directory is a symbolic link to the minimum or latest version of Java software.

    Sun Cluster software requires at least version 1.5.0_06 of Java software. If you upgraded to a version of Solaris that installs an earlier version of Java, the upgrade might have changed the symbolic link to point to a version of Java that does not meet the minimum requirement for Sun Cluster 3.2 software.

    1. Determine what directory the /BE-mount-point/usr/java/ directory is symbolically linked to.


      phys-schost# ls -l /BE-mount-point/usr/java
      lrwxrwxrwx   1 root   other    9 Apr 19 14:05 /BE-mount-point/usr/java -> /BE-mount-point/usr/j2se/
    2. Determine what version or versions of Java software are installed.

      The following are examples of commands that you can use to display the versions of the Java software releases that are installed.


      phys-schost# /BE-mount-point/usr/j2se/bin/java -version
      phys-schost# /BE-mount-point/usr/java1.2/bin/java -version
      phys-schost# /BE-mount-point/usr/jdk/jdk1.5.0_06/bin/java -version
      
    3. If the /BE-mount-point/usr/java/ directory is not symbolically linked to a supported version of Java software, re-create the symbolic link so that it points to a supported version.

      The following example shows the creation of a symbolic link to the /usr/j2se/ directory, which contains Java 1.5.0_06 software.


      phys-schost# rm /BE-mount-point/usr/java
      phys-schost# cd /BE-mount-point/usr
      phys-schost# ln -s j2se java
      
  7. Apply any necessary Solaris patches.

    You might need to patch your Solaris software to use the Live Upgrade feature. For details about the patches that the Solaris OS requires and where to download them, see Managing Packages and Patches With Solaris Live Upgrade in Solaris 9 9/04 Installation Guide or Upgrading a System With Packages or Patches in Solaris 10 Installation Guide: Solaris Live Upgrade and Upgrade Planning.
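
    If you need to apply patches to the newly upgraded inactive BE, you can use the luupgrade command, as in the following sketch; the patch directory and patch ID are only placeholders.

    phys-schost# luupgrade -t -n BE-name -s /var/tmp/patches 123456-07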

  8. If necessary and if your version of the VERITAS Volume Manager (VxVM) software supports it, upgrade your VxVM software.

    Refer to your VxVM software documentation to determine whether your version of VxVM can use the live upgrade method.

  9. (Optional) SPARC: Upgrade VxFS.

    Follow procedures that are provided in your VxFS documentation.

  10. If your cluster hosts software applications that require an upgrade and that you can upgrade by using the live upgrade method, upgrade those software applications.

    If your cluster hosts software applications that cannot use the live upgrade method, you will upgrade them later in Step 25.

  11. Load the Sun Java Availability Suite DVD-ROM into the DVD-ROM drive.

    If the volume management daemon vold(1M) is running and is configured to manage CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0/ directory.
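
    If you are not sure whether the vold daemon is running, you can check with a command similar to the following.

    phys-schost# pgrep -l vold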

  12. Change to the installation wizard directory of the DVD-ROM.

    • If you are installing the software packages on the SPARC platform, type the following command:


      phys-schost# cd /cdrom/cdrom0/Solaris_sparc
      
    • If you are installing the software packages on the x86 platform, type the following command:


      phys-schost# cd /cdrom/cdrom0/Solaris_x86
      
  13. Start the installation wizard program to direct output to a state file.

    Specify the name to give the state file and the absolute or relative path where the file should be created.

    • To create a state file by using the graphical interface, use the following command:


      phys-schost# ./installer -no -saveState statefile
      
    • To create a state file by using the text-based interface, use the following command:


      phys-schost# ./installer -no -nodisplay -saveState statefile
      

    See Generating the Initial State File in Sun Java Enterprise System 5 Installation Guide for UNIX for more information.

  14. Follow the instructions on the screen to select and upgrade Shared Components software packages on the node.

    The installation wizard program displays the status of the installation. When the installation is complete, the program displays an installation summary and the installation logs.

  15. Exit the installation wizard program.

  16. Run the installer program in silent mode and direct the installation to the alternate boot environment.


    Note –

    The installer program must be the same version that you used to create the state file.



    phys-schost# ./installer -nodisplay -noconsole -state statefile -altroot BE-mount-point
    

    See To Run the Installer in Silent Mode in Sun Java Enterprise System 5 Installation Guide for UNIX for more information.

  17. Change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/ directory, where arch is sparc or x86 (Solaris 10 only) and where ver is 9 for Solaris 9 or 10 for Solaris 10.


    phys-schost# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools
    
  18. Upgrade your Sun Cluster software by using the scinstall command.


    phys-schost# ./scinstall -u update -R BE-mount-point
    
    -u update

    Specifies that you are performing an upgrade of Sun Cluster software.

    -R BE-mount-point

    Specifies the mount point for your alternate boot environment.

    For more information, see the scinstall(1M) man page.

  19. Upgrade your data services by using the scinstall command.


    phys-schost# BE-mount-point/usr/cluster/bin/scinstall -u update -s all  \
    -d /cdrom/cdrom0/Solaris_arch/Product/sun_cluster_agents -R BE-mount-point
    
  20. Unload the Sun Java Availability Suite DVD-ROM from the DVD-ROM drive.

    1. To ensure that the DVD-ROM is not being used, change to a directory that does not reside on the DVD-ROM.

    2. Eject the DVD-ROM.


      phys-schost# eject cdrom
      
  21. Unmount the inactive BE.


    phys-schost# luumount -n BE-name
    
  22. Activate the upgraded inactive BE.


    phys-schost# luactivate BE-name
    
    BE-name

    The name of the alternate BE that you built in Step 3.

  23. Repeat Step 1 through Step 22 for each node in the cluster.


    Note –

    Do not reboot any node until all nodes in the cluster are upgraded on their inactive BE.


  24. Reboot all nodes.


    phys-schost# shutdown -y -g0 -i6
    

    Note –

    Do not use the reboot or halt command. These commands do not activate a new BE. Use only shutdown or init to reboot into a new BE.


    The nodes reboot into cluster mode using the new, upgraded BE.

  25. (Optional) If your cluster hosts software applications that require upgrade for which you cannot use the live upgrade method, perform the following steps.


    Note –

    Throughout the process of software-application upgrade, always reboot into noncluster mode until all upgrades are complete.


    1. Shut down the node.


      phys-schost# shutdown -y -g0 -i0
      
    2. Boot each node into noncluster mode.

      • On SPARC based systems, perform the following command:


        ok boot -x
        
      • On x86 based systems, perform the following commands:

        1. In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.

          The GRUB menu appears similar to the following:


          GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
          +-------------------------------------------------------------------------+
          | Solaris 10 /sol_10_x86                                                  |
          | Solaris failsafe                                                        |
          |                                                                         |
          +-------------------------------------------------------------------------+
          Use the ^ and v keys to select which entry is highlighted.
          Press enter to boot the selected OS, 'e' to edit the
          commands before booting, or 'c' for a command-line.

          For more information about GRUB based booting, see Chapter 11, GRUB Based Booting (Tasks), in System Administration Guide: Basic Administration.

        2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.

          The GRUB boot parameters screen appears similar to the following:


          GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
          +----------------------------------------------------------------------+
          | root (hd0,0,a)                                                       |
          | kernel /platform/i86pc/multiboot                                     |
          | module /platform/i86pc/boot_archive                                  |
          +----------------------------------------------------------------------+
          Use the ^ and v keys to select which entry is highlighted.
          Press 'b' to boot, 'e' to edit the selected command in the
          boot sequence, 'c' for a command-line, 'o' to open a new line
          after ('O' for before) the selected line, 'd' to remove the
          selected line, or escape to go back to the main menu.
        3. Add -x to the command to specify that the system boot into noncluster mode.


          [ Minimal BASH-like line editing is supported. For the first word, TAB
          lists possible command completions. Anywhere else TAB lists the possible
          completions of a device/filename. ESC at any time exits. ]
          
          grub edit> kernel /platform/i86pc/multiboot -x
          
        4. Press Enter to accept the change and return to the boot parameters screen.

          The screen displays the edited command.


          GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
          +----------------------------------------------------------------------+
          | root (hd0,0,a)                                                       |
          | kernel /platform/i86pc/multiboot -x                                  |
          | module /platform/i86pc/boot_archive                                  |
          +----------------------------------------------------------------------+
          Use the ^ and v keys to select which entry is highlighted.
          Press 'b' to boot, 'e' to edit the selected command in the
          boot sequence, 'c' for a command-line, 'o' to open a new line
          after ('O' for before) the selected line, 'd' to remove the
          selected line, or escape to go back to the main menu.
        5. Type b to boot the node into noncluster mode.


          Note –

          This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.


        If the instruction says to run the init S command, shut down the system, then change the GRUB kernel boot command to /platform/i86pc/multiboot -sx instead.

    3. Upgrade each software application that requires an upgrade.

      If you are directed to reboot, remember to boot into noncluster mode until all applications have been upgraded.

    4. Boot each node into cluster mode.

      • On SPARC based systems, perform the following command:


        ok boot
        
      • On x86 based systems, perform the following commands:

        When the GRUB menu is displayed, select the appropriate Solaris entry and press Enter. The GRUB menu appears similar to the following:


        GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
        +-------------------------------------------------------------------------+
        | Solaris 10 /sol_10_x86                                                  |
        | Solaris failsafe                                                        |
        |                                                                         |
        +-------------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press enter to boot the selected OS, 'e' to edit the
        commands before booting, or 'c' for a command-line.

Example 8–1 Live Upgrade to Sun Cluster 3.2 Software

This example shows a live upgrade of a cluster node. The example upgrades the SPARC based node to the Solaris 10 OS, Sun Cluster 3.2 framework, and all Sun Cluster data services that support the live upgrade method. In this example, sc31u2 is the original boot environment (BE). The new BE that is upgraded is named sc32 and uses the mount point /sc32. The directory /net/installmachine/export/solaris10/OS_image/ contains an image of the Solaris 10 OS. The Java ES installer state file is named sc32state.

The following commands typically produce copious output. This output is shown only where necessary for clarity.


phys-schost# lucreate -c sc31u2 -m /:/dev/dsk/c0t4d0s0:ufs -n sc32
…
lucreate: Creation of Boot Environment sc32 successful.

phys-schost# luupgrade -u -n sc32 -s /net/installmachine/export/solaris10/OS_image/
The Solaris upgrade of the boot environment sc32 is complete.
Apply patches

phys-schost# lumount sc32 /sc32
phys-schost# ls -l /sc32/usr/java
lrwxrwxrwx   1 root   other    9 Apr 19 14:05 /sc32/usr/java -> /sc32/usr/j2se/

Insert the Sun Java Availability Suite DVD-ROM.
phys-schost# cd /cdrom/cdrom0/Solaris_sparc
phys-schost# ./installer -no -saveState sc32state
phys-schost# ./installer -nodisplay -noconsole -state sc32state -altroot /sc32
phys-schost# cd /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster/Solaris_10/Tools
phys-schost# ./scinstall -u update -R /sc32
phys-schost# /sc32/usr/cluster/bin/scinstall -u update -s all \
-d /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster_agents -R /sc32
phys-schost# cd /
phys-schost# eject cdrom

phys-schost# luumount sc32
phys-schost# luactivate sc32
Activation of boot environment sc32 successful.
Upgrade all other nodes

Boot all nodes
phys-schost# shutdown -y -g0 -i6
ok boot

At this point, you might upgrade data-service applications that cannot use the live upgrade method, before you reboot into cluster mode.


Troubleshooting

DID device name errors - During the creation of the inactive BE, if you receive an error that a file system that you specified with its DID device name, /dev/did/dsk/dNsX, does not exist, but the device name does exist, specify the device by its physical device name when you run the lucreate command. Then change the vfstab entry on the alternate BE to use the DID device name instead.

Mount point errors - During creation of the inactive boot environment, if you receive an error that the mount point that you supplied is not mounted, mount the mount point and rerun the lucreate command.

New BE boot errors - If you experience problems when you boot the newly upgraded environment, you can revert to your original BE. For specific information, see Failure Recovery: Falling Back to the Original Boot Environment (Command-Line Interface) in Solaris 9 9/04 Installation Guide or Chapter 10, Failure Recovery: Falling Back to the Original Boot Environment (Tasks), in Solaris 10 Installation Guide: Solaris Live Upgrade and Upgrade Planning.

Global-devices file-system errors - After you upgrade a cluster on which the root disk is encapsulated, you might see one of the following error messages on the cluster console during the first reboot of the upgraded BE:

mount: /dev/vx/dsk/bootdg/node@1 is already mounted or
/global/.devices/node@1 is busy
Trying to remount /global/.devices/node@1
mount: /dev/vx/dsk/bootdg/node@1 is already mounted or
/global/.devices/node@1 is busy

WARNING - Unable to mount one or more of the following filesystem(s):
    /global/.devices/node@1
If this is not repaired, global devices will be unavailable.
Run mount manually (mount filesystem...).
After the problems are corrected, please clear the maintenance flag
on globaldevices by running the following command:
/usr/sbin/svcadm clear svc:/system/cluster/globaldevices:default

Dec 6 12:17:23 svc.startd[8]: svc:/system/cluster/globaldevices:default:
Method "/usr/cluster/lib/svc/method/globaldevices start" failed with exit status 96.
[ system/cluster/globaldevices:default misconfigured (see 'svcs -x' for details) ]
Dec 6 12:17:25 Cluster.CCR: /usr/cluster/bin/scgdevs:
Filesystem /global/.devices/node@1 is not available in /etc/mnttab.
Dec 6 12:17:25 Cluster.CCR: /usr/cluster/bin/scgdevs:
Filesystem /global/.devices/node@1 is not available in /etc/mnttab.

These messages indicate that the vxio minor number is the same on each cluster node. Reminor the root disk group on each node so that each number is unique in the cluster. See How to Assign a New Minor Number to a Device Group.
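
For example, a command similar to the following assigns a new base minor number to the boot disk group on one node; the disk group name and minor number are only illustrations, and each node must use a different base number.

phys-schost# vxdg reminor bootdg 100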

Next Steps

Go to How to Verify Upgrade of Sun Cluster 3.2 Software.

See Also

You can choose to keep your original, and now inactive, boot environment for as long as you need to. When you are satisfied that your upgrade is acceptable, you can then choose to remove the old environment or to keep and maintain it.
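
For example, if you later decide to remove the original BE, you might use a command similar to the following, where sc31u2 is the BE name from Example 8–1.

phys-schost# ludelete sc31u2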

For information about how to maintain an inactive boot environment, see Chapter 37, Maintaining Solaris Live Upgrade Boot Environments (Tasks), in Solaris 9 9/04 Installation Guide or Chapter 11, Maintaining Solaris Live Upgrade Boot Environments (Tasks), in Solaris 10 Installation Guide: Solaris Live Upgrade and Upgrade Planning.