Sun Cluster Upgrade Guide for Solaris OS

Procedure: How to Upgrade the Solaris OS and Sun Cluster 3.2 11/09 Software (Live Upgrade)

Perform this procedure to upgrade the Solaris OS, Java ES shared components, volume-manager software, and Sun Cluster software by using the live upgrade method. The Sun Cluster live upgrade method uses the Solaris Live Upgrade feature. For information about live upgrade of the Solaris OS, refer to the Solaris Live Upgrade documentation for the Solaris version that you are using (Solaris 9 9/04 Installation Guide or Solaris 10 10/09 Installation Guide: Solaris Live Upgrade and Upgrade Planning).


Note –

The cluster must already run on, or be upgraded to, at least the minimum required level of the Solaris OS to support upgrade to Sun Cluster 3.2 11/09 software. See Supported Products in Sun Cluster Release Notes for more information.


Perform this procedure on each node in the cluster.


Tip –

You can use the cconsole utility to perform this procedure on all nodes simultaneously. See How to Install Cluster Control Panel Software on an Administrative Console in Sun Cluster Software Installation Guide for Solaris OS for more information.
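
For example, from the administrative console you might open consoles to all cluster nodes at once. The following is only a sketch; the path assumes the default Cluster Control Panel installation location, and clustername is a placeholder for your cluster name.


admin-console# /opt/SUNWcluster/bin/cconsole clustername &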


Before You Begin
  1. Ensure that a supported version of Solaris Live Upgrade software is installed on each node.

    If your operating system is already upgraded to Solaris 10 10/09 software, you have the correct Solaris Live Upgrade software. If your operating system is an older version, perform the following steps:

    1. Insert the Solaris 9 9/05 software or Solaris 10 10/09 software media.

    2. Become superuser.

    3. Install the Live Upgrade packages (a command sketch follows these substeps).

    4. If you are upgrading to the Solaris 10 5/09 or Solaris 10 10/09 OS, apply the following required patch.

      • SPARC – 137321–01 (minimum)

      • x86 – 137322–01 (minimum)
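
    The following is a minimal sketch of installing the Live Upgrade packages and applying the required patch from the Solaris 10 10/09 media. The media path and the patch download location are assumptions; adjust them for your media layout and patch repository, and use pkgadd from the media if you install from the Solaris 9 9/05 release instead.


      phys-schost# cd /cdrom/cdrom0/Solaris_10/Tools/Installers
      phys-schost# ./liveupgrade20 -noconsole -nodisplay
      phys-schost# patchadd /var/tmp/137321-01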

  2. If you will upgrade the Solaris OS and your cluster uses dual-string mediators for Solaris Volume Manager software, unconfigure your mediators.

    See Configuring Dual-String Mediators in Sun Cluster Software Installation Guide for Solaris OS for more information about mediators.

    1. Run the following command to verify that no mediator data problems exist.


      phys-schost# medstat -s setname
      
      -s setname

      Specifies the disk set name.

      If the value in the Status field is Bad, repair the affected mediator host. Follow the procedure How to Fix Bad Mediator Data in Sun Cluster Software Installation Guide for Solaris OS.

    2. List all mediators.

      Save this information for when you restore the mediators during the procedure How to Finish Upgrade to Sun Cluster 3.2 11/09 Software.
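
      For example, the configured mediator hosts appear in the output of the following commands (a sketch; setname is a placeholder for each disk set name):


      phys-schost# medstat -s setname
      phys-schost# metaset -s setname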

    3. For a disk set that uses mediators, take ownership of the disk set if no node already has ownership.

      • On Sun Cluster 3.1 8/05 software, use the following command:


        phys-schost# scswitch -z -D setname -h node
        
        -z

        Changes mastery.

        -D devicegroup

        Specifies the name of the disk set.

        -h node

        Specifies the name of the node to become primary of the disk set.

      • On Sun Cluster 3.2 software, use the following command:


        phys-schost# cldevicegroup switch -n node devicegroup
        
    4. Unconfigure all mediators for the disk set.


      phys-schost# metaset -s setname -d -m mediator-host-list
      
      -s setname

      Specifies the disk set name.

      -d

      Deletes from the disk set.

      -m mediator-host-list

      Specifies the name of the node to remove as a mediator host for the disk set.

      See the mediator(7D) man page for further information about mediator-specific options to the metaset command.

    5. Repeat Step c through Step d for each remaining disk set that uses mediators.

  3. On each node, temporarily change the name of the global devices entry in the /etc/vfstab file from the DID name to the physical name.

    This name change is necessary for live upgrade software to recognize the global-devices file system. You will restore the DID names after the live upgrade is completed.

    1. Back up the /etc/vfstab file.


      phys-schost# cp /etc/vfstab /etc/vfstab.old
      
    2. Open the /etc/vfstab file for editing.

    3. Locate and edit the line that corresponds to /global/.devices/node@N.

      • Change the DID names to the physical names by changing /dev/did/{r}dsk/dYsZ to /dev/{r}dsk/cNtXdYsZ.

      • Remove global from the entry.

      The following example shows the names of DID device d3s3, which corresponds to /global/.devices/node@2, changed to its physical device names and the global entry removed:


      Original:
      /dev/did/dsk/d3s3  /dev/did/rdsk/d3s3  /global/.devices/node@2  ufs  2  no  global
      
      Changed:
      /dev/dsk/c0t0d0s3  /dev/rdsk/c0t0d0s3  /global/.devices/node@2  ufs  2  no  -
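
      If you are not sure which physical device underlies a DID device, you can look up the mapping first. The following is only a sketch; the DID instance d3 is an example, and the cldevice command is available on Sun Cluster 3.2 software only.


      phys-schost# scdidadm -L | grep d3
      phys-schost# cldevice show d3
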
  4. Build an inactive boot environment (BE).


    phys-schost# lucreate options -n BE-name
    
    -n BE-name

    Specifies the name of the boot environment that is to be upgraded.

    For information about important options to the lucreate command, see Solaris 10 10/09 Installation Guide: Solaris Live Upgrade and Upgrade Planning and the lucreate(1M) man page.
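
    For example, a minimal invocation that copies the root (/) file system to a spare slice might look like the following. This is only a sketch; the current BE name, the new BE name, and the device name are placeholders.


    phys-schost# lucreate -c current-BE -m /:/dev/dsk/c0t4d0s0:ufs -n BE-name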

  5. If necessary, upgrade the Solaris OS software in your inactive BE.

    If the cluster already runs on a properly patched version of the Solaris OS that supports Sun Cluster 3.2 11/09 software, this step is optional.

    • If you use Solaris Volume Manager software, run the following command:


      phys-schost# luupgrade -u -n BE-name -s os-image-path
      
      -u

      Upgrades an operating system image on a boot environment.

      -s os-image-path

      Specifies the path name of a directory that contains an operating system image.

    • If you use Veritas Volume Manager, follow live upgrade procedures in your VxVM installation documentation.

  6. Mount your inactive BE by using the lumount command.


    phys-schost# lumount -n BE-name -m BE-mount-point
    
    -m BE-mount-point

    Specifies the mount point of BE-name.

    For more information, see Solaris 10 10/09 Installation Guide: Solaris Live Upgrade and Upgrade Planning and the lumount(1M) man page.

  7. Ensure that the /BE-mount-point/usr/java/ directory is a symbolic link to the minimum or latest version of Java software.

    Sun Cluster software requires at least version 1.5.0_06 of Java software. If you upgraded to a version of Solaris that installs an earlier version of Java, the upgrade might have changed the symbolic link to point to a version of Java that does not meet the minimum requirement for Sun Cluster 3.2 11/09 software.

    1. Determine what directory the /BE-mount-point/usr/java/ directory is symbolically linked to.


      phys-schost# ls -l /BE-mount-point/usr/java
      lrwxrwxrwx   1 root   other    9 Apr 19 14:05 /BE-mount-point/usr/java -> /BE-mount-point/usr/j2se/
    2. Determine what version or versions of Java software are installed.

      The following are examples of commands that you can use to display the versions of the related releases of Java software.


      phys-schost# /BE-mount-point/usr/j2se/bin/java -version
      phys-schost# /BE-mount-point/usr/java1.2/bin/java -version
      phys-schost# /BE-mount-point/usr/jdk/jdk1.5.0_06/bin/java -version
      
    3. If the /BE-mount-point/usr/java/ directory is not symbolically linked to a supported version of Java software, recreate the symbolic link to link to a supported version of Java software.

      The following example shows the creation of a symbolic link to the /usr/j2se/ directory, which contains Java 1.5.0_06 software.


      phys-schost# rm /BE-mount-point/usr/java
      phys-schost# cd /BE-mount-point/usr
      phys-schost# ln -s j2se java
      
  8. Apply any necessary Solaris patches.

    You might need to patch your Solaris software to use Solaris Live Upgrade. For details about the patches that the Solaris OS requires and where to download them, see Managing Packages and Patches With Solaris Live Upgrade in Solaris 9 9/04 Installation Guide or Upgrading a System With Packages or Patches in Solaris 10 10/09 Installation Guide: Solaris Live Upgrade and Upgrade Planning.
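
    For example, patches can be added to the inactive BE with the -t option of the luupgrade command (a sketch; the patch directory and patch IDs are placeholders):


    phys-schost# luupgrade -t -n BE-name -s /var/tmp/patches patch-id patch-id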

  9. If necessary and if your version of the Veritas Volume Manager (VxVM) software supports it, upgrade your VxVM software.


    Note –

    You must install patch 122058-06. This patch is required for the live upgrade of VxVM to succeed.


    Refer to your VxVM software documentation to determine whether your version of VxVM can use the live upgrade method.
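
    To confirm whether the required patch is already installed on the node, you can check the installed patch list (a sketch; the output format varies by Solaris release):


    phys-schost# showrev -p | grep 122058
    phys-schost# patchadd -p | grep 122058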

  10. (Optional) SPARC: Upgrade VxFS.

    Follow procedures that are provided in your VxFS documentation.

  11. If your cluster hosts software applications that require an upgrade and that you can upgrade by using Solaris Live Upgrade, upgrade those software applications.

    However, if some software applications that you need to upgrade cannot use Solaris Live Upgrade, such as Sun QFS software, upgrade those applications in Step 29.

  12. Load the Sun Java Availability Suite DVD-ROM into the DVD-ROM drive.

    If the volume management daemon vold(1M) is running and is configured to manage CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0 directory.

  13. Change to the installation wizard directory of the DVD-ROM.

    • If you are installing the software packages on the SPARC platform, type the following command:


      phys-schost# cd /cdrom/cdrom0/Solaris_sparc
      
    • If you are installing the software packages on the x86 platform, type the following command:


      phys-schost# cd /cdrom/cdrom0/Solaris_x86
      
  14. Start the installation wizard program to direct output to a state file.

    Specify the name to give the state file and the absolute or relative path where the file should be created.

    • To create a state file by using the graphical interface, use the following command:


      phys-schost# ./installer -no -saveState statefile
      
    • To create a state file by using the text-based interface, use the following command:


      phys-schost# ./installer -no -nodisplay -saveState statefile
      

    See Generating the Initial State File in Sun Java Enterprise System 5 Update 1 Installation Guide for UNIX for more information.

  15. Follow the instructions on the screen to select and upgrade Shared Components software packages on the node.

    The installation wizard program displays the status of the installation. When the installation is complete, the program displays an installation summary and the installation logs.

  16. Exit the installation wizard program.

  17. Run the installer program in silent mode and direct the installation to the alternate boot environment.


    Note –

    The installer program must be the same version that you used to create the state file.



    phys-schost# ./installer -nodisplay -noconsole -state statefile -altroot BE-mount-point
    

    See To Run the Installer in Silent Mode in Sun Java Enterprise System 5 Update 1 Installation Guide for UNIX for more information.

  18. Change to the /Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/ directory, where arch is sparc or x86, and where ver is 10 for Solaris 10.


    phys-schost# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools
    
  19. Upgrade your Sun Cluster software by using the scinstall command.


    phys-schost# ./scinstall -u update -R BE-mount-point
    
    -u update

    Specifies that you are performing an upgrade of Sun Cluster software.

    -R BE-mount-point

    Specifies the mount point for your alternate boot environment.

    For more information, see the scinstall(1M) man page.

  20. Apply Sun Cluster patches to the alternate BE.
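
    For example, a patch can be applied to the alternate BE with the -R option of the patchadd command. This is only a sketch; the patch directory and patch ID are placeholders, and some patches cannot be applied to an alternate root, so check each patch README first.


    phys-schost# patchadd -R BE-mount-point -M /var/tmp/cluster-patches patch-id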

  21. Upgrade your data services by using the scinstall command.


    phys-schost# BE-mount-point/usr/cluster/bin/scinstall -u update -s all  \
    -d /cdrom/cdrom0/Solaris_arch/Product/sun_cluster_agents -R BE-mount-point
    
  22. Unload the Sun Java Availability Suite DVD-ROM from the DVD-ROM drive.

    1. To ensure that the DVD-ROM is not being used, change to a directory that does not reside on the DVD-ROM.

    2. Eject the DVD-ROM.


      phys-schost# eject cdrom
      
  23. Repeat, starting from Step 1, for each node in the cluster.


    Note –

    Do not reboot any node until all nodes in the cluster are upgraded on their inactive BE.


  24. On each cluster node, restore the DID names of the global-devices entry in the /etc/vfstab file.

    1. On the current, unupgraded BE, restore the original /etc/vfstab file.


      phys-schost# cp /etc/vfstab.old /etc/vfstab
      
    2. In the alternate BE, open the /etc/vfstab file for editing.

    3. Locate the line that corresponds to /global/.devices/node@N and replace the dash (-) at the end of the entry with the word global.


      /dev/dsk/cNtXdYsZ /dev/rdsk/cNtXdYsZ /global/.devices/node@N ufs 2 no global
      

      When the node is rebooted into the upgraded alternate BE, the DID names are substituted in the /etc/vfstab file automatically.

  25. Unmount the inactive BE.


    phys-schost# luumount -n BE-name
    
  26. Activate the upgraded inactive BE.


    phys-schost# luactivate BE-name
    
    BE-name

    The name of the alternate BE that you built in Step 4.

  27. Reboot all nodes to use the upgraded BE.


    Note –

    Do not use the reboot or halt command. These commands do not activate a new BE. Use only shutdown or init to reboot into a new BE.


    • If you have no additional software to upgrade, boot the upgraded BE into cluster mode.


      phys-schost# shutdown -y -g0 -i6
      

      The nodes reboot into cluster mode using the new, upgraded BE. The cluster upgrade is completed.

    • If one of the following conditions applies, perform the following steps to boot the upgraded BE into noncluster mode.

      • You upgraded from Sun Cluster 3.1 8/05 software and you want to configure zone clusters (Solaris 10 only).

      • Your cluster hosts software applications that require upgrade and for which you cannot use Solaris Live Upgrade.

      • (Optional) You want to change the private-network IP address range.

      1. Shut down the nodes.

        • On Sun Cluster 3.1 software, on each node of the cluster use the following command:


          phys-schost# shutdown -g0 -y
          
        • On Sun Cluster 3.2 software, on one node of the cluster use the following command:


          phys-schost# cluster shutdown -g0 -y
          
      2. Boot each node into noncluster mode.

        • On SPARC based systems, perform the following command:


          ok boot -x
          
        • On x86 based systems, perform the following commands:

          1. In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.

            The GRUB menu appears similar to the following:


            GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
            +----------------------------------------------------------------------+
            | Solaris 10 /sol_10_x86                                               |
            | Solaris failsafe                                                     |
            |                                                                      |
            +----------------------------------------------------------------------+
            Use the ^ and v keys to select which entry is highlighted.
            Press enter to boot the selected OS, 'e' to edit the
            commands before booting, or 'c' for a command-line.

            For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in System Administration Guide: Basic Administration.

          2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.

            The GRUB boot parameters screen appears similar to the following:


            GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
            +----------------------------------------------------------------------+
            | root (hd0,0,a)                                                       |
            | kernel /platform/i86pc/multiboot                                     |
            | module /platform/i86pc/boot_archive                                  |
            +----------------------------------------------------------------------+
            Use the ^ and v keys to select which entry is highlighted.
            Press 'b' to boot, 'e' to edit the selected command in the
            boot sequence, 'c' for a command-line, 'o' to open a new line
            after ('O' for before) the selected line, 'd' to remove the
            selected line, or escape to go back to the main menu.
          3. Add -x to the command to specify that the system boot into noncluster mode.


            [ Minimal BASH-like line editing is supported. For the first word, TAB
            lists possible command completions. Anywhere else TAB lists the possible
            completions of a device/filename. ESC at any time exits. ]
            
            grub edit> kernel /platform/i86pc/multiboot -x
            
          4. Press Enter to accept the change and return to the boot parameters screen.

            The screen displays the edited command.


            GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
            +----------------------------------------------------------------------+
            | root (hd0,0,a)                                                       |
            | kernel /platform/i86pc/multiboot -x                                  |
            | module /platform/i86pc/boot_archive                                  |
            +----------------------------------------------------------------------+
            Use the ^ and v keys to select which entry is highlighted.
            Press 'b' to boot, 'e' to edit the selected command in the
            boot sequence, 'c' for a command-line, 'o' to open a new line
            after ('O' for before) the selected line, 'd' to remove the
            selected line, or escape to go back to the main menu.
          5. Type b to boot the node into noncluster mode.


            Note –

            This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.


          If the instructions say to run the init S command, shut down the system, then change the GRUB kernel boot command to /platform/i86pc/multiboot -sx instead.

    The upgraded BE now runs in noncluster mode.

  28. Reconfigure the private-network address range.

    Perform this step to increase or decrease the size of the IP address range that is used by the private interconnect. The IP address range that you configure must minimally support the number of nodes and private networks in the cluster. See Private Network in Sun Cluster Software Installation Guide for Solaris OS for more information.


    Note –

    This step is required if you upgraded from Sun Cluster 3.1 8/05 software and want to configure zone clusters (on Solaris 10 OS only).

    This step is optional if you upgraded from Sun Cluster 3.1 8/05 and do not want to configure zone clusters, or if you upgraded from Sun Cluster 3.2 software.


    1. From one node, start the clsetup utility.

      When run in noncluster mode, the clsetup utility displays the Main Menu for noncluster-mode operations.
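
      A minimal sketch of starting the utility follows; the path assumes the default Sun Cluster installation location.


      phys-schost# /usr/cluster/bin/clsetup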

    2. Type the option number for Change IP Address Range and press the Return key.

      The clsetup utility displays the current private-network configuration, then asks if you would like to change this configuration.

    3. To change either the private-network IP address or the IP address range, type yes and press the Return key.

      The clsetup utility displays the default private-network IP address, 172.16.0.0, and asks if it is okay to accept this default.

    4. Change or accept the private-network IP address.

      • To accept the default private-network IP address and proceed to changing the IP address range, type yes and press the Return key.

        The clsetup utility will ask if it is okay to accept the default netmask. Skip to the next step to enter your response.

      • To change the default private-network IP address, perform the following substeps.

        1. Type no in response to the clsetup utility question about whether it is okay to accept the default address, then press the Return key.

          The clsetup utility will prompt for the new private-network IP address.

        2. Type the new IP address and press the Return key.

          The clsetup utility displays the default netmask and then asks if it is okay to accept the default netmask.

    5. Change or accept the default private-network IP address range.

      • On the Solaris 10 OS, the default netmask is 255.255.240.0. This default IP address range supports up to 64 nodes, up to 10 private networks, and up to 12 zone clusters in the cluster.

      • On the Solaris 9 OS, the default netmask is 255.255.248.0. This default IP address range supports up to 64 nodes and up to 10 private networks in the cluster.

      • To accept the default IP address range, type yes and press the Return key.

        Then skip to the next step.

      • To change the IP address range, perform the following substeps.

        1. Type no in response to the clsetup utility's question about whether it is okay to accept the default address range, then press the Return key.

          When you decline the default netmask, the clsetup utility prompts you for the number of nodes and private networks that you expect to configure in the cluster.

        2. Enter the number of nodes and private networks that you expect to configure in the cluster.

          From these numbers, the clsetup utility calculates two proposed netmasks:

          • The first netmask is the minimum netmask to support the number of nodes and private networks that you specified.

          • The second netmask supports twice the number of nodes and private networks that you specified, to accommodate possible future growth.

        3. Specify either of the calculated netmasks, or specify a different netmask that supports the expected number of nodes and private networks.

    6. Type yes in response to the clsetup utility's question about proceeding with the update.

    7. When finished, exit the clsetup utility.

  29. Upgrade any software applications for which you cannot use Solaris Live Upgrade.


    Note –

    Throughout the process of software-application upgrade, always reboot into noncluster mode until all upgrades are complete, as described in Step 27.


    1. Upgrade each software application that requires an upgrade.

      Remember to boot into noncluster mode if you are directed to reboot, until all applications are upgraded.

    2. After all nodes are upgraded, reboot the nodes into cluster mode.

      1. Shut down each node.


        phys-schost# shutdown -g0 -y
        
      2. Boot each node into cluster mode.

        • On SPARC based systems, perform the following command:


          ok boot
          
        • On x86 based systems, perform the following commands:

          When the GRUB menu is displayed, select the appropriate Solaris entry and press Enter. The GRUB menu appears similar to the following:


          GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
          +-------------------------------------------------------------------------+
          | Solaris 10 /sol_10_x86                                                  |
          | Solaris failsafe                                                        |
          |                                                                         |
          +-------------------------------------------------------------------------+
          Use the ^ and v keys to select which entry is highlighted.
          Press enter to boot the selected OS, 'e' to edit the
          commands before booting, or 'c' for a command-line.

    The cluster upgrade is completed.


Example 4–1 Live Upgrade to Sun Cluster 3.2 11/09 Software

This example shows a live upgrade of a cluster node. The example upgrades the SPARC based node to the Solaris 10 OS, Sun Cluster 3.2 11/09 framework, and all Sun Cluster data services that support the live upgrade method. In this example, sc31u4 is the original boot environment (BE). The new BE that is upgraded is named sc32u3 and uses the mount point /sc32u3. The directory /net/installmachine/export/solaris10/OS_image/ contains an image of the Solaris 10 OS. The Java ES installer state file is named sc32u3state.

The following commands typically produce copious output. This output is shown only where necessary for clarity.


phys-schost# lucreate -c sc31u4 -m /:/dev/dsk/c0t4d0s0:ufs -n sc32u3
…
lucreate: Creation of Boot Environment sc32u3 successful.

phys-schost# luupgrade -u -n sc32u3 -s /net/installmachine/export/solaris10/OS_image/
The Solaris upgrade of the boot environment sc32u3 is complete.
Apply patches

phys-schost# lumount sc32u3 /sc32u3
phys-schost# ls -l /sc32u3/usr/java
lrwxrwxrwx   1 root   other    9 Apr 19 14:05 /sc32u3/usr/java -> /sc32u3/usr/j2se/

Insert the Sun Java Availability Suite DVD-ROM.
phys-schost# cd /cdrom/cdrom0/Solaris_sparc
phys-schost# ./installer -no -saveState sc32u3state
phys-schost# ./installer -nodisplay -noconsole -state sc32u3state -altroot /sc32u3
phys-schost# cd /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster/Solaris_10/Tools
phys-schost# ./scinstall -u update -R /sc32u3
phys-schost# /sc32u3/usr/cluster/bin/scinstall -u update -s all \
-d /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster_agents -R /sc32u3
phys-schost# cd /
phys-schost# eject cdrom

phys-schost# luumount sc32u3
phys-schost# luactivate sc32u3
Activation of boot environment sc32u3 successful.
Upgrade all other nodes

Boot all nodes
phys-schost# shutdown -y -g0 -i0
ok boot

At this point, you might upgrade data-service applications that cannot use the live upgrade method, before you reboot into cluster mode.


Troubleshooting

DID device name errors - During the creation of the inactive BE, if you receive an error that a file system that you specified with its DID device name, /dev/did/dsk/dNsX, does not exist, but the underlying device does exist, specify the device by its physical device name and rerun the lucreate command. After the inactive BE is created, change the vfstab entry on the alternate BE back to the DID device name.

Mount point errors - During creation of the inactive boot environment, if you receive an error that the mount point that you supplied is not mounted, mount the mount point and rerun the lucreate command.

New BE boot errors - If you experience problems when you boot the newly upgraded environment, you can revert to your original BE. For specific information, see Failure Recovery: Falling Back to the Original Boot Environment (Command-Line Interface) in Solaris 9 9/04 Installation Guide or Chapter 6, Failure Recovery: Falling Back to the Original Boot Environment (Tasks), in Solaris 10 10/09 Installation Guide: Solaris Live Upgrade and Upgrade Planning.

Global-devices file-system errors - After you upgrade a cluster on which the root disk is encapsulated, you might see one of the following error messages on the cluster console during the first reboot of the upgraded BE:

mount: /dev/vx/dsk/bootdg/node@1 is already mounted or /global/.devices/node@1 is busy
Trying to remount /global/.devices/node@1
mount: /dev/vx/dsk/bootdg/node@1 is already mounted or /global/.devices/node@1 is busy

WARNING - Unable to mount one or more of the following filesystem(s):
    /global/.devices/node@1
If this is not repaired, global devices will be unavailable.
Run mount manually (mount filesystem...).
After the problems are corrected, please clear the maintenance flag on
globaldevices by running the following command:
/usr/sbin/svcadm clear svc:/system/cluster/globaldevices:default

Dec 6 12:17:23 svc.startd[8]: svc:/system/cluster/globaldevices:default: Method "/usr/cluster/lib/svc/method/globaldevices start" failed with exit status 96. [ system/cluster/globaldevices:default misconfigured (see 'svcs -x' for details) ]
Dec 6 12:17:25 Cluster.CCR: /usr/cluster/bin/scgdevs: Filesystem /global/.devices/node@1 is not available in /etc/mnttab.
Dec 6 12:17:25 Cluster.CCR: /usr/cluster/bin/scgdevs: Filesystem /global/.devices/node@1 is not available in /etc/mnttab.

These messages indicate that the vxio minor number is the same on each cluster node. Reminor the root disk group on each node so that each number is unique in the cluster. See How to Assign a New Minor Number to a Device Group in Sun Cluster Software Installation Guide for Solaris OS.
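
A sketch of the reminor operation with VxVM follows. The disk group name bootdg and the base minor number 100 are examples only; choose a different base minor number on each node so that the vxio minor numbers become unique, and follow the referenced procedure for the supported steps.

phys-schost# vxdg list
phys-schost# vxdg reminor bootdg 100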

Next Steps

Go to Chapter 6, Completing the Upgrade.

See Also

You can choose to keep your original, and now inactive, boot environment for as long as you need to. When you are satisfied that your upgrade is acceptable, you can then choose to remove the old environment or to keep and maintain it.

You can also maintain the inactive BE. For information about how to maintain the environment, see Chapter 37, Maintaining Solaris Live Upgrade Boot Environments (Tasks), in Solaris 9 9/04 Installation Guide or Chapter 7, Maintaining Solaris Live Upgrade Boot Environments (Tasks), in Solaris 10 10/09 Installation Guide: Solaris Live Upgrade and Upgrade Planning.