Sun Cluster Upgrade Guide for Solaris OS

Chapter 4 Performing a Live Upgrade to Sun Cluster 3.2 1/09 Software

This chapter provides the following information to upgrade from Sun Cluster 3.1 or 3.2 software to Sun Cluster 3.2 1/09 software by using the live upgrade method:

Performing a Live Upgrade of a Cluster

The following table lists the tasks to perform to upgrade from Sun Cluster 3.1 or 3.2 software to Sun Cluster 3.2 1/09 software. You also perform these tasks to upgrade only the version of the Solaris OS. If you upgrade the Solaris OS to a new marketing release, such as from Solaris 9 to Solaris 10 software, you must also upgrade the Sun Cluster software and dependency software to the version that is compatible with the new version of the Solaris OS.

Table 4–1 Task Map: Performing a Live Upgrade to Sun Cluster 3.2 1/09 Software

Task 

Instructions 

1. Read the upgrade requirements and restrictions. Determine the proper upgrade method for your configuration and needs. 

Upgrade Requirements and Software Support Guidelines

Choosing a Sun Cluster Upgrade Method

2. If Sun Cluster Geographic Edition software is installed, uninstall it. 

How to Prepare the Cluster for Upgrade (Live Upgrade)

3. If the cluster uses dual-string mediators for Solaris Volume Manager software, unconfigure the mediators. Upgrade the Solaris software, if necessary, to a supported Solaris update. Upgrade to Sun Cluster 3.2 1/09 framework and data-service software. If necessary, upgrade applications. If the cluster uses dual-string mediators, reconfigure the mediators. As needed, upgrade Veritas Volume Manager (VxVM) software and disk groups and Veritas File System (VxFS). 

How to Upgrade the Solaris OS and Sun Cluster 3.2 1/09 Software (Live Upgrade)

4. Use the scversions command to commit the cluster to the upgrade.

How to Commit the Upgraded Cluster to Sun Cluster 3.2 1/09 Software

5. Verify successful completion of upgrade to Sun Cluster 3.2 1/09 software. 

How to Verify Upgrade of Sun Cluster 3.2 1/09 Software

6. Enable resources and bring resource groups online. Migrate existing resources to new resource types. 

How to Finish Upgrade to Sun Cluster 3.2 1/09 Software

7. (Optional) SPARC: Upgrade the Sun Cluster module for Sun Management Center, if needed.

SPARC: How to Upgrade Sun Cluster Module Software for Sun Management Center

Procedure: How to Prepare the Cluster for Upgrade (Live Upgrade)

Perform this procedure to prepare a cluster for live upgrade.

Before You Begin

Perform the following tasks:

  1. Ensure that the cluster is functioning normally.

    1. View the current status of the cluster by running the following command from any node.

      • On Sun Cluster 3.1 software, use the following command:


        phys-schost% scstat
        
      • On Sun Cluster 3.2 software, use the following command:


        phys-schost% cluster status
        

      See the scstat(1M) or cluster(1CL) man page for more information.

    2. Search the /var/adm/messages log on the same node for unresolved error messages or warning messages.
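
      For example, you might scan the log for recent error and warning messages as follows. This is only an illustrative sketch; adjust the search patterns to your environment.


      phys-schost# egrep -i "error|warning" /var/adm/messages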

    3. Check the volume-manager status.
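
      For example, depending on which volume manager you use, one of the following commands might display the status. This is an illustrative sketch; setname is a placeholder for your disk set name.


      phys-schost# metastat -s setname
      phys-schost# vxprint -ht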

  2. If necessary, notify users that cluster services will be temporarily interrupted during the upgrade.

    Service interruption will be approximately the amount of time that your cluster normally takes to switch services to another node.

  3. If Sun Cluster Geographic Edition software is installed, uninstall it.

    For uninstallation procedures, see the documentation for your version of Sun Cluster Geographic Edition software.

  4. Become superuser on a node of the cluster.

  5. Ensure that all shared data is backed up.

  6. Ensure that each system disk is backed up.
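
    For example, you might back up the root (/) slice of a node to tape by using the ufsdump command. This is only an illustrative sketch; the tape device and disk slice names are placeholders for your own devices, and your site might use a different backup method.


    phys-schost# ufsdump 0ucf /dev/rmt/0 /dev/rdsk/c0t0d0s0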

Next Steps

Perform a live upgrade of the Solaris OS, Sun Cluster 3.2 1/09 software, and other software. Go to How to Upgrade the Solaris OS and Sun Cluster 3.2 1/09 Software (Live Upgrade).

Procedure: How to Upgrade the Solaris OS and Sun Cluster 3.2 1/09 Software (Live Upgrade)

Perform this procedure to upgrade the Solaris OS, Java ES shared components, volume-manager software, and Sun Cluster software by using the live upgrade method. The Sun Cluster live upgrade method uses the Solaris Live Upgrade feature. For information about live upgrade of the Solaris OS, refer to the documentation for the Solaris version that you are using.


Note –

The cluster must already run on, or be upgraded to, at least the minimum required level of the Solaris OS to support upgrade to Sun Cluster 3.2 1/09 software. See Supported Products in Sun Cluster Release Notes for more information.


Perform this procedure on each node in the cluster.


Tip –

You can use the cconsole utility to perform this procedure on all nodes simultaneously. See How to Install Cluster Control Panel Software on an Administrative Console in Sun Cluster Software Installation Guide for Solaris OS for more information.


Before You Begin
  1. Ensure that a supported version of Solaris Live Upgrade software is installed on each node.

    If your operating system is already upgraded to Solaris 9 9/05 software or Solaris 10 5/08 software, you have the correct Solaris Live Upgrade software. If your operating system is an older version, perform the following steps:

    1. Insert the Solaris 9 9/05 software or Solaris 10 5/08 software media.

      You can use either of these versions of the Solaris OS to install Solaris Live Upgrade software on a Solaris 8 configuration.

    2. Become superuser.

    3. Install the Live Upgrade packages.
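
      For example, on Solaris 10 media you might add the packages from the Product directory as follows. This is an illustrative sketch; the exact package list and media path depend on the Solaris version and media that you use.


      phys-schost# pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWlucfg SUNWlur SUNWluu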

  2. If you will upgrade the Solaris OS and your cluster uses dual-string mediators for Solaris Volume Manager software, unconfigure your mediators.

    See Configuring Dual-String Mediators in Sun Cluster Software Installation Guide for Solaris OS for more information about mediators.

    1. Run the following command to verify that no mediator data problems exist.


      phys-schost# medstat -s setname
      
      -s setname

      Specifies the disk set name.

      If the value in the Status field is Bad, repair the affected mediator host. Follow the procedure How to Fix Bad Mediator Data in Sun Cluster Software Installation Guide for Solaris OS.

    2. List all mediators.

      Save this information for when you restore the mediators during the procedure How to Finish Upgrade to Sun Cluster 3.2 1/09 Software.
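
      For example, the configured mediator hosts for a disk set appear in the Mediator Host(s) section of the metaset output. This is an illustrative sketch; setname is a placeholder for your disk set name.


      phys-schost# metaset -s setname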

    3. For a disk set that uses mediators, take ownership of the disk set if no node already has ownership.

      • On Sun Cluster 3.1 software, use the following command:


        phys-schost# scswitch -z -D setname -h node
        
        -z

        Changes mastery.

        -D setname

        Specifies the name of the disk set.

        -h node

        Specifies the name of the node to become primary of the disk set.

      • On Sun Cluster 3.2 software, use the following command:


        phys-schost# cldevicegroup switch -n node devicegroup
        
    4. Unconfigure all mediators for the disk set.


      phys-schost# metaset -s setname -d -m mediator-host-list
      
      -s setname

      Specifies the disk set name.

      -d

      Deletes from the disk set.

      -m mediator-host-list

      Specifies the name of the node to remove as a mediator host for the disk set.

      See the mediator(7D) man page for further information about mediator-specific options to the metaset command.

    5. Repeat Step c through Step d for each remaining disk set that uses mediators.

  3. On each node, temporarily change the name of the global devices entry in the /etc/vfstab file from the DID name to the physical name.

    This name change is necessary for live upgrade software to recognize the global-devices file system. You will restore the DID names after the live upgrade is completed.

    1. Back up the /etc/vfstab file.


      phys-schost# cp /etc/vfstab /etc/vfstab.old
      
    2. Open the /etc/vfstab file for editing.

    3. Locate and edit the line that corresponds to /global/.devices/node@N.

      • Change the DID names to the physical names by changing /dev/did/{r}dsk/dYsZ to /dev/{r}dsk/cNtXdYsZ.

      • Remove global from the entry.

      The following example shows the entry for DID device d3s3, which corresponds to /global/.devices/node@2, changed to its physical device names and with the global mount option removed:


      Original:
      /dev/did/dsk/d3s3  /dev/did/rdsk/d3s3  /global/.devices/node@2  ufs  2  no  global
      
      Changed:
      /dev/dsk/c0t0d0s3  /dev/rdsk/c0t0d0s3  /global/.devices/node@2  ufs  2  no  -
  4. Build an inactive boot environment (BE).


    phys-schost# lucreate options -n BE-name
    
    -n BE-name

    Specifies the name of the boot environment that is to be upgraded.

    For information about important options to the lucreate command, see Solaris 10 5/08 Installation Guide: Solaris Live Upgrade and Upgrade Planning and the lucreate(1M) man page.
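
    For example, the following command, a sketch in which the BE names and the disk slice are placeholders, copies the root (/) file system to another slice and names the new BE sc32:


    phys-schost# lucreate -c sc31u2 -m /:/dev/dsk/c0t4d0s0:ufs -n sc32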

  5. If necessary, upgrade the Solaris OS software in your inactive BE.

    If the cluster already runs on a properly patched version of the Solaris OS that supports Sun Cluster 3.2 1/09 software, this step is optional.

    • If you use Solaris Volume Manager software, run the following command:


      phys-schost# luupgrade -u -n BE-name -s os-image-path
      
      -u

      Upgrades an operating system image on a boot environment.

      -s os-image-path

      Specifies the path name of a directory that contains an operating system image.

    • If you use Veritas Volume Manager, follow live upgrade procedures in your VxVM installation documentation.

  6. Mount your inactive BE by using the lumount command.


    phys-schost# lumount -n BE-name -m BE-mount-point
    
    -m BE-mount-point

    Specifies the mount point of BE-name.

    For more information, see Solaris 10 5/08 Installation Guide: Solaris Live Upgrade and Upgrade Planning and the lumount(1M) man page.

  7. Ensure that the /BE-mount-point/usr/java/ directory is a symbolic link to at least the minimum required version of Java software.

    Sun Cluster software requires at least version 1.5.0_06 of Java software. If you upgraded to a version of Solaris that installs an earlier version of Java, the upgrade might have changed the symbolic link to point to a version of Java that does not meet the minimum requirement for Sun Cluster 3.2 1/09 software.

    1. Determine what directory the /BE-mount-point/usr/java/ directory is symbolically linked to.


      phys-schost# ls -l /BE-mount-point/usr/java
      lrwxrwxrwx   1 root   other    9 Apr 19 14:05 /BE-mount-point/usr/java -> /BE-mount-point/usr/j2se/
    2. Determine what version or versions of Java software are installed.

      The following are examples of commands that you can use to display the version of Java software that is installed in each location.


      phys-schost# /BE-mount-point/usr/j2se/bin/java -version
      phys-schost# /BE-mount-point/usr/java1.2/bin/java -version
      phys-schost# /BE-mount-point/usr/jdk/jdk1.5.0_06/bin/java -version
      
    3. If the /BE-mount-point/usr/java/ directory is not symbolically linked to a supported version of Java software, recreate the symbolic link so that it points to a supported version of Java software.

      The following example shows the creation of a symbolic link to the /usr/j2se/ directory, which contains Java 1.5.0_06 software.


      phys-schost# rm /BE-mount-point/usr/java
      phys-schost# cd /BE-mount-point/usr
      phys-schost# ln -s j2se java
      
  8. Apply any necessary Solaris patches.

    You might need to patch your Solaris software to use Solaris Live Upgrade. For details about the patches that the Solaris OS requires and where to download them, see Managing Packages and Patches With Solaris Live Upgrade in Solaris 9 9/04 Installation Guide or Upgrading a System With Packages or Patches in Solaris 10 5/08 Installation Guide: Solaris Live Upgrade and Upgrade Planning.
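
    For example, you might apply a patch to the running system with the patchadd command, or add patches to the inactive BE with the luupgrade -t command. This is an illustrative sketch; the patch ID and patch directory are placeholders.


    phys-schost# patchadd /var/tmp/patches/123456-07
    phys-schost# luupgrade -t -n BE-name -s /var/tmp/patches 123456-07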

  9. If necessary and if your version of the Veritas Volume Manager (VxVM) software supports it, upgrade your VxVM software.


    Note –

    You must install patch 122058–06. This patch is required for the live upgrade of VxVM to succeed.


    Refer to your VxVM software documentation to determine whether your version of VxVM can use the live upgrade method.

  10. (Optional) SPARC: Upgrade VxFS.

    Follow procedures that are provided in your VxFS documentation.

  11. If your cluster hosts software applications that require an upgrade and that you can upgrade by using Solaris Live Upgrade, upgrade those software applications.

    However, if any software applications cannot be upgraded by using Solaris Live Upgrade, such as Sun StorageTek QFS software, upgrade those applications in Step 27.

  12. Load the Sun Java Availability Suite DVD-ROM into the DVD-ROM drive.

    If the volume management daemon vold(1M) is running and is configured to manage CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0 directory.

  13. Change to the installation wizard directory of the DVD-ROM.

    • If you are installing the software packages on the SPARC platform, type the following command:


      phys-schost# cd /cdrom/cdrom0/Solaris_sparc
      
    • If you are installing the software packages on the x86 platform, type the following command:


      phys-schost# cd /cdrom/cdrom0/Solaris_x86
      
  14. Start the installation wizard program to direct output to a state file.

    Specify the name to give the state file and the absolute or relative path where the file should be created.

    • To create a state file by using the graphical interface, use the following command:


      phys-schost# ./installer -no -saveState statefile
      
    • To create a state file by using the text-based interface, use the following command:


      phys-schost# ./installer -no -nodisplay -saveState statefile
      

    See Generating the Initial State File in Sun Java Enterprise System 5 Installation Guide for UNIX for more information.

  15. Follow the instructions on the screen to select and upgrade Shared Components software packages on the node.

    The installation wizard program displays the status of the installation. When the installation is complete, the program displays an installation summary and the installation logs.

  16. Exit the installation wizard program.

  17. Run the installer program in silent mode and direct the installation to the alternate boot environment.


    Note –

    The installer program must be the same version that you used to create the state file.



    phys-schost# ./installer -nodisplay -noconsole -state statefile -altroot BE-mount-point
    

    See To Run the Installer in Silent Mode in Sun Java Enterprise System 5 Installation Guide for UNIX for more information.

  18. Change to the /Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/ directory, where arch is sparc or x86 (Solaris 10 only) and where ver is 9 for Solaris 9 or 10 for Solaris 10.


    phys-schost# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools
    
  19. Upgrade your Sun Cluster software by using the scinstall command.


    phys-schost# ./scinstall -u update -R BE-mount-point
    
    -u update

    Specifies that you are performing an upgrade of Sun Cluster software.

    -R BE-mount-point

    Specifies the mount point for your alternate boot environment.

    For more information, see the scinstall(1M) man page.

  20. Upgrade your data services by using the scinstall command.


    phys-schost# BE-mount-point/usr/cluster/bin/scinstall -u update -s all  \
    -d /cdrom/cdrom0/Solaris_arch/Product/sun_cluster_agents -R BE-mount-point
    
  21. Unload the Sun Java Availability Suite DVD-ROM from the DVD-ROM drive.

    1. To ensure that the DVD-ROM is not being used, change to a directory that does not reside on the DVD-ROM.

    2. Eject the DVD-ROM.


      phys-schost# eject cdrom
      
  22. Repeat, starting from Step 1, for each node in the cluster.


    Note –

    Do not reboot any node until all nodes in the cluster are upgraded on their inactive BE.


  23. On each cluster node, restore the DID names of the global-devices entry in the /etc/vfstab file.

    1. On the current, unupgraded BE, restore the original /etc/vfstab file.


      phys-schost# cp /etc/vfstab.old /etc/vfstab
      
    2. In the alternate BE, open the /etc/vfstab file for editing.

    3. Locate the line that corresponds to /global/.devices/node@N and replace the dash (-) at the end of the entry with the word global.


      /dev/dsk/cNtXdYsZ  /dev/rdsk/cNtXdYsZ  /global/.devices/node@N  ufs  2  no  global
      

      When the node is rebooted into the upgraded alternate BE, the DID names are substituted in the /etc/vfstab file automatically.

  24. Unmount the inactive BE.


    phys-schost# luumount -n BE-name
    
  25. Activate the upgraded inactive BE.


    phys-schost# luactivate BE-name
    
    BE-name

    The name of the alternate BE that you built in Step 4.

  26. Reboot all nodes to use the upgraded BE.


    Note –

    Do not use the reboot or halt command. These commands do not activate a new BE. Use only shutdown or init to reboot into a new BE.


    • If you have no additional software to upgrade, boot the upgraded BE into cluster mode.


      phys-schost# shutdown -y -g0 -i6
      

      The nodes reboot into cluster mode using the new, upgraded BE. The cluster upgrade is completed.

    • If your cluster hosts software applications that require upgrade and for which you cannot use Solaris Live Upgrade, perform the following steps to boot the upgraded BE into noncluster mode.

      1. Shut down the nodes.

        • On Sun Cluster 3.1 software, on each node of the cluster use the following command:


          phys-schost# shutdown -g0 -y
          
        • On Sun Cluster 3.2 software, on one node of the cluster use the following command:


          phys-schost# cluster shutdown -g0 -y
          
      2. Boot each node into noncluster mode.

        • On SPARC based systems, perform the following command:


          ok boot -x
          
        • On x86 based systems, perform the following commands:

          1. In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.

            The GRUB menu appears similar to the following:


            GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
            +----------------------------------------------------------------------+
            | Solaris 10 /sol_10_x86                                               |
            | Solaris failsafe                                                     |
            |                                                                      |
            +----------------------------------------------------------------------+
            Use the ^ and v keys to select which entry is highlighted.
            Press enter to boot the selected OS, 'e' to edit the
            commands before booting, or 'c' for a command-line.

            For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in System Administration Guide: Basic Administration.

          2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.

            The GRUB boot parameters screen appears similar to the following:


            GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
            +----------------------------------------------------------------------+
            | root (hd0,0,a)                                                       |
            | kernel /platform/i86pc/multiboot                                     |
            | module /platform/i86pc/boot_archive                                  |
            +----------------------------------------------------------------------+
            Use the ^ and v keys to select which entry is highlighted.
            Press 'b' to boot, 'e' to edit the selected command in the
            boot sequence, 'c' for a command-line, 'o' to open a new line
            after ('O' for before) the selected line, 'd' to remove the
            selected line, or escape to go back to the main menu.
          3. Add -x to the command to specify that the system boot into noncluster mode.


            [ Minimal BASH-like line editing is supported. For the first word, TAB
            lists possible command completions. Anywhere else TAB lists the possible
            completions of a device/filename. ESC at any time exits. ]
            
            grub edit> kernel /platform/i86pc/multiboot -x
            
          4. Press Enter to accept the change and return to the boot parameters screen.

            The screen displays the edited command.


            GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
            +----------------------------------------------------------------------+
            | root (hd0,0,a)                                                       |
            | kernel /platform/i86pc/multiboot -x                                  |
            | module /platform/i86pc/boot_archive                                  |
            +----------------------------------------------------------------------+
            Use the ^ and v keys to select which entry is highlighted.
            Press 'b' to boot, 'e' to edit the selected command in the
            boot sequence, 'c' for a command-line, 'o' to open a new line
            after ('O' for before) the selected line, 'd' to remove the
            selected line, or escape to go back to the main menu.
          5. Type b to boot the node into noncluster mode.


            Note –

            This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.


          If the instruction says to run the init S command, shut down the system and then change the GRUB kernel boot command to /platform/i86pc/multiboot -sx instead.

    The upgraded BE now runs in noncluster mode.

  27. Upgrade any software applications for which you cannot use Solaris Live Upgrade.


    Note –

    Throughout the process of software-application upgrade, always reboot into noncluster mode until all upgrades are complete, as described in Step 26.


    1. Upgrade each software application that requires an upgrade.

      Remember to boot into noncluster mode if you are directed to reboot, until all applications are upgraded.

    2. After all nodes are upgraded, reboot the nodes into cluster mode.

      1. Shut down each node.


        phys-schost# shutdown -g0 -y
        
      2. Boot each node into cluster mode.

        • On SPARC based systems, perform the following command:


          ok boot
          
        • On x86 based systems, perform the following commands:

          When the GRUB menu is displayed, select the appropriate Solaris entry and press Enter. The GRUB menu appears similar to the following:


          GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
          +-------------------------------------------------------------------------+
          | Solaris 10 /sol_10_x86                                                  |
          | Solaris failsafe                                                        |
          |                                                                         |
          +-------------------------------------------------------------------------+
          Use the ^ and v keys to select which entry is highlighted.
          Press enter to boot the selected OS, 'e' to edit the
          commands before booting, or 'c' for a command-line.

    The cluster upgrade is completed.


Example 4–1 Live Upgrade to Sun Cluster 3.2 1/09 Software

This example shows a live upgrade of a cluster node. The example upgrades the SPARC based node to the Solaris 10 OS, Sun Cluster 3.2 1/09 framework, and all Sun Cluster data services that support the live upgrade method. In this example, sc31u2 is the original boot environment (BE). The new BE that is upgraded is named sc32 and uses the mount point /sc32. The directory /net/installmachine/export/solaris10/OS_image/ contains an image of the Solaris 10 OS. The Java ES installer state file is named sc32state.

The following commands typically produce copious output. This output is shown only where necessary for clarity.


phys-schost# lucreate -c sc31u2 -m /:/dev/dsk/c0t4d0s0:ufs -n sc32
…
lucreate: Creation of Boot Environment sc32 successful.

phys-schost# luupgrade -u -n sc32 -s /net/installmachine/export/solaris10/OS_image/
The Solaris upgrade of the boot environment sc32 is complete.
Apply patches

phys-schost# lumount sc32 /sc32
phys-schost# ls -l /sc32/usr/java
lrwxrwxrwx   1 root   other    9 Apr 19 14:05 /sc32/usr/java -> /sc32/usr/j2se/

Insert the Sun Java Availability Suite DVD-ROM.
phys-schost# cd /cdrom/cdrom0/Solaris_sparc
phys-schost# ./installer -no -saveState sc32state
phys-schost# ./installer -nodisplay -noconsole -state sc32state -altroot /sc32
phys-schost# cd /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster/Solaris_10/Tools
phys-schost# ./scinstall -u update -R /sc32
phys-schost# /sc32/usr/cluster/bin/scinstall -u update -s all -d /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster_agents -R /sc32
phys-schost# cd /
phys-schost# eject cdrom

phys-schost# luumount sc32
phys-schost# luactivate sc32
Activation of boot environment sc32 successful.
Upgrade all other nodes

Boot all nodes
phys-schost# shutdown -y -g0
ok boot

At this point, you might upgrade data-service applications that cannot use the live upgrade method, before you reboot into cluster mode.


Troubleshooting

DID device name errors - During the creation of the inactive BE, if you receive an error that a file system that you specified with its DID device name, /dev/did/dsk/dNsX, does not exist, even though the device name does exist, specify the device by its physical device name instead. Then change the vfstab entry on the alternate BE back to the DID device name.
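
For example, you might rerun the lucreate command with the physical device name for the global-devices file system. This is an illustrative sketch; the BE names and the device names are placeholders for your own configuration.


phys-schost# lucreate -m /:/dev/dsk/c0t4d0s0:ufs -m /global/.devices/node@2:/dev/dsk/c0t0d0s3:ufs -n sc32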

Mount point errors - During creation of the inactive boot environment, if you receive an error that the mount point that you supplied is not mounted, mount the mount point and rerun the lucreate command.

New BE boot errors - If you experience problems when you boot the newly upgraded environment, you can revert to your original BE. For specific information, see Failure Recovery: Falling Back to the Original Boot Environment (Command-Line Interface) in Solaris 9 9/04 Installation Guide or Chapter 6, Failure Recovery: Falling Back to the Original Boot Environment (Tasks), in Solaris 10 5/08 Installation Guide: Solaris Live Upgrade and Upgrade Planning.

Global-devices file-system errors - After you upgrade a cluster on which the root disk is encapsulated, you might see one of the following error messages on the cluster console during the first reboot of the upgraded BE:

mount: /dev/vx/dsk/bootdg/node@1 is already mounted or /global/.devices/node@1 is busy
Trying to remount /global/.devices/node@1
mount: /dev/vx/dsk/bootdg/node@1 is already mounted or /global/.devices/node@1 is busy

WARNING - Unable to mount one or more of the following filesystem(s):
    /global/.devices/node@1
If this is not repaired, global devices will be unavailable.
Run mount manually (mount filesystem...).
After the problems are corrected, please clear the maintenance flag on globaldevices by running the following command:
/usr/sbin/svcadm clear svc:/system/cluster/globaldevices:default

Dec 6 12:17:23 svc.startd[8]: svc:/system/cluster/globaldevices:default: Method "/usr/cluster/lib/svc/method/globaldevices start" failed with exit status 96.
[ system/cluster/globaldevices:default misconfigured (see 'svcs -x' for details) ]
Dec 6 12:17:25 Cluster.CCR: /usr/cluster/bin/scgdevs: Filesystem /global/.devices/node@1 is not available in /etc/mnttab.
Dec 6 12:17:25 Cluster.CCR: /usr/cluster/bin/scgdevs: Filesystem /global/.devices/node@1 is not available in /etc/mnttab.

These messages indicate that the vxio minor number is the same on each cluster node. Reminor the root disk group on each node so that each number is unique in the cluster. See How to Assign a New Minor Number to a Device Group in Sun Cluster Software Installation Guide for Solaris OS.
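
For example, you might compare the minor numbers that are in use on each node and then assign a new base minor number to the root disk group on one of the nodes. This is an illustrative sketch; the disk group name and the base minor number are placeholders, and the full procedure is in the referenced guide.


phys-schost# ls -l /global/.devices/node@1/dev/vx/dsk/bootdg
phys-schost# vxdg reminor bootdg 200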

Next Steps

Go to Chapter 6, Completing the Upgrade.

See Also

You can choose to keep your original, and now inactive, boot environment for as long as you need to. When you are satisfied that your upgrade is acceptable, you can then choose to remove the old environment or to keep and maintain it.

You can also maintain the inactive BE. For information about how to maintain the environment, see Chapter 37, Maintaining Solaris Live Upgrade Boot Environments (Tasks), in Solaris 9 9/04 Installation Guide or Chapter 7, Maintaining Solaris Live Upgrade Boot Environments (Tasks), in Solaris 10 5/08 Installation Guide: Solaris Live Upgrade and Upgrade Planning.