
Performing a Live Upgrade of a Cluster

The following table lists the tasks to perform to upgrade to Oracle Solaris Cluster 3.3 5/11 software. You also perform these tasks to upgrade only the Oracle Solaris OS.

Table 4-1 Task Map: Performing a Live Upgrade to Oracle Solaris Cluster 3.3 5/11 Software

1. Read the upgrade requirements and restrictions, and determine the proper upgrade method for your configuration and needs.
2. If a quorum server is used, upgrade the Quorum Server software.
3. If Oracle Solaris Cluster Geographic Edition software is installed, uninstall it.
4. If the cluster uses dual-string mediators for Solaris Volume Manager software, unconfigure the mediators. If necessary, upgrade the Oracle Solaris software to a supported Oracle Solaris update. Upgrade to the Oracle Solaris Cluster 3.3 5/11 framework and data-service software. If necessary, upgrade applications. If the cluster uses dual-string mediators, reconfigure the mediators. As needed, upgrade Veritas Volume Manager (VxVM) software and disk groups and Veritas File System (VxFS).
5. Use the scversions command to commit the cluster to the upgrade.
6. Verify successful completion of the upgrade to Oracle Solaris Cluster 3.3 5/11 software.
7. Enable resources and bring resource groups online. Migrate existing resources to new resource types. If used, upgrade to Oracle Solaris Cluster Geographic Edition 3.3 5/11 software.
8. (Optional) SPARC: Upgrade the Oracle Solaris Cluster module for Sun Management Center, if needed.

How to Upgrade Quorum Server Software

If the cluster uses a quorum server, upgrade the Quorum Server software on the quorum server before you upgrade the cluster.


Note - If more than one cluster uses the quorum server, perform these steps for each of those clusters.


Perform all steps as superuser on the cluster and on the quorum server.

  1. If the cluster has two nodes and the quorum server is the cluster's only quorum device, temporarily add a second quorum device.

    See Adding a Quorum Device in Oracle Solaris Cluster System Administration Guide.

    If you add another quorum server as a temporary quorum device, the quorum server can run the same software version as the quorum server that you are upgrading, or it can run the 3.3 5/11 version of Quorum Server software.
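
    For example, if a shared disk is available, a command similar to the following could add it as the temporary quorum device. The device name d20 is only an illustration; substitute an available device from your own configuration.

    phys-schost# clquorum add d20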

  2. Unconfigure the quorum server from each cluster that uses the quorum server.
    phys-schost# clquorum remove quorumserver
  3. From the quorum server to upgrade, verify that the quorum server no longer serves any cluster.
    quorumserver# clquorumserver show +

    If the output shows any cluster is still served by the quorum server, unconfigure the quorum server from that cluster. Then repeat this step to confirm that the quorum server is no longer configured with any cluster.


    Note - If you have unconfigured the quorum server from a cluster but the clquorumserver show command still reports that the quorum server is serving that cluster, the command might be reporting stale configuration information. See Cleaning Up Stale Quorum Server Cluster Information in Oracle Solaris Cluster System Administration Guide.


  4. From the quorum server to upgrade, halt all quorum server instances.
    quorumserver# clquorumserver stop +
  5. Uninstall the Quorum Server software from the quorum server to upgrade.
    1. Navigate to the directory where the uninstaller is located.
      quorumserver# cd /var/sadm/prod/SUNWentsysver
      ver

      The version that is installed on your system.

    2. Start the uninstallation wizard.
      quorumserver# ./uninstall
    3. Follow instructions on the screen to uninstall the Quorum Server software from the quorum-server host computer.

      After removal is finished, you can view any available log. See Chapter 8, Uninstalling, in Sun Java Enterprise System 5 Update 1 Installation Guide for UNIX for additional information about using the uninstall program.

    4. (Optional) Clean up or remove the quorum server directories.

      By default, this directory is /var/scqsd.
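
      For example, assuming that the default /var/scqsd directory is used and that none of its contents are still needed, you might remove it as follows:

        quorumserver# rm -r /var/scqsd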

  6. Install the Oracle Solaris Cluster 3.3 5/11 Quorum Server software, reconfigure the quorum server, and start the quorum server daemon.

    Follow the steps in How to Install and Configure Quorum Server Software in Oracle Solaris Cluster Software Installation Guide for installing the Quorum Server software.
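
    For example, after the Quorum Server software is installed and the quorum server is defined in the /etc/scqsd/scqsd.conf configuration file, as described in that procedure, a command similar to the following starts all configured quorum server instances:

    quorumserver# clquorumserver start +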

  7. From a cluster node, configure the upgraded quorum server as a quorum device.

    Follow the steps in How to Configure Quorum Devices in Oracle Solaris Cluster Software Installation Guide.
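
    For example, a command similar to the following, run from one cluster node, adds a quorum server that listens on port 9000 at IP address 10.11.114.81. The quorum device name, IP address, and port shown are placeholders; substitute the values for your own quorum server.

    phys-schost# clquorum add -t quorum_server -p qshost=10.11.114.81 -p port=9000 qs1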

  8. If you configured a temporary quorum device, unconfigure it.
    phys-schost# clquorum remove tempquorum

How to Prepare the Cluster for Upgrade (Live Upgrade)

Perform this procedure to prepare a cluster for live upgrade.

Before You Begin

Perform the following tasks:

  1. Ensure that the cluster is functioning normally.
    1. View the current status of the cluster by running the following command from any node.
      phys-schost% cluster status

      See the cluster(1CL) man page for more information.

    2. Search the /var/adm/messages log on the same node for unresolved error messages or warning messages.
    3. Check the volume-manager status.
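
      For example, commands similar to the following check the messages log for recent errors or warnings and, on a cluster that uses Solaris Volume Manager, display volume status. Use the equivalent status command for your own volume manager.

      phys-schost# egrep -i 'error|warning' /var/adm/messages
      phys-schost# metastat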
  2. If necessary, notify users that cluster services will be temporarily interrupted during the upgrade.

    Service interruption will be approximately the amount of time that your cluster normally takes to switch services to another node.

  3. If Geographic Edition software is installed, uninstall it.

    For uninstallation procedures, see the documentation for your version of Geographic Edition software.

  4. Become superuser on a node of the cluster.
  5. Ensure that all shared data is backed up.
  6. Ensure that each system disk is backed up.
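
    For example, on a node with a UFS root (/) file system and a local tape drive at /dev/rmt/0, a backup of the root file system might resemble the following. The tape device shown is only an illustration; use the backup device and method that your site normally uses, and back up every slice of each system disk.

    phys-schost# ufsdump 0ucf /dev/rmt/0 /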

Next Steps

Perform a live upgrade of the Oracle Solaris OS, Oracle Solaris Cluster 3.3 5/11 software, and other software. Go to How to Upgrade the Solaris OS and Oracle Solaris Cluster 3.3 5/11 Software (Live Upgrade).

How to Upgrade the Solaris OS and Oracle Solaris Cluster 3.3 5/11 Software (Live Upgrade)

Perform this procedure to upgrade the Oracle Solaris OS, volume-manager software, and Oracle Solaris Cluster software by using the live upgrade method. The Oracle Solaris Cluster live upgrade method uses the Oracle Solaris Live Upgrade feature. For information about live upgrade of the Oracle Solaris OS, refer to your Oracle Solaris Live Upgrade documentation, such as Solaris 10 10/09 Installation Guide: Solaris Live Upgrade and Upgrade Planning.


Note - The cluster must already run on, or be upgraded to, at least the minimum required level of the Oracle Solaris OS to support upgrade to Oracle Solaris Cluster 3.3 5/11 software. See Supported Products in Oracle Solaris Cluster 3.3 5/11 Release Notes for more information.


Perform this procedure on each node in the cluster.


Tip - You can use the cconsole utility to perform this procedure on multiple nodes simultaneously. See How to Install Cluster Control Panel Software on an Administrative Console in Oracle Solaris Cluster Software Installation Guide for more information.


Before You Begin

  1. Install a supported version of Oracle Solaris Live Upgrade software.

    Follow instructions in Solaris Live Upgrade System Requirements in Solaris 10 10/09 Installation Guide: Solaris Live Upgrade and Upgrade Planning and Installing Solaris Live Upgrade in Solaris 10 10/09 Installation Guide: Solaris Live Upgrade and Upgrade Planning.

  2. If you will upgrade the Oracle Solaris OS and your cluster uses dual-string mediators for Solaris Volume Manager software, unconfigure your mediators.

    See Configuring Dual-String Mediators in Oracle Solaris Cluster Software Installation Guide for more information about mediators.

    1. Run the following command to verify that no mediator data problems exist.
      phys-schost# medstat -s setname
      -s setname

      Specifies the disk set name.

      If the value in the Status field is Bad, repair the affected mediator host. Follow the procedure How to Fix Bad Mediator Data in Oracle Solaris Cluster Software Installation Guide.

    2. List all mediators.

      Save this information for when you restore the mediators during the procedure How to Finish Upgrade to Oracle Solaris Cluster 3.3 5/11 Software.
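
      For example, the mediator hosts that are configured for a disk set appear in the output of a command similar to the following:

        phys-schost# metaset -s setname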

    3. For a disk set that uses mediators, take ownership of the disk set if no node already has ownership.
      phys-schost# cldevicegroup switch -n node devicegroup
    4. Unconfigure all mediators for the disk set.
      phys-schost# metaset -s setname -d -m mediator-host-list
      -s setname

      Specifies the disk set name.

      -d

      Deletes from the disk set.

      -m mediator-host-list

      Specifies the name of the node to remove as a mediator host for the disk set.

      See the mediator(7D) man page for further information about mediator-specific options to the metaset command.

    5. Repeat Step 3 through Step 4 for each remaining disk set that uses mediators.
  3. On each node that uses a UFS root file system, temporarily change the name of the global devices entry in the /etc/vfstab file from the DID name to the physical name.

    This name change is necessary for live upgrade software to recognize the global-devices file system. You will restore the DID names after the live upgrade is completed.

    1. Back up the /etc/vfstab file.
      phys-schost# cp /etc/vfstab /etc/vfstab.old
    2. Open the /etc/vfstab file for editing.
    3. Locate and edit the line that corresponds to /global/.devices/node@N.
      • Change the DID names to the physical names by changing /dev/did/{r}dsk/dYsZ to /dev/{r}dsk/cNtXdYsZ.

      • Remove global from the entry.

      The following example shows the entry for DID device d3s3, which corresponds to /global/.devices/node@2, changed to use the physical device names and with the global mount option removed:

      Original:
      /dev/did/dsk/d3s3  /dev/did/rdsk/d3s3  /global/.devices/node@2  ufs  2  no  global
      
      Changed:
      /dev/dsk/c0t0d0s3  /dev/rdsk/c0t0d0s3  /global/.devices/node@2  ufs  2  no  -
    4. Temporarily comment out any entries for highly available local file systems that are managed by HAStoragePlus.
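
      For example, a vfstab entry for a hypothetical HAStoragePlus-managed local file system /local/data might be commented out as follows until the live upgrade is finished. The device and mount-point names are placeholders only.

        #/dev/md/datadg/dsk/d100  /dev/md/datadg/rdsk/d100  /local/data  ufs  2  no  logging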
  4. Build an inactive boot environment (BE).
    phys-schost# lucreate options -n BE-name
    -n BE-name

    Specifies the name of the boot environment that is to be upgraded.

    For information about important options to the lucreate command, see Solaris 10 10/09 Installation Guide: Solaris Live Upgrade and Upgrade Planning and the lucreate(1M) man page.

  5. If necessary, upgrade the Oracle Solaris OS software in your inactive BE.

    If the cluster already runs on a properly patched version of the Oracle Solaris OS that supports Oracle Solaris Cluster 3.3 5/11 software, this step is optional.

    • If you use Solaris Volume Manager software, run the following command:
      phys-schost# luupgrade -u -n BE-name -s os-image-path
      -u

      Upgrades an operating system image on a boot environment.

      -s os-image-path

      Specifies the path name of a directory that contains an operating system image.

    • If you use Veritas Volume Manager, follow live upgrade procedures in your Veritas Storage Foundation installation documentation for upgrading the operating system.
  6. Mount your inactive BE by using the lumount command.
    phys-schost# lumount -n BE-name -m BE-mount-point
    -m BE-mount-point

    Specifies the mount point of BE-name.

    For more information, see Solaris 10 10/09 Installation Guide: Solaris Live Upgrade and Upgrade Planning and the lumount(1M) man page.

  7. Apply any necessary Oracle Solaris patches.

    You might need to patch your Oracle Solaris software to use Oracle Solaris Live Upgrade. For details about the patches that the Oracle Solaris OS requires and where to download them, see Upgrading a System With Packages or Patches in Solaris 10 10/09 Installation Guide: Solaris Live Upgrade and Upgrade Planning.

  8. If necessary, upgrade your VxVM software.

    Refer to your Veritas Storage Foundation installation documentation for procedures to use the live upgrade method.

  9. (Optional) SPARC: Upgrade VxFS.

    Follow procedures that are provided in your VxFS documentation.

  10. If your cluster hosts software applications that require an upgrade and that you can upgrade by using Oracle Solaris Live Upgrade, upgrade those software applications.

    However, if some software applications to upgrade cannot use Oracle Solaris Live Upgrade, such as Sun QFS software, wait to upgrade those applications until Step 24.

  11. Load the Oracle Solaris Cluster installation DVD-ROM into the DVD-ROM drive.

    If the volume management daemon vold(1M) is running and is configured to manage CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0 directory.

  12. Change to the /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/ directory, where arch is sparc or x86 and where ver is 10 for Oracle Solaris 10.
    phys-schost# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools
  13. Upgrade Oracle Solaris Cluster software.
    phys-schost# ./scinstall -u update -R BE-mount-point
    -u update

    Specifies that you are performing an upgrade of Oracle Solaris Cluster software.

    -R BE-mount-point

    Specifies the mount point for your alternate boot environment.

    For more information, see the scinstall(1M) man page.

  14. Apply Oracle Solaris Cluster patches to the inactive BE.
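
    For example, assuming that the required patches have been downloaded to a staging directory such as /var/tmp/cluster-patches, you might apply each patch to the mounted alternate BE with a command similar to the following. The staging directory and patch-id shown are placeholders for your own values.

    phys-schost# patchadd -R BE-mount-point /var/tmp/cluster-patches/patch-id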
  15. Upgrade data services.
    phys-schost# BE-mount-point/usr/cluster/bin/scinstall -u update -s all  \
    -d /cdrom/cdrom0/Solaris_arch/Product/sun_cluster_agents -R BE-mount-point
  16. Unload the installation DVD-ROM from the DVD-ROM drive.
    1. To ensure that the DVD-ROM is not being used, change to a directory that does not reside on the DVD-ROM.
    2. Eject the DVD-ROM.
      phys-schost# eject cdrom
  17. Repeat all steps, starting from Step 1, on each node in the cluster.

    Note - Do not reboot any node until all nodes in the cluster are upgraded on their inactive BE.


  18. On each cluster node that uses a UFS root file system, restore the DID names of the global-devices entry in the /etc/vfstab file.
    1. On the current, unupgraded BE, restore the original /etc/vfstab file.
      phys-schost# cp /etc/vfstab.old /etc/vfstab
    2. In the alternate BE, open the /etc/vfstab file for editing.
    3. Locate the line that corresponds to /global/.devices/node@N and replace the dash (-) at the end of the entry with the word global.
      /dev/dsk/cNtXdYsZ /dev/rdsk/cNtXdYsZ /global/.devices/node@N ufs 2 no global

      When the node is rebooted into the upgraded alternate BE, the DID names are substituted in the /etc/vfstab file automatically.

    4. Uncomment the entries for highly available local file systems that you commented out in Step 3.
  19. On each node, unmount the inactive BE.
    phys-schost# luumount -n BE-name
  20. On each node, activate the upgraded inactive BE.
    phys-schost# luactivate BE-name
    BE-name

    The name of the alternate BE that you built in Step 4.

  21. Shut down each node in the cluster.

    Note - Do not use the reboot or halt command. These commands do not activate a new BE.


    phys-schost# shutdown -y -g0 -i0
  22. Determine your next step.
    • If your cluster hosts software applications that require upgrade and for which you cannot use Oracle Solaris Live Upgrade, go to Step 23 to boot each node into noncluster mode.
    • If you have no additional software to upgrade, skip to Step 25 to boot each node into cluster mode.
  23. To perform additional upgrade tasks, boot into noncluster mode.

    Ensure that all nodes in the cluster are shut down before you boot nodes into noncluster mode.

    • On SPARC based systems, perform the following command:
      ok boot -x
    • On x86 based systems, perform the following commands:
      1. In the GRUB menu, use the arrow keys to select the appropriate Oracle Solaris entry and type e to edit its commands.

        The GRUB menu appears similar to the following:

        GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
        +----------------------------------------------------------------------+
        | Solaris 10 /sol_10_x86                                               |
        | Solaris failsafe                                                     |
        |                                                                      |
        +----------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press enter to boot the selected OS, 'e' to edit the
        commands before booting, or 'c' for a command-line.

        For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in System Administration Guide: Basic Administration.

      2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.

        The GRUB boot parameters screen appears similar to the following:

        GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
        +----------------------------------------------------------------------+
        | root (hd0,0,a)                                                       |
        | kernel /platform/i86pc/multiboot                                     |
        | module /platform/i86pc/boot_archive                                  |
        +----------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press 'b' to boot, 'e' to edit the selected command in the
        boot sequence, 'c' for a command-line, 'o' to open a new line
        after ('O' for before) the selected line, 'd' to remove the
        selected line, or escape to go back to the main menu.
      3. Add -x to the command to specify that the system boot into noncluster mode.
        [ Minimal BASH-like line editing is supported. For the first word, TAB
        lists possible command completions. Anywhere else TAB lists the possible
        completions of a device/filename. ESC at any time exits. ]
        
        grub edit> kernel /platform/i86pc/multiboot -x
      4. Press Enter to accept the change and return to the boot parameters screen.

        The screen displays the edited command.

        GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
        +----------------------------------------------------------------------+
        | root (hd0,0,a)                                                       |
        | kernel /platform/i86pc/multiboot -x                                  |
        | module /platform/i86pc/boot_archive                                  |
        +----------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press 'b' to boot, 'e' to edit the selected command in the
        boot sequence, 'c' for a command-line, 'o' to open a new line
        after ('O' for before) the selected line, 'd' to remove the
        selected line, or escape to go back to the main menu.
      5. Type b to boot the node into noncluster mode.

        Note - This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.


      If the instructions say to run the init S command, shut down the system, and then change the GRUB kernel boot command to /platform/i86pc/multiboot -sx instead.

    The upgraded BE now runs in noncluster mode.

  24. Upgrade any software applications that require an upgrade and for which you cannot use Oracle Solaris Live Upgrade.

    Note - If an upgrade process directs you to reboot, always reboot into noncluster mode, as described in Step 23, until all upgrades are complete.


  25. After all nodes are upgraded, boot the nodes into cluster mode.
    1. Shut down each node.
      phys-schost# shutdown -g0 -y -i0
    2. When all nodes are shut down, boot each node into cluster mode.
      • On SPARC based systems, perform the following command:
        ok boot
      • On x86 based systems, perform the following commands:

        When the GRUB menu is displayed, select the appropriate Oracle Solaris entry and press Enter. The GRUB menu appears similar to the following:

        GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
        +-------------------------------------------------------------------------+
        | Solaris 10 /sol_10_x86                                                  |
        | Solaris failsafe                                                        |
        |                                                                         |
        +-------------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press enter to boot the selected OS, 'e' to edit the
        commands before booting, or 'c' for a command-line.

      The cluster upgrade is completed.
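
      As a quick check that each node rejoined the cluster, you might run a command similar to the following from one node and confirm that every node reports Online status:

      phys-schost# clnode status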

Example 4-1 Live Upgrade to Oracle Solaris Cluster 3.3 5/11 Software

This example shows a live upgrade of a cluster node. The example upgrades the SPARC based node to the Oracle Solaris 10 OS, Oracle Solaris Cluster 3.3 5/11 framework, and all Oracle Solaris Cluster data services that support the live upgrade method. In this example, sc31u4 is the original boot environment (BE). The new BE that is upgraded is named sc33u1 and uses the mount point /sc33u1. The directory /net/installmachine/export/solaris10/OS_image/ contains an image of the Oracle Solaris 10 OS. The installer state file is named sc33u1state.

The following commands typically produce copious output. This output is shown only where necessary for clarity.

phys-schost# lucreate -c sc31u4 -m /:/dev/dsk/c0t4d0s0:ufs -n sc33u1
…
lucreate: Creation of Boot Environment sc33u1 successful.

phys-schost# luupgrade -u -n sc33u1 -s /net/installmachine/export/solaris10/OS_image/
The Solaris upgrade of the boot environment sc33u1 is complete.
Apply patches

phys-schost# lumount sc33u1 /sc33u1

Insert the installation DVD-ROM.
phys-schost# cd /cdrom/cdrom0/Solaris_sparc
phys-schost# ./installer -no -saveState sc33u1state
phys-schost# ./installer -nodisplay -noconsole -state sc33u1state -altroot /sc33u1
phys-schost# cd /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster/Solaris_10/Tools
phys-schost# ./scinstall -u update -R /sc33u1
phys-schost# /sc33u1/usr/cluster/bin/scinstall -u update -s all \
-d /cdrom/cdrom0 -R /sc33u1
phys-schost# cd /
phys-schost# eject cdrom

phys-schost# luumount sc33u1
phys-schost# luactivate sc33u1
Activation of boot environment sc33u1 successful.
Upgrade all other nodes

Shut down all nodes
phys-schost# shutdown -y -g0 -i0
When all nodes are shut down, boot each node into cluster mode
ok boot

At this point, you might upgrade data-service applications that cannot use the live upgrade method, before you reboot into cluster mode.

Troubleshooting

DID device name errors - During the creation of the inactive BE, if you receive an error that a file system that you specified with its DID device name, /dev/did/dsk/dNsX, does not exist, but the device name does exist, specify the device by its physical device name. Then change the vfstab entry on the alternate BE to use the DID device name instead.

Mount point errors - During creation of the inactive boot environment, if you receive an error that the mount point that you supplied is not mounted, mount the mount point and rerun the lucreate command.

New BE boot errors - If you experience problems when you boot the newly upgraded environment, you can revert to your original BE. For specific information, see Chapter 6, Failure Recovery: Falling Back to the Original Boot Environment (Tasks), in Solaris 10 10/09 Installation Guide: Solaris Live Upgrade and Upgrade Planning.

Global-devices file-system errors - After you upgrade a cluster on which the root disk is encapsulated, you might see one of the following error messages on the cluster console during the first reboot of the upgraded BE:

mount: /dev/vx/dsk/bootdg/node@1 is already mounted or /global/.devices/node@1 is busy
Trying to remount /global/.devices/node@1
mount: /dev/vx/dsk/bootdg/node@1 is already mounted or /global/.devices/node@1 is busy

WARNING - Unable to mount one or more of the following filesystem(s):
    /global/.devices/node@1
If this is not repaired, global devices will be unavailable.
Run mount manually (mount filesystem...).
After the problems are corrected, please clear the maintenance flag on globaldevices by running the following command:
/usr/sbin/svcadm clear svc:/system/cluster/globaldevices:default

Dec 6 12:17:23 svc.startd[8]: svc:/system/cluster/globaldevices:default: Method "/usr/cluster/lib/svc/method/globaldevices start" failed with exit status 96.
[ system/cluster/globaldevices:default misconfigured (see 'svcs -x' for details) ]
Dec 6 12:17:25 Cluster.CCR: /usr/cluster/bin/scgdevs: Filesystem /global/.devices/node@1 is not available in /etc/mnttab.
Dec 6 12:17:25 Cluster.CCR: /usr/cluster/bin/scgdevs: Filesystem /global/.devices/node@1 is not available in /etc/mnttab.

These messages indicate that the vxio minor number is the same on each cluster node. Reminor the root disk group on each node so that each number is unique in the cluster. See How to Assign a New Minor Number to a Device Group in Oracle Solaris Cluster Software Installation Guide.

Next Steps

Go to Chapter 6, Completing the Upgrade.

See Also

You can keep your original, and now inactive, boot environment for as long as you need it. When you are satisfied that the upgrade is acceptable, you can either remove the old environment or continue to maintain it.

For information about how to maintain an inactive BE, see the appropriate version of the procedure Maintaining Solaris Live Upgrade Boot Environments (Tasks) for your original Solaris OS version.