Oracle Solaris Cluster Upgrade Guide

Chapter 5  Performing a Rolling Upgrade

Performing a Rolling Upgrade of a Cluster

Table 5-1 Task Map: Performing a Rolling Upgrade to Oracle Solaris Cluster 3.3 Software

  1. Read the upgrade requirements and restrictions.
  2. If a quorum server is used, upgrade the Quorum Server software.
  3. On one node of the cluster, move resource groups and device groups to another cluster node, and ensure that shared data and system disks are backed up. If Oracle Solaris Cluster Geographic Edition software is installed, uninstall it. If the cluster uses dual-string mediators for Solaris Volume Manager software, unconfigure the mediators. Then reboot the node into noncluster mode.
  4. Upgrade the Solaris OS on the cluster node, if necessary, to a supported Solaris update release.
  5. Upgrade the cluster node to Oracle Solaris Cluster 3.3 framework software. Optionally, upgrade data-service software. If necessary, upgrade applications.
  6. Repeat Tasks 3 through 5 on each remaining node to upgrade.
  7. Use the scversions command to commit the cluster to the upgrade.
  8. Verify successful completion of the upgrade to Oracle Solaris Cluster 3.3 software.
  9. Enable resources and bring resource groups online. Migrate existing resources to new resource types. Upgrade to Oracle Solaris Cluster Geographic Edition 3.3 software, if used.
  10. (Optional) SPARC: Upgrade the Oracle Solaris Cluster module for Sun Management Center.

How to Upgrade Quorum Server Software

If the cluster uses a quorum server, upgrade the Quorum Server software on the quorum server before you upgrade the cluster.


Note - If more than one cluster uses the quorum server, perform these steps for each of those clusters.


Perform all steps as superuser on the cluster and on the quorum server.

  1. If the cluster has two nodes and the quorum server is the cluster's only quorum device, temporarily add a second quorum device.

    See Adding a Quorum Device in Oracle Solaris Cluster System Administration Guide.

    If you add another quorum server as a temporary quorum device, the quorum server can run the same software version as the quorum server that you are upgrading, or it can run the 3.3 version of Quorum Server software.

  2. Unconfigure the quorum server from each cluster that uses the quorum server.
    phys-schost# clquorum remove quorumserver
  3. From the quorum server to upgrade, verify that the quorum server no longer serves any cluster.
    quorumserver# clquorumserver show +

    If the output shows any cluster is still served by the quorum server, unconfigure the quorum server from that cluster. Then repeat this step to confirm that the quorum server is no longer configured with any cluster.


    Note - If you have unconfigured the quorum server from a cluster but the clquorumserver show command still reports that the quorum server is serving that cluster, the command might be reporting stale configuration information. See Cleaning Up Stale Quorum Server Cluster Information in Oracle Solaris Cluster System Administration Guide.


  4. From the quorum server to upgrade, halt all quorum server instances.
    quorumserver# clquorumserver stop +
  5. Uninstall the Quorum Server software from the quorum server to upgrade.
    1. Navigate to the directory where the uninstaller is located.
      quorumserver# cd /var/sadm/prod/SUNWentsysver
      ver

      The version that is installed on your system.

    2. Start the uninstallation wizard.
      quorumserver# ./uninstall
    3. Follow instructions on the screen to uninstall the Quorum Server software from the quorum-server host computer.

      After removal is finished, you can view any available log. See Chapter 8, Uninstalling, in Sun Java Enterprise System 5 Update 1 Installation Guide for UNIX for additional information about using the uninstall program.

    4. (Optional) Clean up or remove the quorum server directories.

      By default, this directory is /var/scqsd.
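
      If you want to preserve the old quorum server state before deleting it, one approach is to archive the directory first. The following is a minimal sketch, demonstrated on a scratch directory so it can be run safely anywhere; on the quorum server you would set QSDIR=/var/scqsd instead, and the backup file name and location are examples only.

      ```shell
      # Archive, then remove, the quorum server working directory.
      # Demonstrated on a scratch directory with stand-in content; on the
      # quorum server, set QSDIR=/var/scqsd. The tar file path is arbitrary.
      QSDIR=/tmp/scqsd.demo
      mkdir -p "$QSDIR" && echo state > "$QSDIR/quorum_data"   # stand-in content

      tar cf "${QSDIR}.tar" -C "$(dirname "$QSDIR")" "$(basename "$QSDIR")"
      rm -rf "$QSDIR"
      ```

      Keeping an archive costs little and lets you inspect the old configuration later if the reinstallation raises questions.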

  6. Install the Oracle Solaris Cluster 3.3 Quorum Server software, reconfigure the quorum server, and start the quorum server daemon.

    Follow the steps in How to Install and Configure Quorum Server Software in Oracle Solaris Cluster Software Installation Guide for installing the Quorum Server software.

  7. From a cluster node, configure the upgraded quorum server as a quorum device.

    Follow the steps in How to Configure Quorum Devices in Oracle Solaris Cluster Software Installation Guide.

  8. If you configured a temporary quorum device, unconfigure it.
    phys-schost# clquorum remove tempquorum

How to Prepare a Cluster Node for a Rolling Upgrade

Perform this procedure on one node at a time. You will take the node that you are upgrading out of the cluster while the remaining nodes continue to function as active cluster members.

Before You Begin

Perform the following tasks:

  1. Ensure that the cluster is functioning normally.
    1. View the current status of the cluster by running the following command from any node.
      phys-schost% cluster status

      See the cluster(1CL) man page for more information.

    2. Search the /var/adm/messages log on the same node for unresolved error messages or warning messages.
    3. Check the volume-manager status.
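
    The log scan in Step 1b amounts to a grep over the system log. The sketch below demonstrates it on a scratch file with invented log lines so it can be run anywhere; on a cluster node you would set LOG=/var/adm/messages and review each hit by hand.

    ```shell
    # Scan a system log for unresolved errors or warnings (Step 1b).
    # Demonstrated on a scratch file with invented entries; on the node,
    # set LOG=/var/adm/messages instead.
    LOG=/tmp/messages.demo
    printf '%s\n' \
      'Jun  1 10:00:00 phys-schost-1 genunix: [ID 936769 kern.info] normal startup' \
      'Jun  1 10:05:00 phys-schost-1 scsi: [ID 107833 kern.warning] WARNING: disk offline' \
      > "$LOG"

    # Case-insensitive match on "error" or "warning"; inspect every match.
    grep -iE 'error|warning' "$LOG"
    ```

    Any line this prints deserves investigation before you proceed with the upgrade.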
  2. If necessary, notify users that cluster services might be temporarily interrupted during the upgrade.

    Service interruption will be approximately the amount of time that your cluster normally takes to switch services to another node.

  3. If you are upgrading to Oracle Solaris Cluster 3.3 software and Oracle Solaris Cluster Geographic Edition software is installed, uninstall it.

    For uninstallation procedures, see the documentation for your version of Oracle Solaris Cluster Geographic Edition software.

  4. Become superuser on a node of the cluster.
  5. Move all resource groups and device groups that are running on the node to upgrade.
    phys-schost# clnode evacuate node-to-evacuate

    See the clnode(1CL) man page for more information.

  6. Verify that the move was completed successfully.
    phys-schost# cluster status -t devicegroup,resourcegroup
  7. Ensure that the system disk, applications, and all data are backed up.
  8. If you will upgrade the Solaris OS and your cluster uses dual-string mediators for Solaris Volume Manager software, unconfigure your mediators.

    See Configuring Dual-String Mediators in Oracle Solaris Cluster Software Installation Guide for more information.

    1. Run the following command to verify that no mediator data problems exist.
      phys-schost# medstat -s setname
      -s setname

      Specifies the disk set name

      If the value in the Status field is Bad, repair the affected mediator host. Follow the procedure How to Fix Bad Mediator Data in Oracle Solaris Cluster Software Installation Guide.

    2. List all mediators.

      Save this information for when you restore the mediators during the procedure How to Commit the Upgraded Cluster to Oracle Solaris Cluster 3.3 Software.

    3. For a disk set that uses mediators, take ownership of the disk set if no node already has ownership.
      phys-schost# cldevicegroup switch -n node devicegroup
    4. Unconfigure all mediators for the disk set.
      phys-schost# metaset -s setname -d -m mediator-host-list
      -s setname

      Specifies the disk-set name

      -d

      Deletes from the disk set

      -m mediator-host-list

      Specifies the name of the node to remove as a mediator host for the disk set

      See the mediator(7D) man page for further information about mediator-specific options to the metaset command.

    5. Repeat these steps for each remaining disk set that uses mediators.
  9. Shut down the node that you want to upgrade and boot it into noncluster mode.
    • On SPARC based systems, perform the following commands:
      phys-schost# shutdown -y -g0
      ok boot -x
    • On x86 based systems, perform the following commands:
      1. In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.

        The GRUB menu appears similar to the following:

        GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
        +-------------------------------------------------------------------------+
        | Solaris 10 /sol_10_x86                                                  |
        | Solaris failsafe                                                        |
        |                                                                         |
        +-------------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press enter to boot the selected OS, 'e' to edit the
        commands before booting, or 'c' for a command-line.

        For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in System Administration Guide: Basic Administration.

      2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.

        The GRUB boot parameters screen appears similar to the following:

        GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
        +----------------------------------------------------------------------+
        | root (hd0,0,a)                                                       |
        | kernel /platform/i86pc/multiboot                                     |
        | module /platform/i86pc/boot_archive                                  |
        +----------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press 'b' to boot, 'e' to edit the selected command in the
        boot sequence, 'c' for a command-line, 'o' to open a new line
        after ('O' for before) the selected line, 'd' to remove the
        selected line, or escape to go back to the main menu.
      3. Add -x to the command to specify that the system boot into noncluster mode.
        [ Minimal BASH-like line editing is supported. For the first word, TAB
        lists possible command completions. Anywhere else TAB lists the possible
        completions of a device/filename. ESC at any time exits. ]
        
        grub edit> kernel /platform/i86pc/multiboot -x
      4. Press Enter to accept the change and return to the boot parameters screen.

        The screen displays the edited command.

        GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
        +----------------------------------------------------------------------+
        | root (hd0,0,a)                                                       |
        | kernel /platform/i86pc/multiboot -x                                  |
        | module /platform/i86pc/boot_archive                                  |
        +----------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press 'b' to boot, 'e' to edit the selected command in the
        boot sequence, 'c' for a command-line, 'o' to open a new line
        after ('O' for before) the selected line, 'd' to remove the
        selected line, or escape to go back to the main menu.
      5. Type b to boot the node into noncluster mode.

        Note - This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.


    The other nodes of the cluster continue to function as active cluster members.

Next Steps

To upgrade the Solaris software to a Maintenance Update release, go to How to Perform a Rolling Upgrade of a Solaris Maintenance Update.


Note - The cluster must already run on, or be upgraded to, at least the minimum required level of the Solaris OS to support Oracle Solaris Cluster 3.3 software. See the Oracle Solaris Cluster 3.3 Release Notes for information about supported releases of the Solaris OS.


Otherwise, if you do not intend to upgrade the Solaris OS, go to How to Perform a Rolling Upgrade of Oracle Solaris Cluster 3.3 Software.

How to Perform a Rolling Upgrade of a Solaris Maintenance Update

Perform this procedure to upgrade the Solaris OS to a supported Maintenance Update release.


Note - You cannot perform a rolling upgrade to upgrade a cluster from Solaris 9 to Oracle Solaris 10 software. Go to Choosing an Oracle Solaris Cluster Upgrade Method to identify the appropriate upgrade method to use.


Before You Begin

Ensure that all steps in How to Prepare a Cluster Node for a Rolling Upgrade are completed.

  1. Temporarily comment out all entries for globally mounted file systems in the node's /etc/vfstab file.

    Perform this step to prevent the Solaris upgrade from attempting to mount the global devices.
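
    One way to comment out the global entries in bulk is with awk, matching lines whose mount-options field contains global. The sketch below runs on a scratch file with invented device names so it is safe to try anywhere; on the node you would back up /etc/vfstab and run the awk against the real file.

    ```shell
    # Comment out vfstab entries whose mount options include "global".
    # Demonstrated on a scratch file with hypothetical entries; on the
    # node, back up /etc/vfstab first and operate on the real file.
    VFSTAB=/tmp/vfstab.demo
    printf '%s\n' \
      '/dev/md/dg/dsk/d100 /dev/md/dg/rdsk/d100 /global/nfs ufs 2 yes global' \
      '/dev/dsk/c0t0d0s1 - - swap - no -' \
      > "$VFSTAB"

    # The mount-options field is the last column; match "global" anywhere
    # in it (it may be combined, e.g. "global,logging").
    awk '$0 !~ /^#/ && $NF ~ /global/ {print "#" $0; next} {print}' \
      "$VFSTAB" > "${VFSTAB}.new" && mv "${VFSTAB}.new" "$VFSTAB"
    ```

    Prefixing rather than deleting the lines makes Step 3 (uncommenting them after the Maintenance Update) a simple reversal.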

  2. Follow the instructions in the Solaris maintenance update installation guide to install the Maintenance Update release.

    Note - Do not reboot the node when prompted to reboot at the end of installation processing.


  3. Uncomment all entries in the /etc/vfstab file for globally mounted file systems that you commented out in Step 1.
  4. Install any required Solaris software patches and hardware-related patches, and download any needed firmware that is contained in the hardware patches.

    Note - Do not reboot the node until Step 5.


  5. Reboot the node into noncluster mode.
    • On SPARC based systems, perform the following commands:
      phys-schost# shutdown -y -g0
      ok boot -x
    • On x86 based systems, perform the following commands:
      1. In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.

        The GRUB menu appears similar to the following:

        GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
        +-------------------------------------------------------------------------+
        | Solaris 10 /sol_10_x86                                                  |
        | Solaris failsafe                                                        |
        |                                                                         |
        +-------------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press enter to boot the selected OS, 'e' to edit the
        commands before booting, or 'c' for a command-line.

        For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in System Administration Guide: Basic Administration.

      2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.

        The GRUB boot parameters screen appears similar to the following:

        GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
        +----------------------------------------------------------------------+
        | root (hd0,0,a)                                                       |
        | kernel /platform/i86pc/multiboot                                     |
        | module /platform/i86pc/boot_archive                                  |
        +----------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press 'b' to boot, 'e' to edit the selected command in the
        boot sequence, 'c' for a command-line, 'o' to open a new line
        after ('O' for before) the selected line, 'd' to remove the
        selected line, or escape to go back to the main menu.
      3. Add -x to the command to specify that the system boot into noncluster mode.
        [ Minimal BASH-like line editing is supported. For the first word, TAB
        lists possible command completions. Anywhere else TAB lists the possible
        completions of a device/filename. ESC at any time exits. ]
        
        grub edit> kernel /platform/i86pc/multiboot -x
      4. Press Enter to accept the change and return to the boot parameters screen.

        The screen displays the edited command.

        GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
        +----------------------------------------------------------------------+
        | root (hd0,0,a)                                                       |
        | kernel /platform/i86pc/multiboot -x                                  |
        | module /platform/i86pc/boot_archive                                  |
        +----------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press 'b' to boot, 'e' to edit the selected command in the
        boot sequence, 'c' for a command-line, 'o' to open a new line
        after ('O' for before) the selected line, 'd' to remove the
        selected line, or escape to go back to the main menu.
      5. Type b to boot the node into noncluster mode.

        Note - This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.


Next Steps

Go to How to Perform a Rolling Upgrade of Oracle Solaris Cluster 3.3 Software.

How to Perform a Rolling Upgrade of Oracle Solaris Cluster 3.3 Software

Perform this procedure to upgrade a node that runs Oracle Solaris Cluster 3.3 software while the remaining cluster nodes are in cluster mode.


Note - Until all nodes of the cluster are upgraded and the upgrade is committed, new features that are introduced by the new release might not be available.


  1. Become superuser on the node of the cluster.
  2. If you upgraded the Solaris OS but do not need to upgrade to an Oracle Solaris Cluster update release, skip to Step 13.
  3. Load the installation DVD-ROM into the DVD-ROM drive.

    If the volume management daemon vold(1M) is running and is configured to manage CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0 directory.

  4. Change to the /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/ directory, where arch is sparc or x86, and where ver is 10 for Oracle Solaris 10.
    phys-schost# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools
  5. Start the scinstall utility.
    phys-schost# ./scinstall

    Note - Do not use the /usr/cluster/bin/scinstall command that is already installed on the node. You must use the scinstall command that is located on the installation DVD-ROM.


    The scinstall Main Menu is displayed.

  6. Choose the menu item, Upgrade This Cluster Node.
      *** Main Menu ***
    
        Please select from one of the following (*) options:
    
            1) Create a new cluster or add a cluster node
            2) Configure a cluster to be JumpStarted from this install server
          * 3) Manage a dual-partition upgrade
          * 4) Upgrade this cluster node
          * 5) Print release information for this cluster node
     
          * ?) Help with menu options
          * q) Quit
    
        Option:  4

    The Upgrade Menu is displayed.

  7. Choose the menu item, Upgrade Oracle Solaris Cluster Framework on This Node.
  8. Follow the menu prompts to upgrade the cluster framework.

    Upgrade processing is finished when the system displays the message Completed Oracle Solaris Cluster framework upgrade and prompts you to press Enter to continue.

  9. Quit the scinstall utility.
  10. (Optional) Upgrade data service packages.

    Note - For HA for SAP Web Application Server, if you are using a J2EE engine resource or a web application server component resource or both, you must delete the resource and re-create it with the new web application server component resource. Changes in the new web application server component resource include integration of the J2EE functionality. For more information, see Oracle Solaris Cluster Data Service for SAP Web Application Server Guide.


    1. Start the upgraded interactive scinstall utility.
      phys-schost# /usr/cluster/bin/scinstall

      Note - Do not use the scinstall utility that is on the installation media to upgrade data service packages.


      The scinstall Main Menu is displayed.

    2. Choose the menu item, Upgrade This Cluster Node.

      The Upgrade Menu is displayed.

    3. Choose the menu item, Upgrade Oracle Solaris Cluster Data Service Agents on This Node.
    4. Follow the menu prompts to upgrade Oracle Solaris Cluster data service agents that are installed on the node.

      You can choose from the list of data services that are available to upgrade or choose to upgrade all installed data services.

    5. When the system displays the message Completed upgrade of Oracle Solaris Cluster data services agents, press Return.

      The Upgrade Menu is displayed.

  11. Quit the scinstall utility.
  12. Unload the installation DVD-ROM from the DVD-ROM drive.
    1. To ensure that the DVD-ROM is not being used, change to a directory that does not reside on the DVD-ROM.
    2. Eject the DVD-ROM.
      phys-schost# eject cdrom
  13. If you have HA for NFS configured on a highly available local file system, ensure that the loopback file system (LOFS) is disabled.

    Note - If you have non-global zones configured, LOFS must remain enabled. For guidelines about using LOFS and alternatives to disabling it, see Cluster File Systems in Oracle Solaris Cluster Software Installation Guide.


    As of the Sun Cluster 3.2 release, LOFS is no longer disabled by default during Oracle Solaris Cluster software installation or upgrade. To disable LOFS, ensure that the /etc/system file contains the following entry:

    exclude:lofs

    This change becomes effective at the next system reboot.
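
    An idempotent way to add the entry is to test for it before appending, so that re-running the step never duplicates the line. The sketch below uses a scratch file so it can be run safely anywhere; on the node you would set SYSFILE=/etc/system.

    ```shell
    # Add exclude:lofs to /etc/system only if it is not already present.
    # Demonstrated on a scratch file; on the node, set SYSFILE=/etc/system.
    SYSFILE=/tmp/system.demo
    : > "$SYSFILE"                       # stand-in for the real /etc/system

    grep -q '^exclude:lofs' "$SYSFILE" || echo 'exclude:lofs' >> "$SYSFILE"
    grep -q '^exclude:lofs' "$SYSFILE" || echo 'exclude:lofs' >> "$SYSFILE"
    # Running the command twice still leaves exactly one entry.
    ```

    Remember that, as noted above, the exclusion takes effect only after the next reboot.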

  14. As needed, manually upgrade any custom data services that are not supplied on the product media.
  15. Verify that each data-service update is installed successfully.

    View the upgrade log file that is referenced at the end of the upgrade output messages.

  16. Upgrade software applications that are installed on the cluster.

    If you want to upgrade VxVM and did not upgrade the Solaris OS, follow procedures in Veritas Storage Foundation installation documentation to upgrade VxVM without upgrading the operating system.


    Note - If any upgrade procedure instructs you to perform a reboot, you must add the -x option to the boot command. This option boots the node into noncluster mode.


    Ensure that application levels are compatible with the current versions of Oracle Solaris Cluster and Solaris software. See your application documentation for installation instructions.

  17. Shut down the node.
    phys-schost# shutdown -g0 -y
  18. Reboot the node into the cluster.
    • On SPARC based systems, perform the following command:

      ok boot
    • On x86 based systems, perform the following commands:

      When the GRUB menu is displayed, select the appropriate Solaris entry and press Enter. The GRUB menu appears similar to the following:

      GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
      +-------------------------------------------------------------------------+
      | Solaris 10 /sol_10_x86                                                  |
      | Solaris failsafe                                                        |
      |                                                                         |
      +-------------------------------------------------------------------------+
      Use the ^ and v keys to select which entry is highlighted.
      Press enter to boot the selected OS, 'e' to edit the
      commands before booting, or 'c' for a command-line.
  19. Return to How to Prepare a Cluster Node for a Rolling Upgrade and repeat all upgrade procedures on the next node to upgrade.

    Repeat this process until all nodes in the cluster are upgraded.

Next Steps

When all nodes in the cluster are upgraded, go to Chapter 6, Completing the Upgrade.