
Performing a Standard Upgrade of a Cluster

The following table lists the tasks to perform to upgrade to Oracle Solaris Cluster 3.3 software. You also perform these tasks to upgrade only the Solaris OS.


Note - If you upgrade the Solaris OS to a new marketing release, such as from Solaris 9 to Oracle Solaris 10 software, you must also upgrade the Oracle Solaris Cluster software and dependency software to the version that is compatible with the new OS version.


Table 2-1 Task Map: Performing a Standard Upgrade to Oracle Solaris Cluster 3.3 Software

1. Read the upgrade requirements and restrictions. Determine the proper upgrade method for your configuration and needs.
2. If a quorum server is used, upgrade the Quorum Server software.
3. Remove the cluster from production and back up shared data. If Oracle Solaris Cluster Geographic Edition software is installed, uninstall it.
4. Upgrade the Solaris software, if necessary, to a supported Solaris update. If the cluster uses dual-string mediators for Solaris Volume Manager software, unconfigure the mediators. As needed, upgrade Veritas Volume Manager (VxVM) and Veritas File System (VxFS). Solaris Volume Manager software is automatically upgraded with the Solaris OS.
5. Upgrade to Oracle Solaris Cluster 3.3 framework and data-service software. If necessary, upgrade applications. If the cluster uses dual-string mediators and you upgraded the Solaris OS, reconfigure the mediators. If you upgraded VxVM, upgrade disk groups.
6. Use the scversions command to commit the cluster to the upgrade.
7. Verify successful completion of upgrade to Oracle Solaris Cluster 3.3 software.
8. Enable resources and bring resource groups online. Migrate existing resources to new resource types. Upgrade to Oracle Solaris Cluster Geographic Edition 3.3 software, if used.
9. (Optional) SPARC: Upgrade the Oracle Solaris Cluster module for Sun Management Center, if needed.

How to Upgrade Quorum Server Software

If the cluster uses a quorum server, upgrade the Quorum Server software on the quorum server before you upgrade the cluster.


Note - If more than one cluster uses the quorum server, perform the steps to remove the quorum server, and later the steps to add it back, on each of those clusters.


Perform all steps as superuser on the cluster and on the quorum server.

  1. If the cluster has two nodes and the quorum server is the cluster's only quorum device, temporarily add a second quorum device.

    See Adding a Quorum Device in Oracle Solaris Cluster System Administration Guide.

    If you add another quorum server as a temporary quorum device, the quorum server can run the same software version as the quorum server that you are upgrading, or it can run the 3.3 version of Quorum Server software.
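    For example, you might add a second quorum server as a temporary quorum device with a command similar to the following, run from a cluster node. The quorum-server host name, port number, and quorum-device name used here are placeholders.

    phys-schost# clquorum add -t quorum_server -p qshost=qs-temp -p port=9000 tempquorum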

  2. Unconfigure the quorum server from each cluster that uses the quorum server.
    phys-schost# clquorum remove quorumserver
  3. From the quorum server to upgrade, verify that the quorum server no longer serves any cluster.
    quorumserver# clquorumserver show +

    If the output shows any cluster is still served by the quorum server, unconfigure the quorum server from that cluster. Then repeat this step to confirm that the quorum server is no longer configured with any cluster.


    Note - If you have unconfigured the quorum server from a cluster but the clquorumserver show command still reports that the quorum server is serving that cluster, the command might be reporting stale configuration information. See Cleaning Up Stale Quorum Server Cluster Information in Oracle Solaris Cluster System Administration Guide.


  4. From the quorum server to upgrade, halt all quorum server instances.
    quorumserver# clquorumserver stop +
  5. Uninstall the Quorum Server software from the quorum server to upgrade.
    1. Navigate to the directory where the uninstaller is located.
      quorumserver# cd /var/sadm/prod/SUNWentsysver
      ver

      The version that is installed on your system.

    2. Start the uninstallation wizard.
      quorumserver# ./uninstall
    3. Follow instructions on the screen to uninstall the Quorum Server software from the quorum-server host computer.

      After removal is finished, you can view any available log. See Chapter 8, Uninstalling, in Sun Java Enterprise System 5 Update 1 Installation Guide for UNIX for additional information about using the uninstall program.

    4. (Optional) Clean up or remove the quorum server directories.

      By default, this directory is /var/scqsd.

  6. Install the Oracle Solaris Cluster 3.3 Quorum Server software, reconfigure the quorum server, and start the quorum server daemon.

    Follow the steps in How to Install and Configure Quorum Server Software in Oracle Solaris Cluster Software Installation Guide for installing the Quorum Server software.
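    For example, after installation the quorum server configuration and startup might look similar to the following. The port number and quorum directory shown here are defaults used as placeholders; see the referenced procedure for the exact configuration steps.

    quorumserver# cat /etc/scqsd/scqsd.conf
    /usr/cluster/lib/sc/scqsd -d /var/scqsd -p 9000
    quorumserver# clquorumserver start +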

  7. From a cluster node, configure the upgraded quorum server as a quorum device.

    Follow the steps in How to Configure Quorum Devices in Oracle Solaris Cluster Software Installation Guide.

  8. If you configured a temporary quorum device, unconfigure it.
    phys-schost# clquorum remove tempquorum

How to Prepare the Cluster for Upgrade (Standard)

Perform this procedure to remove the cluster from production before you perform a standard upgrade. Perform all steps from the global zone only.

Before You Begin

Perform the following tasks:

  1. Ensure that the cluster is functioning normally.
    1. View the current status of the cluster by running the following command from any node.
      • On Sun Cluster 3.1 8/05 software, use the following command:

        phys-schost% scstat
      • On Sun Cluster 3.2 or Oracle Solaris Cluster 3.3 software, use the following command:

        phys-schost% cluster status

      See the scstat(1M) or cluster(1CL) man page for more information.

    2. Search the /var/adm/messages log on the same node for unresolved error messages or warning messages.
    3. Check the volume-manager status.
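      For example, you might check the log and the volume-manager status with commands similar to the following. The volume-manager command depends on which volume manager the cluster uses: metastat for Solaris Volume Manager or vxprint for VxVM.

      phys-schost# egrep -i "error|warning" /var/adm/messages
      phys-schost# metastat
      phys-schost# vxprint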
  2. Notify users that cluster services will be unavailable during the upgrade.
  3. If Geographic Edition software is installed, uninstall it.

    For uninstallation procedures, see the documentation for your version of Geographic Edition software.

  4. Become superuser on a node of the cluster.
  5. Take each resource group offline and disable all resources.

    Take offline all resource groups in the cluster, including those that are in non-global zones. Then disable all resources, to prevent the cluster from bringing the resources online automatically if a node is mistakenly rebooted into cluster mode.

    • If you want to use the scsetup or clsetup utility, perform the following steps:
      1. Start the utility.
        • On Sun Cluster 3.1 8/05 software, use the following command:

          phys-schost# scsetup
        • On Sun Cluster 3.2 or Oracle Solaris Cluster 3.3 software, use the following command:

          phys-schost# clsetup

        The Main Menu is displayed.

      2. Choose the menu item, Resource Groups.

        The Resource Group Menu is displayed.

      3. Choose the menu item, Online/Offline or Switchover a Resource Group.
      4. Follow the prompts to take offline all resource groups and to put them in the unmanaged state.
      5. When all resource groups are offline, type q to return to the Resource Group Menu.
      6. Exit the scsetup utility.

        Type q to back out of each submenu or press Ctrl-C.

    • To use the command line, perform the following steps:
      1. Take each resource group offline.
        • On Sun Cluster 3.1 8/05 software, use the following command:

          phys-schost# scswitch -F -g resource-group
          -F

          Switches a resource group offline.

          -g resource-group

          Specifies the name of the resource group to take offline.

        • On Sun Cluster 3.2 or Oracle Solaris Cluster 3.3 software, use the following command:

          phys-schost# clresourcegroup offline resource-group
      2. From any node, list all enabled resources in the cluster.
        • On Sun Cluster 3.1 8/05 software, use the following command:

          phys-schost# scrgadm -pv | grep "Res enabled"
          (resource-group:resource) Res enabled: True
        • On Sun Cluster 3.2 or Oracle Solaris Cluster 3.3 software, use the following command:

          phys-schost# clresource show -p Enabled
          === Resources ===
          
          Resource:                                       resource
            Enabled{nodename1}:                               True
            Enabled{nodename2}:                               True
          …
      3. Identify those resources that depend on other resources.
        phys-schost# clresource show -p resource_dependencies
        === Resources ===
        
        Resource:                                       node
          Resource_dependencies:                           node

        You must disable dependent resources before you disable the resources that they depend on.

      4. Disable each enabled resource in the cluster.
        • On Sun Cluster 3.1 8/05 software, use the following command:

          phys-schost# scswitch -n -j resource
          -n

          Disables.

          -j resource

          Specifies the resource.

        • On Sun Cluster 3.2 or Oracle Solaris Cluster 3.3 software, use the following command:

          phys-schost# clresource disable resource

        See the scswitch(1M) or clresource(1CL) man page for more information.

      5. Verify that all resources are disabled.
        • On Sun Cluster 3.1 8/05 software, use the following command:

          phys-schost# scrgadm -pv | grep "Res enabled"
          (resource-group:resource) Res enabled: False
        • On Sun Cluster 3.2 or Oracle Solaris Cluster 3.3 software, use the following command:

          phys-schost# clresource show -p Enabled
          === Resources ===
          
          Resource:                                       resource
            Enabled{nodename1}:                               False
            Enabled{nodename2}:                               False
          …
      6. Move each resource group to the unmanaged state.
        • On Sun Cluster 3.1 8/05 software, use the following command:

          phys-schost# scswitch -u -g resource-group
          -u

          Moves the specified resource group to the unmanaged state.

          -g resource-group

          Specifies the name of the resource group to move into the unmanaged state.

        • On Sun Cluster 3.2 or Oracle Solaris Cluster 3.3 software, use the following command:

          phys-schost# clresourcegroup unmanage resource-group
  6. Verify that all resources on all nodes are Offline and that all resource groups are in the Unmanaged state.
    • On Sun Cluster 3.1 8/05 software, use the following command:

      phys-schost# scstat -g
    • On Sun Cluster 3.2 or Oracle Solaris Cluster 3.3 software, use the following command:

      phys-schost# cluster status -t resource,resourcegroup
  7. Stop all applications that are running on each node of the cluster.
  8. Ensure that all shared data is backed up.
  9. If you will upgrade the Solaris OS and your cluster uses dual-string mediators for Solaris Volume Manager software, unconfigure your mediators.

    See Configuring Dual-String Mediators in Oracle Solaris Cluster Software Installation Guide for more information about mediators.

    1. Run the following command to verify that no mediator data problems exist.
      phys-schost# medstat -s setname
      -s setname

      Specifies the disk set name.

      If the value in the Status field is Bad, repair the affected mediator host. Follow the procedure How to Fix Bad Mediator Data in Oracle Solaris Cluster Software Installation Guide.

    2. List all mediators.

      Save this information for when you restore the mediators during the procedure How to Finish Upgrade to Oracle Solaris Cluster 3.3 Software.
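      For example, you might display the mediator hosts that are configured for each disk set with a command similar to the following, where setname is a placeholder for the disk set name:

      phys-schost# metaset -s setname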

    3. For a disk set that uses mediators, take ownership of the disk set if no node already has ownership.
      • On Sun Cluster 3.1 8/05 software, use the following command:

        phys-schost# scswitch -z -D setname -h node
        -z

        Changes mastery.

        -D setname

        Specifies the name of the disk set.

        -h node

        Specifies the name of the node to become primary of the disk set.

      • On Sun Cluster 3.2 or Oracle Solaris Cluster 3.3 software, use the following command:

        phys-schost# cldevicegroup switch -n node devicegroup
    4. Unconfigure all mediators for the disk set.
      phys-schost# metaset -s setname -d -m mediator-host-list
      -s setname

      Specifies the disk set name.

      -d

      Deletes from the disk set.

      -m mediator-host-list

      Specifies the name of the node to remove as a mediator host for the disk set.

      See the mediator(7D) man page for further information about mediator-specific options to the metaset command.

    5. Repeat Step 3 and Step 4 for each remaining disk set that uses mediators.
  10. From one node, shut down the cluster.
    • On Sun Cluster 3.1 8/05 software, use the following command:

      phys-schost# scshutdown -g0 -y
    • On Sun Cluster 3.2 or Oracle Solaris Cluster 3.3 software, use the following command:

      phys-schost# cluster shutdown -g0 -y

    See the scshutdown(1M) man page for more information.

  11. Boot each node into noncluster mode.
    • On SPARC based systems, perform the following command:
      ok boot -x
    • On x86 based systems, perform the following commands:
      1. In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.

        The GRUB menu appears similar to the following:

        GNU GRUB version 0.97 (639K lower / 1047488K upper memory)
        +----------------------------------------------------------------------+
        | Solaris 10 /sol_10_x86                                               | 
        | Solaris failsafe                                                     |
        |                                                                      |
        +----------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press enter to boot the selected OS, 'e' to edit the
        commands before booting, or 'c' for a command-line.

        For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in System Administration Guide: Basic Administration.

      2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.

        The GRUB boot parameters screen appears similar to the following:

        GNU GRUB version 0.97 (639K lower / 1047488K upper memory)
        +----------------------------------------------------------------------+
        | root (hd0,0,a)                                                       | 
        | kernel /platform/i86pc/multiboot                                     | 
        | module /platform/i86pc/boot_archive                                  | 
        +----------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press 'b' to boot, 'e' to edit the selected command in the
        boot sequence, 'c' for a command-line, 'o' to open a new line
        after ('O' for before) the selected line, 'd' to remove the
        selected line, or escape to go back to the main menu.
      3. Add -x to the command to specify that the system boot into noncluster mode.
        [ Minimal BASH-like line editing is supported. For the first word, TAB
        lists possible command completions. Anywhere else TAB lists the possible
        completions of a device/filename. ESC at any time exits. ]
        
        grub edit> kernel /platform/i86pc/multiboot -x
      4. Press Enter to accept the change and return to the boot parameters screen.

        The screen displays the edited command.

        GNU GRUB version 0.97 (639K lower / 1047488K upper memory)
        +----------------------------------------------------------------------+
        | root (hd0,0,a)                                                       |
        | kernel /platform/i86pc/multiboot -x                                  |
        | module /platform/i86pc/boot_archive                                  |
        +----------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press 'b' to boot, 'e' to edit the selected command in the
        boot sequence, 'c' for a command-line, 'o' to open a new line
        after ('O' for before) the selected line, 'd' to remove the
        selected line, or escape to go back to the main menu.
      5. Type b to boot the node into noncluster mode.

        Note - This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.


  12. Ensure that each system disk is backed up.
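    For example, on a UFS root disk you might create a level 0 backup with a command similar to the following. The backup file location and the disk device name are placeholders.

    phys-schost# ufsdump 0ucf /backup/phys-schost-1-root.dump /dev/rdsk/c0t0d0s0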

Next Steps

Upgrade software on each node.

How to Upgrade the Solaris OS and Volume Manager Software (Standard)

Perform this procedure on each node in the cluster to upgrade the Solaris OS and optionally also VxVM, if used. Perform all steps from the global zone only. If the cluster already runs on a version of the Solaris OS that supports Oracle Solaris Cluster 3.3 software, further upgrade of the Solaris OS is optional.

If you do not intend to upgrade the Solaris OS or volume management software, proceed to How to Upgrade Oracle Solaris Cluster 3.3 Software (Standard).


Note - The cluster must already run on, or be upgraded to, at least the minimum required level of the Oracle Solaris OS to support upgrade to Oracle Solaris Cluster 3.3 software. See Supported Products in Oracle Solaris Cluster 3.3 Release Notes for more information.


Before You Begin

Ensure that all steps in How to Prepare the Cluster for Upgrade (Standard) are completed.

  1. Become superuser on the cluster node to upgrade.

    If you are performing a dual-partition upgrade, the node must be a member of the partition that is in noncluster mode.

  2. Determine whether the following Apache run-control scripts exist and are enabled or disabled:
    /etc/rc0.d/K16apache
    /etc/rc1.d/K16apache
    /etc/rc2.d/K16apache
    /etc/rc3.d/S50apache
    /etc/rcS.d/K16apache

    Some applications, such as Oracle Solaris Cluster HA for Apache, require that Apache run control scripts be disabled.

    • If these scripts exist and contain an uppercase K or S in the file name, the scripts are enabled. No further action is necessary for these scripts.

    • If these scripts do not exist, in Step 7 you must ensure that any Apache run control scripts that are installed during the Solaris OS upgrade are disabled.

    • If these scripts exist but the file names contain a lowercase k or s, the scripts are disabled. In Step 7 you must ensure that any Apache run control scripts that are installed during the Solaris OS upgrade are disabled.
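    For example, you might check which of these scripts exist, and whether their file names begin with an uppercase or lowercase letter, with a command similar to the following. If a script does not exist, the ls command reports that the file is not found.

    phys-schost# ls -l /etc/rc0.d/*apache /etc/rc1.d/*apache /etc/rc2.d/*apache /etc/rc3.d/*apache /etc/rcS.d/*apache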

  3. Comment out all entries for globally mounted file systems in the node's /etc/vfstab file.
    1. For later reference, make a record of all entries that are already commented out.
    2. Temporarily comment out all entries for globally mounted file systems in the /etc/vfstab file.

      Entries for globally mounted file systems contain the global mount option. Comment out these entries to prevent the Solaris upgrade from attempting to mount the global devices.
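      For example, a commented-out vfstab entry for a globally mounted file system might look similar to the following. The device names and mount point are placeholders.

      #/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle ufs 2 yes global,logging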

  4. Determine which procedure to follow to upgrade the Solaris OS.
    • To use Live Upgrade, go instead to Chapter 4, Performing a Live Upgrade to Oracle Solaris Cluster 3.3 Software.

    • To upgrade a cluster that uses Solaris Volume Manager by a method other than Live Upgrade, follow upgrade procedures in Solaris installation documentation.

    • To upgrade a cluster that uses Veritas Volume Manager by a method other than Live Upgrade, follow upgrade procedures in Veritas Storage Foundation installation documentation.


    Note - If your cluster has VxVM installed and you are upgrading the Solaris OS, you must reinstall or upgrade to VxVM software that is compatible with the version of Oracle Solaris 10 that you upgrade to.


  5. Upgrade the Solaris software, following the procedure that you selected in Step 4.

    Note - Do not perform the final reboot instruction in the Solaris software upgrade. Instead, do the following:

    1. Return to this procedure to perform Step 6 and Step 7.

    2. Reboot into noncluster mode in Step 8 to complete the Solaris software upgrade.


    • When prompted, choose the manual reboot option.

    • When you are instructed to reboot a node during the upgrade process, always reboot into noncluster mode. For the boot and reboot commands, add the -x option to the command. The -x option ensures that the node reboots into noncluster mode. For example, either of the following two commands boots a node into single-user noncluster mode:

    • On SPARC based systems, perform either of the following commands:
      phys-schost# reboot -- -xs
      or
      ok boot -xs

      If the instruction says to run the init S command, use the reboot -- -xs command instead.

    • On x86 based systems, perform the following command:
      phys-schost# shutdown -g0 -y -i0
      Press any key to continue
      1. In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.

        The GRUB menu appears similar to the following:

        GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
        +-------------------------------------------------------------------------+
        | Solaris 10 /sol_10_x86                                                  |
        | Solaris failsafe                                                        |
        |                                                                         |
        +-------------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press enter to boot the selected OS, 'e' to edit the
        commands before booting, or 'c' for a command-line.

        For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in System Administration Guide: Basic Administration.

      2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.

        The GRUB boot parameters screen appears similar to the following:

        GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
        +----------------------------------------------------------------------+
        | root (hd0,0,a)                                                       |
        | kernel /platform/i86pc/multiboot                                     |
        | module /platform/i86pc/boot_archive                                  |
        +----------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press 'b' to boot, 'e' to edit the selected command in the
        boot sequence, 'c' for a command-line, 'o' to open a new line
        after ('O' for before) the selected line, 'd' to remove the
        selected line, or escape to go back to the main menu.
      3. Add -x to the command to specify that the system boot into noncluster mode.
        [ Minimal BASH-like line editing is supported. For the first word, TAB
        lists possible command completions. Anywhere else TAB lists the possible
        completions of a device/filename. ESC at any time exits. ]
        
        grub edit> kernel /platform/i86pc/multiboot -x
      4. Press Enter to accept the change and return to the boot parameters screen.

        The screen displays the edited command.

        GNU GRUB version 0.95 (615K lower / 2095552K upper memory)
        +----------------------------------------------------------------------+
        | root (hd0,0,a)                                                       |
        | kernel /platform/i86pc/multiboot -x                                  |
        | module /platform/i86pc/boot_archive                                  |
        +----------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press 'b' to boot, 'e' to edit the selected command in the
        boot sequence, 'c' for a command-line, 'o' to open a new line
        after ('O' for before) the selected line, 'd' to remove the
        selected line, or escape to go back to the main menu.
      5. Type b to boot the node into noncluster mode.

        Note - This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.


      If the instruction says to run the init S command, shut down the system then change the GRUB kernel boot command to /platform/i86pc/multiboot -sx instead.

  6. In the /a/etc/vfstab file, uncomment those entries for globally mounted file systems that you commented out in Step 3.
  7. If Apache run control scripts were disabled or did not exist before you upgraded the Solaris OS, ensure that any scripts that were installed during Solaris upgrade are disabled.

    To disable Apache run control scripts, use the following commands to rename the files with a lowercase k or s.

    phys-schost# mv /a/etc/rc0.d/K16apache /a/etc/rc0.d/k16apache 
    phys-schost# mv /a/etc/rc1.d/K16apache /a/etc/rc1.d/k16apache
    phys-schost# mv /a/etc/rc2.d/K16apache /a/etc/rc2.d/k16apache
    phys-schost# mv /a/etc/rc3.d/S50apache /a/etc/rc3.d/s50apache
    phys-schost# mv /a/etc/rcS.d/K16apache /a/etc/rcS.d/k16apache

    Alternatively, you can rename the scripts to be consistent with your normal administration practices.

  8. Reboot the node into noncluster mode.

    Include the double dashes (--) in the following command:

    phys-schost# reboot -- -x
  9. If your cluster runs VxVM and you are upgrading it as well as upgrading the Solaris OS, perform the remaining steps in the procedure to reinstall or upgrade VxVM.

    Make the following changes to the procedure:

    • After VxVM upgrade is complete but before you reboot, verify the entries in the /etc/vfstab file.

      If any entries that you uncommented in Step 6 have been commented out again, uncomment those entries.

    • If the VxVM procedures instruct you to perform a final reconfiguration reboot, do not use the -r option alone. Instead, reboot into noncluster mode by using the -rx options.
      • On SPARC based systems, perform the following command:
        phys-schost# reboot -- -rx
      • On x86 based systems, perform the shutdown and boot procedures that are described in Step 5 except add -rx to the kernel boot command instead of -sx.

    Note - If you see a message similar to the following, type the root password to continue upgrade processing. Do not run the fsck command or type Ctrl-D.

    WARNING - Unable to repair the /global/.devices/node@1 filesystem. 
    Run fsck manually (fsck -F ufs /dev/vx/rdsk/rootdisk_13vol). Exit the 
    shell when done to continue the boot process.
    
    Type control-d to proceed with normal startup,
    (or give root password for system maintenance):  Type the root password

  10. (Optional) SPARC: Upgrade VxFS.

    Follow procedures that are provided in your VxFS documentation.

  11. Install any required Solaris software patches and hardware-related patches, and download any needed firmware that is contained in the hardware patches.

    Note - Do not reboot after you add patches. Wait to reboot the node until after you upgrade the Oracle Solaris Cluster software.


    See Patches and Required Firmware Levels in the Oracle Solaris Cluster 3.3 Release Notes for the location of patches and installation instructions.
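    For example, you might apply a downloaded patch with a command similar to the following. The patch directory and patch ID are placeholders.

    phys-schost# patchadd /var/tmp/126106-42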

Next Steps

If you are only upgrading the Solaris OS to a Solaris update release and are not upgrading the Oracle Solaris Cluster software, skip to Chapter 6, Completing the Upgrade.

Otherwise, upgrade to Oracle Solaris Cluster 3.3 software. Go to How to Upgrade Oracle Solaris Cluster 3.3 Software (Standard).


Note - To complete the upgrade to a new marketing release of the Solaris OS, such as from Solaris 9 to Oracle Solaris 10 software, you must also upgrade the Oracle Solaris Cluster software and dependency software to the version that is compatible with the new version of the OS.


How to Upgrade Oracle Solaris Cluster 3.3 Software (Standard)

Perform this procedure to upgrade each node of the cluster to Oracle Solaris Cluster 3.3 software. You must also perform this procedure after you upgrade to a different marketing release of the Solaris OS, such as from Solaris 9 to Oracle Solaris 10 software.

Perform all steps from the global zone only.


Tip - You can use the cconsole utility to perform this procedure on multiple nodes simultaneously. See How to Install Cluster Control Panel Software on an Administrative Console in Oracle Solaris Cluster Software Installation Guide for more information.


Before You Begin

Perform the following tasks:

  1. Become superuser on a node of the cluster.
  2. Load the installation DVD-ROM into the DVD-ROM drive.

    If the volume management daemon vold(1M) is running and is configured to manage CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0 directory.

  3. Change to the /Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/ directory, where arch is sparc or x86 and where ver is 10 for Oracle Solaris 10.
    phys-schost# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools
  4. Start the scinstall utility.
    phys-schost# ./scinstall

    Note - Do not use the /usr/cluster/bin/scinstall command that is already installed on the node. You must use the scinstall command that is located on the installation DVD-ROM.


    The scinstall Main Menu is displayed.

  5. Choose the menu item, Upgrade This Cluster Node.
      *** Main Menu ***
    
        Please select from one of the following (*) options:
    
            1) Create a new cluster or add a cluster node
            2) Configure a cluster to be JumpStarted from this install server
          * 3) Manage a dual-partition upgrade
          * 4) Upgrade this cluster node
          * 5) Print release information for this cluster node
     
          * ?) Help with menu options
          * q) Quit
    
        Option:  4

    The Upgrade Menu is displayed.

  6. Choose the menu item, Upgrade Oracle Solaris Cluster Framework on This Node.
  7. Follow the menu prompts to upgrade the cluster framework.

    During the Oracle Solaris Cluster upgrade, scinstall might make one or more of the following configuration changes:

    • Rename the ntp.conf file to ntp.conf.cluster, if ntp.conf.cluster does not already exist on the node.

    • Set the local-mac-address? variable to true, if the variable is not already set to that value.

    Upgrade processing is finished when the system displays the message Completed Oracle Solaris Cluster framework upgrade and prompts you to press Enter to continue.

  8. Quit the scinstall utility.
  9. Upgrade data service packages.

    You must upgrade all data services to the Oracle Solaris Cluster 3.3 version.


    Note - For HA for SAP Web Application Server, if you are using a J2EE engine resource or a web application server component resource or both, you must delete the resource and recreate it with the new web application server component resource. Changes in the new web application server component resource include integration of the J2EE functionality. For more information, see Oracle Solaris Cluster Data Service for SAP Web Application Server Guide.


    1. Start the upgraded interactive scinstall utility.
      phys-schost# /usr/cluster/bin/scinstall

      Note - Do not use the scinstall utility that is on the installation media to upgrade data service packages.


      The scinstall Main Menu is displayed.

    2. Choose the menu item, Upgrade This Cluster Node.

      The Upgrade Menu is displayed.

    3. Choose the menu item, Upgrade Oracle Solaris Cluster Data Service Agents on This Node.
    4. Follow the menu prompts to upgrade Oracle Solaris Cluster data service agents that are installed on the node.

      You can choose from the list of data services that are available to upgrade or choose to upgrade all installed data services.

    5. When the system displays the message Completed upgrade of Oracle Solaris Cluster data services agents, press Enter.

      The Upgrade Menu is displayed.

  10. Quit the scinstall utility.
  11. Unload the installation DVD-ROM from the DVD-ROM drive.
    1. To ensure that the DVD-ROM is not being used, change to a directory that does not reside on the DVD-ROM.
    2. Eject the DVD-ROM.
      phys-schost# eject cdrom
  12. If you have HA for NFS configured on a highly available local file system, ensure that the loopback file system (LOFS) is disabled.

    Note - If you have non-global zones configured, LOFS must remain enabled. For guidelines about using LOFS and alternatives to disabling it, see Cluster File Systems in Oracle Solaris Cluster Software Installation Guide.


    To disable LOFS, ensure that the /etc/system file contains the following entry:

    exclude:lofs

    This change becomes effective at the next system reboot.
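    For example, you might append the entry and then verify it with commands similar to the following:

    phys-schost# echo "exclude:lofs" >> /etc/system
    phys-schost# grep lofs /etc/system
    exclude:lofs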

  13. As needed, manually upgrade any custom data services that are not supplied on the product media.
  14. Verify that each data-service update is installed successfully.

    View the upgrade log file that is referenced at the end of the upgrade output messages.

  15. Upgrade software applications that are installed on the cluster.

    If you want to upgrade VxVM and did not upgrade the Solaris OS, follow procedures in Veritas Storage Foundation installation documentation to upgrade VxVM without upgrading the operating system.


    Note - If any upgrade procedure instructs you to perform a reboot, you must add the -x option to the boot command. This option boots the cluster into noncluster mode.


    Ensure that application levels are compatible with the current versions of Oracle Solaris Cluster and Solaris software. See your application documentation for installation instructions.

  16. If you upgraded from Sun Cluster 3.1 8/05 software, reconfigure the private-network address range.

    Perform this step if you want to increase or decrease the size of the IP address range that is used by the private interconnect. The IP address range that you configure must minimally support the number of nodes and private networks in the cluster. See Private Network in Oracle Solaris Cluster Software Installation Guide for more information.

    If you also expect to configure zone clusters, you specify that number in How to Finish Upgrade to Oracle Solaris Cluster 3.3 Software, after all nodes are back in cluster mode.

    1. From one node, start the clsetup utility.

      When run in noncluster mode, the clsetup utility displays the Main Menu for noncluster-mode operations.

    2. Choose the menu item, Change IP Address Range.

      The clsetup utility displays the current private-network configuration, then asks if you would like to change this configuration.

    3. To change either the private-network IP address or the IP address range, type yes and press the Return key.

      The clsetup utility displays the default private-network IP address, 172.16.0.0, and asks if it is okay to accept this default.

    4. Change or accept the private-network IP address.
      • To accept the default private-network IP address and proceed to changing the IP address range, type yes and press the Return key.

        The clsetup utility will ask if it is okay to accept the default netmask. Skip to the next step to enter your response.

      • To change the default private-network IP address, perform the following substeps.
        1. Type no in response to the clsetup utility question about whether it is okay to accept the default address, then press the Return key.

          The clsetup utility will prompt for the new private-network IP address.

        2. Type the new IP address and press the Return key.

          The clsetup utility displays the default netmask and then asks if it is okay to accept the default netmask.

    5. Change or accept the default private-network IP address netmask and range.

      The default netmask is 255.255.240.0. This default IP address range supports up to 64 nodes, up to 10 private networks, and up to 12 zone clusters in the cluster. If you choose to change the netmask, you specify in the following substeps the number of nodes and private networks that you expect in the cluster.

      If you also expect to configure zone clusters, you specify that number in How to Finish Upgrade to Oracle Solaris Cluster 3.3 Software, after all nodes are back in cluster mode.

      • To accept the default IP address netmask and range, type yes and press the Return key.

        Then skip to the next step.

      • To change the IP address netmask and range, perform the following substeps.
        1. Type no in response to the clsetup utility's question about whether it is okay to accept the default address range, then press the Return key.

          When you decline the default netmask, the clsetup utility prompts you for the number of nodes and private networks that you expect to configure in the cluster.

        2. Enter the number of nodes and private networks that you expect to configure in the cluster.

          From these numbers, the clsetup utility calculates two proposed netmasks:

          • The first netmask is the minimum netmask to support the number of nodes and private networks that you specified.

          • The second netmask supports twice the number of nodes and private networks that you specified, to accommodate possible future growth.

        3. Specify either of the calculated netmasks, or specify a different netmask that supports the expected number of nodes and private networks.
    6. Type yes in response to the clsetup utility's question about proceeding with the update.
    7. When finished, exit the clsetup utility.
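      As an optional check, you might confirm the new private-network settings with a command similar to the following:

      phys-schost# cluster show-netprops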
  17. After all nodes in the cluster are upgraded, reboot the upgraded nodes.
    1. Shut down each node.
      phys-schost# shutdown -g0 -y
    2. Boot each node into cluster mode.
      • On SPARC based systems, do the following:
        ok boot
      • On x86 based systems, do the following:

        When the GRUB menu is displayed, select the appropriate Solaris entry and press Enter. The GRUB menu appears similar to the following:

        GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
        +-------------------------------------------------------------------------+
        | Solaris 10 /sol_10_x86                                                  |
        | Solaris failsafe                                                        |
        |                                                                         |
        +-------------------------------------------------------------------------+
        Use the ^ and v keys to select which entry is highlighted.
        Press enter to boot the selected OS, 'e' to edit the
        commands before booting, or 'c' for a command-line.

        For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in System Administration Guide: Basic Administration.

Next Steps

Go to Chapter 6, Completing the Upgrade.