Oracle® Solaris Cluster Upgrade Guide

Updated: July 2014, E39644-01

How to Prepare the Cluster for Upgrade (Dual-Partition)

Perform this procedure to prepare a multiple-node cluster for a dual-partition upgrade. These procedures refer to the two groups of nodes as the first partition and the second partition. The nodes that you assign to the second partition continue cluster services while you upgrade the nodes in the first partition. After all nodes in the first partition are upgraded, you switch cluster services to the first partition and upgrade the second partition. After all nodes in the second partition are upgraded, you boot the nodes into cluster mode to rejoin the nodes from the first partition.


Note -  If you are upgrading a single-node cluster, do not use this upgrade method. Instead, go to How to Prepare the Cluster for Upgrade (Standard Upgrade).

Perform all steps from the global zone only.

Before You Begin

Perform the following tasks:

  • Ensure that the configuration meets the requirements for upgrade. See Upgrade Requirements and Software Support Guidelines.

  • Have available the installation media, documentation, and software updates for all software products that you are upgrading, including the following software:

    • Oracle Solaris OS

    • Oracle Solaris Cluster

    • Applications that are managed by Oracle Solaris Cluster data services

    • Any other third-party applications to upgrade

    For instructions on updating single or multiple packages, see Chapter 11, Updating Your Software, in Oracle Solaris Cluster System Administration Guide.

  • If you use role-based access control (RBAC) instead of the root role to access the cluster nodes, ensure that you can become an administrator with rights for all Oracle Solaris Cluster commands. This series of upgrade procedures requires the following Oracle Solaris Cluster RBAC authorizations if the user is not the root role:

    • solaris.cluster.modify

    • solaris.cluster.admin

    • solaris.cluster.read

    See User Rights Management in Securing Users and Processes in Oracle Solaris 11.2 for more information about using RBAC roles. See the Oracle Solaris Cluster man pages for the RBAC authorization that each Oracle Solaris Cluster subcommand requires.
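
    For example, to confirm that a nonroot administrative user has these authorizations, you can list that user's authorizations with the Oracle Solaris auths command and check the output for the solaris.cluster entries. The user name jdoe is a placeholder:

      phys-schost% auths jdoe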

  1. Ensure that the cluster is functioning normally.
    1. View the current status of the cluster by running the following command from any node.
      phys-schost% cluster status

      See the cluster(1CL) man page for more information.

    2. Search the /var/adm/messages log on the same node for unresolved error or warning messages.
    3. Check the volume-manager status, as sketched in the example after this step.
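
    The following commands sketch one way to perform the checks in Step 1b and Step 1c; which volume-manager command applies depends on whether the cluster uses Solaris Volume Manager, ZFS, or both:

      phys-schost# grep -i -e error -e warning /var/adm/messages
      phys-schost# metastat        # Solaris Volume Manager status
      phys-schost# zpool status    # ZFS storage pool status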
  2. If Geographic Edition software is installed, ensure that all application resource groups have the Auto_start_on_new_cluster property set to False.

    This setting prevents the unintended starting of application resource groups before cluster upgrade is completed.

    # clresourcegroup show -p Auto_start_on_new_cluster resource-group

    If necessary, change the property value to False.

    # clresourcegroup set -p Auto_start_on_new_cluster=False resource-group
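
    If the cluster has many application resource groups, you can omit the resource-group operand to display the property for all resource groups in a single pass:

    # clresourcegroup show -p Auto_start_on_new_cluster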
  3. If necessary, notify users that cluster services might be temporarily interrupted during the upgrade.

    Service interruption will be approximately the amount of time that your cluster normally takes to switch services to another node.

  4. Assume the root role.
  5. Ensure that the RG_system property of all resource groups in the cluster is set to FALSE.

    A setting of RG_system=TRUE would restrict certain operations that the dual-partition software must perform.

    1. On each node, determine whether any resource groups are set to RG_system=TRUE.
      phys-schost# clresourcegroup show -p RG_system

      Make note of which resource groups to change. Save this list to use when you restore the setting after upgrade is completed.

    2. For each resource group that is set to RG_system=TRUE, change the setting to FALSE.
      phys-schost# clresourcegroup set -p RG_system=FALSE resourcegroup
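
      Because you will need this list after the upgrade, one convenient approach is to save the output of the command in Step 5a to a file on each node. The file name used here is only a suggestion:

      phys-schost# clresourcegroup show -p RG_system > /var/tmp/rg_system.before-upgrade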
  6. If you are upgrading a two-node cluster, skip to Step 14.

    Otherwise, proceed to Step 7 to determine the partitioning scheme to use. You will determine which nodes each partition will contain but interrupt the partitioning process before it begins. You will then compare the node lists of all resource groups against the partition members in the scheme that you chose. If any resource group does not contain a member of each partition, you must change its node list.

  7. Assume the root role on a node of the cluster.
  8. Start the scinstall utility in interactive mode.
    phys-schost# /usr/cluster/bin/scinstall

    The scinstall Main Menu is displayed.

  9. Choose the menu item, Manage a Dual-Partition Upgrade.

    The Manage a Dual-Partition Upgrade Menu is displayed.

  10. Choose the menu item, Display and Select Possible Partitioning Schemes.
  11. Follow the prompts to perform the following tasks:
    1. Display the possible partitioning schemes for your cluster.
    2. Choose a partitioning scheme.
    3. Choose which partition to upgrade first.

      Note -  Do not respond yet when prompted, Do you want to begin the dual-partition upgrade?, and do not exit the scinstall utility. You respond to this prompt in Step 16 of this procedure.
  12. Make note of which nodes belong to each partition in the partition scheme.
  13. Assume the root role on another node of the cluster.
  14. Ensure that any critical data services can switch over between partitions.

    For a two-node cluster, each node will be the only node in its partition.

    When the nodes of a partition are shut down in preparation for dual-partition upgrade, the resource groups that are hosted on those nodes switch over to a node in the other partition. If a resource group does not contain a node from each partition in its node list, the resource group cannot switch over. To ensure successful switchover of all critical data services, verify that the node list of the related resource groups contains a member of each upgrade partition.

    1. Display the node list of each resource group that you require to remain in service during the entire upgrade.
      phys-schost# clresourcegroup show -p nodelist
      === Resource Groups and Resources ===
      
      Resource Group:                                 resourcegroup
      Nodelist:                                        node1 node2
    2. If the node list of a resource group does not contain at least one member of each partition, redefine the node list to include a member of each partition as a potential primary node.
      phys-schost# clresourcegroup add-node -n node resourcegroup
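
      For example, with hypothetical partition members phys-schost-1 (first partition) and phys-schost-2 (second partition), the following commands add the missing node to a resource group named rg-app and verify the result:

      phys-schost# clresourcegroup add-node -n phys-schost-2 rg-app
      phys-schost# clresourcegroup show -p nodelist rg-app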
  15. Determine your next step.
    • If you are upgrading a two-node cluster, return to Step 8 through Step 11 to designate your partitioning scheme and upgrade order.

      When you reach the prompt Do you want to begin the dual-partition upgrade?, skip to Step 16.

    • If you are upgrading a cluster with three or more nodes, return to the node that is running the interactive scinstall utility.

      Proceed to Step 16.

  16. At the interactive scinstall prompt Do you want to begin the dual-partition upgrade?, type Yes.

    The command verifies that a remote installation method is available.

  17. When prompted, press Enter to continue each stage of preparation for dual-partition upgrade.

    The command switches resource groups to nodes in the second partition, and then shuts down each node in the first partition.

  18. After all nodes in the first partition are shut down, boot each node in that partition into noncluster mode.
    • SPARC:
      ok boot -x
    • x86:
      1. In the GRUB menu, use the arrow keys to select the appropriate Oracle Solaris entry and type e to edit its commands.

        For more information about GRUB-based booting, see Booting a System in Booting and Shutting Down Oracle Solaris 11.2 Systems.

      2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.
      3. Add -x to the multiboot command to specify that the system boot into noncluster mode (see the illustration after these steps).
      4. Press Enter to accept the change and return to the boot parameters screen.

        The screen displays the edited command.

      5. Type b to boot the node into noncluster mode.

        Note - This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.
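
        The following line is only an illustration of the edited multiboot command; the exact kernel load line varies by boot environment and console settings on your system:

          $multiboot /ROOT/solaris/@/$kern $kern -B console=text -x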
  19. Ensure that each system disk is backed up.
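
    One lightweight safeguard on Oracle Solaris 11 systems is a recursive ZFS snapshot of the root pool; this supplements, but does not replace, your site's normal full-backup procedure. The pool name rpool and the snapshot name are placeholders:

    phys-schost# zfs snapshot -r rpool@pre-upgrade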
  20. If any applications that are running in the second partition are not under control of the Resource Group Manager (RGM), create scripts to halt the applications before you begin to upgrade those nodes.

    During dual-partition upgrade processing, these scripts would be called to stop applications such as Oracle RAC before the nodes in the second partition are halted.

    1. Create the scripts that you need to stop applications that are not under RGM control.
      • Create separate scripts for those applications that you want stopped before applications under RGM control are stopped and for those applications that you want stopped afterwards.

      • To stop applications that are running on more than one node in the partition, write the scripts accordingly.

      • Use any name and directory path for your scripts that you prefer.

    2. Ensure that each node in the cluster has its own copy of your scripts.
    3. On each node, modify the following Oracle Solaris Cluster scripts to call the scripts that you placed on that node.
      • /etc/cluster/ql/cluster_pre_halt_apps - Use this file to call those scripts that you want to run before applications that are under RGM control are shut down.

      • /etc/cluster/ql/cluster_post_halt_apps - Use this file to call those scripts that you want to run after applications that are under RGM control are shut down.

      The Oracle Solaris Cluster scripts are issued from one arbitrary node in the partition during post-upgrade processing of the partition. Therefore, ensure that the scripts on any node of the partition will perform the necessary actions for all nodes in the partition.
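
      As a sketch only, an addition to the cluster_pre_halt_apps file might look like the following, where /opt/local/stop_myapp.sh is a placeholder for a stop script that you created in Step 20a:

      # Excerpt from /etc/cluster/ql/cluster_pre_halt_apps
      # Call a hypothetical site-specific stop script if it is present on this node.
      if [ -x /opt/local/stop_myapp.sh ]; then
          /opt/local/stop_myapp.sh
      fi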

Next Steps

Upgrade software on each node in the first partition. Go to How to Upgrade the Software (Dual-Partition).