Updating Your Oracle® Solaris Cluster 4.4 Environment

Updated: March 2019

How to Prepare the Cluster for Update (Dual-Partition)

Perform this procedure to prepare a multiple-node cluster for a dual-partition update. These procedures refer to the two groups of nodes as the first partition and the second partition. The nodes that you assign to the second partition continue cluster services while you update the nodes in the first partition. After all nodes in the first partition are updated, you switch cluster services to the first partition and update the second partition. After all nodes in the second partition are updated, you boot the nodes into cluster mode to rejoin the nodes from the first partition.

Note -  If you are updating a single-node cluster, do not use this update method. Instead, go to How to Prepare the Cluster for Update (Standard Update).

Perform all steps from the global zone only.

Before You Begin

Perform the following tasks:

  • Ensure that the configuration meets the requirements for update. See Overview of Updating Oracle Solaris Cluster Software.

  • Have available the installation media, documentation, and software updates for all software products that you are updating, including the following software:

    • Oracle Solaris OS

    • Oracle Solaris Cluster

    • Applications that are managed by Oracle Solaris Cluster data services

    • Any other third-party applications to update

    For instructions on updating single or multiple packages, see Updating Software Packages.

  • If you use role-based access control (RBAC) instead of the root role to access the cluster nodes, ensure that you can become an administrator with rights for all Oracle Solaris Cluster commands. This series of update procedures requires the following Oracle Solaris Cluster RBAC authorizations if the user is not the root role:

    • solaris.cluster.modify

    • solaris.cluster.admin

    • solaris.cluster.read

    See User Rights Management in Securing Users and Processes in Oracle Solaris 11.4 for more information about using RBAC roles. See the Oracle Solaris Cluster man pages for the RBAC authorization that each Oracle Solaris Cluster subcommand requires.

  1. Ensure that the cluster is functioning normally.
    1. View the current status of the cluster by running the following command from any node.
      phys-schost% cluster status

      See the cluster(8CL) man page for more information.

    2. Search the /var/adm/messages log on the same node for unresolved error messages or warning messages.
    3. Check the volume-manager status.
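    The log check in Step 1b lends itself to scripting. The following is a minimal sketch, using hypothetical sample log lines, that flags warning and error entries; on a cluster node you would point LOG at /var/adm/messages instead:

```shell
# Sketch: scan a messages log for unresolved warning and error entries.
# The sample log content below is hypothetical; on a cluster node you
# would set LOG=/var/adm/messages instead of building a sample file.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
Mar 10 09:14:02 phys-schost-1 genunix: [ID 936769 kern.info] boot complete
Mar 10 09:15:31 phys-schost-1 cl_runtime: [ID 273354 kern.warning] WARNING: quorum delay
Mar 10 09:16:05 phys-schost-1 scsi: [ID 107833 kern.notice] device online
EOF

# Case-insensitive match; an empty match means the log is clean.
FOUND=$(grep -iE 'warning|error' "$LOG" || echo "no warnings or errors found")
echo "$FOUND"
rm -f "$LOG"
```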
  2. If necessary, notify users that cluster services might be temporarily interrupted during the update.

    Service interruption will be approximately the amount of time that your cluster normally takes to switch services to another node.

  3. Assume the root role.
  4. Ensure that the RG_system property of all resource groups in the cluster is set to FALSE.

    A setting of RG_system=TRUE would restrict certain operations that the dual-partition software must perform.

    1. On each node, determine whether any resource groups are set to RG_system=TRUE.
      phys-schost# clresourcegroup show -p RG_system

      Make note of which resource groups to change. Save this list to use when you restore the setting after update is completed.

    2. For each resource group that is set to RG_system=TRUE, change the setting to FALSE.
      phys-schost# clresourcegroup set -p RG_system=FALSE resourcegroup

      Compare the node lists of all resource groups against the node members of each partition in the scheme that you will use. If any resource group does not contain a member of each partition, you must change the node list.
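    The two substeps above can be combined into a short script. This sketch parses sample command output; the format only approximates what clresourcegroup show -p RG_system prints, and the group names are hypothetical. It saves the list of groups whose setting you must restore after the update:

```shell
# Sketch: extract the resource groups that have RG_system=TRUE so the
# list can be saved and the setting restored after the update completes.
# The sample text below approximates clresourcegroup show -p RG_system
# output; on a cluster node you would capture the live command instead:
#   clresourcegroup show -p RG_system > /tmp/rg_system.out
cat > /tmp/rg_system.out <<'EOF'
=== Resource Groups and Resources ===

Resource Group:                                 rg-oracle
  RG_system:                                       TRUE

Resource Group:                                 rg-nfs
  RG_system:                                       FALSE
EOF

# Remember the most recent group name; print it when RG_system is TRUE.
awk '$1 == "Resource" && $2 == "Group:" { rg = $3 }
     $1 == "RG_system:" && $2 == "TRUE" { print rg }' \
    /tmp/rg_system.out > /tmp/rg_system_true.list

cat /tmp/rg_system_true.list    # groups to set back to RG_system=TRUE later
```

    Each group in the saved list would then be reset with clresourcegroup set -p RG_system=FALSE before the update, and restored from the same list afterward.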

  5. Assume the root role on a node of the cluster.
  6. Start the scinstall utility in interactive mode.
    phys-schost# scinstall

    The scinstall Main Menu is displayed.

  7. Choose the menu item, Manage a Dual-Partition Update.

    The Manage a Dual-Partition Update Menu is displayed.

  8. Choose the menu item, Display and Select Possible Partitioning Schemes.
  9. Follow the prompts to perform the following tasks:
    1. Display the possible partitioning schemes for your cluster.
    2. Choose a partitioning scheme.
    3. Choose which partition to update first.

      Note -  When prompted, Do you want to begin the dual-partition update?, do not respond yet and do not exit the scinstall utility. You respond to this prompt in Step 13 of this procedure.
  10. Make note of which nodes belong to each partition in the partition scheme.
  11. Assume the root role on another node of the cluster.
  12. Ensure that any critical data services can switch over between partitions.

    For a two-node cluster, each node will be the only node in its partition.

    When the nodes of a partition are shut down in preparation for dual-partition update, the resource groups that are hosted on those nodes switch over to a node in the other partition. If a resource group does not contain a node from each partition in its node list, the resource group cannot switch over. To ensure successful switchover of all critical data services, verify that the node list of the related resource groups contains a member of each update partition.

    1. Display the node list of each resource group that you require to remain in service during the entire update.
      phys-schost# clresourcegroup show -p nodelist
      === Resource Groups and Resources ===
      Resource Group:                                 resourcegroup
      Nodelist:                                        node1 node2
    2. If the node list of a resource group does not contain at least one member of each partition, redefine the node list to include a member of each partition as a potential primary node.
      phys-schost# clresourcegroup add-node -n node resourcegroup
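    The verification in this step can be sketched as follows. The partition assignments, group names, and output format are hypothetical stand-ins for the notes from Step 10 and the clresourcegroup nodelist output:

```shell
# Sketch: flag any resource group whose node list does not include at
# least one node from each update partition. Node and group names are
# hypothetical; PART1 and PART2 come from the notes made in Step 10.
PART1="phys-schost-1"          # first-partition nodes, space-separated
PART2="phys-schost-2"          # second-partition nodes, space-separated

# Approximation of "clresourcegroup show -p nodelist" output.
cat > /tmp/nodelist.out <<'EOF'
Resource Group:                                 rg-oracle
  Nodelist:                                        phys-schost-1 phys-schost-2
Resource Group:                                 rg-nfs
  Nodelist:                                        phys-schost-1
EOF

MISSING=$(awk -v p1="$PART1" -v p2="$PART2" '
    BEGIN { n1 = split(p1, a1, " "); n2 = split(p2, a2, " ") }
    $1 == "Resource" && $2 == "Group:" { rg = $3 }
    $1 == "Nodelist:" {
        in1 = 0; in2 = 0
        # Exact-token comparison of each node list entry against each partition.
        for (i = 2; i <= NF; i++) {
            for (j = 1; j <= n1; j++) if ($i == a1[j]) in1 = 1
            for (j = 1; j <= n2; j++) if ($i == a2[j]) in2 = 1
        }
        if (!(in1 && in2)) print rg " cannot switch over"
    }' /tmp/nodelist.out)
echo "$MISSING"
```

    Any group the script reports would need a node from the missing partition added with clresourcegroup add-node, as shown in the substep above.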
  13. At the interactive scinstall prompt Do you want to begin the dual-partition update?, type Yes.

    The command verifies that a remote installation method is available.

  14. When prompted, press Enter to continue each stage of preparation for dual-partition update.

    The command switches resource groups to nodes in the second partition, and then shuts down each node in the first partition.

  15. After all nodes in the first partition are shut down, boot each node in that partition into noncluster mode.
    • SPARC:
      ok boot -x
    • x86:
      1. In the GRUB menu, use the arrow keys to select the appropriate Oracle Solaris entry and type e to edit its commands.

        For more information about GRUB based booting, see About Run Level Booting in Booting and Shutting Down Oracle Solaris 11.4 Systems.

      2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.
      3. Add -x to the multiboot command to specify that the system boot into noncluster mode.
      4. Press Enter to accept the change and return to the boot parameters screen.

        The screen displays the edited command.

      5. Type b to boot the node into noncluster mode.

        Note - This change to the kernel boot parameter command does not persist across the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.
  16. Ensure that each system disk is backed up.
  17. If any applications that are running in the second partition are not under control of the Resource Group Manager (RGM), create scripts to halt the applications before you begin to update those nodes.

    During dual-partition update processing, these scripts are called to stop applications such as Oracle RAC before the nodes in the second partition are halted.

    1. Create the scripts that you need to stop applications that are not under RGM control.
      • Create separate scripts for those applications that you want stopped before applications under RGM control are stopped and for those applications that you want stopped afterwards.

      • To stop applications that are running on more than one node in the partition, write the scripts accordingly.

      • Use any name and directory path for your scripts that you prefer.

    2. Ensure that each node in the cluster has its own copy of your scripts.
    3. On each node, modify the following Oracle Solaris Cluster scripts to call the scripts that you placed on that node.
      • /etc/cluster/ql/cluster_pre_halt_apps - Use this file to call those scripts that you want to run before applications that are under RGM control are shut down.

      • /etc/cluster/ql/cluster_post_halt_apps - Use this file to call those scripts that you want to run after applications that are under RGM control are shut down.

      The Oracle Solaris Cluster scripts are run from one arbitrary node in the partition during post-update processing of the partition. Therefore, ensure that the scripts on any node of the partition will perform the necessary actions for all nodes in the partition.
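    As an illustration, the following sketch shows a stop script of the kind you might call from cluster_pre_halt_apps. The application name, pid file, and paths are hypothetical; a real script would use your application's own shutdown procedure:

```shell
# Sketch: a stop script for a hypothetical application ("myapp") that is
# not under RGM control. Paths and names are placeholders for this example.
mkdir -p /tmp/halt-demo
cat > /tmp/halt-demo/stop_myapp.sh <<'EOF'
#!/bin/sh
# Stop the hypothetical myapp daemon if its pid file exists.
PIDFILE=/tmp/halt-demo/myapp.pid
if [ -f "$PIDFILE" ]; then
    kill "$(cat "$PIDFILE")" 2>/dev/null
    rm -f "$PIDFILE"
    echo "myapp stopped"
else
    echo "myapp not running"
fi
exit 0
EOF
chmod +x /tmp/halt-demo/stop_myapp.sh

# In /etc/cluster/ql/cluster_pre_halt_apps you would then add a call
# such as:
#   /tmp/halt-demo/stop_myapp.sh

/tmp/halt-demo/stop_myapp.sh
```

    Because the hook scripts run on one arbitrary node, a real stop script would typically reach the application on whichever nodes it runs, for example over ssh or through the application's own cluster-wide shutdown command.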

Next Steps

Update software on each node in the first partition. Go to How to Update the Software (Dual-Partition).