Perform this procedure to prepare a multiple-node cluster for a dual-partition update. These procedures refer to the two groups of nodes as the first partition and the second partition. The nodes that you assign to the second partition continue cluster services while you update the nodes in the first partition. After all nodes in the first partition are updated, you switch cluster services to the first partition and update the second partition. After all nodes in the second partition are updated, you boot the nodes into cluster mode to rejoin the nodes from the first partition.
Perform all steps from the global zone only.
Before You Begin
Perform the following tasks:
Ensure that the configuration meets the requirements for update. See Overview of Updating Oracle Solaris Cluster Software.
Have available the installation media, documentation, and software updates for all software products that you are updating, including the following software:
Oracle Solaris OS
Oracle Solaris Cluster
Applications that are managed by Oracle Solaris Cluster data services
Any other third-party applications to update
For instructions on updating single or multiple packages, see Updating Software Packages.
If you use role-based access control (RBAC) instead of the root role to access the cluster nodes, ensure that you can become an administrator with rights for all Oracle Solaris Cluster commands. This series of update procedures requires the following Oracle Solaris Cluster RBAC authorizations if the user is not the root role:
solaris.cluster.modify
solaris.cluster.admin
solaris.cluster.read
See User Rights Management in Securing Users and Processes in Oracle Solaris 11.4 for more information about using RBAC roles. See the Oracle Solaris Cluster man pages for the RBAC authorization that each Oracle Solaris Cluster subcommand requires.
phys-schost% cluster status
See the cluster(8CL) man page for more information.
Service interruption will be approximately the amount of time that your cluster normally takes to switch services to another node.
A setting of RG_system=TRUE on a resource group restricts certain operations that the dual-partition software must perform, so the property must be set to FALSE for the duration of the update.
phys-schost# clresourcegroup show -p RG_system
Make note of which resource groups to change. Save this list so that you can restore the setting after the update is complete.
phys-schost# clresourcegroup set -p RG_system=FALSE resourcegroup
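The two commands above can be combined into a small helper that records which resource groups currently have RG_system=TRUE before the property is cleared. This is only a sketch: the exact layout of the clresourcegroup output parsed here is an assumption, so verify the format on your own cluster before relying on it.

```shell
#!/bin/sh
# Sketch: list the resource groups whose RG_system property is TRUE by
# parsing 'clresourcegroup show -p RG_system' output.
# ASSUMPTION: the output contains a "Resource Group:" line followed by an
# "RG_system:" line for each group; verify the format on your cluster.
rgs_with_system_true() {
    awk '/^Resource Group:/ {rg = $NF}
         /RG_system:/ && /TRUE/ {print rg}'
}

# Usage on a cluster node (run as a suitably authorized role);
# /var/tmp/rg_system_true.list is a hypothetical save location:
#   clresourcegroup show -p RG_system | rgs_with_system_true \
#       > /var/tmp/rg_system_true.list
#   while read -r rg; do
#       clresourcegroup set -p RG_system=FALSE "$rg"
#   done < /var/tmp/rg_system_true.list
```

Keeping the saved list makes it straightforward to restore RG_system=TRUE on the same groups after the update is complete.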
Compare the node lists of all resource groups against the node members of each partition in the scheme that you will use. If any resource group does not contain a member of each partition, you must change the node list.
phys-schost# scinstall
The scinstall Main Menu is displayed.
The Manage a Dual-Partition Update Menu is displayed.
For a two-node cluster, each node will be the only node in its partition.
When the nodes of a partition are shut down in preparation for dual-partition update, the resource groups that are hosted on those nodes switch over to a node in the other partition. If a resource group does not contain a node from each partition in its node list, the resource group cannot switch over. To ensure successful switchover of all critical data services, verify that the node list of the related resource groups contains a member of each update partition.
phys-schost# clresourcegroup show -p nodelist

=== Resource Groups and Resources ===

Resource Group:     resourcegroup
  Nodelist:         node1 node2
…
phys-schost# clresourcegroup add-node -n node resourcegroup
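The node-list check can also be scripted. The sketch below assumes the "Nodelist:" output form shown above and simply tests whether the list contains at least one node from a given partition; adapt it if your output format differs.

```shell
#!/bin/sh
# Sketch: read a resource group's node list on stdin (in the assumed
# "Nodelist: node1 node2" form) and return success if it contains at
# least one of the partition nodes given as arguments, e.g.:
#   clresourcegroup show -p nodelist rg | nodelist_has_member node1 node2
nodelist_has_member() {
    nodes=$(awk '/Nodelist:/ {for (i = 2; i <= NF; i++) print $i}')
    for part_node in "$@"; do
        for n in $nodes; do
            [ "$n" = "$part_node" ] && return 0
        done
    done
    return 1
}
```

A resource group for which this check fails for either partition needs a node added with clresourcegroup add-node before you begin the update.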
The command verifies that a remote installation method is available.
The command switches resource groups to nodes in the second partition, and then shuts down each node in the first partition.
ok boot -x
For more information about GRUB-based booting, see About Run Level Booting in Booting and Shutting Down Oracle Solaris 11.4 Systems.
The screen displays the edited command.
During dual-partition update processing, these scripts would be called to stop applications such as Oracle RAC before the nodes in the second partition are halted.
Create separate scripts for the applications that you want stopped before applications under RGM control are stopped and for the applications that you want stopped afterwards.
To stop applications that are running on more than one node in the partition, write the scripts accordingly.
Use any name and directory path for your scripts that you prefer.
/etc/cluster/ql/cluster_pre_halt_apps - Use this file to call those scripts that you want to run before applications that are under RGM control are shut down.
/etc/cluster/ql/cluster_post_halt_apps - Use this file to call those scripts that you want to run after applications that are under RGM control are shut down.
The Oracle Solaris Cluster scripts are run from one arbitrary node in the partition during post-update processing of the partition. Therefore, ensure that the scripts on any node of the partition will perform the necessary actions for all nodes in the partition.
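As an illustration, a wrapper such as /etc/cluster/ql/cluster_pre_halt_apps might simply call each site-specific stop script in order. This is a minimal sketch; the stop-script paths listed here are hypothetical placeholders, not files shipped with the product.

```shell
#!/bin/sh
# Sketch of /etc/cluster/ql/cluster_pre_halt_apps: call site-specific
# stop scripts before applications under RGM control are shut down.
# The stop-script paths passed below are hypothetical examples.
run_halt_scripts() {
    for script in "$@"; do
        if [ -x "$script" ]; then
            "$script" || echo "warning: $script failed" >&2
        fi
    done
}

run_halt_scripts /opt/local/stop_rac.sh /opt/local/stop_batch.sh
```

A cluster_post_halt_apps wrapper would follow the same pattern, calling the scripts that must run after RGM-controlled applications are down.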
Next Steps
Update software on each node in the first partition. Go to How to Update the Software (Dual-Partition).