Perform this procedure to prepare a multiple-node cluster for a dual-partition upgrade. These procedures refer to the two groups of nodes as the first partition and the second partition. The nodes that you assign to the second partition continue cluster services while you upgrade the nodes in the first partition. After all nodes in the first partition are upgraded, you switch cluster services to the first partition and upgrade the second partition. After all nodes in the second partition are upgraded, you boot the nodes into cluster mode to rejoin the nodes from the first partition.
Perform all steps from the global zone only.
Before You Begin
Perform the following tasks:
Ensure that the configuration meets the requirements for upgrade. See Upgrade Requirements and Software Support Guidelines.
Have available the installation media, documentation, and software updates for all software products that you are upgrading, including the following software:
Oracle Solaris OS
Oracle Solaris Cluster
Applications that are managed by Oracle Solaris Cluster data services
Any other third-party applications to upgrade
For instructions on updating single or multiple packages, see Chapter 11, Updating Your Software, in Oracle Solaris Cluster System Administration Guide.
If you use role-based access control (RBAC) instead of the root role to access the cluster nodes, ensure that you can become an administrator with rights for all Oracle Solaris Cluster commands. This series of upgrade procedures requires Oracle Solaris Cluster RBAC authorizations if the user is not the root role.
See User Rights Management in Securing Users and Processes in Oracle Solaris 11.2 for more information about using RBAC roles. See the Oracle Solaris Cluster man pages for the RBAC authorization that each Oracle Solaris Cluster subcommand requires.
To verify that the cluster is functioning normally, view the current cluster status from any node:
phys-schost% cluster status
See the cluster(1CL) man page for more information.
Ensure that the Auto_start_on_new_cluster property of each resource group is set to False. This setting prevents the unintended starting of application resource groups before the cluster upgrade is completed.
phys-schost# clresourcegroup show -p Auto_start_on_new_cluster resource-group
If necessary, change the property value to False.
phys-schost# clresourcegroup set -p Auto_start_on_new_cluster=False resource-group
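On a cluster with many resource groups, the property check above can be scripted. The following sketch parses captured `clresourcegroup show -p Auto_start_on_new_cluster` output to list the groups whose property is still True. The exact output format shown here is an assumption for illustration; verify it against your cluster's actual output before relying on the parse.

```shell
# Hypothetical sketch: list resource groups whose Auto_start_on_new_cluster
# property is still True. The output layout in the here-document is an
# assumption, not a guaranteed format.
list_autostart_true() {
    # Reads "clresourcegroup show -p Auto_start_on_new_cluster" output
    # on stdin and prints the names of groups still set to True.
    awk '
        /^Resource Group:/ { rg = $3 }
        /Auto_start_on_new_cluster:/ && $2 == "True" { print rg }
    '
}

# Example run against captured output. On a live cluster you would pipe:
#   clresourcegroup show -p Auto_start_on_new_cluster | list_autostart_true
list_autostart_true <<'EOF'
=== Resource Groups and Resources ===

Resource Group:                                 rg-oracle
  Auto_start_on_new_cluster:                    True

Resource Group:                                 rg-nfs
  Auto_start_on_new_cluster:                    False
EOF
# Prints: rg-oracle
```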
Service interruption will be approximately the amount of time that your cluster normally takes to switch services to another node.
Ensure that the RG_system property of all resource groups is set to FALSE. A setting of RG_system=TRUE restricts certain operations that the dual-partition software must perform.
phys-schost# clresourcegroup show -p RG_system
Make note of which resource groups to change. Save this list to use when you restore the setting after upgrade is completed.
phys-schost# clresourcegroup set -p RG_system=FALSE resourcegroup
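To save the list of affected groups in a form that is easy to restore from later, the TRUE entries can be recorded to a file. This is a hypothetical sketch: the `show` output format and the file path are assumptions for illustration, so check both against your environment.

```shell
# Hypothetical sketch: record which resource groups currently have
# RG_system=TRUE so the setting can be restored after the upgrade.
record_rg_system_true() {
    # Reads "clresourcegroup show -p RG_system" output on stdin and
    # writes the names of groups set to TRUE to the named file.
    awk '
        /^Resource Group:/ { rg = $3 }
        /RG_system:/ && toupper($2) == "TRUE" { print rg }
    ' > "$1"
}

# Example run against captured output. On a live cluster you would pipe:
#   clresourcegroup show -p RG_system | record_rg_system_true <file>
record_rg_system_true /var/tmp/rg_system_true.list <<'EOF'
=== Resource Groups and Resources ===

Resource Group:                                 rg-mgmt
  RG_system:                                    TRUE

Resource Group:                                 rg-nfs
  RG_system:                                    FALSE
EOF

# After the upgrade completes, the saved list could drive the restore:
#   while read rg; do
#       clresourcegroup set -p RG_system=TRUE "$rg"
#   done < /var/tmp/rg_system_true.list
```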
Otherwise, proceed to Step 7 to determine the partitioning scheme to use. In that step you determine which nodes each partition will contain, then interrupt the partitioning process before the upgrade begins. You then compare the node list of each resource group against the node membership of each partition in the scheme that you will use. If any resource group does not contain a member of each partition, you must change its node list.
The scinstall Main Menu is displayed.
The Manage a Dual-Partition Upgrade Menu is displayed.
For a two-node cluster, each node will be the only node in its partition.
When the nodes of a partition are shut down in preparation for dual-partition upgrade, the resource groups that are hosted on those nodes switch over to a node in the other partition. If a resource group does not contain a node from each partition in its node list, the resource group cannot switch over. To ensure successful switchover of all critical data services, verify that the node list of the related resource groups contains a member of each upgrade partition.
phys-schost# clresourcegroup show -p nodelist

=== Resource Groups and Resources ===

Resource Group:                                 resourcegroup
  Nodelist:                                     node1 node2
…
phys-schost# clresourcegroup add-node -n node resourcegroup
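The node-list verification can be automated for clusters with many resource groups. The following sketch checks each group's node list against two candidate partitions and flags any group that could not switch over. The partition membership and the output format in the here-document are illustrative assumptions.

```shell
# Hypothetical check: flag resource groups whose node list lacks a member
# of either upgrade partition. Output format is an assumed example of
# "clresourcegroup show -p nodelist"; verify against your cluster.
check_partitions() {
    # $1: space-separated first-partition nodes
    # $2: space-separated second-partition nodes
    # stdin: captured "clresourcegroup show -p nodelist" output
    awk -v p1="$1" -v p2="$2" '
        BEGIN {
            n1 = split(p1, a); for (i = 1; i <= n1; i++) part1[a[i]] = 1
            n2 = split(p2, b); for (i = 1; i <= n2; i++) part2[b[i]] = 1
        }
        /^Resource Group:/ { rg = $3; in1 = in2 = 0 }
        /Nodelist:/ {
            for (i = 2; i <= NF; i++) {
                if ($i in part1) in1 = 1
                if ($i in part2) in2 = 1
            }
            if (!in1 || !in2) print rg " is missing a node from one partition"
        }
    '
}

# Example: partition 1 holds node1 and node2, partition 2 holds node3 and node4.
check_partitions "node1 node2" "node3 node4" <<'EOF'
Resource Group:                                 rg-a
  Nodelist:                                     node1 node3

Resource Group:                                 rg-b
  Nodelist:                                     node1 node2
EOF
# Prints: rg-b is missing a node from one partition
```

Any group the check reports would need `clresourcegroup add-node` before the dual-partition upgrade begins.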
When you reach the prompt Do you want to begin the dual-partition upgrade?, skip to Step 18.
Proceed to Step 18.
The command verifies that a remote installation method is available.
The command switches resource groups to nodes in the second partition, and then shuts down each node in the first partition.
ok boot -x
For more information about GRUB-based booting, see Booting a System in Booting and Shutting Down Oracle Solaris 11.2 Systems.
The screen displays the edited command.
During dual-partition upgrade processing, these scripts would be called to stop applications such as Oracle RAC before the nodes in the second partition are halted.
Create separate scripts for those applications that you want stopped before applications under RGM control are stopped and for those applications that you want stopped afterwards.
To stop applications that are running on more than one node in the partition, write the scripts accordingly.
Use any name and directory path for your scripts that you prefer.
/etc/cluster/ql/cluster_pre_halt_apps - Use this file to call those scripts that you want to run before applications that are under RGM control are shut down.
/etc/cluster/ql/cluster_post_halt_apps - Use this file to call those scripts that you want to run after applications that are under RGM control are shut down.
The Oracle Solaris Cluster scripts are issued from one arbitrary node in the partition during post-upgrade processing of the partition. Therefore, ensure that the scripts on any node of the partition will perform the necessary actions for all nodes in the partition.
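A minimal skeleton for either halt file might simply run a directory of stop scripts in order. The helper below and the `pre-halt.d` directory layout are illustrative assumptions, not an Oracle Solaris Cluster convention; substitute your own script locations, and remember that the file is executed from one arbitrary node of the partition.

```shell
#!/bin/sh
# Hypothetical helper for cluster_pre_halt_apps / cluster_post_halt_apps:
# run every executable stop script found in a directory, in sorted order,
# and warn (rather than abort) if one fails.
run_halt_scripts() {
    dir=$1
    for script in "$dir"/*; do
        if [ -x "$script" ]; then
            "$script" || echo "WARNING: $script failed with status $?" >&2
        fi
    done
}

# In cluster_pre_halt_apps you might call, for example:
#   run_halt_scripts /etc/cluster/ql/pre-halt.d
```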
Upgrade software on each node in the first partition. Go to How to Upgrade the Software (Dual-Partition).