3 Concepts of Multi-Node Patch Orchestration Using OPatchAuto

OPatchAuto is Oracle's strategic tool for binary and configuration patching. For the supported environments (Fusion Middleware and Grid Infrastructure), OPatchAuto sequences and executes all required steps, on all nodes, for comprehensive patch application. Because OPatchAuto can patch a full system in one invocation, it removes the burdens of:

  • The physical effort of going host to host and executing commands

  • The mental effort of remembering the sequence of commands across the nodes in your system

Your product's patching documentation (Database, Fusion Middleware, Enterprise Manager Cloud Control) explains how to use OPatchAuto to patch your specific product. This book augments those guides by providing deeper conceptual and reference material for OPatchAuto in a product-independent manner.

3.1 Patch Orchestration Concepts

Applying a patch involves an orchestrated series of steps. As OPatchAuto's name indicates, the customer does not need to understand these steps; they can simply apply the patch.

However, the customer does need to understand the underlying orchestration concepts when either of the following is true:

  • The patching operation has failed and the customer needs to troubleshoot it.

  • The customer sees an advantage in interleaving their own commands into the patching sequence in order to make the most of the production system's downtime window. In this case, understanding phases is required.

3.2 OPatchAuto Prerequisites

  • It is recommended that you use the bash shell as the default shell for executing OPatchAuto commands.

  • SSH user equivalence is required for GI/RAC installation and is also mandatory for using OPatchAuto. The steps to create SSH user equivalence are described in the Grid Infrastructure Installation Guide.

  • SSH user equivalence must be created for each home owner separately.

  • You must have the latest version of OPatch in all homes on all nodes before you start patching. This is required for both single-node and multi-node patching. A quick cross-node check is sketched below.
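
For example, a minimal pre-check can loop over the cluster nodes and confirm both SSH user equivalence and the installed OPatch version. This is a sketch only: the node names and Oracle home path are placeholders for your environment.

  # Sketch only: node names and the Oracle home path are placeholders.
  NODES="node1 node2 node3"
  ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1

  for node in $NODES; do
    # BatchMode fails fast if passwordless SSH (user equivalence) is not set up.
    ssh -o BatchMode=yes "$node" "$ORACLE_HOME/OPatch/opatch version" \
      || echo "Check SSH user equivalence or the OPatch installation on $node"
  done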

3.3 Phases and Sessions

A group of conceptually related steps in a patching operation is called a phase. Executing all phases leads to a completed patching operation on the target; skipping a phase means the patch is not correctly applied. For example, the phase/sub-phase that applies the bits to an Oracle home is called offline:binary-patching in OPatchAuto.

Each invocation of opatchauto apply generates a new session, whether you are executing all phases in one go (default) or just a sub-phase (advanced).

Phase input to the command is both optional and an advanced feature. However, if the customer wishes to interleave commands between the phases, they will need to invoke OPatchAuto multiple times, specifying each phase in the correct sequence so that all phases are executed. Phases are composed of sub-phases, and the tool may also be invoked at sub-phase granularity. The -help option lists the available phases and sub-phases.

Phases are idempotent; you may execute them repeatedly. However, the tool will not inform you if you do not follow the correct sequence of invocations. (See ER 21553825.)
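
As an illustration (the patch location and home path are placeholders), the default invocation below runs all phases in one session; before driving phases or sub-phases individually, list the names your version supports rather than assuming them:

  # Default: one session that executes all phases in order.
  $ORACLE_HOME/OPatch/opatchauto apply /stage/patches/12345678

  # Advanced: list the phases and sub-phases (for example, offline:binary-patching)
  # and the option used to select them, which varies by version.
  $ORACLE_HOME/OPatch/opatchauto apply -help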

The high-level phases follow. They can be classified as (a) book-keeping operations, (b) life-cycle operations, or (c) configuration change operations:

  • Init: Book-keeping operation that initializes the internal state needed for correct patching.

  • Shutdown: Life-cycle operation that brings down run-time entities to permit patching.

  • Offline: Configuration change operation that applies patch content with the system down. Bits application happens here, for instance, so the patch is recorded in the home's OUI inventory during this phase.

  • Start-up: Life-cycle operation that brings the shut-down entities back up.

  • Online: Configuration change operation that applies patch content requiring the system to be up. If these configuration changes have a system inventory, they are also recorded in that system's inventory at this point.

  • Finalize: Book-keeping operation that records that the patch operation is complete.

In the product's documentation, you might see sub-phases that include "prepare" and "binary/product" variants. Prepare means "ready materials but do not make a configuration change." Binary operations change only Oracle homes. Product operations change the configured system, such as domain configurations or database dictionaries.

The specific content of the patch determines precisely which Oracle home and configuration changes occur. Most Fusion Middleware patches, for example, include only offline content changes. But, of course, some include configuration changes as well.

In general, the session is an implicit parameter, set internally to the last session. It is visible to the user in the logs, communicated as a session ID, but the user is not required to supply it. As a convenience when specifying rollback parameters, you can supply the session ID; OPatchAuto then knows to query that session for the patch you wish to roll back.
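
As a minimal sketch (the patch location and home path are placeholders), a rollback normally relies on the implicit session; whether and how an explicit session ID can be supplied varies by version, so check the help output rather than assuming an option name.

  # Roll back the patch; the most recent session is used implicitly.
  $ORACLE_HOME/OPatch/opatchauto rollback /stage/patches/12345678

  # Lists any options for supplying an explicit session ID in your version.
  $ORACLE_HOME/OPatch/opatchauto rollback -help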

3.4 Patch Plans

Patch plans describe, independent of a specific product instance's topology, the sequence of steps to execute in order to deploy the patch. Patch plans are life-cycle programs, developed by Oracle life-cycle management experts, specifically for the given product being patched.

They are optional, advanced inputs. However, OPatchAuto always selects a patch plan internally to guide its execution; for example, opatchauto apply and opatchauto rollback implicitly select different patch plans.

Users supply patch plans to OPatchAuto when executing more complex life-cycle operations, such as Zero Downtime Patching. The product documentation for these life-cycle operations lists the names of valid patch plans.
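
For illustration only: the -plan option and plan name below follow the pattern used in WebLogic Zero Downtime Patching examples, but both are assumptions here; the valid option and plan names for your product come from its documentation and from opatchauto apply -help.

  # Sketch: supply an explicit patch plan for a complex life-cycle operation.
  # The plan name is an example and may not apply to your product or version.
  $ORACLE_HOME/OPatch/opatchauto apply /stage/patches/12345678 -plan wls-zdt-rollout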

3.5 OPatch Automation (OPatchAuto)

With OPatchAuto, you can automatically patch the typical Grid Infrastructure (GI) and RAC home directories with minimal intervention.

OPatchAuto performs many of the pre-patch checks (see "Using OPatch") as well as the post-patch verification. The power of OPatchAuto lies in its ability to perform end-to-end configuration patching. Configuration patching is the process of patching a GI or RAC home based on its configuration. By incorporating the configuration information into the patch process, OPatchAuto streamlines patching tasks by automating most of the steps.

OPatchAuto uses your GI/RAC configuration and, from that information, automatically generates patching instructions specific to your site configuration. OPatchAuto then uses OPatch to implement these instructions and perform the actual application of the patch.
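
As a sketch of a typical GI/RAC invocation (the home and patch paths are placeholders), you can first run the pre-patch analysis and then apply the patch; OPatchAuto derives the node- and home-specific steps from your configuration.

  # Report what would be patched, without making any changes.
  $GI_HOME/OPatch/opatchauto apply /stage/patches/12345678 -analyze

  # Apply the patch across the configuration.
  $GI_HOME/OPatch/opatchauto apply /stage/patches/12345678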

3.5.1 Supported Patch Format

Beginning with Oracle Database 12c, patches have been converted to a System patch format in order to support patch automation.

What is a System Patch?

A System patch contains several sub-patches whose locations are determined by a file called bundle.xml in the top level directory of the patch. The sub-patches are intended for different sub-systems of a system that correspond with the database home organization.

A typical System patch format is organized as follows:

<System patch location - directory>
|_____ Readme.txt (or) Readme.html
       bundle.xml
       automation
               |_____ apply_automation.xml
               |_____ rollback_automation.xml
       Sub-patch1
                |_____ etc/config/inventory.xml
                |_____ etc/config/actions.xml
                |_____ files/Subpatch1 'payload'
       Sub-patch2
                |_____ etc/config/inventory.xml
                |_____ etc/config/actions.xml
                |_____ files/Subpatch2 'payload'
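
To see which sub-patches a particular System patch contains, you can simply inspect the patch directory; the staging path below is a placeholder.

  # The patch top directory and its bundle.xml describe the sub-patches.
  PATCH_TOP=/stage/patches/12345678
  ls "$PATCH_TOP"              # Readme, bundle.xml, automation/, sub-patch directories
  cat "$PATCH_TOP/bundle.xml"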

Notes:

  • For database releases prior to 12c, OPatchAuto is not supported for the released one-off patches. For those older releases, you must use OPatch and follow the patch README instructions.

  • OPatchAuto and System patches are only supported by Oracle Database 12c and above.

Additional Supported Patch Types/Configurations

  • One or more one-off patches

  • One composite patch

  • One system patch / bundle patch

  • One system patch / bundle patch and one or more one-off patches

  • One system patch / bundle patch which has a composite patch in it

  • One system patch / bundle patch which has a composite patch in it and one or more one-off patches

  • One system patch / bundle patch and one or more one-off patches and one composite patch

3.5.2 Supported Target Configurations

OPatchAuto can be applied to the following general configurations:

  • GI Home Shared

  • GI Home Not Shared

  • RAC Home Shared

  • RAC Home Not Shared

  • SIHA and SIDB

  • Shard and OGG

3.5.2.1 Shared Versus Non-Shared (GI or RAC) Homes

The configuration differences between shared and non-shared Homes come into play when determining the patching mode in which OPatchAuto is used. See Patch Application Modes.

3.5.2.2 Patch Application Modes

OPatchAuto supports two modes of patching a GI or RAC home: Rolling and Non-rolling. When a patching session is started (on the first node), the stack must be up and running on that node. This applies to both rolling and non-rolling modes of patching.

Rolling Mode (Default Mode): When patching in Rolling mode, the ORACLE_HOME processes on a particular node are shut down, the patch is applied, and then the node is brought back up again. This process is repeated for each node in the GI or RAC environment until all nodes are patched. This is the most efficient mode of applying an interim patch to an Oracle RAC setup because the database remains available throughout: at least one node stays up while the others are patched. Not all patches can be applied in Rolling mode; whether a patch supports it is generally specified in the patch metadata, and the patch README also states whether a patch can be applied in Rolling mode. The node (GI home) from which the opatchauto command is executed is considered the LOCAL node and all other nodes are considered REMOTE nodes.

When you begin a rolling mode session, at least one remote node must be up and running.

OPatchAuto applies patches in rolling mode by default.

Non-rolling Mode: Prior to 12c, a non-rolling upgrade was defined as shutting down Oracle processes on all nodes. Beginning with 12c, non-rolling patching requires the GI stack to be up on the local node. Because the patching operations on the first and last nodes involve special steps, those nodes must be handled separately rather than in parallel with the other nodes. Non-rolling patching can therefore be described as three phases:

  1. Patch Node 1

  2. Patch Node 2 through n-1

  3. Patch Node n

When you start a non-rolling mode session, none of the remote nodes can be up and running: services on all nodes (excluding the first node) must be stopped.

As shown in the following figure, given n nodes, you begin the non-rolling patch session by patching a single node, then patch nodes two through n-1 in parallel, and finally patch node n to finish the patching session.


Figure: Non-rolling mode patching sequence

To run OPatchAuto in non-rolling mode, you must explicitly specify the -nonrolling option.
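
A minimal sketch of the two modes (the home and patch paths are placeholders):

  # Rolling mode (default): nodes are patched one at a time.
  $GI_HOME/OPatch/opatchauto apply /stage/patches/12345678

  # Non-rolling mode: must be requested explicitly.
  $GI_HOME/OPatch/opatchauto apply /stage/patches/12345678 -nonrolling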

Patch Application Mode Conflict

As mentioned earlier, OPatchAuto applies patches in rolling mode by default. If the patch is applied in rolling mode but the patch content does not support rolling application, OPatchAuto errors out when attempting to run rootcrs.pl -prepatch.
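
To avoid this conflict, you can check up front whether the patch content supports rolling application. The opatch query invocation below is a sketch: option names vary by OPatch version (see opatch query -help), and for a System patch the query is run against an individual sub-patch directory.

  # Sketch: query a sub-patch for rolling capability; confirm the option name
  # for your OPatch version with: opatch query -help
  $GI_HOME/OPatch/opatch query -is_rolling_patch /stage/patches/12345678/<sub-patch-directory>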

3.5.3 Configuration Support

OPatchAuto supports the following platforms:

  • Oracle Solaris on x86-64 (64-bit)

  • Linux x86-64

  • Oracle Solaris on SPARC (64-bit)

  • IBM AIX on POWER Systems (64-bit)

  • HP-UX Itanium

  • Linux (32-bit)

OPatchAuto supports shared and non-shared Oracle homes. It supports patching cluster configurations that manage mixed versions of the Oracle Database, though, of course, it only patches those databases whose versions match the input patch content.

Note:

Microsoft Windows is not supported.