5 Patching of Grid Infrastructure and RAC DB Environment Using OPatchAuto

OPatchAuto automates patch application to a Grid Infrastructure (GI) cluster by applying patches to both the GI home and the managed Oracle Real Application Clusters (RAC) homes.

Patch orchestration is the automated execution of the patching steps, such as running pre-patch checks, stopping services, applying the binary patches, and restarting the services. Patch orchestration for Oracle Database 12c applies the patch to the GI/RAC configuration on the machine, including all of its databases. The OPatchAuto patch orchestration utility is available with version 12.1 of the OPatch utility.

This chapter covers the following topics:

  • Configuration Support

  • Preparing to Use OPatchAuto

  • Using OPatchAuto to Patch a GI/RAC Environment

  • Patching a Sharded Database

Note:

This chapter applies to Oracle Database 12c only.

5.1 Configuration Support

OPatchAuto supports the following platforms:

  • Oracle Solaris on x86-64 (64-bit)

  • Linux x86-64

  • Oracle Solaris on SPARC (64-bit)

  • IBM AIX on POWER Systems (64-bit)

  • HP-UX Itanium

  • Linux (32-bit)

OPatchAuto supports shared and non-shared Oracle homes. It also supports patching cluster configurations that manage mixed versions of Oracle Database, although it patches only those databases whose versions match the input patch content.

Note:

Microsoft Windows is not supported.

5.2 Preparing to Use OPatchAuto

To ensure successful patching, there are several prerequisites you should complete to prepare your environment for running OPatchAuto, such as obtaining the latest version of OPatch, obtaining required patches from My Oracle Support, and backing up the environment.

For more information on preparing your environment, see the following topics:

  • Patching Your Environment Using OPatchAuto

    OPatchAuto is installed with the OPatch utility as part of your installation. OPatchAuto provides several commands that you can use to automate the application and rollback of a patch in a single-host or multi-host environment.

  • Locating and Obtaining the Latest Version of OPatch and OPatchAuto

    Before you run OPatchAuto, find the OPatchAuto utility in the Oracle home and verify that you have the latest version. You must have the latest version of OPatch in all homes on all nodes before you initiate patching, as shown in the version check after this list.

  • Obtaining Patches Required For Your Installation

    You can search for and download the latest patches for your installation from My Oracle Support.

  • Configuring Node Manager to Support Start and Stop Operations

    To ensure that OPatchAuto can properly stop and start your system during patching, you must configure the Node Manager(s) to support the start and stop operations.

  • Backup and Recovery Considerations for Patching

    It is highly recommended that you back up the Oracle home before any patch operation. You can back up the Oracle home using your preferred method.
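
To check the OPatch version installed in a home, you can run the opatch version command from that home. A minimal sketch, using a placeholder home path (the version shown is illustrative):

# <GI_HOME>/OPatch/opatch version
OPatch Version: 12.1.0.1.10

Run the same check in each Oracle home on each node, and compare the reported versions against the latest version available from My Oracle Support.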

5.2.1 OPatchAuto Environment Variable

Before you run OPatchAuto, ensure that you set the required ORACLE_HOME environment variable. The ORACLE_HOME environment variable is used to identify the Oracle home you are planning to patch.
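
For example, to target a GI home installed under a hypothetical path:

# export ORACLE_HOME=/u01/app/12.1.0/grid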

5.2.2 Running OPatchAuto on a Single Node

Note:

  • The latest version of OPatch must be present in all homes to be patched, on all nodes.

Note:

The following conditions apply only to the first node, that is, when the session is first started on the cluster.

5.2.2.1 Node Availability during Patching (Rolling vs. Non-rolling)

In order to start a new patching session, the following conditions must be met:

  • The utility must be executed by an operating system (OS) user with root privileges. It must be executed on each node in the cluster if the GI home or Oracle RAC database home is on non-shared storage. The utility must not be run in parallel on the cluster nodes.

  • The local node must be up for both rolling and non-rolling modes.

  • At least one of the remote nodes must be up in order to start a rolling-mode session.

  • All of the remote nodes must be down in order to start a non-rolling session.

  • If the GI cluster is a Flex Cluster setup, ensure that the first and last nodes where opatchauto is executed are hub nodes, not leaf nodes. You can check a node's role as shown in the sketch after this list.
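
To check whether a node is a hub or a leaf node in a Flex Cluster, you can query its configured role with crsctl. A minimal sketch, run from the GI home (the node name and output shown are illustrative):

# <GI_HOME>/bin/crsctl get node role config
Node 'mynode1' configured role is 'hub'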

5.2.2.2 OPatchAuto Apply/Rollback Steps

Add the directory containing opatchauto to the $PATH environment variable. For example:

# export PATH=$PATH:<GI_HOME>/OPatch

To patch the GI home and all Oracle RAC database homes of the same version:

# opatchauto apply <UNZIPPED_PATCH_LOCATION>/<Patch-id>

To patch only the GI home:

# opatchauto apply <UNZIPPED_PATCH_LOCATION>/<Patch-id> -oh <GI_HOME>

To patch one or more Oracle RAC database homes:

# export PATH=$PATH:<oracle_home1_path>/OPatch
# opatchauto apply <UNZIPPED_PATCH_LOCATION>/<Patch-id> -oh <oracle_home1_path>,<oracle_home2_path>

To roll back the patch from the GI home and each Oracle RAC database home:

# opatchauto rollback <UNZIPPED_PATCH_LOCATION>/<Patch-id>

To roll back the patch from the GI home:

# opatchauto rollback <UNZIPPED_PATCH_LOCATION>/<Patch-id> -oh <path to GI home>

To roll back the patch from one or more Oracle RAC database homes:

# opatchauto rollback <UNZIPPED_PATCH_LOCATION>/<Patch-id> -oh <oracle_home1_path>,<oracle_home2_path>
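
For example, assuming the patch was unzipped to the hypothetical location /tmp/patches/22191349 and the GI home is /u01/app/12.1.0/grid, you might first analyze and then apply:

# export PATH=$PATH:/u01/app/12.1.0/grid/OPatch
# opatchauto apply /tmp/patches/22191349 -analyze
# opatchauto apply /tmp/patches/22191349 -oh /u01/app/12.1.0/grid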

5.2.3 Patching Session Output

The following patching session output examples illustrate successful OPatchAuto apply and rollback sessions.

Example 5-1 OPatchAuto Apply/Rollback Session in Analyze Mode

--------------------------------Summary--------------------------------
Analysis for applying patches has completed successfully:
Host:myhostq
CRS Home:/scratch/aime_ordb_myhostq/crso1/crshome_crso1
==Following patches were SUCCESSFULLY analyzed to be applied:
Patch: /tmp/patch_gipsu_12024/patch/22191349/21436941
Log: /scratch/aime_ordb_myhostq/crso1/crshome_crso1/cfgtoollogs/opatchauto/core/opatch/opatch2016-03-08_23-21-21PM_1.log
Patch: /tmp/patch_gipsu_12024/patch/22191349/21948341
Log: /scratch/aime_ordb_myhostq/crso1/crshome_crso1/cfgtoollogs/opatchauto/core/opatch/opatch2016-03-08_23-21-21PM_1.log
Patch: /tmp/patch_gipsu_12024/patch/22191349/21948344
Log: /scratch/aime_ordb_myhostq/crso1/crshome_crso1/cfgtoollogs/opatchauto/core/opatch/opatch2016-03-08_23-21-21PM_1.log
Patch: /tmp/patch_gipsu_12024/patch/22191349/21948354
Log: /scratch/aime_ordb_myhostq/crso1/crshome_crso1/cfgtoollogs/opatchauto/core/opatch/opatch2016-03-08_23-21-21PM_1.log
 
Host:myhostr
CRS Home:/scratch/aime_ordb_myhostq/crso1/crshome_crso1
 
==Following patches were SUCCESSFULLY analyzed to be applied:
 
Patch: /scratch/aime_ordb_myhostq/crso1/crshome_crso1/OPatch/auto/dbtmp/22191349/21436941
Log: /scratch/aime_ordb_myhostq/crso1/crshome_crso1/cfgtoollogs/opatchauto/core/opatch/opatch2016-03-08_23-24-56PM_1.log
 
Patch: /scratch/aime_ordb_myhostq/crso1/crshome_crso1/OPatch/auto/dbtmp/22191349/21948341
 
Log: /scratch/aime_ordb_myhostq/crso1/crshome_crso1/cfgtoollogs/opatchauto/core/opatch/opatch2016-03-08_23-24-56PM_1.log
 
Patch: /scratch/aime_ordb_myhostq/crso1/crshome_crso1/OPatch/auto/dbtmp/22191349/21948344
 
Log: /scratch/aime_ordb_myhostq/crso1/crshome_crso1/cfgtoollogs/opatchauto/core/opatch/opatch2016-03-08_23-24-56PM_1.log
 
Patch: /scratch/aime_ordb_myhostq/crso1/crshome_crso1/OPatch/auto/dbtmp/22191349/21948354
 
Log: /scratch/aime_ordb_myhostq/crso1/crshome_crso1/cfgtoollogs/opatchauto/core/opatch/opatch2016-03-08_23-24-56PM_1.log
 
OPatchAuto successful.

Example 5-2 OPatchAuto Apply Session

--------------------------------Summary--------------------------------
Patching is completed successfully. Please find the summary as follows:
Host:mymachineemq
CRS Home:/scratch/aime_ordb_mymachineemq/crso1/crshome_crso1
Summary:
==Following patches were SUCCESSFULLY applied:
Patch: /tmp/patch_gipsu_12024/patch/22191349/21436941
Log: /scratch/aime_ordb_mymachineemq/crso1/crshome_crso1/cfgtoollogs/opatchauto/core/opatch/opatch2016-03-08_23-41-38PM_1.log
Patch: /tmp/patch_gipsu_12024/patch/22191349/21948341
Log: /scratch/aime_ordb_mymachineemq/crso1/crshome_crso1/cfgtoollogs/opatchauto/core/opatch/opatch2016-03-08_23-41-38PM_1.log
Patch: /tmp/patch_gipsu_12024/patch/22191349/21948344
Log: /scratch/aime_ordb_mymachineemq/crso1/crshome_crso1/cfgtoollogs/opatchauto/core/opatch/opatch2016-03-08_23-41-38PM_1.log
Patch: /tmp/patch_gipsu_12024/patch/22191349/21948354
Log: /scratch/aime_ordb_mymachineemq/crso1/crshome_crso1/cfgtoollogs/opatchauto/core/opatch/opatch2016-03-08_23-41-38PM_1.log
Host:mymachineemr
CRS Home:/scratch/aime_ordb_mymachineemq/crso1/crshome_crso1
Summary:
==Following patches were SUCCESSFULLY applied:
Patch: /scratch/aime_ordb_mymachineemq/crso1/crshome_crso1/OPatch/auto/dbtmp/22191349/21436941
Log: /scratch/aime_ordb_mymachineemq/crso1/crshome_crso1/cfgtoollogs/opatchauto/core/opatch/opatch2016-03-08_23-59-15PM_1.log
Patch: /scratch/aime_ordb_mymachineemq/crso1/crshome_crso1/OPatch/auto/dbtmp/22191349/21948341
Log: /scratch/aime_ordb_mymachineemq/crso1/crshome_crso1/cfgtoollogs/opatchauto/core/opatch/opatch2016-03-08_23-59-15PM_1.log
Patch: /scratch/aime_ordb_mymachineemq/crso1/crshome_crso1/OPatch/auto/dbtmp/22191349/21948344
Log: /scratch/aime_ordb_mymachineemq/crso1/crshome_crso1/cfgtoollogs/opatchauto/core/opatch/opatch2016-03-08_23-59-15PM_1.log
Patch: /scratch/aime_ordb_mymachineemq/crso1/crshome_crso1/OPatch/auto/dbtmp/22191349/21948354
Log: /scratch/aime_ordb_mymachineemq/crso1/crshome_crso1/cfgtoollogs/opatchauto/core/opatch/opatch2016-03-08_23-59-15PM_1.log
OPatchAuto successful.

Example 5-3 OPatchAuto Rollback Session

--------------------------------Summary--------------------------------
Patching is completed successfully. Please find the summary as follows:
Host:mymachineemm
CRS Home:/scratch/aime_ordb_mymachineemm/crso1/crshome_crso1
Summary:
==Following patches were SUCCESSFULLY rolled back:
Patch: /tmp/patch_gipsu_12019/patch/22191492/17077442
Log: /scratch/aime_ordb_mymachineemm/crso1/crshome_crso1/cfgtoollogs/opatchauto/core/opatch/opatch2016-03-08_19-25-46PM_1.log
Patch: /tmp/patch_gipsu_12019/patch/22191492/17303297
Log: /scratch/aime_ordb_mymachineemm/crso1/crshome_crso1/cfgtoollogs/opatchauto/core/opatch/opatch2016-03-08_19-25-46PM_1.log
Patch: /tmp/patch_gipsu_12019/patch/22191492/21951844
Log: /scratch/aime_ordb_mymachineemm/crso1/crshome_crso1/cfgtoollogs/opatchauto/core/opatch/opatch2016-03-08_19-25-46PM_1.log
OPatchAuto successful.

Example 5-4 OPatchAuto Apply/Rollback Failure

---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /scratch/aime/app/aime/product/11.2.0/dbhome_2, host: mymachineemg.
Command failed:  /bin/sh -c 'ORACLE_HOME=/scratch/aime/app/aime/product/11.2.0/dbhome_2 /scratch/aime/app/aime/product/11.2.0/dbhome_2/bin/srvctl stop home -o /scratch/aime/app/aime/product/11.2.0/dbhome_2 -n mymachineemg -f -t TRANSACTIONAL -s /scratch/aime/app/aime/product/11.2.0/dbhome_2/cfgtoollogs/opatchautodb/statfile/mymachineemg/OracleHome-eca39d53-5b51-4cdf-9c79-ce9d9312d86a_mymachineemg.stat'
Command failure output:
PRCH-1000 : Failed to stop resources running from Oracle home /scratch/aime/app/aime/product/11.2.0/dbhome_2
PRCH-1029 : One or more resources failed to stop: PRCH-1006 : Failed to stop Listener
PRCR-1014 : Failed to stop resource ora.LISTENER2.lsnr
PRCR-1065 : Failed to stop resource ora.LISTENER2.lsnr
CRS-5016: Process "/scratch/aime/app/aime/product/11.2.0/dbhome_2/bin/lsnrctl" spawned by agent "/scratch/aime_ordb_mymachineemg/crso1/crshome_crso1/bin/oraagent.bin" for action "stop" failed: details at "(:CLSN00010:)" in "/scratch/aime_ordb_mymachineemg/crso1/crshome_crso1/log/mymachineemg/agent/crsd/oraagent_aime/oraagent_aime.log"
CRS-2675: Stop of 'ora.LISTENER2.lsnr' on 'mymachineemg' failed
 
After fixing the cause of failure Run opatchauto resume with session id "J5A3"
OPATCHAUTO-68061: The orchestration engine failed.
 
OPATCHAUTO-68061: The orchestration engine failed with return code 1
 
OPATCHAUTO-68061: Check the log for more details.

5.2.4 Sample Console Output

Example 5-5 opatchauto apply -analyze

System initialization log file is /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1/cfgtoollogs/opatchautodb/systemconfig2016-05-05_01-55-58PM.log.

Session log file is /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1/cfgtoollogs/opatchauto/opatchauto2016-05-05_01-56-18PM.log

WARNING: the option -ocmrf is deprecated and no longer needed. OPatch no longer checks for OCM configuration. It will be removed in a future release.

The id for this session is MDAN
[init:init] Executing OPatchAutoBinaryAction action on home /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1

Executing OPatch prereq operations to verify patch applicability on CRS Home........
 
[init:init] OPatchAutoBinaryAction action completed on home /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1 successfully
[init:init] Executing GIRACPrereqAction action on home /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1
 
Executing prereq operations before applying on CRS Home........
 
[init:init] GIRACPrereqAction action completed on home /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1 successfully
 
OPatchAuto successful.
 
--------------------------------Summary--------------------------------
Analysis for applying patches has completed successfully:
 
Host:mymachineelu
 
CRS Home:/scratch/aime_ordb_mymachineelu/crso1/crshome_crso1
 
==Following patches were SUCCESSFULLY analyzed to be applied:
 
Patch: /tmp/patch_gipsu_12019/patch/22654153/17077442
 
Log: /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1/cfgtoollogs/opatchauto/core/opatch/opatch2016-05-05_13-56-25PM_1.log
 
Patch: /tmp/patch_gipsu_12019/patch/22654153/17303297
 
Log: /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1/cfgtoollogs/opatchauto/core/opatch/opatch2016-05-05_13-56-25PM_1.log
 
Patch: /tmp/patch_gipsu_12019/patch/22654153/22291141
 
Log: /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1/cfgtoollogs/opatchauto/core/opatch/opatch2016-05-05_13-56-25PM_1.log

Example 5-6 opatchauto apply

System initialization log file is /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1/cfgtoollogs/opatchautodb/systemconfig2016-05-05_02-22-02PM.log.
 
Session log file is /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1/cfgtoollogs/opatchauto/opatchauto2016-05-05_02-22-19PM.log
 
WARNING: the option -ocmrf is deprecated and no longer needed. OPatch no longer checks for OCM configuration. It will be removed in a future release.
 
The id for this session is WLR9
 
[init:init] Executing OPatchAutoBinaryAction action on home /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1
 
Executing OPatch prereq operations to verify patch applicability on CRS Home........
 
[init:init] OPatchAutoBinaryAction action completed on home /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1 successfully
 
[init:init] Executing GIRACPrereqAction action on home /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1
 
Executing prereq operations before applying on CRS Home........
 
[init:init] GIRACPrereqAction action completed on home /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1 successfully
 
[shutdown:shutdown] Executing GIShutDownAction action on home /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1
 
Performing prepatch operations on CRS Home........
 
Prepatch operation log file location: /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1/cfgtoollogs/crsconfig/crspatch_mymachineelu_2016-05-05_02-22-52PM.log
 
[shutdown:shutdown] GIShutDownAction action completed on home /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1 successfully
 
[offline:binary-patching] Executing OPatchAutoBinaryAction action on home /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1
 
Start applying binary patches on CRS Home........
 
[offline:binary-patching] OPatchAutoBinaryAction action completed on home /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1 successfully
 
[startup:startup] Executing GIStartupAction action on home /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1
 
Performing postpatch operations on CRS Home........
 
Postpatch operation log file location: /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1/cfgtoollogs/crsconfig/crspatch_mymachineelu_2016-05-05_02-27-03PM.log
 
[startup:startup] GIStartupAction action completed on home /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1 successfully
 
[finalize:finalize] Executing OracleHomeLSInventoryGrepAction action on home /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1
 
Verifying patches applied on CRS Home.
[finalize:finalize] OracleHomeLSInventoryGrepAction action completed on home /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1 successfully
OPatchAuto successful.
 
--------------------------------Summary--------------------------------
 Patching is completed successfully. Please find the summary as follows:
 
Host:mymachineelu
CRS Home:/scratch/aime_ordb_mymachineelu/crso1/crshome_crso1
Summary:
 
==Following patches were SUCCESSFULLY applied:
 
Patch: /tmp/patch_gipsu_12019/patch/22654153/17077442
 
Log: /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1/cfgtoollogs/opatchauto/core/opatch/opatch2016-05-05_14-23-38PM_1.log
 
Patch: /tmp/patch_gipsu_12019/patch/22654153/17303297
Log: /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1/cfgtoollogs/opatchauto/core/opatch/opatch2016-05-05_14-23-38PM_1.log
 
Patch: /tmp/patch_gipsu_12019/patch/22654153/22291141
 
Log: /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1/cfgtoollogs/opatchauto/core/opatch/opatch2016-05-05_14-23-38PM_1.log

Example 5-7 opatchauto rollback

System initialization log file is /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1/cfgtoollogs/opatchautodb/systemconfig2016-05-05_04-34-39PM.log.
 
Session log file is /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1/cfgtoollogs/opatchauto/opatchauto2016-05-05_04-35-00PM.log
The id for this session is K5BA
[init:init] Executing OPatchAutoBinaryAction action on home /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1
 
Executing OPatch prereq operations to verify patch applicability on CRS Home........
 
[init:init] OPatchAutoBinaryAction action completed on home /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1 successfully
 
[init:init] Executing GIRACPrereqAction action on home /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1
 
Executing prereq operations before rolling back on CRS Home........
 
[init:init] GIRACPrereqAction action completed on home /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1 successfully
 
[shutdown:shutdown] Executing GIShutDownAction action on home /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1
 
Performing prepatch operations on CRS Home........
 
Prepatch operation log file location: /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1/cfgtoollogs/crsconfig/crspatch_mymachineelu_2016-05-05_04-35-22PM.log
 
[shutdown:shutdown] GIShutDownAction action completed on home /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1 successfully
[offline:binary-patching] Executing OPatchAutoBinaryAction action on home /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1
 
Start rolling back binary patches on CRS Home........
 
[offline:binary-patching] OPatchAutoBinaryAction action completed on home /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1 successfully
[startup:startup] Executing GIStartupAction action on home /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1
 
Performing postpatch operations on CRS Home........
 
Postpatch operation log file location: /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1/cfgtoollogs/crsconfig/crspatch_mymachineelu_2016-05-05_04-38-59PM.log
 
[startup:startup] GIStartupAction action completed on home /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1 successfully 
OPatchAuto successful.
 
--------------------------------Summary--------------------------------
 
Patching is completed successfully. Please find the summary as follows:
 
Host:mymachineelu
CRS Home:/scratch/aime_ordb_mymachineelu/crso1/crshome_crso1 
Summary:
 
==Following patches were SUCCESSFULLY rolled back:
 
Patch: /tmp/patch_gipsu_12019/patch/22654153/17077442
Log: /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1/cfgtoollogs/opatchauto/core/opatch/opatch2016-05-05_16-36-04PM_1.log
 
Patch: /tmp/patch_gipsu_12019/patch/22654153/17303297
Log: /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1/cfgtoollogs/opatchauto/core/opatch/opatch2016-05-05_16-36-04PM_1.log
 
Patch: /tmp/patch_gipsu_12019/patch/22654153/22291141
Log: /scratch/aime_ordb_mymachineelu/crso1/crshome_crso1/cfgtoollogs/opatchauto/core/opatch/opatch2016-05-05_16-36-04PM_1.log

5.2.5 OPatchAuto Apply

When you run the OPatchAuto apply command, numerous operations are performed to implement the complete patch application cycle. These operations vary depending on the environment to be patched. A typical patching environment, representative of most environments in which OPatchAuto is used, is one GI home managing two RAC homes. When you run opatchauto apply, OPatchAuto performs the operations shown in Figure 5-1.

Figure 5-1 Patching with OPatchAuto: Process Flow

5.2.6 OPatchAuto: System Reboot Request

Depending on the patch or the home directory configuration, you may encounter a request to reboot the system. After a reboot during the patching process, you must invoke the opatchauto utility again so that it seamlessly continues with the rest of the patch application process.

Typically an error message, as shown in the following example, will be displayed when a problem arises.

Example 5-8 OPatchAuto Console Error

# OPatch/opatchauto apply /scratch/aime/sh/RDBMS_12.1.0.1.0_LINUX.X64_130418/patches/v2/nosql/gipsu/11111111 -ocmrf /tmp/ocm.rsp 
OPatch Automation Tool
Copyright (c) 2013, Oracle Corporation.  All rights reserved.
 ... 
 
CLSRSC-400: A system reboot is required to continue installing.
 
...

Apply Summary:
Following patch(es) are successfully installed:
GI_HOME=/u01/GI12/app/12.1.0/grid:13852018, 22222222, 123456788
DB_HOME=/scratch/aime/DB12_2/app/aime/product/12.1.0/dbhome_1:13852018, 123456788
DB_HOME=/scratch/aime1/DB12N/app/aime1/product/12.1.0/dbhome_1:13852018, 123456788
 
opatchauto failed with error code 1.

When you receive an error like this, follow the reboot instructions specified in the console. The following example shows how to resume the patching session after the system reboot.

Example 5-9 Rebooting the System

# OPatch/opatchauto resume -reboot
OPatch Automation Tool
Copyright (c) 2013, Oracle Corporation.  All rights reserved.
 
 
OPatchauto version : 12.1.0.1.1
OUI version        : 12.1.0.1.0
Running from       : /u01/GI12/app/12.1.0/grid
Log file location  : /u01/GI12/app/12.1.0/grid/cfgtoollogs/opatch/opatch2013-05-16_13-36-59PM_1.log
 
Opatchauto will attempt to resume from reboot patching session. This might take several minutes...
 
Command "/usr/bin/perl /u01/GI12/app/12.1.0/grid/crs/install/rootcrs.pl -postpatch" is successfully resumed.
Command "/scratch/aime1/DB12N/app/aime1/product/12.1.0/dbhome_1/bin/srvctl start home -o /scratch/aime1/DB12N/app/aime1/product/12.1.0/dbhome_1 -n slc00epi -s /scratch/aime1/DB12N/app/aime1/product/12.1.0/dbhome_1/OracleHome-50b8f1a0-e220-4b8e-98d7-49177979991f.stat " is successfully resumed.
Command "/scratch/aime/DB12_2/app/aime/product/12.1.0/dbhome_1/bin/srvctl start home -o /scratch/aime/DB12_2/app/aime/product/12.1.0/dbhome_1 -n slc00epi -s /scratch/aime/DB12_2/app/aime/product/12.1.0/dbhome_1/OracleHome-58232a10-3130-4930-b588-0c8594cf8c87.stat " is successfully resumed.
Opatchauto was able to resume from the previous reboot patching session and complete successfully.
 
opatchauto succeeded.

5.3 Using OPatchAuto to Patch a GI/RAC Environment

Applying a patch with OPatchAuto involves a series of steps that must be performed to ensure successful patching.

The following table summarizes the typical steps required to patch your existing GI/RAC environment using OPatchAuto.

Table 5-1 Using OPatchAuto

Task: Acquire the patches required for your installation.

Description: Log in, search for, and download the patches required for your specific installation. You do not need to worry about whether OPatchAuto supports a particular patch type: if OPatchAuto does not support a particular patch type, you are notified when you run the tool.

Documentation: Obtaining the Patches You Need

Task: Review the README.txt file for the patch.

Description: Each patch archive includes a README file that contains important information and instructions that must be followed before applying your patch. It is important to review the README file because it provides any unique steps or other information specific to the patch.

Documentation: The README.txt file packaged within the patch archive

Task: Check for patch prerequisites.

Description: The opatchauto apply -analyze command verifies that the prerequisites for the patch have been met.

Documentation: If you are patching a single-host environment, see Verifying the Prerequisites for Applying a Patch on a Single Host. If you are patching a multi-host environment, see Verifying the Prerequisites for Applying a Patch on Multiple Hosts.

Task: Apply the patch.

Description: After you determine the Oracle home to which you need to apply the patch, and you have read the README file, apply the patch with the opatchauto apply command.

Documentation: If you are patching a multi-host environment, see Applying a Patch on Multiple Hosts Using the Apply Command.

Task: Verify that the patch was applied to the Oracle home successfully.

Description: The OPatch lsinventory command shows which patches have been applied to the Oracle home, as shown in the sketch after this table.

Documentation: Using the OPatch lsinventory Command to Verify the Patches Applied to an Oracle Home

Task: Verify that your software runs properly after you apply the patch.

Description: After the patching is complete and your servers are restarted, check your product software to verify that the issue has been resolved.

Documentation: Verifying Your Installation After Applying a Patch

Task: Troubleshoot the application of a patch.

Description: If there are problems applying a patch, your first troubleshooting task is to review the log file for the OPatchAuto session.

Documentation: Troubleshooting a Patch by Viewing the OPatchAuto Log File

Task: Roll back the application of a patch.

Description: If the result is not satisfactory, you can use the opatchauto rollback command to remove the patch from the Oracle home. If additional assistance is required, go to My Oracle Support (formerly OracleMetaLink).

Documentation: For a single-host environment, see Rolling Back a Patch You Have Applied on a Single Host. For a multi-host environment, see Rolling Back a Patch You Have Applied on Multiple Hosts.
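
For example, to verify the patches recorded in a home's inventory after patching completes, a minimal sketch (the home path is a placeholder):

# export ORACLE_HOME=<path to patched home>
# $ORACLE_HOME/OPatch/opatch lsinventory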

5.4 Patching a Sharded Database

A sharded database (SDB) patching session must be initiated from the catalog host by providing the connection details of the catalog database. This section gives a top-level view of patching an SDB, making it easier to understand the OPatchAuto SDB flow.

Sharding is an application-managed scaling technique that uses many (hundreds or thousands of) independent databases. With sharding, data is split into multiple databases (shards), with each database holding a subset of the data. Shards can be replicated for high availability and scalability.

OPatchAuto supports end-to-end patching of sharded databases across multiple regions, along with the Grid Infrastructure (GI) that supports the clustered databases/shards, and shards that are managed by Oracle GoldenGate or Oracle Data Guard.

OPatchAuto supports all sharding and replication methods.

Supported Configurations:

  • Shard types: databases running on GI/SIHA, and standalone databases

  • Different versions across Data Guard configurations (currently, only version 12.2 is available)

  • Multiple shards sharing the same cluster/grid

  • Multiple versions of shards in an OGG-replicated SDB

5.4.1 Selectively Patching Subset Entities

A sharded database can span multiple regions and clusters. To manage the patching cycle more efficiently, you may want to scale down the patching effort by patching only specific subset entities. OPatchAuto provides three options for selecting the subset entities:

1. Data Guards

2. Shard Groups

3. Shard Spaces

You can select instances of any one of the above entity types. Note that you can specify only a single entity type for a given patching cycle. For all of the subset entities, OPatchAuto identifies the targets on the basis of their names:

1. -dg: Comma-separated list of names of the primary databases

2. -shardgroup: Comma-separated list of names of the shard groups

3. -shardspace: Comma-separated list of names of the shard spaces

For all of the above options, the Grid homes of the databases are also patched. The CRS/RAC homes are patched across all of their nodes in a rolling manner. Similarly, the GoldenGate home is also patched.

5.4.1.1 Data Guard

When you select the -dg <primaryDB name> option, only the databases (along with their cluster homes) that belong to the selected Data Guard configuration are patched. These databases are patched in a rolling manner, starting with the standby shards and ending with the primary shard.

The entire list of databases in a sharded database can be collected from the catalog database table GSMADMIN_INTERNAL.DATABASE under the column NAME.

5.4.1.2 Shard Group

When you select the -shardgroup <shardgroup name> option, the databases of the selected shard group are patched in a rolling manner. In a Data Guard-replicated configuration, a shard group can host only one member of each Data Guard configuration. Hence, this option is not supported in a sharded database that employs Data Guard replication: an entire Data Guard configuration, while spread across multiple shard groups, must be patched together in a defined sequence, starting with the standby databases and ending with the primary database. Patching individual shard groups poses a major risk of breaking that sequence.

The entire list of shard groups in a sharded database can be collected from the catalog database table GSMADMIN_INTERNAL.SHARD_GROUP under the column NAME.

5.4.1.3 Shard Space

When you specify the -shardspace <shardspace name> option, the databases of the entire shard space are patched in a rolling manner. These databases may be spread across multiple shard groups, depending on the configuration of the sharded database.

The entire list of shard spaces in a sharded database can be collected from the catalog database table GSMADMIN_INTERNAL.SHARD_SPACE under the column NAME.
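
The following sketch illustrates how you might collect these names from the catalog, assuming a SQL*Plus connection to the catalog database as a user with access to the GSMADMIN_INTERNAL schema (the connection details are placeholders):

$ sqlplus <catalog_user>@<catalog_connect_string>
SQL> SELECT name FROM gsmadmin_internal.database;
SQL> SELECT name FROM gsmadmin_internal.shard_group;
SQL> SELECT name FROM gsmadmin_internal.shard_space;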

5.4.2 Sharded Database Command Option

The OPatchAuto -sdb command option allows you to patch sharded databases. This operation patches all the shards of the sharded database.

In a sharded database with Data Guard replication, patching a Data Guard configuration involves patching all of its standby databases first, followed by its primary database. In Oracle GoldenGate-based replication, for a user-defined configuration, the operation patches around the shard spaces; for system-managed and composite configurations, it patches around the shard groups.

The values of -host, -port, and -sid are used to connect to the catalog database, so they must form the required connect string for the catalog database. All of the databases are patched first. Thereafter, the Grid/HAS homes that host any of the databases belonging to the sharded database are patched. The catalog database and the GSMs are not patched; to patch these, run OPatchAuto separately on those databases.

The following syntax illustrates command line usage:

opatchauto apply <patch-location>
    -sdb
    -wallet <wallet>
    [ -phBaseDir <patch.base.directory> ] 
    [ -logLevel <log_priority> ]
    [ -analyze ]
    [ -host <tns-host> ]
    [ -dg <primary.database.name> ]    
    [ -shardgroup <shardgroup> ]
    [ -shardspace <shardspace> ]
    [ -rolling ]
    [ -service <service> ]
    [ -inplace ]
    [ -sid <sid> ]
    [ -port <port> ]

Parameters

  • patch-location

    The patch location.

Options

    • phBaseDir <patch.base.directory>

      The location of the base patch directory.

    • logLevel <log_priority>

      The log level (defaults to "INFO").

      Supported values: OFF, SEVERE, WARNING, INFO, CONFIG, FINE, FINER, FINEST, ALL

    • analyze

      If this option is selected, the environment is analyzed for the suitability of the patch on each home, without affecting the home.

      The patch will not be applied or rolled back, and targets will not be shut down.

    • host <tns-host>

      The tns-host of the catalog database. This should match the 'HOST' used in the network configuration file of the catalog database. The default host is set as the local 'hostname' without appending the domain.

    • wallet <wallet location> (Refer to Creating Wallet Using OPatchAuto Wallet Tool)

      The location of the wallet file.

      The entries made in the wallet file must satisfy these requirements:

      • The wallet must contain the credentials of the owners of all homes that will be patched as part of the session, on every node. The homes patched on a node also include the GI/SIHA home when installed. The list of nodes refers to all hostnames/IP addresses listed in the GSMADMIN_INTERNAL.SHA_DATABASES.DB_HOST column of the sharded database catalog.

      • It must also contain the credentials of the database user with the 'sysdba' privilege for the catalog database.

      Additionally, the host user provided in the wallet must meet the following requirement:

      • The user must be able to change to root using 'sudo' if the node belongs to a GI/SIHA environment.

    • dg <primary.database.name>

      This is used to restrict patching to the databases of the selected Data Guard configuration. All the standby databases of the configuration are patched first, followed by its primary database.

    • shardgroup <shardgroup>

      This is used to restrict patching to databases of the selected shard group.

    • shardspace <shardspace>

      This is used to restrict patching to databases of the selected shard space.

    • sdb (Required)

      Signifies that a sharded database is being patched. Run 'opatchauto <apply|rollback> -sdb -help' to get more help on patching a sharded database.

    • rolling

      Enables SDB rolling mode, in which databases are patched one after the other.

    • service <service>

      Service name of the catalog database.

    • inplace

      This option can be used to perform in-place patching through opatchauto. In this mode, opatchauto performs the patching operation on the original Oracle home, so there is downtime and the high availability of services is affected. In-place is the default patching mode for opatchauto.

    • sid <sid>

      This option can be used both for shard patching and for standalone single-instance database (SIDB) patching. For standalone SIDB patching, it takes the instance name of the database as its value. For shard patching, it takes the catalog database name. If neither -service nor -sid is provided on the command line, the default SID is taken from the environment variable 'ORACLE_SID'.

    • port <port>

      This signifies the port for connecting to the catalog database. The default port is set as '1521'.

The following examples demonstrate how to use the various OPatchAuto command options when patching a sharded database.

Example 5-10 Patching a Sharded Database

<CATALOG_DB_HOME>/OPatch/opatchauto apply <patch location> -sdb -wallet <wallet file location> -sid <sid of catalog db> -port <sid configured port>

Example 5-11 Patching a Data Guard in a Sharded Database

<CATALOG_DB_HOME>/OPatch/opatchauto apply <patch location> -sdb -dg <primary_database_name1,primary_database_name2,...> -wallet <wallet file location> -sid <sid of catalog db> -port <sid configured port>

Example 5-12 Patching a Shardgroup in a Sharded Database

<CATALOG_DB_HOME>/OPatch/opatchauto apply <patch location> -sdb -shardgroup <shardgroup_name1,shardgroup_name2,...> -wallet <wallet file location> -sid <sid of catalog db> -port <sid configured port>

Example 5-13 Patching a Shardspace in a Sharded Database

<CATALOG_DB_HOME>/OPatch/opatchauto apply <patch location> -sdb -shardspace <shardspace_name1,shardspace_name2,...> -wallet <wallet file location> -sid <sid of catalog db> -port <sid configured port>

Example 5-14 Listing Wallet Content

<CATALOG_DB_HOME>/OPatch/auto/core/bin/patchingWallet.sh -walletDir <wallet.location> -list

Example 5-15 Adding a New Host Credential Entry to the Wallet

<CATALOG_DB_HOME>/OPatch/auto/core/bin/patchingWallet.sh -walletDir <wallet location> -create <username>:<hostname>:ssh
<CATALOG_DB_HOME>/OPatch/auto/core/bin/patchingWallet.sh -walletDir <wallet location> -create oracle:myhost:ssh
<CATALOG_DB_HOME>/OPatch/auto/core/bin/patchingWallet.sh -walletDir <wallet location> -create oracle:127.50.50.50:ssh

Example 5-16 Adding a New Catalog Database Credential Entry to the Wallet

<CATALOG_DB_HOME>/OPatch/auto/core/bin/patchingWallet.sh -walletDir <wallet location> -create <username>:<sid of catalog db>:jdbc

5.4.3 About the Wallet File

Sharded database patching requires credentials to access the targets. The entries made in the wallet file must satisfy the following requirements:

  • The wallet file must contain credentials for each home owner in the entire sharding setup. This includes all the shard homes as well as the GI home owners.

  • It must also contain the credentials of the database user with the 'sysdba' privilege for the catalog database.

  • The GI home owner must have the privilege to run commands as root using sudo.

Oracle Wallet for Credential Input

OPatchAuto accepts credentials, in Oracle wallet format, for accessing run-time entities such as databases and Admin Servers. A wallet file contains credentials for the hosts that are part of the cluster that requires patching. You supply a wallet on the command line; if you do not supply one and OPatchAuto needs one, it prompts you for one on the command line. Successful usage depends on the user possessing both the wallet and the wallet password.

5.4.3.1 Creating Wallet Using OPatchAuto Wallet Tool

You can use the command-line OPatchAuto wallet tool to generate a wallet file. The wallet file contains credentials for the hosts that are part of the cluster that needs to be patched. The wallet tool works seamlessly with the OPatchAuto patch orchestration process: you pass the wallet file path as a parameter during patching operations.

The command line tool is as follows:

<ORACLE_HOME>/OPatch/auto/core/bin/patchingWallet.[sh|cmd]
[-log log_file] [-log_priority log_priority]
{ -create | -delete | -list } alias1 alias2 ...

Table 5-2 patchingWallet Command Options

Option: -create

Description: (Required) Create secrets for each alias given on the command line. If a given alias already exists in the wallet, its secret is overwritten without warning.

Option: -delete

Description: (Required) Delete the given aliases from the wallet. Aliases that do not exist in the wallet are ignored.

Option: -list

Description: (Required) List the aliases defined in the wallet. The secrets associated with the aliases are not displayed. The alias command line arguments are ignored.

Option: -walletDir

Description: (Optional) The path to the wallet directory. If omitted, the default location, if defined, is used. If the wallet does not exist at the specified location, it is created when the -create option is used.

Option: -useStdin

Description: (Optional) When creating aliases, specifies that the passwords are read from STDIN rather than from the console device. Passwords are read in the order specified by the alias options, one per line. There is no prompt.

Option: -log

Description: (Optional) Name of the log file.

Option: -log_priority

Description: (Optional) The priority setting for the log file. Use a Java Logging Level string or a log4j priority string. Valid Java logging values are off, severe, info, warning, config, fine, finer, finest, and all. Valid log4j priority strings are debug, info, warn, error, and fatal. The priority string values correspond to the levels defined in the Level class. More information about log4j priority strings can be found at the following Web site: http://logging.apache.org/log4j/docs/api/org/apache/log4j/Level.html.

5.4.4 Log Files

The summary on the console shows the location of the following log files, which can be accessed for further details about the patching process and for troubleshooting:

  • Main log file of the sharding session

  • Log file for each shard group

  • Log file for each Data Guard configuration/shard space

  • Log file for each individual database

  • Log file for each individual Grid home

In addition to appearing in the console, these log files are available on their respective nodes.