6 Upgrade to Oracle Fusion Applications Release 12

This section describes the steps that must be performed to upgrade to Oracle Fusion Applications Release 12 (11.12.x.0.0).

6.1 Perform Pre-Upgrade Steps During Downtime

The following steps must be performed during downtime, before starting the upgrade:

6.1.1 Run the LCM Schema Seed Utility to Add LCM Schemas

Perform the steps in this section only if the upgrade to Release 12 is from Release 8 or Release 9; skip this step if the starting point is Release 10. Starting in Release 10, all Life Cycle Management (LCM) operations use LCM schemas instead of the SYS schema. The LCM Schema Seed utility updates the environment so that Release 12 upgrade tasks use LCM users instead of the SYS user.

Before running this utility, check if the environment has Database Vault enabled. If it is enabled, then DVOWNER credentials must be available in the Credential Store Framework (CSF).
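A quick way to check whether Database Vault is enabled is to query V$OPTION as a suitably privileged user, for example:

sqlplus -s / as sysdba <<EOF
SELECT value FROM v\$option WHERE parameter = 'Oracle Database Vault';
EOF

The query returns TRUE when Database Vault is enabled.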

To run the utility, perform the following steps:

  1. Create a work directory with read and write permissions, referred to as WORK_DIR.

  2. Download and unzip patch 21167623 in WORK_DIR, which creates the following directories and files:

    • bin

    • ext/jlib/ext_jlib_jars/fapatchset/techpatch.jar

    • pcu/pcubundle.zip

    • sql

    • config

    • patches (the required patches are located in the following subdirectory)

      • fusionapps/patch/

    • rcu (this directory is used for the next step)

  3. Download apps_rcu from REPOSITORY_LOCATION to the rcu directory created in the previous step, and unzip it:

    cp REPOSITORY_LOCATION/installers/apps_rcu/linux/rcuHome_fusionapps_linux.zip WORK_DIR/rcu
    cd WORK_DIR/rcu
    unzip rcuHome_fusionapps_linux.zip
    
  4. Run the lcmSchemaSeeding.sh utility from the bin directory created in Step 2 as follows:

    This utility assumes that rcuHome_fusionapps_linux.zip was unzipped in WORK_DIR/rcu unless a different location is specified with the -rculoc parameter.

    cd WORK_DIR/bin
    lcmSchemaSeeding.sh -appbase APPLICATIONS_BASE [-rculoc directory_name]
    
  5. Review the log files located in APPLICATIONS_CONFIG/lcm/lss_logs.

LCM Schema Seed Utility for Solaris

The LCM Schema Seed Utility performs the following activities, which are internally orchestrated using the Tech Patch Utility (TPU) framework:

  • Applies the required LCM patches using OPatch

  • Runs the Password Change Utility (PCU) to seed the credentials for the six new schemas in the CSF

  • Runs the Repository Creation Utility (RCU) to create the new schemas

  • Runs the various SQL grant scripts to configure the new schemas properly

To make the LCM Schema Seed Utility work in Solaris environments, RCU must be run separately on a Linux machine. Perform the following steps:

  1. Create a work directory with read and write permissions, referred to as WORK_DIR.

  2. Download apps_rcu_11g from REPOSITORY_LOCATION to the rcu directory under the WORK_DIR created in Step 1, as shown in the following example:

    cp REPOSITORY_LOCATION/installers/apps_rcu_11g/linux/rcuHome_fusionapps_linux.zip WORK_DIR/rcu 
    cd WORK_DIR/rcu 
    unzip rcuHome_fusionapps_linux.zip
    
  3. Set JAVA_HOME and PATH on the Solaris machine.

  4. Download and unzip patch 21189887 into a WORK_DIR on the Solaris machine, and run WORK_DIR/bin/lcmSchemaSeeding.sh in preRCU mode. The Schema Seed Utility applies the patches, runs PCU, and then pauses and exits. For example:

    ./lcmSchemaSeeding.sh -appbase <APPTOP> -rculoc <RCU location> -mode preRCU
    
  5. Ensure that the JAVA_HOME environment variable is set properly on the Linux machine.

  6. Extract rcu to the Linux machine as follows:
    cp REPOSITORY_LOCATION/installers/apps_rcu_11g/linux/rcuHome_fusionapps_linux.zip WORK_DIR/rcu 
    cd WORK_DIR/rcu 
    unzip rcuHome_fusionapps_linux.zip
    
  7. Unzip patch 21189887 into a WORK_DIR on the Linux machine and run the WORK_DIR/bin/rcuWrapper_solaris.sh script as follows:

    ./rcuWrapper_solaris.sh -rculoc <RCU location> -jdbcstring <JDBC connect string of database> -instancedir <Complete network path of instance dir> 
    
  8. Provide the credentials for the SYS schema and for the following six new schemas. These credentials must be retrieved from the CSF:
    LCM_SUPER_ADMIN
    LCM_USER_ADMIN
    LCM_EXP_ADMIN
    LCM_OBJECT_ADMIN
    DVACCTMGR
    DVOWNER
    
  9. After creating the schemas, run the lcmSchemaSeeding.sh script in postRCU mode from the Solaris machine as follows:

    ./lcmSchemaSeeding.sh -appbase <APPTOP> -rculoc <RCU location> -mode postRCU
    

6.1.2 Prepare to Register Database Schema Information

To ensure that all database schemas are registered in the credential store, perform the following steps on the primordial host, only once:

  1. Create the PCU_LOCATION/fusionapps/applications directory. PCU_LOCATION is a folder specified as a property in PRIMORDIAL.properties. This location must be within APPLICATIONS_CONFIG. For example:
    APPLICATIONS_CONFIG/lcm/tmp/pcu
    
  2. Unzip SHARED_LOCATION/11.12.x.0.0/Repository/installers/pre_install/pcubundle.zip into PCU_LOCATION/fusionapps/applications.

  3. Go to the PCU bin directory as follows:
    cd PCU_LOCATION/fusionapps/applications/lcm/util/bin
    
  4. Set the JAVA_HOME environment variable before running any commands in this section as follows:
    setenv JAVA_HOME java_home_location
    

    All commands in this section must be run from PCU_LOCATION/fusionapps/applications/lcm/util/bin.

  5. Run the templateGen utility to create the csf_template.ini template file as follows:

    (UNIX)
    ./templateGen.sh -appbase APPLICATIONS_BASE -codebase PCU_LOCATION
    

    For the -appbase argument, specify the complete directory path to the APPLICATIONS_BASE directory.


    The templateGen utility generates the following template files in the PCU_LOCATION/fusionapps/applications/lcm/util/config directory when the -codebase option is used:

    • standard_template.ini

    • csf_template.ini

    • validation_template.ini

    • system_user_template.ini

    • standard_template.properties

    • csf_template.properties

    The command also generates the pcu_output.xml file in the same directory.

  6. Make a copy of csf_template.ini from the PCU_LOCATION/fusionapps/applications/lcm/util/config directory. In this example, the copy is named csf_plain.ini.

  7. Manually edit csf_plain.ini as follows:

    • Set the master_password property to the Master Orchestration Password you previously selected.

    • For each line that contains #text# or #password#, replace #text# or #password# with the correct value for the environment. Note that each password must be a minimum of 8 characters long and must contain at least one alphabetic character and at least one numeric or special character.

    • Do not replace #text<WLS.USER># and #password<WLS.PASSWORD>#, as they are used internally by the PCU preseeding tools.

    To prevent incorrect results, do not alter csf_plain.ini beyond these changes.
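
    For illustration only, an edited fragment of csf_plain.ini might look like the following. The key name EXAMPLE_SCHEMA-KEY is hypothetical (the real keys come from the generated template), and Examp1ePwd satisfies the length and character rules above:

    master_password=Master0rchPwd1
    EXAMPLE_SCHEMA-KEY = Examp1ePwd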

  8. Create an encrypted version of csf_plain.ini and delete the clear-text input file. This step requires an encryption tool, such as the lcmcrypt tool or the Linux gpg tool, that takes a clear-text file and a passphrase and produces an encrypted output file. In the following example, using lcmcrypt, the command reads the passphrase from the standard input and produces an encrypted output file, csf_plain.ini.enc:

    (UNIX) 
    echo master_password | ./lcmcrypt.sh -nonInteractive -encrypt -inputfile complete_directory_path/csf_plain.ini 
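
    If the Linux gpg tool is used instead, a roughly equivalent sketch is shown below; gpg options vary by version, and gpg 2.1 or later may also require --pinentry-mode loopback. Remember to delete the clear-text file afterward:

    echo master_password | gpg --batch --yes --passphrase-fd 0 --symmetric --output complete_directory_path/csf_plain.ini.enc complete_directory_path/csf_plain.ini
    rm complete_directory_path/csf_plain.ini

    The matching decryption for the next step would then pipe the output of gpg --batch --passphrase-fd 0 --decrypt into iniGen.sh instead of using lcmcrypt.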
    
  9. Run iniGen.sh in non-interactive mode. This also requires a decryption tool that takes an encrypted file and a passphrase and writes the decrypted contents to the standard output. The following example uses lcmcrypt:

    (UNIX)
    echo master_password | ./lcmcrypt.sh -nonInteractive -decrypt -inputfile complete_directory_path/csf_plain.ini.enc | ./iniGen.sh -nonInteractive -templatefile PCU_LOCATION/fusionapps/applications/lcm/util/config/csf_template.ini -outputfile PCU_LOCATION/fusionapps/applications/lcm/util/config/csf_encrypted.ini -appbase APPLICATIONS_BASE -codebase PCU_LOCATION
    
    

    The call to lcmcrypt reads the passphrase from the standard input and writes the clear text version of csf_plain.ini.enc to the standard output, which is then piped to the standard input of iniGen.sh. iniGen.sh uses the value of the master_password property to encrypt all other passwords in the generated input file. It also alters the value of the master_password property back to master_password=ignore_me in the generated input file.

  10. Update the CSF_ENCRYPTED_FILE property in ORCH_LOCATION/config/POD_NAME/PRIMORDIAL.properties with the full directory path and file name for PCU_LOCATION/fusionapps/applications/lcm/util/config/csf_encrypted.ini. For more information, see Table 11-2.

Do not use special characters, such as @, _, $, or #, when seeding passwords. Although a password containing special characters can otherwise be enclosed in quotes, the native Repository Creation Utilities (RCUs) for Enterprise Data Quality (EDQ) and Business Intelligence Cloud (BI_CLOUD) do not support creating schemas with special characters.

During cleanup, the log files are copied from <staging directory>/fusionapps/applications/lcm/util/logs to <normal_mode_log_directory>/preupg_<timestamp>, and the configuration files are copied from <staging directory>/fusionapps/applications/lcm/util/config to <normal_mode_template_directory>/preupg_<timestamp>. These include the wallets that were also generated in the wallet directory <staging directory>/fusionapps/applications/lcm/util/config.

For more information about the utilities used in this process, see Password and Certificate Management in the Oracle Fusion Applications Administrator's Guide.

6.1.3 Prepare to Register System User Information

Perform this procedure only if the upgrade to Fusion Applications Release 12 is from Release 8 or Release 9. Skip this step if the starting point is Release 10.

To prepare passwords for system users, perform the following steps:

  1. Make a copy of system_user_template.ini from the PCU_LOCATION/fusionapps/applications/lcm/util/config directory. In this example, the copy is named system_user_plain.ini.

  2. Manually edit system_user_plain.ini as follows:

    • Set the master_password property to the Master Orchestration Password previously selected.

    • For each line that contains #text# or #password#, replace #text# or #password# with the correct value for the environment. Note that each password must be a minimum of 8 characters long and must contain at least one alphabetic character and at least one numeric or special character.

    • Do not replace #text<WLS.USER># and #password<WLS.PASSWORD>#. They are used internally by the SchemaPasswordChangeTool.

    MANDATORY: To prevent incorrect results, do not alter system_user_plain.ini beyond these changes.

  3. Create an encrypted version of system_user_plain.ini and delete the clear-text input file. This step requires an encryption tool, such as the lcmcrypt tool or the Linux gpg tool, that takes a clear-text file and a passphrase and produces an encrypted output file. In the following example, using lcmcrypt, the command reads the passphrase from the standard input and produces an encrypted output file, system_user_plain.ini.enc:

    (UNIX)
    echo password | ./lcmcrypt.sh -nonInteractive -encrypt -inputfile complete_directory_path/system_user_plain.ini
    
    
  4. Run iniGen.sh in non-interactive mode. This also requires a decryption tool that takes an encrypted file and a passphrase and writes the decrypted contents to the standard output. The following example uses lcmcrypt:

    (UNIX)
    echo password | ./lcmcrypt.sh -nonInteractive -decrypt -inputfile complete_directory_path/system_user_plain.ini.enc | ./iniGen.sh -nonInteractive -templatefile PCU_LOCATION/fusionapps/applications/lcm/util/config/system_user_template.ini -outputfile PCU_LOCATION/fusionapps/applications/lcm/util/config/system_user_encrypted.ini -appbase APPLICATIONS_BASE -codebase PCU_LOCATION
    
    

    The call to lcmcrypt reads the passphrase from the standard input and writes the clear text version of system_user_plain.ini.enc to the standard output, which is then piped to the standard input of iniGen.sh.

    iniGen.sh uses the value of the master_password property to encrypt all other passwords in the generated input file. It also alters the value of the master_password property back to master_password=ignore_me in the generated input file.

6.1.4 Direct Upgrade JAZN

Before upgrading the Fusion Applications environment from Release 8 or Release 9 to Release 12, perform the following JAZN patching steps during upgrade downtime:
  1. Back up the Java Authorization (JAZN) files listed in the following table from the fusionapps Release 8 or Release 9 instance:

    Table 6-1 Jazn Patches

    Product | JAZN Path Relative to APPLICATIONS_BASE/fusionapps | Patch Number
    HCM | ./applications/hcm/security/policies/system-jazn-data.xml | 23100321
    CRM | ./applications/crm/security/policies/system-jazn-data.xml | 23345997
    FSCM | ./applications/fscm/security/policies/system-jazn-data.xml | 23100488
    FA-BI | ./applications/com/acr/security/jazn/bip_jazn-data.xml | 23307572
    Applcore | ./atgpf/atgpf/modules/oracle.applcp.centralui_11.1.1/exploded/EssUiApp.ear/META-INF/jazn-data.xml and ./atgpf/atgpf/applications/exploded/FndSetup.ear/META-INF/jazn-data.xml | 23307676 (ATG 11.1.1.7.2 for Release 8; ATG 11.1.1.7.3 for Release 9)
    FSM | ./atgpf/setupEss/jazn-data.xml and ./atgpf/setup/jazn-data.xml | 23322216

  2. Download the patches mentioned in Table 6-1 from My Oracle Support (MOS) as follows:
    1. Go to My Oracle Support.
    2. Click Sign In and log in using your My Oracle Support login name and password.
    3. Click the Patches and Updates tab.
    4. In the Patch Search section, select the Search tab, and click Number/Name or Bug Number (Simple).
    5. Select the Patch Name or Number field and enter the following patch numbers: 23100321, 23345997, 23100488, 23307572, 23307676, 23322216.
    6. Click Search.
      The Patch Search Results are displayed.
    7. Click the patch number and then click Download. Patches are released for both Release 8 and Release 9; check the version when downloading.
    8. Unzip all the zip files for the Release being upgraded from. There are six zip files for each release. The following example is for Release 8:
      1. Create a folder called Patch and download all the patches:

        Patch
        |-- p23100321_111800_Fusion_GENERIC.zip
        |-- p23100488_111800_Fusion_GENERIC.zip
        |-- p23307572_111800_Fusion_GENERIC.zip
        |-- p23307676_111172_Generic.zip
        |-- p23322216_111800_Fusion_GENERIC.zip
        `-- p23345997_111800_Fusion_GENERIC.zip
        
      2. Unzip all the zip files, for example by running unzip '*.zip' in the Patch directory:

        Patch/
        |-- 23100321
        |-- 23100488
        |-- 23307572
        |-- 23307676
        |-- 23322216
        |-- 23345997
        |-- p23100321_111800_Fusion_GENERIC.zip
        |-- p23100488_111800_Fusion_GENERIC.zip
        |-- p23307572_111800_Fusion_GENERIC.zip
        |-- p23307676_111172_Generic.zip
        |-- p23322216_111800_Fusion_GENERIC.zip
        `-- p23345997_111800_Fusion_GENERIC.zip
        
  3. Run the following LDAP queries to identify the version for the HCM, CRM, and FSCM stripes:
    • HCM: ldapsearch -X -h <hostname> -p 3060 -D <admin username> -w <password> -b "cn=hcm,cn=FusionDomain,cn=JPSContext,cn=FAPolicies" -s base "objectclass=*" orclversion

    • CRM: ldapsearch -X -h <hostname> -p 3060 -D <admin username> -w <password> -b "cn=crm,cn=FusionDomain,cn=JPSContext,cn=FAPolicies" -s base "objectclass=*" orclversion

    • FSCM: ldapsearch -X -h <hostname> -p 3060 -D <admin username> -w <password> -b "cn=fscm,cn=FusionDomain,cn=JPSContext,cn=FAPolicies" -s base "objectclass=*" orclversion

    Where:

    <hostname>: The policy store LDAP host

    <admin username>: The administrator user name, for example, cn=orcladmin

    <password>: The administrator password

    Each query returns output. The following example shows the HCM output:

    # LDAPv3
    # base<cn=hcm,cn=FusionDomain,cn=JPSContext,cn=FAPolicies> with scope baseObject
    # filter: objectclass=*
    # requesting: orclversion
    #
    
    # hcm, FusionDomain, JPSContext, FAPolicies
    dn: cn=hcm,cn=FusionDomain,cn=JPSContext,cn=FAPolicies
    orclversion: 11.1.8.0.0
    
  4. Replace the version field in the JAZN XML files with the version obtained from the LDAP query for these three patches: 23100321 (HCM), 23345997 (CRM), and 23100488 (FSCM), as follows:
    • HCM: vi ./23100321/23100321_MW/files/hcm/security/policies/system-jazn-data.xml

    • CRM: vi ./23345997/crm/security/policies/system-jazn-data.xml

    • FSCM: vi ./23100488/fscm/security/policies/system-jazn-data.xml

    The following example shows how to replace the version field in the JAZN patch for HCM:

    <policy-store>
     <applications>
      <application locale="en_US" version="fusionapps/hcm/deploy/system-jazn-data.xml:20160513010336_23256548.0">
      <name>hcm</name>
      <app-roles>
    

    Change it as follows:

    <policy-store>
     <applications>
      <application locale="en_US" version="11.1.8.0.0">
      <name>hcm</name>
      <app-roles>
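
    This edit can also be scripted. The following is a sketch using GNU sed's in-place mode; verify first that the target file contains only one version attribute, and substitute the version string returned by the LDAP query:

    sed -i 's|version="[^"]*"|version="11.1.8.0.0"|' ./23100321/23100321_MW/files/hcm/security/policies/system-jazn-data.xml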
    
  5. Copy the JAZN files from the patch folder to the fusionapps instance. The following is an example for Release 8 mentioned in Step 2:
    cp  ./23322216/setup/jazn-data.xml APPLICATIONS_BASE/fusionapps/atgpf/setup/jazn-data.xml
    cp  ./23322216/setupEss/jazn-data.xml APPLICATIONS_BASE/fusionapps/atgpf/setupEss/jazn-data.xml
    cp  ./23100321/23100321_MW/files/hcm/security/policies/system-jazn-data.xml APPLICATIONS_BASE/fusionapps/applications/hcm/security/policies/system-jazn-data.xml
    cp  ./23100488/fscm/security/policies/system-jazn-data.xml APPLICATIONS_BASE/fusionapps/applications/fscm/security/policies/system-jazn-data.xml
    cp  ./23307676/files/atgpf/modules/oracle.applcp.centralui_11.1.1/exploded/EssUiApp.ear/META-INF/jazn-data.xml APPLICATIONS_BASE/fusionapps/atgpf/modules/oracle.applcp.centralui_11.1.1/exploded/EssUiApp.ear/META-INF/jazn-data.xml
    cp  ./23307676/files/atgpf/applications/exploded/FndSetup.ear/META-INF/jazn-data.xml APPLICATIONS_BASE/fusionapps/atgpf/applications/exploded/FndSetup.ear/META-INF/jazn-data.xml
    cp  ./23345997/crm/security/policies/system-jazn-data.xml APPLICATIONS_BASE/fusionapps/applications/crm/security/policies/system-jazn-data.xml
    cp  ./23307572/com/acr/security/jazn/bip_jazn-data.xml APPLICATIONS_BASE/fusionapps/applications/com/acr/security/jazn/bip_jazn-data.xml
    

6.1.5 Run OPSS Dup Tool

Run the OPSS dup tool by following the steps listed in OPSS: How to Delete Duplicate Permission Entries in Fusion Apps Environment (Doc ID 2223825.1) available on My Oracle Support. To view this document, perform the following steps:

  1. Go to My Oracle Support.

  2. Click Sign In and log in using your My Oracle Support login name and password.

  3. Click the Knowledge tab.

  4. In the Enter search terms field, enter "Doc ID 2223825.1".

    The Knowledge Base Search Results are displayed.

  5. Click the document's hyperlink to view it.

6.2 Upgrade to Release 12

Perform the following steps to upgrade to Oracle Fusion Applications Release 12 (11.12.x.0.0):

6.2.1 Update the Database and Middle Tier Credential Stores

Before running RUP Installer, the following pre-upgrade steps must be performed:

6.2.1.1 Run Database Credential Store Retrofit Utility in Pods Where EM is Not Present

The Database Credential Store (DBCS) Wallet Retrofit Utility runs on the Fusion Applications (FA) middle tier. As part of the DBCS Wallet Retrofit process, you must extract the credentials for all common users and the TDE wallet password (if any) from the Credential Store Framework (CSF) on the FA middle tier to a temporary wallet file. Then, run CCU on one of the database (DB) hosts in a special mode that merges the contents of the temporary wallet into the DBCS wallet, creating the DBCS wallet if it does not already exist. Finally, copy the updated DBCS wallet file to the other DB host.

Run the DBCS Wallet Retrofit Utility on the FA middle tier by performing the following steps:
  1. Create the PCU_LOCATION/fusionapps/applications directory. PCU_LOCATION is a folder specified as a property in PRIMORDIAL.properties. This location must be within APPLICATIONS_CONFIG. For example:
    APPLICATIONS_CONFIG/lcm/tmp/pcu
    
  2. Unzip SHARED_LOCATION/11.12.x.0.0/Repository/installers/pre_install/pcubundle.zip into PCU_LOCATION/fusionapps/applications.
  3. Go to the PCU bin directory as follows:
    cd PCU_LOCATION/fusionapps/applications/lcm/util/bin
    
  4. Set the JAVA_HOME environment variable before running any commands in this section as follows:
    setenv JAVA_HOME java_home_location
    

    All commands in this section must be run from PCU_LOCATION/fusionapps/applications/lcm/util/bin.

  5. Create a wallet by looking up the common user credentials from CSF on the FA midtier. The following commands produce a password-protected wallet at the output wallet location:
    cd PCU_LOCATION/fusionapps/applications/lcm/util/bin
    echo <wallet-password> | ./csfLookup.sh -appbase APPLICATIONS_BASE -codebase PCU_LOCATION -common -schemalist ALL -outputwallet <output-wallet-location> -ccumerge -loglevel finest
    

    Back up the wallet in a location for future archival.
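
    For example, a simple dated copy (the backup path is hypothetical):

    cp -r <output-wallet-location> /u01/backup/wallet_`date +%Y%m%d`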

  6. Move the wallet to a location of your choice on DB host RAC node 1; this location becomes the input wallet location.
  7. Download patch 24948508 and unzip the patch in a stage directory on the DB host. After unzipping, pcubundle.zip is located at the following path inside the stage directory:
    24948508/files/sysman/metadata/swlib/pcu/11.12.0.0.0/upgradeemdp/components/pcubundle.zip
    
  8. Set up PCU in codebase mode on the DB host as follows:
    export ZIP_LOC="<stage-dir>/24948508/files/sysman/metadata/swlib/pcu/11.12.0.0.0/upgradeemdp/components/"
    ......
    export APP_BASE="<stage-dir>"
    export CODE_BASE="$APP_BASE/instance/tmp/pcu"
    mkdir -p $CODE_BASE/fusionapps/applications/lcm
    cd $CODE_BASE/fusionapps/applications
    echo "Zip location.. $ZIP_LOC/pcubundle.zip"
    cp $ZIP_LOC/pcubundle.zip $CODE_BASE/fusionapps/applications
    cd $CODE_BASE/fusionapps/applications/lcm
    rm -rf util
    rm -rf credstoreutil
    cd ..
    unzip pcubundle.zip
    cd lcm/util/bin/
    

    Replace the values accordingly.

  9. Ensure that /etc/oratab contains a valid entry for the respective Oracle unique name, in the proper format (no spaces around the colons). For example, if the Oracle unique name is fadb and the Oracle home is /u01/app/oracle/product/11.2.0, the entry should be:
    fadb:/u01/app/oracle/product/11.2.0:
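
    A quick check that the entry exists (using the example unique name fadb):

    grep '^fadb:' /etc/oratab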
    
  10. Run the following commands on the DB host:
    touch <codebase>/fusionapps/applications/lcm/util/config/ORACLE_UNQNAME.pcu
    echo "ORACLE_UNQNAME=<oracle unique name>"
    echo "<oracle unique name>" > <codebase>/fusionapps/applications/lcm/util/config/ORACLE_UNQNAME.pcu
    
    export ORACLE_UNQNAME=<oracle unique name>
    export PCU_BUNDLE_ZIP="<patch-unzip-location>/24948508/files/sysman/metadata/swlib/pcu/11.12.0.0.0/upgradeemdp/components/pcubundle.zip"
    
    export CDB_JDBC_CONNECT_STRING="jdbc:oracle:thin:@(DESCRIPTION=(LOAD_BALANCE=on)(ADDRESS=(PROTOCOL=TCP)(HOST=<ip-address>)(PORT=<port>))(ADDRESS=(PROTOCOL=TCP)(HOST=<ip-address>)(PORT=<port>))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=<oracle unique name>)))"
    
    cd <codebase>/fusionapps/applications/lcm/util/bin
    echo <wallet-password> | ./dbcsUpdate.sh -appbase APPLICATIONS_BASE -codebase PCU_LOCATION -inputwallet <input wallet location> -jdbcconnectstring <jdbc connect string> -loglevel finest
    

    Where:

    <oracle unique name>: The DB unique name of that particular DB.

    In the example value for CDB_JDBC_CONNECT_STRING shown above, replace the following values:
    • <ip-address> with the IP address of your SQL*Net listener process

    • <port> with the port number the SQL*Net listener is using

    • <oracle unique name> with the unique name for this database (it should match the db_unique_name initialization parameter)

    Note that the sample value for CDB_JDBC_CONNECT_STRING assumes a RAC database. If your database is not RAC, replace the text after "jdbc:oracle:thin:" with the appropriate value.
  11. The tool deletes the input wallet that was transferred. After the tool completes, verify that the DBCS wallet was created by running the following commands:
    $ORACLE_HOME/bin/mkstore -wrl $ORACLE_HOME/dbs/dbcs/$ORACLE_UNQNAME/wallet -list 
    $ORACLE_HOME/bin/mkstore -wrl $ORACLE_HOME/dbs/dbcs/$ORACLE_UNQNAME/wallet -viewEntry SYS
    

    The -list command should return at least three entries.

  12. Copy the DBCS wallet in RAC node 1 from $ORACLE_HOME/dbs/dbcs/<oracle unique name>/wallet to RAC node 2 at $ORACLE_HOME/dbs/dbcs/<oracle unique name>/wallet.

For more information about the DBCS Wallet Retrofit Utility, see Run Utilities in the Oracle Fusion Applications Administrator's Guide.

6.2.1.2 Run the CSF Cache Utility Manually in Pods Where EM is Not Present

To run the Credential Store Framework (CSF) cache utility manually, perform the following steps:

  1. On the DB host, create the master password payload wallet using either of the following methods:
    echo <master-password> | ./walletTool.sh -appbase APPLICATIONS_BASE -codebase PCU_LOCATION -write -schema MASTER_PASSWORD -outputwallet PCU_LOCATION/MP_WALLET
    

    or

    $ORACLE_HOME/bin/mkstore -wrl <codebase>/MP_WALLET/ -createALO
    $ORACLE_HOME/bin/mkstore -wrl <codebase>/MP_WALLET/ -createEntry MASTER_PASSWORD 
    

    Note that MASTER_PASSWORD is not the master password value; it is a literal key name that must not be changed. The tool prompts for the password; type the master password twice.

  2. On the DB host, generate a password-protected payload wallet from the DBCS wallet as follows:
    echo <master-password> | ./dbcslookup.sh -appbase APPLICATIONS_BASE -codebase PCU_LOCATION -outputwallet PCU_LOCATION/WALLET -schemalist SYS -auto -loglevel finest
    
  3. Move the wallet directories WALLET and MP_WALLET to the admin host, into a directory called wallets, referred to as <wallet-dir>. WALLET and MP_WALLET are siblings, both children of the wallets directory, as shown in the following example:
    wallets
    |-- WALLET
    |-- MP_WALLET
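
    A sketch of the copy from the DB host; the admin host name and target path are hypothetical:

    ssh fa_admin_host 'mkdir -p /u01/wallets'
    scp -r PCU_LOCATION/WALLET PCU_LOCATION/MP_WALLET fa_admin_host:/u01/wallets/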
    
  4. On the admin host, run the following command to update the CSF entry for SYS:
    ./csfUpdate.sh -appbase APPLICATIONS_BASE -codebase PCU_LOCATION  -schemalist SYS -inputwallet <wallet-dir> -loglevel finest
    
    Where:
    • <wallet-dir>: The wallet directory from Step 3.

6.2.2 Run Upgrade Orchestrator During Downtime

Review the following steps before starting Upgrade Orchestrator:

Start Upgrade Orchestrator during downtime by running the following commands on all host types, including the respective scaled out hosts. See Options for the Orchestration Command When Starting Orchestration. The value POD_NAME, for the -pod argument, refers to the directory created in Unzip Orchestration.zip. The Master Orchestration Password, which was created in Preliminary Steps, is required.

If the DISPLAY variable is set, confirm that it points to an accessible display; if it does not, unset it (unset DISPLAY in Bourne-type shells, unsetenv DISPLAY in C shells) before running orchestration.

To run Upgrade Orchestrator, perform the following steps:
  1. Run the following command to start orchestration on the Primordial host:
    (Unix)
    cd ORCH_LOCATION/bin
    ./orchestration.sh -pod POD_NAME -hosttype PRIMORDIAL [-DlogLevel=log_level]
    
    
  2. Run the following command to start orchestration on each Midtier host that is listed in the HOSTNAME_MIDTIER property in the pod.properties file:
    (Unix)
    cd ORCH_LOCATION/bin
    ./orchestration.sh -pod POD_NAME -hosttype MIDTIER [-DlogLevel=log_level]
    
  3. Run the following command to start orchestration on each OHS host that is listed in the HOSTNAME_OHS property in the pod.properties file:
    (Unix)
    cd ORCH_LOCATION/bin
    ./orchestration.sh -pod POD_NAME -hosttype OHS [-DlogLevel=log_level]
    
    
  4. Run the following command to start orchestration on each IDM host associated with the following properties in the pod.properties file:
    • HOSTNAME_IDMOID

    • HOSTNAME_IDMOIM

    • HOSTNAME_IDMOHS

    (Unix)
    cd ORCH_LOCATION/bin
    ./orchestration.sh -pod POD_NAME -hosttype IDM [-DlogLevel=log_level]
    
    

Upgrade Orchestrator runs the tasks listed in the following table:

Table 6-2 Tasks Run During the PreDowntime and DowntimePreFA Phases

Task Name | Phase Name | Task ID | Host Types | Notes
Verify current environment setup | PreDowntime | VerifySetupPlugin | Primordial | NA
Validate Mandatory Orchestration Properties | PreDowntime | PropertyValidationPlugin | All | NA
Validate Host Type | PreDowntime | HostTypeValidatePlugin | All | NA
Validate RUP Lite for OVM Properties | PreDowntime | RupLiteOvmValidatePlugin | All | NA
Register Database Schema Information | PreDowntime | RegisterDBSchemaInfo | Primordial | NA
Validate Oracle Identity Management Setup | PreDowntime | IDMPreValidate | IDM and Configuration | This task may fail. If it fails, ignore the error and proceed.
Download Email Template from OIM | PreDowntime | DownloadEmailTemplate | IDM | This task may fail. If it fails, ignore the error and proceed.
Run PreUpgrade Tasks | DowntimePreFA | PreUpgradeTasks | Primordial | NA
Export OWSM Repository | DowntimePreFA | ExportOWSMRepository | Primordial | NA
Back up files in Smart Clone Environment (Oracle VM only) | DowntimePreFA | BackupFilesForSmartClone | Primordial | NA
Disable Index Optimization | DowntimePreFA | DisableIndexOptimization | Primordial | NA
Back Up the OPSS Security Store | DowntimePreFA | Backup OPSS | Primordial | NA
Stop All Servers | DowntimePreFA | StopAllServers | Primordial, Midtier | NA
Set CrashRecoveryEnabled Property to False | DowntimePreFA | DisableCrashRecoveryEnabled | Primordial | NA
Stop OPMN Control Processes | DowntimePreFA | StopOPMNProcesses | Primordial, OHS, Midtier | NA
Stop Node Managers | DowntimePreFA | StopNodeManager | Primordial, Midtier | NA
Stop IIR Server on Midtier host | DowntimePreFA | StopIIRPlugin | Midtier | NA
Uninstall IIR Server (if IIR is configured on primordial or Midtier) | DowntimePreFA | UninstallIIRPlugin | Primordial | NA
Stopping Oracle Identity Management - AUTHOHS | DowntimePreFA | StopOHS | IDM | This task may fail. If it fails, ignore the error and proceed.
Stopping Oracle Identity Management - OIM | DowntimePreFA | StopOIM | IDM | This task may fail. If it fails, ignore the error and proceed.
Stopping Oracle Identity Management - OID | DowntimePreFA | StopOID | IDM | This task may fail. If it fails, ignore the error and proceed.

Upgrade Orchestrator can exit due to a failure, at a pause point, or upon successful completion. When Orchestrator exits on failure, review the log files and take the appropriate corrective action, then resume Orchestrator using the commands specified in this section.

For information about monitoring the progress of the upgrade, see Monitor Upgrade Orchestration Progress.

For information about troubleshooting, see Monitor and Troubleshoot the Upgrade.

If the orchestration commands result in any hanging tasks on any host, do not use Ctrl-C or Ctrl-Z to exit. Update the status of the hanging task by using the commands in Upgrade Orchestrator Hangs. After exiting and fixing the issue that caused the hang, restart Upgrade Orchestrator on the hosts that were forced to exit, using the commands specified in this section.

6.2.3 Pause Point 1 - Run RUP Lite for OVM in Pre-Root Mode (Oracle VM Only)

If Oracle Fusion Applications is not running on an Oracle VM environment, proceed to Pause Point 2 - Upgrade Oracle Identity Management to Release 12.

If Oracle Fusion Applications is running on an Oracle VM environment, orchestration pauses so that RUP Lite for OVM can be run in pre-root mode as the root user on the primordial, OHS, Midtier, and IDM hosts. Perform the steps in Run RUP Lite for OVM in Pre-Root Mode (Oracle VM Only).

6.2.4 Update Status to Success (Oracle VM Only)

After successful completion of running RUP Lite for OVM in pre-root mode, update the task status to success by performing the following steps:

  1. Update the task status on the primordial host as follows:
    cd ORCH_LOCATION/bin
    ./orchestration.sh updateStatus -pod POD_NAME -hosttype PRIMORDIAL -hostname host_name -release 11.12.x.0.0 -phase DowntimePreFA -taskid RupLiteOvmPreRootPausePointPlugin -taskstatus success
    
  2. Update the task status on the OHS host that is listed in the HOSTNAME_OHS property in the pod.properties file as follows:
    cd ORCH_LOCATION/bin
    ./orchestration.sh updateStatus -pod POD_NAME -hosttype OHS -hostname host_name  -release 11.12.x.0.0 -phase DowntimePreFA -taskid RupLiteOvmPreRootPausePointPlugin  -taskstatus success
    
  3. Update the task status on each Midtier host that is listed in the HOSTNAME_MIDTIER property in the pod.properties file as follows:
    cd ORCH_LOCATION/bin
    ./orchestration.sh updateStatus -pod POD_NAME -hosttype MIDTIER -hostname host_name -release 11.12.x.0.0 -phase DowntimePreFA -taskid RupLiteOvmPreRootPausePointPlugin -taskstatus success
    
  4. Update the task status on each IDM host that is listed in the following properties in the pod.properties file:
    • HOSTNAME_IDMOID

    • HOSTNAME_IDMOIM

    • HOSTNAME_IDMOHS

    cd ORCH_LOCATION/bin
    ./orchestration.sh updateStatus -pod POD_NAME -hosttype IDM -hostname host_name  -release 11.12.x.0.0 -phase DowntimePreFA -taskid RupLiteOvmPreRootPausePointPlugin -taskstatus success
    

6.2.5 Resume Upgrade Orchestrator (Oracle VM Only)

Resume orchestration on all host types, including the respective scaled out hosts, using the commands in Run Upgrade Orchestrator During Downtime, Steps 1 through 4.

6.2.6 Pause Point 2 - Upgrade Oracle Identity Management to Release 12

For the steps to upgrade Oracle Identity Management (IDM) that are appropriate for your environment, see Upgrade Oracle Identity Management to Release 12.

6.2.7 Pause Point 3 - Reload Orchestration

Orchestration pauses after the first RUP Installer completes. No manual step is required.

Recover From CAS Corruption Caused by Out of Memory Error During Attaching CAS Store (Solaris Only)

An out-of-memory error may occur while attaching the CAS store during the first RUP Installer. Check for these errors in APPLTOP/fusionapps/applications/cfgtoollogs/opatch/obrepoXXX.log.
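
One way to scan for the error across these logs:

grep -n "OutOfMemoryError" APPLTOP/fusionapps/applications/cfgtoollogs/opatch/obrepo*.log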

The following errors may be seen:
[Jan 21, 2017 12:03:50 PM] [INFO] [OPSR-TIME] Loading CAS libraries
[Jan 21, 2017 12:03:50 PM] [INFO] [OPSR-TIME] CAS library loaded
[Jan 21, 2017 12:03:50 PM] [INFO] [OPSR-TIME] CAS - attaching cas store
[Jan 21, 2017 1:39:07 PM] [INFO] attachMain error: Corrupt master view:
java.lang.OutOfMemoryError: Direct buffer memory
[Jan 21, 2017 1:39:07 PM] [INFO] Stack Description:
oracle.glcm.opatch.content.errors.FileWriteException: Corrupt master view:
java.lang.OutOfMemoryError: Direct buffer memory
The errors shown above indicate that the CAS master view may be corrupt. If these errors are seen, perform the following steps during this pause point:
  1. Remove the .cas directory from the APPLTOP/fusionapps/applications/ directory.

  2. Fix the memory setting in oraparam.ini under APPLTOP/fusionapps/applications/oui by updating the value of JRE_MEMORY_OPTIONS from -mx1024m to -mx3072m.
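
    For example, a sketch using GNU sed (back up the file first):

    cp APPLTOP/fusionapps/applications/oui/oraparam.ini APPLTOP/fusionapps/applications/oui/oraparam.ini.bak
    sed -i 's/-mx1024m/-mx3072m/' APPLTOP/fusionapps/applications/oui/oraparam.ini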

  3. Run the following obrepo attach command:
    OH/OPatch/obrepo attach -oh <OH location> -jdk <jdk location> -invPtrLoc <inventory pointer location for oraInst.loc>
    
    For example:
    APPLTOP/fusionapps/applications/OPatch/obrepo attach -oh APPLTOP/fusionapps/applications -jdk /u01/repository/jdk -invPtrLoc /u01/APPLTOP/fusionapps/applications/oraInst.loc
    
  4. Resume with the second RUP Installer.

6.2.8 Update Status to Success (Reload Orchestration)

Update the task status to success on all hosts by performing the following steps:

  1. Update the task status on the primordial host as follows:
    (Unix)
    cd ORCH_LOCATION/bin
    ./orchestration.sh updateStatus -pod POD_NAME -hosttype PRIMORDIAL -hostname host_name -release 11.12.x.0.0 -phase DowntimeDuringFA -taskid ReloadOrchPausePoint -taskstatus success
    
  2. Update the task status on each Midtier host that is listed in the HOSTNAME_MIDTIER property in the pod.properties file as follows:
    (Unix)
    cd ORCH_LOCATION/bin
    ./orchestration.sh updateStatus -pod POD_NAME -hosttype MIDTIER -hostname host_name -release 11.12.x.0.0 -phase DowntimeDuringFA -taskid ReloadOrchPausePoint -taskstatus success
    
  3. Update the task status on each OHS host that is listed in the HOSTNAME_OHS property in the pod.properties file as follows:
    (Unix)
    cd ORCH_LOCATION/bin
    ./orchestration.sh updateStatus -pod POD_NAME -hosttype OHS -hostname host_name -release 11.12.x.0.0 -phase DowntimeDuringFA -taskid ReloadOrchPausePoint -taskstatus success
    
  4. Update the task status on each IDM host that is listed in the following properties in the pod.properties file as shown in the following example:
    • HOSTNAME_IDMOID

    • HOSTNAME_IDMOIM

    • HOSTNAME_IDMOHS

    (Unix)
    cd ORCH_LOCATION/bin
    ./orchestration.sh updateStatus -pod POD_NAME -hosttype IDM -hostname host_name -release 11.12.x.0.0 -phase DowntimeDuringFA -taskid ReloadOrchPausePoint -taskstatus success
    

6.2.9 Resume Upgrade Orchestrator (Reload Orchestration)

Resume orchestration on all host types, including the respective scaled out hosts, by performing Steps 1 through 4 as listed in Run Upgrade Orchestrator During Downtime.

Table 6-3 Tasks Run During Various Downtime Phases

Task Name | Phase Name | Task ID | Host Types
Run RUP Lite for Domain Configuration | DowntimeDuringFA | RunRUPLiteForDomainsConfig | Primordial, Midtier
Start Node Managers | DowntimeDuringFA | StartNodeManager | Primordial, Midtier
Start OPMN Control Processes | DowntimeDuringFA | StartOPMNProcesses | Primordial, OHS, Midtier
Update Topology Information and Worker Details | DowntimeDuringFA | UpdateTopologyInfoPlugin | Primordial, Midtier
Run Oracle Fusion Applications RUP Installation Part 2 of 2 | DowntimeDuringFA | RunSecondRUPInstaller | Primordial
Start Remote Workers for Applying Database Patches in Distributed Mode | DowntimeDuringFA | StartRemoteWorkersPlugin | Primordial, Midtier
Clean up Worker Details Information for the Topology | DowntimeDuringFA | CleanupTopologyInfoPlugin | Primordial, Midtier
Run Vital Signs Checks | DowntimePostFA | VitalSignsChecks | Primordial
Prepare for Oracle Fusion Applications Web Tier Upgrade | DowntimePostFA | CopyWebtierUpgradeToCentralLoc | Primordial
Stop Oracle Fusion Applications - APPOHS | DowntimePostFA | StopOPMNProcesses | OHS
Remove Conflicting Patches for Oracle Fusion Applications Web Tier Oracle Homes | DowntimePostFA | RemoveConflictingPatches | OHS
Upgrade Oracle Fusion Applications OHS Binaries | DowntimePostFA | UpgradeOHSBinary | OHS
Upgrade Oracle Fusion Applications OHS Configuration | DowntimePostFA | UpgradeOHSConfig | OHS
Start OPMN Control Processes | DowntimePostFA | StartOPMNProcesses | OHS
Run RUP Lite for BI | DowntimePostFA | RunRUPLiteForBI | Midtier
Run RUP Lite for Domain Configuration in online mode | DowntimePostFA | RunRUPLiteForDomainsConfigOnline | Primordial, Midtier
Run RUP Lite for OVM in Online Mode as Application User | DowntimePostFA | RupLiteOvmOnline | Primordial, OHS, Midtier, IDM

6.2.10 Pause Point 4 - Run RUP Lite for OVM in Post-Root Mode (Oracle VM Only)

If Oracle Fusion Applications is not running on an Oracle VM environment, proceed to Pause Point 5 - Create the Incremental Provisioning Response File.

If Oracle Fusion Applications is running on an Oracle VM environment, orchestration pauses so that RUP Lite for OVM can be run in post-root mode as the root user on the primordial, OHS, Midtier, and IDM hosts. Perform the steps listed in Run RUP Lite for OVM in Post-Root Mode (Oracle VM Only).

6.2.11 Update Status to Success (Oracle VM Only)

After successful completion of running RUP Lite for OVM in post-root mode, update the task status to success by performing the following steps:
  1. Update the task status on the primordial host as follows:
    cd ORCH_LOCATION/bin
     ./orchestration.sh updateStatus -pod POD_NAME -hosttype PRIMORDIAL -hostname host_name -release 11.12.x.0.0 -phase DowntimePostFA -taskid RupLiteOvmPostRootPausePointPlugin -taskstatus success
    
  2. Update the task status on the OHS host that is listed in the HOSTNAME_OHS property in the pod.properties file as follows:
    cd ORCH_LOCATION/bin
    ./orchestration.sh updateStatus -pod POD_NAME -hosttype OHS -hostname host_name  -release 11.12.x.0.0 -phase DowntimePostFA -taskid RupLiteOvmPostRootPausePointPlugin  -taskstatus success
    
  3. Update the task status on each Midtier host that is listed in the HOSTNAME_MIDTIER property in the pod.properties file as follows:
    cd ORCH_LOCATION/bin
    ./orchestration.sh updateStatus -pod POD_NAME -hosttype MIDTIER -hostname host_name -release 11.12.x.0.0 -phase DowntimePostFA -taskid RupLiteOvmPostRootPausePointPlugin -taskstatus success
    
  4. Update the task status on each IDM host that is listed in the following properties in the pod.properties file:
    • HOSTNAME_IDMOID

    • HOSTNAME_IDMOIM

    • HOSTNAME_IDMOHS

    cd ORCH_LOCATION/bin
    ./orchestration.sh updateStatus -pod POD_NAME -hosttype IDM -hostname host_name  -release 11.12.x.0.0 -phase DowntimePostFA -taskid RupLiteOvmPostRootPausePointPlugin -taskstatus success
    

6.2.12 Resume Upgrade Orchestrator (Oracle VM Only)

Resume orchestration on the Midtier hosts using the command in Run Upgrade Orchestrator During Downtime, Step 2.

Upgrade Orchestrator runs the tasks in the following table:

Table 6-4 Tasks Run During the DowntimePostFA Phase

Task Name | Task ID | Host Types
Set CrashRecoveryEnabled Property to True | EnableCrashRecoveryEnabled | Primordial
Run Post Upgrade Health Checks | PostUpgradeChecks | Primordial, OHS, Midtier
Run Data Quality Checks | DataQualityChecks | Primordial

6.2.13 Pause Point 5 - Create the Incremental Provisioning Response File

Orchestration pauses if one of the conditions described in Prepare Incremental Provisioning is met, so that a response file for running incremental provisioning can be created. Perform the steps in Create an Extended Provisioning Response File in the Oracle Fusion Applications Installation Guide.

Then, proceed to Update Status to Success (Incremental Provisioning Response File).

6.2.14 Update Status to Success (Incremental Provisioning Response File)

After successfully creating the response file for manual incremental provisioning, update the task status to success on the primordial host as follows:

(Unix)
cd ORCH_LOCATION/bin
./orchestration.sh updateStatus -pod POD_NAME -hosttype PRIMORDIAL -hostname host_name -release 11.12.x.0.0 -phase DowntimePostFA -taskid CreateIpResponseFilePausePointTask -taskstatus success 

6.2.15 Resume Upgrade Orchestrator (Incremental Provisioning Response File)

Resume orchestration on the primordial host, using the commands in Run Upgrade Orchestrator During Downtime, Step 1.

6.2.16 Pause Point 6 - Perform Incremental Provisioning

If the PERFORM_INCREMENTAL_PROVISIONING property is set to true in the pod.properties file, orchestration pauses at this point, so incremental provisioning can be performed manually. Perform the steps listed in Perform Incremental Provisioning in the Oracle Fusion Applications Installation Guide.

Perform the following steps after Incremental Provisioning has completed and new provisioning offerings have been added. This is required only if Incremental Provisioning is run:
  • Edit <APPTOP>/instance/fapatch/FUSION_env.properties on the CommonDomain AdminServer host. Edit the values of the following properties to specify the host and port of the OID server where the OPSS policy store lives:
    • POLICY_STORE_LDAP_HOSTNAME=<fully qualified OID host name>

    • POLICY_STORE_LDAP_PORT=<OID port>

    • POLICY_STORE_CONNECT_PROTOCOL_SSL=<Yes/No>

      Set the value to Yes or No depending on whether the policy store communicates with Fusion Applications in secure mode.
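
    For illustration, the edited lines might look like the following; the host name is hypothetical, and 3060 matches the OID port used in the ldapsearch examples earlier in this section:

    POLICY_STORE_LDAP_HOSTNAME=oid.example.com
    POLICY_STORE_LDAP_PORT=3060
    POLICY_STORE_CONNECT_PROTOCOL_SSL=No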

Note the following:
  • This step is required only if Incremental Provisioning is run to add new provisioning offerings during the upgrade. It must be done after Incremental Provisioning is complete and before the 'postUpgradeCleanup' step runs as part of the resumed upgrade flow.

  • If the policy store OID host and port are not known, refer to the response file originally used to provision the environment. The values are found in the OAM_OPSS_HOST and OAM_OPSS_PORT properties of the response file, respectively.

Then, proceed to Update Status to Success (Incremental Provisioning).

If the PERFORM_INCREMENTAL_PROVISIONING property is set to false, this pause point does not occur and orchestration continues with the tasks listed in Table 6-5.

6.2.17 Update Status to Success (Incremental Provisioning)

After successfully performing manual incremental provisioning, update the task status to success on the primordial, OHS, and Midtier hosts:

  1. Update the task status on the primordial host as follows:
    (Unix)
    cd ORCH_LOCATION/bin
    ./orchestration.sh updateStatus -pod POD_NAME -hosttype PRIMORDIAL -hostname host_name -release 11.12.x.0.0 -phase DowntimePostFA -taskid RunIncrementalProvisioningManually -taskstatus success 
    
    
  2. Update the task status on each Midtier host that is listed in the HOSTNAME_MIDTIER property in the pod.properties file as follows:
    (Unix)
    cd ORCH_LOCATION/bin
    ./orchestration.sh updateStatus -pod POD_NAME -hosttype MIDTIER -hostname host_name -release 11.12.x.0.0 -phase DowntimePostFA -taskid RunIncrementalProvisioningManually -taskstatus success 
    
    
  3. Update the task status on each OHS host that is listed in the HOSTNAME_OHS property in the pod.properties file as follows:
    (Unix)
    cd ORCH_LOCATION/bin
    ./orchestration.sh updateStatus -pod POD_NAME -hosttype OHS -hostname host_name  -release 11.12.x.0.0 -phase DowntimePostFA -taskid RunIncrementalProvisioningManually -taskstatus success 
    
    

6.2.18 Resume Upgrade Orchestrator

Resume orchestration on all host types, including the respective scaled out hosts, using the commands in Run Upgrade Orchestrator During Downtime, Steps 1 through 3.

Upgrade Orchestrator runs the tasks shown in the following table:

Table 6-5 Tasks Run for the Language Pack Upgrade

Task Name | Task ID | Host Types
Run Post Incremental Provisioning Health Checks | PostIPChecks | Primordial, OHS, Midtier
Run Post Upgrade General System Health Checks | GeneralSystemChecks | Primordial, OHS, Midtier
Update Topology Information and Worker Details | UpdateTopologyInfoPlugin | Primordial, Midtier
Run Configuration Actions for All Installed Languages | LanguagePackConfig | Primordial, Midtier
Run Post Language Pack Health Checks | PostLangPackChecks | Primordial
Perform Post Upgrade Configuration | PostUpgradeConfiguration | Primordial
Run Post Upgrade Cleanup Tasks | PostUpgradeCleanup | Primordial

6.2.19 Upgrade Orchestrator Completes Successfully

Upgrade Orchestrator generates the Oracle Fusion Applications Orchestration Report upon successful completion of the upgrade, which is reviewed as a post-upgrade task. To continue with the upgrade after all tasks complete successfully, proceed to Run Post-Upgrade Tasks.

6.2.20 Clean Up the Middle Tier Credential Store

After running RUP Installer, the following post-upgrade step must be performed to clean up the middle tier credential store:

6.2.20.1 Run the CSF Cleanup Utility Manually

The Credential Store Framework (CSF) Cleanup Utility runs on the Fusion Applications (FA) middle tier and removes all common users from CSF. To run the CSF Cleanup Utility manually, perform the following steps:

  1. Go to the following directory on the FA admin host:
    $CODE_BASE/fusionapps/applications/lcm/util/bin
    
  2. Run the following command:
    ./csfClean.sh -appbase <appbase> -codebase <codebase>
    

    Where:

    • -appbase: The APPLICATIONS_BASE directory, which is the root directory under which all of the middle tier Fusion Applications (FA) and Fusion Middleware (FMW) code is installed.

    • -codebase: The base directory under which the utility code is installed or staged. By default, it is the same as -appbase. However, when running any utility on the database (DB) host, -codebase must be specified and -appbase must not be, since there is no APPLICATIONS_BASE directory on the DB host.

    For more information about the CSF Cleanup Utility, see Run Utilities in the Oracle Fusion Applications Administrator's Guide.

6.3 Pause Point Steps

This section describes the detailed steps required only by the following default pause points:

6.3.1 Upgrade the Oracle Identity Management Domain to Release 12 (11.12.x.0.0)

Before performing an upgrade to Release 12 (11.12.x.0.0), check the Oracle Fusion Applications Technical Known Issues - Release 12 (Doc ID 2224140.1) for the latest information on required patches.

Perform the steps in the following subsections to manually upgrade the Oracle Identity Management domain to Release 12 (11.12.x.0.0).

For more information about the Oracle Identity Management domain, see Overview of Upgrade Patches and About Identity Management Domain, Nodes and Oracle homes.

6.3.1.1 Overview of Upgrade Patches

Oracle Identity Management for Oracle Fusion Applications 11g, Release 12 (11.12.x.0.0) includes patches for the following products that are installed in the Oracle Identity Management domain:

  • Oracle IDM Tools

  • Oracle Access Manager

  • Oracle WebGate

  • Oracle Internet Directory

The Oracle Fusion Applications Release 12 Identity Management software and patches for the appropriate platform are available in the Oracle Fusion Applications repository under SHARED_LOCATION/11.12.x.0.0/Repository/installers. Review the individual patch Readme files before applying them.

6.3.1.2 About Identity Management Domain, Nodes and Oracle homes

This section describes the nodes and Oracle homes in the Identity Management domain for Oracle Fusion Applications 11g Release 12 (11.12.x.0.0).

  • Identity Management (IDM) Node

    • WEBLOGIC_ORACLE_HOME (for IDM provisioned environments, this is IDM_BASE/products/dir/wlserver_10.3):

      • Oracle WebLogic Server

    • IDM_ORACLE_HOME: This is also known as the OID_ORACLE_HOME. (For IDM provisioned environments, this is IDM_BASE/products/dir/oid). The following Oracle Identity Management products are installed in this Oracle home:

      • Oracle Internet Directory

      • Oracle Virtual Directory

      • Oracle Directory Services Manager

    • IDM_ORACLE_COMMON_HOME: (For IDM provisioned environments, this is IDM_BASE/products/dir/oracle_common). The following Oracle Identity Management products are installed in this Oracle home:

      • Oracle Platform Security Services (OPSS)

      • Oracle Web Services Manager (OWSM)

  • Database Node

    • RDBMS_ORACLE_HOME: This is the ORACLE_HOME of the Oracle Database. Apply mandatory database patches to this Oracle home.

6.3.1.3 Perform Preinstallation and Upgrade Tasks

6.3.1.3.1 Verify Prerequisites

Ensure that the environment meets the following requirements before installing or uninstalling the patch:

  • Verify the OUI Inventory

    OPatch needs access to a valid OUI inventory to apply patches. Validate the OUI inventory with the following command:

    opatch lsinventory
    

    If the command errors out, contact Oracle Support for assistance in validating and verifying the inventory setup before proceeding.

  • Confirm that the executables appear in the system PATH.

    The patching process uses the unzip and OPatch executables. After setting the ORACLE_HOME environment variable, confirm that the following executables are found before proceeding to the next step:

    • which opatch

    • which unzip

For more information about OPatch, see the Patching Oracle Fusion Middleware with Oracle OPatch section in the Oracle Fusion Middleware Patching Guide.

6.3.1.3.2 Stop the Servers and Processes

To stop the servers and processes, perform the following steps:

  • In the Oracle Identity Management domain, stop all Oracle Identity Management services and processes in the following sequence. Do not stop the database.

    Stop the following servers and processes:

    • Oracle HTTP Server

    • Oracle Identity Manager managed servers

    • Oracle SOA managed servers

    • Oracle Identity Federation managed servers

    • Oracle Access Manager managed servers

    • Oracle Directory Services Manager

    • Oracle WebLogic Administration Server for the Oracle Identity Management domain

    • Oracle Virtual Directory

    • Oracle Internet Directory

For more information about specific commands for stopping components, see Stop and Start Identity Management Related Servers.

6.3.1.3.3 Create Backups

At a minimum, create the following backups:

  • Middleware home directory (including the Oracle home directories inside the Middleware home)

  • Local domain home directory

  • Local Oracle instances

  • Domain home and Oracle instances on any remote systems that use the Middleware home

  • The database

    Ensure the backup includes the schema version registry table, as each Fusion Middleware schema has a row in this table. The name of the schema version registry table is SYSTEM.SCHEMA_VERSION_REGISTRY$.

  • The configurations and stores, specifically all data under the root node of the LDAP store

  • Any Oracle Identity Federation Java Server Pages (JSPs) that were customized

    The patching process overwrites JSPs included in the oif.ear file. After completing the patching process, restore the custom JSPs.

In addition to the preceding backups, Oracle recommends performing your organization's typical backup processes.

Refer to the Backing Up Your Middleware Home, Domain Home and Oracle Instances, Backing Up Your Database and Database Schemas, and Backing Up Additional Configuration Information sections in the Oracle Fusion Middleware Patching Guide for detailed information about creating the backups.

6.3.1.3.4 Patch the Database Clients

The Database Client patches are available under the SHARED_LOCATION/11.12.x.0.0/Repository/installers/dbclient/patch directory. Follow each patch Readme and apply all of the patches in the directory as follows:

  1. Set the Oracle home to RDBMS_ORACLE_HOME, for example, ORACLE_HOME=/u01/oid/oid_home.

  2. Go to the patch directory as follows:

    cd SHARED_LOCATION/11.12.x.0.0/Repository/installers/dbclient/patch
    
  3. Run opatch using the napply option.
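
    For example, a sketch (confirm the exact options against each patch Readme and your OPatch version; the Oracle home path repeats the example from Step 1):

    export ORACLE_HOME=/u01/oid/oid_home
    cd SHARED_LOCATION/11.12.x.0.0/Repository/installers/dbclient/patch
    $ORACLE_HOME/OPatch/opatch napply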

6.3.1.3.5 Patch the Database (RDBMS_ORACLE_HOME)

Ensure the patches listed in Update the Oracle Fusion Applications and Oracle Identity Management Databases are applied on the Identity Management database to keep both Oracle Fusion Applications and Identity Management databases synchronized. To apply the patches, follow the steps listed in Update the Oracle Fusion Applications and Oracle Identity Management Databases.

6.3.2 Run RUP Lite for OVM in Pre-Root Mode (Oracle VM Only)

Run RUP Lite for OVM in pre-root mode locally on every node on the Oracle VM, for example, primordial, Midtier, IDM, and OHS. Use the -i option to point to the Release 12 rupliteovm/metadata directory that was set up as part of the pre-upgrade preparation in Prepare RUP Lite for OVM. Run this command as super user (root) as follows:

setenv JAVA_HOME java_home_directory
cd /u01/lcm/rupliteovm
bin/ruplite.sh pre-root -i ORCH_LOCATION/config/POD_NAME/11.12.x.0.0/rupliteovm/metadata

Then, proceed to Update Status to Success (Oracle VM Only).

6.3.3 Run RUP Lite for OVM in Post-Root Mode (Oracle VM Only)

Run RUP Lite for OVM in post-root mode locally on every node on the Oracle VM, for example, primordial, Midtier, IDM, and OHS. Use the -i option to point to the Release 12 rupliteovm/metadata directory that was set up as part of the pre-upgrade preparation in Prepare RUP Lite for OVM. Run this command as super user (root) as follows:

setenv JAVA_HOME java_home_directory
cd /u01/lcm/rupliteovm
bin/ruplite.sh post-root -i ORCH_LOCATION/config/POD_NAME/11.12.x.0.0/rupliteovm/metadata