6 Performing an Out-of-Place Cloned Upgrade of Oracle Identity Manager

This chapter explains how to perform an out-of-place cloned upgrade of Oracle Identity Manager 12c (12.2.1.3.0) to Oracle Identity Manager 12c (12.2.1.4.0).

This chapter includes the following topics:

Pre-Upgrade Assessments

The pre-upgrade check includes reviewing your current OIM 12c (12.2.1.3.0) environment before starting the cloned upgrade to OIM 12c (12.2.1.4.0).

For more information, see the following topics:

Checking the Supported Versions

You can upgrade Oracle Identity Manager 12c (12.2.1.3.0) to 12c (12.2.1.4.0). Before you begin, make sure that OIM is fully patched with the latest bundle patch and any required one-off patches.

If you are running an older version of OIM, you must first upgrade it to 12c (12.2.1.3.0), and then to 12c (12.2.1.4.0).
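
To review the patches currently applied to the existing 12c (12.2.1.3.0) binaries, you can list the OPatch inventory (ORACLE_HOME is your existing 12.2.1.3.0 Oracle home):

$ ORACLE_HOME/OPatch/opatch lsinventory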

Checking the Potential Integrations with OAM and/or OAAM

Oracle 12c requires that OIM resides in a separate, isolated domain. The schema sets for Access and Governance are distinct and cannot share the same database prefix; hence, the products cannot share schemas. If your current deployment has OIM co-existing with other Oracle Identity and Access Management products, such as Oracle Access Manager (OAM) and/or Oracle Adaptive Access Manager (OAAM), you must first separate the domains.

For details on how to separate OIM and OAM, see Separating Oracle Identity Management Applications Into Multiple Domains.

Source Environment Validation for Use of Host Names

The cloning solution provided in this chapter relies on the use of host names, not IP addresses, in all configuration properties. Validate the various domain and application configuration parameters in the source environment to ensure that no IP addresses are directly configured. If IP addresses are found to be in use, Oracle recommends that you update the source environment before beginning the cloning process.

This section includes the following topics:

Auditing the WebLogic Server Domain Configuration

Verify that the domain is not configured with IP addresses for the various listener, nodemanager, and datasource host/SCAN/ONS parameters, and so on. Because customer configurations vary in scope and there are too many parameters to enumerate individually, only a basic audit process is provided here. A simple search of the domain configuration files for each known hostname, domain name, IP address list, or network range provides a quick report.

The source environment might have host records such as:

# On-Prem Host Entries
10.99.5.42   srchost27.example.com srcHost27   webhost1
10.99.5.43   srchost28.example.com srcHost28   webhost2
10.99.5.44   srchost20.example.com srcHost20  ldaphost1
10.99.5.45   srchost21.example.com srcHost21  ldaphost2
10.99.5.46   srchost23.example.com srcHost23   oamhost1
10.99.5.47   srchost24.example.com srcHost24   oamhost2
10.99.5.48   srchost25.example.com srcHost25   oimhost1
10.99.5.49   srchost26.example.com srcHost26   oimhost2
# Compute VNIC Secondary IP for AdminServer floating VIPs
10.99.5.61 srcVIPiad.example.com srcVIPiad
10.99.5.62 srcVIPigd.example.com srcVIPigd
# Database Systems with on-prem override aliases
10.99.5.20 src-DB-SCAN.example.com src-DB-SCAN
# Load Balancer IP
10.99.5.6  prov.example.com  login.example.com  idstore.example.com  iadadmin.example.com  igdadmin.example.com  iadinternal.example.com  igdinternal.example.com

Values to check for can be written to a file for easy command-line use. Include the corporate network range, partial domain names, and partial strings from any corporate host naming convention that might be relevant, and then execute a search of all XML configuration files from the DOMAIN_HOME/config folder.

cat << EOF > /tmp/domainHostNameSearchList.txt
10.99.
.example.com
srcHost
webhost
ldaphost
oamhost
oimhost
EOF

cd DOMAIN_HOME/config
find . -name "*.xml" -exec grep -H -f /tmp/domainHostNameSearchList.txt {} \;

This will result in a list of configuration file paths/names, and the line in which the text is found. The resulting list should include machine and listen-address entries, JDBC URLs, ONS Node list entries (if using Gridlink JDBC Drivers), and so on.

./config.xml:    <machine>OIMHOST1</machine>
./config.xml:    <listen-address>OIMHOST1</listen-address>
./config.xml:      <arguments>-Dtangosol.coherence.wka1=OIMHOST1 -Dtangosol.coherence.wka2=OIMHOST2 -Dtangosol.coherence.localhost=OIMHOST1 -Dtangosol.coherence.wka1.port=8089 -Dtangosol.coherence.wka2.port=8089 -Dtangosol.coherence.localport=8089</arguments>
./config.xml:    <machine>OIMHOST1</machine>
./config.xml:    <listen-address>10.99.5.48</listen-address>
./config.xml:    <machine>OIMHOST1</machine>
./config.xml:    <listen-address>OIMHOST1</listen-address>
./config.xml:    <name>OIMHOST2</name>
./config.xml:      <name>OIMHOST2</name>
./config.xml:      <listen-address>srcHost26</listen-address>
./jdbc/mds-soa-jdbc.xml:    <url>jdbc:oracle:thin:@(DESCRIPTION=(ENABLE=BROKEN)(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=src-DB-SCAN.example.com)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=igdupgdb.example)))</url>
./jdbc/mds-soa-jdbc.xml:    <ons-node-list>src-DB-SCAN.example.com:6200</ons-node-list>

Verify that all entries use hostnames, either short or fully qualified. These are the values that must be present in the hosts files on the target environment.

Note:

Any configurations specifying IP addresses should be corrected in the source system prior to cloning.

Auditing the Application Configuration Data Stored in the Metadata Service (MDS)

Oracle Identity Governance stores configuration details in a Fusion Middleware Metadata Store (MDS) database schema. These configuration details include endpoint URI and JDBC connection strings that you should review and validate prior to cloning the environment. The hosts referenced in these URI and connection strings must be configured as hostnames or fully-qualified domain names (FQDN) rather than IP addresses. If IP addresses are used, they cannot be overridden in the target environment and you would have to change them during the cloning process.

Oracle recommends that you correct the source environment and replace any hard-coded IP addresses with appropriate host names prior to the cloning maintenance.

To audit the stored metadata configuration for OIM via WLST:

  1. Log in to an OIM host in the source environment as the OS user with privileges to the ORACLE_HOME directory.
  2. Create a temporary working directory.
    mkdir -p /tmp/mds/oim/
  3. Connect to the AdminServer via WLST.
    $ ORACLE_HOME/common/bin/wlst.sh
    wls:/offline> connect()
    Please enter your username :weblogic
    Please enter your password :
    Please enter your server URL [t3://localhost:7001] :t3://ADMINHOST:7001
    Connecting to t3://ADMINHOST:7001 with userid weblogic ...
    Successfully connected to Admin Server 'AdminServer' that belongs to domain 'IAMGovernanceDomain'.
    wls:/IAMGovernanceDomain/serverConfig>
  4. Export the OIM configuration XML data from the FMW Metadata Store and exit from WLST.
    • Application=OIMMetadata
    • server=WLS_OIM1 (your server name may vary)
    • toLocation=/tmp/mds/oim
    • docs=/db/oim-config.xml

    For example:

    wls:/IAMGovernanceDomain/serverConfig> exportMetadata(application='OIMMetadata', server='WLS_OIM1', toLocation='/tmp/mds/oim', docs='/db/oim-config.xml')
    
    Executing operation: exportMetadata.
    
    Operation "exportMetadata" completed. Summary of "exportMetadata" operation is:
    1 documents successfully transferred. 
    List of documents successfully transferred:
    
    /db/oim-config.xml
    
    wls:/IAMGovernanceDomain/serverConfig> exit()
  5. Create a file of search terms to filter the relevant data from the OIM configuration. The exported XML file contains many configuration elements, so a short list of terms keeps the output manageable.

    For example:

    $ cat << EOF > /tmp/mds/oim/grepHostValidationTerms.txt
    <directDBConfigParams
    bIPublisherURL
    oimFrontEndURL
    oimExternalFrontEndURL
    oimJNDIURL
    backOfficeURL
    accessServerHost
    tapEndpointUrl
    soapurl
    rmiurl
    host
    serviceURL
    EOF
  6. Search the OIM configuration data using the search terms.

    For example:

    $ grep -f /tmp/mds/oim/grepHostValidationTerms.txt /tmp/mds/oim/db/oim-config.xml
    
    <directDBConfigParams checkoutTimeout="1200" connectionFactoryClassName="oracle.jdbc.pool.OracleDataSource" connectionPoolName="OIM_JDBC_UCP" driver="oracle.jdbc.OracleDriver" idleTimeout="360" maxCheckout="1000" maxConnections="5" minConnections="2" passwordKey="OIMSchemaPassword" sslEnabled="false" url="jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=src-DB-SCAN.example.com )(PORT=1521)) (CONNECT_DATA= (SERVICE_NAME=igdupgdb.example)))" username="IGDUPG_OIM" validateConnectionOnBorrow="true">
    <bIPublisherURL>http://OIMHOST2:9704,OIMHOST1:9704</bIPublisherURL>
    <oimFrontEndURL>http://igdinternal.example.com</oimFrontEndURL>
    <oimExternalFrontEndURL>https://prov.example.com:443</oimExternalFrontEndURL>
    <oimJNDIURL>@oimJNDIURL</oimJNDIURL>
    <backOfficeURL/>
    <accessServerHost>srcHost23</accessServerHost>
    <tapEndpointUrl>https://login.example.com:443/oam/server/dap/cred_submit</tapEndpointUrl>
    <soapurl>http://OIMHOST2:8001</soapurl>
    <rmiurl>cluster:t3://cluster_soa</rmiurl>
    <host>@oaacghost</host>
    <serviceURL>@oaacgserviceurl</serviceURL>
  7. Review the search results and verify that all the configuration properties use appropriate hostnames or fully-qualified domain names.

    Note:

    • Some properties may have placeholder values (for example: @oaacghost or @oaacgserviceurl). These are acceptable.
    • The <rmiurl> URI specified is typically a WLS t3 protocol URI addressed to a WLS server name or cluster name, and does not use a hostname. This is also acceptable.

Purging Unused Data

Purging unused data and maintaining a purging methodology before an upgrade can optimize the upgrade process.

Some components have automated purge scripts. If you are using purge scripts, wait until the purge is complete before starting the upgrade process. The upgrade may fail if the purge scripts are running while using the Upgrade Assistant to upgrade your schemas.

Having excessive stale data in the database might cause problems when performing the upgrade schema updates. To optimize the upgrade process, it is recommended that you purge any stale or unnecessary data prior to the upgrade.

For instance, the data purge scripts included with OIM, as described in Using the Archival and Purge Utilities for Controlling Data Growth, allow your site to choose which data is archived to a different location and which data is purged, and provide options to manage these operations.

Note:

In large systems with a lot of data, archiving/purging may take a long time. Oracle strongly recommends that you do not run the archival/purge scripts in parallel in an attempt to improve performance.

Performing an Out-of-Place Cloned Upgrade

An out-of-place upgrade from Oracle Identity Manager 12c (12.2.1.3.0) to 12c (12.2.1.4.0) includes preparing the host files, cloning the database, binaries, and configuration, and then upgrading the target system.

Preparing the Host Files

In a cloned environment, the referenced host names in the target environment are the same as the host names in your source system. If you have followed the recommendations in the Enterprise Deployment Guide and used virtual host names for all configurations, this is simply a matter of aliasing these entries to the real target host names. For example:
10.0.2.17   oimhost1.idm.tenant.oraclevcn.com   oimhost1
If you are using physical host names in your source WebLogic configuration, you must alias these names to the real target host names. For example:
10.0.2.17   oimhost1.idm.tenant.oraclevcn.com   oimhost1   srchost25.example.com srcHost25
In addition, if the source environment has additional floating VIPs and FQDN for the AdminServer's Machine listen address and Node Manager host declaration, then the target Secondary IP addresses should be configured on the VNICs for the appropriate target compute instances and added to the hosts file. These secondary IP address entries should also include the source environment FQDNs and hostnames to override DNS when connecting to the AdminServer.
10.0.2.21 igdadminvhn.idm.tenant.oraclevcn.com igdadminvhn srcVIPigd.example.com srcVIPigd
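
How the secondary IP address is plumbed depends on your platform. The following is only a sketch for an OCI compute instance, assuming the secondary private IP has already been assigned to the VNIC in the OCI Console; the interface name, netmask, and label are assumptions:

# Bring up the AdminServer floating IP on the target compute instance
sudo ip addr add 10.0.2.21/24 dev ens3 label ens3:1
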
An example /etc/hosts file:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

# Compute with on-prem override aliases
10.0.2.11   webhost1.idm.tenant.oraclevcn.com   webhost1   srchost27.example.com srcHost27
10.0.2.12   webhost2.idm.tenant.oraclevcn.com   webhost2   srchost28.example.com srcHost28
10.0.2.13  ldaphost1.idm.tenant.oraclevcn.com  ldaphost1   srchost20.example.com srcHost20
10.0.2.14  ldaphost2.idm.tenant.oraclevcn.com  ldaphost2   srchost21.example.com srcHost21
10.0.2.15   oamhost1.idm.tenant.oraclevcn.com   oamhost1   srchost23.example.com srcHost23
10.0.2.16   oamhost2.idm.tenant.oraclevcn.com   oamhost2   srchost24.example.com srcHost24
10.0.2.17   oimhost1.idm.tenant.oraclevcn.com   oimhost1   srchost25.example.com srcHost25
10.0.2.18   oimhost2.idm.tenant.oraclevcn.com   oimhost2   srchost26.example.com srcHost26

# Compute VNIC Secondary IP for AdminServer floating VIPs
10.0.2.20 iadadminvhn.idm.tenant.oraclevcn.com iadadminvhn srcVIPiad.example.com srcVIPiad
10.0.2.21 igdadminvhn.idm.tenant.oraclevcn.com igdadminvhn srcVIPigd.example.com srcVIPigd

# Database Systems with on-prem override aliases
10.0.2.19  iamdbhost.idm.tenancy.oraclevcn.com  iamdbhost   src-DB-SCAN.example.com src-DB-SCAN

# Load Balancer IP
10.0.1.10  prov.example.com  login.example.com  idstore.example.com  iadadmin.example.com  igdadmin.example.com  iadinternal.example.com  igdinternal.example.com

Note:

Ensure that the entries for each of the target compute instances and DB Host/SCAN addresses are present in the hosts file on every host in the topology.
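
One simple check is to search the /etc/hosts file on each target host for the source environment aliases. The loop below is only a sketch; the host names are the examples used in this chapter, and it assumes SSH access between the hosts:

for h in webhost1 webhost2 ldaphost1 ldaphost2 oamhost1 oamhost2 oimhost1 oimhost2; do
  echo "== ${h} =="
  ssh "${h}" "grep -E 'example.com|src-DB-SCAN|srcVIP' /etc/hosts"
done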

Cloning the Database

You can take a copy of your existing environment and then upgrade that copy. If you encounter issues during the upgrade, you will have the existing environment as a fallback.

For more information, see Performing an Upgrade via a Cloned Environment.

Methods for Cloning Databases

There are different methods of cloning a database and each method has its own merits.

Note:

Oracle Identity and Access Management 12c does not support Oracle Access Manager and Oracle Identity Manager configured to use the same database schema prefix. Before you upgrade, if both products co-exist and share the same database schemas, you must first split the database into two different prefixes and schema sets.

You can use the following options to clone the database:

Option 1 – Database Export Import

  • Suitable for smaller sized databases.

  • Allows movement between versions. For example, 12.1.0.3 to 19c.

  • Allows movement into Container Databases/Pluggable Databases.

  • Is a complete copy; redoing the exercise requires data to be deleted from the target each time.

  • No ongoing synchronization.

  • During cut-over the source system will need to be frozen for updates.

Option 2 – Duplicate Database Using RMAN

  • Suitable for databases of any size.

  • Takes a backup of an entire database.

  • The database version and patch level should be the same on both the source and destination.

  • Database upgrades will need to be performed as a separate task.

  • CDB/PDB migration will have to be done as a separate exercise.

  • No ongoing synchronization.

  • During cut-over, you should freeze the source system for updates.

Option 3 – Data Guard Database

  • Suitable for databases of any size.

  • Takes a backup of an entire database.

  • Database upgrades will need to be performed as a separate task.

  • CDB/PDB migration will have to be done as a separate exercise.

  • Ongoing synchronization; the database can be opened to test the upgrade and closed again to keep data synchronized with the source system.

Note:

You should choose the solution based on your requirements.

Cloning the Database Using the Export/Import Method

On your 12c (12.2.1.3) environment, export the data from your database to an export file.

On the source environment:

  1. Create and set the directory details for the export process on the source DB hosts.
    1. Make a directory on the source DB hosts in a location with sufficient space.
      mkdir -p /u01/installers/database
    2. On the source database, create a database directory object pointing to this location:
      SQL> CREATE DIRECTORY orcl_full AS '/u01/installers/database';
  2. Shut down the WebLogic Server Managed Servers or clusters for OIM, SOA, and BIP.

    Note:

    If executing in parallel with the domain backup, coordinate the shutdown of the entire domain, including the AdminServer and NodeManagers.
  3. Stop the SOA DBMS queues in the source database.
    1. Connect as the SOAINFRA schema user and query for the user queues.
      $ sqlplus <PREFIX>_SOAINFRA@<sourceDB>
      SQL> COLUMN name FORMAT A32
      SQL> SELECT name,enqueue_enabled,dequeue_enabled  
      FROM USER_QUEUES where queue_type = 'NORMAL_QUEUE' order by name;
      NAME                             ENQUEUE DEQUEUE
      -------------------------------- ------- -------
      B2B_BAM_QUEUE                     YES        YES
      EDN_EVENT_QUEUE                   YES        YES
      EDN_OAOO_QUEUE                    YES        YES
      IP_IN_QUEUE                       YES        YES
      IP_OUT_QUEUE                      YES        YES
      TASK_NOTIFICATION_Q               YES        YES
      
      6 rows selected.
    2. Stop each queue.
      SQL> BEGIN
      
      DBMS_AQADM.STOP_QUEUE ('B2B_BAM_QUEUE');
      
      DBMS_AQADM.STOP_QUEUE ('EDN_OAOO_QUEUE');
      
      DBMS_AQADM.STOP_QUEUE ('EDN_EVENT_QUEUE');
      
      DBMS_AQADM.STOP_QUEUE ('IP_IN_QUEUE');
      
      DBMS_AQADM.STOP_QUEUE ('IP_OUT_QUEUE');
      
      DBMS_AQADM.STOP_QUEUE ('TASK_NOTIFICATION_Q');
      
      END;
      
      /
      exit
  4. As the OIM schema user, query for and stop any running DBMS_SCHEDULER jobs in the source database.
    $ sqlplus <PREFIX>_OIM@<sourceDB>
    
    SQL> SELECT job_name,session_id,running_instance,elapsed_time 
    FROM user_scheduler_running_jobs ORDER BY job_name;
    
    no rows selected

    Note:

    If any jobs are running, either wait until the job is complete or stop the job gracefully using:

    SQL> BEGIN
    
    DBMS_SCHEDULER.stop_job('REBUILD_OPTIMIZE_CAT_TAGS');
    
    END;
    
    /
    SQL> exit
  5. Grant the EXEMPT ACCESS POLICY privilege to SYSTEM to avoid errors during the Data Pump export jobs.
    $ sqlplus SYS as SYSDBA
    SQL> GRANT EXEMPT ACCESS POLICY TO SYSTEM;
    SQL> exit
  6. Export the system and application schemas from the source database, setting the directory property appropriately.
    1. Export the system.schema_version_registry table and view:
      $ expdp \"sys/<password>@<sourcedb> as sysdba \" \
           DIRECTORY=orcl_full \
           DUMPFILE=oim_system.dmp \
           LOGFILE=oim_system_exp.log \
           SCHEMAS=SYSTEM \
           INCLUDE=VIEW:"IN('SCHEMA_VERSION_REGISTRY')",TABLE:"IN('SCHEMA_VERSION_REGISTRY$')" \
           JOB_NAME=MigrationExportSys
    2. Export all of the schemas used by the datasources in the source WebLogicServer domain.
      $ expdp \"sys/<password>@<sourcedb> as sysdba \" \
           DIRECTORY=orcl_full \
           DUMPFILE=oim.dmp \
           LOGFILE=oim_exp.log \
           SCHEMAS=<PREFIX>_OIM,<PREFIX>_SOAINFRA,<PREFIX>_BIPLATFORM,<PREFIX>_MDS,<PREFIX>_ORASDPM,<PREFIX>_OPSS,IGDJMS,IGDTLOGS \
           JOB_NAME=MigrationExport \
           EXCLUDE=STATISTICS
  7. Extract the source database DDL for the tablespaces, schema users, and grants.

    This step allows the efficient creation of the correct tablespaces on the target database and retains the schema user passwords. Therefore, domain reconfiguration is not necessary. System and Object grants for objects outside the exported schemas are also accounted for to reduce the risk of invalid objects and recompilation difficulties.

    An example script is provided to create the complete SQL DDL output all at once. The example will need to be modified if not using a CDB/PDB.

    1. In SQL*Plus, execute the example SQL script to extract the DDL to a ddl.sql file in the same directory as the exported dump files. Enter the RCU prefix of the source environment and the name of the target PDB when prompted. Output is written to both the screen and the ddl.sql file.
      $ cd /u01/installers/database
      $ sqlplus SYS as SYSDBA
      SQL> @extract_ddl.sql
      Enter RCU Prefix: <PREFIX>
      Enter PDB: targetPDB

      Example SQL Script:

      Note:

      The PDB-specific lines (the 'Enter PDB' prompt and the generated alter session set container statement) are applicable only if your target database is a PDB. This SQL assumes that all the objects are created using the RCU prefix. If you have created objects without the prefix (for example, tablespaces/users for JMS or TLogs), add these manually.
      $ cat << EOF > extract_ddl.sql
      set pages 0
      set feedback off
      set heading off
      set long 5000
      set longchunksize 5000
      set lines 200
      set verify off
      exec dbms_metadata.set_transform_param (dbms_metadata.session_transform, 'SQLTERMINATOR', true);
      exec dbms_metadata.set_transform_param (dbms_metadata.session_transform, 'PRETTY', true);
      accept PREFIX char prompt 'Enter RCU Prefix:'
      accept PDBNAME char prompt 'Enter PDB:'
      
      spool ddl.sql
      
      select 'alter session set container=&&PDBNAME;'
      from dual
      /
      SELECT DBMS_METADATA.GET_DDL('TABLESPACE',Tablespace_name)
      from  dba_tablespaces
      where tablespace_name like '&&PREFIX%'
      /
      set lines 600
      SELECT DBMS_METADATA.GET_DDL('USER',USERNAME)
      from DBA_USERS
      where USERNAME like '&&PREFIX%'
      /
      set lines 200
      SELECT DBMS_METADATA.GET_GRANTED_DDL ('SYSTEM_GRANT',USERNAME)
      from DBA_USERS
      where USERNAME like '&&PREFIX%'
      and USERNAME NOT LIKE '%_IAU_APPEND'
      and USERNAME NOT LIKE '%_IAU_VIEWER'
      /
      
      SELECT DBMS_METADATA.GET_GRANTED_DDL ('OBJECT_GRANT',USERNAME)
      from DBA_USERS
      where USERNAME like '&&PREFIX%'
      and USERNAME NOT LIKE '%TLOGS'
      and USERNAME NOT LIKE '%JMS'
      /
      
      spool off
      EOF
    2. Delete any object grants for the system QT*_BUFFER views from the output ddl.sql file. The buffer views will not exist in the target database and would cause errors.
      $ sed -i.bak -e '/QT.*_BUFFER/d' /u01/installers/database/ddl.sql
  8. Re-start the SOA DBMS queues. Connect as the SOAINFRA schema user and restart each queue that was stopped earlier.
    $ sqlplus <PREFIX>_SOAINFRA@sourceDB
    SQL> BEGIN
    
    DBMS_AQADM.START_QUEUE ('B2B_BAM_QUEUE');
    
    DBMS_AQADM.START_QUEUE ('EDN_OAOO_QUEUE');
    
    DBMS_AQADM.START_QUEUE ('EDN_EVENT_QUEUE');
    
    DBMS_AQADM.START_QUEUE ('IP_IN_QUEUE');
    
    DBMS_AQADM.START_QUEUE ('IP_OUT_QUEUE');
    
    DBMS_AQADM.START_QUEUE ('TASK_NOTIFICATION_Q');
    
    END;
    
    /
    SQL> COLUMN name FORMAT A32
    SQL> SELECT name,enqueue_enabled,dequeue_enabled  
    FROM USER_QUEUES where queue_type = 'NORMAL_QUEUE' order by name;
    
    NAME                             ENQUEUE DEQUEUE
    -------------------------------- ------- -------
    B2B_BAM_QUEUE                     YES        YES
    EDN_EVENT_QUEUE                   YES        YES
    EDN_OAOO_QUEUE                    YES        YES
    IP_IN_QUEUE                       YES        YES
    IP_OUT_QUEUE                      YES        YES
    TASK_NOTIFICATION_Q               YES        YES
    
    6 rows selected.
    SQL> exit
  9. Re-start the WebLogic Server Managed Servers or clusters for OIM, SOA, and BIP.
  10. Replicate the DDL SQL and the datapump dump files to the target database host.
    • oim.dmp
    • oim_system.dmp
    • ddl.sql
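
    For example, the files can be copied with scp; the oracle OS user and the target host name are assumptions based on the example topology in this chapter, and the /u01/installers/database directory must already exist on the target host (it is created in step 4 of the target environment procedure):

    $ scp /u01/installers/database/oim.dmp \
          /u01/installers/database/oim_system.dmp \
          /u01/installers/database/ddl.sql \
          oracle@iamdbhost.idm.tenancy.oraclevcn.com:/u01/installers/database/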

On the target environment:

  1. Install and configure the target database in accordance with Fusion Middleware requirements. Install the version of the Oracle database that you want to use in the target environment. This database can be a single-instance database, a Real Application Clusters (RAC) database, a standard database, or a container database with OIG in a separate pluggable database (PDB).
  2. Validate that the target database is configured to meet all the criteria of Oracle Identity Manager, as defined in Installing and Configuring the Oracle Identity Governance Software in Installing and Configuring Oracle Identity and Access Management.
  3. Create the TNS entry for the Pluggable Database in the target system, if necessary. For example:
    IGDPDB =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCP)
                     (HOST = iamdbhost.idm.tenancy.oraclevcn.com)
                     (PORT = 1521)
          )
          (CONNECT_DATA =
              (SERVER = DEDICATED)
              (SERVICE_NAME = igdpdb.idm.tenancy.oraclevcn.com)
          )
         )
  4. Create and set the directory details for the import process on the target DB hosts.
    1. Make a directory on the target DB hosts in a location with sufficient space.
      $ mkdir -p /u01/installers/database
    2. Create a database directory object pointing to this location on the destination database.
      SQL> CREATE DIRECTORY orcl_full AS '/u01/installers/database';
  5. Create a database restore point in case there is a need to roll back the transaction.
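
    For example, a guaranteed restore point can be created in the target PDB; the restore point name is illustrative, and a guaranteed restore point assumes that a fast recovery area is configured:

    $ sqlplus / as sysdba
    SQL> ALTER SESSION SET CONTAINER = igdpdb;
    SQL> CREATE RESTORE POINT before_oim_import GUARANTEE FLASHBACK DATABASE;
    SQL> exit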
  6. Create and start a database service for the new database with the same service name as the source environment.

    For example:

    $ srvctl add service -db iamcdb -pdb igdpdb -service onpremservice -rlbgoal SERVICE_TIME -clbgoal SHORT 
    $ srvctl start service -db iamcdb -service onpremservice 
    $ srvctl status service -db iamcdb -service onpremservice
  7. Confirm that the exported datapump dump files and SQL files are available on the target database host in the correct directory, and the DBA directory name and path in the database match.
    $ ls -al /u01/installers/database
    $ sqlplus / as sysdba
    SQL> ALTER SESSION SET CONTAINER = igdpdb;
    SQL> CREATE DIRECTORY orcl_full AS '/u01/installers/database';

    To verify:

    $ sqlplus / as sysdba
    SQL> ALTER SESSION SET CONTAINER = igdpdb;
    
    SQL> COLUMN directory_name FORMAT A32
    SQL> COLUMN directory_path FORMAT A64
    SQL> set linesize  128
    SQL> SELECT directory_name,directory_path FROM dba_directories ORDER BY directory_name;
  8. Confirm that the required DBMS_SHARED_POOL and XATRANS database objects exist and create them if they do not. Check for a count of '2' for each of the following SQLs on the target database where the OIM schema export dump is to be restored.
    SQL> SELECT COUNT(*) FROM dba_objects 
    WHERE owner = 'SYS' AND object_name = 'DBMS_SHARED_POOL' 
    AND object_type IN ('PACKAGE','PACKAGE BODY');
    
      COUNT(*)
    ----------
             2
    
    SQL> SELECT COUNT(*) FROM dba_objects 
    WHERE owner = 'SYS' AND object_name like '%XATRANS%';
    
      COUNT(*)
    ----------
             0
    1. If DBMS_SHARED_POOL count is < 2, run the appropriate SQL to re-configure:
      SQL> @/u01/app/oracle/product/19.0.0.0/dbhome_1/rdbms/admin/dbmspool.sql
      SQL> @/u01/app/oracle/product/19.0.0.0/dbhome_1/rdbms/admin/prvtpool.plb
    2. If XATRANS count is < 2, run the appropriate SQL to reconfigure:
      SQL> @/u01/app/oracle/product/19.0.0.0/dbhome_1/rdbms/admin/xaview.sql
  9. Import the source database system dump from the correct folder to create the schema_version_registry table and view, then create the required public synonym manually via SQL.
    $ cd /u01/installers/database
    $ impdp \"SYS/<password>@<targetdb> AS SYSDBA\" \
        PARALLEL=4 \
        DIRECTORY=orcl_full \
        DUMPFILE=oim_system.dmp \
        LOGFILE=oim_system_imp.log \
        FULL=YES;
    
    $ sqlplus / as sysdba
    
    SQL> alter session set container=igdpdb;
    SQL> CREATE PUBLIC SYNONYM schema_version_registry FOR system.schema_version_registry;
    SQL> exit
  10. Verify that the schema_version_registry table data matches your source environment. It is important to check that the following query returns rows that are consistent with your deployment. This table should have been imported as part of the above steps. If it fails to do so, you must populate the table with values from your source system.
    $ sqlplus / as sysdba
    SQL> alter session set container=igdpdb;
    SQL> set linesize 100
    SQL> col comp_id for a10
    SQL> col comp_name for a50
    SQL> col version for a10
    SQL> select comp_id, comp_name, version, status, upgraded 
    from system.schema_version_registry;
    
    Output will look something like: 
    
    COMP_ID    COMP_NAME                                          VERSION    STATUS      U
    ---------- -------------------------------------------------- ---------- ----------- -
    BIPLATFORM OracleBI and EPM                                   11.1.1.9.0 VALID       N
    MDS        Metadata Services                                  11.1.1.9.0 VALID       N
    OIM        Oracle Identity Manager                            11.1.2.3.0 VALID       N
    OPSS       Oracle Platform Security Services                  11.1.1.9.0 VALID       N
    ORASDPM    SDP Messaging                                      11.1.1.9.0 VALID       N
    SOAINFRA   SOA Infrastructure Services                        11.1.1.9.0 VALID       N
  11. Execute the DDL SQL from the source database to create the required tablespaces, schema users with the same passwords, system grants, and object grants. If using a PDB, ensure that you set the container correctly.
    $ sqlplus / as sysdba
    SQL> alter session set container=igdpdb;
    SQL> @'/u01/installers/database/ddl.sql'
    SQL> exit
  12. Import the application schemas.

    Note:

    There will be ORA-31684 errors because the users were pre-created. Ignore the following types of errors:

    • Procedure/Package/Function/Trigger compilation warnings
    • DBMS_AQ errors
    • ORA-31684: Object type USER:"" already exists

    For example:

    $ cd /u01/installers/database
    $ impdp \"SYS/<password>@<targetdb> AS SYSDBA\" \
        PARALLEL=4 \
        DIRECTORY=orcl_full \
        DUMPFILE=oim.dmp \
        LOGFILE=oim_imp.log \
        FULL=YES;
  13. Query for any invalid objects for the imported schemas and execute a recompile for each schema with invalid objects.

    For example:

    $ sqlplus / as sysdba
    SQL> alter session set container=igdpdb;
    SQL> COLUMN owner       FORMAT A24
    SQL> COLUMN object_type FORMAT A12
    SQL> COLUMN object_name FORMAT A32
    SQL> SET LINESIZE 128
    SQL> SET PAGESIZE 50
    
    SQL> SELECT owner,object_type,object_name, status
    FROM   dba_objects
    WHERE  status = 'INVALID'
    AND owner like '<PREFIX>%'
    ORDER BY owner, object_type, object_name;
    
    OWNER                    OBJECT_TYPE  OBJECT_NAME                      STATUS
    ------------------------ ------------ -------------------------------- -------
    IGDUPG_OIM               SYNONYM      ALTERNATE_ADF_LOOKUPS            INVALID
    IGDUPG_OIM               SYNONYM      ALTERNATE_ADF_LOOKUP_TYPES       INVALID
    IGDUPG_OIM               SYNONYM      FND_LOOKUPS                      INVALID
    IGDUPG_OIM               SYNONYM      FND_STANDARD_LOOKUP_TYPES        INVALID
    
    SQL> EXECUTE UTL_RECOMP.RECOMP_SERIAL('IGDUPG_OIM');
    
    SQL> SELECT owner,object_type,object_name, status
    FROM   dba_objects
    WHERE  status = 'INVALID'
    AND owner like '<PREFIX>%'
    ORDER BY owner, object_type, object_name;
    
    no rows selected
  14. Start the SOA DBMS queues.
    1. Connect as the SOAINFRA schema user and query for the user queues.
      $ sqlplus <PREFIX>_SOAINFRA@<targetdb>
      SQL> COLUMN name FORMAT A32
      SQL> SELECT name,enqueue_enabled,dequeue_enabled  FROM USER_QUEUES where queue_type = 'NORMAL_QUEUE' order by name;
      
      NAME                             ENQUEUE DEQUEUE
      -------------------------------- ------- -------
      B2B_BAM_QUEUE                     YES        YES
      EDN_EVENT_QUEUE                   YES        YES
      EDN_OAOO_QUEUE                    YES        YES
      IP_IN_QUEUE                       YES        YES
      IP_OUT_QUEUE                      YES        YES
      TASK_NOTIFICATION_Q               YES        YES
      
      6 rows selected.
    2. Start each queue.
      SQL> BEGIN
      
      DBMS_AQADM.START_QUEUE ('B2B_BAM_QUEUE');
      
      DBMS_AQADM.START_QUEUE ('EDN_OAOO_QUEUE');
      
      DBMS_AQADM.START_QUEUE ('EDN_EVENT_QUEUE');
      
      DBMS_AQADM.START_QUEUE ('IP_IN_QUEUE');
      
      DBMS_AQADM.START_QUEUE ('IP_OUT_QUEUE');
      
      DBMS_AQADM.START_QUEUE ('TASK_NOTIFICATION_Q');
      
      END;
      
      /
      exit
Cloning the Database Using RMAN

Clone the database from the source environment to the target environment by using RMAN. See Transferring Data with RMAN.
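
The exact RMAN procedure depends on your backup strategy and topology. The following is a minimal sketch of an active database duplication, assuming Oracle Net connectivity between the source and target hosts and that the auxiliary (target) instance has been started in NOMOUNT mode; the connect strings are placeholders:

$ rman TARGET sys@<sourcedb> AUXILIARY sys@<targetdb>
RMAN> DUPLICATE TARGET DATABASE TO <targetdb> FROM ACTIVE DATABASE NOFILENAMECHECK;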

Cloning the Database Using Data Guard

You can manually create a physical standby database using Data Guard. See Creating a Physical Standby Database in Oracle Data Guard Concepts and Administration.

Cloning the Oracle Binaries

Use your preferred backup/restore tools to archive and transfer the MW_HOME binaries and OraInventory directories.

This section includes the following topic:

Using Backup/Restore Tools to Clone the Oracle Identity Manager Binaries

Note:

For this exercise, you can use any backup and restore tool that you are familiar with. The example below uses the tar tool, but any command that can back up and restore directories and sub-directories can be used. You can take a backup with the domain and NodeManagers online or offline. However, Oracle recommends that you execute the backup with all FMW processes shut down.

Take a backup:

Complete the following steps to take a backup of your source environment binaries and Oracle Inventory:

  1. Using your preferred backup tool, take a backup of the following directories in the source environment:

    • oraInventory

    • MW_HOME

    For example, a command on OAMHOST1 may appear as follows:

    tar cfzP /u01/oracle/backups/oamhost1_binaries.tar.gz /u01/oracle/oraInventory MW_HOME
  2. Repeat the command on any supplementary nodes using the separate product binary volumes.

    Note:

    When using the shared filesystem volumes for the Oracle products MW_HOME locations, you should take only the binary backups from one host per volume.

    For example, a command on OAMHOST2 may appear as follows:

    tar cfzP /u01/oracle/backups/oamhost2_binaries.tar.gz /u01/oracle/oraInventory MW_HOME
  3. Copy the resulting backup files to their appropriate target environment hosts.

Restore the backup

Using your preferred extraction tool, extract the backup to your target environment nodes.

Note:

When using the shared filesystem volumes for the Oracle products MW_HOME locations, you should restore only the binary backups to one host per volume.

For example:

On OAMHOST1, run the following command:

tar xvfzP oamhost1_binaries.tar.gz

On OAMHOST2, run the following command:

tar xvfzP oamhost2_binaries.tar.gz

Cloning the Configuration

Use your preferred backup/restore tools to clone the configuration.

This section includes the following topics:

Using Backup/Restore Tools to Clone the Oracle Identity Manager Domain

Note:

For this exercise, you can use any backup and restore tool that you are familiar with. The example below uses the tar tool, but any command that can back up and restore directories and sub-directories can be used. You can take a backup with the domain and NodeManagers online or offline. However, Oracle recommends that you execute the backup with all FMW processes shut down.

Take a backup:

Perform the following steps to take a backup of the source environment domain configuration and Oracle Inventory:

  1. Using your preferred backup tool, take a backup of the following locations from OIMHOST1 on the source site:

    • oraInventory

    • Nodemanager

    • Application Server domain home (ASERVER_HOME)

    • Managed Server domain home if you have a separate location as described in the Enterprise Deployment Guide (MSERVER_HOME)

    • Keystores

    • Runtime directories

    Note:

    If you have a combined DOMAIN_HOME rather than a segregated one, as described in the Enterprise Deployment Guide, include DOMAIN_HOME rather than ASERVER_HOME and MSERVER_HOME.

    For example, a command on OIMHOST1 may appear as follows:

    tar cvzPpsf oimhost1.tar.gz \
       /u01/oracle/oraInventory \
       /u01/oracle/config/nodemanager/OIMHOST1 \
       /u01/oracle/config/nodemanager/OIMHOST2 \
       /u01/oracle/config/nodemanager/IGDADMINVHN \
       /u01/oracle/config/keystores \
       /u01/oracle/runtime/domains/IAMGovernanceDomain \
       /u01/oracle/config/domains/IAMGovernanceDomain \
       /u02/private/oracle/config/domains/IAMGovernanceDomain
  2. Repeat the command on any supplementary nodes. For example, a command on OIMHOST2 may appear as follows:

    tar cvzPpsf oimhost2.tar.gz /u02/private/oracle/config/domains/IAMGovernanceDomain
  3. Copy the resulting backup files to their appropriate target environment hosts.

  4. Delete any lock and log files in the domain that have been replicated from the source environment.

    • Remove any lock files for all NodeManager folders on the appropriate cloned environment hosts by running the following command:

      find /u01/oracle/config/nodemanager -type f -name "*.lck" -exec rm -f {} \;

    • Remove any lock files from the ASERVER_HOME and MSERVER_HOME folders on the appropriate cloned environment hosts by running the following command:

      Note:

      If you have a combined DOMAIN_HOME rather than a segregated one as described in the Enterprise Deployment Guide, include DOMAIN_HOME rather than ASERVER_HOME and MSERVER_HOME.

      For example, on OIMHOST1, run the following command:

      # Lock Files Cleanup:
      
      find /u01/oracle/config/nodemanager -type f -name "*.lck" -exec rm -f {} \;
      
      find  /u01/oracle/config/domains/IAMGovernanceDomain \
          -type f  \( -name "*.lck" -or -name "*.lok" \) -print -exec rm -f {} \;
      
      find  /u02/private/oracle/config/domains/IAMGovernanceDomain \
          -type f  \( -name "*.lck" -or -name "*.lok" \) -print -exec rm -f {} \;
      
      # Log File Cleanup:
      
      find /u01/oracle/config/nodemanager/OIMHOST1 \
          -type f \( -name '*.log' -or -name '*.out' \) -print -exec rm -f {} \;
      
      find /u01/oracle/config/nodemanager/OIMHOST2 \
          -type f \( -name '*.log' -or -name '*.out' \) -print -exec rm -f {} \;
      
      find /u01/oracle/config/nodemanager/IGDADMINVHN \
          -type f \( -name '*.log' -or -name '*.out' \) -print -exec rm -f {} \;
      
      find ${ASERVER_HOME}/servers/AdminServer/logs \
          -type f ! -size 0c -print -exec rm -f {} \+
      
      find ${MSERVER_HOME}/servers/*/logs \
          -type f ! -size 0c -print -exec rm -f {} \+

      For example, on OIMHOST2, run the following command:

      # Lock Files Cleanup:
      
      find  /u02/private/oracle/config/domains/IAMGovernanceDomain \
          -type f  \( -name "*.lck" -or -name "*.lok" \) -print -exec rm -f {} \;
      
      # Log File Cleanup:
      
      find ${MSERVER_HOME}/servers/*/logs \
          -type f ! -size 0c -print -exec rm -f {} \+
    • Optionally, remove the old log files from the NodeManager and Managed Server folders in the cloned domain:

      For example, on OIMHOST1, run the following command:

      find /u01/oracle/config/nodemanager/OIMHOST1 \
          -type f \( -name '*.log' -or -name '*.out' \) -print -exec rm -f {} \;
      find /u01/oracle/config/nodemanager/OIMHOST2 \
          -type f \( -name '*.log' -or -name '*.out' \) -print -exec rm -f {} \;
      
      find /u01/oracle/config/nodemanager/IGDADMINVHN \
          -type f \( -name '*.log' -or -name '*.out' \) -print -exec rm -f {} \;
       
      find ${ASERVER_HOME}/servers/AdminServer/logs \
          -type f ! -size 0c -print -exec rm -f {} \+
       
      find ${MSERVER_HOME}/servers/*/logs \
          -type f ! -size 0c -print -exec rm -f {} \+
      

      For example, on OIMHOST2, run the following command:

      find ${MSERVER_HOME}/servers/*/logs \
          -type f ! -size 0c -print -exec rm -f {} \+

Restore the backup in the cloned environment

Using your preferred extraction tool, extract the backup to your target environment nodes.

Note:

If using tar, be sure to preserve permissions and root paths.

For example:

On OIMHOST1, run the following command:

tar xvzPpsf oimhost1.tar.gz

On OIMHOST2, run the following command:

tar xvzPpsf oimhost2.tar.gz

Starting the OIM Domain

After successfully restoring the backup to the target environment instances, start the domain in the following order (a command sketch follows this list):

  • Start the Node Manager for the ASERVER_HOME.
  • Start the Node Manager for the MSERVER_HOME on all nodes.

    Note:

    If you have a single DOMAIN_HOME, start the Node Manager associated with that DOMAIN_HOME.
  • Start the Administration Server and check logs.
  • Start the SOA Managed Server/Cluster and check logs.
  • Start the Business Intelligence Platform Managed Server/Cluster and check logs.
  • Start the OIM Managed Server/Cluster and check logs.
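
The exact start commands depend on how Node Manager and the domain were provisioned in the source environment (the Enterprise Deployment Guide, for example, uses per-host Node Manager homes). The following is only a sketch using the standard per-domain WebLogic scripts; the paths are the example EDG locations used earlier in this chapter, and you should substitute your own start scripts if your deployment provides them:

# On OIMHOST1: start Node Manager for the AdminServer domain home, then the AdminServer
nohup /u01/oracle/config/domains/IAMGovernanceDomain/bin/startNodeManager.sh > /tmp/nm_aserver.out 2>&1 &
nohup /u01/oracle/config/domains/IAMGovernanceDomain/bin/startWebLogic.sh > /tmp/adminserver.out 2>&1 &

# On each OIM host: start Node Manager for the Managed Server domain home
nohup /u02/private/oracle/config/domains/IAMGovernanceDomain/bin/startNodeManager.sh > /tmp/nm_mserver.out 2>&1 &

# Start the SOA, BIP, and OIM Managed Servers or clusters from the WebLogic
# Administration Console, checking the logs after each start.
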
Executing the OIM LDAP Consolidated Full Reconciliation Job

After cloning the domain, a full reconciliation job needs to be executed. See Jobs in Administrator's Guide for Oracle Identity Manager.

To execute the reconciliation job:

Note:

You have to perform the reconciliation job only if the 12.2.1.3 setup is using LDAP Connectors. This step is not required if the setup is using LDAPSync because LDAPSync will be disabled after the upgrade is complete.
  1. Log in to https://igdadmin.example.com/sysadmin and authenticate as xelsysadm.
  2. In the left-pane, under System Configuration, click Scheduler. A popup window will appear.
  3. In the Identity System Administration popup window, search for the scheduled job: LDAP Consolidated Full Reconciliation.
  4. Click the LDAP Consolidated Full Reconciliation entry in the search results to view the job details.
  5. Click Run Now to execute the job and verify the confirmation message: Job is running.
  6. Periodically click the Refresh button and verify the job status.
  7. When the Job Status shows Stopped, verify that the Execution Status is Success. Check the logs and troubleshoot as needed.
  8. Click the Event Management tab and execute an empty search for all recent reconciliation events.
  9. Spot-check the events to ensure that the current status is either Creation Succeeded or Update Succeeded.

Upgrading the Cloned Environment In-Place to 12c (12.2.1.4.0)

After cloning the 12c (12.2.1.3.0) domain to the target system, you can upgrade the target system to 12c (12.2.1.4.0). For instructions, see:

Increasing the Maximum Message Size for WebLogic Server Session Replication

As part of the post-upgrade tasks, Oracle recommends that you increase the Maximum Message Size from the default value of 10 MB to 100 MB. This value is used when replicating session data across the nodes.

You should perform this step for all the Managed Servers and the Administration Server; a WLST alternative is sketched after the console steps below.

  1. Log in to the WebLogic Server Administration Console.
  2. Navigate to Servers, select the server that you want to modify, and then click Protocols and General.
  3. Set the value of Maximum Message Size to 100 MB.
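
Alternatively, the same change can be scripted. The following WLST sketch sets the attribute on one Managed Server; the admin URL, credentials, and server name are examples, and 104857600 is 100 MB expressed in bytes:

$ ORACLE_HOME/common/bin/wlst.sh
wls:/offline> connect('weblogic','<password>','t3://IGDADMINVHN:7101')
wls:/IAMGovernanceDomain/serverConfig> edit()
wls:/IAMGovernanceDomain/edit> startEdit()
wls:/IAMGovernanceDomain/edit !> cd('/Servers/WLS_OIM1')
wls:/IAMGovernanceDomain/edit !> cmo.setMaxMessageSize(104857600)
wls:/IAMGovernanceDomain/edit !> save()
wls:/IAMGovernanceDomain/edit !> activate()
wls:/IAMGovernanceDomain/edit> exit()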

Increasing the maxdepth Value in setDomainEnv.sh

The recommended value for the maxdepth parameter is 250. To update this value:
  1. Open the $DOMAIN_HOME/bin/setDomainEnv.sh file in a text editor.
  2. Locate the following code block:
    ALT_TYPES_DIR="${OIM_ORACLE_HOME}/server/loginmodule/wls,${OAM_ORACLE_HOME}/agent/modules/oracle.oam.wlsagent_11.1.1,${ALT_TYPES_DIR}"
    export ALT_TYPES_DIR
    CLASS_CACHE="true"
    export CLASS_CACHE
  3. Add the following lines at the end of the above code block:
    JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.oif.serialFilter=maxdepth=250"
    export JAVA_OPTIONS
  4. Save and close the setDomainEnv.sh file.