22 Using the Archival and Purge Utilities for Controlling Data Growth

The data archival and purge solution consists of a real-time archival and purge option and a command-line option of the archival purge utilities.

This chapter contains the following sections:

  • About Archival and Purge Utilities

  • Archival and Purge Concepts

  • Using Real-Time Purge and Archival Option in Oracle Identity Governance

  • Using Command-Line Option of the Archival Purge Utilities in Oracle Identity Governance

Note:

  • Oracle recommends that you use the real-time purge and archival option rather than the command-line utilities.

  • The archival and purge utilities (scheduled task-based and command-line) only purge data from the underlying Oracle Identity Manager tables, and do not reclaim space. For information about reclaiming space, see the document titled How To Reclaim Space For Overgrown/Huge Footprint Of LOB Columns In OIM Database (Doc ID 2017034.1) on the My Oracle Support website at the following URL:

    https://support.oracle.com

22.1 About Archival and Purge Utilities

Oracle Identity Manager provides archival and purge solutions for its entities and their dependent data.

The application capabilities in Oracle Identity Manager generate a large volume of data. Maintaining the data generated for the life cycle management of Oracle Identity Manager entities while meeting performance and scalability standards is a challenge. Oracle Identity Manager meets this challenge by providing online and continuous as well as offline data purge and archival solutions.

Table 22-1 lists the archival and purge solutions provided by Oracle Identity Manager for its entities and their dependent data.

Table 22-1 Archival and Purge Solutions

Archival and Purge for Entities    Real-time Online Mode    Operated via Command Line    Available via Other Modes

Reconciliation                     Yes                      Yes                          -
Provisioning Tasks                 Yes                      Yes                          -
Request                            Yes                      Yes                          -
Orchestration                      Yes                      No                           -
Certification                      Yes                      No                           -
Legacy Audit                       Yes                      No                           Partition-based approach
Lightweight Audit                  Yes                      No                           Partition-based approach

For more information on the partition-based approach for Legacy Audit, see About Audit Data Growth Control Measures in Legacy Audit Framework. For the partition-based approach for Lightweight Audit, see About Audit Data Growth Control Measures in Lightweight Audit Framework.

22.2 Archival and Purge Concepts

Archival and purge concepts include the purge-only versus purge-and-archive solutions, archival and purging of data, retention period, and modes of archival purge operations.

This section contains the following topics:

  • Purge Only Solution Versus Purge and Archive Solution for Entities

  • Archival of Data in Oracle Identity Governance

  • Purging of Data in Oracle Identity Governance

  • Real-Time Purging in Oracle Identity Governance

  • Retention Period in Oracle Identity Governance

  • Modes of Archival Purge Operations

22.2.1 Purge Only Solution Versus Purge and Archive Solution for Entities

Both the purge-only solution and the purge-and-archive solution are applicable to the real-time purge and archival feature.

Oracle Identity Manager entities are divided into the following categories based on how their data is purged and archived by the real-time purge and archival feature:

  • Purge only: Entities for which data is directly purged but not archived. These entities are Reconciliation, Provisioning Tasks, Orchestration, and Legacy Audit (UPA).

  • Purge and archive: Entities for which data is purged as well as archived. This is applicable to the Request, Reconciliation Exceptions, and Certification entities.

Note:

The real-time purge and archival solution provides data purge capabilities on a continuous basis. In addition, you can use the command-line archival utilities periodically to archive data, if required. The command-line versions of the archival purge utilities do not make this categorization of entities; they always archive prior to purging. For details about the command-line archival utilities, see Using Command-Line Option of the Archival Purge Utilities in Oracle Identity Governance.

22.2.2 Archival of Data in Oracle Identity Governance

Archival (prior to purge) is the standard mechanism followed by the Oracle Identity Manager command-line utilities that delete data from the active feature or entity tables.

Archival of data is done by copying the data to a shadow copy or replica of the original table, typically named with an ARCH_ prefix (ARCH_TABLE_NAME). The archive operation precedes the purge in data purge solutions for entities in the Purge and Archive category.

22.2.3 Purging of Data in Oracle Identity Governance

Purging of data is the mechanism to delete or purge data from the Active Feature or Entity tables without any preceding archive operation.

Data purged is non-recoverable in Oracle Identity Manager.

22.2.4 Real-Time Purging in Oracle Identity Governance

Real-time purge denotes that data is deleted or purged when Oracle Identity Manager is up and running and is available irrespective of the feature invocation, concurrency, or workload.

However, in contrast to the literal meaning of real-time, entity data created in the system is not deleted immediately.

22.2.5 Retention Period in Oracle Identity Governance

Retention period defines the age of the data that needs to be retained in Oracle Identity Manager for functional usage and compliance purposes.

Data is deleted based on the age defined by the retention period value for the entity data in question. The Retention Period attribute must be defined for the real-time purge feature via the OIM Data Purge scheduled job user interface.

22.2.6 Modes of Archival Purge Operations

Archival purge operations can be run in online or offline modes.

Archival purge operations can be performed in the following modes:

  • Offline mode: In this mode, archival and purge of data renders Oracle Identity Manager unusable for the time period during which it is run. Because the entire operation is database-intensive, the utility disables the constraints and indexes at the beginning, copies and deletes the data from the entity tables, and re-enables the constraints and indexes after deletion. This attains maximum performance in the delete operation and eliminates the possibility of functional inconsistencies in data entered during the window of deletion while table-level constraints are disabled. Therefore, transactional changes through Oracle Identity Manager usage are not advised, and as a result, the system is offline from the usage perspective.

  • Online mode: In this mode, archival and purge of data happens with all database-level indexes and constraints enabled as usual. Therefore, Oracle Identity Manager usage can continue in online mode from the operational perspective.

    Note:

    Real-time purge supports online mode only. Command-line Archival Purge Utilities support both online and offline modes based on the user input.

22.3 Using Real-Time Purge and Archival Option in Oracle Identity Governance

Oracle Identity Manager provides a real-time and continuous data purge solution to meet the standards of performance and scalability by maintaining the data generated for the life cycle management of various entities.

Information about using the real-time and continuous data purge solution is described in the following sections:

  • About Real-Time Data Purge and Archival

  • Configuring Real-Time Purge and Archival

  • About the Orchestration Purge Utility

  • About the Reconciliation Exceptions Purge Utility

  • Collecting Diagnostic Data of the Online Archival and Purge Operations

22.3.1 About Real-Time Data Purge and Archival

The real-time purge and archival capability is provided by default in Oracle Identity Manager. Entity data can be continuously purged through this capability based on the configured options.

The configuration is a one-time activity, and the purge solution then works automatically without any intervention from the administrator.

The real-time purge and archival has the following features:

  • Administrators provide values for some critical parameters, such as retention period, run duration, and purge criteria, for entities by using the Scheduled Tasks section of Oracle Identity System Administration.

  • Diagnostic information about each purge run is captured as a log.

  • Purge tasks run periodically.

  • The entity modules, such as Request, Reconciliation, Reconciliation Exceptions, Task, Orchestration, and Legacy Audit, are purged according to the allotted time duration.

  • The purge solution is fail-safe. This means that even in an error situation, the system does not endlessly consume CPU cycles. A fail-safe design has a minimum impact on other modules. The fail-safe capability is provided by:

    • Maximum Run Time for Auto-Cutoff in Purge Run for each Entity: Each run of the purge utility is governed by the value of the Maximum Purge Run Time parameter, which is specified in minutes. Purge automatically stops when this maximum purge run duration is exceeded. This is provided at each entity level so that you can control the purge time period allocation at the feature level.

      Each batch picked up for deletion is aware of the time factor. When the allotted time is exceeded, the next batch is skipped and the utility's flow of control comes to completion.

      The Maximum Purge Run Time in minutes for each entity can be specified in the scheduled task UI.

    • Single-threaded batching: The purge operation accepts a batch size, which is the maximum number of rows to delete before a commit is issued. This keeps the redo log segments from growing too large when purge is applied to a large number of rows. The batch size is accepted from the scheduled task interface for the purge run operation.

  • Data growth and the subsequent footprint are controlled on an ongoing basis.

  • It operates online with no disruption of service.

  • The purge operation via an automated scheduled task runs silently at a predefined periodicity and is non-interactive. Various metrics related to the purge operation, such as names of the entity modules, success or failure status, and number of rows targeted for deletion, are logged. These logs are diagnostic pointers for the purge operation for every run.

  • The volume of data purged through the Real-time Purge Utilities Framework is a function of a few inputs, such as time duration window, entities selected, and existing workload. There might be instances when outflow of data in Oracle Identity Manager via this purge functionality is less than the inflow, which means that there would be some data volume accumulating in the system. This can be then purged via the Command-Line Archival/Purge Utilities at a reasonable point in time.
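The fail-safe mechanisms described above (the per-entity time budget and single-threaded batching with per-batch commits) can be sketched in PL/SQL. This is an illustrative model only, not the actual Oracle Identity Manager purge code; ENTITY_TABLE, its CREATE_DATE column, and the literal retention, batch-size, and run-time values are hypothetical placeholders:

```sql
DECLARE
  v_batch_size PLS_INTEGER := 5000;                 -- rows deleted before each commit
  v_cutoff     DATE        := TRUNC(SYSDATE) - 90;  -- retention period in days
  v_deadline   TIMESTAMP   := SYSTIMESTAMP + NUMTODSINTERVAL(30, 'MINUTE');
BEGIN
  LOOP
    EXIT WHEN SYSTIMESTAMP > v_deadline;  -- time budget exhausted: skip remaining batches
    DELETE FROM entity_table              -- hypothetical driving table
     WHERE create_date <= v_cutoff
       AND ROWNUM <= v_batch_size;
    EXIT WHEN SQL%ROWCOUNT = 0;           -- nothing left that matches the criteria
    COMMIT;                               -- per-batch commit keeps undo/redo bounded
  END LOOP;
  COMMIT;
END;
/
```

Deleting in bounded batches with intermediate commits is what keeps the redo log segments from growing too large during a long purge run.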

22.3.2 Configuring Real-Time Purge and Archival

Entity data is continuously purged by the purge solution based on the options or choices that you make when you configure the running of the utility. You can modify these options based on data retention policies and maintenance requirements.

To configure real-time purge and archival:

  1. Log in to Oracle Identity System Administration.
  2. Under System Management, click Scheduler.
  3. Search for and open the OIM Data Purge Job or OIM Recon Exceptions Purge Job scheduled job.
  4. In the Parameters section, specify values for the parameters, as described in Table 22-2:

    Table 22-2 Purge Configuration Parameters

    Global parameters:

      • Batch Size: The purge operation runs in batches. This parameter specifies the maximum number of rows to delete before a commit is issued. Default: 5000

      • Maximum Purge Run Duration Per Entity(in Mins): The maximum run duration in minutes for purge processing for each entity. Default: 30 mins

    Orchestration purge parameters:

      • Orchestration Entity Selection: Specifies whether or not data is to be purged from orchestration tables. Default: Yes

      • Orchestration Purge Criteria: Takes the value 1 for completed orchestrations, 2 for failed, compensated, canceled, or canceled with compensation orchestrations, or 3 for both 1 and 2. Default: 1

      • Orchestration[COMPLETED] Retention Period(in days): The retention period in days for completed orchestrations. Default: 1 day

      • Orchestration[OTHERS] Retention Period(in days): The retention period in days for failed, compensated, or other orchestrations. Default: 30 days

    Provisioning task purge parameters:

      • Provisioning Task Entity Selection: Specifies whether or not data is to be purged from provisioning task tables. Default: Yes

      • Provisioning Tasks Purge Criteria: Takes the value 1 for completed provisioning tasks, or 2 for completed and canceled provisioning tasks. Default: 1

      • Provisioning Tasks Retention Period(in days): The retention period in days for provisioning tasks. Default: 90 days

    Reconciliation purge parameters:

      • Recon Entity Selection: Specifies whether or not data is to be purged from reconciliation tables. Default: Yes

      • Recon Purge Criteria: Takes the value 1 for completed reconciliation events, 2 for linked reconciliation events, or 3 for both 1 and 2. Default: 1

      • Recon Exceptions Purge Criteria: Takes the value 1 for completed reconciliation events, 2 for linked reconciliation events, 3 for both 1 and 2, or 4 for ALL reconciliation events. No default value.

      • Recon Retention Period(in days): The retention period in days for reconciliation events. Default: 180 days

    Request purge parameters:

      • Request Entity Selection: Specifies whether or not data is to be purged from request tables. Default: No

      • Request Purge Criteria: Takes the value 1 for completed requests, 2 for failed requests, or 3 for completed and failed requests. Default: 1

      • Request Retention Period(in days): The retention period in days for requests. Default: 90 days

    Legacy Audit (UPA):

      • User Audit Entity Selection: Specifies whether or not data is to be purged from the audit table. Default: Yes

        Note: After you upgrade to Oracle Identity Governance 12c (12.2.1.3.0) from an earlier release, for all the scheduled jobs defined against the OIM Data Purge Task scheduled task, the User Audit Entity Selection parameter value is No. As a result, the existing OIM data is not automatically purged without notifying the user. If you want to start the data purge per this job definition, then change the value of the User Audit Entity Selection parameter to Yes.

      • User Audit Retention Period (in days): The retention period in days for user audit data. Default: 365 days

    Note:

    By default, the 'OIM Data Purge Job' scheduled job is available in the enabled state with a retention period of 90 days. Revisit the job parameters to disable the job or to change the purge interval as required.

  5. Click Apply.

Apart from providing the configuration inputs on the Scheduled Task UI as documented in this section, no further manual steps, such as archival tablespace creation, are required. All the steps in the subsequent sections apply only to running the command-line version of the utilities.

Note:

  • For the real-time archival purge operation via the Scheduled Task interface, the Retention Period must not be specified as zero, because this can cause inconsistencies in the purge operation.

  • Simultaneous runs of multiple OIM Data Purge Job scheduled jobs are not supported via instantiation of the Scheduled Task functionality.

  • There should be no overlap of an archival/purge utility run for an entity across the two modes in Oracle Identity Manager, which are the scheduled task and command-line modes.

  • For details of the purge internals, such as the tables that undergo purge for Request, Reconciliation, and Provisioning Tasks, refer to the subsequent sections on the command-line utilities. Both the real-time scheduled job-based purge and the command-line archival utilities purge data from the same set of tables for an entity.

  • If the database is restarted while any scheduled job is running, the job becomes stuck in RUNNING status. You need to restart the scheduler service to stop all the jobs that are stuck in RUNNING status.

    For more information on how to stop the scheduled services, see Starting and Stopping the Scheduler.

22.3.3 About the Orchestration Purge Utility

Orchestration data purge takes place from the active orchestration tables via the unified OIM Data Purge Job scheduled job interface.

Orchestration data purge is based on the following criteria:

Note:

Orchestration purge is available only in online mode and via the scheduled job interface.

  • Orchestration process status, such as Completed, Failed, Compensated, Canceled, or Canceled with Compensation.

  • Time-based criteria, which is specified via the retention period value specified in days on the scheduled job interface.

The following active orchestration tables undergo purge via the Orchestration Purge feature:

  • ORCHPROCESS

  • CALLBACK_INVOCATION_RESULT

22.3.4 About the Reconciliation Exceptions Purge Utility

Reconciliation exceptions data purge takes place from the Recon_Exceptions table via the OIM Recon Exceptions Purge Job scheduled job run.

Reconciliation exceptions data purge is run as a separate scheduled task, which is OIM Recon Exceptions Purge Job, so that it is not run accidentally with other entity purges in OIM Data Purge Job.

Reconciliation exceptions data purge is based on the following criteria:

Note:

Reconciliation exceptions data purge is available only in online mode and via the scheduled job interface.

  • Reconciliation exceptions purge criteria is based on the reconciliation events: 1 for Closed Reconciliation Events, 2 for Linked Reconciliation Events, 3 for both 1 and 2, and 4 for ALL Reconciliation events.

  • Time-based criteria, which is specified via the retention period value specified in days on the scheduled job interface.

Only the Recon_Exceptions table undergoes purge via the reconciliation exceptions purge.

22.3.5 Collecting Diagnostic Data of the Online Archival and Purge Operations

You can capture and communicate the various metrics and diagnostic data related to the archival and purge operation.

The Real-Time Purge and Archival operation via the automated scheduled task runs silently at a predefined periodicity and is non-interactive. However, you can capture and communicate the various metrics related to the purge operation, such as:

  • Names of the Entity modules that were picked

  • Success/failure status

  • Exceptions encountered during the run

  • Number of rows targeted for deletion

  • Actual number of rows purged

At a minimum, these metrics are logged for every run. At any point in time, data of the most recent 500 runs is available.

The following diagnostic logging tables are part of the Real-Time Purge and Archival operation to store the diagnostic information of the entity purge runs:

  • OIM_DATAPURGE_TASK_LOG: Stores the critical information related to the purge runs controlled by the scheduled task for the deletion of Entity data.

    Table 22-3 lists the columns of the OIM_DATAPURGE_TASK_LOG table.

    Table 22-3 Columns of the OIM_DATAPURGE_TASK_LOG Table

    Column Description

    OIM_DATAPRGTASK_KEY

    Stores keys to uniquely identify tasks

    OIM_DATAPRG_ID

    Stores unique purge name

    SCH_JOB_ID

    Stores the Job ID of the scheduled task as assigned by the Scheduler

    EXECUTION_MODE

    The execution mode of the purge run, which is SCH for scheduled task mode.

    PURGERUN_START_TIME

    Stores the start time of the entire purge run

    PURGERUN_END_TIME

    Stores the end time of the entire purge run

    PURGERUN_STATUS

    Stores the overall status of the purge run, which can be any one of the following during the run:

    • STARTED

    • COMPLETED

    • ERRORED_OUT

      Task-level purge run could not proceed due to run-time errors. The root cause can be further probed into via the PURGE_RUN_NOTE column that stores the exception stack trace.

    • COMPLETED WITH ERROR

      Task-level purge run has completed but one of its modules could not get completed within the allotted time or encountered some run-time errors. The root cause can be further probed into via the PURGE_RUN_NOTE column that stores the exception stack trace.

    PURGE_RUN_NOTE

    Stores the task-level exception details at the purge run

  • OIM_DATAPRG_TASKS_LOGDTLS: Stores the critical information related to the Module or Entity-level purge runs controlled by the scheduled task.

    Table 22-4 lists the columns of the OIM_DATAPRG_TASKS_LOGDTLS table.

    Table 22-4 Columns of the OIM_DATAPRG_TASKS_LOGDTLS Table

    Column Description

    OIM_DATPRGLOGDET_KEY

    Stores keys to uniquely identify a module in a task

    OIM_DATAPRGTASK_KEY

    Stores the logical foreign key referencing the OIM_DATAPURGE_TASK_LOG table

    MOD_NAME

    Stores the module name, such as:

    • RECON

    • REQUEST

    • ORCH

    • PROVTASKS

    • AUDIT

    • RECON EXCEPTIONS

    EST_ALLOCT_TIME

    Stores the time allocated for the module purge run

    MOD_STATUS

    Stores the module status, which can be any one of the following during the run:

    • STARTED

    • COMPLETED

    • COMPLETED WITH ERROR

      Module or Entity purge run has completed within the allotted time duration but encountered errors during its execution. The root cause can be further probed into via the MOD_PURGE_RUN_NOTE column that stores the exception stack trace.

    • ERRORED_OUT

      Module or Entity purge run could not proceed because of run-time errors. The root cause can be further probed into via the MOD_PURGE_RUN_NOTE column that stores the exception stack trace.

    • PARTIALLY COMPLETED

      Module or Entity purge run is unable to complete within the allotted time duration. This is an acceptable functional state of completion. The root cause can be further probed into via the MOD_PURGE_RUN_NOTE column that stores the exception stack trace.

    • PARTIALLY_COMPLETED WITH ERROR

      Module or Entity purge run could not complete within the allotted time duration but also encountered errors during its execution. The root cause can be further probed into via the MOD_PURGE_RUN_NOTE column that stores the exception stack trace.

    MODPURGERUN_START_TIME

    Stores the start time of the module purge run

    MODPURGERUN_END_TIME

    Stores the end time of the module purge run

    EST_PURGE_ROW_CNT

    Stores the driving table target row count for purge run for the module

    ACTUAL_PURGE_ROW_CNT

    Stores the actual driving table rows deleted during purge run

    MOD_PURGE_RUN_NOTE

    Stores the exception or other information encountered at module level

  • OIM_DATAPRG_FAILED_KEYS: Stores the entity keys for each Module or Entity that have failed during the scheduled purge run.

    Table 22-5 lists the columns of the OIM_DATAPRG_FAILED_KEYS table.

    Table 22-5 Columns of the OIM_DATAPRG_FAILED_KEYS Table

    Column Description

    OIM_DATAPRGFAILED_KEY

    Stores keys to uniquely identify a failed task

    OIM_DATAPRGTASK_KEY

    Stores the logical foreign key referencing the OIM_DATAPURGE_TASK_LOG table

    MOD_NAME

    Stores the module name for which the purge run fails

    MOD_ENTITY_KEY

    Stores the driving table key for each module

    ERROR_NOTE

    Stores the exception stack trace

The OIM_DATAPURGE_TASK_LOG and OIM_DATAPRG_TASKS_LOGDTLS tables contain the data of the last 500 runs. The OIM_DATAPRG_FAILED_KEYS table stores the failed keys data for the last run only.
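Based on the columns documented above, a sample diagnostic query (a sketch; adjust the column list as needed) that joins the task-level and module-level logs to review recent purge runs could look like this:

```sql
SELECT t.SCH_JOB_ID,
       t.PURGERUN_START_TIME,
       t.PURGERUN_STATUS,
       d.MOD_NAME,
       d.MOD_STATUS,
       d.EST_PURGE_ROW_CNT,
       d.ACTUAL_PURGE_ROW_CNT
  FROM OIM_DATAPURGE_TASK_LOG t
  JOIN OIM_DATAPRG_TASKS_LOGDTLS d
    ON d.OIM_DATAPRGTASK_KEY = t.OIM_DATAPRGTASK_KEY
 ORDER BY t.PURGERUN_START_TIME DESC;
```

Run the query as the Oracle Identity Manager database schema user. Rows with MOD_STATUS values such as ERRORED_OUT or PARTIALLY COMPLETED can then be probed further via the MOD_PURGE_RUN_NOTE column.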

Note:

For troubleshooting issues in the PL/SQL layer during OIM Data Purge scheduled task run, see Using the PL/SQL Unified Diagnostic Logging and Debugging Framework.

22.4 Using Command-Line Option of the Archival Purge Utilities in Oracle Identity Governance

The command-line option of the archival purge utilities includes the Reconciliation Archival Utility, Task Archival Utility, and Requests Archival Utility.

This section describes how to use the command-line archival purge utilities. It contains the following topics:

Note:

You can use the Reconciliation Archival utility, the Task Archival utility, and the Requests Archival utility in both offline and online modes.

22.4.1 About Command-Line Utilities

Oracle Identity Manager provides archival and purge of entity data via a command-line utilities option for three entities, namely reconciliation, provisioning tasks, and requests.

All the command-line utilities are part of Oracle Identity Manager installation and are interactive to capture user-specified parameters to archive and purge entity data. These utilities are available for both Linux and Microsoft Windows operating system environments.

22.4.2 Using the Reconciliation Archival Utility

You can use the Reconciliation Archival utility to archive data that has been reconciled with Oracle Identity Manager. This involves meeting the prerequisites, understanding the archival criteria for reconciliation data, running the utility, understanding the logs generated, and troubleshooting any issues.

This section describes how to use the Reconciliation Archival utility. It contains the following topics:

22.4.2.1 About the Reconciliation Archival Utility

Oracle Identity Manager stores reconciliation data from target systems in Oracle Identity Manager tables called active reconciliation tables.

During the reconciliation process, Reconciliation Manager reconciles data in the active reconciliation tables with the Oracle Identity Manager core tables. Because Reconciliation Manager does not remove reconciled data from the active reconciliation tables, they might eventually grow very large, resulting in decreased performance during the reconciliation process. You can use the Reconciliation Archival utility to archive data that has been reconciled with Oracle Identity Manager. The Reconciliation Archival utility stores archived data in the archive reconciliation tables, which have the same structure as the active reconciliation tables.

Table 22-6 lists the active reconciliation tables with the corresponding archive reconciliation tables in which data from the active reconciliation tables are archived.

Table 22-6 Active and Archive Reconciliation Tables

Active Reconciliation Tables (Oracle Identity Manager Tables)    Archive Reconciliation Tables

RECON_EVENTS                                    ARCH_RECON_EVENTS
RECON_JOBS                                      ARCH_RECON_JOBS
RECON_BATCHES                                   ARCH_RECON_BATCHES
RECON_EVENT_ASSIGNMENT                          ARCH_RECON_EVENT_ASSIGNMENT
RECON_HISTORY                                   ARCH_RECON_HISTORY
RECON_USER_MATCH                                ARCH_RECON_USER_MATCH
RECON_ACCOUNT_MATCH                             ARCH_RECON_ACCOUNT_MATCH
RECON_CHILD_MATCH                               ARCH_RECON_CHILD_MATCH
RECON_ORG_MATCH                                 ARCH_RECON_ORG_MATCH
RECON_ROLE_MATCH                                ARCH_RECON_ROLE_MATCH
RECON_ROLE_HIERARCHY_MATCH                      ARCH_RECON_ROLE_HIER_MATCH
RECON_ROLE_MEMBER_MATCH                         ARCH_RECON_ROLE_MEMBER_MATCH
RA_LDAPUSER                                     ARCH_RA_LDAPUSER
RA_MLS_LDAPUSER                                 ARCH_RA_MLS_LDAPUSER
RA_LDAPROLE                                     ARCH_RA_LDAPROLE
RA_MLS_LDAPROLE                                 ARCH_RA_MLS_LDAPROLE
RA_LDAPROLEMEMBERSHIP                           ARCH_RA_LDAPROLEMEMBERSHIP
RA_LDAPROLEHIERARCHY                            ARCH_RA_LDAPROLEHIERARCHY
All horizontal tables listed in RECON_TABLES    "ARCH_" followed by the first 25 characters of the horizontal table name (RA_* tables)

Note:

Data from the RECON_EXCEPTION table is not archived or purged. This is due to a predefined Oracle Identity Manager BIP report dependency. The data from the RECON_EXCEPTION table can be purged by running the OIM Recon Exceptions Purge Job scheduled task.
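The naming convention for the horizontal (RA_*) archive tables can be expressed as a SQL query. This is a sketch that assumes the RA_* tables reside in the current schema; truncating to 25 characters keeps the derived name, with its ARCH_ prefix, within Oracle's classic 30-character identifier limit:

```sql
SELECT table_name,
       'ARCH_' || SUBSTR(table_name, 1, 25) AS archive_table_name
  FROM user_tables
 WHERE table_name LIKE 'RA\_%' ESCAPE '\';
```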

The Reconciliation Archival utility performs the following tasks:

  • Archives all or specific data from the active reconciliation tables to the archive reconciliation tables

  • Deletes all data from the active reconciliation tables

The Reconciliation Archival utility archives data by moving it from the active reconciliation tables to the archive reconciliation tables based on the following two-fold criteria, per the user inputs:

  • The date-based criterion, which is the reconciliation event creation date. This must be specified in the YYYYMMDD format. All records created on or before this date are archived.

  • The functional state-based criterion, which is the reconciliation event status. This must be selected from the prompted status options when the utility is run.

For information about the archiving criteria, refer to Archival Criteria for Reconciliation Data.

If you choose to archive selective data, then the utility archives reconciliation data with the selected event status that was created on or before the specified date.

When you archive all data from the active reconciliation tables to the archive reconciliation tables, the Reconciliation Archival utility archives all reconciliation data created on or before the specified date.

The files that constitute the Oracle Database version of the Reconciliation Archival utility are located in the following directory:

OIM_HOME/server/db/oim/oracle/Utilities/Recon11gArchival

You can run the Reconciliation Archival utility in offline mode with Oracle Identity Manager stopped, or in online mode with Oracle Identity Manager running.

Before running the utility in offline mode, you must stop Oracle Identity Manager.

22.4.2.2 Prerequisite for Running the Reconciliation Archival Utility

Before running the Reconciliation Archival utility, the OIM_RECON_ARCH tablespace must be created in the database. To do so, you can run the following sample command as a user with DBA privileges, for instance SYS or SYSTEM:

CREATE TABLESPACE OIM_RECON_ARCH
        LOGGING DATAFILE 'ORADATA/OIM_RECON_ARCH.dbf'
        SIZE 500M REUSE AUTOEXTEND ON NEXT 10M;

Note:

  • You must replace ORADATA in the preceding sample command with the full path to your ORADATA directory.

  • You must set LD_LIBRARY_PATH to start Oracle utilities such as SQL*Plus in the environment where you want to run Oracle Identity Manager utilities.

  • Data that has been archived from the active reconciliation tables to the archive reconciliation tables will no longer be available through Oracle Identity Manager. To access this data, you must query the archive reconciliation tables in your Oracle Identity Manager database.

If you are using ASM, Exadata (ASM) or Oracle Managed Files (OMF), then follow the instructions described here.

If you are using ASM, then you can use the name of a disk group, say DATA1, to create the tablespace in the database as follows:

CREATE TABLESPACE OIM_RECON_ARCH
       LOGGING DATAFILE '+DATA1'
       SIZE 500M AUTOEXTEND ON NEXT 10M;

If you are using Oracle Managed Files, then you can omit the datafile and run the command as follows:

CREATE TABLESPACE OIM_RECON_ARCH
       LOGGING DATAFILE
       SIZE 500M AUTOEXTEND ON NEXT 10M;
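Whichever variant you use, you can confirm that the tablespace was created before running the utility by querying the DBA_TABLESPACES data dictionary view as a DBA-privileged user:

```sql
SELECT tablespace_name, status
  FROM dba_tablespaces
 WHERE tablespace_name = 'OIM_RECON_ARCH';
```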

If you want to run the utility in offline mode, then you must stop Oracle Identity Manager before running the utility.

22.4.2.3 Archival Criteria for Reconciliation Data

To select reconciliation data to archive, provide the following criteria. Data with matching values and no reference in the RECON_EXCEPTION table is archived and purged.

  1. Date must be in the YYYYMMDD format. All records on or before this date that match the specified reconciliation event parameter value will be archived.

  2. Select the reconciliation event status for the data to be archived. Enter 1 for Closed, 2 for Linked, 3 for Closed and Linked, or 4 to exit the utility.
    • Closed describes events that have been manually closed in Reconciliation Manager, that is, any reconciliation events with the status Event Closed.

    • Linked describes events that were reconciled in Oracle Identity Manager, including the Creation Succeeded, Update Succeeded, and Delete Succeeded states.

    • Closed and Linked includes both of the above.

22.4.2.4 Running the Reconciliation Archival Utility

To run the Reconciliation Archival utility:

  1. Ensure that the Oracle Identity Manager database is available and that no reconciliation processes are running.

    Note:

    Oracle recommends that you run the Reconciliation Archival utility during off-peak hours.

  2. If you want to run the utility in offline mode, then stop Oracle Identity Manager.

    To run the utility in online mode, ignore this step and proceed to step 3.

  3. On Microsoft Windows platforms, you must specify the short date format as M/d/yyyy. In addition, you must specify the time format as H:mm:ss. To customize the date and time formats, use the Regional and Language Options command in Control Panel.

    Note:

    • When you change the date and time format, the change is applied to all the applications running on the Microsoft Windows platform.

    • Minimal validation is performed on the date before the utility is called. Scan the log files for ORA-18xx errors, which indicate invalid date values.

  4. On Linux or UNIX platforms, run the following commands to set execution permission for the oim_recon_archival.sh file and to ensure that the file is a valid Linux or UNIX text file:
    chmod 755 path/oim_recon_archival.sh
    dos2unix path/oim_recon_archival.sh
    
  5. On Linux or UNIX platforms, run the path/oim_recon_archival.sh file to run the utility.

    On Microsoft Windows platforms, run the path\oim_recon_archival.bat file to run the utility.

  6. For Oracle Database installations, enter values for the following parameters when prompted:
    • Oracle home directory

    • Oracle Identity Manager database user name and password

  7. Enter the reconciliation creation date in the YYYYMMDD format. All records on or before this date with required status value will be archived.
  8. When prompted, select a reconciliation event status for the data that you want to archive:
    • Enter 1 for Closed

    • Enter 2 for Linked

    • Enter 3 for Closed or Linked

    • Enter 4 for Exit

  9. When prompted to specify the mode of running the utility, enter 1 if you want to run the utility in online mode. Otherwise, enter 2 to run the utility in offline mode.
  10. Enter the batch size for processing.

    The default batch size is 5000.

    Note:

    Batch size is the number of records processed in a single iteration of the archival or purge operation, and it also determines the internal commit interval at the database level. You provide the batch size as an input parameter when you start the utility.

    The default batch size is 5000. When purging more than a few hundred thousand reconciliation events, you can opt for a higher batch size. A higher batch size might require more database resources, such as additional space in the TEMP and UNDO tablespaces.

    The utility archives the reconciliation data and provides an execution summary in a log file.

  11. On Microsoft Windows platforms, reset the short date format to the date format for your region or locale after you run the utility. Use the Regional and Language Options command in Control Panel to reset the date format.
  12. Because data is removed from the active reconciliation tables, your DBA must analyze the active reconciliation tables and their indexes to update the statistics.
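The batch-and-commit behavior described in the note for step 10 can be pictured with a PL/SQL sketch like the following. This is an illustration only, not the utility's actual code; the table name and deletion criteria are placeholders.

```sql
-- Illustrative sketch of batched deletion with an internal commit per batch.
-- The table name and WHERE clause are placeholders, not the utility's logic.
DECLARE
    v_batch_size CONSTANT PLS_INTEGER := 5000;  -- the batch size you enter
BEGIN
    LOOP
        DELETE FROM recon_events                -- placeholder table
         WHERE re_status = 'Event Closed'       -- placeholder criteria
           AND ROWNUM <= v_batch_size;
        EXIT WHEN SQL%ROWCOUNT = 0;             -- stop when nothing is left
        COMMIT;                                 -- internal commit per batch
    END LOOP;
    COMMIT;
END;
/
```

Larger batches mean fewer commits but longer-running transactions, which is why a higher batch size can demand more TEMP and UNDO space.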
22.4.2.5 Log File Generated by the Reconciliation Archival Utility

After it completes, the Reconciliation Archival utility writes an execution summary to a log file. Review this log file to confirm the number of records that were archived and purged, and to check for any errors reported during the run.

22.4.2.6 Troubleshooting Scenario for Reconciliation Archival Utility

While running the Reconciliation Archival utility, if the following error is encountered:

ORA-01034: ORACLE not available, ORA-27101: shared memory realm does not exist

Then, verify whether the target instance is up and running. If not, then contact the database administrator, and bring up the instance. Ensure that the target instance is accessible with Oracle Identity Manager database user credentials by using the SQLPLUS command.

22.4.3 Using the Task Archival Utility

You can use the Task Archival utility to archive the task data and remove it from the active task tables. This involves preparing the database, running the utility, and reviewing the generated output files.

This section describes how to use the Task Archival utility. It contains the following topics:

22.4.3.1 About the Task Archival Utility

In Oracle Identity Manager, a task refers to one or more activities that comprise a process, which handles the provisioning of a resource. For example, a process for requesting access to a resource may include multiple provisioning tasks. Oracle Identity Manager stores task data in the active task tables.

By default, Oracle Identity Manager does not remove completed tasks from the active task tables. As the size of the active task tables increases, you might experience a reduction in performance, especially when managing provisioning tasks. After a task executes successfully, you can use the Task Archival utility to archive the task data and remove it from the active task tables. Archiving task data with the Task Archival utility improves performance and ensures that the data is safely stored.

The Task Archival utility stores archived task data in the archive task tables, which have the same structure as the active task tables.

Table 22-7 lists the active task tables with the corresponding archive task tables in which data from the active task tables are archived.

Table 22-7 Active and Archive Task Tables

Active Task Tables Archive Task Tables

OSI

ARCH_OSI

OSH

ARCH_OSH

SCH

ARCH_SCH

You can use the Task Archival utility to archive the following types of tasks:

  • Provisioning tasks that have been completed

  • Provisioning tasks that have been completed and cancelled

The Task Archival utility archives provisioning tasks by moving them from the active task tables to the archive task tables, based on the following two criteria that you provide as input:

  • The date-based criterion, which is the provisioning task creation date. This must be specified in the YYYYMMDD format. All records created on or before this date are archived.

  • The functional criterion, which is the provisioning task status, for example, Completed or Completed and Cancelled. This must be selected from the status options presented when the utility is run.

The archive operation represents the type of task data to archive and the user status determines whether to archive data for users who have been deleted, disabled, or both. The task execution date represents the date on which a task is executed and must be in the format YYYYMMDD.

All executed tasks, up to the task execution date you specify, are archived. To reduce the time that the archiving process takes, the utility drops the indexes on all active task tables when the number of records to be archived is greater than 200000. The indexes are re-created after the archived data is deleted from the active task tables. You can change the 200000 threshold to your preferred value by editing the following lines in the OIM_TasksArch.bat file or the OIM_TasksArch.sh file:

In the .bat file: set INDXRESP=200000

In the .sh file: indxopt=200000

The files that constitute the Oracle Database version of the Task Archival utility are located in the following directory:

OIM_HOME/server/db/oim/oracle/Utilities/TaskArchival

Note:

Data that has been archived from the active task tables to the archive task tables will no longer be available through Oracle Identity Manager. To access this data, you must query the archive task tables in your Oracle Identity Manager database.

You can run the Task Archival utility in offline mode with Oracle Identity Manager stopped, or in online mode with Oracle Identity Manager running.

Before running the utility in offline mode, you must stop Oracle Identity Manager.

22.4.3.2 Preparing Oracle Database for the Task Archival Utility

Before you can use the Task Archival utility with Oracle Database, you must perform the following steps:

  1. Start SQL*Plus and connect to Oracle Database as a SYS user.
  2. Create a separate tablespace for the archival task tables by entering the following command. Replace DATA_DIR with the directory in which you want to store the data file and adjust the size and other parameters as necessary for your environment.
    CREATE TABLESPACE TasksArch
        DATAFILE 'DATA_DIR\tasksarch_01.dbf' SIZE 1000M REUSE
        EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;
    

    Note:

    Oracle recommends that you allocate a large UNDO tablespace when archiving large amounts of data. In addition, turn on parallel execution by configuring the parallel_max_servers and parallel_min_servers initialization parameters. Parallel execution helps improve the performance of the archival process.

  3. Connect to Oracle Database as the Oracle Identity Manager database user.

Note:

You must set LD_LIBRARY_PATH to start Oracle utilities such as SQL*Plus in the environment where you want to run Oracle Identity Manager utilities.

22.4.3.3 Running the Task Archival Utility

Perform the following steps to run the Task Archival utility:

  1. Ensure that the Oracle Identity Manager database is available but it is not open to other Oracle Identity Manager transactions.

    Note:

    Oracle recommends that you run the Task Archival utility during off-peak hours.

  2. Ensure that you have created a backup of the OSI, SCH, and OSH tables.
  3. If you want to run the utility in offline mode, then stop Oracle Identity Manager.

    To run the utility in online mode, ignore this step and proceed to step 4.

  4. On Microsoft Windows platforms, you must specify the short date format as dddd M/d/yyyy. In addition, you must specify the time format as H:mm:ss. To customize the date and time formats, select the Regional and Language Options command in the Control Panel.

    Note:

    • When you change the date and time format, the change is applied to all the applications running on the Microsoft Windows platform.

    • Minimal validation is performed on the date before the utility is called. Scan the log files for ORA-18xx errors, which indicate invalid date values.

  5. On Linux and UNIX platforms, first run the following commands to set execution permission for the OIM_TasksArch.sh file and to ensure that the file is a valid UNIX text file:

    chmod 755 path/OIM_TasksArch.sh
    dos2unix path/OIM_TasksArch.sh

    Then run the path/OIM_TasksArch.sh file. On Microsoft Windows platforms, run the path\OIM_TasksArch.bat file.
  6. For Oracle Database installations, enter values for the following parameters when prompted:
    • Oracle home directory

    • Oracle Identity Manager database name or TNS string if the Oracle Identity Manager database is running on a remote computer

    • Oracle Identity Manager database user name and password

  7. When prompted, select one of the following options:
    • Archive Provisioning Tasks which have been Completed.

    • Archive Provisioning Tasks which have been Completed and Cancelled.

    • Exit.

  8. When prompted to specify the mode of running the utility, enter 1 if you want to run the utility in online mode. Otherwise, enter 2 to run the utility in offline mode.
  9. Enter a task execution date in the format YYYYMMDD when prompted. All executed tasks, up to the task execution date you specify, will be archived. To archive all tasks that were executed on or before the current date, press Enter without entering a date.
  10. Summary information is displayed before the utility starts the archival process. The summary information gives you the total number of tasks to be archived. Read the summary information carefully and make sure your database can support the delete volume listed in the summary.

    Enter a value of y or Y when prompted to archive the tasks. Otherwise, enter a value of n or N to exit the utility.

    Note:

    You must enter the value of Y or N when prompted. If you press Enter without selecting a value, then the utility again counts the number of tasks to be archived and prompts you without beginning the archive.

  11. On Microsoft Windows platforms, reset the short date format to the date format for your region or locale after the Task Archival utility finishes running. Use the Regional and Language Options command in the Control Panel to reset the date format.

    Note:

    You must analyze the active task tables and their indexes for updated statistics, because the data from active task tables is removed. Perform this step only if you are using Oracle Database as the database for Oracle Identity Manager.

22.4.3.4 Reviewing the Output Files Generated by the Task Archival Utility

Table 22-8 describes the output files that are generated by the Task Archival utility.

Table 22-8 Output Files Generated by the Task Archival Utility

File Description

Err_DB_Conn_timestamp.log

Generated when the utility is unable to connect to the database with the specified credentials

Err_Arch_Tasks_timestamp.log

Generated when the archival or deletion processes fail

Arch_TaskData_timestamp.log

Generated when the archival or deletion processes succeed

Note:

These error log files are deleted when you run the utility again.

22.4.4 Using the Requests Archival Utility

You can use the Requests Archival utility to archive the closed or withdrawn requests. This involves meeting the prerequisites, understanding the input parameters, running the utility, and understanding the generated logs.

This section describes how to use the Requests Archival utility. It contains the following topics:

22.4.4.1 About the Requests Archival Utility

By default, Oracle Identity Manager does not remove closed or withdrawn requests from the active request tables. You can use the Requests Archival utility to archive these requests, free up disk space, and thereby enhance database performance. You can archive request data based on the request creation date and, optionally, the request status. By using the request status, you can archive:

  • Completed requests such as requests with status Withdrawn, Closed, and Completed. This is specified by selecting the 1 for Completed option.

  • Failed requests such as requests with status Failed, and Partially Failed. This is specified by selecting the 2 for Failed option.

  • Completed and failed requests, such as requests with status Withdrawn, Closed, Completed, Failed, and Partially Failed. This is specified by selecting the 3 for Completed and Failed option.

Table 22-9 lists the names of the tables which are to be archived and the corresponding archival table names.

Table 22-9 Archival Tables

Main Table Archival Table

REQUEST

ARCH_REQUEST

REQUEST_HISTORY

ARCH_REQUEST_HISTORY

REQUEST_APPROVALS

ARCH_REQUEST_APPROVALS

REQUEST_ENTITIES

ARCH_REQUEST_ENTITIES

REQUEST_ENTITY_DATA

ARCH_REQUEST_ENTITY_DATA

REQUEST_BENEFICIARY

ARCH_REQUEST_BENEFICIARY

REQUEST_BENEFICIARY_ENTITIES

ARCH_REQUEST_BE

REQUEST_BENEFICIARY_ENTITYDATA

ARCH_REQUEST_BED

REQUEST_TEMPLATE_ATTRIBUTES

ARCH_REQUEST_TA

WF_INSTANCE

ARCH_WF_INSTANCE

REQUEST_COMMENTS

ARCH_REQUEST_COMMENTS

The files that constitute the Oracle Database version of the Requests Archival utility are located in the following directory:

OIM_HOME/server/db/oim/oracle/Utilities/RequestArchival

You can run the Requests Archival utility in offline mode with Oracle Identity Manager stopped, or in online mode with Oracle Identity Manager running.

Before running the utility in offline mode, you must stop Oracle Identity Manager.

22.4.4.2 Prerequisites for Running the Requests Archival Utility

If you want to run the utility in offline mode, then you must stop Oracle Identity Manager before running the utility.

Note:

You must set LD_LIBRARY_PATH to start Oracle utilities such as SQL*Plus in the environment where you want to run Oracle Identity Manager utilities.

22.4.4.3 Input Parameters used by the Requests Archival Utility

Table 22-10 lists the input parameters used by the Requests Archival Utility:

Table 22-10 Input Parameters

Parameter Description

Oracle Home

The value of ORACLE_HOME environment variable on the system.

Oracle SID

The SID of the Oracle Identity Manager database, which is a TNS name or TNS alias.

OIM DB User

The database login ID of the Oracle Identity Manager database user.

OIM DB Pwd

The password of the Oracle Identity Manager database user.

Request Status

The request status based on the user inputs 1, 2, or 3.

Request Creation Date

The utility archives all requests created on or before this request creation date with the required request status.

Batch Size

The utility processes a group of records or batch as a single transaction. The batch size can influence the performance of the utility.

Default value of Batch Size is 2000.

Utility Running Mode

The mode in which you want to run the utility: enter 1 for online mode, or 2 for offline mode.

The utility runs faster in offline mode than in online mode. However, offline mode requires downtime: Oracle Identity Manager is not usable until the utility completes the archival operation. Therefore, make sure that Oracle Identity Manager is not running before choosing offline mode.

22.4.4.4 Running the Requests Archival Utility

To run the Requests Archival utility:

  1. Ensure that the Oracle Identity Manager database is available.

    Note:

    It is recommended that you run the Requests Archival utility during off-peak hours.

  2. If you want to run the utility in offline mode, then stop Oracle Identity Manager.

    To run the utility in online mode, ignore this step and proceed to step 3.

  3. On Microsoft Windows platform, you must specify the short date format as dddd M/d/yyyy. In addition, you must specify the time format as H:mm:ss. To customize the date and time formats, use the Regional and Language Options command in Control Panel.

    Note:

    • When you change the date and time format, the change is applied to all the applications running on the Microsoft Windows platform.

    • Minimal validation is performed on the date before the utility is called. Scan the log files for ORA-18xx errors, which indicate invalid date values.

  4. On UNIX platform, run the following commands to set execution permission for the OIM_request_archival.sh file and to ensure that the file is a valid UNIX text file:
    chmod 755 path/OIM_request_archival.sh
    dos2unix path/OIM_request_archival.sh
    
  5. On UNIX platform, run the path/OIM_request_archival.sh file. On Microsoft Windows platform, run the path\OIM_request_archival.bat file.

    The oim_request_archival script validates the database input and establishes a connection with the database. It then calls the oim_request_archival.sql script, which compiles the PL/SQL procedures related to the utility.

  6. For Oracle Database installations, enter values for the following parameters when prompted:
    • Oracle home directory.

    • Oracle Identity Manager database name or TNS string if the Oracle Identity Manager database is running on a remote computer; otherwise, enter the Oracle SID.

    • Oracle Identity Manager database user name and password.

  7. When prompted, enter one of the following options:
    • Enter 1 to archive the requests with status Request Withdrawn, Request Closed, or Request Completed, and requests with creation date on or before the request creation date specified by the user in the format YYYYMMDD.

    • Enter 2 to archive the requests with status Request Failed or Request Partially Failed, and requests with creation date on or before the request creation date specified by the user in the format YYYYMMDD.

    • Enter 3 to archive the requests with status Request Withdrawn, Request Closed, Request Completed, Request Failed, or Request Partially Failed, and requests with creation date on or before the request creation date specified by the user in the format YYYYMMDD.

  8. When prompted to specify the mode of running the utility, enter 1 if you want to run the utility in online mode. Otherwise, enter 2 to run the utility in offline mode.
  9. Specify the batch size, when prompted.

    Note:

    Batch size is the number of records processed in a single iteration of the archival or purge operation, and it also determines the internal commit interval at the database level. You provide the batch size as an input parameter when you start the utility.

    The default batch size is 2000. You can opt for a higher batch size, but this might require more database resources, such as additional space in the TEMP and UNDO tablespaces.

    The utility archives the request data and provides an execution summary in a log file.

  10. On Microsoft Windows platforms, reset the short date format to the date format for your region or locale after you run the utility. Use the Regional and Language Options command in Control Panel to reset the date format.
  11. Because data is removed from the active request tables, your DBA must analyze the active request tables and their indexes to update the statistics. Perform this step only if you are using Oracle Database as the database for Oracle Identity Manager.
22.4.4.5 Log Files Generated by the Utility

All the logs are written to the logs/ directory created in the current folder. Table 22-11 lists the log files generated by the utility.

Table 22-11 Logs Generated by the DB Archival Utility

Log File Description

validate_date.log

Created when the input REQUEST_CREATION_DATE is invalid

oim_request_archival_summary_TIMESTAMP.log

Contains the summary of the run

Err_DB_Conn_TIMESTAMP_ATTEMPTNUMBER.log

Created when the utility is unable to connect to the database with the credentials provided

22.5 Using the Audit Archival and Purge Utility

Growth control measures for audit data are available for the lightweight audit framework as well as for the legacy audit framework.

This section discusses tools and methodologies available to control the data growth in Lightweight Audit framework and Legacy Audit framework:

22.5.1 About Audit Archival and Purge Utility

The Audit Archival and Purge utility controls the growth of the audit data by purging the data in a logical and consistent manner.

Continuous business operations in the Oracle Identity Manager database result in audit data growth and a gradual increase in the storage consumption of the database server. Audit data related to the provisioning feature of Oracle Identity Manager is stored in the legacy audit table called UPA, and the rest of the data, which is audited by using the lightweight auditing framework, is stored in the AUDIT_EVENT table.

To keep this disk space consumption in control, you can use the Audit Archival and Purge utility. This utility controls the growth of the audit data by purging the data in a logical and consistent manner.

22.5.2 Audit Data Growth Control Measures in Lightweight Audit Framework

Controlling the growth of audit data involves partitioning the AUDIT_EVENT table, archiving or purging the AUDIT_EVENT table, and maintaining the partitions on an ongoing basis.

This section describes how to use the Partition Based Approach to control growth of audit data in Lightweight Audit framework. It contains the following topics:

22.5.2.1 About Audit Data Growth Control Measures in Lightweight Audit Framework

To control the growth of audit data in the lightweight audit framework, that is, the AUDIT_EVENT table, two solutions are available:

  1. Run the Remove Audit Log Events scheduled job.

    When this scheduled job is run, the system automatically purges all audit records that are older than the retention period configured in the Remove Audit Log Events Older Than (in days) field, which is 180 days by default. This scheduled job is enabled by default and supports purging only; it does not archive.

    For more information about Scheduled jobs, see Scheduled Tasks.

  2. Partitioning AUDIT_EVENT table.

    Tip:

    This is a documented approach, and you will need to use this solution based on your audit data compliance or life cycle management requirements. This solution complements the purge scheduled job.

    This solution allows you to achieve the following:

    • Archiving the data, whereas the scheduled job allows only purging of audit data.

    • Flexibility to manage disk size based on your requirement, for example, when data must be saved for a longer duration or when high audit growth is expected.

Figure 22-1 helps you to pick the option(s) that might be suitable to control audit data growth in your deployment.

Figure 22-1 Solutions Available to Control Audit Data Growth in Lightweight Audit Framework

22.5.2.2 Overview of Partition Based Approach

When the partition-based approach is used in combination with the scheduled job, you can achieve the following solutions suitable for your deployment:

  • If you need to archive or keep audit data for a longer duration, such as a few years, then:

    1. Implement the partition-based approach to manage audit data growth. This allows you to archive the data and manage its growth.

    2. The scheduled job comes into purview later, when the data starts approaching the retention period.

  • If you need to keep audit data for a shorter duration, such as a few months, then the scheduled job should be able to control the data growth.

  • If you need to keep audit data for a shorter duration, such as a few months, but the audit data growth rate is high because of a high number of business operations, then:

    1. Implement the partition-based approach to manage audit data growth.

    2. Purge the data by running the scheduled job.

22.5.2.3 Prerequisites for Partitioning the AUDIT_EVENT Table

The following prerequisites must be met before using the partition-based approach:

  • A license for the Oracle Database partitioning option is required to use the partitioning feature of Oracle Database.

  • It is recommended to try out this solution in a development or staging environment before implementing it on the production database.

  • Make sure that the latest backup of the AUDIT_EVENT table is available. Creating a backup of the AUDIT_EVENT table is a compulsory prerequisite before applying this solution.

  • It is recommended to use INTERVAL partitioning if your Oracle Database release is 11g or later. Use RANGE partitioning if your Oracle Database release is earlier than 11g. Partition by month by using the EVENT_DATE column.

  • Decide how many months of audit data you must keep online before implementing this solution. For example, if your audit data retention is six months, you have to partition the AUDIT_EVENT table into six month-based partitions.

  • Make sure that Oracle Identity Manager is not running and is not available while you perform the offline steps. You can start Oracle Identity Manager after the partition for the current month is created successfully; the rest of the audit data can be partitioned while Oracle Identity Manager is running.

22.5.2.4 Preparing the AUDIT_EVENT Table for Archival and Purge

To prepare the AUDIT_EVENT table for the archival and purge solution:

  1. Query the AUDIT_EVENT table to get the minimum and maximum calendar month for the audit data.

    The following query returns the minimum and maximum month. The maximum month should be the current calendar month.

    SELECT  MIN (event_date) min_month, MAX (event_date) running_month  FROM AUDIT_EVENT
    
  2. Based on the result of the previous step, three possible scenarios and time phases listed in Table 22-12 can be considered for partitioning.

    Table 22-12 Possible Scenarios That are Considered For Partitioning

    Scenario Time Phase

    Scenario 1

    If the minimum month and the current calendar month are the same, then create a partition for the current month. Partitions for subsequent months are created automatically if you use INTERVAL partitioning. If you use RANGE partitioning, then you must create future partitions manually.

    Scenario 2

    If the minimum month falls within your retention duration, for example six months: if the minimum month is OCT-2015 and the current calendar month is DEC-2015, then partition from OCT-2015 to DEC-2015. Future partitions are created automatically if you use INTERVAL partitioning.

    Scenario 3

    If the minimum month falls outside your retention duration, for example more than six months back: if the minimum month is MAY-2015 and the current calendar month is DEC-2015, then partition from JUL-2015 to DEC-2015. You must decide what to do with the data for the months (May and June) that fall outside the selected duration.

  3. Refer to the Oracle Database partitioning documentation for the steps and commands to partition the AUDIT_EVENT table.
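As a hedged illustration of the layout those steps produce, the following sketch shows a monthly INTERVAL-partitioned table on Oracle Database 11g or later. The table name and column list here are hypothetical and abbreviated; in practice you would migrate the real AUDIT_EVENT table with all of its columns and indexes, following the Oracle partitioning documentation.

```sql
-- Illustrative sketch only: monthly INTERVAL partitioning on EVENT_DATE.
-- The column list is hypothetical; the real AUDIT_EVENT has more columns.
CREATE TABLE audit_event_part (
    id          NUMBER,
    event_date  DATE NOT NULL
)
PARTITION BY RANGE (event_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(
    -- One seed partition; Oracle creates a partition automatically for
    -- each later month as rows with newer EVENT_DATE values arrive.
    PARTITION p_first VALUES LESS THAN (TO_DATE('01-01-2016', 'DD-MM-YYYY'))
);
```

With INTERVAL partitioning, the monthly partitions required by the scenarios above appear without manual intervention, which is why it is preferred over plain RANGE partitioning on 11g and later.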

22.5.2.5 Archiving or Purging the AUDIT_EVENT Data Using Partitions

Archiving or purging of audit data can be done by moving or dropping partitions. Oracle Identity Manager does not use any partitions other than the current month's partition, and you must not move or drop the current month partition. Which partitions to archive or purge depends on your audit data compliance or life cycle requirements.

For example, if your requirement is to retain one year of audit data for compliance purposes, then follow these steps:

  1. Change the retention period of audit in Remove Audit Log Events scheduled job from default six months (180 days) to one year.
  2. Implement the partition based solution for AUDIT_EVENT table using INTERVAL or RANGE partitioning.
  3. Archive or drop any partitions except the current month partition to offline storage if disk space is a concern. Oracle Identity Manager uses the current month partition to update or insert audit records. You have to keep the current month partition intact for Oracle Identity Manager to work.
  4. When you are about to reach the retention duration, you may want to archive or move the partition that contains the oldest month's data to offline storage. Otherwise, the Remove Audit Log Events scheduled job purges that data when it falls outside the retention period set in the scheduled job.
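One common way to take an old partition offline, sketched here under the assumption of a month-per-partition scheme and hypothetical names (P_OCT_2015, AUDIT_EVENT_OCT2015_ARCH), is to exchange the partition with a standalone table that can then be exported or moved to cheaper storage:

```sql
-- Illustrative sketch: detach an old partition for archiving.
-- The partition and table names are hypothetical.
CREATE TABLE audit_event_oct2015_arch AS
    SELECT * FROM audit_event WHERE 1 = 0;      -- empty, same column shape

ALTER TABLE audit_event
    EXCHANGE PARTITION p_oct_2015 WITH TABLE audit_event_oct2015_arch;

-- The rows now live in the standalone table, which can be exported
-- (for example, with Data Pump) before the now-empty partition is dropped:
ALTER TABLE audit_event DROP PARTITION p_oct_2015 UPDATE GLOBAL INDEXES;
```

EXCHANGE PARTITION is a metadata-only operation, so the detach itself is fast regardless of how much data the partition holds.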
22.5.2.6 Ongoing Partition Maintenance

The partitioned AUDIT_EVENT table requires the following maintenance activities on an ongoing basis:

  • The Remove Audit Log Events scheduled job purges data from partitions that contain audit data older than the retention period. This creates empty partitions in the AUDIT_EVENT table. It is recommended to periodically check for these empty partitions and drop them.

  • Drop these empty partitions during your maintenance window by using a SQL statement such as:

    ALTER TABLE AUDIT_EVENT DROP PARTITION partition_name UPDATE GLOBAL INDEXES;
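A query along the following lines can help locate candidate empty partitions before you drop them. It assumes that optimizer statistics on AUDIT_EVENT are current, because NUM_ROWS is populated by statistics gathering rather than maintained in real time; always verify that a partition is really empty before dropping it.

```sql
-- List partitions of AUDIT_EVENT that statistics report as empty.
-- Gather statistics (DBMS_STATS) on the table first so NUM_ROWS is current.
SELECT partition_name
  FROM user_tab_partitions
 WHERE table_name = 'AUDIT_EVENT'
   AND num_rows = 0;
```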

22.5.3 Partition-Based Approach for Audit Growth Control Measures in Legacy Audit (UPA) Framework

Using the Audit Archival and Purge utility to control the growth of the legacy audit data involves meeting the prerequisites of running the utility, preparing the UPA table for archival and purge, and archiving and purging the UPA table.

This topic contains the following sections:

Note:

The partitioning feature of Oracle Database Enterprise Edition is required for implementing audit archival and purge.

22.5.3.1 About Audit Data Growth Control Measures in Legacy Audit Framework

To control the growth in legacy audit engine, that is in UPA tables, you can use the Audit Archival and Purge utility. This utility controls the growth of the audit data by purging the data in a logical and consistent manner.

Note:

  • The audit archival and purge solution is only applicable to the UPA table. It is not applicable to audit reporting tables, which are tables with the UPA_ prefix.

  • The utility is compatible with Oracle Identity Manager release 9.1.0 and later.

You must shut down Oracle Identity Manager to fetch the latest data, that is, to retrieve the records where EFF_TO_DATE is null. You can retrieve the remaining data later while Oracle Identity Manager is running with the new partitioned UPA table.

Oracle recommends partitioning the UPA table by calendar year, which allows you to archive or drop partitions. The advantage of partitioning is that old partitions can be archived or purged because Oracle Identity Manager does not use the old audit data in those partitions. Oracle Identity Manager uses only the latest audit data and the current calendar year data. Therefore, the UPA table is range-partitioned by calendar year on the EFF_TO_DATE column. After partitioning, the latest audit data, where EFF_TO_DATE is NULL, is grouped in one partition, and there is one partition for each calendar year. Oracle Identity Manager does not read or write any partitions other than the latest and current year partitions.

For instance, if you have been using the Oracle Identity Manager audit feature since 2005 and you implement the audit archival and purge solution in calendar year 2011, then you have seven partitions after this exercise, assuming that you create a partition for each calendar year. Of those seven partitions, Oracle Identity Manager reads or writes only the following:

  • The latest partition

  • The partition for the current year, for example 2011

All the previous year partitions can be archived and then purged. If you do not want to archive them, then you can purge those old partitions directly. Archiving and purging the old partitions reclaims space. You must keep the latest and current year partitions untouched for Oracle Identity Manager to continue working.

22.5.3.2 Prerequisites for Using the Utility

The following prerequisites must be met before or when using the Audit Archival and Purge utility:

  • Database partitioning is supported only on the Enterprise Edition of Oracle Database. Therefore, to implement the audit archival and purge solution, you must run the Enterprise Edition of Oracle Database.

  • The UPA table must be range-partitioned. The time interval can be any value appropriate to your data distribution. Other partitioning methods are not supported.

  • Make sure that the latest backup of the UPA table is available. Creating a backup of the UPA table is a mandatory prerequisite before applying this solution. It is recommended that you try out this solution in a development or staging environment before implementing it on the production database.

  • Decide how many previous years of audit data you must keep online before implementing this solution. This helps in creating partitions beforehand.

  • Each partition should be placed on its own tablespace. Do not share a tablespace between partitions of different years or with other data.

  • During partitioning, the audit data for each calendar year is copied into a table before it is moved into its final destination. You must provision enough disk space to hold the copied data.

22.5.3.3 Preparing the UPA Table for Archival and Purge

To prepare the UPA table for the audit archival and purge solution:

  1. Make sure that the Oracle Identity Manager database has no transactions against it until the UPA table is partitioned.

  2. Query the UPA table to get the minimum and maximum calendar years for the audit data. The following query returns the minimum and maximum years. The maximum year should be the current calendar year.

    SELECT EXTRACT (YEAR FROM MIN (eff_to_date)) min_year,EXTRACT (YEAR FROM MAX (eff_to_date)) running_year FROM upa;
    

    This helps in deciding the partitions for each calendar year, starting from the minimum year.

  3. Make sure that Oracle Identity Manager is not running and that the database is not being accessed by off-line utilities.

  4. Create a new partition table.

    Assuming 2005 as the minimum year and 2011 as the running or current calendar year, the following decisions are to be made before creating the new partitioned table:

    • How many years of old audit data do you want to keep? If you must keep only three years of audit data, then you have to create the new partitioned table starting from year 2008. The data older than 2008 is cleaned up when the original UPA table is dropped.

    • After deciding how many years of old data to keep, the next question is how and where the old data should be kept. Do you want to keep all the old data partitions in the active UPA table, or create a backup of the old partitions and then drop them? Oracle recommends moving the old partitions to tape and then purging them from the UPA table. As stated earlier, you must keep the latest and running calendar year partitions untouched.

    The following sample assumes that you want to keep three years of audit data in the UPA table and that the current calendar year is 2011:

    SQL> SELECT 'Create Table UPA_PART
    (
    UPA_KEY NUMBER (19) Not Null,
    USR_KEY NUMBER (19) Not Null,
    EFF_FROM_DATE TIMESTAMP (6) Not Null,
    EFF_TO_DATE TIMESTAMP (6),
    SRC VARCHAR2 (4000),
    SNAPSHOT CLOB,
    DELTAS CLOB,
    SIGNATURE CLOB
    )
    PARTITION BY RANGE (EFF_TO_DATE)
    (PARTITION UPA_2008 VALUES LESS THAN (TO_DATE(''01/01/2009'', ''DD/MM/YYYY'')) Tablespace upa_2008,
    PARTITION UPA_2009 VALUES LESS THAN (TO_DATE(''01/01/2010'', ''DD/MM/YYYY'')) Tablespace upa_2009,
    PARTITION UPA_2010 VALUES LESS THAN (TO_DATE(''01/01/2011'', ''DD/MM/YYYY'')) Tablespace upa_2010,
    PARTITION UPA_2011_PART1 VALUES LESS THAN (TO_DATE('''||TO_CHAR(SYSDATE,'DD/MM/YYYY HH24:MI:SS')||''',''DD/MM/YYYY HH24:MI:SS'')) TABLESPACE UPA_2011_PART1,
    PARTITION UPA_2011_PART2 VALUES LESS THAN (TO_DATE(''01/01/2012'',''DD/MM/YYYY'')) TABLESPACE UPA_2011_PART2,
    PARTITION UPA_LATEST VALUES LESS THAN (MAXVALUE) TABLESPACE UPA_MAX
    )
    ENABLE ROW MOVEMENT;' FROM DUAL;
    
  5. Create another non-partitioned table with the same structure as the UPA table by running the following statement:

    SQL> Create table upa_non_part Tablespace TBS_NAME as select * from upa where 1=2;
    

    Here, TBS_NAME is the name of the tablespace of the partition that is to be exchanged.

    This table is temporary in nature. Its purpose is to facilitate loading the audit data into the newly partitioned UPA table.

    Note:

    The UPA_NON_PART temporary non-partitioned table must be created in the same tablespace as the partition to be exchanged.

  6. Load the latest audit data into the non-partitioned UPA table, as shown:

    SQL> Insert /*+ parallel */ into upa_non_part select /*+ parallel */   * from upa where eff_to_date is null;
    SQL> COMMIT;
    

    Note:

    Using the /*+ parallel */ hint in the INSERT statement is optional. You can also use other hints to improve performance according to the available resources.

  7. Swap the data into the partitioned table by using the ALTER TABLE command, as shown:

    SQL> ALTER TABLE upa_part EXCHANGE PARTITION UPA_LATEST WITH TABLE UPA_NON_PART WITH VALIDATION UPDATE GLOBAL INDEXES;
    
  8. Drop the upa_non_part table, as shown:

    SQL> DROP TABLE upa_non_part;
    

    While exchanging partitions, the data dictionary is updated instead of data being written physically. Therefore, it is necessary to drop and re-create the temporary non-partitioned UPA_NON_PART table in the same tablespace as each partition to be exchanged.

  9. Rename the original non-partitioned UPA table to UPA_OLD, as shown:

    SQL> ALTER TABLE upa rename TO upa_old;
    
  10. Rename the newly partitioned UPA_PART table to UPA:

    SQL> RENAME UPA_PART to UPA;
    
  11. Manage the constraints for the new UPA table. To do so:

    1. Rename the constraint and indexes of the old UPA table, as shown:

      ALTER TABLE UPA_old RENAME CONSTRAINT PK_UPA TO PK_UPA_old;
      ALTER INDEX IDX_UPA_EFF_FROM_DT RENAME TO IDX_UPA_EFF_FROM_DT_old;
      ALTER INDEX IDX_UPA_EFF_TO_DT RENAME TO IDX_UPA_EFF_TO_DT_old;
      ALTER INDEX IDX_UPA_USR_KEY RENAME TO IDX_UPA_USR_KEY_old; 
      ALTER INDEX PK_UPA RENAME TO PK_UPA_OLD;
      
    2. Create the necessary indexes and primary key constraint on the newly partitioned UPA table. Make sure to add storage characteristics, such as tablespace and size. To do so, run the following SQL query:

      SQL>create index IDX_UPA_EFF_FROM_DT on UPA (EFF_FROM_DATE) Local;
      SQL>create index IDX_UPA_EFF_TO_DT on UPA (EFF_TO_DATE) Local;
      SQL>create index IDX_UPA_USR_KEY on UPA (USR_KEY) Local;
      SQL>ALTER TABLE UPA add constraint PK_UPA primary key (UPA_KEY) using index;
      

      Note:

      The global non-partitioned index is created to support the primary key. A global index becomes unusable every time a partition is modified. You must rebuild the index when required.
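      If the global index becomes unusable after a partition operation, you can rebuild it. The following is a sketch using the PK_UPA index created above:

      ```sql
      -- Check whether the global index is usable; VALID means no rebuild is needed.
      SELECT index_name, status FROM user_indexes WHERE index_name = 'PK_UPA';

      -- Rebuild the index if its status is UNUSABLE.
      ALTER INDEX PK_UPA REBUILD;
      ```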

  12. Run the statistics collection for the UPA table, as shown:

    SQL>Exec dbms_stats.gather_table_stats(ownname => 'SCHEMA_NAME',tabname => 'UPA',cascade => TRUE,granularity => 'GLOBAL and PARTITION');
    

    Note:

    Global statistics must be gathered. Oracle Database 11g includes improvements to statistics collection for partitioned objects so that untouched partitions are not rescanned. This significantly speeds up statistics collection on large tables where some of the partitions contain static data. When a new partition is added to the table, you need to collect statistics only for that partition. The global statistics are automatically updated by aggregating the new partition synopsis with the existing partition synopses.
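    On Oracle Database 11g and later, you can enable incremental statistics maintenance so that global statistics are updated from partition synopses without rescanning old partitions. The following is a sketch; SCHEMA_NAME is a placeholder for your Oracle Identity Manager schema:

    ```sql
    -- Enable incremental statistics maintenance for the partitioned UPA table.
    -- 'SCHEMA_NAME' is a placeholder for the Oracle Identity Manager schema name.
    EXEC DBMS_STATS.SET_TABLE_PREFS(ownname => 'SCHEMA_NAME', tabname => 'UPA', pname => 'INCREMENTAL', pvalue => 'TRUE');
    ```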

  13. Start Oracle Identity Manager. The database is ready to be opened for transactions. Test and make sure that applications are running as expected.

  14. Bring the current year data into UPA_2011_PART1 so that all data is present and consistency is maintained for the current year. To do so, run the following SQL queries in sequence:

    SQL> CREATE TABLE upa_non_part Tablespace TBS_NAME AS SELECT * FROM upa WHERE 1=2;
    

    Here, TBS_NAME is the name of the tablespace of the partition that is to be exchanged.

    SQL> Alter Table UPA_NON_PART add constraint PK_UPA_NON_PART primary key (UPA_KEY) using index;
    
    SQL> Insert into upa_non_part select * from upa_old where eff_to_date >= to_date('01/01/2011', 'mm/dd/yyyy');
    SQL> COMMIT;
    
    SQL> ALTER TABLE upa exchange partition UPA_2011_PART1 WITH table upa_non_part WITH VALIDATION UPDATE GLOBAL INDEXES;
    
    SQL> Drop table upa_non_part;
    
  15. If required, bring previous years' data into the newly partitioned UPA table. To do so:

    1. Run the following SQL queries in sequence:

      SQL> CREATE TABLE upa_non_part Tablespace TBS_NAME AS SELECT * FROM upa WHERE 1=2;
      

      Here, TBS_NAME is the same tablespace as of the partition, which is to be exchanged.

      SQL> Alter Table UPA_NON_PART add constraint PK_UPA_NON_PART primary key (UPA_KEY) using index;
      SQL> Insert into upa_non_part select * from upa_old where eff_to_date >= to_date('01/01/YEAR', 'mm/dd/yyyy') and eff_to_date < to_date('01/01/<YEAR+1>', 'mm/dd/yyyy');
      

      Here, YEAR is the year for which you want to bring the data into the newly partitioned UPA table.

      SQL> COMMIT;
      
      SQL> Alter table upa exchange partition UPA_<year> with table upa_non_part with validation Update global indexes;
      
    2. Rebuild indexes if they are unusable. The following SQL query shows the status of the index partitions; rebuild any that are marked UNUSABLE:

      SQL> Select index_name, partition_name, tablespace_name, status from user_ind_partitions;
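      Any index partition whose status is UNUSABLE in the query output can be rebuilt individually. The following is a sketch; INDEX_NAME and PARTITION_NAME are placeholders taken from that query's output:

      ```sql
      -- Rebuild one unusable local index partition reported by the query above.
      ALTER INDEX INDEX_NAME REBUILD PARTITION PARTITION_NAME;
      ```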
      
    3. Drop the table upa_non_part, as shown:

      SQL> Drop table upa_non_part;
      

    Note:

    Repeat step 15 for each old year.

  16. All partition operations against the UPA table are now complete, and all the data has been brought in. Run the statistics collection for the UPA table, as shown:

    SQL>Exec dbms_stats.gather_table_stats(ownname => '<Schem_name>',tabname => 'UPA',cascade => TRUE,granularity => 'GLOBAL and PARTITION');
    
  17. Drop the UPA_OLD table if it is no longer required. You can create a backup of this table before dropping it.

22.5.3.4 Archiving or Purging the UPA Table
22.5.3.4.1 Partitions That Must Not Be Archived or Purged

Oracle Identity Manager always requires the latest and the current calendar year audit data. The following are the latest and current calendar year partitions:

  • UPA_LATEST: The latest partition

  • UPA_2011_PART1 and UPA_2011_PART2: Partitions for the current year if current year is 2011

You must keep these partitions untouched for Oracle Identity Manager to continue working. They should never be archived or purged.

22.5.3.4.2 Ongoing Partition Maintenance

A new partition must be added to the UPA table before the new calendar year arrives. To do so, use the following SQL template:

SQL> Alter table UPA split partition UPA_LATEST at (TO_DATE('01/01/YEAR+1','DD/MM/YYYY')) into (partition UPA_YEAR tablespace UPA_YEAR,partition UPA_LATEST tablespace UPA_MAX) update global indexes;

Here, YEAR in the TO_DATE function represents the new calendar year plus one. YEAR in the partition name and tablespace name represents the new upcoming calendar year.

An example of the SQL statement for adding a new partition for the new calendar year 2012 is as follows:

SQL> Alter table UPA split partition UPA_LATEST at (TO_DATE('01/01/2013','DD/MM/YYYY')) into (partition UPA_2012 tablespace UPA_2012,partition UPA_LATEST tablespace UPA_MAX) update global indexes;

Oracle recommends adding the new partition by using the given SQL template before the new calendar year arrives. However, if you do not add it before the next calendar year arrives, then you can do so after the new year has started by using the same SQL command.
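As a convenience, the yearly split statement can be generated programmatically. The following Python sketch assumes the partition and tablespace naming convention used in this section (UPA_<year> and UPA_MAX); verify the generated SQL before running it:

```python
# Generate the yearly ALTER TABLE ... SPLIT PARTITION statement for the UPA
# table, following the UPA_<year>/UPA_MAX naming convention in this section.
def split_partition_sql(new_year: int) -> str:
    # The new year's partition holds data up to January 1 of the following year.
    boundary = f"01/01/{new_year + 1}"
    return (
        f"ALTER TABLE UPA SPLIT PARTITION UPA_LATEST "
        f"AT (TO_DATE('{boundary}','DD/MM/YYYY')) "
        f"INTO (PARTITION UPA_{new_year} TABLESPACE UPA_{new_year}, "
        f"PARTITION UPA_LATEST TABLESPACE UPA_MAX) "
        f"UPDATE GLOBAL INDEXES"
    )

# For the new calendar year 2012, this produces the statement shown above.
print(split_partition_sql(2012))
```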

22.5.3.4.3 Archiving or Purging Partitions in the UPA Table

To archive or purge partitions in the UPA table:

  1. If you use the attestation feature of Oracle Identity Manager, then make sure that the partition to be archived or purged does not have any active attestation records. You can use the following SQL query to verify this:
    SQL> SELECT COUNT(1) FROM UPA PARTITION(<PARTITION_TO_BE_DROPPED>) WHERE UPA_KEY IN (select distinct (upa_key) from apt apt, atr atr, atd atd where apt.atr_key=atr.atr_key and atr.atr_completion_time is NULL and apt.apt_key = atd.apt_key);
    

    This query should return zero records, which means that there are no active attestation records. If it returns a non-zero value, then there are still active attestations pointing to the partition to be dropped. This is not common, but you must make sure that there are no active attestation records before dropping an old year partition.

  2. Make sure that there are no custom reports or queries that need the data from the partition to be dropped.
  3. Archive the partition to be dropped to tape or any other media. There are many ways to archive a partition; one of them is to use the Data Pump export utility. Choose the method that works best in your environment.
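    For example, a single partition can be exported with the Data Pump export utility. The following is a sketch, not a definitive command: the credentials, schema name (OIM_SCHEMA), directory object, and file names are placeholders for your environment:

    ```
    # Export one old UPA partition (here UPA_2008) to a dump file before dropping it.
    # OIM_SCHEMA, the credentials, and DATA_PUMP_DIR are environment-specific placeholders.
    expdp system/password tables=OIM_SCHEMA.UPA:UPA_2008 \
      directory=DATA_PUMP_DIR dumpfile=upa_2008.dmp logfile=upa_2008_exp.log
    ```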
  4. Purge the partition. To do so, run the following statements:
    SQL> Alter table UPA drop partition PARTITION_NAME UPDATE GLOBAL INDEXES;
    SQL>Drop tablespace TBS_NAME including contents and datafiles;
    

    Here, TBS_NAME is the tablespace associated with the partition to be dropped, and it must not contain any other data.

    Note:

    • The current year contains two partitions named UPA_2011_PART1 and UPA_2011_PART2. When the current year becomes an old year and its data is ready to be archived or purged, make sure to archive or purge both of these partitions.

    • It is your responsibility to restore the archived data later, if required.

22.6 Using the Real-Time Certification Purge in Oracle Identity Governance

Using real-time certification purge in Oracle Identity Manager involves understanding and configuring the real-time certification purge job utility.

The concepts related to real-time certification purge solutions in Oracle Identity Manager are described in the following sections:

22.6.1 Understanding Real-Time Certification Purge Job

The Real-Time Certification Purge Job capability is provided by default in Oracle Identity Manager. Certification data can be continuously purged by using this feature based on the options or choices made during configuration. This configuration is a one-time process, and the purge solution works automatically without any intervention from the administrator.

The Real-Time Certification Purge Job has the following features:

  • The administrator provides values for some critical parameters by using the Scheduled Tasks section of Oracle Identity System Administration.

  • Diagnostic information about each purge run is captured as a log.

  • Purge tasks run periodically according to the allotted time duration.

  • Data growth and subsequent footprint is controlled on an on-going basis.

  • It operates online with no disruption of service.

  • The purge operation via an automated scheduled task runs silently at a predefined periodicity and is non-interactive.

  • Various metrics related to the purge operation, such as names of the entity modules, success or failure status, and number of rows targeted for deletion, are logged.

  • These logs are diagnostic pointers for the purge operation for every run.

  • Certification Purge task utilizes the existing Purge diagnostic logging framework. Refer to section Collecting Diagnostic Data of the Online Archival and Purge Operations for more information on the framework.

Oracle Identity Manager stores certification data in Oracle Identity Manager tables called active certification tables.

Table 22-13 lists the acronyms used in the naming convention for the Oracle Identity Manager tables that store active certification data in the database.

Table 22-13 Acronyms Used in Archive Certification Tables

Table Acronym Description

CERT_*

Table stores the certifications. CERT_ID is the key to each of these tables

CERTD_*

Table stores the decision-data for certifications

CERTDS_*

Table stores the decision-data and the snapshot-data for certifications

CERTS_*

Table stores the snapshot-data for certifications

You can use the Certification Purge Job to archive data in the archive certification tables, which have the same structure as the active certification tables.

Table 22-14 lists the active certification tables with the corresponding archive certification tables in which data from the active certification tables is archived.

Table 22-14 Active and Archive Certification Tables

Active Certification Tables (Oracle Identity Manager Tables) Archive Certification Tables

CERT_CERTS

ARCH_CERT_CERTS

CERT_CONFIG

ARCH_CERT_CONFIG

CERT_LAST_DECISION

ARCH_CERT_LAST_DECISION

CERT_TASK_INFO

ARCH_CERT_TASK_INFO

CERT_TASK_ACTION

ARCH_CERT_TASK_ACTION

CERT_ACTION_HISTORY_SCOPE

ARCH_CERT_ACTION_HISTORY_SCOPE

CERT_ACTION_HISTORY

ARCH_CERT_ACTION_HISTORY

CERTD_USER

ARCH_CERTD_USER

CERTD_USER_ACCT

ARCH_CERTD_USER_ACCT

CERTD_ROLE

ARCH_CERTD_ROLE

CERTD_APP_INST

ARCH_CERTD_APP_INST

CERTD_ENT_DEFN

ARCH_CERTD_ENT_DEFN

CERTD_ACCT_ENT_ASGN

ARCH_CERTD_ACCT_ENT_ASGN

CERTD_ROLE_POLICY

ARCH_CERTD_ROLE_POLICY

CERTD_POL_ENT_DEFN

ARCH_CERTD_POL_ENT_DEFN

CERTDS_USER_ROLE_ASGN

ARCH_CERTDS_USER_ROLE_ASGN

CERTDS_ENT_ASGN

ARCH_CERTDS_ENT_ASGN

CERTS_USER

ARCH_CERTS_USER

CERTS_USR_UDF

ARCH_CERTS_USR_UDF

CERTS_ROLE

ARCH_CERTS_ROLE

CERTS_APP_INST

ARCH_CERTS_APP_INST

CERTS_ENT_DEFN

ARCH_CERTS_ENT_DEFN

CERTS_ACCOUNT

ARCH_CERTS_ACCOUNT

CERTS_ACCT_ENT_ASGN

ARCH_CERTS_ACCT_ENT_ASGN

CERTS_POLICY

ARCH_CERTS_POLICY

CERTS_POL_ENT_DEFN

ARCH_CERTS_POL_ENT_DEFN

CERTS_CATALOG_UDF

ARCH_CERTS_CATALOG_UDF

Note:

Certification purge is available only in online mode and via the scheduled job interface. Data from the CERTD_STATS, CERT_DEFN, CERT_EVT_LSNR, and CERT_EVT_TRIG tables is not archived or purged.

For information on collecting diagnostic data of real-time certification purge job, see Collecting Diagnostic Data of the Online Archival and Purge Operations.

22.6.2 Configuring Real-Time Certification Purge Job

Certification entity data is continuously purged via the purge solution based on the selections made during configuration of the utility. You can modify these options based on data retention policies and maintenance requirements.

To configure real-time certification purge:
  1. Login to Oracle Identity System Administration.
  2. In the left pane, under System Configuration, click Scheduler.
  3. Search for the OIM Certification Purge Job.
  4. Enable the OIM Certification Purge Job scheduled job.

    Note:

    The OIM Certification Purge Job scheduled job is disabled by default. Enable it after installing or upgrading Oracle Identity Manager.
  5. In the Parameters section, specify values for the parameters, as described in Table 22-15:

    Table 22-15 OIM Certification Purge Job Parameters

    Option Description
    Cert Campaigns for Purge The purge operation runs in batches based on the input passed to this parameter. It represents the maximum number of certification campaigns to delete before a commit is issued. The default value is 10.

    For this field, a value between 10 (minimum) and 25 (maximum) is recommended.

    Maximum Purge Run Duration(in Mins) This is the maximum run duration in minutes for purge processing. Default value is 30.
    Purge Criteria This is the purge criteria and it takes the following values:
    • 1– Completed certification campaigns

    • 2– Expired certification campaigns

    • 3– Both completed certification campaigns and expired certification campaigns

    Default value is 1.

    Purge Retention Period(in days) This indicates the retention period in days for Certification Campaigns. Default value is 180.

    Note:

    By default, the OIM Certification Purge Job is seeded with default values for input parameters such as purge criteria and purge retention period. Revisit the input parameters to change their default values as required.
  6. Click Apply.

    Apart from providing the configuration inputs on the Scheduled Task UI as documented in this section, no further manual steps are required.

    Note:

    • For the Certification Real-Time Purge operation via the Scheduled Task interface, the Retention Period must not be specified as zero because this can cause inconsistencies in the purge operation.

    • Simultaneously running multiple instances of the OIM Data Purge Job and the OIM Certification Purge Job is not supported via instantiation of the Scheduled Task functionality.

22.7 Using the Real-time Entitlement Assignment History Purge in Oracle Identity Governance

OIG Bundle Patch 12.2.1.4.211010 and Later Releases. This content applies only to OIG Bundle Patch 12.2.1.4.211010 and later releases.

Using the real-time Entitlement Assignment History purge in Oracle Identity Manager involves understanding and configuring the real-time Entitlement Assignment History purge job utility.

The concepts related to real-time Entitlement Assignment History purge solutions in Oracle Identity Manager are described in the following sections:

22.7.1 Understanding Real-Time Entitlement Assignment History Purge Job

The real-time Entitlement Assignment History Purge Job capability is provided by default in Oracle Identity Manager. Entitlement Assignment History data can be continuously purged by using this feature based on the options or choices made during configuration.

This configuration is a one-time process, and the purge solution works automatically without any intervention from the administrator.

The real-time Entitlement Assignment History Purge Job includes the following features:

  • The administrator provides values for some critical parameters by using the Scheduled Tasks section of Oracle Identity System Administration.
  • Diagnostic information for each purge run is captured as a log.
  • Purge tasks run periodically according to the allotted time duration.
  • Data growth and subsequent footprint is controlled on an on-going basis.
  • It operates online with no disruption of service.
  • The purge operation via an automated scheduled task runs silently at a predefined periodicity and is non-interactive.
  • Various metrics related to the purge operation, such as names of the entity modules, success or failure status, and number of rows targeted for deletion, are logged.
  • These logs are diagnostic pointers for the purge operation for every run.
  • Entitlement Assignment History Purge task utilizes the existing Purge diagnostic logging framework. For more information on the framework, see Collecting Diagnostic Data of the Online Archival and Purge Operations.

Oracle Identity Manager stores Entitlement Assignment History data in an Oracle Identity Manager table called the active Entitlement Assignment History table. You can use the Entitlement Assignment History Purge Job to archive data into the archive Entitlement Assignment History table, which has the same structure as the active Entitlement Assignment History table.

Active and Archive Entitlement Assignment History Table

Table 22-16 lists the active Entitlement Assignment History table with the corresponding archive Entitlement Assignment History table in which data from the active table is archived.

Table 22-16 Active and Archive Entitlement Assignment History Table

Active Entitlement Assignment History Table Archive Entitlement Assignment History Table

ENT_ASSIGN_HIST

ARCH_ENT_ASSIGN_HIST

Note:

Entitlement Assignment History purge is available only in online mode and via the scheduled job interface.

For more information on collecting diagnostic data of real-time Entitlement Assignment History purge job, see Collecting Diagnostic Data of the Online Archival and Purge Operations.

22.7.2 Configuring Real-time Entitlement Assignment History Purge Job

Entitlement Assignment History entity data is continuously purged via the purge solution based on the selections made during configuration of the utility. You can modify these options based on data retention policies and maintenance requirements.

To configure real-time Entitlement Assignment History purge, perform the following steps:
  1. Log in to Oracle Identity System Administration.
  2. In the left pane, under System Configuration, click Scheduler.
  3. Search for the OIM Entitlement Assignment History Purge Job.
  4. Enable the OIM Entitlement Assignment History Purge Job scheduled job.

    Note:

    The OIM Entitlement Assignment History Purge Job scheduled job is disabled by default. Enable it after installing or upgrading Oracle Identity Manager.
  5. In the Parameters section, specify values for the parameters, as described in Table 22-17:

    Table 22-17 OIM Entitlement Assignment History Purge Job Parameters

    Option Description

    Batch size

    The purge operation runs in batches. It represents the maximum number of rows to delete before a commit is issued.

    Default Value: 5000 (recommended)

    Maximum Purge Run Duration(in Mins)

    This is the maximum run duration in minutes for purge processing.

    Default value: 30

    Purge Retention Period(in days)

    This indicates the retention period in days for purging Entitlement Assignment History data.

    Default value: 180

    Note:

    By default, the OIM Entitlement Assignment History Purge Job is seeded with default values for input parameters like batch size, purge retention period, and so on. You must revisit the input parameters to change their default values as required.
  6. Click Apply.

    Note:

    After the purge job runs, the purged data is deleted from the ENT_ASSIGN_HIST table and archived into the ARCH_ENT_ASSIGN_HIST table.

    Apart from providing the configuration inputs on the Scheduled Task UI as described here, no further manual steps are required.

    Note:

    Simultaneously running multiple instances of the OIM Data Purge Job, the OIM Certification Purge Job, and the OIM Entitlement Assignment History Purge Job is not supported via instantiation of the Scheduled Task functionality.

22.8 Using the Real-time Provisioning Status Accounts Purge in Oracle Identity Governance

OIG Bundle Patch 12.2.1.4.23XXXX and Later Releases. This content applies only to OIG Bundle Patch 12.2.1.4.231009 and later releases.

Note:

Purge provisioning status Accounts Job is available only in online mode and via the scheduled job interface.

The real-time Purge provisioning status Accounts Job capability is provided by default in Oracle Identity Manager. Unwanted accounts that are stuck in the Provisioning status can be purged continuously by using this feature based on the options or choices made during configuration. This configuration is a one-time process, and the purge solution works automatically without any intervention from the administrator. Both task and account data is purged from the provisioning tables. The records can be archived before they are purged (deleted).

To archive the records, specify the value Archive for the Purge Type job parameter. The value Delete does not archive the data; it only purges it.

The real-time Purge provisioning status Accounts Job works as follows:

  1. An administrator provides values for some critical parameters by using the Scheduled Tasks section of Oracle Identity System Administration.

  2. Diagnostic information for each purge run is captured as a log.
  3. Purge tasks run periodically according to the allotted time duration. Data growth and the subsequent footprint are controlled on an on-going basis.

    Note:

    If the job is not available, create a job manually using the task Purge Provisioning status Accounts Task.
  4. The various metrics related to the purge operation, such as names of the entity modules, success or failure status, and number of rows targeted for deletion, are logged.

Active and Archive Provisioning tables

The following table lists the active provisioning tables with the corresponding archive provisioning tables in which data from the active provisioning tables is archived.

Table 22-18 Active and Archive Provisioning tables

Active Provisioning Tables Archive Provisioning Tables
OBI ARCH_PROV_OBI
OIU ARCH_PROV_OIU
ORC ARCH_PROV_ORC
OSH ARCH_PROV_OSH
OSI ARCH_PROV_OSI
OTI ARCH_PROV_OTI
SCH ARCH_PROV_SCH

In addition to the out-of-the-box tables listed above, archive tables with the prefix ARCH_PROV_<parent/child table> are created for parent and child UD tables at run time.

The following topic provides information about configuring Real-time Purge provisioning status:

22.8.1 Configuring Real-time Purge provisioning status Accounts Job

To configure real-time Purge provisioning status Accounts Job, perform the following steps:

  1. Log in to Oracle Identity System Administration.
  2. In the left pane, under the drop-down System Configuration, click Scheduler.
  3. Search for the Purge provisioning status Accounts Job scheduled job.
  4. In the Parameters section, specify the values as described in the following table:

    Table 22-19 Application Table

    Parameter Description
    Application Instance Name Name of the Application Instance
    Purge Type Allowed values:
    1. Delete
    2. Archive
    Batch size The purge operation runs in batches. It represents the maximum number of rows to delete before a commit is issued.

    Default Value: 5000 (Recommended)

    Maximum Purge Run Duration(in mins) This is the maximum run duration in minutes for purge processing. Default value: 30 mins
    Purge Retention Period (in days) This indicates the retention period in days for purging provisioning status Accounts data. Minimum value: 60 Days
  5. Click Apply to save the changes.
  6. By default, diagnostic logging may not be enabled because the default log level is set to NONE.

    To see the diagnostic logging for this job, change the logging level to INFO or FINEST by using the DB Diagnostic Level for Online Data Purge (OIM.DBDiagnosticLevelDataPurge) system property.