Managing MLOG Tables

This chapter covers the following topics:

  • Overview
  • General Guidelines
  • MLOG Growth
  • MLOG Terminology
  • How MLOGs are Used During Data Collection
  • Managing the MLOG Growth for VCP Data Collections
  • Most Common Performance Problems
  • Loading Large Amounts of Data for Implementation

Overview

When the Planning Data Collection is run, a number of concurrent programs are launched to collect data from the EBS source instance tables into the VCP destination instance tables. The Refresh Collection Snapshots program is responsible for taking 'snapshots' of the current data in the EBS source system, and it is often this program that has the biggest performance impact on the entire Planning Data Collection process.

In a distributed installation, where the VCP applications run on a separate instance from the EBS source transaction instance, the Refresh Collection Snapshots program runs on the EBS source instance. When the interval between Data Collections runs is long enough that very high volumes accumulate in the MLOG tables used to refresh the snapshots, the following strategies can help improve performance.

General Guidelines

There are certain entities that change frequently throughout the day. Many customers perform a Complete Refresh Data Collection on a nightly basis. When this is done, a fast refresh of the snapshots is performed by the Refresh Collection Snapshots program for all entities except MTL_MATERIAL_TRANSACTIONS_TEMP. For this table/snapshot, a complete refresh of the snapshot is performed. If the volume of the data in the MLOG$ table is too large, a fast refresh is not as effective as a complete refresh of the snapshots.

Note: If you run Refresh Collection Snapshots as a standalone program and use complete refresh or automatic refresh, then you must launch Data Collections with a complete refresh or data will be lost.

To improve the performance of the Complete Refresh Data Collection, it is advisable to schedule the Refresh Collection Snapshots program to run periodically throughout the day to keep the volume of data in the MLOG$ tables from growing too large during the day. This does not need to be done for all snapshots, just those that have a high volume of inserts, updates, and deletes throughout the day. When running the Refresh Collection Snapshots program, use the following parameters:

  • Refresh Mode: Fast
  • Snapshot Name: the snapshot for the high-volume entity
  • All other parameters: Use the default

To determine which entities within the system have large growth in the MLOG$_ tables, the following scripts can be used to monitor the changes in the snapshot logs:

  1. Run SQL #26 -- Create Temporary Tables to Analyze Changes in Snapshot Logs to create temporary tables used for analyzing the changes in the snapshot logs.

  2. Run SQL #25 -- Lock Statistics for All MLOGs Used for Data Collections to lock statistics on the various MLOG$ tables and capture row counts for your applications release.

  3. Schedule SQL #25 -- Lock Statistics for All MLOGs Used for Data Collections in a cron job to run every 30-60 minutes for a week to monitor the row changes in the MLOG$ tables (a sample cron setup is sketched after this list).

  4. Analyze the data in the temp tables that were created by SQL #26 -- Create Temporary Tables to Analyze Changes in Snapshot Logs by exporting the output into an Excel spreadsheet using the following query:

    select * from rcs_analysis_r12_temp;
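For step 3, a minimal cron setup is sketched below, assuming the SQL #25 script has been saved to a file and a small wrapper script sets the Oracle environment. All file names and paths shown are hypothetical and site-specific.

# Hypothetical crontab entry: capture MLOG row counts every 30 minutes
0,30 * * * * /home/oracle/scripts/run_mlog_monitor.sh

#!/bin/sh
# run_mlog_monitor.sh: set the EBS environment, then run the saved SQL #25 script
. /home/oracle/scripts/set_ebs_env.sh
sqlplus -s apps/"$APPS_PW" @/home/oracle/scripts/sql25_lock_mlog_stats.sql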

When you analyze the output of each run, you should see the record count of the MLOG$_ tables increasing throughout the day up to the point where the complete refresh data collection is run. At that time, the record count should return to 0 and then start to grow again. It's the MLOG$_ tables with the largest amount of growth during the day that should be analyzed.

Example of data from rcs_analysis_r12_temp for the MLOG$_OE_ORDER_LINES_ALL table:

Between 21:00 and 23:30, the volume of data is 0. The data then grows to between 2.7 and 3.5 million rows each day. If the Refresh Collection Snapshots program is scheduled to run every two hours starting at 06:30, the amount of data that must be processed during the complete refresh Data Collection is reduced to about 300,000 rows. This can yield dramatic improvements in the overall data collection process: you may see a 50-60% reduction in the run time of the RCS program for the OE_ODR_LINES_SN snapshot after scheduling the program to run every two hours between runs of the Data Collection process.

Note: It is important that the Refresh Collection Snapshots program not be scheduled to run during the Data Collection process. Be sure to allow enough time for the last scheduled RCS program to complete prior to the scheduled data collection process.

MLOG Growth

MLOG management is the key to the performance of Refresh Collection Snapshots. It is important to manage MLOGs to keep the program tuned to perform as fast as possible. If you are actively using data collections, performance problems occur for two primary reasons:

  • The MLOG tables have grown very large, so each refresh must process a very high volume of change records.
  • An MLOG is shared with snapshots from other applications that are never refreshed, so the RDBMS can never purge rows from the log.

Once the initial collection has been performed and most data collected, use the steps in Advanced Strategy for Handling MLOGs. Executing these steps configures the MLOGs to provide the best performance.

You must also be aware of any MLOGs that are shared with other applications. In that case, the DBA will need to actively manage those other snapshots, refreshing them manually; a cron job may be the best method.

MLOG Terminology

MLOG$ Table

An MLOG$ table is the Snapshot Log table created against a base table. A row is inserted into the table each time a change (insert/update/delete) is made to the base table.

When a snapshot is refreshed in fast (incremental) mode, it reads the rows in the MLOG$ table and uses this information to get the updated information from the base table into the snapshot for further processing.

Standard RDBMS functionality manages rows inserted into the MLOG tables. When a snapshot is refreshed in either fast or complete mode, then the RDBMS will delete rows from the MLOG table. However, if multiple snapshots reference the same MLOG, then all the snapshots must be refreshed before the RDBMS can remove any rows from the MLOG table.
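To see which snapshots must all be refreshed before a given MLOG can be purged, you can query the data dictionary. A minimal sketch, using MTL_SYSTEM_ITEMS_B as an illustrative base table:

select amdr.owner, amdr.mview_name
from   all_mview_detail_relations amdr
where  amdr.detailobj_name = 'MTL_SYSTEM_ITEMS_B'
and    amdr.detailobj_type = 'TABLE'
order  by amdr.mview_name;

If this query returns more than one row, every listed snapshot must be refreshed before the RDBMS can purge rows from the shared MLOG.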

The RDBMS creates snapshot logs on the base table and automatically names them using MLOG$_[20 characters of the table name]. For example:

MRP_FORECAST_DATES (18 char) creates MLOG$_MRP_FORECAST_DATES
BOM_DEPARTMENT_RESOURCES (24 char) creates MLOG$_BOM_DEPARTMENT_RESOU

If truncation would produce a duplicate name, the RDBMS appends a number to the MLOG name. For example:

WSM_COPY_OP_RESOURCES creates MLOG$_WSM_COPY_OP_RESOURCE
WSM_COPY_OP_RESOURCE_INSTANCES creates MLOG$_WSM_COPY_OP_RESOURCE1
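Rather than deriving the truncated name by hand, you can confirm the actual MLOG name from the data dictionary; a small sketch:

select master, log_table
from   dba_snapshot_logs
where  master like 'WSM_COPY_OP_RESOURCE%';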

Snapshot or Materialized View

A snapshot (materialized view) is a table that reads the MLOG$ log tables when it is refreshed using fast (incremental) refresh. For a complete refresh, the snapshot pulls data directly from the base table. In both cases, the RDBMS handles the MLOG to remove unneeded rows.

ALL_SNAPSHOTS.QUERY will show the query for a particular snapshot. See SQL #16 -- Show All Information on a Snapshot in DBA_SNAPSHOTS for more information.

Simple Snapshot: A snapshot that queries rows from a single table. See How MLOGs Are Used During Data Collections for examples.

Complex Snapshot: A snapshot whose query joins multiple tables and therefore references multiple MLOG$ tables to gather rows into the snapshot. See How MLOGs Are Used During Data Collections for examples.

There are three refresh options:

  • Fast: An incremental refresh that processes only the changed rows recorded in the MLOG$ table.
  • Complete: Rebuilds the snapshot by pulling all rows directly from the base table.
  • Automatic: Exclusive to Refresh Collection Snapshots; truncates the MLOG$ table and then performs a complete refresh of the snapshot (see How Rows are Cleared from MLOG$ Tables).

How Rows are Cleared from MLOG$ Tables

It is standard RDBMS functionality to manage the rows in the MLOG tables. When a snapshot is refreshed, the rows from the MLOG$ table are deleted. However, if multiple snapshots reference the same MLOG$ table, then all of those snapshots must be refreshed before any rows can be deleted from the MLOG$ table. This is a primary cause of service requests logged with Oracle Support for issues with MLOGs.

When many rows are inserted into the MLOG on a regular basis, the deletes performed by the RDBMS do not recover space, and the MLOG can keep growing. Therefore, Data Collections designed an automatic refresh mode into Refresh Collection Snapshots, in which the MLOG is truncated and then a complete refresh of the snapshot is performed. This mode is exclusive to Data Collections and can only be used for the Data Collections snapshots. The basic steps are:

truncate table [owner].[MLOG$_name];

exec DBMS_MVIEW.REFRESH ('[owner].[snapshot_name]','C');

Note: Since Data Collections also uses a sequence, you cannot use DBMS_MVIEW.REFRESH with Data Collections snapshots unless you first initialize the sequence.

How MLOGs are Used During Data Collection

MLOGs are created when the setup requests for Data Collections are run. The MLOG is used when refreshing the snapshots in the Refresh Collection Snapshots process to gather information on the changed rows that need to be collected by the Planning Data Pull process into the MSC_ST staging tables.

The following is an example of a simple snapshot and the data progression during Data Collections that moves the data from the EBS source tables to the MSC tables for use in VCP applications:

  1. MTL_SYSTEM_ITEMS_B: The Inventory Items table.

  2. MLOG$_MTL_SYSTEM_ITEMS_B: Snapshot Log on the table.

  3. MTL_SYS_ITEMS_SN: Data Collections snapshot for this table.

  4. MRP_AP_SYS_ITEMS_V: View that references the snapshot through the synonym MRP_SN_SYS_ITEMS; used by the Planning Data Pull process to populate the staging table.

  5. MSC_ST_SYSTEM_ITEMS: The MSC staging table populated by the Planning Data Pull process.

  6. MSC_SYSTEM_ITEMS: The MSC base table loaded during ODS Load process.
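To see how the snapshot in such a chain is defined, you can review its query in the data dictionary (see also SQL #16 -- Show All Information on a Snapshot in DBA_SNAPSHOTS). A small sketch for the item snapshot above:

select query
from   all_snapshots
where  name = 'MTL_SYS_ITEMS_SN';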

An example of a complex snapshot is one whose query joins several base tables, such as the WIP resource snapshots discussed in Most Common Performance Problems; each base table in the join contributes changes through its own MLOG$ table.

Managing the MLOG Growth for VCP Data Collections

Primary Strategy -- Using Automatic Refresh

Keeping MLOG size manageable is the first requirement for performance. Run Refresh Collection Snapshots in automatic refresh mode, so that the MLOG$ tables are truncated and the snapshots completely refreshed, rather than letting the logs grow between collections.

If the automatic refresh does not resolve your performance issues with Refresh Collection Snapshots, we recommend the advanced strategy for handling MLOGs.

Advanced Strategy for Handling MLOGs

This strategy works for any of the MLOG entities used for Data Collections. For specific problems, see Most Common Performance Problems.

When users are not updating the affected base tables, execute the following steps:

  1. Get count(*) for the MLOGs affected.

    select count(*) from [MLOG_name];
  2. Truncate the MLOG(s) for the snapshot(s).

    truncate table [owner].[MLOG$_name];

    For BOM, WIP, WSM, MTL, or WSH, you need to do this for all MLOGs used for Data Collections.

  3. Gather Statistics on the MLOG(s) while the MLOG has ZERO rows. From the System Administrator responsibility, run Gather Table Statistics with the following parameters:

    • Table Owner: [OWNER]

    • Table Name: [MLOG$_name]

    • All other parameters: Use the default. Alternatively, in SQL*Plus, execute the following command:

      begin FND_STATS.GATHER_TABLE_STATS ( '[owner]','[MLOG$_name]', 10 ); end;
  4. Confirm that MLOGs have zero rows by checking DBA_TABLES information for those MLOGs. Run SQL #21 -- Size, NUM_ROWS and LAST_ANALYZED for MLOGs Used in Data Collections.

    The MLOGs will have more than zero rows only if some user or process is updating the base tables. If this happens, you must find the user or process that is updating the tables and suspend it.

  5. Lock Statistics on the MLOG(s) while the MLOG has zero rows:

    EXECUTE DBMS_STATS.LOCK_TABLE_STATS ('[owner]','[MLOG_name]');
  6. Set up the database parameter so that statistics are not used on the MLOG. As the APPS user in SQL*Plus, set the database parameter _mv_refresh_use_stats = FALSE:

    alter system set "_mv_refresh_use_stats"=FALSE;

    The DBA can set this parameter in the init.ora file so that it is used at all times.

  7. Run Refresh Collection Snapshots against the snapshot(s) using a complete refresh. For BOM, WIP, WSM, MTL, or WSH, you must do this for all snapshots used for Data Collections; for other products, you may refresh individual snapshots.

  8. Gather table statistics for the snapshots. Use the same steps as Step 3, substituting the Snapshot_Name for the MLOG$_name as the Table Name. In the latest code, the snapshots are owned by APPS and not the application. Confirm using the SQL below:

    Select * from dba_snapshots where name like '&snapshot_name';
  9. Run Data Collections with targeted refresh with only Planners = Yes. This is a setup step for certain code improvements that help overall data collections performance. This step must be run for all the organizations being collected. Do not use a Collection Group for this run. To run in targeted mode, set the Planning Data Pull parameter Purge Previously Collected Data to 'No' and then the targeted collection method becomes available.

  10. Run Data Collections with a complete refresh. This step should show the performance improvements.

  11. If any other snapshots for other products also reference the MLOG being truncated, then refresh those snapshots after every run of Data Collections to keep the MLOGs from growing again.

    Use SQL #15 -- Use Base Table to Find All Related Snapshots to find all the snapshots that are using the MLOG table for a single base table. Review Appendix D -- Snapshots to note any snapshot not seen in the list.

    Note: If the production instance does not allow downtime to execute these steps, and if the MLOG$ has zero rows in both TEST and PROD, then Steps 1 - 3 may be run on a test instance and the statistics imported to the production instance (a sketch of the statistics export/import follows this list). Import the stats to production during a slow period when you can complete the rest of these steps: truncate the MLOGs in production, then start at Step #3 and continue the steps to completion.
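The statistics transfer mentioned in the note can be done with DBMS_STATS. A minimal sketch, using MLOG$_OE_ORDER_LINES_ALL as an illustration; the staging table name MLOG_STATS is hypothetical, and moving the table between instances (for example, with Data Pump) is left to the DBA:

-- On the test instance: stage the statistics gathered while the MLOG had zero rows
exec DBMS_STATS.CREATE_STAT_TABLE ('APPS','MLOG_STATS');
exec DBMS_STATS.EXPORT_TABLE_STATS ('ONT','MLOG$_OE_ORDER_LINES_ALL',NULL,'MLOG_STATS',NULL,TRUE,'APPS');

-- On the production instance, after moving the MLOG_STATS table:
exec DBMS_STATS.IMPORT_TABLE_STATS ('ONT','MLOG$_OE_ORDER_LINES_ALL',NULL,'MLOG_STATS',NULL,TRUE,'APPS');

Remember to lock the imported statistics (Step 5) before continuing.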

Most Common Performance Problems

The most common performance problem scenarios are:

If you don't know which snapshots are causing the problem, you can fix OE, WIP, and BOM all at one time, or you can trace the Refresh Collection Snapshots process (short name: MSRFWOR). The best practice is to use a Level 8 trace per the setups in My Oracle Support Note #245974.1, steps #7 or #11. Alternatively, set up a simple Level 1 trace by checking the Enable Trace checkbox: from the System Administrator responsibility, navigate to Concurrent, then Program, and then Define. In either case, the DBA must also run:

alter system set max_dump_file_size = 'unlimited';

Then use the Oracle DBA utility TKPROF to format the trace file into a more readable format for performance analysis:

tkprof trace_file_name.trc tkprof_file_name.txt sys=no explain=apps/[apps_pw] sort='(prsela,exeela,fchela)' print=20

Ignore the first statement in the TKPROF: it is for the total refresh process that refreshed all the snapshots.

BEGIN MRP_CL_REFRESH_SNAPSHOT.REFRESH_SNAPSHOT(:errbuf,:rc,:A0,:A1,:A2,:A3);
END;

Check the next statement(s) in the TKPROF output to determine which snapshots are involved. For example, the first line of the second statement in this TKPROF example shows that WIP snapshots are involved:

INSERT INTO "WIP"."WIP_WOPR_RESS_SN" SELECT /*+ NO_MERGE("JV$") */ /*+ */

When you have determined the problem snapshot(s) or MLOG(s) involved, execute the steps below to resolve the issue.

Performance Issue with OE_ODR_LINES_SN

Perform these steps when other processes are not updating the table OE_ORDER_LINES_ALL:

  1. Suspend workflow processes. Some customers have workflow processes that update order lines at a rate of several thousand lines per hour.

  2. Get Count on MLOG$_OE_ORDER_LINES_ALL:

    select count(*) from MLOG$_OE_ORDER_LINES_ALL;
  3. Truncate the MLOG$ table:

    truncate table ONT.MLOG$_OE_ORDER_LINES_ALL;
  4. Confirm ZERO rows in the MLOG:

    select count(*) from MLOG$_OE_ORDER_LINES_ALL;
  5. Gather table stats on the MLOG. From the System Administrator responsibility, run the single request Gather Table Statistics with the following parameters:

    • OWNER: ONT

    • TABLE: MLOG$_OE_ORDER_LINES_ALL

    Use the default for all the other parameters or in SQL*Plus, execute:

    begin FND_STATS.GATHER_TABLE_STATS ( 'ONT','MLOG$_OE_ORDER_LINES_ALL', 10 ); end;
  6. Confirm that num_rows in dba_tables for the MLOG table = 0:

    select table_name, num_rows, last_analyzed from dba_tables
    where table_name in ('MLOG$_OE_ORDER_LINES_ALL');
  7. Lock the Statistics using the following command:

    EXECUTE DBMS_STATS.LOCK_TABLE_STATS ('ONT','MLOG$_OE_ORDER_LINES_ALL');
  8. As the APPS user in SQL*Plus, set the database parameter _mv_refresh_use_stats = FALSE:

    alter system set "_mv_refresh_use_stats"=FALSE;

    The DBA can set this parameter in the init.ora file so that it is used at all times.

  9. Run Refresh Collection Snapshots as a standalone request with the following parameters:

    • Refresh Mode: Complete

    • Snapshot Name: OE_ODR_LINES_SN

    • Other parameters: Use default

  10. Gather table statistics on the snapshot. From the System Administrator responsibility, run the single request Gather Table Statistics with the following parameters:

    • Table Owner: ONT [or APPS in the latest code].

    • Table Name: OE_ODR_LINES_SN. You can check for the latest code using:

      Select * from dba_snapshots where name like '&snapshot_name';
    • All other parameters: Use the default.

    Alternatively, in SQL*Plus, execute the following:

    begin FND_STATS.GATHER_TABLE_STATS ( 'ONT [or APPS]','OE_ODR_LINES_SN', 10 ); end;
  11. Run Data Collections with Targeted Refresh with only Planners = Yes. This is a setup step for certain code improvements that help with overall data collections performance. This step must be run for all the organizations being collected. Do not use a Collection Group for this run. To run in targeted mode, set the Planning Data Pull parameter Purge Previously Collected Data to 'No' and then choose the targeted collection mode.

  12. Run Data Collections with either a complete or targeted refresh with only the Sales Orders parameter set to 'Yes'.

    Note: The Sales Order entity is unique. When complete refresh is specified for Data Collections, we default the Sales Order parameter to 'No' and run the net change refresh of sales orders automatically for performance reasons.

  13. Observe that during the run of Data Collections, the performance is improved.
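After completing these steps, you can confirm that the MLOG statistics still show zero rows and remain locked. A small verification sketch using DBA_TAB_STATISTICS:

select owner, table_name, num_rows, stattype_locked, last_analyzed
from   dba_tab_statistics
where  table_name = 'MLOG$_OE_ORDER_LINES_ALL';

STATTYPE_LOCKED should show ALL for a table whose statistics are locked.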

Performance Issue with WIP Snapshots

Use the following steps when other processes are not updating the WIP tables:

  1. Get Count on MLOG$_WIP tables:

    select count(*) from MLOG$_WIP_DISCRETE_JOBS;
    select count(*) from MLOG$_WIP_FLOW_SCHEDULES;
    select count(*) from MLOG$_WIP_LINES;
    select count(*) from MLOG$_WIP_OP_RESOURCE_INST;
    select count(*) from MLOG$_WIP_OPERATION_NETWOR;
    select count(*) from MLOG$_WIP_OPERATION_RESOUR;
    select count(*) from MLOG$_WIP_OPERATIONS;
    select count(*) from MLOG$_WIP_REPETITIVE_ITEMS;
    select count(*) from MLOG$_WIP_REPETITIVE_SCHED;
    select count(*) from MLOG$_WIP_REQUIREMENT_OPER;
    select count(*) from MLOG$_WIP_SUB_OPERATION_RE;
  2. Truncate the MLOG$ tables:

    truncate table WIP.MLOG$_WIP_DISCRETE_JOBS;
    truncate table WIP.MLOG$_WIP_FLOW_SCHEDULES;
    truncate table WIP.MLOG$_WIP_LINES;
    truncate table WIP.MLOG$_WIP_OP_RESOURCE_INST;
    truncate table WIP.MLOG$_WIP_OPERATION_NETWOR;
    truncate table WIP.MLOG$_WIP_OPERATION_RESOUR;
    truncate table WIP.MLOG$_WIP_OPERATIONS;
    truncate table WIP.MLOG$_WIP_REPETITIVE_ITEMS;
    truncate table WIP.MLOG$_WIP_REPETITIVE_SCHED;
    truncate table WIP.MLOG$_WIP_REQUIREMENT_OPER;
    truncate table WIP.MLOG$_WIP_SUB_OPERATION_RE;
  3. Confirm zero rows in the MLOGs:

    select count(*) from MLOG$_WIP_DISCRETE_JOBS;
    select count(*) from MLOG$_WIP_FLOW_SCHEDULES;
    select count(*) from MLOG$_WIP_LINES;
    select count(*) from MLOG$_WIP_OP_RESOURCE_INST;
    select count(*) from MLOG$_WIP_OPERATION_NETWOR;
    select count(*) from MLOG$_WIP_OPERATION_RESOUR;
    select count(*) from MLOG$_WIP_OPERATIONS;
    select count(*) from MLOG$_WIP_REPETITIVE_ITEMS;
    select count(*) from MLOG$_WIP_REPETITIVE_SCHED;
    select count(*) from MLOG$_WIP_REQUIREMENT_OPER;
    select count(*) from MLOG$_WIP_SUB_OPERATION_RE;
  4. Gather table stats on the MLOGs. From the System Administrator responsibility, run the single request Gather Table Statistics with the following parameters:

    • Table Owner: WIP

    • Table Name: MLOG$_WIP_DISCRETE_JOBS

    • Other parameters: Use defaults.

    Alternatively, in SQL*Plus, run:

    begin FND_STATS.GATHER_TABLE_STATS ( 'WIP','MLOG$_WIP_DISCRETE_JOBS', 10 ); end;
  5. Repeat Step 4 for the complete list of MLOG tables:

    MLOG$_WIP_FLOW_SCHEDULES
    MLOG$_WIP_LINES
    MLOG$_WIP_OP_RESOURCE_INST
    MLOG$_WIP_OPERATION_NETWOR
    MLOG$_WIP_OPERATION_RESOUR
    MLOG$_WIP_OPERATIONS
    MLOG$_WIP_REPETITIVE_ITEMS
    MLOG$_WIP_REPETITIVE_SCHED
    MLOG$_WIP_REQUIREMENT_OPER
    MLOG$_WIP_SUB_OPERATION_RE
  6. Confirm that num_rows in dba_tables for the MLOG table = 0:

    select table_name, num_rows, last_analyzed from dba_tables
    where table_name like 'MLOG$_WIP%';
  7. Lock the Statistics. (A consolidated PL/SQL sketch covering Steps 4, 5, and 7 appears after these steps.)

    EXECUTE DBMS_STATS.LOCK_TABLE_STATS ('WIP','MLOG$_WIP_DISCRETE_JOBS');
    EXECUTE DBMS_STATS.LOCK_TABLE_STATS ('WIP','MLOG$_WIP_FLOW_SCHEDULES');
    EXECUTE DBMS_STATS.LOCK_TABLE_STATS ('WIP','MLOG$_WIP_LINES');
    EXECUTE DBMS_STATS.LOCK_TABLE_STATS ('WIP','MLOG$_WIP_OP_RESOURCE_INST');
    EXECUTE DBMS_STATS.LOCK_TABLE_STATS ('WIP','MLOG$_WIP_OPERATION_NETWOR');
    EXECUTE DBMS_STATS.LOCK_TABLE_STATS ('WIP','MLOG$_WIP_OPERATION_RESOUR');
    EXECUTE DBMS_STATS.LOCK_TABLE_STATS ('WIP','MLOG$_WIP_OPERATIONS');
    EXECUTE DBMS_STATS.LOCK_TABLE_STATS ('WIP','MLOG$_WIP_REPETITIVE_ITEMS');
    EXECUTE DBMS_STATS.LOCK_TABLE_STATS ('WIP','MLOG$_WIP_REPETITIVE_SCHED');
    EXECUTE DBMS_STATS.LOCK_TABLE_STATS ('WIP','MLOG$_WIP_REQUIREMENT_OPER');
    EXECUTE DBMS_STATS.LOCK_TABLE_STATS ('WIP','MLOG$_WIP_SUB_OPERATION_RE');
  8. As the APPS user in SQL*Plus, set the database parameter _mv_refresh_use_stats = FALSE:

    alter system set "_mv_refresh_use_stats"=FALSE;

    The DBA can set this parameter in the init.ora file so that it is used at all times.

  9. Run Refresh Collection Snapshots as a standalone request with parameters:

    • Refresh Mode: Complete

    • Snapshot Name: WIP_DSCR_JOBS_SN

    • All other parameters: Use default.

  10. Run Refresh Collection Snapshots for the rest of the WIP snapshots:

    WIP_FLOW_SCHDS_SN
    WIP_WLINES_SN
    WIP_OPR_RES_INSTS_SN
    WIP_WOPR_NETWORKS_SN
    WIP_WOPR_RESS_SN
    WIP_WOPRS_SN
    WIP_REPT_ITEMS_SN
    WIP_REPT_SCHDS_SN
    WIP_WREQ_OPRS_SN
    WIP_WOPR_SUB_RESS_SN
  11. Gather table stats on the snapshot. From the System Administrator responsibility, run the standalone request Gather Table Statistics with the following parameters:

    • Table Owner: WIP [or APPS in the latest code].

    • Table Name: WIP_DSCR_JOBS_SN

      You can check the name using:

      Select * from dba_snapshots where name like '&snapshot_name';
    • All other parameters: Use default.

    Alternatively, in SQL*Plus, execute:

    begin FND_STATS.GATHER_TABLE_STATS ( 'WIP [or APPS]','WIP_DSCR_JOBS_SN', 10 ); end;
  12. Repeat Step #11 for the entire list of WIP snapshots shown in Step #10.

  13. Run Data Collections with a targeted refresh with only Planners = Yes. This is a setup step for certain code improvements that help overall data collections performance. This must be run for all the organizations being collected; do not use a Collection Group for this run. To run targeted mode, set Planning Data Pull parameter Purge Previously Collected Data to 'No' and then the targeted collection mode becomes available.

  14. Run Data Collections with a targeted refresh with only the Work in Process parameter set to 'Yes', or run Data Collections with a complete refresh and other parameters at their defaults (or your normal Collections setup for a complete refresh).

    Observe that during the run of Data Collections, the performance is improved.
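Steps 4, 5, and 7 above repeat the same commands for each WIP MLOG. If you prefer, the gather and lock can be consolidated into a short PL/SQL block; a minimal sketch, to be run while the MLOGs still have zero rows:

declare
  -- The WIP MLOG$ tables listed in Step 1
  type t_names is table of varchar2(30);
  l_mlogs t_names := t_names(
    'MLOG$_WIP_DISCRETE_JOBS','MLOG$_WIP_FLOW_SCHEDULES','MLOG$_WIP_LINES',
    'MLOG$_WIP_OP_RESOURCE_INST','MLOG$_WIP_OPERATION_NETWOR','MLOG$_WIP_OPERATION_RESOUR',
    'MLOG$_WIP_OPERATIONS','MLOG$_WIP_REPETITIVE_ITEMS','MLOG$_WIP_REPETITIVE_SCHED',
    'MLOG$_WIP_REQUIREMENT_OPER','MLOG$_WIP_SUB_OPERATION_RE');
begin
  for i in 1 .. l_mlogs.count loop
    FND_STATS.GATHER_TABLE_STATS ('WIP', l_mlogs(i), 10);  -- gather at zero rows (Steps 4-5)
    DBMS_STATS.LOCK_TABLE_STATS ('WIP', l_mlogs(i));       -- lock the statistics (Step 7)
  end loop;
end;
/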

Performance Issue with BOM Snapshots

Warning: BOM snapshots are used by multiple products (MSC - Data Collections and ENI - Product Intelligence), so additional steps are involved in cleaning up the MLOGs.

Perform these steps when other processes are not updating the BOM tables:

  1. Determine which snapshots are involved. The BOM MLOGs may have multiple snapshots from different products accessing the same MLOG. For example, ENI - Product Intelligence installs snapshots such as:

    ENI_DBI_BOM_COMPONENTS_MV1
    ENI_DBI_BOM_COMPONENTS_MV2
    ENI_DBI_MFG_STEPS_JOIN_MV
    ENI_RES_1_MV

    You may observe that two or more of these snapshots exist on the system, or that other snapshots exist as well. Therefore, after fixing the MLOGs manually, you must in the future run Product Intelligence or manually refresh the ENI snapshots after running Data Collections to prevent the MLOGs from growing out of control again.

    Run the following to check which snapshots exist on the system:

    SELECT DISTINCT
    amdr.MVIEW_NAME "Snapshot",
    amdr.OWNER "Snapshot Owner",
    amdr.DETAILOBJ_NAME "Base Table Name",
    amdr.DETAILOBJ_OWNER "Base Table Owner",
    log_table mlog$_name
    FROM ALL_MVIEW_DETAIL_RELATIONS amdr,
    dba_snapshot_logs dsl
    where DETAILOBJ_NAME like 'BOM%'
    and DETAILOBJ_TYPE = 'TABLE'
    and (detailobj_owner, detailobj_name) not in (('MSC','MSC_COLLECTED_ORGS'))
    AND amdr.DETAILOBJ_NAME = dsl.MASTER
    order by mview_name;

    All BOM%SN objects are used for Data Collections. Multiple rows returned for a snapshot indicate that it is a complex snapshot referencing multiple tables; this is normal. In this case, the Base Table Name and MLOG$_NAME columns show the different tables being used for the snapshot.

    If you want more information on the snapshots, you can review the snapshot query by running the following SQL:

    select * from all_snapshots where name like '&snapshot_name';

    and then review the column 'Query'.

  2. Get Count on MLOGs:

    select count(*) from MLOG$_BOM_COMPONENTS_B;
    select count(*) from MLOG$_BOM_CTO_ORDER_DEMAND;
    select count(*) from MLOG$_BOM_DEPARTMENT_RESOU;
    select count(*) from MLOG$_BOM_DEPARTMENTS;
    select count(*) from MLOG$_BOM_OPERATION_NETWOR;
    select count(*) from MLOG$_BOM_OPERATION_RESOUR;
    select count(*) from MLOG$_BOM_OPERATION_SEQUEN;
    select count(*) from MLOG$_BOM_OPERATIONAL_ROUT;
    select count(*) from MLOG$_BOM_RES_INSTANCE_CHA;
    select count(*) from MLOG$_BOM_RESOURCE_CHANGES;
    select count(*) from MLOG$_BOM_RESOURCES;
    select count(*) from MLOG$_BOM_STRUCTURES_B;
    select count(*) from MLOG$_BOM_SUB_OPERATION_RE;
    select count(*) from MLOG$_BOM_SUBSTITUTE_COMPO;
  3. Truncate the MLOG$ tables:

    truncate table BOM.MLOG$_BOM_COMPONENTS_B;
    truncate table BOM.MLOG$_BOM_CTO_ORDER_DEMAND;
    truncate table BOM.MLOG$_BOM_DEPARTMENT_RESOU;
    truncate table BOM.MLOG$_BOM_DEPARTMENTS;
    truncate table BOM.MLOG$_BOM_OPERATION_NETWOR;
    truncate table BOM.MLOG$_BOM_OPERATION_RESOUR;
    truncate table BOM.MLOG$_BOM_OPERATION_SEQUEN;
    truncate table BOM.MLOG$_BOM_OPERATIONAL_ROUT;
    truncate table BOM.MLOG$_BOM_RES_INSTANCE_CHA;
    truncate table BOM.MLOG$_BOM_RESOURCE_CHANGES;
    truncate table BOM.MLOG$_BOM_RESOURCES;
    truncate table BOM.MLOG$_BOM_STRUCTURES_B;
    truncate table BOM.MLOG$_BOM_SUB_OPERATION_RE;
    truncate table BOM.MLOG$_BOM_SUBSTITUTE_COMPO;
  4. Confirm zero rows in the MLOG:

    select count(*) from MLOG$_BOM_COMPONENTS_B;
    select count(*) from MLOG$_BOM_CTO_ORDER_DEMAND;
    select count(*) from MLOG$_BOM_DEPARTMENT_RESOU;
    select count(*) from MLOG$_BOM_DEPARTMENTS;
    select count(*) from MLOG$_BOM_OPERATION_NETWOR;
    select count(*) from MLOG$_BOM_OPERATION_RESOUR;
    select count(*) from MLOG$_BOM_OPERATION_SEQUEN;
    select count(*) from MLOG$_BOM_OPERATIONAL_ROUT;
    select count(*) from MLOG$_BOM_RES_INSTANCE_CHA;
    select count(*) from MLOG$_BOM_RESOURCE_CHANGES;
    select count(*) from MLOG$_BOM_RESOURCES;
    select count(*) from MLOG$_BOM_STRUCTURES_B;
    select count(*) from MLOG$_BOM_SUB_OPERATION_RE;
    select count(*) from MLOG$_BOM_SUBSTITUTE_COMPO;
  5. Gather table statistics on the MLOG. From the System Administrator responsibility, run the single request Gather Table Statistics with the following parameters:

    • Table Owner: BOM

    • Table Name: MLOG$_BOM_COMPONENTS_B

    • Other parameters: Use default.

      Alternatively, in SQL*Plus run:

      begin FND_STATS.GATHER_TABLE_STATS ( 'BOM','MLOG$_BOM_COMPONENTS_B', 10 ); end;
  6. Repeat Step #5 for the complete list of MLOG tables:

    MLOG$_BOM_CTO_ORDER_DEMAND
    MLOG$_BOM_DEPARTMENT_RESOU
    MLOG$_BOM_DEPARTMENTS
    MLOG$_BOM_OPERATION_NETWOR
    MLOG$_BOM_OPERATION_RESOUR
    MLOG$_BOM_OPERATION_SEQUEN
    MLOG$_BOM_OPERATIONAL_ROUT
    MLOG$_BOM_RES_INSTANCE_CHA
    MLOG$_BOM_RESOURCE_CHANGES
    MLOG$_BOM_RESOURCES
    MLOG$_BOM_STRUCTURES_B
    MLOG$_BOM_SUB_OPERATION_RE
    MLOG$_BOM_SUBSTITUTE_COMPO
  7. Confirm that num_rows in dba_tables for the MLOG table = 0:

    select table_name, num_rows, last_analyzed from dba_tables
    where table_name like 'MLOG$_BOM%';
  8. Lock the Statistics.

    EXECUTE DBMS_STATS.LOCK_TABLE_STATS ('BOM','MLOG$_BOM_COMPONENTS_B');
    EXECUTE DBMS_STATS.LOCK_TABLE_STATS ('BOM','MLOG$_BOM_CTO_ORDER_DEMAND');
    EXECUTE DBMS_STATS.LOCK_TABLE_STATS ('BOM','MLOG$_BOM_DEPARTMENT_RESOU');
    EXECUTE DBMS_STATS.LOCK_TABLE_STATS ('BOM','MLOG$_BOM_DEPARTMENTS');
    EXECUTE DBMS_STATS.LOCK_TABLE_STATS ('BOM','MLOG$_BOM_OPERATION_NETWOR');
    EXECUTE DBMS_STATS.LOCK_TABLE_STATS ('BOM','MLOG$_BOM_OPERATION_RESOUR');
    EXECUTE DBMS_STATS.LOCK_TABLE_STATS ('BOM','MLOG$_BOM_OPERATION_SEQUEN');
    EXECUTE DBMS_STATS.LOCK_TABLE_STATS ('BOM','MLOG$_BOM_OPERATIONAL_ROUT');
    EXECUTE DBMS_STATS.LOCK_TABLE_STATS ('BOM','MLOG$_BOM_RES_INSTANCE_CHA');
    EXECUTE DBMS_STATS.LOCK_TABLE_STATS ('BOM','MLOG$_BOM_RESOURCE_CHANGES');
    EXECUTE DBMS_STATS.LOCK_TABLE_STATS ('BOM','MLOG$_BOM_RESOURCES');
    EXECUTE DBMS_STATS.LOCK_TABLE_STATS ('BOM','MLOG$_BOM_STRUCTURES_B');
    EXECUTE DBMS_STATS.LOCK_TABLE_STATS ('BOM','MLOG$_BOM_SUB_OPERATION_RE');
    EXECUTE DBMS_STATS.LOCK_TABLE_STATS ('BOM','MLOG$_BOM_SUBSTITUTE_COMPO');
  9. As APPS user in SQL*Plus, set the database parameter _mv_refresh_use_stats = FALSE:

    alter system set "_mv_refresh_use_stats"=FALSE;

    The DBA should set this parameter in the init.ora file so that it is used at all times.

  10. Run Refresh Collection Snapshots as a standalone request with the following parameters:

    • Refresh Mode: Complete

    • Snapshot Name: BOM_BOMS_SN

    • Other parameters: Use defaults.

  11. Repeat Step #10 for all BOM snapshots:

    BOM_CTO_ORDER_DMD_SN
    BOM_INV_COMPS_SN
    BOM_OPR_NETWORKS_SN
    BOM_OPR_RESS_SN
    BOM_OPR_RTNS_SN
    BOM_OPR_SEQS_SN
    BOM_RES_CHNGS_SN
    BOM_RES_INST_CHNGS_SN
    BOM_SUB_COMPS_SN
    BOM_SUB_OPR_RESS_SN
  12. Gather table statistics on the snapshot. From the System Administrator, run the single request Gather Table Statistics with the following parameters:

    • Table Owner: BOM [or APPS in the latest code]. Check using:

      select * from all_snapshots where name like '&snapshot_name';
    • Table Name: BOM_BOMS_SN

    • Other parameters: Use default

    Alternatively, in SQL*Plus execute:

    begin FND_STATS.GATHER_TABLE_STATS ( 'BOM [or APPS]','BOM_BOMS_SN', 10 ); end;
  13. Repeat Step #12 for the entire list of BOM snapshots shown in Step #11.

  14. Run Data Collections with a targeted refresh with only Planners = Yes. This is a setup step for certain code improvements that help overall Data Collections performance. This must be run for all the organizations being collected; do not use a Collection Group for this run. To run in targeted mode, set Planning Data Pull parameter Purge Previously Collected Data to 'No' and then Collection Method 'Targeted' becomes available.

  15. Run Data Collections with a targeted refresh with only Bills of Material, Routing, Resources parameter set to 'Yes'. Alternatively, run Data Collections with a complete refresh and all other parameters set to default (or your normal Collections setup for complete refresh).

    Observe that during the run of Data Collections, the performance is improved.
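Before moving on, you can confirm that all the BOM MLOG statistics show zero rows and are locked; a small verification sketch:

select table_name, num_rows, stattype_locked, last_analyzed
from   dba_tab_statistics
where  table_name like 'MLOG$_BOM%';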

Important: To ensure the problem does not occur again, do the following:

Assuming that you are not currently using Product Intelligence (application short name ENI), you need to refresh the snapshots NOT used by Data Collections after each collections run. You may or may not have all the ENI snapshots listed in Step #1 above, but each snapshot you do have must be refreshed. You can use the following SQL to run a complete refresh of the snapshots:

exec DBMS_MVIEW.REFRESH ('[Snapshot_owner].[snapshot_name]','C');

Examples:

exec DBMS_MVIEW.REFRESH ('APPS.ENI_RES_1_MV','C');
exec DBMS_MVIEW.REFRESH ('APPS.ENI_DBI_MFG_STEPS_JOIN_MV','C');
exec DBMS_MVIEW.REFRESH ('APPS.ENI_DBI_BOM_COMPONENTS_MV1','C');
exec DBMS_MVIEW.REFRESH ('APPS.ENI_DBI_BOM_COMPONENTS_MV2','C');

This can be set up by the DBA to run as a cron job. Alternatively, save the commands in a .sql file and create a custom concurrent request to run the file. This is the same type of setup used by the request Create BOM Snapshots, which runs the file MSCBOMSN.sql stored under $MSC_TOP/sql; that request can serve as a model for the new concurrent request defined by your System Administrator.
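A minimal sketch of the cron approach follows, assuming the refresh commands above have been saved to a .sql file; the file name, path, and schedule are hypothetical and should be adjusted so the job runs only after Data Collections completes:

# Hypothetical crontab entry: refresh the ENI snapshots daily at 05:00
0 5 * * * sqlplus -s apps/"$APPS_PW" @/home/oracle/scripts/refresh_eni_mvs.sql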

MLOG$_ENI_OLTP_ITEM_STAR is Growing Too Large, Can it Be Truncated?

The snapshot log MLOG$_ENI_OLTP_ITEM_STAR has a large number of records and is growing daily. Is it possible to purge this log?

First, check to see if you have Daily Business Intelligence (DBI) by running the following query on the database:

SQL> SELECT patch_level
FROM fnd_product_installations
WHERE patch_level LIKE '%BIS%'
/

If no rows are returned, no BIS products (which include DBI) are installed. If the query returns 11i.BIS.K with E-Business Suite 11.5.10, it is likely that you are not using DBI, since this is the base level for 11.5.10; DBI users should be on a higher patchset level. Check with your users in case they are running a lower patchset level than Oracle recommends, and upgrade DBI to the latest patchset level at the earliest opportunity.

If you are not using DBI (Oracle Product Intelligence), you can truncate the snapshot log and then drop it by running the following commands:

TRUNCATE TABLE ENI.MLOG$_ENI_OLTP_ITEM_STAR;
DROP SNAPSHOT LOG ON ENI.ENI_OLTP_ITEM_STAR;

The ENI_OLTP_ITEM_STAR table is still maintained, but the MLOG$ is no longer populated. If you prefer to keep the snapshot log object (for whatever reason), you can truncate the snapshot log instead of dropping it, but you must then execute the truncate command on a regular basis so the log does not grow too large.

The MLOG$ is only there to support fast refresh for the materialized views based on this table. In the absence of this, the materialized views can always be fully refreshed. By dropping the MLOG$, you are not losing any information. Additionally, if you are not using DBI, then you can truncate the two associated materialized views below:

Loading Large Amounts of Data for Implementation

Option 1 -- Periodic Large Data Loads

  1. Import the data.

  2. Truncate the MLOG table using the automatic refresh of the snapshot(s).

  3. Run Data Collections in targeted mode for that entity, or do a normal run of Data Collections with a complete refresh.

Option 2 -- Large Data Load During Implementation

  1. Ensure that you have a lot of space for TEMP and UNDO and for your application tablespaces.

  2. Run Data Collections for all the orgs you are going to use in Value Chain Planning. Do NOT use Org Groups to try to load partial data; that actually takes more TEMP space than loading all the data at one time, due to the technology used for handling Org Groups and TEMP tables.

  3. Prepare to run the initial Data Collections with a very large timeout parameter. Values between 900 and 1600 are not unusual for large amounts of data when loading empty MSC tables.

  4. Once the first run of Data Collections is complete, run Gather Schema Stats on the MSC schema (a sketch follows this list). Follow-up runs of Data Collections should then run fine, and performance should be much better.

  5. If loading heavy BOM or WIP data for the initial load, see Advanced Strategy for Handling MLOGs and implement it immediately if you encounter any issues with Refresh Collection Snapshots during the initial collections.
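For step 4, the MSC schema statistics can be gathered from the standard Gather Schema Statistics concurrent request, or directly in SQL*Plus with FND_STATS; a minimal sketch, using the same 10 percent estimate as the table-level examples in this chapter:

begin
  FND_STATS.GATHER_SCHEMA_STATS ('MSC', 10);  -- gather statistics for the entire MSC schema
end;
/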