2 Discovery Issues

This chapter describes the issues encountered during the Discovery process for the Oracle Configuration Manager (OCM).

2.1 How the Oracle Database Target Appears in My Oracle Support via OCM

This section explains the logic flow that takes place when a Database target is collected by the standalone Oracle Configuration Manager (OCM) collector.

It is important to ensure that Oracle Database targets are discovered and consistently updated in My Oracle Support (MOS) so that users can validate, review, and perform further proactive actions in MOS: for example, gathering patch advice, creating Service Requests, and the other administrative functions currently available in MOS for a Database target.

As an OCM and MOS user, it is equally important to understand how a Database target gets populated into MOS and, in some cases, how to troubleshoot when a Database target's information either goes 'stale' (the LAST_COLLECTED column is not updated) in MOS or the target simply no longer appears at all.

2.1.1 Database Instrumentation for OCM Data Collection

The Database instrumentation for OCM data collection includes several phases:

  • Verifying that the database is in OPEN READ WRITE status.

  • Verifying that the database version is supported (8.1.7 or later).

  • Checking that the utl_file_dir database initialization parameter contains the appropriate directory (the state directory where the output from the instrumentation will go), in versions 9.0.1 and earlier.

  • Checking that the 'compatible' database initialization parameter is 8.1 or higher, in version 8.1.7.

  • Checking that the 'job_queue_processes' database initialization parameter is greater than zero, in versions earlier than 10.0.

  • Dropping any existing OCM user and its artifacts, if present.

Note:

If the Database version is 11.0 or greater, the Access Control List (ACL) created for OCM to use (oracle-sysman-ocm-Resolve-Access.xml) is dropped, or the ORACLE_OCM user's privilege to use another ACL for access to the UTL_INADDR package is removed.
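As a hedged sketch of how that state can be inspected and removed by hand (the views and the DBMS_NETWORK_ACL_ADMIN package are standard 11g Oracle APIs, but the exact calls the instrumentation issues are not shown in this text):

```sql
-- Which ACLs currently grant ORACLE_OCM network privileges?
SELECT acl, principal, privilege
  FROM dba_network_acl_privileges
 WHERE principal = 'ORACLE_OCM';

-- Drop the OCM-specific ACL, if present (file name as given above):
BEGIN
  DBMS_NETWORK_ACL_ADMIN.DROP_ACL(
    acl => 'oracle-sysman-ocm-Resolve-Access.xml');
  COMMIT;
END;
/
```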

2.1.2 What We Execute

If the ORACLE_OCM.MGMT_DB_LL_METRICS PL/SQL package exists, the ORACLE_OCM user is dropped (drop user ORACLE_OCM cascade).

If ORACLE_OCM_CONFIG_DIR or ORACLE_OCM_CONFIG_DIR2 directory objects exist, they are dropped.
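The cleanup described above amounts to the following DDL. This is a sketch assuming a SYSDBA session; the instrumentation scripts perform the equivalent with existence checks first:

```sql
-- Sketch of the cleanup of prior OCM artifacts (run as SYSDBA).
DROP USER ORACLE_OCM CASCADE;
DROP DIRECTORY ORACLE_OCM_CONFIG_DIR;
DROP DIRECTORY ORACLE_OCM_CONFIG_DIR2;
```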

Then, we grant:

  • EXECUTE on SYS.UTL_FILE

  • EXECUTE on SYS.UTL_INADDR

  • EXECUTE on SYS.DBMS_SQL

  • EXECUTE on SYS.DBMS_JOB (pre-10.0)

  • EXECUTE on SYS.DBMS_FLASHBACK (pre-10.0)

  • SELECT ANY on lbacsys.lbac$polt (pre-10.0)

  • SELECT ANY on odm.odm_mining_model (pre-10.0)

  • SELECT ANY on olapsys.dba$olap_cubes (pre-10.0)

  • EXECUTE on SYS.DBMS_SCHEDULER (10.0 +)

  • CREATE JOB (10.0 +)

  • SELECT ANY on DVSYS.DBA_DV_REALM

  • SELECT ANY on content.odm_document

  • SELECT ANY on content.odm_record

  • For 11.0+, access to UTL_INADDR is granted via an ACL, for resolution of the host name for 'localhost'. If an ACL granting this access already exists, the ORACLE_OCM user is added to its access list; if not, an ACL is created for this purpose and assigned accordingly.

  • SELECT ANY TABLE (8.1.7 only)

  • SELECT ANY DICTIONARY (> 8.1.7)

  • The ORACLE_OCM_CONFIG_DIR and ORACLE_OCM_CONFIG_DIR2 directory objects are created, and ORACLE_OCM is granted READ and WRITE on both.

  • The ocmdbd and ocmdbb scripts are run, creating the MGMT_DB_LL_METRICS package under the ORACLE_OCM user.

  • The ocmjd/ocmjb (pre-10.0) or ocmjd10/ocmjb10 (10.0+) scripts are run to create the MGMT_CONFIG package under the ORACLE_OCM user.

  • Finally, the job for configuration collection is submitted and immediately started via:

    ORACLE_OCM.MGMT_CONFIG.submit_job();
    ORACLE_OCM.MGMT_CONFIG.run_now();

2.1.3 DBMS_JOB.SUBMIT

Let's look at what happens when the jobs created by submit_job and run_now execute:

2.1.3.1 Pre-10.0 Databases

In pre-10.0 databases, two jobs are created:

DBMS_JOB.SUBMIT(l_job, 'ORACLE_OCM.MGMT_CONFIG.collect_config;', TRUNC(SYSDATE) + 1/24, 'TRUNC(SYSDATE) + 1 + 1/24');

DBMS_JOB.SUBMIT(l_job, 'ORACLE_OCM.MGMT_CONFIG.collect_stats;', TRUNC(LAST_DAY(SYSDATE)) + 1/24, 'TRUNC(LAST_DAY(SYSDATE) + 1) + 1/24');

Here we use the DBMS_JOB.SUBMIT procedure to create the jobs. The first uses 1:00 a.m. of the current day (TRUNC(SYSDATE) + 1/24) as the initial run time with a daily interval; the second uses 1:00 a.m. of the last day of the current month as the first run date and an interval of roughly one month (the interval expression evaluates to 1:00 a.m. on the first day of the following month).

The first job will run the MGMT_CONFIG.collect_config procedure; the second will run MGMT_CONFIG.collect_stats.
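To verify that both jobs were registered, a query against the standard DBA_JOBS view can be used. This is a sketch; the columns are standard, and the SCHEMA_USER filter is an assumption about the submitting schema:

```sql
-- Check the OCM jobs registered via DBMS_JOB (pre-10.0 databases).
SELECT job, what, next_date, interval
  FROM dba_jobs
 WHERE schema_user = 'ORACLE_OCM';
```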

In MGMT_CONFIG.collect_config and MGMT_CONFIG.collect_stats, a cursor is created to select all inst_id values from gv$instance. In a loop, we iterate over all the instances and submit a distinct job for each:

DBMS_JOB.SUBMIT(l_job, p_what, SYSDATE, NULL, FALSE, inst_id);

We end up with a job per instance; in a RAC environment we will have one per running member of the RAC configuration. p_what is either 'MGMT_DB_LL_METRICS.collect_config_metrics' or 'MGMT_DB_LL_METRICS.collect_stats_metrics', plus a directory path enclosed in parentheses (that is, a parameter to the procedure) indicating where the procedure's output is to be written. This can cause a problem: we may not see output for all members of the RAC, since the path may contain the host name where the instrumentation script was run ($ORACLE_HOME/ccr/hosts/<hostname>/state). If a host in the RAC setup does not have read/write access to this path, the job will fail. The end result is that on the hosts where the job failed we do not discover the RAC database, and we end up with inconsistent information.

The workaround for this is to set ORACLE_CONFIG_HOME=$ORACLE_HOME prior to the database instrumentation. This will result in the job output going to the $ORACLE_HOME/ccr/state location.

2.1.3.2 Post-10.0 Databases

In post-10.0 databases, we again create two jobs:

sys.dbms_scheduler.create_job(
job_name => 'MGMT_CONFIG_JOB',
job_type => 'STORED_PROCEDURE',
job_action => 'ORACLE_OCM.MGMT_CONFIG.collect_config',
schedule_name=> 'SYS.MAINTENANCE_WINDOW_GROUP',
enabled => TRUE,
auto_drop => FALSE,
comments => 'Configuration collection job.');

and

sys.dbms_scheduler.create_job(
job_name => 'MGMT_STATS_CONFIG_JOB',
job_type => 'STORED_PROCEDURE',
job_action => 'ORACLE_OCM.MGMT_CONFIG.collect_stats',
start_date=> SYSTIMESTAMP,
repeat_interval => 'freq=monthly;interval=1;bymonthday=1;byhour=01;byminute=01;bysecond=01',
end_date => NULL,
enabled => TRUE,
auto_drop => FALSE,
comments => 'OCM Statistics collection job.');

Again, the first job runs MGMT_CONFIG.collect_config and the second MGMT_CONFIG.collect_stats, but here we use DBMS_SCHEDULER.CREATE_JOB rather than DBMS_JOB.SUBMIT. Note that the collect_config job uses SYS.MAINTENANCE_WINDOW_GROUP for its schedule, whereas the collect_stats job uses a specific repeat interval.
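Both scheduler jobs can be confirmed with the standard DBA_SCHEDULER_JOBS view, for example:

```sql
-- Confirm the OCM scheduler jobs and their schedules (10.0+).
SELECT job_name, schedule_name, repeat_interval, enabled
  FROM dba_scheduler_jobs
 WHERE owner = 'ORACLE_OCM'
   AND job_name IN ('MGMT_CONFIG_JOB', 'MGMT_STATS_CONFIG_JOB');
```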

The procedures themselves are also slightly different in 10.0+ databases. They still set up a cursor and loop over the instances found in gv$instance, but the underlying logic for collect_config will create two jobs per instance if we determine we are in a RAC configuration (select parallel from v$instance returns 'YES') and the instance for which we are submitting the jobs is NOT the current instance:

sys.dbms_scheduler.create_job(
job_name => p_job_name || '_' || inst_id,
job_type => 'PLSQL_BLOCK',
job_action => p_job_action,
start_date => NULL,
repeat_interval => NULL,
enabled => FALSE,
auto_drop => TRUE,
comments => 'OCM collection job run for an instance.');

p_job_name here is 'MGMT_CONFIG_JOB', p_job_action is

'BEGIN ORACLE_OCM.MGMT_DB_LL_METRICS.COLLECT_CONFIG_METRICS (''ORACLE_OCM_CONFIG_DIR''); END;'

sys.dbms_scheduler.create_job(
job_name => p_job_name || '_2_' || inst_id,
job_type => 'PLSQL_BLOCK',
job_action => p_job_action2 ,
start_date => NULL,
repeat_interval => NULL,
enabled => FALSE,
auto_drop => TRUE,
comments => 'OCM 2nd job run for RAC instance.');

p_job_name is 'MGMT_CONFIG_JOB' again, but p_job_action is

'BEGIN ORACLE_OCM.MGMT_DB_LL_METRICS.WRITE_DB_CCR_FILE (''ORACLE_OCM_CONFIG_DIR2'', TRUE); END;'

What this second job accomplishes is creation of a minimal file with the information necessary for discovery, using the secondary directory object created during instrumentation to find the output directory. This directory object points at $ORACLE_HOME/ccr/state, which should exist – and hopefully be writeable – on all hosts in the RAC setup.
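Whether the two directory objects point where expected can be checked via the standard DBA_DIRECTORIES view:

```sql
-- Verify where the OCM directory objects point; ORACLE_OCM_CONFIG_DIR2
-- should resolve to $ORACLE_HOME/ccr/state on each RAC host.
SELECT directory_name, directory_path
  FROM dba_directories
 WHERE directory_name IN ('ORACLE_OCM_CONFIG_DIR', 'ORACLE_OCM_CONFIG_DIR2');
```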

The collect_stats job is created like the first collect_config job, i.e.:

sys.dbms_scheduler.create_job(
job_name => p_job_name || '_' || inst_id,
job_type => 'PLSQL_BLOCK',
job_action => p_job_action,
start_date => NULL,
repeat_interval => NULL,
enabled => FALSE,
auto_drop => TRUE,
comments => 'OCM collection job run for an instance.');

p_job_name here is 'MGMT_STATS_CONFIG_JOB', p_job_action is:

'BEGIN ORACLE_OCM.MGMT_DB_LL_METRICS.COLLECT_STATS_METRICS (''ORACLE_OCM_CONFIG_DIR''); END;'

It's OK if the output from this job only shows up on one host in the RAC setup, since this is the feature usage information common to all instances. As long as we discover the database on the host where this output shows up we will have the data in the collection.

As each of these jobs is created, it is handed off to the DBMS_SCHEDULER subsystem with two calls:

DBMS_SCHEDULER.SET_ATTRIBUTE (p_job_name || '_' || inst_id,'instance_id',inst_id);
DBMS_SCHEDULER.ENABLE (p_job_name || '_' || inst_id);

The first call ties the job to the specified instance (as retrieved in the loop over instance IDs).

The second enables the job; we created it in a DISABLED state (enabled => FALSE) so that we could set the instance_id attribute first. Note that in a non-RAC environment the call to SET_ATTRIBUTE will throw an exception, but the procedure catches this exception and discards it; we are running on a single instance in that case anyway.
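A minimal sketch of that hand-off, with the exception swallowed on single-instance databases (the concrete job name and instance number here are illustrative, not taken from the instrumentation scripts):

```sql
DECLARE
  l_job_name VARCHAR2(30) := 'MGMT_CONFIG_JOB_1';  -- illustrative name
BEGIN
  BEGIN
    -- Tie the job to instance 1; raises an error on single-instance DBs.
    DBMS_SCHEDULER.SET_ATTRIBUTE(l_job_name, 'instance_id', 1);
  EXCEPTION
    WHEN OTHERS THEN NULL;  -- single instance: nothing to pin, ignore
  END;
  -- Enable the job only after the instance binding has been attempted.
  DBMS_SCHEDULER.ENABLE(l_job_name);
END;
/
```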

Here is a listing of some important files that directly relate to the discovery, population, and ultimately the proper listing of a Database Target in MOS:

File                                   Purpose
oracledb.pl                            Main Database collection file
collectconfig.sh                       Calls MGMT_CONFIG.submit_job, which then calls MGMT_CONFIG.collect_config
oracle-sysman-ocm-Resolve-Access.xml   The Access Control List (ACL) created for OCM to use
ocmdbd.sql                             OCM DB configuration collection package definition
ocmdbb.sql                             OCM DB configuration collection package body
ocmjd.sql                              OCM DB config collection job package definition, pre-10g
ocmjb.sql                              OCM DB config collection job package body, pre-10g
ocmjd10.sql                            OCM DB config collection job package definition, 10g onwards
ocmjb10.sql                            OCM DB config collection job package body, 10g onwards