
26 Maintaining and Troubleshooting the Management Repository

This chapter describes maintenance and troubleshooting techniques for maintaining a well-performing Management Repository.

Specifically, this chapter contains the following sections:

  • Management Repository Deployment Guidelines

  • Management Repository Data Retention Policies

  • DBMS_SCHEDULER Troubleshooting

  • Repository and Sizing Requirements for Fusion Middleware Monitoring in Enterprise Manager Release 12c

  • Changing the SYSMAN Password

  • Dropping and Recreating the Management Repository

  • Troubleshooting Management Repository Creation Errors

  • Cross Platform Enterprise Manager Repository Migration

Management Repository Deployment Guidelines

To be sure that your management data is secure, reliable, and always available, consider the following settings and configuration guidelines when you are deploying the Management Repository:

Management Repository Data Retention Policies

When the various components of Enterprise Manager are configured and running efficiently, the Oracle Management Service gathers large amounts of raw data from the Management Agents running on your managed hosts and loads that data into the Management Repository. This data is the raw information that is later aggregated, organized, and presented to you in the Cloud Control Console.

After the Oracle Management Service loads information into the Management Repository, Enterprise Manager aggregates and purges the data over time.

The following sections describe:

  • Management Repository Default Aggregation and Purging Policies

  • Management Repository Default Aggregation and Purging Policies for Other Management Data

  • Modifying the Default Aggregation and Purging Policies

  • Modifying Data Retention Policies When Targets Are Deleted

  • How to Modify the Retention Period of Job History

Management Repository Default Aggregation and Purging Policies

Enterprise Manager aggregates collected metric data by hour and by day to enhance query performance and help minimize the size of the Management Repository. Before the data is aggregated, each data point is stored in a raw metric data table. Once a day, the previous day's raw metric data is rolled up, or aggregated, into a one-hour and a one-day table. Each hourly and daily record stores the average, minimum, maximum, and standard deviation of the metric data for its period.
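
To make the rollup concrete, the following query sketches the kind of hourly aggregation described above. It is illustrative only: the table and column names (raw_metric_data, metric_id, collection_time, value) are hypothetical placeholders, not the actual repository schema.

-- Illustrative only: roll up yesterday's raw samples into hourly records.
-- raw_metric_data and its columns are hypothetical names.
SELECT metric_id,
       TRUNC(collection_time, 'HH24') AS rollup_hour,
       AVG(value)    AS avg_value,
       MIN(value)    AS min_value,
       MAX(value)    AS max_value,
       STDDEV(value) AS stddev_value
FROM   raw_metric_data
WHERE  collection_time >= TRUNC(SYSDATE - 1)
AND    collection_time <  TRUNC(SYSDATE)
GROUP  BY metric_id, TRUNC(collection_time, 'HH24');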

After Enterprise Manager aggregates the data, the data is then considered eligible for purging. A certain period of time must pass for data to actually be purged. This period of time is called the retention time.

The raw data, with the highest insert volume, has the shortest default retention time, which is set to 7 days. As a result, 7 days after it is aggregated into a one-hour record, a raw data point is eligible for purging.

Note:

This data retention policy varies for JVMD and ADP data.

Hourly aggregate metric data records are purged after 31 days. The highest level of aggregation, one day, is kept for 12 months (roughly 365 days).

The default data retention policies are summarized in Table 26-1.

Table 26-1 Default Repository Purging Policies

Aggregate Level                Retention Time
-----------------------------  ---------------------
Raw metric data                7 days
Hourly aggregated metric data  31 days
Daily aggregated metric data   12 months (~365 days)


If you have configured and enabled Application Performance Management, Enterprise Manager also gathers, saves, aggregates, and purges response time data. The response time data is purged using policies similar to those used for metric data. The Application Performance Management purging policies are shown in Table 26-2.

Table 26-2 Default Repository Purging Policies for Application Performance Management Data

Aggregate Level                                     Retention Time
--------------------------------------------------  --------------
Raw response time data                              24 hours
One-hour aggregated response time data              7 days
One-hour distribution response time data            24 hours
One-day aggregated response time data               31 days
One-day distribution aggregated response time data  31 days


Management Repository Default Aggregation and Purging Policies for Other Management Data

Besides the metric data and Application Performance Management data, other types of Enterprise Manager data accumulate over time in the Management Repository.

For example, the last availability record for a target will also remain in the Management Repository indefinitely, so the last known state of a target is preserved.

Modifying the Default Aggregation and Purging Policies

The Enterprise Manager default aggregation and purging policies were designed to provide the most available data for analysis while still providing the best performance and least disk-space requirements for the Management Repository. As a result, you should not modify these policies to improve performance or increase your available disk space.

However, if you plan to extract or review the raw or aggregated data using data analysis tools other than Enterprise Manager, you may want to increase the amount of raw or aggregated data available in the Management Repository. You can accomplish this by increasing the retention times for the raw or aggregated data.

A PL/SQL API is provided to modify the default retention time for the core metric data tables in the Enterprise Manager repository. Table 26-3 shows the default number of partitions retained for each of the three tables and the partition size for each table. The API allows you to change only the number of partitions retained.

Table 26-3 Core EM Metric Data Tables and Default Data Retention in the Management Repository

Table Name               Partitions Retained  Partition Size
-----------------------  -------------------  --------------
EM_METRIC_VALUES         7                    DAY
EM_METRIC_VALUES_HOURLY  32                   DAY
EM_METRIC_VALUES_DAILY   12                   MONTH


The PL/SQL API is:

gc_interval_partition_mgr.set_retention(<repository schema name>, 
                                          <table name>, 
                                          <number of partitions to retain>);

An example of using the PL/SQL API to change the number of partitions retained in EM_METRIC_VALUES (raw data) from the default of 7 to 10 follows:

BEGIN
  gc_interval_partition_mgr.set_retention('SYSMAN', 'EM_METRIC_VALUES', 10);
END;
/
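
To verify the change, you can query the repository's partition retention metadata. This is a minimal sketch assuming the retention settings are tracked in the SYSMAN table EM_INT_PARTITIONED_TABLES; confirm the exact table name in your release before relying on it:

SELECT table_name, partitions_retained
FROM   em_int_partitioned_tables
WHERE  table_name IN ('EM_METRIC_VALUES',
                      'EM_METRIC_VALUES_HOURLY',
                      'EM_METRIC_VALUES_DAILY');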

Modifying Data Retention Policies When Targets Are Deleted

By default, when you delete a target from the Cloud Control console, Enterprise Manager automatically deletes all target data from the Management Repository.

However, deleting raw and aggregated metric data for database and other data-rich targets is a resource-consuming operation. Targets can have hundreds of thousands of rows of data, and deleting this data can degrade the performance of Enterprise Manager for the duration of the deletion, especially when several targets are deleted at once.

To avoid this resource-consuming operation, you can prevent Enterprise Manager from performing this task each time you delete a target. The metric data for deleted targets is then not purged as part of the target deletion task; instead, it is purged as part of the regular purge mechanism, which is more efficient.

In addition, Oracle strongly recommends that you do not add new targets with the same name and type as deleted targets within 24 hours of target deletion. Adding a new target with the same name and type will result in the Cloud Control console showing data belonging to the deleted target for the first 24 hours.

To disable raw metric data deletion:

  1. Use SQL*Plus to connect to the Management Repository as the Management Repository user.

    The default Management Repository user is SYSMAN. For example:

    SQL> connect sysman/sysman_password;
    
  2. To disable metric deletion, run the following SQL command.

    SQL> EXEC MGMT_ADMIN.DISABLE_METRIC_DELETION();
    SQL> COMMIT;
    

To re-enable metric deletion at a later point, follow these steps:

  1. Use SQL*Plus to connect to the Management Repository as the Management Repository user.

    The default Management Repository user is SYSMAN. For example:

    SQL> connect sysman/sysman_password;
    
  2. To enable metric deletion, run the following SQL command.

    SQL> EXEC MGMT_ADMIN.ENABLE_METRIC_DELETION();
    SQL> COMMIT;
    

How to Modify the Retention Period of Job History

Enterprise Manager Cloud Control has a default purge policy that removes all finished job details older than 30 days. This section explains how to modify that policy.

The actual purging of completed job history is implemented by a DBMS_SCHEDULER job that runs once a day in the repository database. When the job runs, it looks for finished jobs that are more than n days older than the current time (the value of SYSDATE in the repository database) and deletes them. By default, n is set to 30 days.

The default purge policy cannot be modified via the Enterprise Manager console, but it can be changed using SQL*Plus.

To modify this purge policy, follow these steps:

  1. Log in to the repository database as the SYSMAN user, via SQL*Plus.

  2. Check the current values for the purge policies using the following command:

    SQL> select * from mgmt_job_purge_policies;

    POLICY_NAME                      TIME_FRAME
    -------------------------------- ----------
    SYSPURGE_POLICY                          30
    REFRESHFROMMETALINKPURGEPOLICY            7
    FIXINVENTORYPURGEPOLICY                   7
    OPATCHPATCHUPDATE_PAPURGEPOLICY           7
    

    The purge policy responsible for the job deletion is called SYSPURGE_POLICY. As seen above, the default value is set to 30 days.

  3. To change the time period, you must drop and re-create the policy with a different time frame:

    SQL> execute MGMT_JOBS.drop_purge_policy('SYSPURGE_POLICY');

    PL/SQL procedure successfully completed.
    

    SQL> execute MGMT_JOBS.register_purge_policy('SYSPURGE_POLICY', 60, null);

    PL/SQL procedure successfully completed.
    

    SQL> COMMIT;

    Commit complete.
    

    SQL> select * from mgmt_job_purge_policies;

    POLICY_NAME                      TIME_FRAME
    -------------------------------- ---------- 
    SYSPURGE_POLICY                          60
    ....
    

The commands above increase the retention period to 60 days. The time frame can also be reduced below 30 days, depending on your requirements.

You can check when the purge job will be executed next. The actual time that the purge runs is set to 5 AM repository time and can be verified using these steps:

  1. Login to the Repository database using the SYSMAN account.

  2. Execute the following command:

    SQL> select job_name,
                to_char(last_start_date, 'DD-MON-YY HH24:MI:SS') last_run,
                to_char(next_run_date,   'DD-MON-YY HH24:MI:SS') next_run
    from all_scheduler_jobs
    where job_name ='EM_JOB_PURGE_POLICIES';
     
    JOB_NAME              LAST_RUN           NEXT_RUN
    --------------------- ------------------ ------------------
    EM_JOB_PURGE_POLICIES                    07-SEP-11 05:00:00
    

    The schedule can also be verified from the Enterprise Manager console by following these steps:

    1. Select Setup > Management Service and Repository.

    2. Click the Repository Operations tab.

    3. Find the Next Scheduled Run and Last Scheduled Run information for Job Purge in the list.

    Note that the time of the next scheduled execution of the job purge does not represent the cutoff time for the retention period; the cutoff time is determined by the purge policy in effect when the job purge runs.

DBMS_SCHEDULER Troubleshooting

Enterprise Manager uses the database scheduler (dbms_scheduler) to run various processes in the repository. When the dbms_scheduler is stopped or has insufficient resources to operate, the Enterprise Manager processes do not run or are delayed. The following is a list of common causes that may prevent the dbms_scheduler from running normally.

Job Queue Processes

The dbms_scheduler uses a separate job-queue process for each job it runs. The maximum number of these processes is controlled by the database parameter, job_queue_processes. If all processes are in use, no new jobs will be started.

The following query returns the number of currently running jobs.

SELECT count(*)
FROM dba_scheduler_running_jobs;

If the count is close to the setting of job_queue_processes, it could mean that Enterprise Manager dbms_scheduler jobs cannot be started (on time). Determine if any of the running dbms_scheduler jobs are stuck and consider increasing the setting for job_queue_processes.
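
If the limit itself is the bottleneck, you can inspect the parameter and, if appropriate, raise it. A minimal sketch; the value 50 is only an example and should be sized for your system:

SQL> SHOW PARAMETER job_queue_processes
SQL> ALTER SYSTEM SET job_queue_processes = 50 SCOPE=BOTH;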

Job Slave Processes

The dbms_scheduler also depends on the setting of the dbms_scheduler property MAX_JOB_SLAVE_PROCESSES. If the number of running dbms_scheduler jobs exceeds this setting, no new jobs will be started. This attribute can be checked using this query.

SELECT value
FROM dba_scheduler_global_attribute
WHERE attribute_name='MAX_JOB_SLAVE_PROCESSES';

If the number of running dbms_scheduler jobs has reached this value, determine whether any of the running jobs are stuck, and consult the dbms_scheduler documentation about how to adjust this attribute.
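
If you decide the limit should be raised, the attribute can be changed with the standard DBMS_SCHEDULER API. A minimal sketch; the value 20 is only an example:

SQL> EXEC DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE('MAX_JOB_SLAVE_PROCESSES', '20');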

DBMS_SCHEDULER Program Disabled

The dbms_scheduler has an attribute that can be set to disable this feature in the database. When set, the Enterprise Manager dbms_scheduler jobs will not run. To check if this attribute has been set (inadvertently), run this query.

SELECT *
FROM dba_scheduler_global_attribute
WHERE attribute_name = 'SCHEDULER_DISABLED';

If a row is returned, the dbms_scheduler is disabled. To re-enable it, execute:

EXEC dbms_scheduler.set_scheduler_attribute('SCHEDULER_DISABLED', 'FALSE');

Consult the dbms_scheduler documentation for more information about this attribute.

Too Many Database Sessions

Each dbms_scheduler job requires two database sessions. When no more sessions are available, Enterprise Manager dbms_scheduler jobs will not run. The following two queries give the maximum number of allowed sessions and the current number of active sessions:

SELECT value
FROM v$parameter
WHERE name = 'sessions';

SELECT count(*)
FROM v$session;

When the current number of sessions approaches the maximum, determine whether any of the sessions are stuck, and consult the Oracle Database documentation about how to increase the maximum number of sessions.

The high-water mark of the session count may also indicate that this issue has played a role in the past:

SELECT *
FROM v$resource_limit
WHERE resource_name = 'sessions';

If the MAX_UTILIZATION column indicates a value close to the maximum number of sessions, it could explain why some Enterprise Manager dbms_scheduler jobs may not have run (on time) in the past.
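
The following sketch combines both checks using V$RESOURCE_LIMIT. Note that raising the session limit usually means raising the PROCESSES initialization parameter in the SPFILE and restarting the database; the value 500 is only an example:

SELECT resource_name, current_utilization, max_utilization, limit_value
FROM   v$resource_limit
WHERE  resource_name IN ('sessions', 'processes');

ALTER SYSTEM SET processes = 500 SCOPE=SPFILE;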

Insufficient Memory

The database may not be able to spawn a new job queue process when insufficient memory is available. The message Unable to spawn jobq slave processes in the database alert file, in combination with (free memory = 0.00M), indicates this problem. Consult the Oracle Database documentation about how to diagnose this memory problem further.

Repository and Sizing Requirements for Fusion Middleware Monitoring in Enterprise Manager Release 12c

A Fusion Middleware target is like any other Enterprise Manager target, so any repository or sizing guideline that applies to an Enterprise Manager target also applies to a Fusion Middleware target.

One major concern with Fusion Middleware discovery is that too many targets may be discovered, created, and monitored, which adds load on the OMS instance, the repository, and the agent. When a very large number of targets is discovered, Oracle recommends that you review all the targets and their respective metrics after discovery.

Based on your requirements, decide which targets and metrics should be monitored and at what frequency.

After discovery, Oracle recommends that you allow Fusion Middleware/ADP/JVMD monitoring to run for some time (a few days to possibly a few weeks) and continuously monitor database size and operating system file-system growth (in the case of ADP, the ADP Manager requires a minimum of 10 GB of disk space) until growth levels off. You can then fine-tune the various parameters associated with these features.

In Enterprise Manager 12c, both ADP and JVMD use the Enterprise Manager repository as their repository. Their data is stored in the MGMT_AD4J_TS tablespace.

ADP Monitoring

Use the following information when utilizing ADP Monitoring.

  • ADP Manager Resources Requirement

    The resources listed below are sufficient for managing approximately 70K managed entities; if your number of managed entities is higher, allocate resources accordingly.

    Resource            Amount
    ------------------  ------
    Physical Memory     2 GB
    Minimum Disk Space  10 GB

  • ADP Data requirement

    The MGMT_AD4J_TS tablespace must have 8 MB available for each monitored entity per JVM.

  • ADP Data Retention Policy

    ADP maintains sophisticated multi-tiered logic for aggregation (or compression) of performance data. This helps optimize the performance of interaction with the internal data repository, both when querying data for presentation and when inserting new performance metrics.

    Users who want to store longer-term data should look for this section in Acsera.properties:

    Example 26-1

    #########################
    # Production setting
    # NOTE: use Model.GlobalSamplingRateSecs to configure Metric.Grain.0
    #########################
    Metric.Grain.0 = 0s
    Metric.TableInterval.0 = 4h
    Metric.DataLife.0 = 2d
     
    Metric.Grain.1 = 3m
    Metric.TableInterval.1 = 1d
    Metric.DataLife.1 = 8d
     
    #Metric.Grain.2 = 30m
    #Metric.TableInterval.2 = 7d
    #Metric.DataLife.2 = 420d
    

    To retain longer-term data, uncomment the last three lines (the Metric.*.2 properties).

JVMD Monitoring

Use the following information when employing JVMD Monitoring.

  • JVMD Manager Resources Requirement

    To manage 200-300 JVMs, the JVMD Manager requires 1 GB of physical memory. The JVMD Manager caches monitoring data in the TEMP space for each pool and flushes it to the database frequently. The size of these temporary cache files varies with the number of pools the manager is monitoring and the amount of data being gathered from each pool, but it is rare to see more than a few MB per pool. If this is a concern, allocate TEMP space accordingly.

  • JVMD Data requirement

    To monitor every JVM with out-of-box (OOB) settings, the MGMT_AD4J_TS tablespace must have 50-100 MB available.

  • JVM Diagnostics Historical Data and its Retention policy

    Historical data is available at three summary levels: 0, 1, and 2.

    • Summary level 0 is raw sample data taken at the specified pool polling interval (default 2 seconds). If you look at data within one hour on the Performance Diagnostics page, it shows summary level 0 data. Level 0 data is retained for 24 hours and subsequently purged. The retention can be changed via the Console Setup page, but before increasing the value, ensure that the repository is tuned properly to handle such large amounts of data.

    • Summary level 1 is aggregated data. If you view data that is more than one hour but less than five hours old, you see summary level 1 data. The default aggregation interval is 90 seconds; this value can be changed via the Console Setup page. Level 1 data is retained for 20 days and subsequently purged.

    • Summary level 2 is further aggregated data. If you view data more than five hours old, you see summary level 2 data. This data is aggregated every 60 minutes. Level 2 data is retained for 400 days and subsequently purged.

There are two JVMD features that can drastically affect MGMT_AD4J_TS tablespace usage:

  • JVMD Heap Dumps

    Analyzing a heap requires massive tablespace resources. Oracle recommends having free space in your tablespace equal to five times the size of the heap dump file you are loading. Because you have the heap dump file and know its size before you run the load script, ensure that you have adequate space to accommodate the dump before you load it into your database (see the query after this list).

  • Thread Traces

    While thread traces are smaller than heaps by an order of magnitude, they are loaded into the database automatically by default when you initiate a trace at the console. The size of these traces varies dramatically with the number of active threads during the trace, the duration of the trace, and the sample interval of the trace. They should generally be under 100 MB each, but a user initiating a large number of them could fill up the database quickly. Because traces are created only by manual intervention, ensure that there is adequate space to accommodate them before initiating them.
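
Before loading a heap dump or initiating traces, you can check the free space in the MGMT_AD4J_TS tablespace (see the heap dump recommendation in the list above) with a standard dictionary query, for example:

SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) AS free_mb
FROM   dba_free_space
WHERE  tablespace_name = 'MGMT_AD4J_TS'
GROUP  BY tablespace_name;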

Changing the SYSMAN Password

The SYSMAN account is the default super user account used to set up and administer Enterprise Manager. It is also the database account that owns the objects stored in the Oracle Management Repository. From this account, you can set up additional administrator accounts and set up Enterprise Manager for use in your organization. The SYSMAN account is created automatically in the Management Repository database during the Enterprise Manager installation, and you provide a password for the account at that time.

If you later need to change the SYSMAN database account password, use the following procedure:

  1. Shut down all the Oracle Management Service instances that are associated with the Management Repository.

  2. Stop the agent that is monitoring the target OMS and Repository.

    Failure to do this will result in the agent attempting to connect to the target with the wrong password once the password is changed with SQL*Plus. This may also result in the SYSMAN account being locked, which can subsequently prevent logins to the Cloud Control console to change the password of the target OMS and Repository.

  3. Change the password of the SYSMAN database account using the following SQL*Plus commands:

    SQL> connect sysman/oldpassword;

    SQL> alter user sysman identified by newpassword;

  4. To change the password of the SYSMAN user, enter the following command:

    emctl config oms -change_repos_pwd [-old_pwd <old_pwd>] [-new_pwd <new_pwd>] [-use_sys_pwd [-sys_pwd <sys_pwd>]]

    You must run this command on each Management Service in your environment (see the example after these steps).

    Parameter     Description
    ------------  -----------------------------------------------------------
    -old_pwd      This is the current SYSMAN password.
    -new_pwd      This is the new password.
    -use_sys_pwd  This parameter is optional and is used to connect to the database as a SYS user.
    -sys_pwd      This is the password for the SYS user.

  5. In the Cloud Control console, click the Targets tab and then click All Targets on the sub tab.

  6. Select the Management Services and Repository target and click Configure. Enterprise Manager displays the Monitoring Configurations page.

  7. Enter the new password in the Repository password field and click OK.
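
For example, a typical invocation of the command in step 4, supplying the old and new passwords inline, might look like the following; the password values are placeholders:

emctl config oms -change_repos_pwd -old_pwd oldpassword -new_pwd newpassword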


Dropping and Recreating the Management Repository

This section provides information about dropping the Management Repository from your existing database and recreating the Management Repository after you install Enterprise Manager.

Dropping the Management Repository

To recreate the Management Repository, you first remove the Enterprise Manager schema from your Management Repository database. You accomplish this task using the -action drop argument to the RepManager script, which is described in the following procedure.

To remove the Management Repository from your database:

  1. Locate the RepManager script in the following directory of the Oracle Management Service home where you have installed and deployed the Management Service:

    ORACLE_HOME/sysman/admin/emdrep/bin
    
  2. At the command prompt, enter the following command:

    $PROMPT> RepManager repository_host repository_port repository_SID 
    -sys_password password_for_sys_account -action drop
    

    In this syntax example:

    • repository_host is the machine name where the Management Repository database is located

    • repository_port is the Management Repository database listener port address, usually 1521 or 1526

    • repository_SID is the Management Repository database system identifier

    • password_for_sys_account is the password of the SYS user for the database. For example, change_on_install

    • -action drop indicates that you want to drop the Management Repository
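
For example, a drop invocation with hypothetical host, port, SID, and password values might look like this:

$PROMPT> RepManager repohost.example.com 1521 emrep -sys_password change_on_install -action drop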

Alternatively, you can use a connect descriptor to identify the database on the RepManager command line. The connect descriptor identifies the host, port, and name of the database using a standard Oracle database syntax.

For example, you can use the connect descriptor as follows to drop the Management Repository:

$PROMPT> ./RepManager -connect "(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)
(HOST=host1)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=servicename)))"
-sys_password efkl34lmn -action drop

See Also:

"Establishing a Connection and Testing the Network" in the Oracle Database Net Services Administrator's Guide for more information about connecting to a database using connect descriptors.

Recreating the Management Repository

The preferred method for creating the Management Repository is to create the Management Repository during the Enterprise Manager installation procedure, which is performed using Oracle Universal Installer.

See Also:

Oracle Enterprise Manager Cloud Control Installation and Basic Configuration for information about installing Enterprise Manager.

However, if you need to recreate the Management Repository in an existing database, you can use the RepManager script, which is installed when you install the Management Service. Refer to the following sections for more information:

Using the RepManager Script to Create the Management Repository

To create a Management Repository in an existing database:

  1. Review the hardware and software requirements for the Management Repository as described in Oracle Enterprise Manager Cloud Control Installation and Basic Configuration, and review the section "Management Repository Deployment Guidelines".

  2. Locate the RepManager script in the following directory of the Oracle Management Service home directory:

    ORACLE_HOME/sysman/admin/emdrep/bin
    
  3. At the command prompt, enter the following command:

    $PROMPT> ./RepManager repository_host repository_port repository_SID  -sys_password password_for_sys_account -action create
    

In this syntax example:

  • repository_host is the machine name where the Management Repository database is located

  • repository_port is the Management Repository database listener port address, usually 1521 or 1526

  • repository_SID is the Management Repository database system identifier

  • password_for_sys_account is the password of the SYS user for the database. For example, change_on_install

Enterprise Manager creates the Management Repository in the database you specified in the command line.

Using a Connect Descriptor to Identify the Management Repository Database

Alternatively, you can use a connect descriptor to identify the database on the RepManager command line. The connect descriptor identifies the host, port, and name of the database using a standard Oracle database syntax.

For example, you can use the connect descriptor as follows to create the Management Repository:

$PROMPT> ./RepManager -connect "(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP) (HOST=host1)(PORT=1521)) (CONNECT_DATA=(SERVICE_NAME=servicename)))" -sys_password efkl34lmn -action create

See Also:

"Establishing a Connection and Testing the Network" in the Oracle Database Net Services Administrator's Guide for more information about connecting to a database using a connect descriptor

The ability to use a connect string allows you to provide an address list as part of the connection string. The following example shows how you can provide an address list consisting of two listeners as part of the RepManager command line. If a listener on one host becomes unavailable, the second listener can still accept incoming requests:

$PROMPT> ./RepManager -connect "(DESCRIPTION=
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=host1)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=host2)(PORT=1521)))
(CONNECT_DATA=(SERVICE_NAME=servicename)))"
-sys_password efkl34lmn -action create

See Also:

Oracle Database High Availability Architecture and Best Practices

Oracle Enterprise Manager Cloud Control Installation and Basic Configuration

Troubleshooting Management Repository Creation Errors

Oracle Universal Installer creates the Management Repository using a configuration step at the end of the installation process. If the repository configuration tool fails, note the exact error messages displayed in the configuration tools window, wait until the other configuration tools have finished, exit from Universal Installer, and then use the following sections to troubleshoot the problem.

Package Body Does Not Exist Error While Creating the Management Repository

If the creation of your Management Repository is interrupted, you may receive the following error when you attempt to create or drop the Management Repository at a later time:

SQL> ERROR:
ORA-00604: error occurred at recursive SQL level 1
ORA-04068: existing state of packages has been discarded
ORA-04067: not executed, package body "SYSMAN.MGMT_USER" does not exist
ORA-06508: PL/SQL: could not find program unit being called
ORA-06512: at "SYSMAN.SETEMUSERCONTEXT", line 5
ORA-06512: at "SYSMAN.CLEAR_EMCONTEXT_ON_LOGOFF", line 4
ORA-06512: at line 4

To fix this problem, see "General Troubleshooting Techniques for Creating the Management Repository".

Server Connection Hung Error While Creating the Management Repository

If you receive an error such as the following when you try to connect to the Management Repository database, you are likely using an unsupported version of the Oracle Database:

Server Connection Hung

To remedy the problem, upgrade your database to the supported version as described in Oracle Enterprise Manager Cloud Control Installation and Basic Configuration.

General Troubleshooting Techniques for Creating the Management Repository

If you encounter an error while creating the Management Repository, drop the repository by running the RepManager script with the -action drop argument.

If the RepManager script drops the repository successfully, try creating the Management Repository again.

If you encounter errors while dropping the Management Repository, do the following:

  1. Connect to the database as SYSDBA using SQL*Plus.

  2. Check to see if the SYSMAN database user exists in the Management Repository database.

    For example, use the following command to see if the SYSMAN user exists:

    prompt> SELECT username FROM DBA_USERS WHERE username='SYSMAN';
    
  3. If the SYSMAN user exists, drop the user by entering the following SQL*Plus command:

    prompt> DROP USER SYSMAN CASCADE;
    
  4. Check to see if the following triggers exist:

    SYSMAN.EMD_USER_LOGOFF
    SYSMAN.EMD_USER_LOGON
    

    For example, use the following command to see if the EMD_USER_LOGOFF trigger exists in the database:

    prompt> SELECT trigger_name FROM ALL_TRIGGERS 
            WHERE trigger_name='EMD_USER_LOGOFF';
    
  5. If the triggers exist, drop them from the database using the following commands:

    prompt> DROP TRIGGER SYSMAN.EMD_USER_LOGOFF;
    prompt> DROP TRIGGER SYSMAN.EMD_USER_LOGON;
    

Cross Platform Enterprise Manager Repository Migration

You may need to migrate an Enterprise Manager repository across servers, either on the same platform or across platforms.

The Enterprise Manager repository migration process is not exactly the same as a database migration. When migrating the Enterprise Manager repository, you must take care of Enterprise Manager-specific data, options, and prerequisites for the repository move, and you should make sure that data integrity is maintained from both the Enterprise Manager and the Oracle Database perspectives.

This section therefore defines a process that end users can follow for a successful and reliable migration of the repository in minimum time and with maximum efficiency.

The overall strategy for migration depends on the source and target database versions and on the amount of data to be migrated.

Cross-platform transportable tablespaces along with Data Pump (for metadata) is the fastest and best approach for moving a large Enterprise Manager Cloud Control repository from one platform to another. Another option for migration is to use Data Pump for both data and metadata, but this requires more time than the cross-platform transportable tablespace approach for the same amount of data. The advantage of the Data Pump approach is that it provides granular control over options and the overall process, as in the case of selective data being migrated rather than all of the source data. If the source and target databases are not on version 11g, export/import is the only way to migrate the data across platforms.

More details on cross platform transportable tablespace, data pump, and export/import options can be found at the Oracle Technology Network (OTN) or in the Oracle Database Administrator's Guide.

Common Prerequisites

The following lists the common prerequisites for a repository migration:

  • Source and target databases must use the same character set and should be at the same version.

  • Source and target databases should meet all the Enterprise Manager repository software requirements described in the Enterprise Manager installation guide.

  • If the source and target databases are NOT on 11g, only export/import can be used for cross-platform migration.

  • If the source and target databases are on 11g, any of the three options (cross-platform transportable tablespace migration, Data Pump, or export/import) can be used for cross-platform repository migration.

  • You cannot transport a tablespace to a target database in which a tablespace with the same name already exists. However, you can rename either the tablespace to be transported or the destination tablespace before the transport operation (see the example after this list).

  • To plug a transportable tablespace set into an Oracle Database on a different platform, both databases must have compatibility set to at least 10.0.

  • Most, but not all, platforms are supported for cross-platform tablespace transport. You can query the V$TRANSPORTABLE_PLATFORM view to see the supported platforms and to determine their platform IDs and endian format (byte ordering).

  • Both the source and destination hosts should have an Enterprise Manager Agent running and configured against the database instance being migrated.

  • If the target database has an Enterprise Manager repository installed, drop it using RepManager before carrying out the steps on the target database.
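
For example, if the destination database already contains a tablespace with the same name as one being transported (see the prerequisite above on renaming), you could rename the destination tablespace before the transport. The tablespace name below mirrors those used later in this section, and the new name is only an example:

SQL> ALTER TABLESPACE MGMT_TABLESPACE RENAME TO MGMT_TABLESPACE_OLD;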

Methodologies

The following sections discuss the methodologies of a repository migration.

Cross Platform Transportable Tablespaces

Oracle's transportable tablespace feature allows users to quickly move a user tablespace across Oracle databases. It is the most efficient way to move bulk data between databases. Before Oracle Database 11g, transporting a tablespace required the source and target databases to be on the same platform. Oracle Database 11g adds cross-platform support for transportable tablespaces, so you can transport tablespaces across platforms.

Cross-platform transportable tablespaces allow a database to be migrated from one platform to another (used together with Data Pump or export/import).

Preparation for Transportable Tablespaces

Use these steps to prepare for transportable tablespaces:

  1. Prepare the set of user tablespaces and check for containment violations:

    execute DBMS_TTS.TRANSPORT_SET_CHECK('MGMT_TABLESPACE,MGMT_ECM_DEPOT_TS', TRUE);

    SELECT * FROM transport_set_violations;

  2. Shut down the OMS instances and prepare for migration.

    Shut down the OMS, set job_queue_processes to 0, and run:

    @IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_remove_dbms_jobs.sql

  3. Make the tablespaces to be transported read-only:

    alter tablespace MGMT_TABLESPACE read only;

    alter tablespace MGMT_ECM_DEPOT_TS read only;

Extract metadata

Extract the metadata for the transportable tablespaces using the Data Pump utility:

  1. Create the data pump directory.

    create directory data_pump_dir as '/scratch/gachawla/EM102/ttsdata';

  2. Extract the metadata using Data Pump (or export):

    expdp DUMPFILE=ttsem102.dmp TRANSPORT_TABLESPACES=MGMT_TABLESPACE,MGMT_ECM_DEPOT_TS TRANSPORT_FULL_CHECK=Y

  3. Extract other objects (packages, procedures, functions, temporary tables, and so on) that are not contained in the user tablespaces:

    expdp SCHEMAS=SYSMAN CONTENT=METADATA_ONLY EXCLUDE=INDEX,CONSTRAINT DUMPFILE=data_pump_dir:postexp.dmp LOGFILE=data_pump_dir:postexp.log JOB_NAME=expmet

Endian check and conversion

Run the endian check and convert the datafiles if the endian format differs between the source and destination:

  1. For the endian check, run the following query on both the source and destination databases:

    SELECT endian_format
    FROM v$transportable_platform tp, v$database d
    WHERE tp.platform_name = d.platform_name;

    If the source platform and the target platform are of different endianness, then an additional step must be done on either the source or target platform to convert the tablespace being transported to the target format. If they are of the same endianness, then no conversion is necessary and tablespaces can be transported as if they were on the same platform.

    Example:

    Source:      Linux IA (32-bit) - Little endian
    Destination: Solaris[tm] OE (32-bit) - Big endian
    
  2. Ship the datafiles and the metadata dump to the target host. If the endian formats differ, convert the datafiles on the target to the destination endian format using RMAN:

    CONVERT DATAFILE
    '/d14/em10g/oradata/em102/mgmt.dbf',
    '/d14/em10g/oradata/em102/mgmt_ecm_depot1.dbf'
    FROM PLATFORM 'Linux IA (32-bit)';
    

    Conversion via RMAN can be done on either the source or the target (for more details, refer to the RMAN documentation). Parallelism can be used to speed up the process if the user tablespaces contain multiple datafiles, as shown in the sketch below.
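
    A minimal sketch of a parallel conversion, reusing the datafiles from the step above; the degree of 4 is only an example:

    CONVERT DATAFILE
    '/d14/em10g/oradata/em102/mgmt.dbf',
    '/d14/em10g/oradata/em102/mgmt_ecm_depot1.dbf'
    FROM PLATFORM 'Linux IA (32-bit)'
    PARALLELISM 4;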

Import metadata and plugin tablespaces

Use the following steps to import metadata and plugin tablespaces:

  1. Run RepManager to drop the target repository (if the target database has the Enterprise Manager repository installed):

    RepManager repository_host repository_port repository_SID -sys_password password_for_sys_account -action drop

  2. Run the pre-import steps to create the sysman user and grant privileges on the target database:

    @IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_repos_user.sql

    @IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_pre_import.sql

  3. Invoke the Data Pump utility to plug the set of tablespaces into the target database:

    impdp DUMPFILE=ttsem102.dmp DIRECTORY=data_pump_dir TRANSPORT_DATAFILES=/d14/em10g/oradata/em102/mgmt.dbf,/d14/em10g/oradata/em102/mgmt_ecm_depot1.dbf

  4. Import other objects (packages, procedures, functions, and so on):

    impdp CONTENT=METADATA_ONLY EXCLUDE=INDEX,CONSTRAINT DUMPFILE=data_pump_dir:postexp.dmp LOGFILE=data_pump_dir:postexp.log

Post Plug In Steps

Follow these post plug-in steps:

  1. Run the post plug-in steps to recompile any invalid objects, create public synonyms, create other users, enable the VPD policy, and repin packages:

    @IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_synonyms.sql

    @IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_post_import.sql

    Check for invalid objects: compare the source and destination schemas for any discrepancy in object counts and invalid objects.

  2. Bring the user tablespaces back into read/write mode:

    alter tablespace MGMT_TABLESPACE read write;

    alter tablespace MGMT_ECM_DEPOT_TS read write;

  3. Submit the Enterprise Manager DBMS jobs.

    Reset job_queue_processes to its original value and run:

    @IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_submit_dbms_jobs.sql

  4. Update the OMS properties and start the OMS.

    Update emoms.properties to reflect the migrated repository: update the host name (oracle.sysman.eml.mntr.emdRepServer) and the port with the correct values, and start the OMS.

  5. Relocate the Management Services and Repository target.

    If the Management Services and Repository target needs to be migrated to the destination host, run em_assoc.handle_relocated_target to relocate the target, or recreate the target on the target host.

  6. Discover or relocate the database and database listener targets.

    Discover the target database and listener in Enterprise Manager, or relocate the targets from the source agent to the destination agent.

Data Pump

Oracle Data Pump technology enables high-speed, parallel movement of bulk data and metadata from one database to another. Data Pump uses APIs to load and unload data instead of the usual SQL commands. Data Pump operations can be run via the Enterprise Manager interface and are very useful for cross-platform database migration.

The migration of the database using the Data Pump export and import tools comprises these steps: export the data into a dump file on the source server with the expdp command, copy or move the dump file to the target server, import the dump file into Oracle on the target server with the impdp command, and run the post-import Enterprise Manager specific steps.

Tuning parameters that were used in the original Export and Import utilities, such as BUFFER and RECORDLENGTH, are neither required nor supported by Data Pump Export and Import.

Prepare for Data Pump

Use the following steps to prepare for data pump:

  1. Note the prerequisite for using Data Pump with the Enterprise Manager repository:

    impdp fails for the Enterprise Manager repository because of a Data Pump bug (Bug 4386766: IMPDP WITH COMPRESSED INDEXES FAILS WITH ORA-14071 AND ORA-39083). This bug is fixed in 10.2, and a backport is available for 10.1.0.4. This RDBMS patch must be applied before using expdp/impdp for the Enterprise Manager repository migration; the workaround is to use exp/imp for the extract and import.

  2. Create the data pump directory:

    create directory data_pump_dir as '/scratch/gachawla/EM102/ttsdata';

  3. Shut down the OMS instances and prepare for migration.

    Shut down the OMS, set job_queue_processes to 0, and run @IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_remove_dbms_jobs.sql

    To improve the throughput of a job, use the PARALLEL parameter to set a degree of parallelism that takes maximum advantage of current conditions. In general, the degree of parallelism should be set to about twice the number of CPUs on an instance.

    All Data Pump actions are performed by multiple jobs (server processes, not DBMS_JOB jobs). These jobs are controlled by a master control process that uses Advanced Queuing. At run time, an advanced queue table named after the job name is created and used by the master control process; the table is dropped on completion of the Data Pump job. The job and the advanced queue can be named using the JOB_NAME parameter.

    The DBMS_DATAPUMP API can also be used to perform Data Pump export/import. Refer to the Data Pump documentation for all of the options.

Data Pump Export

Use these steps to run data pump export:

  1. Run data pump export:

    expdp FULL=y DUMPFILE=data_pump_dir:dpfull1%U.dmp,data_pump_dir:dpfull2%U.dmp PARALLEL=4 LOGFILE=data_pump_dir:dpexpfull.log JOB_NAME=dpexpfull

    Verify the logs for any errors during the export.
    

    Data Pump direct path export sometimes fails for mgmt_metrics_raw and raises ORA-600. This is due to Bug 4221775 (4233303), which is fixed in release 10.2. The workaround: if using expdp for mgmt_metrics_raw, run it with the ACCESS_METHOD=EXTERNAL_TABLE parameter.

    expdp directory=db_export dumpfile=exp_st2.dmp logfile=exp_st2.log tables=sysman.mgmt_metrics_raw access_method=external_table

Data Pump Import

Use these steps to run data pump import:

  1. Run RepManager to drop target repository (if target database has Enterprise Manager repository installed):

    RepManager repository_host repository_port repository_SID -sys_password password_for_sys_account -action drop

  2. Prepare the target database:

    @IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_tablespaces.sql

    @IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_repos_user.sql

    @IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_pre_import.sql

  3. Run data pump import:

    impdp FULL=y DUMPFILE=data_pump_dir:dpfull1%U.dmp,data_pump_dir:dpfull2%U.dmp PARALLEL=4 LOGFILE=data_pump_dir:dpimpfull.log JOB_NAME=dpimpfull

    Verify the logs for any issues with the import.

Post Import Enterprise Manager Steps

Use the following steps for post import Enterprise Manager steps:

  1. Run the post plug-in steps to recompile any invalid objects, create public synonyms, create other users, enable the VPD policy, and repin packages:

    @IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_synonyms.sql

    @IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_post_import.sql

    Check for invalid objects: compare the source and destination schemas for any discrepancy in object counts and invalid objects.

  2. Submit the Enterprise Manager DBMS jobs.

    Reset job_queue_processes to its original value and run:

    @IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_submit_dbms_jobs.sql

  3. Update the OMS properties and start the OMS.

    Update emoms.properties to reflect the migrated repository: update the host name (oracle.sysman.eml.mntr.emdRepServer) and the port with the correct values, and start the OMS.

  4. Relocate the Management Services and Repository target.

    If the Management Services and Repository target needs to be migrated to the destination host, run em_assoc.handle_relocated_target to relocate the target, or recreate the target on the target host.

  5. Discover or relocate the database and database listener targets.

    Discover the target database and listener in Enterprise Manager, or relocate the targets from the source agent to the destination agent.

Export/Import

If the source and destination databases are not on version 11g, export/import is the only option for cross-platform database migration.

For performance improvement of export/import, set higher values for BUFFER and RECORDLENGTH (see the example below). Do not export to NFS, as it will slow down the process considerably. Direct path can be used to increase performance. Note that because Enterprise Manager uses Virtual Private Database (VPD), Oracle uses conventional mode only on tables where a policy is defined.

Also, the user running the export should have the EXEMPT ACCESS POLICY privilege in order to export all rows, because that user is then exempt from VPD policy enforcement. SYS is always exempt from VPD and Oracle Label Security policy enforcement, regardless of the export mode, application, or utility used to extract data from the database.
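
For example, the full export shown later in this section could be extended with these tuning parameters; the values are examples only:

exp full=y constraints=n indexes=n compress=y buffer=1048576 recordlength=65535 file=fullem102_1.dmp log=fullem102exp_1.log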

Prepare for Export/Import

Use the following steps to prepare for Export/Import:

  1. Check the MGMT_METRICS_RAW partitions:

    SELECT table_name, partitioning_type type,
           partition_count count, subpartitioning_type subtype
    FROM dba_part_tables
    WHERE table_name = 'MGMT_METRICS_RAW';

    If MGMT_METRICS_RAW has more than 3276 partitions, see Bug 4376351, which is fixed in release 10.2. The workaround is to export mgmt_metrics_raw in conventional mode.

  2. Shut down the OMS instances and prepare for migration.

    Shut down the OMS, set job_queue_processes to 0, and run @IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_remove_dbms_jobs.sql

Export

Follow these steps for export:

  1. Export data:

    exp full=y constraints=n indexes=n compress=y file=fullem102_1.dmp log=fullem102exp_1.log

  2. Export without data and with constraints:

    exp full=y constraints=y indexes=y rows=n ignore=y file=fullem102_2.dmp log=fullem102exp_2.log

Import

Follow these steps to import:

  1. Run RepManager to drop the target repository (if the target database has the Enterprise Manager repository installed):

    RepManager repository_host repository_port repository_SID -sys_password password_for_sys_account -action drop

  2. Pre-create the tablespaces and the users in target database:

    @IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_tablespaces.sql

    @IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_repos_user.sql

    @IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_pre_import.sql

  3. Import data:

    imp full=y constraints=n indexes=n file=fullem102_1.dmp log=fullem102imp_1.log

  4. Import without data and with constraints:

    imp full=y constraints=y indexes=y rows=n ignore=y file=fullem102_2.dmp log=fullem102imp_2.log

Post-Import Enterprise Manager Steps

Follow these steps for post-import Enterprise Manager steps:

  1. Run the post plug-in steps to recompile any invalid objects, create public synonyms, create other users, enable the VPD policy, and repin packages:

    @IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_synonyms.sql

    @IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_post_import.sql

    Check for invalid objects: compare the source and destination schemas for any discrepancy in object counts and invalid objects.

  2. Submit the Enterprise Manager dbms jobs.

    Reset job_queue_processes to its original value and run:

    @IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_submit_dbms_jobs.sql

  3. Update the OMS properties and start the OMS.

    Update emoms.properties to reflect the migrated repository: update the host name (oracle.sysman.eml.mntr.emdRepServer) and the port with the correct values, and start the OMS.

  4. Relocate the Management Services and Repository target.

    If the Management Services and Repository target needs to be migrated to the destination host, run em_assoc.handle_relocated_target to relocate the target, or recreate the target on the target host.

  5. Discover/relocate Database and database Listener targets.

    Discover the target database and listener in Enterprise Manager or relocate the targets from the source agent to the destination agent.

Post Migration Verification

Carry out these verification steps after the migration to ensure that it was completely successful:

  • Verify any discrepancy in objects by comparing source and target databases through Enterprise Manager.

  • Verify the migrated database through Enterprise Manager to determine whether the database is running without any issues.

  • Verify repository operations and DBMS jobs, and check whether any management system errors are reported.

  • Verify that all Enterprise Manager functionalities are working correctly after the migration.

  • Make sure the Management Services and Repository target is properly relocated by verifying it through Enterprise Manager.