Oracle® Enterprise Manager Cloud Control Administrator's Guide 12c Release 1 (12.1.0.1) Part Number E24473-09 |
This chapter describes maintenance and troubleshooting techniques for maintaining a well-performing Management Repository.
Specifically, this chapter contains the following sections:
To be sure that your management data is secure, reliable, and always available, consider the following settings and configuration guidelines when you are deploying the Management Repository:
Install a RAID-capable Logical Volume Manager (LVM) or hardware RAID on the system where the Management Repository resides. At a minimum, the operating system must support disk mirroring and striping. Configure all the Management Repository data files with some redundant configuration.
Use Real Application Clusters to provide the highest levels of availability for the Management Repository.
If you use Enterprise Manager to alert administrators of errors or availability issues in a production environment, be sure that the Cloud Control components are configured with the same level of availability. At a minimum, consider using Oracle Data Guard to mirror the Management Repository database. Configure Data Guard for zero data loss. Choose between Maximum Availability or Maximum Protection based on your environment and needs.
See Also:
Oracle Database High Availability Architecture and Best Practices
Oracle Data Guard Concepts and Administration
Oracle strongly recommends that archive logging be turned on and that a comprehensive backup strategy be in place prior to an Enterprise Manager implementation going live in a production environment. The backup strategy should include both incremental and full backups as required.
See Also:
Oracle Enterprise Manager Cloud Control Installation and Basic Configuration for information about the database initialization parameters required for the Management Repository

When the various components of Enterprise Manager are configured and running efficiently, the Oracle Management Service gathers large amounts of raw data from the Management Agents running on your managed hosts and loads that data into the Management Repository. This data is the raw information that is later aggregated, organized, and presented to you in the Cloud Control console.
After the Oracle Management Service loads information into the Management Repository, Enterprise Manager aggregates and purges the data over time.
The following sections describe:
The default aggregation and purging policies used to maintain data in the Management Repository.
How you can modify the length of time the data is retained before it is aggregated and then purged from the Management Repository.
Enterprise Manager aggregates collected metric data by hour and by day to enhance query performance and help minimize the size of the Management Repository. Before the data is aggregated, each data point is stored in a raw metric data table. Once a day, the previous day's raw metric data is rolled up, or aggregated, into a one-hour and a one-day table. These hourly and daily records will have hourly and daily metric data averages, minimums, maximums and standard deviations respectively.
After Enterprise Manager aggregates the data, the data is then considered eligible for purging. A certain period of time must pass for data to actually be purged. This period of time is called the retention time.
The raw data, with the highest insert volume, has the shortest default retention time, which is set to 7 days. As a result, 7 days after it is aggregated into a one-hour record, a raw data point is eligible for purging.
Note:
This data retention policy varies for JVMD and ADP data.

Hourly aggregated metric data records are purged after 31 days. The highest level of aggregation, one day, is kept for 12 months (roughly 365 days).
The default data retention policies are summarized in Table 19-1.
Table 19-1 Default Repository Purging Policies
Aggregate Level | Retention Time |
---|---|
Raw metric data | 7 days |
Hourly aggregated metric data | 31 days |
Daily aggregated metric data | 12 months (~365 days) |
If you have configured and enabled Application Performance Management, Enterprise Manager also gathers, saves, aggregates, and purges response time data. The response time data is purged using policies similar to those used for metric data. The Application Performance Management purging policies are shown in Table 19-2.
Table 19-2 Default Repository Purging Policies for Application Performance Management Data
Aggregate Level | Retention Time |
---|---|
Raw response time data | 24 hours |
One-hour aggregated response time data | 7 days |
One-hour distribution response time data | 24 hours |
One-day aggregated response time data | 31 days |
One-day distribution aggregated response time data | 31 days |
Besides the metric data and Application Performance Management data, other types of Enterprise Manager data accumulate over time in the Management Repository.
For example, the last availability record for a target will also remain in the Management Repository indefinitely, so the last known state of a target is preserved.
The Enterprise Manager default aggregation and purging policies were designed to provide the most available data for analysis while still providing the best performance and least disk-space requirements for the Management Repository. As a result, you should not modify these policies to improve performance or increase your available disk space.
However, if you plan to extract or review the raw or aggregated data using data analysis tools other than Enterprise Manager, you may want to increase the amount of raw or aggregated data available in the Management Repository. You can accomplish this by increasing the retention times for the raw or aggregated data.
A PL/SQL API has been provided to modify the default retention time for the core metric data tables in the Enterprise Manager repository. Table 19-3 shows the default number of partitions retained for each of the three tables and the size of the partitions for each table. The API will allow you to change the number of partitions retained only.
Table 19-3 Core EM Metric Data Tables and Default Data Retention in the Management Repository
Table Name | Partitions Retained | Partition Size |
---|---|---|
EM_METRIC_VALUES | 7 | DAY |
EM_METRIC_VALUES_HOURLY | 32 | DAY |
EM_METRIC_VALUES_DAILY | 12 | MONTH |
The PL/SQL API is:
gc_interval_partition_mgr.set_retention(<repository schema name>, <table name>, <number of partitions to retain>);
An example of using the PL/SQL API to change the number of partitions retained in EM_METRIC_VALUES (raw data) from the default of 7 to 10 follows:
BEGIN
  gc_interval_partition_mgr.set_retention('SYSMAN', 'EM_METRIC_VALUES', 10);
END;
/
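As a quick sanity check (not part of the documented procedure), you can count the partitions that currently exist for the table using a standard dictionary view; note that the count converges to the new retention setting only after the next partition-maintenance cycle runs:

```sql
-- Sanity-check sketch: count current partitions for the raw metric table.
-- Run as the repository owner (SYSMAN); the count reflects the new
-- retention setting only after the next partition-maintenance job runs.
SELECT table_name, COUNT(*) AS partition_count
  FROM user_tab_partitions
 WHERE table_name = 'EM_METRIC_VALUES'
 GROUP BY table_name;
```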
By default, when you delete a target from the Cloud Control console, Enterprise Manager automatically deletes all target data from the Management Repository.
However, deleting raw and aggregated metric data for database and other data-rich targets is a resource-consuming operation. Targets can have hundreds of thousands of rows of data, and deleting this data can degrade Enterprise Manager performance for the duration of the deletion, especially when several targets are deleted at once.

To avoid this resource-consuming operation, you can prevent Enterprise Manager from performing this task each time you delete a target. In that case, the metric data for deleted targets is not purged as part of the target deletion task; instead, it is purged as part of the regular purge mechanism, which is more efficient.
In addition, Oracle strongly recommends that you do not add new targets with the same name and type as the deleted targets within 24 hours of target deletion. Adding a new target with the same name and type will result in the Cloud Control console showing data belonging to the deleted target for the first 24 hours.
To disable raw metric data deletion:
Use SQL*Plus to connect to the Management Repository as the Management Repository user.
The default Management Repository user is SYSMAN. For example:
SQL> connect sysman/sysman_password;
To disable metric deletion, run the following SQL command.
SQL> EXEC MGMT_ADMIN.DISABLE_METRIC_DELETION();
SQL> COMMIT;
To enable metric deletion at a later point, follow these steps:
Use SQL*Plus to connect to the Management Repository as the Management Repository user.
The default Management Repository user is SYSMAN. For example:
SQL> connect sysman/sysman_password;
To enable metric deletion, run the following SQL command.
SQL> EXEC MGMT_ADMIN.ENABLE_METRIC_DELETION();
SQL> COMMIT;
Enterprise Manager Cloud Control has a default purge policy which removes all finished job details which are older than 30 days. This section provides details for modifying this default purge policy.
The actual purging of completed job history is implemented via a DBMS_SCHEDULER job that runs once a day in the repository database. When the job runs, it looks for finished jobs that are 'n' number of days older than the current time (value of sysdate in the repository database) and deletes these jobs. The value of 'n' is, by default, set to 30 days.
The default purge policy cannot be modified via the Enterprise Manager console, but it can be changed using SQL*Plus.
To modify this purge policy, follow these steps:
Log in to the repository database as the SYSMAN user, via SQL*Plus.
Check the current values for the purge policies using the following command:
SQL> select * from mgmt_job_purge_policies;
POLICY_NAME                      TIME_FRAME
-------------------------------- ----------
SYSPURGE_POLICY                          30
REFRESHFROMMETALINKPURGEPOLICY            7
FIXINVENTORYPURGEPOLICY                   7
OPATCHPATCHUPDATE_PAPURGEPOLICY           7
The purge policy responsible for the job deletion is called SYSPURGE_POLICY. As seen above, the default value is set to 30 days.
To change the time period, you must drop and recreate the policy with a different time frame:
SQL> execute MGMT_JOBS.drop_purge_policy('SYSPURGE_POLICY');
PL/SQL procedure successfully completed.
SQL> execute MGMT_JOBS.register_purge_policy('SYSPURGE_POLICY', 60, null);
PL/SQL procedure successfully completed.
SQL> COMMIT;
Commit complete.
SQL> select * from mgmt_job_purge_policies;
POLICY_NAME                      TIME_FRAME
-------------------------------- ----------
SYSPURGE_POLICY                          60
....
The above commands increase the retention period to 60 days. The time frame can also be reduced below 30 days, depending on the requirement.
You can check when the purge job will be executed next. The actual time that the purge runs is set to 5 AM repository time and can be verified using these steps:
Login to the Repository database using the SYSMAN account.
Execute the following command:
SQL> select job_name,
            to_char(last_start_date, 'DD-MON-YY HH24:MI:SS') last_run,
            to_char(next_run_date, 'DD-MON-YY HH24:MI:SS') next_run
     from all_scheduler_jobs
     where job_name = 'EM_JOB_PURGE_POLICIES';

JOB_NAME              LAST_RUN           NEXT_RUN
--------------------- ------------------ ------------------
EM_JOB_PURGE_POLICIES                    07-SEP-11 05:00:00
The schedule can also be verified from the Enterprise Manager console by following these steps:
From the Setup menu, select Management Service, then select Repository.
Click the Repository Operations tab.
Find the Next Scheduled Run and Last Scheduled Run information for Job Purge in the list.
Please note that the time of the next scheduled execution of the Job Purge does not represent the cutoff time for the retention period; the cutoff time is determined by the purge policy at the time the Job Purge runs.
Enterprise Manager uses the database scheduler (dbms_scheduler) to run various processes in the repository. When the dbms_scheduler is stopped or has insufficient resources to operate, the Enterprise Manager processes do not run or are delayed. The following is a list of common causes that may prohibit the dbms_scheduler from running normally.
Job Queue Processes
The dbms_scheduler uses a separate job-queue process for each job it runs. The maximum number of these processes is controlled by the database parameter, job_queue_processes. If all processes are in use, no new jobs will be started.
The following query returns the number of currently running jobs.
SELECT count(*) FROM dba_scheduler_running_jobs;
If the count is close to the setting of job_queue_processes, it could mean that Enterprise Manager dbms_scheduler jobs cannot be started (on time). Determine if any of the running dbms_scheduler jobs are stuck and consider increasing the setting for job_queue_processes.
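For example, the current limit can be compared against the running-job count with standard dictionary queries (a sketch; the value 50 is purely illustrative, so choose a number appropriate for your system before changing anything):

```sql
-- Current maximum number of job-queue processes
SELECT value FROM v$parameter WHERE name = 'job_queue_processes';

-- Jobs currently running under dbms_scheduler
SELECT COUNT(*) FROM dba_scheduler_running_jobs;

-- If the counts are close, consider raising the limit (illustrative value)
ALTER SYSTEM SET job_queue_processes = 50 SCOPE = BOTH;
```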
Job Slave Processes
The dbms_scheduler also depends on the setting of the dbms_scheduler property MAX_JOB_SLAVE_PROCESSES. If the number of running dbms_scheduler jobs exceeds this setting, no new jobs will be started. This attribute can be checked using this query.
SELECT value FROM dba_scheduler_global_attribute WHERE attribute_name='MAX_JOB_SLAVE_PROCESSES';
If the count equals the number of running dbms_scheduler jobs, then determine if any of the running dbms_scheduler jobs are stuck and consult the dbms_scheduler documentation about how to adjust this attribute.
DBMS_SCHEDULER Program Disabled
The dbms_scheduler has an attribute that can be set to disable this feature in the database. When set, the Enterprise Manager dbms_scheduler jobs will not run. To check if this attribute has been set (inadvertently), run this query.
SELECT * FROM dba_scheduler_global_attribute WHERE attribute_name = 'SCHEDULER_DISABLED';
When a row is returned, the dbms_scheduler is disabled. To re-enable it, execute:

SQL> EXEC dbms_scheduler.set_scheduler_attribute('SCHEDULER_DISABLED', 'FALSE');

Consult the dbms_scheduler documentation for more details about this attribute.
Too Many Database Sessions
Each dbms_scheduler job requires two database sessions. When no more sessions are available, Enterprise Manager dbms_scheduler jobs will not run. The following two queries give the maximum number of allowed sessions and the current number of active sessions:
SELECT value FROM v$parameter WHERE name='sessions';
SELECT count(*) FROM v$session;
When the current number of sessions approaches the maximum, then you should determine if any of the sessions are stuck and consult the Oracle Database documentation about how to increase the maximum number of sessions.
The high-water mark of the number of sessions may also indicate that this issue has played a role in the past:

SELECT * FROM v$resource_limit WHERE resource_name = 'sessions';
If the MAX_UTILIZATION column indicates a value that is close to the maximum number of sessions, it could explain why some of the Enterprise Manager dbms_scheduler jobs may not have run (on time) in the past.
Insufficient Memory
The database may not be able to spawn a new job-queue process when there is insufficient memory available. The message "Unable to spawn jobq slave processes" in the database alert file, in combination with "(free memory = 0.00M)", is indicative of this problem. Consult the Oracle Database documentation about how to diagnose this memory problem further.
The SYSMAN account is the default super user account used to set up and administer Enterprise Manager. It is also the database account that owns the objects stored in the Oracle Management Repository. From this account, you can set up additional administrator accounts and set up Enterprise Manager for use in your organization.

The SYSMAN account is created automatically in the Management Repository database during the Enterprise Manager installation. You also provide a password for the SYSMAN account during the installation.
If you later need to change the SYSMAN database account password, use the following procedure:
Shut down all the Oracle Management Service instances that are associated with the Management Repository.
Stop the Management Agent that is monitoring the target OMS and Repository.
Failure to do this will result in the Management Agent attempting to connect to the target with a wrong password once it is changed with SQL*Plus. This may also result in the SYSMAN account being locked which can subsequently prevent logins to the Cloud Control console to change the password of the target OMS and Repository.
Change the password of the SYSMAN database account using the following SQL*Plus commands:
SQL> connect sysman/oldpassword;
SQL> alter user sysman identified by newpassword;
To change the password of the SYSMAN user, enter the following command. Oracle strongly recommends that you use this emctl command because it updates both the sysman and sysman_mds passwords. If you change the sysman password by another supported method but do not change the sysman_mds password, you may experience issues. This is the only command you need to run to change the repository user password:
emctl config oms -change_repos_pwd [-old_pwd <old_pwd>] [-new_pwd <new_pwd>] [-use_sys_pwd [-sys_pwd <sys_pwd>]]
You must run this command on each Management Service in your environment.
Parameter | Description |
---|---|
-old_pwd | This is the current SYSMAN password. |
-new_pwd | This is the new password. |
-use_sys_pwd | This parameter is optional and is used to connect to the database as the SYS user. |
-sys_pwd | This is the password for the SYS user. |
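For illustration only, with hypothetical placeholder passwords (substitute your own values), the two invocation modes described below look like this:

```shell
# Normal case: the current SYSMAN password is known
# (myOldPwd/myNewPwd are placeholder values)
emctl config oms -change_repos_pwd -old_pwd myOldPwd -new_pwd myNewPwd

# SYSMAN password unknown or account locked: authenticate as SYS instead
emctl config oms -change_repos_pwd -use_sys_pwd -sys_pwd mySysPwd -new_pwd myNewPwd
```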
In the Cloud Control console, click the Targets tab, then click All Targets on the sub tab.
Select the Management Services and Repository target, then click Configure. Enterprise Manager displays the Monitoring Configurations page.
Enter the new password in the Repository password field, then click OK.
In Release 12c, there are two more schemas (SYSMAN_OPSS, SYSMAN_APM) that use the same password. The emctl config oms -change_repos_pwd command updates all these schemas with the new password.
There are two modes in which this emctl command can be launched:
When the current SYSMAN password is known. In this case, launch the command with both the old and new passwords. This command updates the passwords for all the schemas to the new password in the repository and also updates the datasource and all relevant locations where these passwords are stored.
When the current SYSMAN password is not known, or the SYSMAN account is locked. In these cases, provide the SYS credentials and launch the emctl command with the -use_sys_pwd option. This command uses the SYS credentials to update the passwords for all the schemas to the new password in the repository and also updates the datasource and all relevant locations where these passwords are stored.
There are no other commands required to update the schema passwords.
This section provides information about dropping the Management Repository from your existing database and recreating the Management Repository after you install Enterprise Manager.
To recreate the Management Repository, you first remove the Enterprise Manager schema from your Management Repository database. You accomplish this task using the -action dropall argument to the RepManager script, which is described in the following procedure.
To remove the Management Repository from your database:
Locate the RepManager script in the following directory of the Middleware Home where you have installed and deployed the Management Service:
ORACLE_HOME/sysman/admin/emdrep/bin
At the command prompt, enter the following command:
$PROMPT> RepManager repository_host repository_port repository_SID -sys_password password_for_sys_account -action dropall
In this syntax example:
repository_host
is the machine name where the Management Repository database is located
repository_port
is the Management Repository database listener port address, usually 1521
repository_SID
is the Management Repository database system identifier
password_for_sys_account
is the password of the SYS user for the database. For example, change_on_install
-action dropall
indicates that you want to drop the Management Repository
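For example, with hypothetical values (host repohost.example.com, listener port 1521, SID emrep, and a placeholder SYS password), the command might look like this:

```shell
# Hypothetical example values; substitute your own host, port, SID, and password
RepManager repohost.example.com 1521 emrep -sys_password mySysPwd -action dropall
```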
Alternatively, you can use a connect descriptor to identify the database on the RepManager command line. The connect descriptor identifies the host, port, and name of the database using a standard Oracle database syntax.
For example, you can use the connect descriptor as follows to drop the Management Repository:

$PROMPT> ./RepManager -connect "(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=host1)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=servicename)))" -sys_password efkl34lmn -action dropall
See Also:
"Establishing a Connection and Testing the Network" in the Oracle Database Net Services Administrator's Guide for more information about connecting to a database using connect descriptors.

The preferred method for creating the Management Repository is to create the Management Repository during the Enterprise Manager installation procedure, which is performed using Oracle Universal Installer.
See Also:
Oracle Enterprise Manager Cloud Control Installation and Basic Configuration for information about installing Enterprise Manager.

However, if you need to recreate the Management Repository in an existing database, you can use the RepManager script, which is installed when you install the Management Service. Refer to the following sections for more information:
Using the RepManager Script to Create the Management Repository
Using a Connect Descriptor to Identify the Management Repository Database
To create a Management Repository in an existing database:
Review the hardware and software requirements for the Management Repository as described in Oracle Enterprise Manager Cloud Control Installation and Basic Configuration, and review the section "Management Repository Deployment Guidelines".
Locate the RepManager script in the following directory of the Oracle Management Service home directory:
ORACLE_HOME/sysman/admin/emdrep/bin
At the command prompt, enter the following command:
$PROMPT> ./RepManager repository_host repository_port repository_SID -sys_password password_for_sys_account -action create
In this syntax example:
repository_host
is the machine name where the Management Repository database is located
repository_port
is the Management Repository database listener port address, usually 1521 or 1526
repository_SID
is the Management Repository database system identifier
password_for_sys_account
is the password of the SYS user for the database. For example, change_on_install
Enterprise Manager creates the Management Repository in the database you specified in the command line.
Alternatively, you can use a connect descriptor to identify the database on the RepManager command line. The connect descriptor identifies the host, port, and name of the database using a standard Oracle database syntax.
For example, you can use the connect descriptor as follows to create the Management Repository:
$PROMPT> ./RepManager -connect "(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP) (HOST=host1)(PORT=1521)) (CONNECT_DATA=(SERVICE_NAME=servicename)))" -sys_password efkl34lmn -action create
See Also:
"Establishing a Connection and Testing the Network" in the Oracle Database Net Services Administrator's Guide for more information about connecting to a database using a connect descriptor

The ability to use a connect string allows you to provide an address list as part of the connection string. The following example shows how you can provide an address list consisting of two listeners as part of the RepManager command line. If a listener on one host becomes unavailable, the second listener can still accept incoming requests:
$PROMPT> ./RepManager -connect "(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=host1)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=host2)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=servicename)))" -sys_password efkl34lmn -action create
See Also:
Oracle Database High Availability Architecture and Best PracticesOracle Enterprise Manager Cloud Control Installation and Basic Configuration
Oracle Universal Installer creates the Management Repository using a configuration step at the end of the installation process. If the repository configuration tool fails, note the exact error messages displayed in the configuration tools window, wait until the other configuration tools have finished, exit from Universal Installer, and then use the following sections to troubleshoot the problem.
If the creation of your Management Repository is interrupted, you may receive the following when you attempt to create or drop the Management Repository at a later time:
SQL> ERROR:
ORA-00604: error occurred at recursive SQL level 1
ORA-04068: existing state of packages has been discarded
ORA-04067: not executed, package body "SYSMAN.MGMT_USER" does not exist
ORA-06508: PL/SQL: could not find program unit being called
ORA-06512: at "SYSMAN.SETEMUSERCONTEXT", line 5
ORA-06512: at "SYSMAN.CLEAR_EMCONTEXT_ON_LOGOFF", line 4
ORA-06512: at line 4
To fix this problem, see "General Troubleshooting Techniques for Creating the Management Repository".
If you receive an error such as the following when you try to connect to the Management Repository database, you are likely using an unsupported version of the Oracle Database:
Server Connection Hung
To remedy the problem, upgrade your database to the supported version as described in Oracle Enterprise Manager Cloud Control Installation and Basic Configuration.
If you encounter an error while creating the Management Repository, drop the repository by running the RepManager script with the -action dropall argument.
See Also:
Dropping the Management Repository

If the RepManager script drops the repository successfully, try creating the Management Repository again.
If you encounter errors while dropping the Management Repository, do the following:
Connect to the database as SYSDBA using SQL*Plus.
Check to see if the SYSMAN database user exists in the Management Repository database.
For example, use the following command to see if the SYSMAN user exists:
prompt> SELECT username FROM DBA_USERS WHERE username='SYSMAN';
If the SYSMAN user exists, drop the user by entering the following SQL*Plus command:
prompt> DROP USER SYSMAN CASCADE;
Check to see if the following triggers exist:
SYSMAN.EMD_USER_LOGOFF
SYSMAN.EMD_USER_LOGON
For example, use the following command to see if the EMD_USER_LOGOFF trigger exists in the database:
prompt> SELECT trigger_name FROM ALL_TRIGGERS WHERE trigger_name='EMD_USER_LOGOFF';
If the triggers exist, drop them from the database using the following commands:
prompt> DROP TRIGGER SYSMAN.EMD_USER_LOGOFF;
prompt> DROP TRIGGER SYSMAN.EMD_USER_LOGON;
There are specific requirements for migrating an Enterprise Manager repository across servers, whether on the same platform or across platforms.

The Enterprise Manager repository migration process is not the same as an ordinary database migration. When migrating an Enterprise Manager repository, you must take care of Enterprise Manager-specific data, options, and prerequisites for the repository move, and you should make sure data integrity is maintained from both the Enterprise Manager and Oracle Database perspectives.

This creates the need for a defined process that end users can follow for successful and reliable migration of the repository in minimum time and with maximum efficiency.
The overall strategy for migration depends on:
The source and target database version
The amount of data/size of repository
Actual data to migrate [selective/full migration]
Cross-platform transportable tablespaces, along with Data Pump (for metadata), is the fastest and best approach for moving a large Enterprise Manager Cloud Control repository from one platform to another. Another option is to use Data Pump for both data and metadata moves, but this requires more time than the cross-platform transportable tablespace approach for the same amount of data. The advantage of the Data Pump approach is that it provides granular control over options and the overall process, as in the case of selective data being migrated rather than all of the source data. If the source and target databases are not on version 11g, export/import is the only way to migrate the data across platforms.
More details on cross platform transportable tablespace, data pump, and export/import options can be found at the Oracle Technology Network (OTN) or in the Oracle Database Administrator's Guide.
The following lists the common prerequisites for a repository migration:
Source and target databases must use the same character set and should be at the same version.

Source and target databases should meet all the prerequisites for the Enterprise Manager Repository software requirements mentioned in the Enterprise Manager installation guide.

If the source and target databases are NOT on 11g, only export/import can be used for cross-platform migration.

If the source and target databases are on 11g, any of the three options (cross-platform transportable tablespace migration, Data Pump, or export/import) can be used for cross-platform repository migration.
You cannot transport a tablespace to a target database in which a tablespace with the same name already exists. However, you can rename either the tablespace to be transported or the destination tablespace before the transport operation.
To plug a transportable tablespace set into an Oracle Database on a different platform, both databases must have compatibility set to at least 10.0.
Most of the platforms (but not all) are supported for cross-platform tablespace transport. You can query the V$TRANSPORTABLE_PLATFORM view to see the platforms that are supported, and to determine their platform IDs and their endian format (byte ordering).
The source and destination hosts should have an Enterprise Manager Management Agent running and configured for the instance that is to be migrated.
If the target database has an Enterprise Manager repository installed, it should first be dropped using RepManager before the target database-related steps are carried out.
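As noted in the prerequisites, the supported platforms and their endian formats can be listed with a query against the V$TRANSPORTABLE_PLATFORM view:

```sql
-- List the platforms supported for cross-platform tablespace transport,
-- with their platform IDs and endian (byte-ordering) formats
SELECT platform_id, platform_name, endian_format
  FROM v$transportable_platform
 ORDER BY platform_id;
```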
The following sections discuss the methodologies of a repository migration.
Oracle's transportable tablespace feature allows users to quickly move a user tablespace across Oracle databases. It is the most efficient way to move bulk data between databases. Prior to Oracle Database Release 11g, if you wanted to transport a tablespace, both source and target databases needed to be on the same platform. Oracle Database Release 11g adds cross-platform support for transportable tablespaces, so you can transport tablespaces across platforms.

Cross-platform transportable tablespaces allow a database to be migrated from one platform to another (used with Data Pump or export/import).
Use these steps to prepare for transportable tablespaces:
Prepare the set of user tablespaces and check for containment violations.

EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('MGMT_TABLESPACE,MGMT_ECM_DEPOT_TS', TRUE);
SELECT * FROM transport_set_violations;
Shut down OMS instances and prepare for migration.

Shut down the OMS, set job_queue_processes to 0, and run:
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_remove_dbms_jobs.sql
Make the tablespaces to be transported read only.
alter tablespace MGMT_TABLESPACE read only;
alter tablespace MGMT_ECM_DEPOT_TS read only;
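A quick sketch to verify that both tablespaces are in fact read only before extracting the metadata (the expected STATUS for each is READ ONLY):

```sql
-- Verify the tablespaces to be transported are read only
SELECT tablespace_name, status
  FROM dba_tablespaces
 WHERE tablespace_name IN ('MGMT_TABLESPACE', 'MGMT_ECM_DEPOT_TS');
```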
Extract Metadata for transportable tablespaces using Data Pump Utility:
Create the data pump directory.
create directory data_pump_dir as '/scratch/user/EM102/ttsdata';
Extract the metadata using Data Pump (or export).
expdp DUMPFILE=ttsem102.dmp TRANSPORT_TABLESPACES=MGMT_TABLESPACE,MGMT_ECM_DEPOT_TS TRANSPORT_FULL_CHECK=Y
Extract other objects (packages, procedures, functions, temporary tables, and so on) not contained in the user tablespaces.
expdp SCHEMAS=SYSMAN CONTENT=METADATA_ONLY EXCLUDE=INDEX,CONSTRAINT DUMPFILE=data_pump_dir:postexp.dmp LOGFILE=data_pump_dir:postexp.log JOB_NAME=expmet
Run the endian check and convert the datafiles if the endianness differs between source and destination:

For the endian check, run this query on both the source and destination databases:
SELECT endian_format
FROM v$transportable_platform tp, v$database d
WHERE tp.platform_name = d.platform_name;
If the source and target platforms are of different endianness, an additional step must be performed on either the source or the target platform to convert the tablespace being transported to the target format. If they are of the same endianness, no conversion is necessary and the tablespaces can be transported as if both platforms were the same.
Example:
Source endian: Linux IA (32-bit) - Little
Destination endian: Solaris[tm] OE (32-bit) - Big
Ship the datafiles and the metadata dump to the target, and convert the datafiles using RMAN. On the target, convert all datafiles to the destination endian format:
CONVERT DATAFILE '/d14/em10g/oradata/em102/mgmt.dbf', '/d14/em10g/oradata/em102/mgmt_ecm_depot1.dbf' FROM PLATFORM 'Linux IA (32-bit)';
The RMAN conversion can be performed on either the source or the target (for details, refer to the RMAN documentation). Parallelism can be used to speed up the process if the user tablespaces contain multiple datafiles.
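For example, the conversion above could be parallelized with the RMAN PARALLELISM clause. This is a sketch only; the degree of 2 is an assumption and should be tuned to the number of datafiles and available CPUs:

```
RMAN> CONVERT DATAFILE
        '/d14/em10g/oradata/em102/mgmt.dbf',
        '/d14/em10g/oradata/em102/mgmt_ecm_depot1.dbf'
      FROM PLATFORM 'Linux IA (32-bit)'
      PARALLELISM 2;
```

Because each datafile is converted by a single channel, a degree of parallelism higher than the number of datafiles being converted provides no additional benefit.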
Use the following steps to import metadata and plugin tablespaces:
Run RepManager to drop the target repository (if the target database has the Enterprise Manager repository installed):
RepManager repository_host repository_port repository_SID -sys_password password_for_sys_account -action drop
Run the pre-import steps to create the sysman user and grant privileges on the target database:
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_repos_user.sql
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_pre_import.sql
Invoke the Data Pump utility to plug the set of tablespaces into the target database:
impdp DUMPFILE=ttsem102.dmp DIRECTORY=data_pump_dir
TRANSPORT_DATAFILES=/d14/em10g/oradata/em102/mgmt.dbf,/d14/em10g/oradata/em102/mgmt_ecm_depot1.dbf
Import other objects (packages, procedures, functions, and so on):
impdp CONTENT=METADATA_ONLY EXCLUDE=INDEX,CONSTRAINT DUMPFILE=data_pump_dir:postexp.dmp LOGFILE=data_pump_dir:postexp.log
Follow these post plug-in steps:
Run the post plug-in steps to recompile any invalid objects, create public synonyms, create the other users, enable the VPD policy, and repin packages:
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_synonyms.sql
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_post_import.sql
Check for invalid objects: compare the source and destination schemas for any discrepancy in object counts or invalid objects.
Bring user tablespaces back to read write mode:
alter tablespace MGMT_TABLESPACE read write;
alter tablespace MGMT_ECM_DEPOT_TS read write;
Submit the Enterprise Manager DBMS jobs.
Reset job_queue_processes to its original value and run:
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_submit_dbms_jobs.sql
Update the OMS properties and start the OMS.
Update emoms.properties to reflect the migrated repository: set the host name (oracle.sysman.eml.mntr.emdRepServer) and the port to the correct values, then start the OMS.
Relocate the Management Services and Repository target.
If the Management Services and Repository target needs to be migrated to the destination host, run em_assoc.handle_relocated_target to relocate the target, or recreate the target on the target host.
Discover/relocate the database and database listener targets.
Discover the target database and listener in Enterprise Manager, or relocate the targets from the source agent to the destination agent.
Oracle Data Pump technology enables high-speed, parallel movement of bulk data and metadata from one database to another. Data Pump uses APIs to load and unload data instead of conventional SQL commands. Data Pump operations can be run through the Enterprise Manager interface, and Data Pump is very useful for cross-platform database migration.
Migrating the database with the Data Pump export and import tools comprises the following steps: export the data into a dump file on the source server with the expdp command; copy or move the dump file to the target server; import the dump file into Oracle on the target server with the impdp command; and run the post-import Enterprise Manager-specific steps.
Tuning parameters that were used in the original Export and Import utilities, such as BUFFER and RECORDLENGTH, are neither required nor supported by Data Pump Export and Import.
Use the following steps to prepare for Data Pump:
Review the following prerequisite for using Data Pump with the Enterprise Manager repository:
impdp fails for the Enterprise Manager repository because of a Data Pump bug (Bug 4386766 - IMPDP WITH COMPRESSED INDEXES FAILS WITH ORA-14071 AND ORA-39083). The bug is fixed in release 10.2, and a backport is available for 10.1.0.4. This RDBMS patch must be applied before expdp/impdp can be used for the Enterprise Manager repository migration; the workaround is to use exp/imp for the extract and import.
Create the data pump directory:
create directory data_pump_dir as '/scratch/user/EM102/ttsdata';
Shut down the OMS instances and prepare for migration.
Shut down the OMS, set job_queue_processes to 0, and run:
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_remove_dbms_jobs.sql
To improve the throughput of a job, use the PARALLEL parameter to set a degree of parallelism that takes maximum advantage of current conditions. In general, the degree of parallelism should be set to no more than twice the number of CPUs on an instance.
All Data Pump actions are performed by multiple jobs (server processes, not DBMS_JOB jobs). These jobs are controlled by a master control process, which uses Advanced Queuing. At runtime, an advanced queue table named after the job name is created and used by the master control process; the table is dropped when the Data Pump job completes. The job and the advanced queue can be named using the JOB_NAME parameter.
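While a Data Pump job is running, its state and degree of parallelism can be observed from the data dictionary. A minimal example, assuming you are connected as a user with access to the DBA views:

```sql
-- List active Data Pump jobs with their current state
SELECT owner_name, job_name, operation, job_mode, state, degree
  FROM dba_datapump_jobs;
```

A row remains visible here for the lifetime of the job; an orphaned row after an aborted job usually indicates a leftover master table that can be dropped.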
The DBMS_DATAPUMP APIs can also be used to perform a Data Pump export or import. Refer to the Data Pump section of the Oracle Database 10g administration documentation for all the options.
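As an illustration, a full export similar to the expdp command shown below can be driven entirely from PL/SQL. This is a minimal sketch; the job name and file names are assumptions, and DATA_PUMP_DIR is the directory object created earlier:

```sql
DECLARE
  h         NUMBER;
  job_state VARCHAR2(30);
BEGIN
  -- Open a full-database export job (job name is illustrative)
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT',
                          job_mode  => 'FULL',
                          job_name  => 'DPEXP_API');

  -- Attach a dump file and a log file in the directory object
  DBMS_DATAPUMP.ADD_FILE(h, 'dpexp_api%U.dmp', 'DATA_PUMP_DIR');
  DBMS_DATAPUMP.ADD_FILE(h, 'dpexp_api.log', 'DATA_PUMP_DIR',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

  -- Equivalent of the PARALLEL=4 command-line parameter
  DBMS_DATAPUMP.SET_PARALLEL(h, 4);
  DBMS_DATAPUMP.START_JOB(h);

  -- Block until the job reaches a terminating state
  DBMS_DATAPUMP.WAIT_FOR_JOB(h, job_state);
END;
/
```

The handle returned by DBMS_DATAPUMP.OPEN can also be passed to DBMS_DATAPUMP.GET_STATUS to poll progress instead of blocking in WAIT_FOR_JOB.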
Use these steps to run data pump export:
Run data pump export:
expdp FULL=y DUMPFILE=data_pump_dir:dpfull1%U.dmp, data_pump_dir:dpfull2%U.dmp PARALLEL=4 LOGFILE=data_pump_dir:dpexpfull.log JOB_NAME=dpexpfull
Verify the logs for any errors during the export.
A Data Pump direct path export sometimes fails for mgmt_metrics_raw and raises ORA-600. This is due to Bug 4221775 (4233303), which is fixed in release 10.2. The workaround: when using expdp for mgmt_metrics_raw, run expdp with the ACCESS_METHOD=EXTERNAL_TABLE parameter:
expdp directory=db_export dumpfile=exp_st2.dmp logfile=exp_st2.log tables=sysman.mgmt_metrics_raw access_method=external_table
Use these steps to run data pump import:
Run RepManager to drop the target repository (if the target database has the Enterprise Manager repository installed):
RepManager repository_host repository_port repository_SID -sys_password password_for_sys_account -action drop
Prepare the target database:
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_tablespaces.sql
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_repos_user.sql
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_pre_import.sql
Run data pump import:
impdp FULL=y DUMPFILE=data_pump_dir:dpfull1%U.dmp, data_pump_dir:dpfull2%U.dmp PARALLEL=4 LOGFILE=data_pump_dir:dpimpfull.log JOB_NAME=dpimpfull
Verify the logs for any issues with the import.
Perform the following post-import Enterprise Manager steps:
Run the post plug-in steps to recompile any invalid objects, create public synonyms, create the other users, enable the VPD policy, and repin packages:
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_synonyms.sql
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_post_import.sql
Check for invalid objects: compare the source and destination schemas for any discrepancy in object counts or invalid objects.
Submit the Enterprise Manager DBMS jobs.
Reset job_queue_processes to its original value and run:
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_submit_dbms_jobs.sql
Update the OMS properties and start the OMS.
Update emoms.properties to reflect the migrated repository: set the host name (oracle.sysman.eml.mntr.emdRepServer) and the port to the correct values, then start the OMS.
Relocate the Management Services and Repository target.
If the Management Services and Repository target needs to be migrated to the destination host, run em_assoc.handle_relocated_target to relocate the target, or recreate the target on the target host.
Discover/relocate the database and database listener targets.
Discover the target database and listener in Enterprise Manager, or relocate the targets from the source agent to the destination agent.
If the source and destination databases are not release 10g, then Export/Import is the only option for cross-platform database migration.
To improve Export/Import performance, set higher values for BUFFER and RECORDLENGTH. Do not export to NFS, as it slows down the process considerably. Direct path export can be used to increase performance. Note that because Enterprise Manager uses Virtual Private Database (VPD), Oracle uses conventional mode on tables where a policy is defined, even if direct path is requested.
Also, the user running the export should have the EXEMPT ACCESS POLICY privilege in order to export all rows, because that user is then exempt from VPD policy enforcement. SYS is always exempt from VPD or Oracle Label Security policy enforcement, regardless of the export mode, application, or utility used to extract data from the database.
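For example, the privilege can be granted as follows. This is a sketch; SYSTEM is shown only as an illustrative account, so grant to whichever user actually runs the export:

```sql
-- Allow the exporting user to bypass VPD policy enforcement during export
GRANT EXEMPT ACCESS POLICY TO system;
```

The grant can be revoked with REVOKE EXEMPT ACCESS POLICY once the migration export is complete.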
Use the following steps to prepare for Export/Import:
Check the MGMT_METRICS_RAW partitions:
select table_name, partitioning_type type, partition_count count, subpartitioning_type subtype from dba_part_tables where table_name = 'MGMT_METRICS_RAW';
If MGMT_METRICS_RAW has more than 3276 partitions, see Bug 4376351, which is fixed in release 10.2. The workaround is to export mgmt_metrics_raw in conventional mode.
Shut down the OMS instances and prepare for migration.
Shut down the OMS, set job_queue_processes to 0, and run:
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_remove_dbms_jobs.sql
Follow these steps for export:
Export data:
exp full=y constraints=n indexes=n compress=y file=fullem102_1.dmp log=fullem102exp_1.log
Export without data and with constraints:
exp full=y constraints=y indexes=y rows=n ignore=y file=fullem102_2.dmp log=fullem102exp_2.log
Follow these steps to import:
Run RepManager to drop the target repository (if the target database has the Enterprise Manager repository installed):
RepManager repository_host repository_port repository_SID -sys_password password_for_sys_account -action drop
Pre-create the tablespaces and the users in the target database:
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_tablespaces.sql
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_repos_user.sql
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_pre_import.sql
Import data:
imp full=y constraints=n indexes=n file=fullem102_1.dmp log=fullem102imp_1.log
Import without data and with constraints:
imp full=y constraints=y indexes=y rows=n ignore=y file=fullem102_2.dmp log=fullem102imp_2.log
Follow these post-import Enterprise Manager steps:
Run the post plug-in steps to recompile any invalid objects, create public synonyms, create the other users, enable the VPD policy, and repin packages:
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_synonyms.sql
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_post_import.sql
Check for invalid objects: compare the source and destination schemas for any discrepancy in object counts or invalid objects.
Submit the Enterprise Manager DBMS jobs.
Reset job_queue_processes to its original value and run:
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_submit_dbms_jobs.sql
Update the OMS properties and start the OMS.
Update emoms.properties to reflect the migrated repository: set the host name (oracle.sysman.eml.mntr.emdRepServer) and the port to the correct values, then start the OMS.
Relocate the Management Services and Repository target.
If the Management Services and Repository target needs to be migrated to the destination host, run em_assoc.handle_relocated_target to relocate the target, or recreate the target on the target host.
Discover/relocate Database and database Listener targets.
Discover the target database and listener in Enterprise Manager or relocate the targets from the source agent to the destination agent.
Carry out the following verification steps after the migration to ensure that it was completely successful:
Verify any discrepancy in objects by comparing source and target databases through Enterprise Manager.
Verify the migrated database through Enterprise Manager to determine whether the database is running without any issues.
Verify the repository operations and DBMS jobs, and check whether any management system errors are reported.
Verify that all Enterprise Manager functionalities are working correctly after the migration.
Verify through Enterprise Manager that the Management Services and Repository target has been properly relocated.