Oracle® Enterprise Manager Advanced Configuration 10g Release 5 (10.2.0.5) Part Number E10954-03
This chapter describes maintenance and troubleshooting techniques for maintaining a well-performing Management Repository.
Specifically, this chapter contains the following sections:
To be sure that your management data is secure, reliable, and always available, consider the following settings and configuration guidelines when you are deploying the Management Repository:
Install a RAID-capable Logical Volume Manager (LVM) or hardware RAID on the system where the Management Repository resides. At a minimum, the operating system must support disk mirroring and striping. Configure all Management Repository data files with some redundant configuration.
Use Real Application Clusters to provide the highest levels of availability for the Management Repository.
If you use Enterprise Manager to alert administrators of errors or availability issues in a production environment, be sure that the Grid Control components are configured with the same level of availability. At a minimum, consider using Oracle Data Guard to mirror the Management Repository database. Configure the Data Guard environment for no data loss.
Oracle strongly recommends that archive logging be turned on and that a comprehensive backup strategy be in place prior to an Enterprise Manager implementation going live in a production environment. The backup strategy should include both incremental and full backups as required.
See Also:
Oracle Enterprise Manager Grid Control Installation and Basic Configuration for information about the database initialization parameters required for the Management Repository

When the various components of Enterprise Manager are configured and running efficiently, the Oracle Management Service gathers large amounts of raw data from the Management Agents running on your managed hosts and loads that data into the Management Repository. This data is the raw information that is later aggregated, organized, and presented to you in the Grid Control Console.
After the Oracle Management Service loads information into the Management Repository, Enterprise Manager aggregates and purges the data over time.
The following sections describe:
The default aggregation and purging policies used to maintain data in the Management Repository.
How you can modify the length of time the data is retained before it is aggregated and then purged from the Management Repository.
Enterprise Manager aggregates your management data by hour and by day to minimize the size of the Management Repository. Before the data is aggregated, each data point is stored in a raw data table. Raw data is rolled up, or aggregated, into a one-hour aggregated metric table. One-hour records are then rolled up into a one-day table.
After Enterprise Manager aggregates the data, the data is then considered eligible for purging. However, a certain period of time, called the retention time, must pass before the data is actually purged.
The raw data, with the highest insert volume, has the shortest default retention time, which is set to 7 days. As a result, 7 days after it is aggregated into a one-hour record, a raw data point is eligible for purging.
One-hour aggregate data records are purged 31 days after they are rolled up to the one-day data table. The highest level of aggregation, one day, is kept for 365 days.
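The rollup-and-retention scheme described above can be sketched as a small simulation. This is a hypothetical helper for illustration only, not part of Enterprise Manager; the retention windows are the defaults from the text:

```python
from datetime import datetime, timedelta

# Default retention windows, per the purging policies described above.
RETENTION = {
    "raw": timedelta(days=7),        # raw data, counted from rollup to one-hour
    "one_hour": timedelta(days=31),  # one-hour data, counted from rollup to one-day
    "one_day": timedelta(days=365),  # one-day aggregates
}

def purge_eligible_at(rollup_time: datetime, level: str) -> datetime:
    """Return the time at which a record becomes eligible for purging."""
    return rollup_time + RETENTION[level]

# A raw data point rolled up on September 1 is purge-eligible 7 days later.
rolled_up = datetime(2008, 9, 1)
print(purge_eligible_at(rolled_up, "raw"))  # 2008-09-08 00:00:00
```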
The default data retention policies are summarized in Table 10-1.
Table 10-1 Default Repository Purging Policies
Aggregate Level | Retention Time |
---|---|
Raw metric data | 7 days |
One-hour aggregated metric data | 31 days |
One-day aggregated metric data | 365 days |
If you have configured and enabled Application Performance Management, Enterprise Manager also gathers, saves, aggregates, and purges response time data. The response time data is purged using policies similar to those used for metric data. The Application Performance Management purging policies are shown in Table 10-2.
Table 10-2 Default Repository Purging Policies for Application Performance Management Data
Aggregate Level | Retention Time |
---|---|
Raw response time data | 24 hours |
One-hour aggregated response time data | 7 days |
One-hour distribution response time data | 24 hours |
One-day aggregated response time data | 31 days |
One-day distribution aggregated response time data | 31 days |
Besides the metric data and Application Performance Management data, other types of Enterprise Manager data accumulate over time in the Management Repository.
For example, the last availability record for a target will also remain in the Management Repository indefinitely, so the last known state of a target is preserved.
The Enterprise Manager default aggregation and purging policies were designed to provide the most available data for analysis while still providing the best performance and disk-space requirements for the Management Repository. As a result, you should not modify these policies in an attempt to improve performance or increase your available disk space. Modifying these default policies can affect the performance of the Management Repository and can adversely affect the scalability of your Enterprise Manager installation.
However, if you plan to extract or review the raw or aggregated data using data analysis tools other than Enterprise Manager, you may want to increase the amount of raw or aggregated data available in the Management Repository. You can accomplish this by increasing the retention times for the raw or aggregated data.
To modify the default retention time for each level of management data in the Management Repository, you must insert additional rows into the MGMT_PARAMETERS table in the Management Repository database. Table 10-3 shows the parameters you must insert into the MGMT_PARAMETERS table to modify the retention time for each of the raw data and aggregate data tables.
Parameters whose names contain "_rt_" apply to the tables used for Application Performance Monitoring response time data; a separate table exists for each of the three response time data types: DOMAIN, IP, and URL.
Table 10-3 Parameters for Modifying Default Data Retention Times in the Management Repository
Parameter in MGMT_PARAMETERS Table | Default Retention Value |
---|---|
mgmt_raw_keep_window | 7 days |
mgmt_hour_keep_window | 31 days |
mgmt_day_keep_window | 365 days |
mgmt_rt_keep_window | 24 hours |
mgmt_rt_hour_keep_window | 7 days |
mgmt_rt_day_keep_window | 31 days |
mgmt_rt_dist_hour_keep_window | 24 hours |
mgmt_rt_dist_day_keep_window | 31 days |
Note:
If the first three entries listed in Table 10-3 are not partitioned, the Default Retention Value for each is 1, 7, and 31 days respectively, rather than the 7, 31, and 365 days listed for partitioned tables.

For example, to change the default retention time for the table MGMT_METRICS_RAW from seven days to 14 days:
Use SQL*Plus to connect to the Management Repository database as the Management Repository user.
The default Management Repository user is sysman.
Enter the following SQL to insert the parameter and change the default value:
INSERT INTO MGMT_PARAMETERS (PARAMETER_NAME, PARAMETER_VALUE) VALUES ('mgmt_raw_keep_window','14');
Similarly, to change the default retention time for all of the MGMT_RT_datatype_1DAY tables from 31 days to 100 days:
Use SQL*Plus to connect to the Management Repository database as the Management Repository user.
The default Management Repository user is sysman.
Enter the following SQL to insert the parameter and change the default value:
INSERT INTO MGMT_PARAMETERS (PARAMETER_NAME, PARAMETER_VALUE) VALUES ('mgmt_rt_day_keep_window', '100');
By default, when you delete a target from the Grid Control Console, Enterprise Manager automatically deletes all target data from the Management Repository.
However, deleting raw and aggregated metric data for database and other data-rich targets is a resource-consuming operation. Targets can have hundreds of thousands of rows of data, and the act of deleting this data can degrade the performance of Enterprise Manager for the duration of the deletion, especially when several targets are deleted at once.

To avoid this resource-consuming operation, you can prevent Enterprise Manager from performing this task each time you delete a target. When you do so, the metric data for deleted targets is not purged as part of the target deletion task; instead, it is purged as part of the regular purge mechanism, which is more efficient.
In addition, Oracle strongly recommends that you do not add new targets with the same name and type as the deleted targets within 24 hours of target deletion. Adding a new target with the same name and type will result in the Grid Control Console showing data belonging to the deleted target for the first 24 hours.
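The 24-hour guideline above can be expressed as a small check. This is a hypothetical helper for illustration, not an Enterprise Manager API:

```python
from datetime import datetime, timedelta

def safe_to_readd(deleted_at: datetime, now: datetime) -> bool:
    """Per the guideline above, wait at least 24 hours before adding a new
    target with the same name and type as a deleted target."""
    return now - deleted_at >= timedelta(hours=24)

deleted = datetime(2008, 9, 23, 9, 0)
print(safe_to_readd(deleted, datetime(2008, 9, 23, 18, 0)))  # False (only 9 hours)
print(safe_to_readd(deleted, datetime(2008, 9, 24, 10, 0)))  # True (25 hours)
```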
To disable raw metric data deletion:
Use SQL*Plus to connect to the Management Repository as the Management Repository user.
The default Management Repository user is SYSMAN. For example:
SQL> connect sysman/oldpassword;
To disable metric deletion, run the following SQL command.
SQL> EXEC MGMT_ADMIN.DISABLE_METRIC_DELETION();
SQL> COMMIT;
To enable metric deletion at a later point, follow these steps:
Use SQL*Plus to connect to the Management Repository as the Management Repository user.
The default Management Repository user is SYSMAN. For example:
SQL> connect sysman/oldpassword;
To enable metric deletion, run the following SQL command.
SQL> EXEC MGMT_ADMIN.ENABLE_METRIC_DELETION();
SQL> COMMIT;
Enterprise Manager Grid Control has a default purge policy that removes all finished job details older than 30 days. This section provides details for modifying this default purge policy.
The actual purging of completed job history is implemented via a DBMS job that runs once a day in the repository database. When the job runs, it looks for finished jobs that are 'n' number of days older than the current time (value of sysdate in the repository database) and deletes these jobs. The value of 'n' is, by default, set to 30 days.
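The selection logic of that daily purge job can be sketched as follows. This is a minimal simulation of the behavior described above, not the actual DBMS job:

```python
from datetime import datetime, timedelta

def jobs_to_purge(finished_jobs, now, retention_days=30):
    """Mimic the daily purge job: select finished jobs whose completion time
    is more than 'retention_days' before the repository's current time
    (the value of sysdate in the repository database)."""
    cutoff = now - timedelta(days=retention_days)
    return [name for name, finished_at in finished_jobs if finished_at < cutoff]

now = datetime(2008, 9, 23)
jobs = [("backup", datetime(2008, 7, 1)), ("clone", datetime(2008, 9, 20))]
print(jobs_to_purge(jobs, now))  # ['backup']
```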
The default purge policy cannot be modified via the Enterprise Manager console, but it can be changed using SQL*Plus.
To modify this purge policy, follow these steps:
Log in to the repository database as the SYSMAN user, via SQL*Plus.
Check the current values for the purge policies using the following command:
SQL> select * from mgmt_job_purge_policies;
POLICY_NAME                      TIME_FRAME
-------------------------------- ----------
SYSPURGE_POLICY                          30
REFRESHFROMMETALINKPURGEPOLICY            7
FIXINVENTORYPURGEPOLICY                   7
OPATCHPATCHUPDATE_PAPURGEPOLICY           7
The purge policy responsible for the job deletion is called SYSPURGE_POLICY. As seen above, the default value is set to 30 days.
To change the time period, you must drop and re-create the policy with a different time frame:
SQL> execute MGMT_JOBS.drop_purge_policy('SYSPURGE_POLICY');
PL/SQL procedure successfully completed.
SQL> execute MGMT_JOBS.register_purge_policy('SYSPURGE_POLICY', 60, null);
PL/SQL procedure successfully completed.
SQL> COMMIT;
Commit complete.
SQL> select * from mgmt_job_purge_policies;
POLICY_NAME                      TIME_FRAME
-------------------------------- ----------
SYSPURGE_POLICY                          60
....
The above commands increase the retention period to 60 days. The timeframe can also be reduced below 30 days, depending on the requirement.
You can check when the purge job will be executed next. The actual time that the job runs may vary with each Enterprise Manager installation. To determine this time in your setup follow these steps:
Log in to the Repository database using the SYSMAN account.
Execute the following command:
SQL> alter session set nls_date_format='mm/dd/yy hh:mi:ss pm';
SQL> select what, next_date from user_jobs where what like '%JOB_ENGINE%';
WHAT
------------------------------------------------------------------------------
NEXT_DATE
--------------------
MGMT_JOB_ENGINE.apply_purge_policies();
09/23/08 10:26:17 am
In this example, the purge policy DBMS job will run every day at 10:26:17 AM, repository time.
The SYSMAN account is the default super user account used to set up and administer Enterprise Manager. It is also the database account that owns the objects stored in the Oracle Management Repository. From this account, you can set up additional administrator accounts and set up Enterprise Manager for use in your organization.
The SYSMAN account is created automatically in the Management Repository database during the Enterprise Manager installation. You also provide a password for the SYSMAN account during the installation.
See Also:
Oracle Enterprise Manager Grid Control Installation and Basic Configuration for information about installing Enterprise Manager

If you later need to change the SYSMAN database account password, use the following procedure:
Shut down all the Oracle Management Service instances that are associated with the Management Repository.
Stop the agent that is monitoring the target OMS and Repository.
Failure to do this will result in the agent attempting to connect to the target with a wrong password once it is changed with SQL*Plus. This may also result in the SYSMAN account being locked which can subsequently prevent logins to the Grid Control console to change the password of the target OMS and Repository.
Change the password of the SYSMAN database account using the following SQL*Plus commands:
SQL> connect sysman/oldpassword;
SQL> alter user sysman identified by newpassword;
For each Management Service associated with the Management Repository, locate the emoms.properties configuration file.
The emoms.properties file can be found in the following directory of the Oracle Application Server Home where the Oracle Management Service is installed and deployed:
IAS_HOME/sysman/config/
Locate the following entries in the emoms.properties file:
oracle.sysman.eml.mntr.emdRepPwd=ece067ffc15edc4f
oracle.sysman.eml.mntr.emdRepPwdEncrypted=TRUE
Enter your new password in the first entry and enter FALSE in the second entry.
For example:
oracle.sysman.eml.mntr.emdRepPwd=new_password
oracle.sysman.eml.mntr.emdRepPwdEncrypted=FALSE
Save and exit the emoms.properties file, and restart each Management Service associated with the Management Repository.
In the Grid Control console, click the Targets tab and then click All Targets on the sub tab.
Select the Management Services and Repository target and click Configure. Enterprise Manager displays the Monitoring Configurations page.
Enter the new password in the Repository password field and click OK.
After the Management Service has started, you can check the contents of the emoms.properties file to be sure the password you entered has been encrypted.
For example, the entries should appear as follows:
oracle.sysman.eml.mntr.emdRepPwd=ece067ffc15edc4f
oracle.sysman.eml.mntr.emdRepPwdEncrypted=TRUE
During repository creation, the MGMT_VIEW user is created. This account is used by the Grid Control reporting framework to execute queries for Table from SQL and Chart from SQL report elements. The OMS is the only entity that uses the account, so there is no need to know the password. However, you can still change the password if you choose, which requires that you bounce the OMS. To change the password, you can use either a PL/SQL call or an EMCTL command:
PL/SQL:
SQL> exec mgmt_view_priv.change_view_user_password('<random pwd>');
EMCTL command:
emctl config oms -change_view_user_pwd [-sysman_pwd <pwd>] [-user_pwd <pwd>] [-autogenerate]
This section provides information about dropping the Management Repository from your existing database and recreating the Management Repository after you install Enterprise Manager.
To recreate the Management Repository, you first remove the Enterprise Manager schema from your Management Repository database. You accomplish this task using the -action drop argument to the RepManager script, which is described in the following procedure.
To remove the Management Repository from your database:
Locate the RepManager script in the following directory of the Oracle Application Server Home where you have installed and deployed the Management Service:
IAS_HOME/sysman/admin/emdrep/bin
At the command prompt, enter the following command:
$PROMPT> RepManager repository_host repository_port repository_SID -sys_password password_for_sys_account -action drop
In this syntax example:
repository_host is the machine name where the Management Repository database is located
repository_port is the Management Repository database listener port address, usually 1521 or 1526
repository_SID is the Management Repository database system identifier
password_for_sys_account is the password of the SYS user for the database (for example, change_on_install)
-action drop indicates that you want to drop the Management Repository
Alternatively, you can use a connect descriptor to identify the database on the RepManager command line. The connect descriptor identifies the host, port, and name of the database using a standard Oracle database syntax.
For example, you can use the connect descriptor as follows to drop the Management Repository:
$PROMPT> ./RepManager -connect "(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP) (HOST=host1)(PORT=1521)) (CONNECT_DATA=(SERVICE_NAME=servicename)))" -sys_password efkl34lmn -action drop
See Also:
"Establishing a Connection and Testing the Network" in the Oracle Database Net Services Administrator's Guide for more information about connecting to a database using connect descriptorsThe preferred method for creating the Management Repository is to create the Management Repository during the Enterprise Manager installation procedure, which is performed using Oracle Universal Installer.
See Also:
Oracle Enterprise Manager Grid Control Installation and Basic Configuration for information about installing Enterprise Manager

However, if you need to recreate the Management Repository in an existing database, you can use the RepManager script, which is installed when you install the Management Service. Refer to the following sections for more information:
Using the RepManager Script to Create the Management Repository
Using a Connect Descriptor to Identify the Management Repository Database
To create a Management Repository in an existing database:
Review the hardware and software requirements for the Management Repository as described in Oracle Enterprise Manager Grid Control Installation and Basic Configuration, and review the section "Management Repository Deployment Guidelines".
Locate the RepManager script in the following directory of the Oracle Management Service home directory:
ORACLE_HOME/sysman/admin/emdrep/bin
At the command prompt, enter the following command:
$PROMPT> ./RepManager repository_host repository_port repository_SID -sys_password password_for_sys_account -action create
In this syntax example:
repository_host is the machine name where the Management Repository database is located
repository_port is the Management Repository database listener port address, usually 1521 or 1526
repository_SID is the Management Repository database system identifier
password_for_sys_account is the password of the SYS user for the database (for example, change_on_install)
Enterprise Manager creates the Management Repository in the database you specified in the command line.
Alternatively, you can use a connect descriptor to identify the database on the RepManager command line. The connect descriptor identifies the host, port, and name of the database using a standard Oracle database syntax.
For example, you can use the connect descriptor as follows to create the Management Repository:
$PROMPT> ./RepManager -connect "(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP) (HOST=host1)(PORT=1521)) (CONNECT_DATA=(SERVICE_NAME=servicename)))" -sys_password efkl34lmn -action create
See Also:
"Establishing a Connection and Testing the Network" in the Oracle Database Net Services Administrator's Guide for more information about connecting to a database using a connect descriptorThe ability to use a connect string allows you to provide an address list as part of the connection string. The following example shows how you can provide an address list consisting of two listeners as part of the RepManager
command line. If a listener on one host becomes unavailable, the second listener can still accept incoming requests:
$PROMPT> ./RepManager -connect "(DESCRIPTION= (ADDRESS_LIST= (ADDRESS=(PROTOCOL=TCP)(HOST=host1)(PORT=1521) (ADDRESS=(PROTOCOL=TCP)(HOST=host2)(PORT=1521) (CONNECT_DATE=(SERVICE_NAME=servicename)))" -sys_password efkl34lmn -action create
Oracle Universal Installer creates the Management Repository using a configuration step at the end of the installation process. If the repository configuration tool fails, note the exact error messages displayed in the configuration tools window, wait until the other configuration tools have finished, exit from Universal Installer, and then use the following sections to troubleshoot the problem.
If the creation of your Management Repository is interrupted, you may receive the following when you attempt to create or drop the Management Repository at a later time:
SQL> ERROR:
ORA-00604: error occurred at recursive SQL level 1
ORA-04068: existing state of packages has been discarded
ORA-04067: not executed, package body "SYSMAN.MGMT_USER" does not exist
ORA-06508: PL/SQL: could not find program unit being called
ORA-06512: at "SYSMAN.SETEMUSERCONTEXT", line 5
ORA-06512: at "SYSMAN.CLEAR_EMCONTEXT_ON_LOGOFF", line 4
ORA-06512: at line 4
To fix this problem, see "General Troubleshooting Techniques for Creating the Management Repository".
If you receive an error such as the following when you try to connect to the Management Repository database, you are likely using an unsupported version of the Oracle Database:
Server Connection Hung
To remedy the problem, upgrade your database to the supported version as described in Oracle Enterprise Manager Grid Control Installation and Basic Configuration.
If you encounter an error while creating the Management Repository, drop the repository by running the RepManager script with the -action drop argument.
See Also:
"Dropping the Management Repository"If the RepManager
script drops the repository successfully, try creating the Management Repository again.
If you encounter errors while dropping the Management Repository, do the following:
Connect to the database as SYSDBA using SQL*Plus.
Check to see if the SYSMAN database user exists in the Management Repository database.
For example, use the following command to see if the SYSMAN user exists:
prompt> SELECT username FROM DBA_USERS WHERE username='SYSMAN';
If the SYSMAN user exists, drop the user by entering the following SQL*Plus command:
prompt> DROP USER SYSMAN CASCADE;
Check to see if the following triggers exist:
SYSMAN.EMD_USER_LOGOFF
SYSMAN.EMD_USER_LOGON
For example, use the following command to see if the EMD_USER_LOGOFF trigger exists in the database:
prompt> SELECT trigger_name FROM ALL_TRIGGERS WHERE trigger_name='EMD_USER_LOGOFF';
If the triggers exist, drop them from the database using the following commands:
prompt> DROP TRIGGER SYSMAN.EMD_USER_LOGOFF;
prompt> DROP TRIGGER SYSMAN.EMD_USER_LOGON;
There are specific requirements for migrating an Enterprise Manager repository across servers, both on the same platform and across platforms.

Migrating an Enterprise Manager repository is not exactly the same as migrating a database. When you migrate an Enterprise Manager repository, you must account for Enterprise Manager-specific data, options, and prerequisites for the repository move, and you should make sure data integrity is maintained from both the Enterprise Manager and the Oracle database perspective.

This creates the need for a well-defined process that end users can follow for a successful and reliable migration of the repository in minimum time and with maximum efficiency.
The overall strategy for migration depends on:
The source and target database version
The amount of data/size of repository
Actual data to migrate [selective/full migration]
Cross-platform transportable tablespaces, along with Data Pump (for metadata), is the fastest and best approach for moving a large Enterprise Manager Grid Control repository from one platform to another. Another option that can be considered for migration is to use Data Pump for both the data and metadata moves, but this requires more time than the cross-platform transportable tablespace approach for the same amount of data. The advantage of the Data Pump approach is that it provides granular control over options and the overall process, as in the case where selective data is migrated rather than all of the source data. If the source and target are not on version 10g, then export/import is the only way to migrate the data across platforms.
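The decision described above can be summarized as a small helper. This is an illustrative sketch of the text's guidance only (it considers just the database major version, not character sets or the other prerequisites listed below):

```python
def migration_methods(source_is_10g: bool, target_is_10g: bool) -> list:
    """Return the repository migration options described above, best first."""
    if source_is_10g and target_is_10g:
        # TTS plus Data Pump for metadata is fastest for large repositories;
        # Data Pump alone gives finer-grained (selective) control.
        return ["cross-platform transportable tablespaces", "data pump",
                "export/import"]
    # Pre-10g on either side: export/import is the only cross-platform option.
    return ["export/import"]

print(migration_methods(True, True)[0])  # cross-platform transportable tablespaces
print(migration_methods(False, True))    # ['export/import']
```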
More details on cross platform transportable tablespace, data pump, and export/import options can be found at the Oracle Technology Network (OTN) or in the Oracle Database Administrator's Guide.
The following lists the common prerequisites for a repository migration:
Source and target databases must use the same character set and should be at the same version.
Source and target databases should meet all the prerequisites for the Enterprise Manager Repository software requirements mentioned in the Enterprise Manager installation guide.
If the source and target databases are NOT on 10g, only Export/Import can be used for cross-platform migration.
If the source and target databases are on 10g, any of the three options (cross-platform transportable tablespaces migration, Data Pump, or Export/Import) can be used for cross-platform repository migration.
You cannot transport a tablespace to a target database in which a tablespace with the same name already exists. However, you can rename either the tablespace to be transported or the destination tablespace before the transport operation.
To plug a transportable tablespace set into an Oracle Database on a different platform, both databases must have compatibility set to at least 10.0.
Most platforms (but not all) are supported for cross-platform tablespace transport. You can query the V$TRANSPORTABLE_PLATFORM view to see the platforms that are supported, and to determine their platform IDs and their endian format (byte ordering).
The source and destination hosts should have an EM agent running and configured for the instance that is to be migrated.
If the target database has an EM repository installed, it should first be dropped using RepManager before the target database-related steps are carried out.
The following sections discuss the methodologies of a repository migration.
Oracle's transportable tablespace feature allows users to quickly move a user tablespace across Oracle databases. It is the most efficient way to move bulk data between databases. Prior to Oracle Database 10g, to transport a tablespace, both the source and target databases needed to be on the same platform. Oracle Database 10g adds cross-platform support for transportable tablespaces, so you can transport tablespaces across platforms.
Cross platform transportable tablespaces allows a database to be migrated from one platform to another (use with Data Pump or Import/Export).
Use these steps to prepare for transportable tablespaces:
Prepare the set of user tablespaces and check for containment violations
execute DBMS_TTS.TRANSPORT_SET_CHECK('MGMT_TABLESPACE,MGMT_ECM_DEPOT_TS', TRUE);
select * FROM transport_set_violations;
Shutdown OMS instances and prepare for migration
Shut down the OMS, set job_queue_processes to 0, and run
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_remove_dbms_jobs.sql
Make the tablespaces to be transported read only
alter tablespace MGMT_TABLESPACE read only;
alter tablespace MGMT_ECM_DEPOT_TS read only;
Extract Metadata for transportable tablespaces using Data Pump Utility:
Create data pump directory
create directory data_pump_dir as '/scratch/gachawla/EM102/ttsdata';
Extract the metadata using data pump (or export )
expdp DUMPFILE=ttsem102.dmp TRANSPORT_TABLESPACES=MGMT_TABLESPACE,MGMT_ECM_DEPOT_TS TRANSPORT_FULL_CHECK=Y
Extract other objects (packages, procedures, functions, temporary tables, and so on, that are not contained in the user tablespaces)
expdp SCHEMAS=SYSMAN CONTENT=METADATA_ONLY EXCLUDE=INDEX,CONSTRAINT DUMPFILE=data_pump_dir:postexp.dmp LOGFILE=data_pump_dir:postexp.log JOB_NAME=expmet
Run Endian check and convert the datafiles if endian is different between source and destination:
For Endian check, run this on both source and destination database
SELECT endian_format
FROM v$transportable_platform tp, v$database d
WHERE tp.platform_name = d.platform_name;
If the source platform and the target platform are of different endianness, then an additional step must be done on either the source or target platform to convert the tablespace being transported to the target format. If they are of the same endianness, then no conversion is necessary and tablespaces can be transported as if they were on the same platform.
Example:
Source endian:      Linux IA (32-bit)      - Little
Destination endian: Solaris[tm] OE (32-bit) - Big
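The conversion decision can be expressed as a one-line check on the endian formats returned by the query above. This is an illustrative sketch, not an Oracle utility:

```python
def needs_rman_convert(source_endian: str, target_endian: str) -> bool:
    """Datafiles must be converted with RMAN only when the endian formats
    (from V$TRANSPORTABLE_PLATFORM) differ between source and target."""
    return source_endian != target_endian

# The example above: Linux IA (32-bit) -> Solaris OE (32-bit)
print(needs_rman_convert("Little", "Big"))     # True: conversion required
print(needs_rman_convert("Little", "Little"))  # False: transport as-is
```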
Ship datafiles, metadata dump to target and Convert datafiles using RMAN
Ship the datafiles and the metadata dump to the target, and on the target convert all datafiles to the destination endian format:
CONVERT DATAFILE '/d14/em10g/oradata/em102/mgmt.dbf', '/d14/em10g/oradata/em102/mgmt_ecm_depot1.dbf' FROM PLATFORM 'Linux IA (32-bit)';
Conversion via RMAN can be done on either the source or the target (for more details, refer to the RMAN documentation). Parallelism can be used to speed up the process if the user tablespaces contain multiple datafiles.
Use the following steps to import metadata and plugin tablespaces:
Run RepManager to drop target repository (if target database has EM repository installed)
RepManager repository_host repository_port repository_SID -sys_password password_for_sys_account -action drop
Run pre-import steps to create the sysman user and grant privileges on the target database
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_repos_user.sql
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_pre_import.sql
Invoke Data Pump utility to plug the set of tablespaces into the target database.
impdp DUMPFILE=ttsem102.dmp DIRECTORY=data_pump_dir TRANSPORT_DATAFILES=/d14/em10g/oradata/em102/mgmt.dbf,/d14/em10g/oradata/em102/mgmt_ecm_depot1.dbf
Import other objects (packages, procedures, functions etc)
impdp CONTENT=METADATA_ONLY EXCLUDE=INDEX,CONSTRAINT DUMPFILE=data_pump_dir:postexp.dmp LOGFILE=data_pump_dir:postexp.log
Follow these post-plugin steps:
Run post-plugin steps to recompile any invalid objects, create public synonyms, create other users, enable the VPD policy, and repin packages
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_synonyms.sql
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_post_import.sql
Check for invalid objects: compare the source and destination schemas for any discrepancy in object counts and invalid objects.
Bring user tablespaces back to read write mode
alter tablespace MGMT_TABLESPACE read write;
alter tablespace MGMT_ECM_DEPOT_TS read write;
Submit EM dbms jobs
Reset job_queue_processes to its original value and run
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_submit_dbms_jobs.sql
Update OMS properties and startup OMS
Update emoms.properties to reflect the migrated repository: update the host name (oracle.sysman.eml.mntr.emdRepServer) and port with the correct values, and start the OMS.
Relocate Management Services and Repository target
If the Management Services and Repository target needs to be migrated to the destination host, run em_assoc.handle_relocated_target to relocate the target, or recreate the target on the destination host.
Discover/relocate Database and database Listener targets
Discover the target database and listener in EM, or relocate the targets from the source agent to the destination agent.
Oracle Data Pump technology enables high-speed, parallel movement of bulk data and metadata from one database to another. Data Pump uses APIs to load and unload data instead of usual SQL commands. Data pump operations can be run via EM interface and is very useful for cross platform database migration.
The migration of the database using the Data Pump export and Data Pump import tools comprises these steps: export the data into a dump file on the source server with the expdp command; copy or move the dump file to the target server; and import the dump file into Oracle on the target server by using the impdp command; and run post import EM specific steps.
Tuning parameters that were used in original Export and Import, such as BUFFER and RECORDLENGTH, are neither required nor supported by Data Pump Export and Import
Use the following steps to prepare for data pump:
Pre-requisite for using Data pump for EM repository
Impdp fails for EM repository because of data pump bug - Bug 4386766 - IMPDP WITH COMPRESSED INDEXES FAILS WITH ORA-14071 AND ORA-39083. This bug is fixed in 10.2. Backport is available for 10.1.0.4. This RDBMS patch has to be applied to use expdp/impdp for EM repository migration or workaround is to use exp/imp for extract and import.
Create data pump directory
Create directory data_pump_dir as '/scratch/gachawla/EM102/ttsdata';
Shutdown OMS instances and prepare for migration
Shutdown OMS, set job queue_processes to 0 and run @IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_remove_dbms_jobs.sql
To improve throughput of a job, PARALLEL parameter should be used to set a degree of parallelism that takes maximum advantage of current conditions. In general, the degree of parallelism should be set to more than twice the number of CPUs on an instance.
All data pump actions are performed by multiple jobs (server processes not DBMS_JOB jobs). These jobs are controlled by a master control process which uses Advanced Queuing. At runtime an advanced queue table, named after the job name, is created and used by the master control process. The table is dropped on completion of the data pump job. The job and the advanced queue can be named using the JOB_NAME parameter.
DBMS_DATAPUMP APIs can also be used to do data pump export/import. Please refer to Data pump section in 10g administration manual for all the options.
Use these steps to run data pump export:
Run data pump export:
expdp FULL=y DUMPFILE=data_pump_dir:dpfull1%U.dmp, data_pump_dir:dpfull2%U.dmp PARALLEL=4 LOGFILE=data_pump_dir:dpexpfull.log JOB_NAME=dpexpfull Verify the logs for any errors during export
Data pump direct path export sometimes fails for mgmt_metrics_raw and raises ORA 600. This is due to Bug 4221775 (4233303). This bug is fixed in 10.2. Workaround: if using expdp data pump for mgmt_metrics_raw , run expdp with ACCESS_METHOD+EXTERNAL_TABLE parameter.
expdp directory=db_export dumpfile=exp_st2.dmp logfile=exp_st2.log tables=sysman.mgmt_metrics_raw access_method=external_table
Use these steps to run data pump import:
Run RepManager to drop target repository (if target database has EM repository installed)
RepManager repository_host repository_port repository_SID -sys_password password_for_sys_account -action drop
Prepare the target database
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_tablespaces.sql
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_repos_user.sql
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_pre_import.sql
Run data pump import
Impdp FULL=y DUMPFILE=data_pump_dir:dpfull1%U.dmp, data_pump_dir:dpfull2%U.dmp PARALLEL=4 LOGFILE=data_pump_dir:dpimpfull.log JOB_NAME=dpimpfull
Verify the logs for any issues with the import.
Use the following steps for post import Enterprise Manager steps:
Run post plugin steps to recompile any invalids, create public synonyms, create other users, enable VPD policy, repin packages
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_synonyms.sql
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_post_import.sql
Check for invalid objects - compare source and destination schemas for any discrepancy in counts and invalids.
Submit EM dbms jobs
Reset back job_queue_processes to original value and run
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_submit_dbms_jobs.sql
Update OMS properties and startup OMS
Update emoms.properties to reflect the migrated repository. Update host name - oracle.sysman.eml.mntr.emdRepServer and port with the correct value and start the OMS.
Relocate Management Services and Repository target
If Management Services and repository target needs to be migrated to the destination host, run em_assoc. handle_relocated_target to relocate the target or recreate the target on the target host.
Discover/relocate Database and database Listener targets
Discover the target database and listener in EM or relocate the targets from source agent to destination agent.
If the source and destination database is non-10g, then export/import is the only option for cross platform database migration.
For performance improvement of export/import, set higher value for BUFFER and RECORDLENGTH . Do not export to NFS as it will slow down the process considerably. Direct path can be used to increase performance. Note - As EM uses VPD, conventional mode will only be used by Oracle on tables where policy is defined.
Also User running export should have EXEMPT ACCESS POLICY privilege to export all rows as that user is then exempt from VPD policy enforcement. SYS is always exempted from VPD or Oracle Label Security policy enforcement, regardless of the export mode, application, or utility that is used to extract data from the database.
Use the following steps to prepare for Export/Import:
Mgmt_metrics_raw partitions check
select table_name,partitioning_type type, partition_count count, subpartitioning_type subtype from dba_part_tables where table_name = 'MGMT_METRICS_RAW'
If MGMT_METRICS_RAW has more than 3276 partitions please see Bug 4376351 - This is Fixed in 10.2 . Workaround is to export mgmt_metrics_raw in conventional mode.
Shutdown OMS instances and prepare for migration
Shutdown OMS, set job queue_processes to 0 and run @IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_remove_dbms_jobs.sql
Follow these steps for export:
Export data
exp full=y constraints=n indexes=n compress=y file=fullem102_1.dmp log=fullem102exp_1.log
Export without data and with constraints
exp full=y constraints=y indexes=y rows=n ignore=y file=fullem102_2.dmp log=fullem102exp_2.log
Follow these steps to import:
Run RepManager to drop target repository (if target database has EM repository installed)
RepManager repository_host repository_port repository_SID -sys_password password_for_sys_account -action drop
Pre-create the tablespaces and the users in target database
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_tablespaces.sql
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_repos_user.sql
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_pre_import.sql
Import data
imp full=y constraints=n indexes=n file=fullem102_1.dmp log=fullem102imp_1.log
Import without data and with constraints
imp full=y constraints=y indexes=y rows=n ignore=y file=fullem102_2.dmp log=fullem102imp_2.log
Follow these steps for post import EM steps:
Run post plugin steps to recompile any invalids, create public synonyms, create other users, enable VPD policy, repin packages
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_create_synonyms.sql
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_post_import.sql
Check for invalid objects - compare source and destination schemas for any discrepancy in counts and invalids.
Submit EM dbms jobs
Reset back job_queue_processes to original value and run
@IAS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_submit_dbms_jobs.sql
Update OMS properties and startup OMS
Update emoms.properties to reflect the migrated repository. Update host name, oracle.sysman.eml.mntr.emdRepServer and port with the correct value and start the OMS.
Relocate Management Services and Repository target
If Management Services and repository target needs to be migrated to the destination host, run em_assoc. handle_relocated_target to relocate the target or recreate the target on the target host.
Discover/relocate Database and database Listener targets
Discover the target database and listener in EM or relocate the targets from source agent to destination agent.
These verification steps should be carried out post migration to ensure that the migration was completely successful:
Verify any discrepancy in objects by comparing source and target databases through EM
Verify migrated database through EM whether database is running without any issues
Verify repository operations, dbms jobs and whether any management system errors reported
Verify all EM functionalities are working fine after migration
Make sure Management Services and Repository target is properly relocated by verifying through EM
Oracle Enterprise Manager now provides an option that will more quickly display the Console Home page even in a scenario where the Management Repository is very large. Normally, factors such as the number of alerts, errors, policies, and critical patches can contribute to delayed displayed times. Since there is no single factor nor any simple way to scale the SQL or user interface, a simple option flag has been added that removes the following page elements for all users.
When the emoms.properties flag, LargeRepository=
, is set to true (when normally the default is false), the SQL for the following items is not executed and thus the items will not be displayed on the Console page.
Three sections from the Overview Page segment:
All Target Alerts
Critical
Warning
Errors
All Target Policy Violations
Critical
Warning
Informational
All Target Jobs
Problem Executions (last 7 days)
Suspended Executions (last 7 days)
The page segment which includes Security Patch Violations and Critical Patch Advisories.
The Deployment Summary section would move up to fill in the vacated space.