5 Preparing for a Logical Database Migration
The following topics describe how to complete the Zero Downtime Migration prerequisites before running a logical database migration job.
Source Database Prerequisites for Logical Migration
Complete the following prerequisites on the source database to prepare for a logical migration.
Offline and Online Migrations Require:
-
The character set on the source database must be the same as the target database.
-
Configure the streams pool with the initialization parameter STREAMS_POOL_SIZE. (A SQL sketch follows this list.)
For offline logical migrations, for optimal Data Pump performance, it is recommended that you set STREAMS_POOL_SIZE to a minimum of 256MB-350MB so that an initial pool is allocated; otherwise you might see a significant delay during startup.
For online logical migrations, set STREAMS_POOL_SIZE to at least 2GB. See https://support.oracle.com/epmos/faces/DocumentDisplay?id=2078459.1 for the recommendation of 1GB of STREAMS_POOL_SIZE per integrated extract plus an additional 25 percent.
-
System time of the Zero Downtime Migration service host and the source database server should be in sync with your Oracle Cloud Infrastructure target.
If the time on any of these systems varies beyond 6 minutes from the time on OCI, it should be adjusted. You can use an NTP time check to synchronize the time if NTP is configured. If NTP is not configured, then it is recommended that you configure it. If configuring NTP is not an option, then you need to correct the time manually to ensure it is in sync with OCI time.
-
If you are using a database link, and your target database is on Autonomous Database Shared Infrastructure, you must configure TCPS on the source. Autonomous Database Shared Infrastructure doesn't allow a database link to a source that is not configured with TCPS.
-
If you are migrating from an Amazon Web Services RDS environment, see Migrating from Amazon Web Services RDS to Oracle Autonomous Database for information about source environment preparations.
-
In the PDB being exported, if you have created local objects in the C## user's schema and you want to import them, then either make sure a common user of the same name already exists in the target CDB instance (for non-Autonomous Database targets) or use the following Zero Downtime Migration parameter to rename the schema on import.
DATAPUMPSETTINGS_METADATAREMAPS-1=type:REMAP_SCHEMA,oldValue:c##common_user,newValue:new_name
-
If you are migrating to Oracle Autonomous Database on Exadata Cloud@Customer from any on-premises Oracle Database, including existing Exadata Cloud@Customer systems, see Migrating to Oracle Autonomous Database on Exadata Cloud@Customer for additional prerequisite setup tasks.
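The following SQL sketch illustrates the character set check and the streams pool sizing described in this list. It assumes a SYSDBA connection and an spfile-based configuration; run the character set query on both the source and target to compare, and note that the 2G value is only an example suitable for an online migration.
SQL> SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';
SQL> ALTER SYSTEM SET STREAMS_POOL_SIZE=2G SCOPE=BOTH;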
Online Migrations Require:
-
If the source is Oracle Database 11.2, apply mandatory 11.2.0.4 RDBMS patches on the source database.
See My Oracle Support note Oracle GoldenGate -- Oracle RDBMS Server Recommended Patches (Doc ID 1557031.1)
-
Database PSU 11.2.0.4.210720 includes a fix for Oracle GoldenGate performance bug 28849751 - IE PERFORMANCE DEGRADES WHEN NETWORK LATENCY BETWEEN EXTRACT AND CAPTURE IS MORE THAN 8MS
-
OGG RDBMS patch 32248879 MERGE REQUEST ON TOP OF DATABASE PSU 11.2.0.4.201020 FOR BUGS 32048478 20448066 - This patch contains mandatory fix for Oracle GoldenGate Microservices bug 20448066 DBMS_XSTREAM_GG APIS SHOULD BE ALLOWED FOR SCA PROCESSES
-
If the source is Oracle Database 12.1.0.2 or a later release, apply mandatory RDBMS patches on the source database.
See My Oracle Support note Latest GoldenGate/Database (OGG/RDBMS) Patch recommendations (Doc ID 2193391.1), which lists the additional RDBMS patches needed on top of the latest DBBP/RU for Oracle Database 12c and later.
-
Enable ARCHIVELOG mode for the database. See Changing the Database Archiving Mode. (A combined SQL sketch for this and the following requirements appears after this list.)
-
Enable FORCE LOGGING to ensure that all changes are found in the redo by the Oracle GoldenGate Extract process. See Specifying FORCE LOGGING Mode.
-
Enable database minimal supplemental logging. See Minimal Supplemental Logging.
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
-
Enable the initialization parameter ENABLE_GOLDENGATE_REPLICATION.
-
Install the UTL_SPADV or UTL_RPADV package for Integrated Extract performance analysis.
For source database version 19c and later, see UTL_RPADV Operational Notes. To use this package, you must connect to an Oracle database as an Oracle Replication administrator (for example, ggadmin) and run the utlrpadv.sql script in the rdbms/admin directory in ORACLE_HOME.
For earlier database versions, see UTL_SPADV Operational Notes. To use this package, you must connect to an Oracle database as an Oracle Replication administrator (for example, ggadmin) and run the utlspadv.sql script in the rdbms/admin directory in ORACLE_HOME.
-
During the migration period, to provide the most optimal environment for fast database replication, avoid large batch DML operations. Running large batch operations, like a single transaction that affects multi-millions of rows, can slow down replication rates. Create, alter, and drop DDL operations are not replicated.
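The following SQL sketch consolidates the database settings described in this list (ARCHIVELOG mode, FORCE LOGGING, minimal supplemental logging, and ENABLE_GOLDENGATE_REPLICATION). It assumes a SYSDBA connection on the source and that a brief restart into MOUNT state is acceptable; adapt it to your environment.
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE FORCE LOGGING;
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
SQL> ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION=TRUE SCOPE=BOTH;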
Offline Migrations Require:
-
The DATAPUMP_EXP_FULL_DATABASE and DATAPUMP_IMP_FULL_DATABASE roles are required. These roles are required for Data Pump to determine whether privileged application roles should be assigned to the processes comprising the migration job.
DATAPUMP_EXP_FULL_DATABASE is required for the export operation at the source database for the specified database user. The DATAPUMP_IMP_FULL_DATABASE role is required for the import operation at the specified target database for the specified target database user. (A SQL sketch for granting these roles follows this list.)
See the Oracle Data Pump documentation for more information.
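As a sketch, the roles can be granted by a privileged user as shown below; the user name migration_user is a placeholder for the database user specified for the migration job.
SQL> GRANT DATAPUMP_EXP_FULL_DATABASE TO migration_user;  -- on the source database
SQL> GRANT DATAPUMP_IMP_FULL_DATABASE TO migration_user;  -- on the target database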
Target Database Prerequisites for Logical Migration
Complete the following prerequisites on the target database to prepare for a logical migration.
Data Pump-only logical migrations require:
-
The DATAPUMP_IMP_FULL_DATABASE role is required for the import operation at the specified target database for the specified target database user.
All logical migrations require:
-
The character set on the source database must be the same as the target database.
-
System time of the Zero Downtime Migration service host and source database server should be in sync with your Oracle Cloud Infrastructure target.
-
All source database requirements must be met. Some tasks are performed on both the source and target. See Source Database Prerequisites for Logical Migration.
Additional Logical Migration Prerequisites
Complete the following additional prerequisites to prepare for a logical migration.
Create an OCI API key pair
See Required Keys and OCIDs for details.
Configure AWS S3 security credentials
If you are migrating from an Amazon Web Services RDS environment, see Migrating from Amazon Web Services RDS to Oracle Autonomous Database for information about source environment preparations.
Set Up Data Transfer Media
-
To use Object Storage data transfer medium:
Create an Object Store bucket on Oracle Cloud Infrastructure if you are using Object Storage as a data transfer medium. This is not required for Exadata Cloud at Customer or on-premises Exadata Database Machine targets.
-
To use NFS shared storage:
Ensure that the NFS is shared between the source and target database server and mapped to the Database Directory object.
For information about how to mount NFS for dump storage, see Mount Options for Oracle files for RAC databases and Clusterware when used with NFS on NAS devices (Doc ID 359515.1).
-
To use a database link (DBLINK):
If you are using an existing database link from the target database to an on-premises source database, identified by the global_name of the source database, ensure that the database link is not broken. Zero Downtime Migration can reuse the pre-existing database link for migration if that data transfer medium is configured.
Zero Downtime Migration supports the use of a database link for all Autonomous Database targets; however, when you set up the database link for Autonomous Database Dedicated Infrastructure, you must use the Easy Connect syntax or provide a complete descriptor in the USING 'connect string' clause. You cannot use a network service name because the tnsnames.ora file is not available for lookup. Database links can only be used for TCP connections because TCPS connections require a wallet.
See SQL Commands with Restrictions in Autonomous Database and Create Database Links from Autonomous Database to Oracle Databases.
-
If you are not using a database link for data transfer, ensure that the file system used for the Data Pump export directory has sufficient space to store Data Pump dump files.
-
To use an Amazon S3 bucket:
See Migrating from Amazon Web Services RDS Oracle to Oracle Cloud. This medium applies only to migrations from an AWS RDS Oracle source.
-
If the source uses self-signed database server certificates:
If the source database listener is configured with TLS (TCPS) using self-signed database server certificates, then ensure that the self-signed certificate is added to the Zero Downtime Migration home certificate store as follows.
keytool -import -keystore ZDM_HOME/jdk/jre/lib/security/cacerts -trustcacerts
-alias "src ca cert" -file source_db_server-certificate
Online Migration Additional Prerequisites
For online migration, do the following additional prerequisite tasks:
-
Set up an Oracle GoldenGate Microservices hub:
For Oracle Database Cloud Services targets, deploy the "Oracle GoldenGate for Oracle - Database Migrations" image from Oracle Cloud Marketplace:
The "Database Migrations" version of the Oracle GoldenGate Marketplace image provides limited free licensing for use with OCI Database Migration Service. See the license agreement for details.
Any other use of GoldenGate requires purchasing a license for the Oracle GoldenGate product. See the Oracle GoldenGate documentation for more information.
- Log in to Oracle Cloud Marketplace.
- Search for the "Oracle GoldenGate for Oracle - Database Migrations" Marketplace listing.
- From the Marketplace search results, select the "Oracle GoldenGate for Oracle - Database Migrations" listing.
- For instructions to deploy the Marketplace listing, see Deploying Oracle GoldenGate Microservices on Oracle Cloud Marketplace.
For guidance in configuring Zero Downtime Migration GoldenGate settings and choosing the correct GoldenGate Hub shape, see Oracle MAA technical brief, Oracle Zero Downtime Migration – Logical Migration Performance Guidelines.
If you are migrating to Exadata Cloud@Customer, or any on-premises Oracle Exadata Database Machine, you must use an on-premises Oracle GoldenGate Microservices instance to create a deployment for the source and target. The “Oracle GoldenGate for Oracle – Database Migrations” marketplace offering now contains a downloadable docker image for migrations to ExaCC. You can review the latest marketplace VM at https://cloudmarketplace.oracle.com/marketplace/en_US/listing/96175416, and the documentation for this functionality can be found at Migrating to Exadata Cloud@Customer Using Oracle Zero Downtime Migration.
-
Create GoldenGate administration users:
Note that the examples shown below are for basic migrations. For comprehensive Oracle GoldenGate permissions information, see Granting the Appropriate User Privileges in Using Oracle GoldenGate with Oracle Database.
-
On the Source Database:
-
Create a GoldenGate administration user, ggadmin, granting all of the permissions listed in the example. If the source database is multitenant (CDB), create the user in the source PDB, as shown in this example.
SQL> create user ggadmin identified by password default tablespace users temporary tablespace temp;
SQL> grant connect, resource to ggadmin;
SQL> alter user ggadmin quota 100M ON USERS;
SQL> grant unlimited tablespace to ggadmin;
SQL> grant select any dictionary to ggadmin;
SQL> grant create view to ggadmin;
SQL> grant execute on dbms_lock to ggadmin;
SQL> exec dbms_goldengate_auth.GRANT_ADMIN_PRIVILEGE('ggadmin');
-
If the source database is multitenant (CDB), also create the user c##ggadmin in CDB$ROOT as shown in this example.
SQL> create user c##ggadmin identified by password default tablespace users temporary tablespace temp;
SQL> alter user c##ggadmin quota 100M ON USERS;
SQL> grant unlimited tablespace to c##ggadmin;
SQL> grant connect, resource to c##ggadmin container=all;
SQL> grant select any dictionary to c##ggadmin container=all;
SQL> grant create view to c##ggadmin container=all;
SQL> grant set container to c##ggadmin container=all;
SQL> grant execute on dbms_lock to c##ggadmin container=all;
SQL> exec dbms_goldengate_auth.GRANT_ADMIN_PRIVILEGE('c##ggadmin',container=>'all');
-
-
On the Target Database:
-
If the target is Autonomous Database, unlock the pre-created ggadmin user.
-
If the target is not Autonomous Database, create a ggadmin user in the target PDB as shown in the above examples. This user is similar to the ggadmin user on the source database, but will require more privileges. See Establishing Oracle GoldenGate Credentials for information about privileges required for a "Replicat all modes" user.
-
-
If the source database is configured to use SSL/TLS:
Ensure that the wallet containing certificates for TLS authentication is located in the directory /u02/deployments/deployment_name/etc on the GoldenGate instance.
-
If the target database is configured to use SSL/TLS:
Ensure that the wallet containing certificates for TLS authentication is located in the correct location on the GoldenGate instance, as follows:
-
For an Autonomous Database, the wallet file should be located in directory
/u02/deployments/deployment_name/etc/adb
-
For a co-managed database, the wallet file should be located in directory
/u02/deployments/deployment_name/etc
Autonomous databases are always configured to use TLS.
-
Setting Logical Migration Parameters
Set the required logical migration response file parameters. Get the response file template, $ZDM_HOME/rhp/zdm/template/zdm_logical_template.rsp, which is used to create your Zero Downtime Migration response file for the database migration procedure, and edit the file as described here.
The logical migration response file settings are described in detail in Zero Downtime Migration Logical Migration Response File Parameters Reference.
The following parameters are required for an offline or online logical migration:
-
MIGRATION_METHOD: Set to ONLINE_LOGICAL for an online migration with GoldenGate or OFFLINE_LOGICAL for an offline Data Pump transfer.
-
DATA_TRANSFER_MEDIUM: Set to
OSS for an Object Storage bucket
NFS for a shared Network File System
DBLINK for a direct transfer using a database link
COPY to use secure copy
AMAZONS3 to use an Amazon S3 bucket (only applies to migrations from an AWS RDS Oracle source; see Migrating from Amazon Web Services RDS Oracle to Oracle Cloud)
Unless you are using the default data transfer servers for handling the Data Pump dumps, you may also need to configure the data transfer node settings for the source and target database environments.
See Configuring the Transfer Medium and Specifying Transfer Nodes for details.
-
For a logical migration of an Oracle Database 11g source to an 11g target, set DATAPUMPSETTINGS_SECUREFILELOB=FALSE or you may get errors.
- Set the following target database parameters.
-
TARGETDATABASE_OCID specifies the Oracle Cloud resource identifier.
For example: ocid1.instance.oc1.phx.abuw4ljrlsfiqw6vzzxb43vyypt4pkodawglp3wqxjqofakrwvou52gb6s5a
See also https://docs.cloud.oracle.com/en-us/iaas/Content/General/Concepts/identifiers.htm
-
TARGETDATABASE_ADMINUSERNAME specifies the database administrator user name. For example, for a co-managed database migration the user name is system, and for an Autonomous Database migration the user name is admin.
-
TARGETDATABASE_CONNECTIONDETAILS_SERVICENAME specifies the fully qualified service name. This parameter is optional for Autonomous Database targets; however, if an HTTP proxy is required to connect, specify it.
In addition, for Oracle Autonomous Database Dedicated Infrastructure and Autonomous Database on Cloud@Customer with fractional OCPU service, you must specify the appropriate service alias in the parameter.
You can specify any predefined fractional service alias available; however, for Autonomous Transaction Processing workloads TP* services are preferred over LOW* services because LOW* is meant for low priority batch jobs.
TP_TLS, TP, LOW_TLS, or LOW (for Autonomous Transaction Processing workloads)
LOW_TLS or LOW (for Autonomous Data Warehouse workloads)
-
- Set the following source database parameters.
-
SOURCEDATABASE_ADMINUSERNAME specifies the database administrator user name. For example, the user name system.
-
SOURCEDATABASE_CONNECTIONDETAILS_HOST specifies the listener host name or IP address. In the case of Oracle RAC, the SCAN name can be specified. (Not required for Autonomous Database.)
-
SOURCEDATABASE_CONNECTIONDETAILS_PORT specifies the listener port number. (Not required for Autonomous Database.)
-
SOURCEDATABASE_CONNECTIONDETAILS_SERVICENAME specifies the fully qualified service name. (Not required for Autonomous Database.)
For example: service_name.DB_domain
See also https://docs.cloud.oracle.com/en-us/iaas/Content/Database/Tasks/connectingDB.htm
-
For migrations from an AWS RDS source, see Migrating from Amazon Web Services RDS to Oracle Autonomous Database for additional parameter settings.
-
-
Set the following OCIAUTHENTICATIONDETAILS parameters.
For more information about the required settings, see https://docs.cloud.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#RequiredKeysandOCIDs
-
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_TENANTID specifies the OCID of the OCI tenancy. You can find this value in the Console under Governance and Administration, Administration, Tenancy Details. The tenancy OCID is shown under Tenancy Information.
For example: ocid1.tenancy.oc1..aaaaaaaaba3pv6wkcr4jqae5f44n2b2m2yt2j6rx32uzr4h25vqstifsfdsq
See also https://docs.cloud.oracle.com/en-us/iaas/Content/Identity/Tasks/managingtenancy.htm
-
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_USERID specifies the OCID of the IAM user. You can find this value in the Console under Profile, User Settings.
See also https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/managingusers.htm
-
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_FINGERPRINT specifies the fingerprint of the public API key.
-
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_PRIVATEKEYFILE specifies the absolute path of the API private key file.
-
OCIAUTHENTICATIONDETAILS_REGIONID specifies the OCI region identifier.
See the Region Identifier column in the table at https://docs.cloud.oracle.com/en-us/iaas/Content/General/Concepts/regions.htm
-
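The following fragment is a minimal sketch that consolidates the required parameters described above for an online logical migration; all values are placeholders that you must replace with values from your own source environment and tenancy.
MIGRATION_METHOD=ONLINE_LOGICAL
DATA_TRANSFER_MEDIUM=OSS
TARGETDATABASE_OCID=<target_database_ocid>
TARGETDATABASE_ADMINUSERNAME=admin
SOURCEDATABASE_ADMINUSERNAME=system
SOURCEDATABASE_CONNECTIONDETAILS_HOST=source-scan.example.com
SOURCEDATABASE_CONNECTIONDETAILS_PORT=1521
SOURCEDATABASE_CONNECTIONDETAILS_SERVICENAME=srcpdb.example.com
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_TENANTID=<tenancy_ocid>
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_USERID=<user_ocid>
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_FINGERPRINT=<api_key_fingerprint>
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_PRIVATEKEYFILE=/home/zdmuser/.oci/oci_api_key.pem
OCIAUTHENTICATIONDETAILS_REGIONID=us-phoenix-1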
Oracle GoldenGate Settings
For online logical migrations, in addition to the above, you must also set the GoldenGate parameters TARGETDATABASE_GGADMINUSERNAME, SOURCEDATABASE_GGADMINUSERNAME, SOURCECONTAINERDATABASE_GGADMINUSERNAME, and the parameters prefixed with GOLDENGATEHUB and GOLDENGATESETTINGS.
By default, Zero Downtime Migration excludes all DDL from GoldenGate replication. However, you can override this behavior by setting the parameter GOLDENGATESETTINGS_REPLICATEDDL=true.
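A hedged sketch of the GoldenGate-related response file entries, using the administration users created earlier in this chapter; the GOLDENGATEHUB-prefixed parameters that describe your GoldenGate Microservices hub (credentials, URL, and deployment names) must also be set as described in the parameters reference.
TARGETDATABASE_GGADMINUSERNAME=ggadmin
SOURCEDATABASE_GGADMINUSERNAME=ggadmin
SOURCECONTAINERDATABASE_GGADMINUSERNAME=c##ggadmin
GOLDENGATESETTINGS_REPLICATEDDL=false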
See Zero Downtime Migration Logical Migration Response File Parameters Reference for details about these parameters.
Oracle Data Pump Settings
Zero Downtime Migration automatically sets optimal defaults for Data Pump parameters to achieve better performance and ensure data security. If you need to further tune performance, there are several Data Pump settings that you can configure in the response file.
The default DATAPUMPSETTINGS_JOBMODE=SCHEMA is recommended for migrations to Autonomous Database.
See Default Data Pump Parameter Settings for Zero Downtime Migration for information about the default Data Pump property settings, how to select schemas or objects for inclusion or exclusion, and Data Pump error handling.
See Zero Downtime Migration Logical Migration Response File Parameters Reference for all of the Data Pump parameters you can set through Zero Downtime Migration.
See Migrating from Amazon Web Services RDS to Oracle Autonomous Database for information about setting Data Pump parameters for migration from AWS RDS.
Configuring the Transfer Medium and Specifying Transfer Nodes
Zero Downtime Migration offers various transfer options to make Oracle Data Pump dumps available to the target database server.
Using the DATA_TRANSFER_MEDIUM response file parameter you can configure the following data transfer methods:
-
OSS: Oracle Cloud Object Storage.
Supported for all migration types and targets.
-
NFS: Network File System.
Supported for co-managed and user managed targets.
-
DBLINK: Direct data transfer from the source to the target over a database link.
Supported for all migration types and targets.
-
COPY: Transfer dumps to the target transfer node using secure copy.
Supported for co-managed and user managed targets.
-
AMAZONS3: Amazon S3 bucket.
Only applies to migrations from an AWS RDS Oracle source. See Migrating from Amazon Web Services RDS Oracle to Oracle Cloud for more information.
Note:
To take advantage of parallelism and achieve the best data transfer performance, Oracle recommends that you transfer data using OSS or NFS for databases over 50GB in size. The DBLINK transfer medium can be convenient for smaller databases, but this choice may involve uncertainty in performance because of its dependence on network bandwidth for the duration of the transfer.
Once the export of dumps on the source is completed, the dumps are uploaded or transferred in parallel as defined by the parameter DUMPTRANSFERDETAILS_PARALLELCOUNT (defaults to 3), and any transfer failures are retried by default as specified in the parameter DUMPTRANSFERDETAILS_RETRYCOUNT (defaults to 3).
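For example, the defaults can be overridden in the response file as sketched below; the values shown are illustrative only.
DUMPTRANSFERDETAILS_PARALLELCOUNT=8
DUMPTRANSFERDETAILS_RETRYCOUNT=5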
The transfer of dumps can be done from any node at the source data center, provided that the dumps are accessible from the given node. It is crucial to ascertain the network connectivity and transfer workload impact on the source database server in order to decide which data transfer approach to take.
Direct Transfer from Source to Target
This option applies only to co-managed cloud target databases.
Zero Downtime Migration enables logical migration using direct transfer of the Data Pump dump from the source to the target securely. The data is copied over from the source database directory object path to the target database server directory object path, or to a target transfer node, using either secure copy or RSYNC. This avoids the data being transferred over a WAN or needing additional shared storage between the source and target environments. This capability greatly simplifies the logical migration within the data center.
About Transfer Nodes
You will configure a node, referred to as a transfer node, for both the source data center and the target tenancy.
The response file parameters that are prefixed with
DUMPTRANSFERDETAILS_SOURCE_TRANSFERNODE
designate the node that
handles the export dumps at the source data center. This source transfer
node defaults to the source database.
Similarly, the response file parameters that are prefixed with
DUMPTRANSFERDETAILS_TARGET_TRANSFERNODE
designate the node that
handles the import of dumps at the target. This target transfer
node defaults to the target database, for co-managed targets.
Transfer Node Requirements
The source transfer node can be any of the following:
- Source database server (default)
- NAS mounted server
- Zero Downtime Migration service node
The target transfer node can be any of the following:
- Target Database server (default)
- NAS mounted server
- Zero Downtime Migration service node
For a server to be designated as a transfer node, the following critical requirements must be met.
-
Availability of CPU and memory to process the upload or transfer workload
-
Connectivity to the specified upload or transfer target
-
Port 443 connectivity to the Object Storage Service if the chosen data transfer medium is OSS
-
Port 22 connectivity to the target storage server if the chosen transfer medium is COPY
-
Availability of the Oracle Cloud Infrastructure CLI. For speedier and more resilient upload of dumps, this is the recommended transfer utility for the OSS transfer medium.
-
OCI CLI must be installed and configured as detailed in https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm.
Installing and configuring OCI CLI on each source database server may not be feasible. In such cases, one of the nodes in the data center can be designated as a transfer node with OCI CLI configured, and this node can share a network storage path with the database servers for Data Pump dumps to be created. This also avoids the upload workload consuming additional CPU and memory on production database servers.
The designated transfer node can act as the gateway server at the data center for the external data transfer allowing transfer data traffic, thus avoiding the need to allow data transfer from the source database server or to the target database server.
Optionally, the additional transfer node requirement can be avoided by leveraging the Zero Downtime Migration server as the transfer node, provided that the Zero Downtime Migration service is placed in an on-premises data center and can meet the transfer node requirements listed above.
Using the Oracle Cloud Object Storage Transfer Medium
Object Storage data transfer medium is supported for all migration types and targets.
When using Object Storage as the data transfer medium, by setting DATA_TRANSFER_MEDIUM=OSS, it is recommended that dumps be uploaded using OCI CLI for faster, more secure, and more resilient uploads. You must configure OCI CLI on the upload node and set the parameter DUMPTRANSFERDETAILS_SOURCE_USEOCICLI to TRUE. The parameters for OCI CLI are:
DUMPTRANSFERDETAILS_SOURCE_USEOCICLI
DUMPTRANSFERDETAILS_SOURCE_OCIHOME
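A minimal sketch of these settings, assuming OCI CLI is installed under /home/opc/bin on the upload node; the OCIHOME path shown is an example only.
DATA_TRANSFER_MEDIUM=OSS
DUMPTRANSFERDETAILS_SOURCE_USEOCICLI=TRUE
DUMPTRANSFERDETAILS_SOURCE_OCIHOME=/home/opc/bin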
Using the Database Link Transfer Medium
Supported for online and offline migrations to all migration targets.
When you set DATA_TRANSFER_MEDIUM=DBLINK, a database link is created from the OCI co-managed database or Autonomous Database target to the source database using the global_name of the specified source database.
Zero Downtime Migration creates the database link if it does not already exist, and the link is cleaned up once the Data Pump import phase is complete.
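To confirm the global_name that will be used for the link, you can query it on the source database, for example:
SQL> SELECT global_name FROM global_name;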
Using the NFS Transfer Medium
Supported for offline migrations to co-managed target databases only.
The NFS mode of transfer is available for co-managed target databases by setting DATA_TRANSFER_MEDIUM=NFS; it avoids the transfer of dumps over the network. Ensure that the specified path is accessible from both the source and target database servers.
Zero Downtime Migration ensures the security of dumps in the shared storage by preserving the restricted permission on the dumps such that only the source and target database users are allowed to access the dump.
Using the Copy Transfer Medium
Supported for offline migrations to co-managed target databases only.
Dumps can be transferred from the source to the target securely, by
setting DATA_TRANSFER_MEDIUM=COPY
. The relevant parameters are as
follows:
DUMPTRANSFERDETAILS_TRANSFERTARGET_USER
DUMPTRANSFERDETAILS_TRANSFERTARGET_USERKEY
DUMPTRANSFERDETAILS_TRANSFERTARGET_HOST
DUMPTRANSFERDETAILS_TRANSFERTARGET_SUDOPATH
DUMPTRANSFERDETAILS_TRANSFERTARGET_DUMPDIRPATH
You can leverage the RSYNC utility instead of SCP. Set DUMPTRANSFERDETAILS_RSYNCAVAILABLE to TRUE, and verify that RSYNC is available at both the source and target transfer nodes.
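A sketch of a COPY-based transfer configuration using the parameters listed above; the host name, user, key path, and dump directory path are placeholders.
DATA_TRANSFER_MEDIUM=COPY
DUMPTRANSFERDETAILS_TRANSFERTARGET_HOST=target-node.example.com
DUMPTRANSFERDETAILS_TRANSFERTARGET_USER=opc
DUMPTRANSFERDETAILS_TRANSFERTARGET_USERKEY=/home/zdmuser/.ssh/id_rsa
DUMPTRANSFERDETAILS_TRANSFERTARGET_DUMPDIRPATH=/u01/app/oracle/dumps
DUMPTRANSFERDETAILS_RSYNCAVAILABLE=TRUE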
Selecting Objects for Migration
In Zero Downtime Migration, you can specify objects to include or exclude from a logical migration job using parameters in the response file.
Specify rules for objects to include or exclude in a migration job using the EXCLUDEOBJECTS-n and INCLUDEOBJECTS-n response file parameters.
You can either include or exclude objects in a migration, but you cannot do both.
If no rule is defined, all schemas and objects of the source database will be migrated, with exceptions listed in What Is Migrated During Initial Load.
If you specify Include rules, the migration will only move the specified objects and their dependent objects; all other objects are automatically excluded. The USER and TABLESPACE_QUOTA object types associated with an included schema are included by default.
When specifying Exclude rules, the migration will exclude the specified objects and their dependent objects; all other objects are included in the migration.
You can define multiple include or exclude rules by incrementing the integer appended to the parameter name, as shown in these examples.
Example of a single include rule:
INCLUDEOBJECTS-1=owner:ownerValue1,
objectName:objectNameValue1,
objectType:objectTypeValue1
Example of two exclude rules:
EXCLUDEOBJECTS-1=owner:ownerValue1,
objectName:objectNameValue1,
objectType:objectTypeValue1
EXCLUDEOBJECTS-2=owner:ownerValue2,
objectName:objectNameValue2,
objectType:objectTypeValue2
Required Parameter Fields
To specify the INCLUDEOBJECTS-n and EXCLUDEOBJECTS-n response file parameters, enter values for each of the following fields:
-
owner specifies the owner of the selected database objects. When using Include rules, all rules must be for the same owner, and wildcard characters are not allowed.
-
objectName specifies the name of selected database objects
-
objectType specifies the type of selected database objects. You can select ALL to select objects of all types (default).
You can filter owner and objectName fields using any valid pattern in Java class Pattern. For example, you can enter .* in the objectName field to select objects of any name.
Restrictions
Please note the following restrictions:
-
When excluding an object in a specified schema, if an object of the same name exists in a different schema that is also part of the migration, then the objects are not excluded (that is, the rule is ignored). The exclusion can be accomplished by migrating the schemas in separate migrations.
-
When creating Include rules in FULL job mode, only schema-level rules (objectName is .* and objectType is ALL) are allowed.
-
If an Include rule has .* in objectName, no other rule for the same objectType is allowed. If the rule has ALL as objectType, no other rule for any type is allowed.
-
The objectType ALL is only allowed for schema-level rules (objectName is .*).
-
The objectType TABLE is not allowed for schema-level rules where the owner is specified (objectName is .* and owner is any pattern other than .*).
-
Object-level rules (objectName is any pattern other than .*) can only be used for the following object types: DIRECTORY, FUNCTION, JOB, MATERIALIZED_VIEW, PACKAGE, PROCEDURE, TRIGGER, SEQUENCE, TABLE.
All other objectTypes must be either included or excluded using the .* pattern in objectName, and in addition for exclude, the owner should be .*
Supported objectTypes for object-level filtering
objectName can be chosen or can be .*
DIRECTORY
FUNCTION
JOB
MATERIALIZED_VIEW
PACKAGE
PROCEDURE
TRIGGER
SEQUENCE
TABLE
Supported objectTypes for generic objectType-level exclude or include (objectName must be .*)
CLUSTER
DB_LINK
GRANT
OBJECT_GRANT
SYSTEM_GRANT
ROLE_GRANT
PROCOBJ_GRANT
PROCDEPOBJ_GRANT
INDEX
INDEXTYPE
MATERIALIZED_VIEW_LOG
MATERIALIZED_ZONEMAP
POST_TABLE_ACTION
PROCOBJ
PROFILE
REF_CONSTRAINT
RLS_POLICY
SYNONYM
TABLESPACE
TABLESPACE_QUOTA
USER
XMLSCHEMA
VIEW
Filtering other objectTypes
To filter other objectTypes that are NOT listed above, like COMMENT, FGA_POLICY, and so on, use the DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXCLUDETYPELIST parameter.
Examples
Example 5-1 Include all objects of schema MySchema
INCLUDEOBJECTS-1=owner:MySchema, objectName:.*, objectType:ALL
owner | objectName | objectType |
---|---|---|
MySchema | .* | ALL |
Example 5-2 Include all tables starting with PROD and procedure MYPROC of schema MySchema, including all dependent objects
INCLUDEOBJECTS-1=owner:MySchema, objectName:PROD.*,
objectType:TABLE
INCLUDEOBJECTS-2=owner:MySchema, objectName:MYPROC,
objectType:PROCEDURE
owner | objectName | objectType |
---|---|---|
MySchema | PROD.* | TABLE |
MySchema | MYPROC | PROCEDURE |
Example 5-3 Exclude schemas starting with Experimental, the table MySchema.OldTable (also excluding all dependent objects) and all objects of type DB_LINK
Note that MySchema.OldTable will not be excluded if a table called OldTable is present in a different schema that is also migrated.
EXCLUDEOBJECTS-1=owner:Experimental.*, objectName:.*,
objectType:ALL
EXCLUDEOBJECTS-2=owner:MySchema, objectName:OldTable,
objectType:TABLE
EXCLUDEOBJECTS-3=owner:.*, objectName:.*, objectType:DB_LINK
owner | objectName | objectType |
---|---|---|
Experimental.* | .* | ALL |
MySchema | OldTable | TABLE |
.* | .* | DB_LINK |
Specify Included and Excluded Objects with Special Characters
The following examples show you how to specify object names that use special characters in the EXCLUDEOBJECTS and INCLUDEOBJECTS parameters.
-
To escape a special character, use two backslashes (\\) before the special character.
For example, to escape the dollar sign ($):
INCLUDEOBJECTS-3=owner:GRAF_MULTI\\$_HR
-
To match all characters between a prefix pattern and a suffix pattern, add a period and an asterisk (.*) where the matching should occur.
For example, to exclude all schemas starting with GRAF and ending with HR:
EXCLUDEOBJECTS-3= owner:GRAF.*HR
Migrating Tables with support_mode = INTERNAL
Zero Downtime Migration detects and notifies you about tables with support_mode = INTERNAL.
Note:
Zero Downtime Migration 21.3 users must apply Patch 33509650: ZDM PATCH USING MOS to use this feature.
DML replication: Oracle GoldenGate automatically ignores DML for support_mode = INTERNAL tables. Zero Downtime Migration does not set the GoldenGate Extract parameter TABLEEXCLUDE for tables with support_mode = INTERNAL.
Global Temporary Tables are always excluded.
DDL replication: If you set the parameter GOLDENGATESETTINGS_REPLICATEDDL = TRUE, then Zero Downtime Migration sets the Oracle GoldenGate Extract parameter DDLOPTIONS CAPTUREGLOBALTEMPTABLE.
Migrating Tables with support_mode = ID KEY
Under normal operation Zero Downtime Migration reports an error listing all user tables with Oracle GoldenGate support_mode = ID KEY.
Note:
Zero Downtime Migration 21.3 users must apply Patch 33509650: ZDM PATCH USING MOS to use this feature.
If you set the parameter GOLDENGATESETTINGS_USEFLASHBACKQUERY = TRUE, Zero Downtime Migration sets the following Oracle GoldenGate Extract parameters that allow ID KEY support mode objects to be replicated.
-
STATOPTIONS REPORTFETCH
-
FETCHOPTIONS USESNAPSHOT, NOUSELATESTVERSION, MISSINGROW REPORT
Ensure that the source database has sufficient UNDO size to allow Oracle GoldenGate to use Flashback Query. For best fetch results, configure the source database as documented at Setting Flashback Query in Using Oracle GoldenGate with Oracle Database.
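One way to identify affected tables before migration, assuming you have DBA privileges on a source database version that provides this view, is to query DBA_GOLDENGATE_SUPPORT_MODE, for example:
SQL> SELECT owner, object_name, support_mode
     FROM dba_goldengate_support_mode
     WHERE support_mode IN ('ID KEY', 'INTERNAL');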
Setting Advanced Data Pump Parameters
Zero Downtime Migration automatically sets optimal defaults for Oracle Data Pump parameters to achieve better performance and ensure security of data. To further tune performance, change the export modes, or rename database objects, there are several Data Pump settings that you can configure in the migration response file.
These parameters are set in the response file at
$ZDM_HOME/rhp/zdm/template/zdm_logical_template.rsp
.
Default Data Pump Parameter Settings for Zero Downtime Migration
The following table lists the Data Pump parameters set by Zero Downtime
Migration, and the values they are set to. If there is a Zero Downtime Migration
response file parameter available to override the default, it is listed in the
Optional Zero Downtime Migration Response File Parameter to Override column. The
override parameters are set in the response file at
$ZDM_HOME/rhp/zdm/template/zdm_logical_template.rsp
.
Table 5-1 Data Pump Parameter Defaults
Data Pump Parameter | Default Value | Optional ZDM Response File Parameter to Override |
---|---|---|
EXCLUDE | cluster (ADB-D, ADB-S); indextype (ADW-S); db_link (ADB); statistics (user-managed target and ADB) | Allows additional EXCLUDE object types to be specified. Specifying an invalid object type for EXCLUDE leads to an export error, for example ORA-39038: Object path "<specified invalid>" is not supported for SCHEMA jobs. To see a list of valid object types, query the Data Pump export object views (for example, SCHEMA_EXPORT_OBJECTS). |
PARALLEL | ZDM sets the PARALLEL parameter by default as follows. For a user-managed database: the sum of (2 x number of physical CPUs) per node, capped at 32. For ADB: the number of OCPUs. | |
CLUSTER | ZDM always sets Cluster mode as the default. | |
COMPRESSION | COMPRESSION_ALGORITHM is set to BASIC (for 11.2) and MEDIUM (for 12.1+). COMPRESSION is set to ALL. | N/A |
ENCRYPTION | ENCRYPTION is set to ALL. ENCRYPTION_ALGORITHM is set to AES128. ENCRYPTION_MODE is set to PASSWORD. | N/A |
FILESIZE | FILESIZE is set to 5G. | N/A |
FLASHBACK_SCN | For OFFLINE_LOGICAL, ZDM sets FLASHBACK_TIME to the current system time. For ONLINE_LOGICAL, ZDM uses neither FLASHBACK_SCN nor FLASHBACK_TIME. | N/A |
REUSE_DUMPFILES | Always set to YES. | N/A |
TRANSFORM | Always sets OMIT_ENCRYPTION_CLAUSE:Y for 19c+ targets. Always sets LOB_STORAGE:SECUREFILE. For ADB targets, the following transforms are set by default: SEGMENT_ATTRIBUTES:N, DWCS_CVT_IOTS:Y, CONSTRAINT_USE_DEFAULT_INDEX:Y. | Allows additional TRANSFORM to be specified. |
METRICS | Always set to YES. | N/A |
LOGTIME | Always set to ALL. | N/A |
TRACE | Always set to 1FF0b00. | N/A |
LOGFILE | Always set to the Data Pump job name and created under the specified export or import directory object. For example, if the Data Pump job is ZDM_2_DP_EXPORT_8417 and the directory object used is DATA_PUMP_DIR, then the operation log is created with the name ZDM_2_DP_EXPORT_8417.log under DATA_PUMP_DIR. | N/A |
Automatic Tablespace Creation
For logical migrations, Zero Downtime Migration automatically discovers the source database tablespaces associated with the user schemas that are being migrated, and automatically creates them in the target database before the Data Pump import phase.
Zero Downtime Migration generates the DDL required to pre-create the tablespaces, creates the tablespaces on the target, and runs the generated DDL.
Automatic tablespace creation is enabled by default for ADB-Dedicated and non-ADB targets. With automatic creation enabled, Zero Downtime Migration skips automatic creation for any tablespaces that are specified in the REMAP section in the response file, or that already exist in the target database.
Zero Downtime Migration validates whether tablespace creation is supported on the given target. There are no limitations for co-managed database systems. If the target is an Autonomous Database system, the following limitations apply:
-
Autonomous Database systems support only BIGFILE tablespaces, so Zero Downtime Migration enforces BIGFILE tablespace by default on Autonomous Database targets, and reports an error if SMALLFILE tablespaces are found. You can remap any SMALLFILE tablespaces instead.
-
Autonomous Database Shared systems do not support the automatic creation of tablespaces. Automatic tablespace REMAP to DATA is applied by default.
Use the following response file parameters to automatically create the required tablespaces at target database.
-
TABLESPACEDETAILS_AUTOCREATE is enabled by default for automatic tablespace creation.
Note that if you set TABLESPACEDETAILS_AUTOCREATE to FALSE, then automatic tablespace remapping (TABLESPACEDETAILS_AUTOREMAP) is applied by default. If both AUTOCREATE and AUTOREMAP are set to FALSE, then neither is applied.
-
TABLESPACEDETAILS_USEBIGFILE allows you to convert SMALLFILE tablespaces to BIGFILE tablespaces. It is set to FALSE by default, but Zero Downtime Migration enforces TRUE for Autonomous Database targets.
-
TABLESPACEDETAILS_EXTENDSIZEMB enables tablespaces to AUTOEXTEND to avoid extend errors, with a default NEXT EXTEND size of 512MB.
-
TABLESPACEDETAILS_EXCLUDE specifies tablespaces to be excluded from automatic creation at the target database during the import of user schemas. By default the 'SYSTEM', 'SYSAUX', and 'USERS' tablespaces are excluded.
Note:
If the current usage is less than 100M, then a default size of 100M is set as the initial size. If the current usage of the tablespace is higher than 100M, then the actual current size is set as the initial size. AUTOEXTEND ensures that there is no maximum limit.
For SMALLFILE, if the INCREMENT BY clause is 0 (zero), then the default value of 1 is applied to determine the AUTOEXTEND ON NEXT size.
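A sketch of these settings in the response file; the values shown mirror the documented defaults and are illustrative only.
TABLESPACEDETAILS_AUTOCREATE=TRUE
TABLESPACEDETAILS_USEBIGFILE=TRUE
TABLESPACEDETAILS_EXTENDSIZEMB=512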
Automatic Tablespace Remap
For logical migrations, Zero Downtime Migration can automatically remap tablespaces on the source database to a specified tablespace on the target database.
Zero Downtime Migration automatically discovers the source database tablespaces necessary for migration. With automatic remap enabled, Zero Downtime Migration discovers the source tablespaces that require remapping by excluding any tablespaces that meet the following conditions:
-
Specified for remap in
DATAPUMPSETTINGS_METADATAREMAPS
-
Specified for exclude in
TABLESPACEDETAILS_EXCLUDE
-
Tablespaces with the same name that already exist on the target database
Use the following response file parameters to automatically remap the required tablespaces.
-
TABLESPACEDETAILS_AUTOREMAP enables automatic tablespace remap.
-
TABLESPACEDETAILS_REMAPTARGET specifies the name of the tablespace on the target database to which to remap the tablespace on the source database. The default value is DATA.
Note:
Zero Downtime Migration sets AUTOREMAP to TRUE by default for ADB-Serverless targets. You can override this by setting TABLESPACEDETAILS_AUTOREMAP=FALSE.
Verifying Tablespace Remaps
Run the command ZDMCLI migrate database in evaluation mode (-eval) to ensure that all necessary tablespaces to be remapped are listed. If any tablespaces are missed, you can remap them using the DATAPUMPSETTINGS_METADATAREMAPS parameter. (A command sketch follows the note below.)
Note:
For a tablespace to be used as a REMAP target, the user performing the import operation, for example SYSTEM, should have some quota on the chosen tablespace.
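A sketch of an evaluation run; the database name and response file path are placeholders, and the source connection and authentication arguments, which depend on your environment, are omitted here for brevity.
zdmcli migrate database -sourcedb source_db_unique_name -rsp /home/zdmuser/logical_migration.rsp -eval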
Performance Considerations
There is operational overhead involved in tablespace remapping that adds to the overall Data Pump import time. To optimize performance, review and drop unwanted tablespaces from the source database to minimize the number of remapped tablespaces. For more information, see the REMAP_* section in What DataPump And Oracle RDBMS Parameters And Features Can Significantly Affect DataPump Performance ? (Doc ID 1611373.1).
Metadata Remapping
The DATAPUMPSETTINGS_METADATAREMAPS-n parameter lets you rename database objects during a migration job. Specify the type of remap, then enter the oldValue and newValue.
For example:
DATAPUMPSETTINGS_METADATAREMAPS-1=type:REMAP_TABLESPACE,oldValue:TS_DATA_X,newValue:DATA
DATAPUMPSETTINGS_METADATAREMAPS-2=type:REMAP_TABLESPACE,oldValue:DBS,newValue:DATA
Supported object types are REMAP_DATAFILE, REMAP_SCHEMA, REMAP_TABLE, and REMAP_TABLESPACE.
Quota grants for individual users to tablespaces are not remapped, so you must manually create these grants for tablespace DATA.
When migrating to an Autonomous Database Shared Infrastructure target, all tablespaces are automatically mapped to DATA. You can override this by explicitly mapping tablespaces to a different target in the response file.
See METADATA_REMAP Procedure for more information.
Data Pump Error Handling
Some errors are ignored by Zero Downtime Migration. You must review any remaining errors appearing in the Data Pump log.
The following Data Pump errors are ignored by Zero Downtime Migration.
-
ORA-31684: XXXX already exists
-
ORA-39111: Dependent object type XXXX skipped, base object type
-
ORA-39082: Object type ALTER_PROCEDURE: XXXX created with compilation warnings
Ensure that you clear all Cloud Premigration Advisor Tool (CPAT) reported errors to avoid any underlying Data Pump errors.
Migrating Schemas in Parallel Using Batches
Zero Downtime Migration lets you run multiple schema migrations as batches in parallel.
You can specify schema batches that need to be migrated in parallel. Determine groups of dependent schemas and specify all dependent schemas as part of same batch.
The migration work flow handles the schemas as batches and runs Data Pump export, dump upload, and import operations in parallel for each batch specified.
Any error that occurs in one batch does not affect the other batch operations.
A progress monitor tracks the progress of each batch and reports it individually; errors specific to a batch are reported at the completion of all batch Data Pump import operations.
Zero Downtime Migration also supports random batch selection by batch count, where the identified user schemas to be migrated are grouped randomly by the specified number of batches and migrated in parallel.
A new migration job phase ZDM_PARALLEL_EXPORT_IMPORT
is cumulative of
ZDM_DATAPUMP_EXPORT_SRC
, ZDM_UPLOAD_DUMPS_SRC
, and
ZDM_DATAPUMP_IMPORT_TGT
phases, and it handles all three actions
for an identified sublist of schemas per thread, that is, export, transfer, and import
the sublist of schemas in parallel.
To configure batch mode:
In the RSP, set the DATAPUMPSETTINGS_JOBMODE parameter to SCHEMA.
The batch mode of operation is not enabled by default; you must enable it by setting one of the following RSP parameters:
-
DATAPUMPSETTINGS_SCHEMABATCHCOUNT=integer indicates how many schemas to include in each batch.
In this case Zero Downtime Migration does not guarantee the set of schemas per batch (the Data Pump job).
-
DATAPUMPSETTINGS_SCHEMABATCH-n, where n can be 1, 2, 3, and so on, lets you specify a list of schemas to place in a batch for parallel handling.
Note that DATAPUMPSETTINGS_SCHEMABATCHCOUNT
and
DATAPUMPSETTINGS_SCHEMABATCH
are mutually exclusive.
Also, note that database initialization parameter
MAX_DATAPUMP_JOBS_PER_PDB
determines the maximum number of
concurrent Oracle Data Pump jobs for each PDB.
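A sketch of enabling batch mode in the response file; the schema names and counts are placeholders, and the comma-separated list syntax shown for DATAPUMPSETTINGS_SCHEMABATCH-n is an assumption for illustration only. Use either the batch count or explicit batches, not both.
DATAPUMPSETTINGS_JOBMODE=SCHEMA
DATAPUMPSETTINGS_SCHEMABATCHCOUNT=4
Or, alternatively:
DATAPUMPSETTINGS_JOBMODE=SCHEMA
DATAPUMPSETTINGS_SCHEMABATCH-1=HR,SALES
DATAPUMPSETTINGS_SCHEMABATCH-2=FINANCE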
Restrictions
A DATAPUMPSETTINGS_JOBMODE value other than SCHEMA is prohibited if a batch of schemas is specified.
Specifying INCLUDEOBJECTS is prohibited if a batch of schemas is specified.
Parallel migration is not allowed for a source without SSH access, because the job must be paused for the manual upload of dumps, which defeats the purpose of exporting schemas in parallel.
MAX_DATAPUMP_JOBS_PER_PDB determines the maximum number of concurrent Oracle Data Pump jobs per PDB. Exceeding this limit causes ORA-39391: maximum number of Data Pump jobs (n) exceeded.
Migrating to Oracle Autonomous Database on Exadata Cloud@Customer
Zero Downtime Migration supports migrations to Oracle Autonomous Database on Exadata Cloud@Customer from any on-premises Oracle Database, including existing Exadata Cloud@Customer systems, using the offline logical migration method and NFS as a data transfer medium.
Supported Use Cases
The following migration scenarios are supported by Zero Downtime Migration:
-
Exadata Cloud@Customer (Gen 1 or Gen 2) source to Oracle Autonomous Database on Exadata Cloud@Customer target (given that the source and target databases have the same standard UID/GID for the Oracle user)
-
On-premises Oracle Database source to Oracle Autonomous Database on Exadata Cloud@Customer target (given that the source database has a non-standard UID/GID for the Oracle user)
Migration Parameters
In addition to the required source and target connection parameters, set the following in the logical migration response file:
MIGRATION_METHOD=OFFLINE_LOGICAL
DATA_TRANSFER_MEDIUM=NFS
Source Prerequisites
In addition to the usual source database prerequisites documented in Source Database Prerequisites for Logical Migration, you must also set up access to the Data Pump dump directory as detailed in the procedures below.
Prerequisite Setup for Exadata Cloud@Customer Environments
-
In all Oracle RAC nodes:
[root@onprem ~]# cat /etc/fstab | grep nfsshare
nas-server.us.com:/scratch/nfsshare /u02/app/oracle/mount nfs defaults 0 0
[root@onprem ~]#
-
On the Autonomous Database target, mount the path
nas-server.us.com:/scratch/nfsshare
to the Exadata infrastructure resource, giving you
specified_mount_path/CDB/PDB_GUID
For example:
/scratch/nfsshare/CDB/PDB_GUID
For information about the option to mount NFS, contact support for details.
-
On the source PDB, run the following:
SQL> create or replace directory DATA_PUMP_DIR_ADBCC as '/u02/app/oracle/mount/CDB/PDB_GUID';
Directory created.
SQL> select grantee from all_tab_privs where table_name = 'DATA_PUMP_DIR_ADBCC';
no rows selected
SQL> grant read, write on directory DATA_PUMP_DIR_ADBCC to SYSTEM;
Grant succeeded.
-
On the source, the expected mount point permissions are drwxr-x---.
[oracle@onprem opc]$ ls -ldrt /u02/app/oracle/mount/CDB/PDB_GUID
drwxr-x--- 2 oracle asmadmin 4096 Jul 12 11:34 /u02/app/oracle/mount/CDB/PDB_GUID
[oracle@onprem opc]$
Prerequisite Setup for On-Premises Environments
-
In all Oracle RAC nodes:
[root@onprem ~]# cat /etc/fstab | grep nfsshare
nas-server.us.com:/scratch/nfsshare /u02/app/oracle/mount nfs defaults 0 0
[root@onprem ~]#
-
Create a group with GID 1001 - miggrp
root> groupadd -g 1001 miggrp
-
Add the database user to this group.
root> usermod -aG miggrp oracle
-
On the Autonomous Database target, mount the NFS share nas-server.us.com:/scratch/nfsshare (the group should get rwx permissions) to the Exadata infrastructure resource, giving you specified_mount_path/CDB/PDB_GUID
For example:
/scratch/nfsshare/CDB/PDB_GUID
For information about the option to mount NFS, contact support for details.
-
Ensure that the directory is writable.
touch specified_mount_path/CDB/PDB_GUID/test.txt
-
In the source PDB, run the following:
SQL> create or replace directory DATA_PUMP_DIR_ADBCC as '/u02/app/oracle/mount/CDB/PDB_GUID';
Directory created.
SQL> select grantee from all_tab_privs where table_name = 'DATA_PUMP_DIR_ADBCC';
no rows selected
SQL> grant read, write on directory DATA_PUMP_DIR_ADBCC to SYSTEM;
Grant succeeded.
-
On the source, the expected mount point permissions are drwxrwx---, and the group should match the migration dummy group created.
[oracle@onprem opc]$ ls -ldrt /u02/app/oracle/mount/CDB/PDB_GUID
drwxrwx--- 2 1001 asmadmin 4096 Jul 12 11:34 /u02/app/oracle/mount/CDB/PDB_GUID
[oracle@onprem opc]$