5 Preparing for a Logical Database Migration
The following topics describe how to complete the Zero Downtime Migration prerequisites before running a logical database migration job.
Source Database Prerequisites for Logical Migration
Complete the following prerequisites on the source database to prepare for a logical migration.
Offline and Online Migrations Require:
- The character set on the source database must be the same as the target database.
- Configure the streams pool with the initialization parameter STREAMS_POOL_SIZE.

  For offline logical migrations, for optimal Data Pump performance, it is recommended that you set STREAMS_POOL_SIZE to a minimum of 256MB-350MB so that an initial pool is already allocated; otherwise you might see a significant delay during startup.

  For online logical migrations, set STREAMS_POOL_SIZE to at least 2GB. See https://support.oracle.com/epmos/faces/DocumentDisplay?id=2078459.1, which recommends 1GB of STREAMS_POOL_SIZE per integrated Extract plus an additional 25 percent.
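  For example, the streams pool for an online migration can be sized as follows (a sketch; it assumes the instance uses an SPFILE and that the memory management settings permit the change):

  SQL> ALTER SYSTEM SET STREAMS_POOL_SIZE=2G SCOPE=BOTH;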
- The system time of the Zero Downtime Migration service host and the source database server should be in sync with your Oracle Cloud Infrastructure target.

  If the time on any of these systems varies by more than 6 minutes from the time on OCI, it should be adjusted. You can use an NTP time check to synchronize the time if NTP is configured. If NTP is not configured, then it is recommended that you configure it. If configuring NTP is not an option, then you must correct the time manually to keep it in sync with OCI time.
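  For example, you can check clock synchronization using whichever NTP client the host runs (both commands shown are standard utilities; which client applies is an assumption about your environment):

  # chrony client: report the current offset from the NTP source
  chronyc tracking
  # classic ntpd client: list peers and offsets
  ntpq -p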
- If you are using a database link, and your target database is on Autonomous Database Shared Infrastructure, you must configure TCPS on the source. Autonomous Database Shared Infrastructure doesn't allow a database link to a source that is not configured with TCPS.
- If you are migrating from an Amazon Web Services RDS environment, see Migrating from Amazon Web Services RDS to Oracle Autonomous Database for information about source environment preparations.
- In the PDB being exported, if you have created local objects in the C## user's schema and you want to import them, then either ensure that a common user of the same name already exists in the target CDB instance (for non-Autonomous Database targets), or use the following Zero Downtime Migration parameter to rename the schema on import:

  DATAPUMPSETTINGS_METADATAREMAPS-1=type:REMAP_SCHEMA,oldValue:c##common_user,newValue:new_name
- If you are migrating to Oracle Autonomous Database on Exadata Cloud@Customer from any on-premises Oracle Database, including existing Exadata Cloud@Customer systems, see Migrating to Oracle Autonomous Database on Exadata Cloud@Customer for additional prerequisite setup tasks.
Online Migrations Require:
- If the source is Oracle Database 11.2, apply the mandatory 11.2.0.4 RDBMS patches on the source database. See My Oracle Support note Oracle GoldenGate -- Oracle RDBMS Server Recommended Patches (Doc ID 1557031.1).
  - Database PSU 11.2.0.4.200414 includes a fix for Oracle GoldenGate performance bug 28849751 (IE PERFORMANCE DEGRADES WHEN NETWORK LATENCY BETWEEN EXTRACT AND CAPTURE IS MORE THAN 8MS).
  - OGG RDBMS patch 31704157 (MERGE REQUEST ON TOP OF DATABASE PSU 11.2.0.4.200414 FOR BUGS 31182000 20448066) combines the mandatory fix for Oracle GoldenGate Microservices bug 20448066 (DBMS_XSTREAM_GG APIS SHOULD BE ALLOWED FOR SCA PROCESSES) and the required OGG RDBMS patch 31182000 (MERGE REQUEST ON TOP OF DATABASE PSU 11.2.0.4.200414 FOR BUGS 2990912 12668795).

    Although MOS note 1557031.1 mentions OGG patch 31177512, that patch conflicts with the patch for bug 20448066, so OGG patch 31704157 should be used instead of OGG patch 31177512.
- If the source is Oracle Database 12.1.0.2 or a later release, apply the mandatory RDBMS patches on the source database. See My Oracle Support note Latest GoldenGate/Database (OGG/RDBMS) Patch recommendations (Doc ID 2193391.1), which lists the additional RDBMS patches needed on top of the latest DBBP/RU for Oracle Database 12c and later.
- Enable ARCHIVELOG mode for the database. See Changing the Database Archiving Mode.
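  For example, a typical sequence is shown below (a sketch; it assumes you can briefly restart the database, so plan for the outage):

  SQL> SHUTDOWN IMMEDIATE
  SQL> STARTUP MOUNT
  SQL> ALTER DATABASE ARCHIVELOG;
  SQL> ALTER DATABASE OPEN;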
- Enable FORCE LOGGING to ensure that all changes are found in the redo by the Oracle GoldenGate Extract process. See Specifying FORCE LOGGING Mode.
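  For example (the verification query is an optional sanity check):

  SQL> ALTER DATABASE FORCE LOGGING;
  SQL> SELECT force_logging FROM v$database;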
- Enable database minimal supplemental logging. See Minimal Supplemental Logging.
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
- Enable the initialization parameter ENABLE_GOLDENGATE_REPLICATION.
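  For example (assumes an SPFILE is in use):

  SQL> ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION=TRUE SCOPE=BOTH;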
- Install the UTL_SPADV or UTL_RPADV package for Integrated Extract performance analysis. See Collecting XStream Statistics Using the UTL_RPADV Package. Note that the package is renamed from UTL_SPADV to UTL_RPADV in Oracle Database 19c.
- Create a GoldenGate administration user, ggadmin, granting all of the permissions listed in the example. If the source database is multitenant (CDB), create the user in the source PDB.

  SQL> create user ggadmin identified by password default tablespace users temporary tablespace temp;
  SQL> grant connect, resource to ggadmin;
  SQL> alter user ggadmin quota 100M ON USERS;
  SQL> grant unlimited tablespace to ggadmin;
  SQL> grant select any dictionary to ggadmin;
  SQL> grant create view to ggadmin;
  SQL> grant execute on dbms_lock to ggadmin;
  SQL> exec dbms_goldengate_auth.GRANT_ADMIN_PRIVILEGE('ggadmin');
- If the source database is multitenant (CDB), also create the user c##ggadmin in CDB$ROOT as shown here.

  SQL> create user c##ggadmin identified by password default tablespace users temporary tablespace temp;
  SQL> alter user c##ggadmin quota 100M ON USERS;
  SQL> grant unlimited tablespace to c##ggadmin;
  SQL> grant connect, resource to c##ggadmin container=all;
  SQL> grant select any dictionary to c##ggadmin container=all;
  SQL> grant create view to c##ggadmin container=all;
  SQL> grant execute on dbms_lock to c##ggadmin container=all;
  SQL> exec dbms_goldengate_auth.GRANT_ADMIN_PRIVILEGE('c##ggadmin',container=>'all');
- During the migration period, to provide an optimal environment for fast database replication, avoid large batch DML operations. Running large batch operations, like a single transaction that affects multi-millions of rows, can slow down replication rates. Note that CREATE, ALTER, and DROP DDL operations are not replicated.
Offline Migrations Require:
- The DATAPUMP_EXP_FULL_DATABASE and DATAPUMP_IMP_FULL_DATABASE roles are required. Data Pump uses these roles to determine whether privileged application roles should be assigned to the processes comprising the migration job. DATAPUMP_EXP_FULL_DATABASE is required for the export operation at the source database for the specified database user. The DATAPUMP_IMP_FULL_DATABASE role is required for the import operation at the target database for the specified target database user. See the Oracle Data Pump documentation for more information.
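  For example, a minimal sketch of granting the export role on the source (migration_user is a hypothetical account name; grant DATAPUMP_IMP_FULL_DATABASE to the corresponding user on the target):

  SQL> grant DATAPUMP_EXP_FULL_DATABASE to migration_user;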
Target Database Prerequisites for Logical Migration
Complete the following prerequisites on the target database to prepare for a logical migration.
Logical migrations with Oracle GoldenGate require:
- If the target is Autonomous Database, unlock the pre-created ggadmin user.
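  For example (a sketch; password is a placeholder to be replaced with a value that meets the Autonomous Database password policy):

  SQL> alter user ggadmin identified by password account unlock;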
- If the target is not Autonomous Database, create a ggadmin user in the target PDB. This user is similar to the ggadmin user on the source database, but requires more privileges. See Establishing Oracle GoldenGate Credentials for information about privileges required for a "Replicat all modes" user.
Data Pump-only logical migrations require:
- The DATAPUMP_IMP_FULL_DATABASE role is required for the import operation at the specified target database for the specified target database user.
All logical migrations require:
- The character set on the source database must be the same as the target database.
- The system time of the Zero Downtime Migration service host and source database server should be in sync with your Oracle Cloud Infrastructure target.
- All source database requirements must be met; some tasks are performed on both the source and target. See Source Database Prerequisites for Logical Migration.
Additional Logical Migration Prerequisites
Complete the following additional prerequisites to prepare for a logical migration.
Create an OCI API key pair
See Required Keys and OCIDs for details.
Set Up Data Transfer Media
- To use the Object Storage data transfer medium:

  Create an Object Storage bucket on Oracle Cloud Infrastructure. This is not required for Exadata Cloud@Customer or on-premises Exadata Database Machine targets.
- To use a database link (DBLINK):

  If you are using an existing database link between the target database and an on-premises source database, identified by the global_name of the source database, ensure that the database link is not broken. Zero Downtime Migration can reuse the pre-existing database link for migration if that data transfer medium is configured.

  Zero Downtime Migration supports DBLINK for Autonomous Database Dedicated Infrastructure, provided that there is TCP or TCPS connectivity available between the source and the Autonomous Database instance. Customers need to set up IPSec connectivity or FastConnect.
- If you are not using a database link for data transfer, ensure that the file system used for the Data Pump export directory has sufficient space to store the Data Pump dump files.
If the source uses self-signed database server certificates:
If the source database listener is configured with TLS (TCPS) using self-signed database server certificates, then ensure that the self-signed certificate is added to the Zero Downtime Migration home cert store as follows.
keytool -import -keystore ZDM_HOME/jdk/jre/lib/security/cacerts -trustcacerts -alias "src ca cert" -file source_db_server-certificate
Online Migration Additional Prerequisites
For online migration, do the following additional prerequisite tasks:
- Set up an Oracle GoldenGate Microservices hub:

  For Oracle Database Cloud Services targets, deploy the "Oracle GoldenGate for Oracle - Database Migrations" image from Oracle Cloud Marketplace.

  The "Database Migrations" version of the Oracle GoldenGate Marketplace image provides limited free licensing for use with OCI Database Migration Service. See the license agreement for details. Any other use of GoldenGate requires purchasing a license for the Oracle GoldenGate product. See the Oracle GoldenGate documentation for more information.
- Log in to Oracle Cloud Marketplace.
- Search for the "Oracle GoldenGate for Oracle - Database Migrations" Marketplace listing.
- From the Marketplace search results, select the "Oracle GoldenGate for Oracle - Database Migrations" listing.
- For instructions to deploy the Marketplace listing, see Deploying Oracle GoldenGate Microservices on Oracle Cloud Marketplace.
If you are migrating to Exadata Cloud@Customer, or any on-premises Oracle Exadata Database Machine, you must use an on-premises Oracle GoldenGate Microservices instance to create a deployment for the source and target.
- If the source database is configured to use SSL/TLS:

  Ensure that the wallet containing certificates for TLS authentication is located in the directory /u02/deployments/deployment_name/etc on the GoldenGate instance.
- If the target database is configured to use SSL/TLS:

  Ensure that the wallet containing certificates for TLS authentication is located in the correct location on the GoldenGate instance, as follows:

  - For an Autonomous Database, the wallet file should be located in the directory /u02/deployments/deployment_name/etc/adb
  - For a co-managed database, the wallet file should be located in the directory /u02/deployments/deployment_name/etc

  Autonomous databases are always configured to use TLS.
Setting Logical Migration Parameters
Set the required logical migration response file parameters. Get the response file template, $ZDM_HOME/rhp/zdm/template/zdm_logical_template.rsp, which is used to create your Zero Downtime Migration response file for the database migration procedure, and edit the file as described here.
The logical migration response file settings are described in detail in Zero Downtime Migration Logical Migration Response File Parameters Reference.
The following parameters are required for an offline or online logical migration:
- MIGRATION_METHOD: Set to ONLINE_LOGICAL for an online migration with GoldenGate, or OFFLINE_LOGICAL for an offline Data Pump transfer.
- DATA_TRANSFER_MEDIUM: Set to one of the following values. A sample fragment follows this list.

  - OSS for an Object Storage bucket
  - NFS for a shared Network File System
  - DBLINK for a direct transfer using a database link
  - COPY to use secure copy
  - AMAZONS3 to use an Amazon S3 bucket (only applies to migrations from an AWS RDS source to Oracle Autonomous Database targets; see Migrating from Amazon Web Services RDS to Oracle Autonomous Database)

  Unless you are using the default data transfer servers for handling the Data Pump dumps, you may also need to configure the data transfer node settings for the source and target database environments. See Configuring the Transfer Medium and Specifying Transfer Nodes for details.
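  For example, a minimal response file fragment for an online migration that stages dumps in Object Storage might look like this (values are illustrative):

  MIGRATION_METHOD=ONLINE_LOGICAL
  DATA_TRANSFER_MEDIUM=OSS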
- For an offline logical migration of an Oracle Database 11g source to an 11g target, set DATAPUMPSETTINGS_SECUREFILELOB=FALSE or you may get errors.
- Set the following target database parameters.
  - TARGETDATABASE_OCID specifies the Oracle Cloud resource identifier.

    For example: ocid1.instance.oc1.phx.abuw4ljrlsfiqw6vzzxb43vyypt4pkodawglp3wqxjqofakrwvou52gb6s5a

    See also https://docs.cloud.oracle.com/en-us/iaas/Content/General/Concepts/identifiers.htm
  - TARGETDATABASE_ADMINUSERNAME specifies the database administrator user name. For example, use system for a co-managed database migration or admin for an Autonomous Database migration.
- Set the following source database parameters. A sample fragment follows this list.
  - SOURCEDATABASE_ADMINUSERNAME specifies the database administrator user name, for example, system.
  - SOURCEDATABASE_CONNECTIONDETAILS_HOST specifies the listener host name or IP address. For Oracle RAC, the SCAN name can be specified. (Not required for Autonomous Database)
  - SOURCEDATABASE_CONNECTIONDETAILS_PORT specifies the listener port number. (Not required for Autonomous Database)
  - SOURCEDATABASE_CONNECTIONDETAILS_SERVICENAME specifies the fully qualified service name, for example, service_name.DB_domain. (Not required for Autonomous Database)

    See also https://docs.cloud.oracle.com/en-us/iaas/Content/Database/Tasks/connectingDB.htm
  - For migrations from an AWS RDS source, see Migrating from Amazon Web Services RDS to Oracle Autonomous Database for additional parameter settings.
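  A minimal sketch of these settings (the host, port, and service name shown are placeholders, not real values):

  SOURCEDATABASE_ADMINUSERNAME=system
  SOURCEDATABASE_CONNECTIONDETAILS_HOST=srcdb-scan.example.com
  SOURCEDATABASE_CONNECTIONDETAILS_PORT=1521
  SOURCEDATABASE_CONNECTIONDETAILS_SERVICENAME=srcpdb.example.com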
- Set the following OCIAUTHENTICATIONDETAILS parameters.

  For more information about the required settings, see https://docs.cloud.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#RequiredKeysandOCIDs
  - OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_TENANTID specifies the OCID of the OCI tenancy. You can find this value in the Console under Governance and Administration, Administration, Tenancy Details. The tenancy OCID is shown under Tenancy Information.

    For example: ocid1.tenancy.oc1..aaaaaaaaba3pv6wkcr4jqae5f44n2b2m2yt2j6rx32uzr4h25vqstifsfdsq

    See also https://docs.cloud.oracle.com/en-us/iaas/Content/Identity/Tasks/managingtenancy.htm
  - OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_USERID specifies the OCID of the IAM user. You can find this value in the Console under Profile, User Settings.

    See also https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/managingusers.htm
  - OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_FINGERPRINT specifies the fingerprint of the public API key.
  - OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_PRIVATEKEYFILE specifies the absolute path of the API private key file.
  - OCIAUTHENTICATIONDETAILS_REGIONID specifies the OCI region identifier.

    See the Region Identifier column in the table at https://docs.cloud.oracle.com/en-us/iaas/Content/General/Concepts/regions.htm
Oracle GoldenGate Settings
For online logical migrations, in addition to the above, you must also set the GoldenGate parameters TARGETDATABASE_GGADMINUSERNAME, SOURCEDATABASE_GGADMINUSERNAME, SOURCECONTAINERDATABASE_GGADMINUSERNAME, and the parameters prefixed with GOLDENGATEHUB and GOLDENGATESETTINGS.
See Zero Downtime Migration Logical Migration Response File Parameters Reference for details about these parameters.
Oracle Data Pump Settings
Zero Downtime Migration automatically sets optimal defaults for Data Pump parameters to achieve better performance and ensure data security. If you need to further tune performance, there are several Data Pump settings that you can configure in the response file.
The default DATAPUMPSETTINGS_JOBMODE=SCHEMA is recommended for migrations to Autonomous Database.
See Default Data Pump Parameter Settings for Zero Downtime Migration for information about the default Data Pump property settings, how to select schemas or objects for inclusion or exclusion, and Data Pump error handling.
See Zero Downtime Migration Logical Migration Response File Parameters Reference for all of the Data Pump parameters you can set through Zero Downtime Migration.
See Migrating from Amazon Web Services RDS to Oracle Autonomous Database for information about setting Data Pump parameters for migration from AWS RDS.
Configuring the Transfer Medium and Specifying Transfer Nodes
Zero Downtime Migration offers various transfer options to make Oracle Data Pump dumps available to the target database server.
Using the DATA_TRANSFER_MEDIUM response file parameter you can configure the following data transfer methods:
- OSS: Oracle Cloud Object Storage. Supported for all migration types and targets.
- NFS: Network File System. Supported for offline migrations to co-managed target databases only.
- DBLINK: Direct data transfer from the source to the target over a database link. Supported for online and offline migrations to Autonomous Database Shared (Data Warehouse or Transaction Processing) and co-managed targets only.
- COPY: Transfer dumps to the target transfer node using secure copy. Supported for offline migrations to co-managed target databases only.
- AMAZONS3: Amazon S3 bucket. Only applies to migrations from an AWS RDS source to an Oracle Autonomous Database target. See Migrating from Amazon Web Services RDS to Oracle Autonomous Database for more information.
Note:
To take advantage of parallelism and achieve the best data transfer performance, Oracle recommends that you transfer data using OSS or NFS for databases over 50GB in size. The DBLINK transfer medium can be convenient for smaller databases, but this choice may involve uncertainty in performance because of its dependence on network bandwidth for the duration of the transfer.
Once the export of dumps on the source is completed, the dumps are uploaded or transferred in parallel as defined by the parameter DUMPTRANSFERDETAILS_PARALLELCOUNT (defaults to 3), and any transfer failures are retried by default as specified in the parameter DUMPTRANSFERDETAILS_RETRYCOUNT (defaults to 3).
The transfer of dumps can be done from any node at the source data center, provided that the dumps are accessible from the given node. It is crucial to ascertain the network connectivity and transfer workload impact on the source database server in order to decide which data transfer approach to take.
Direct Transfer from Source to Target
This option applies only to co-managed cloud target databases.
Zero Downtime Migration enables logical migration using direct transfer of the Data Pump dump from the source to the target securely. The data is copied over from the source database directory object path to the target database server directory object path, or to a target transfer node, using either secure copy or RSYNC. This avoids the data being transferred over a WAN or needing additional shared storage between the source and target environments. This capability greatly simplifies the logical migration within the data center.
About Transfer Nodes
You will configure a node, referred to as a transfer node, for both the source data center and the target tenancy.
The response file parameters that are prefixed with DUMPTRANSFERDETAILS_SOURCE_TRANSFERNODE designate the node that handles the export dumps at the source data center. This source transfer node defaults to the source database server.
Similarly, the response file parameters that are prefixed with DUMPTRANSFERDETAILS_TARGET_TRANSFERNODE designate the node that handles the import of dumps at the target. This target transfer node defaults to the target database server for co-managed targets.
Transfer Node Requirements
The source transfer node can be any of the following:
- Source database server (default)
- NAS mounted server
- Zero Downtime Migration service node
The target transfer node can be any of the following:
- Target Database server (default)
- NAS mounted server
- Zero Downtime Migration service node
For a server to be designated as a transfer node, the following considerations are critical:
- Availability of CPU and memory to process the upload or transfer workload
- Connectivity to the specified upload or transfer target
- Port 443 connectivity to Object Storage Service if the chosen data transfer medium is OSS
- Port 22 connectivity to the target storage server if the chosen transfer medium is COPY
- Availability of the Oracle Cloud Infrastructure CLI. For speedier and more resilient upload of dumps, this is the recommended transfer utility for the OSS transfer medium.
- OCI CLI must be installed and configured as detailed in https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm.
Installing and configuring OCI CLI on each source database server may not be feasible. In such cases, one of the nodes in the data center can be designated as a transfer node with OCI CLI configured, and this node can share a network storage path with the database servers for Data Pump dumps to be created. This also avoids the upload workload consuming additional CPU and memory on production database servers.
The designated transfer node can act as the gateway server for external data transfer at the data center, so that data transfer traffic does not need to be allowed directly from the source database server or to the target database server.
Optionally, the additional transfer node requirement can be avoided by leveraging the Zero Downtime Migration server as the transfer node, provided that the Zero Downtime Migration service is placed in an on-premises data center and can meet the transfer node requirements listed above.
Using the Oracle Cloud Object Storage Transfer Medium
Object Storage data transfer medium is supported for all migration types and targets.
When using Object Storage as the data transfer medium, by setting DATA_TRANSFER_MEDIUM=OSS, it is recommended that dumps be uploaded using OCI CLI for faster, more secure, and more resilient uploads. You must configure OCI CLI on the upload node, and set the parameter DUMPTRANSFERDETAILS_SOURCE_USEOCICLI to TRUE. The parameters for OCI CLI are:

DUMPTRANSFERDETAILS_SOURCE_USEOCICLI
DUMPTRANSFERDETAILS_SOURCE_OCIHOME
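A minimal sketch of these settings (the OCI CLI installation path is illustrative and depends on how and where OCI CLI was installed on the transfer node):

DUMPTRANSFERDETAILS_SOURCE_USEOCICLI=TRUE
DUMPTRANSFERDETAILS_SOURCE_OCIHOME=/home/opc/bin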
Using the Database Link Transfer Medium
Supported for online and offline migrations to Autonomous Database Shared (Data Warehouse or Transaction Processing) and co-managed targets only.
When you set DATA_TRANSFER_MEDIUM=DBLINK, a database link is created from the OCI co-managed database or Autonomous Database target to the source database using the global_name of the specified source database.

Zero Downtime Migration creates the database link if it does not already exist, and the link is cleaned up once the Data Pump import phase is complete.
Using the NFS Transfer Medium
Supported for offline migrations to co-managed target databases only.
The NFS mode of transfer, enabled by setting DATA_TRANSFER_MEDIUM=NFS, is available for co-managed target databases and avoids the transfer of dumps, because both servers access the same shared storage. Ensure that the specified path is accessible from both the source and target database servers.
Zero Downtime Migration ensures the security of dumps in the shared storage by preserving the restricted permission on the dumps such that only the source and target database users are allowed to access the dump.
Using the Copy Transfer Medium
Supported for offline migrations to co-managed target databases only.
Dumps can be transferred from the source to the target securely by setting DATA_TRANSFER_MEDIUM=COPY. The relevant parameters are as follows:
DUMPTRANSFERDETAILS_TRANSFERTARGET_USER
DUMPTRANSFERDETAILS_TRANSFERTARGET_USERKEY
DUMPTRANSFERDETAILS_TRANSFERTARGET_HOST
DUMPTRANSFERDETAILS_TRANSFERTARGET_SUDOPATH
DUMPTRANSFERDETAILS_TRANSFERTARGET_DUMPDIRPATH
You can leverage the RSYNC utility instead of SCP. Set DUMPTRANSFERDETAILS_RSYNCAVAILABLE to TRUE, and verify that RSYNC is available on both the source and target transfer nodes.
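A minimal sketch of a COPY configuration (all values shown are placeholders; DUMPTRANSFERDETAILS_RSYNCAVAILABLE is included for the optional RSYNC variant):

DATA_TRANSFER_MEDIUM=COPY
DUMPTRANSFERDETAILS_TRANSFERTARGET_USER=opc
DUMPTRANSFERDETAILS_TRANSFERTARGET_USERKEY=/home/zdmuser/.ssh/zdm_key
DUMPTRANSFERDETAILS_TRANSFERTARGET_HOST=targetdb.example.com
DUMPTRANSFERDETAILS_TRANSFERTARGET_SUDOPATH=/usr/bin/sudo
DUMPTRANSFERDETAILS_TRANSFERTARGET_DUMPDIRPATH=/u01/app/oracle/dumps
DUMPTRANSFERDETAILS_RSYNCAVAILABLE=TRUE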
Default Data Pump Parameter Settings for Zero Downtime Migration
Zero Downtime Migration automatically sets optimal defaults for Data Pump parameters to achieve better performance and ensure security of data. The following table lists the Data Pump parameters set by Zero Downtime Migration, and the values they are set to.
If there is a Zero Downtime Migration response file parameter available to override the default, it is listed in the Optional Zero Downtime Migration Response File Parameter to Override column. The override parameters are set in the response file at $ZDM_HOME/rhp/zdm/template/zdm_logical_template.rsp.
Table 5-1 Data Pump Parameter Defaults
| Data Pump Parameter | Default Value | Optional ZDM Response File Parameter to Override |
|---|---|---|
| EXCLUDE | cluster (ADB-D, ADB-S); indextype (ADW-S); db_link (ADB); statistics (user-managed target and ADB) | DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXCLUDETYPELIST allows additional EXCLUDE object types to be specified. Specifying an invalid object type in the response file leads to an export error, for example: ORA-39038: Object path "<specified invalid>" is not supported for SCHEMA jobs. To see a list of valid object types, query the DATABASE_EXPORT_OBJECTS, SCHEMA_EXPORT_OBJECTS, and TABLE_EXPORT_OBJECTS views. |
| PARALLEL | ZDM sets PARALLEL by default as follows. For a user-managed database: the sum of (2 x number of physical CPUs) per node, capped at 32. For ADB: the number of OCPUs. | DATAPUMPSETTINGS_DATAPUMPPARAMETERS_IMPORTPARALLELISMDEGREE, DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXPORTPARALLELISMDEGREE |
| CLUSTER | ZDM always sets CLUSTER mode by default | DATAPUMPSETTINGS_DATAPUMPPARAMETERS_NOCLUSTER |
| COMPRESSION | COMPRESSION_ALGORITHM is set to BASIC (for 11.2) or MEDIUM (for 12.1+); COMPRESSION is set to ALL | N/A |
| ENCRYPTION | ENCRYPTION is set to ALL; ENCRYPTION_ALGORITHM is set to AES128; ENCRYPTION_MODE is set to PASSWORD | N/A |
| FILESIZE | Set to 5G | N/A |
| FLASHBACK_SCN | For OFFLINE_LOGICAL, ZDM sets FLASHBACK_TIME to the current system time. For ONLINE_LOGICAL, ZDM uses neither FLASHBACK_SCN nor FLASHBACK_TIME. | N/A |
| REUSE_DUMPFILES | Always set to YES | N/A |
| TRANSFORM | Always sets OMIT_ENCRYPTION_CLAUSE:Y for 19c+ targets and LOB_STORAGE:SECUREFILE. For ADB targets, the following transforms are also set by default: SEGMENT_ATTRIBUTES:N, DWCS_CVT_IOTS:Y, CONSTRAINT_USE_DEFAULT_INDEX:Y | Allows additional TRANSFORM values to be specified |
| METRICS | Always set to YES | N/A |
| LOGTIME | Always set to ALL | N/A |
| TRACE | Always set to 1FF0b00 | N/A |
| LOGFILE | Always set to the Data Pump job name and created under the specified export or import directory object. For example, if the Data Pump job is ZDM_2_DP_EXPORT_8417 and the directory object used is DATA_PUMP_DIR, then the operation log is created as ZDM_2_DP_EXPORT_8417.log under DATA_PUMP_DIR. | N/A |
Setting Advanced Data Pump Parameters
You might want to select specific schemas to migrate, rename tablespaces, or include or exclude specific objects as part of a migration.
The following are example parameter settings you can use to specify these selections or changes when you set the DATAPUMPSETTINGS_JOBMODE=FULL or DATAPUMPSETTINGS_JOBMODE=SCHEMA job modes.
These parameters are set in the response file at $ZDM_HOME/rhp/zdm/template/zdm_logical_template.rsp.
To exclude specific object types
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXCLUDETYPELIST=COMMENT,DOMAIN_INDEX,MATERIALIZED_VIEW_LOG,RLS_POLICY,TRIGGER
To exclude select SCHEMA objects for DATAPUMPSETTINGS_JOBMODE=FULL mode:
DATAPUMPSETTINGS_METADATAFILTERS-1=name:NAME_EXPR,value:'NOT IN(''SYSMAN'')',objectType:SCHEMA
DATAPUMPSETTINGS_METADATAFILTERS-2=name:NAME_EXPR,value:'NOT IN(''SH'')',objectType:SCHEMA
Note:
The SCHEMA name SYSMAN is surrounded by two single quotes, not a double quote.
To exclude select SCHEMA objects for DATAPUMPSETTINGS_JOBMODE=SCHEMA mode:
EXCLUDEOBJECTS-1=owner:SYSMAN
EXCLUDEOBJECTS-2=owner:SCOTT
By default, Zero Downtime Migration ignores Oracle Maintained Objects.
To include select SCHEMA objects for DATAPUMPSETTINGS_JOBMODE=SCHEMA mode:
INCLUDEOBJECTS-1=owner:SYSMAN
INCLUDEOBJECTS-2=owner:SCOTT
By default, Zero Downtime Migration ignores Oracle Maintained Objects.
Specify Included and Excluded Objects with Special Characters
The following examples show you how to specify object names that use special characters in the EXCLUDEOBJECTS and INCLUDEOBJECTS parameters.
- To escape a special character, use two backslashes (\\) immediately before the special character in the string.

  For example, to escape the dollar sign ($):

  INCLUDEOBJECTS-3=owner:GRAF_MULTI\\$_HR
- To match all characters between a prefix and a suffix pattern, add a period and an asterisk (.*) where the matching should occur.

  For example, to exclude all schemas starting with GRAF and ending with HR:

  EXCLUDEOBJECTS-3=owner:GRAF.*HR
To REMAP the tablespaces:
DATAPUMPSETTINGS_METADATAREMAPS-1=type:REMAP_TABLESPACE,oldValue:TS_DATA_X,newValue:DATA
DATAPUMPSETTINGS_METADATAREMAPS-2=type:REMAP_TABLESPACE,oldValue:DBS,newValue:DATA
Data Pump Error Handling
Some errors are ignored by Zero Downtime Migration. You must review any remaining errors appearing in the Data Pump log.
The following Data Pump errors are ignored by Zero Downtime Migration.
- ORA-31684: XXXX already exists
- ORA-39111: Dependent object type XXXX skipped, base object type
- ORA-39082: Object type ALTER_PROCEDURE: XXXX created with compilation warnings
Ensure that you clear all Cloud Premigration Advisor Tool (CPAT) reported errors to avoid any underlying Data Pump errors.
Automatic Tablespace Creation
For logical migrations, Zero Downtime Migration can automatically discover the source database tablespaces associated with user schemas that are being migrated, and automatically create them in the target database before the Data Pump import phase.
Zero Downtime Migration generates the DDL required to pre-create the tablespaces, creates the tablespaces on the target, and runs the generated DDL.
With automatic creation enabled, Zero Downtime Migration skips automatic creation for any tablespaces that are specified in the REMAP section in the response file, or that already exist in the target database.
Zero Downtime Migration validates whether tablespace creation is supported on the given target. There are no limitations for co-managed database systems. If the target is an Autonomous Database system, the following limitations apply:
- Autonomous Database systems support only BIGFILE tablespaces, so Zero Downtime Migration enforces BIGFILE tablespaces by default on Autonomous Database targets and reports an error if SMALLFILE tablespaces are found. You can remap any SMALLFILE tablespaces instead.
- Autonomous Database Shared systems do not support the automatic creation of tablespaces.
Use the following response file parameters to automatically create the required tablespaces at target database.
- TABLESPACEDETAILS_AUTOCREATE enables automatic tablespace creation.
- TABLESPACEDETAILS_USEBIGFILE allows you to convert SMALLFILE tablespaces to BIGFILE tablespaces. Normally FALSE by default; Zero Downtime Migration enforces TRUE for Autonomous Database targets.
- TABLESPACEDETAILS_EXTENTSIZEMB enables tablespaces to AUTOEXTEND to avoid extend errors, with a default NEXT EXTENT size of 500MB.
- TABLESPACEDETAILS_EXCLUDE specifies tablespaces to be excluded from automatic creation at the target database during the import of user schemas. By default, the SYSTEM, SYSAUX, and USERS tablespaces are excluded.

A sample fragment follows this list.
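A minimal sketch of these settings (values are illustrative; in particular, the list syntax shown for TABLESPACEDETAILS_EXCLUDE is an assumption to verify against the parameter reference):

TABLESPACEDETAILS_AUTOCREATE=TRUE
TABLESPACEDETAILS_USEBIGFILE=TRUE
TABLESPACEDETAILS_EXTENTSIZEMB=512
TABLESPACEDETAILS_EXCLUDE=SYSTEM,SYSAUX,USERS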
Automatic Tablespace Remap
For logical migrations, Zero Downtime Migration can automatically remap tablespaces on the source database to a specified tablespace on the target database.
Zero Downtime Migration automatically discovers the source database tablespaces necessary for migration. With automatic remap enabled, Zero Downtime Migration discovers the source tablespaces that require remapping by excluding any tablespaces that meet the following conditions:
- Specified for remap in DATAPUMPSETTINGS_METADATAREMAPS
- Specified for exclude in TABLESPACEDETAILS_EXCLUDE
- Tablespaces with the same name that already exist on the target database
Use the following response file parameters to automatically remap the required tablespaces.
- TABLESPACEDETAILS_AUTOREMAP enables automatic tablespace remap.
- TABLESPACEDETAILS_REMAPTARGET specifies the name of the tablespace on the target database to which to remap the tablespace on the source database. The default value is DATA.
Verifying Tablespace Remaps
Run the command ZDMCLI migrate database in evaluation mode (-eval) to ensure that all necessary tablespaces to be remapped are listed. If any tablespaces are missed, you can remap them using the DATAPUMPSETTINGS_METADATAREMAPS parameter.
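For example, an evaluation run might look like the following sketch (host names, credentials, and file paths are placeholders; the exact source arguments depend on your environment and authentication plug-in):

zdmuser> $ZDM_HOME/bin/zdmcli migrate database -sourcedb src_db_unique_name \
  -sourcenode srcdb-host -srcauth zdmauth \
  -srcarg1 user:opc -srcarg2 identity_file:/home/zdmuser/.ssh/zdm_key \
  -srcarg3 sudo_location:/usr/bin/sudo \
  -rsp /home/zdmuser/zdm_logical.rsp -eval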
Note:
For a tablespace to be used as a REMAP target, the user performing the import operation, for example, SYSTEM, should have some quota on the chosen tablespace.
Performance Considerations
There is operational overhead involved in tablespace remapping that adds to the overall Data Pump import time. To optimize performance, review and drop unwanted tablespaces from the source database to minimize the number of remapped tablespaces. For more information, see the REMAP_* section in What DataPump And Oracle RDBMS Parameters And Features Can Significantly Affect DataPump Performance ? (Doc ID 1611373.1).
Migrating to Oracle Autonomous Database on Exadata Cloud@Customer
Zero Downtime Migration supports migrations to Oracle Autonomous Database on Exadata Cloud@Customer from any on-premises Oracle Database, including existing Exadata Cloud@Customer systems, using the offline logical migration method and NFS as a data transfer medium.
Supported Use Cases
The following migration scenarios are supported by Zero Downtime Migration:
- Exadata Cloud@Customer (Gen 1 or Gen 2) source to Oracle Autonomous Database on Exadata Cloud@Customer target (given that the source and target databases have the same standard UID/GID for the Oracle user)
- On-premises Oracle Database source to Oracle Autonomous Database on Exadata Cloud@Customer target (given that the source database has a non-standard UID/GID for the Oracle user)
Migration Parameters
In addition to the required source and target connection parameters, set the following in the logical migration response file:
MIGRATION_METHOD=OFFLINE_LOGICAL
DATA_TRANSFER_MEDIUM=NFS
Source Prerequisites
In addition to the usual source database prerequisites documented in Source Database Prerequisites for Logical Migration, you must also set up access to the Data Pump dump directory as detailed in the procedures below.
Prerequisite Setup for Exadata Cloud@Customer Environments
- On all Oracle RAC nodes, ensure that the NFS share is mounted, for example:

  [root@onprem ~]# cat /etc/fstab | grep nfsshare
  nas-server.us.com:/scratch/nfsshare /u02/app/oracle/mount nfs defaults 0 0
  [root@onprem ~]#
- On the Autonomous Database target, mount the path nas-server.us.com:/scratch/nfsshare to the Exadata infrastructure resource, giving you specified_mount_path/CDB/PDB_GUID.

  For example: /scratch/nfsshare/CDB/PDB_GUID

  For information about the option to mount NFS, contact support for details.
- On the source PDB, run the following:

  SQL> create or replace directory DATA_PUMP_DIR_ADBCC as '/u02/app/oracle/mount/CDB/PDB_GUID';

  Directory created.

  SQL> select grantee from all_tab_privs where table_name = 'DATA_PUMP_DIR_ADBCC';

  no rows selected

  SQL> grant read, write on directory DATA_PUMP_DIR_ADBCC to SYSTEM;

  Grant succeeded.
- On the source, the expected mount point permissions are drwxr-x---, for example:

  [oracle@onprem opc]$ ls -ldrt /u02/app/oracle/mount/CDB/PDB_GUID
  drwxr-x--- 2 oracle asmadmin 4096 Jul 12 11:34 /u02/app/oracle/mount/CDB/PDB_GUID
  [oracle@onprem opc]$
Prerequisite Setup for On-Premises Environments
- On all Oracle RAC nodes, ensure that the NFS share is mounted, for example:

  [root@onprem ~]# cat /etc/fstab | grep nfsshare
  nas-server.us.com:/scratch/nfsshare /u02/app/oracle/mount nfs defaults 0 0
  [root@onprem ~]#
- Create a group with GID 1001, for example miggrp:

  root> groupadd -g 1001 miggrp
- Add the database user to this group:

  root> usermod -aG miggrp oracle
- On the Autonomous Database target, mount the NFS share nas-server.us.com:/scratch/nfsshare to the Exadata infrastructure resource (the group should get rwx permissions), giving you specified_mount_path/CDB/PDB_GUID.

  For example: /scratch/nfsshare/CDB/PDB_GUID

  For information about the option to mount NFS, contact support for details.
- Ensure that the directory is writable, for example:

  touch specified_mount_path/CDB/PDB_GUID/test.txt
- In the source PDB, run the following:

  SQL> create or replace directory DATA_PUMP_DIR_ADBCC as '/u02/app/oracle/mount/CDB/PDB_GUID';

  Directory created.

  SQL> select grantee from all_tab_privs where table_name = 'DATA_PUMP_DIR_ADBCC';

  no rows selected

  SQL> grant read, write on directory DATA_PUMP_DIR_ADBCC to SYSTEM;

  Grant succeeded.
- On the source, the expected mount point permissions are drwxrwx---, and the group should match the migration group created earlier, for example:

  [oracle@onprem opc]$ ls -ldrt /u02/app/oracle/mount/CDB/PDB_GUID
  drwxrwx--- 2 1001 asmadmin 4096 Jul 12 11:34 /u02/app/oracle/mount/CDB/PDB_GUID
  [oracle@onprem opc]$