3 Preparing for Database Migration
Before starting a Zero Downtime Migration database migration you must configure connectivity between the servers, prepare the source and target databases, set parameters in the response file, and configure any required migration job customization.
See the Zero Downtime Migration Release Notes for the latest information about new features, known issues, and My Oracle Support notes.
- Configuring Connectivity Prerequisites
Connectivity must be set up between the Zero Downtime Migration service host and the source and target database servers.
- Preparing the Source and Target Databases
See the following topics for information about preparing the source and target databases for migration.
- Preparing the Response File
Set the response file parameters for the migration target and backup medium you are using in the migration process.
- Preparing for Automatic Application Switchover
To minimize or eliminate service interruptions on the application after you complete the database migration and switchover, prepare your application to automatically switch over connections from the source database to the target database.
- Customizing a Migration Job
You can customize the Zero Downtime Migration workflow by registering action scripts or plug-ins as pre-actions or post-actions to be performed as part of the operational phases involved in your migration job.
Configuring Connectivity Prerequisites
Connectivity must be set up between the Zero Downtime Migration service host and the source and target database servers.
The following topics describe how to configure the Zero Downtime Migration connectivity prerequisites before running a migration job.
- Configuring Connectivity From the Zero Downtime Migration Service Host to the Source and Target Database Servers
Complete the following procedure to ensure the required connectivity between the Zero Downtime Migration service host and the source and target database servers.
- Configuring SUDO Access
You may need to grant certain users authority to perform operations using sudo on the source and target database servers.
- Configuring Connectivity Between the Source and Target Database Servers
You have two options for configuring connectivity between the source and target database servers: SCAN or SSH.
- Generate SSH Keys Without a Passphrase
If authentication key pairs without a passphrase are not available on the Zero Downtime Migration service host for the Zero Downtime Migration software installed user, you can generate a new SSH key without a passphrase.
Parent topic: Preparing for Database Migration
Configuring Connectivity From the Zero Downtime Migration Service Host to the Source and Target Database Servers
Complete the following procedure to ensure the required connectivity between the Zero Downtime Migration service host and the source and target database servers.
Parent topic: Configuring Connectivity Prerequisites
Configuring SUDO Access
You may need to grant certain users authority to perform operations using sudo on the source and target database servers.
For source database servers:
- If the source database server is accessed with the root user, then there is no need to configure sudo operations.
- If the source database server is accessed through SSH, then configure sudo operations to run without prompting for a password for the database installed user and the root user. For example, if the database installed user is oracle, then run sudo su - oracle. For the root user, run sudo su -.
For target database servers:
- Because the target database server is in the cloud, sudo operations are usually configured already. Otherwise, configure all sudo operations to run without prompting for a password for the database installed user and the root user. For example, if the database installed user is oracle, then run sudo su - oracle. For the root user, run sudo su -.
Note, for example, if the login user is opc, then you can enable sudo operations for the opc user.
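Passwordless sudo is typically granted through a sudoers drop-in file. The fragment below is a sketch only, assuming an opc login user and a hypothetical file name; create and validate it with visudo rather than editing sudoers files directly.

```
# Hypothetical file /etc/sudoers.d/zdm-opc (validate with: visudo -c)
# Grants the opc login user passwordless sudo for all commands.
opc    ALL=(ALL)    NOPASSWD: ALL
```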
Parent topic: Configuring Connectivity Prerequisites
Configuring Connectivity Between the Source and Target Database Servers
You have two options for configuring connectivity between the source and target database servers: SCAN or SSH.
Configure connectivity using one of the following options.
- Option 1: Use SCAN
To use this option, the SCAN of the target should be resolvable from the source database server, and the SCAN of the source should be resolvable from the target server.
- Option 2: Set up an SSH Tunnel
If connectivity using SCAN and the SCAN port is not possible between the source and target database servers, set up an SSH tunnel from the source database server to the target database server.
Parent topic: Configuring Connectivity Prerequisites
Option 1: Use SCAN
To use this option, the SCAN of the target should be resolvable from the source database server, and the SCAN of the source should be resolvable from the target server.
The source database server specified in the ZDMCLI MIGRATE DATABASE command -sourcenode parameter can connect to the target database instance over the target SCAN through the respective SCAN port, and vice versa.
With SCAN connectivity from both sides, the source and target databases can synchronize from either direction. If the source database server SCAN cannot be resolved from the target database server, then the SKIP_FALLBACK parameter in the response file must be set to TRUE, and the target database and source database cannot synchronize after switchover.
Test Connectivity
To test connectivity from the source to the target environment, add the TNS entry of the target database to the source database server $ORACLE_HOME/network/admin/tnsnames.ora file.
[oracle@sourcedb ~] tnsping target-tns-string
To test connectivity from the target to the source environment, add the TNS entry of the source database to the target database server $ORACLE_HOME/network/admin/tnsnames.ora file.
[oracle@targetdb ~] tnsping source-tns-string
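Where tnsping is not yet usable on a host, a plain TCP check of the SCAN listener port can give a quick first signal. This is a sketch only; the host name and port below are placeholders, and it does not replace the tnsping verification above.

```shell
# Hypothetical helper: exit 0 only if host:port accepts a TCP connection.
# Relies on bash's /dev/tcp pseudo-device with a 3-second timeout.
check_port() {
  timeout 3 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null
}

check_port target-scan.example.com 1521 && echo reachable || echo unreachable
```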
Note:
Database migration to Exadata Cloud at Customer using the Zero Data Loss Recovery Appliance requires mandatory SQL*Net connectivity from the target database server to the source database server.
Option 2: Set up an SSH Tunnel
If connectivity using SCAN and the SCAN port is not possible between the source and target database servers, set up an SSH tunnel from the source database server to the target database server.
The following procedure sets up an SSH tunnel on the source database servers for the root user. Note that this tunnel is a temporary channel. With this connectivity option, you cannot synchronize the target database and source database after switchover, and you cannot fall back to the original source database.
Note:
The following steps refer to Oracle Cloud Infrastructure, but are also applicable to Exadata Cloud at Customer and Exadata Cloud Service.
Generate SSH Keys Without a Passphrase
If authentication key pairs without a passphrase are not available on the Zero Downtime Migration service host for the Zero Downtime Migration software installed user, you can generate a new SSH key without a passphrase.
Note:
Currently, only the RSA key format is supported for configuring SSH connectivity, so use the ssh-keygen command, which generates both of the authentication key pairs (public and private).
The following example shows you how to generate an SSH key pair for the Zero Downtime Migration software installed user. You can also use this command to generate the SSH key pair for the opc user.
Run the following command on the Zero Downtime Migration service host.
zdmuser> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/zdmuser/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/zdmuser/.ssh/id_rsa.
Your public key has been saved in /home/zdmuser/.ssh/id_rsa.pub.
The key fingerprint is:
c7:ed:fa:2c:5b:bb:91:4b:73:93:c1:33:3f:23:3b:30 zdmuser@zdm_service_host
The key's randomart image is:
+--[ RSA 2048]----+
| |
| |
| |
| . . . |
| S o . = |
| . E . * |
| X.+o.|
| .= Bo.o|
| o+*o. |
+-----------------+
This command generates the id_rsa and id_rsa.pub files in the zdmuser home, for example, /home/zdmuser/.ssh.
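For scripted setups, ssh-keygen can also run non-interactively. The flags below are standard OpenSSH options; the destination directory here is illustrative (in practice you would use ~/.ssh, as in the interactive example above).

```shell
# Generate an RSA key pair with an empty passphrase, without prompts.
# -t rsa : RSA key format (required by Zero Downtime Migration)
# -N ""  : empty passphrase
# -f     : private key path; the public key is written alongside with .pub
# -q     : quiet mode, suppress the fingerprint/randomart output
keydir=$(mktemp -d)   # illustrative destination directory
ssh-keygen -t rsa -N "" -f "$keydir/id_rsa" -q
ls "$keydir"
```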
Parent topic: Configuring Connectivity Prerequisites
Preparing the Source and Target Databases
See the following topics for information about preparing the source and target databases for migration.
- Source Database Prerequisites
Meet the following prerequisites on the source database before the Zero Downtime Migration process starts.
- Target Database Prerequisites
The following prerequisites must be met on the target database before you begin the Zero Downtime Migration process.
- Setting Up the Transparent Data Encryption Wallet
For Oracle Database 12c Release 2 and later, if the source and target databases do not have Transparent Data Encryption (TDE) enabled, then it is mandatory that you configure the TDE wallet before migration begins.
Parent topic: Preparing for Database Migration
Source Database Prerequisites
Meet the following prerequisites on the source database before the Zero Downtime Migration process starts.
- The source database must be running in archive log mode.
- Configure the TDE wallet on Oracle Database 12c Release 2 and later. Enabling TDE on Oracle Database 11g Release 2 (11.2.0.4) and Oracle Database 12c Release 1 is optional.
For Oracle Database 12c Release 2 and later, if the source database does not have Transparent Data Encryption (TDE) enabled, then it is mandatory that you configure the TDE wallet before migration begins. The WALLET_TYPE can be AUTOLOGIN (preferred) or PASSWORD based.
Ensure that the wallet STATUS is OPEN, and that WALLET_TYPE is AUTOLOGIN (for an auto-login wallet type) or WALLET_TYPE is PASSWORD (for a password-based wallet type). For a multitenant database, ensure that the wallet is open on all PDBs as well as the CDB, and the master key is set for all PDBs and the CDB.
SQL> SELECT * FROM v$encryption_wallet;
- If the source is an Oracle RAC database, and SNAPSHOT CONTROLFILE is not on a shared location, configure SNAPSHOT CONTROLFILE to point to a shared location on all Oracle RAC nodes to avoid the ORA-00245 error during backups to Oracle Object Store.
For example, if the database is deployed on ASM storage:
$ rman target /
RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+DATA/snapcf_matrix.f';
If the database is deployed on an ACFS file system, specify the shared ACFS location in the above command.
- Verify that port 22 on the source and target database servers allows incoming connections from the Zero Downtime Migration service host.
- Ensure that the SCAN listener ports (1521, for example) on the source database servers allow incoming connections from the target database servers and outgoing connections to the target database servers.
Alternate SQL connectivity should be made available if a firewall blocks incoming remote connections using the SCAN listener port.
- To preserve the source database Recovery Time Objective (RTO) and Recovery Point Objective (RPO) during the migration, the existing RMAN backup strategy should be maintained.
During the migration a dual backup strategy will be in place: the existing backup strategy and the strategy used by Zero Downtime Migration. Avoid having two RMAN backup jobs running simultaneously (the existing one and the one initiated by Zero Downtime Migration). If archive logs are deleted on the source database, and these archive logs are needed by Zero Downtime Migration to synchronize the target cloud database, then these files should be restored so that Zero Downtime Migration can continue the migration process.
- If the source database is deployed using Oracle Grid Infrastructure and the database is not registered using SRVCTL, then you must register the database before the migration.
- The source database must use a server parameter file (SPFILE).
- The source database must have a password file in location $ORACLE_HOME/dbs/orapwORACLE_SID; otherwise, create it using the ORAPWD utility.
- If RMAN is not already configured to automatically back up the control file and SPFILE, then set CONFIGURE CONTROLFILE AUTOBACKUP to ON and revert the setting back to OFF after migration is complete.
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
Target Database Prerequisites
The following prerequisites must be met on the target database before you begin the Zero Downtime Migration process.
- You must create a placeholder target database using Grid Infrastructure based Database Services before database migration begins.
Note:
For this release, only Grid Infrastructure-based database services are supported as targets. For example, an LVM-based instance or an instance created in a compute node without Grid Infrastructure is not a supported target.
The placeholder target database is overwritten during migration, but it retains the overall configuration.
Pay careful attention to the following requirements:
- Size for the future - When you create the database from the console, ensure that your chosen shape can accommodate the source database, plus any future sizing requirements. A good guideline is to use a shape similar to or larger in size than the source database.
- Set name parameters
DB_NAME - If the target database is Exadata Cloud Service or Exadata Cloud at Customer, then the database DB_NAME should be the same as the source database DB_NAME. If the target database is Oracle Cloud Infrastructure, then the database DB_NAME can be the same as or different from the source database DB_NAME.
DB_UNIQUE_NAME - If the target database is Oracle Cloud Infrastructure, Exadata Cloud Service, or Exadata Cloud at Customer, the target database DB_UNIQUE_NAME parameter value must be unique to ensure that Oracle Data Guard can identify the target as a different database from the source database.
- Match the source SYS password - Specify a SYS password that matches that of the source database.
- Disable automatic backups - Provision the target database from the console without enabling automatic backups.
For Oracle Cloud Infrastructure and Exadata Cloud Service, do not select the Enable automatic backups option under the section Configure database backups.
For Exadata Cloud at Customer, set Backup destination Type to None under the section Configure Backups.
- The target database version should be the same as the source database version. The target database patch level should also be the same as (or higher than) the source database.
If the target database environment is at a higher patch level than the source database (for example, if the source database is at Jan 2020 PSU/BP and the target database is at April 2020 PSU/BP), then you must run the datapatch utility after database migration.
- The target database time zone version must be the same as the source database time zone version. To check the current time zone version, query the V$TIMEZONE_FILE view as shown here, and upgrade the time zone file if necessary.
SQL> SELECT * FROM v$timezone_file;
- Verify that the TDE wallet folder exists, and ensure that the wallet STATUS is OPEN, and that WALLET_TYPE is AUTOLOGIN (for an auto-login wallet type) or WALLET_TYPE is PASSWORD (for a password-based wallet). For a multitenant database, ensure that the wallet is open on all PDBs as well as the CDB, and the master key is set for all PDBs and the CDB.
SQL> SELECT * FROM v$encryption_wallet;
- The target database must use a server parameter file (SPFILE).
- If the target is an Oracle RAC database, then you must set up SSH connectivity without a passphrase between the Oracle RAC servers for the oracle user.
- Check the size of the disk groups and usage on the target database (ASM disk groups or ACFS file systems) and make sure adequate storage is provisioned and available on the target database servers.
- Make sure adequate storage is provisioned and available on the object store to accommodate the source database backup.
- Verify that ports 22 and 1521 on the target servers in the Oracle Cloud Infrastructure, Exadata Cloud Service, or Exadata Cloud at Customer environment are open and not blocked by a firewall.
- Capture the output of the RMAN SHOW ALL command, so that you can compare RMAN settings after the migration, then reset any changed RMAN configuration settings to ensure that the backup works without any issues.
RMAN> SHOW ALL;
See Also:
Managing User Credentials for information about generating the auth token for Object Storage backups
Parent topic: Preparing the Source and Target Databases
Setting Up the Transparent Data Encryption Wallet
For Oracle Database 12c Release 2 and later, if the source and target databases do not have Transparent Data Encryption (TDE) enabled, then it is mandatory that you configure the TDE wallet before migration begins.
TDE should be enabled, and the TDE WALLET status on both source and target databases must be set to OPEN. The WALLET_TYPE can be AUTOLOGIN, for an auto-login wallet (preferred), or PASSWORD, for a password-based wallet. On a multitenant database, make sure that the wallet is open on all PDBs as well as the CDB, and that the master key is set for all PDBs and the CDB.
If TDE is not already configured as required on the source and target databases, use the following instructions to set up the TDE wallet.
For a password-based wallet, you only need to do steps 1, 2, and 4; for an auto-login wallet, complete all of the steps.
- Set ENCRYPTION_WALLET_LOCATION in the $ORACLE_HOME/network/admin/sqlnet.ora file.
/home/oracle>cat /u01/app/oracle/product/12.2.0.1/dbhome_4/network/admin/sqlnet.ora
ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=FILE)
 (METHOD_DATA=(DIRECTORY=/u01/app/oracle/product/12.2.0.1/dbhome_4/network/admin/)))
For an Oracle RAC instance, also set ENCRYPTION_WALLET_LOCATION in the second Oracle RAC node.
- Create and configure the keystore.
- Connect to the database and create the keystore.
$ sqlplus "/as sysdba"
SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/u01/app/oracle/product/12.2.0.1/dbhome_2/network/admin' IDENTIFIED BY password;
- Open the keystore.
For a non-CDB environment, run the following command.
SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY password;
keystore altered.
For a CDB environment, run the following command.
SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY password CONTAINER = ALL;
keystore altered.
- Create and activate the master encryption key.
For a non-CDB environment, run the following command.
SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY password WITH BACKUP;
keystore altered.
For a CDB environment, run the following command.
SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY password WITH BACKUP CONTAINER = ALL;
keystore altered.
- Query V$ENCRYPTION_KEYS to get the wallet status, wallet type, and wallet location.
SQL> SELECT * FROM v$encryption_keys;

WRL_TYPE             WRL_PARAMETER
-------------------- --------------------------------------------------------------------------------
STATUS                         WALLET_TYPE          WALLET_OR FULLY_BAC     CON_ID
------------------------------ -------------------- --------- --------- ----------
FILE                 /u01/app/oracle/product/12.2.0.1/dbhome_2/network/admin/
OPEN                           PASSWORD             SINGLE    NO                 0

The configuration of a password-based wallet is complete at this stage, and the wallet is enabled with status OPEN and WALLET_TYPE shown as PASSWORD in the query output above.
Continue to step 3 only if you need to configure an auto-login wallet; otherwise skip to step 4.
- For an auto-login wallet only, complete the keystore configuration.
- Create the auto-login keystore.
SQL> ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE FROM KEYSTORE '/u01/app/oracle/product/12.2.0.1/dbhome_2/network/admin/' IDENTIFIED BY password;
keystore altered.
- Close the password-based wallet.
SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE CLOSE IDENTIFIED BY password;
keystore altered.
- Query V$ENCRYPTION_WALLET to get the wallet status, wallet type, and wallet location.
SQL> SELECT * FROM v$encryption_wallet;

WRL_TYPE             WRL_PARAMETER
-------------------- --------------------------------------------------------------------------------
STATUS                         WALLET_TYPE          WALLET_OR FULLY_BAC    CON_ID
------------------------------ -------------------- --------- --------- ---------
FILE                 /u01/app/oracle/product/12.2.0.1/dbhome_2/network/admin/
OPEN                           AUTOLOGIN            SINGLE    NO

In the query output, verify that the TDE wallet STATUS is OPEN and WALLET_TYPE is set to AUTOLOGIN; otherwise the auto-login wallet is not set up correctly.
This completes the auto-login wallet configuration.
- Copy the wallet files to the second Oracle RAC node.
If you configured the wallet in a shared file system for Oracle RAC, or if you are enabling TDE for a single-instance database, then no action is required.
If you are enabling TDE for an Oracle RAC database without shared access to the wallet, copy the following files to the same location on the second node.
- /u01/app/oracle/product/12.2.0.1/dbhome_2/network/admin/ew*
- /u01/app/oracle/product/12.2.0.1/dbhome_2/network/admin/cw*
Parent topic: Preparing the Source and Target Databases
Preparing the Response File
Set the response file parameters for the migration target and backup medium you are using in the migration process.
The response file settings in the following topics show you how to configure a typical use case. To further customize your configuration you can find additional parameters described in Zero Downtime Migration Response File Parameters Reference.
- Response File Settings for Migration to Oracle Cloud Infrastructure
Configure the following response file settings to migrate data to an Oracle Cloud Infrastructure virtual machine or bare metal target.
- Response File Settings for Migration to Exadata Cloud Service
Configure the following response file settings to migrate data to an Exadata Cloud Service target.
- Response File Settings for Exadata Cloud at Customer with Zero Data Loss Recovery Appliance Backup
Configure the following response file settings to migrate data to an Exadata Cloud at Customer target using Zero Data Loss Recovery Appliance as the backup medium.
- Response File Settings for Exadata Cloud at Customer with Object Storage Backup
Configure the following response file settings to migrate data to an Exadata Cloud at Customer target using Oracle Cloud Infrastructure Object Storage service as the backup medium.
- Response File Settings for Exadata Cloud at Customer with NFS Backup
Configure the following response file settings to migrate data to an Exadata Cloud at Customer target using NFS storage as the backup medium.
- Response File Settings for Offline Migration (Backup and Recovery)
Configure the following response file settings before migrating a database offline to an Oracle Cloud Infrastructure, Exadata Cloud at Customer, or Exadata Cloud Service target environment.
Parent topic: Preparing for Database Migration
Response File Settings for Migration to Oracle Cloud Infrastructure
Configure the following response file settings to migrate data to an Oracle Cloud Infrastructure virtual machine or bare metal target.
Get the response file template, which is used to create your Zero Downtime Migration response file for the database migration procedure, from location $ZDM_HOME/rhp/zdm/template/zdm_template.rsp, and update the file as follows.
- Set TGT_DB_UNIQUE_NAME to the target database DB_UNIQUE_NAME value. To find DB_UNIQUE_NAME, run
SQL> show parameter db_unique_name
- Set PLATFORM_TYPE to VMDB.
- Set MIGRATION_METHOD to DG_OSS, where DG stands for Data Guard and OSS stands for Object Storage service.
- If SSH tunneling is set up, set the TGT_SSH_TUNNEL_PORT parameter.
- Zero Downtime Migration automatically discovers the location for data, redo, and reco storage volumes from the specified target database. If you need to override the discovered values, specify the target database data files storage (ASM or ACFS) location using the appropriate set of parameters.
- ASM: TGT_DATADG, TGT_REDODG, and TGT_RECODG
- ACFS: TGT_DATAACFS, TGT_REDOACFS, and TGT_RECOACFS
- Set SKIP_FALLBACK=TRUE if you do not want to ship redo logs from the target to the source standby, either voluntarily or because there is no connectivity between the target and the source.
- If the target database environment is at a higher patch level than the source database (for example, if the source database is at Jan 2020 PSU/BP and the target database is at April 2020 PSU/BP), then use the TGT_SKIP_DATAPATCH=FALSE parameter to run the datapatch utility to apply a database patch on the target database as part of the post-migration tasks. Otherwise, you need to run the datapatch utility manually after the migration.
- Set ZDM_LOG_OSS_PAR_URL to the Cloud Object Store pre-authenticated URL if you want to upload migration logs onto Cloud Object Storage. For information about getting a pre-authenticated URL see Oracle Cloud documentation at https://docs.cloud.oracle.com/en-us/iaas/Content/Object/Tasks/usingpreauthenticatedrequests.htm#usingconsole.
- Set phase_name_MONITORING_INTERVAL=n mins if you want Zero Downtime Migration to monitor and report the status of backup and restore operations at the configured time interval during the migration. The default interval value is 10 minutes. To disable monitoring, set these values to 0 (zero).
ZDM_BACKUP_FULL_SRC_MONITORING_INTERVAL=
ZDM_BACKUP_INCREMENTAL_SRC_MONITORING_INTERVAL=
ZDM_BACKUP_DIFFERENTIAL_SRC_MONITORING_INTERVAL=
ZDM_CLONE_TGT_MONITORING_INTERVAL=
ZDM_OSS_RESTORE_TGT_MONITORING_INTERVAL=
ZDM_OSS_RECOVER_TGT_MONITORING_INTERVAL=
- Set ZDM_BACKUP_RETENTION_WINDOW=number of days if you wish to retain the source database backup after the migration.
- Set ZDM_SRC_TNS_ADMIN=TNS_ADMIN value if TNS_ADMIN is in a custom location.
- To access Oracle Cloud Object Storage, set the following parameters in the response file.
- Set HOST to the cloud storage REST endpoint URL.
- For Oracle Cloud Infrastructure storage the typical value format is
HOST=https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/ObjectStorageNamespace
To find the Object Storage Namespace value, log in to the Cloud Console and select Menu > Administration > Tenancy Detail, and in the Object Storage Settings section find the value for Object Storage Namespace.
- For Oracle Cloud Infrastructure Classic storage the typical value format is
HOST=https://acme.storage.oraclecloud.com/v1/Storage-tenancy name
- Set the Object Storage bucket OPC_CONTAINER parameter.
The bucket is also referred to as a container for Oracle Cloud Infrastructure Classic storage. Make sure that the Object Storage bucket is created using the Oracle Cloud Service Console as appropriate. Make sure adequate storage is provisioned and available on the object store to accommodate the source database backup.
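Put together, a minimal response-file excerpt for this scenario might look like the following. Every value below is a hypothetical placeholder for illustration, not a default; substitute your own names and endpoints.

```
# Illustrative excerpt of zdm_template.rsp for an OCI target with Object
# Storage backup; all values are placeholders.
TGT_DB_UNIQUE_NAME=targetdb_unique_name
PLATFORM_TYPE=VMDB
MIGRATION_METHOD=DG_OSS
HOST=https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/MyNamespace
OPC_CONTAINER=zdm_backup_bucket
SKIP_FALLBACK=FALSE
ZDM_BACKUP_RETENTION_WINDOW=7
```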
Parent topic: Preparing the Response File
Response File Settings for Migration to Exadata Cloud Service
Configure the following response file settings to migrate data to an Exadata Cloud Service target.
Get the response file template, which is used to create your Zero Downtime Migration response file for the database migration procedure, from location $ZDM_HOME/rhp/zdm/template/zdm_template.rsp, and update the file as follows.
- Set TGT_DB_UNIQUE_NAME to the target database DB_UNIQUE_NAME value. To find DB_UNIQUE_NAME, run
SQL> show parameter db_unique_name
- Set PLATFORM_TYPE to EXACS.
- Set MIGRATION_METHOD to DG_OSS, where DG stands for Data Guard and OSS stands for Object Storage service.
- If SSH tunneling is set up, set the TGT_SSH_TUNNEL_PORT parameter.
- Zero Downtime Migration automatically discovers the location for data, redo, and reco storage volumes from the specified target database. If you need to override the discovered values, specify the target database data files storage (ASM or ACFS) location using the appropriate set of parameters.
- ASM: TGT_DATADG, TGT_REDODG, and TGT_RECODG
- ACFS: TGT_DATAACFS, TGT_REDOACFS, and TGT_RECOACFS
- Set SKIP_FALLBACK=TRUE if you do not want to ship redo logs from the target to the source standby, either voluntarily or because there is no connectivity between the target and the source.
- If the target database environment is at a higher patch level than the source database (for example, if the source database is at Jan 2020 PSU/BP and the target database is at April 2020 PSU/BP), then use the TGT_SKIP_DATAPATCH=FALSE parameter to run the datapatch utility to apply a database patch on the target database as part of the post-migration tasks. Otherwise, you need to run the datapatch utility manually after the migration.
- Set ZDM_LOG_OSS_PAR_URL to the Cloud Object Store pre-authenticated URL if you want to upload migration logs onto Cloud Object Storage. For information about getting a pre-authenticated URL see Oracle Cloud documentation at https://docs.cloud.oracle.com/en-us/iaas/Content/Object/Tasks/usingpreauthenticatedrequests.htm#usingconsole.
- Set phase_name_MONITORING_INTERVAL=n mins if you want Zero Downtime Migration to monitor and report the status of backup and restore operations at the configured time interval during the migration. The default interval value is 10 minutes. To disable monitoring, set these values to 0 (zero).
ZDM_BACKUP_FULL_SRC_MONITORING_INTERVAL=
ZDM_BACKUP_INCREMENTAL_SRC_MONITORING_INTERVAL=
ZDM_BACKUP_DIFFERENTIAL_SRC_MONITORING_INTERVAL=
ZDM_CLONE_TGT_MONITORING_INTERVAL=
ZDM_OSS_RESTORE_TGT_MONITORING_INTERVAL=
ZDM_OSS_RECOVER_TGT_MONITORING_INTERVAL=
- Set ZDM_BACKUP_RETENTION_WINDOW=number of days if you wish to retain the source database backup after the migration.
- Set ZDM_SRC_TNS_ADMIN=TNS_ADMIN value if TNS_ADMIN is in a custom location.
- To access Oracle Cloud Object Storage, set the following parameters in the response file.
- Set HOST to the cloud storage REST endpoint URL.
- For Oracle Cloud Infrastructure storage the typical value format is
HOST=https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/ObjectStorageNamespace
To find the Object Storage Namespace value, log in to the Cloud Console and select Menu > Administration > Tenancy Detail, and in the Object Storage Settings section find the value for Object Storage Namespace.
- For Oracle Cloud Infrastructure Classic storage the typical value format is
HOST=https://acme.storage.oraclecloud.com/v1/Storage-tenancy name
- Set the Object Storage bucket OPC_CONTAINER parameter.
The bucket is also referred to as a container for Oracle Cloud Infrastructure Classic storage. Make sure that the Object Storage bucket is created using the Oracle Cloud Service Console as appropriate. Make sure adequate storage is provisioned and available on the object store to accommodate the source database backup.
Parent topic: Preparing the Response File
Response File Settings for Exadata Cloud at Customer with Zero Data Loss Recovery Appliance Backup
Configure the following response file settings to migrate data to an Exadata Cloud at Customer target using Zero Data Loss Recovery Appliance as the backup medium.
Get the response file template, which is used to create your Zero Downtime Migration response file for the database migration procedure, from location $ZDM_HOME/rhp/zdm/template/zdm_template.rsp, and update the file as follows.
-
Set
TGT_DB_UNIQUE_NAME
to the target databaseDB_UNIQUE_NAME
value. To findDB_UNIQUE_NAME
runSQL> show parameter db_unique_name
For Cloud type Exadata Cloud at Customer Gen 1, set
TGT_DB_UNIQUE_NAME
to a differentDB_UNIQUE_NAME
not currently in use -
- Set PLATFORM_TYPE to EXACC.
- Set MIGRATION_METHOD to DG_ZDLRA, where DG stands for Data Guard and ZDLRA for Zero Data Loss Recovery Appliance.
- Set the following Zero Data Loss Recovery Appliance parameters to use a backup residing in Zero Data Loss Recovery Appliance.
  - Set SRC_ZDLRA_WALLET_LOC to the source wallet location, for example, SRC_ZDLRA_WALLET_LOC=/u02/app/oracle/product/12.1.0/dbhome_3/dbs/zdlra.
  - Set TGT_ZDLRA_WALLET_LOC to the target wallet location, for example, TGT_ZDLRA_WALLET_LOC=target_database_oracle_home/dbs/zdlra.
  - Set ZDLRA_CRED_ALIAS to the wallet credential alias, for example, ZDLRA_CRED_ALIAS=zdlra_scan:listener_port/zdlra9:dedicated.
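Taken together, the Zero Data Loss Recovery Appliance settings in the response file might look like the following sketch. The target wallet path, port number, and credential alias shown here are illustrative placeholders, not values to copy verbatim:

```
MIGRATION_METHOD=DG_ZDLRA
SRC_ZDLRA_WALLET_LOC=/u02/app/oracle/product/12.1.0/dbhome_3/dbs/zdlra
TGT_ZDLRA_WALLET_LOC=/u02/app/oracle/product/19.0.0/dbhome_1/dbs/zdlra
ZDLRA_CRED_ALIAS=zdlra_scan:1521/zdlra9:dedicated
```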
- Zero Downtime Migration automatically discovers the location for data, redo, and reco storage volumes from the specified target database. If you need to override the discovered values, specify the target database data files storage (ASM or ACFS) location using the appropriate set of parameters.
  - ASM: TGT_DATADG, TGT_REDODG, and TGT_RECODG
  - ACFS: TGT_DATAACFS, TGT_REDOACFS, and TGT_RECOACFS
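For example, to direct data and redo to one ASM disk group and recovery storage to another, the response file could contain the following sketch; the disk group names are assumptions for illustration only:

```
TGT_DATADG=+DATAC1
TGT_REDODG=+DATAC1
TGT_RECODG=+RECOC1
```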
- Set SKIP_FALLBACK=TRUE if you do not want to ship redo logs from the target to the source standby, either voluntarily or because there is no connectivity between the target and the source.
- If the target database environment is at a higher patch level than the source database (for example, if the source database is at the Jan 2020 PSU/BP and the target database is at the April 2020 PSU/BP), then set the TGT_SKIP_DATAPATCH=FALSE parameter to run the datapatch utility to apply a database patch on the target database as part of the post-migration tasks. Otherwise, you must run the datapatch utility manually after the migration.
- Set phase_name_MONITORING_INTERVAL=n mins if you want Zero Downtime Migration to monitor and report the status of the restore operation at the configured time interval during the migration. The default interval value is 10 minutes. To disable monitoring, set the value to 0 (zero). The relevant parameter for this migration method is:
ZDM_CLONE_TGT_MONITORING_INTERVAL=
- Set ZDM_SRC_TNS_ADMIN to the TNS_ADMIN value if it is in a custom location.
Parent topic: Preparing the Response File
Response File Settings for Exadata Cloud at Customer with Object Storage Backup
Configure the following response file settings to migrate data to an Exadata Cloud at Customer target using Oracle Cloud Infrastructure Object Storage service as the backup medium.
Get the response file template, which is used to create your Zero Downtime Migration response file for the database migration procedure, from location $ZDM_HOME/rhp/zdm/template/zdm_template.rsp, and update the file as follows.
- Set TGT_DB_UNIQUE_NAME to the target database DB_UNIQUE_NAME value. To find DB_UNIQUE_NAME, run
SQL> show parameter db_unique_name
For Cloud type Exadata Cloud at Customer Gen 1, set TGT_DB_UNIQUE_NAME to a different DB_UNIQUE_NAME not currently in use.
- Set PLATFORM_TYPE to EXACC.
- Set MIGRATION_METHOD to DG_OSS, where DG stands for Data Guard and OSS for the Object Storage service.
- Zero Downtime Migration automatically discovers the location for data, redo, and reco storage volumes from the specified target database. If you need to override the discovered values, specify the target database data files storage (ASM or ACFS) location using the appropriate set of parameters.
  - ASM: TGT_DATADG, TGT_REDODG, and TGT_RECODG
  - ACFS: TGT_DATAACFS, TGT_REDOACFS, and TGT_RECOACFS
- Set SKIP_FALLBACK=TRUE if you do not want to ship redo logs from the target to the source standby, either voluntarily or because there is no connectivity between the target and the source.
- If the target database environment is at a higher patch level than the source database (for example, if the source database is at the Jan 2020 PSU/BP and the target database is at the April 2020 PSU/BP), then set the TGT_SKIP_DATAPATCH=FALSE parameter to run the datapatch utility to apply a database patch on the target database as part of the post-migration tasks. Otherwise, you must run the datapatch utility manually after the migration.
- Set phase_name_MONITORING_INTERVAL=n mins if you want Zero Downtime Migration to monitor and report the status of backup and restore operations at the configured time interval during the migration. The default interval value is 10 minutes. To disable monitoring, set these values to 0 (zero).
ZDM_BACKUP_FULL_SRC_MONITORING_INTERVAL=
ZDM_BACKUP_INCREMENTAL_SRC_MONITORING_INTERVAL=
ZDM_BACKUP_DIFFERENTIAL_SRC_MONITORING_INTERVAL=
ZDM_CLONE_TGT_MONITORING_INTERVAL=
ZDM_OSS_RESTORE_TGT_MONITORING_INTERVAL=
ZDM_OSS_RECOVER_TGT_MONITORING_INTERVAL=
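For instance, a response file might report full backup progress every 5 minutes while disabling restore monitoring; the interval values below are illustrative only:

```
ZDM_BACKUP_FULL_SRC_MONITORING_INTERVAL=5 mins
ZDM_OSS_RESTORE_TGT_MONITORING_INTERVAL=0
```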
- Set ZDM_BACKUP_RETENTION_WINDOW=number of days if you want to retain the source database backup after the migration.
- Set ZDM_SRC_TNS_ADMIN to the TNS_ADMIN value if it is in a custom location.
- To access the Oracle Cloud Object Storage, set the following parameters in the response file. The source database is backed up to the specified container and restored to Exadata Cloud at Customer using RMAN SQL*Net connectivity.
  - Set HOST to the cloud storage REST endpoint URL.
    - For Oracle Cloud Infrastructure storage, the typical value format is
HOST=https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/ObjectStorageNamespace
To find the Object Storage Namespace value, log in to the Cloud Console, select Menu > Administration > Tenancy Detail, and find the Object Storage Namespace entry in the Object Storage Settings section.
    - For Oracle Cloud Infrastructure Classic storage, the typical value format is
HOST=https://acme.storage.oraclecloud.com/v1/Storage-tenancy_name
  - Set the Object Storage bucket OPC_CONTAINER parameter. The bucket is also referred to as a container for Oracle Cloud Infrastructure Classic storage. Make sure that the Object Storage bucket is created using the Oracle Cloud Service Console as appropriate, and that adequate storage is provisioned and available on the object store to accommodate the source database backup.
Parent topic: Preparing the Response File
Response File Settings for Exadata Cloud at Customer with NFS Backup
Configure the following response file settings to migrate data to an Exadata Cloud at Customer target using NFS storage as the backup medium.
Get the response file template, which is used to create your Zero Downtime Migration response file for the database migration procedure, from location $ZDM_HOME/rhp/zdm/template/zdm_template.rsp, and update the file as follows.
- Set TGT_DB_UNIQUE_NAME to the target database DB_UNIQUE_NAME value. To find DB_UNIQUE_NAME, run
SQL> show parameter db_unique_name
For Cloud type Exadata Cloud at Customer Gen 1, set TGT_DB_UNIQUE_NAME to a different DB_UNIQUE_NAME not currently in use.
- Set PLATFORM_TYPE to EXACC.
- Set MIGRATION_METHOD to DG_SHAREDPATH or DG_EXTBACKUP, where DG stands for Data Guard. Use DG_SHAREDPATH when a new backup needs to be taken and placed on an external storage mount (for example, an NFS mount point). Use DG_EXTBACKUP when using an existing backup already placed on an external shared mount (for example, NFS storage). Note that if MIGRATION_METHOD is set to DG_EXTBACKUP, then Zero Downtime Migration does not perform a new backup.
- Set BACKUP_PATH to the actual NFS path that is accessible from both the source and target database servers, for example, an NFS mount point. The NFS mount path must be the same on both the source and target database servers. This path does not need to be mounted on the Zero Downtime Migration service host. Note the following considerations:
  - The source database is backed up to the specified path and restored to Exadata Cloud at Customer using RMAN SQL*Net connectivity.
  - The path set in BACKUP_PATH must have 'rwx' permissions for the source database user, and at least read permissions for the target database user.
  - In the path specified by BACKUP_PATH, the Zero Downtime Migration backup procedure creates a directory, $BACKUP_PATH/dbname, and places the backup pieces in this directory.
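As a sketch of the permission requirements above, the shared path could be prepared as follows. The demo directory stands in for the real NFS mount point, and granting read access through group/other permissions is an assumption about how the target database user reaches the path:

```shell
# Prepare a shared backup path; in practice this is the NFS mount
# that is visible on both the source and target database servers.
BACKUP_PATH="${TMPDIR:-/tmp}/zdm_backup_demo"   # placeholder for the NFS mount point

mkdir -p "$BACKUP_PATH"

# 'rwx' for the source database user (the owner), and read/execute
# for the target database user (via group/other permissions).
chmod 755 "$BACKUP_PATH"

ls -ld "$BACKUP_PATH"
```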
- If you use DG_EXTBACKUP as the MIGRATION_METHOD, then you should create a standby control file backup in the specified path and give the target database user read permissions on the backup pieces. For example:
RMAN> BACKUP CURRENT CONTROLFILE FOR STANDBY FORMAT '<BACKUP_PATH>/lower_case_dbname/standby_ctl_%U';
where standby_ctl_%U is a system-generated unique file name.
- Zero Downtime Migration automatically discovers the location for data, redo, and reco storage volumes from the specified target database. If you need to override the discovered values, specify the target database data files storage (ASM or ACFS) location using the appropriate set of parameters.
  - ASM: TGT_DATADG, TGT_REDODG, and TGT_RECODG
  - ACFS: TGT_DATAACFS, TGT_REDOACFS, and TGT_RECOACFS
- Set SKIP_FALLBACK=TRUE if you do not want to ship redo logs from the target to the source standby, either voluntarily or because there is no connectivity between the target and the source.
- If the target database environment is at a higher patch level than the source database (for example, if the source database is at the Jan 2020 PSU/BP and the target database is at the April 2020 PSU/BP), then set the TGT_SKIP_DATAPATCH=FALSE parameter to run the datapatch utility to apply a database patch on the target database as part of the post-migration tasks. Otherwise, you must run the datapatch utility manually after the migration.
- Set phase_name_MONITORING_INTERVAL=n mins if you want Zero Downtime Migration to monitor and report the status of backup and restore operations at the configured time interval during the migration. The default interval value is 10 minutes. To disable monitoring, set these values to 0 (zero).
ZDM_BACKUP_FULL_SRC_MONITORING_INTERVAL=
ZDM_BACKUP_INCREMENTAL_SRC_MONITORING_INTERVAL=
ZDM_BACKUP_DIFFERENTIAL_SRC_MONITORING_INTERVAL=
ZDM_CLONE_TGT_MONITORING_INTERVAL=
ZDM_OSS_RESTORE_TGT_MONITORING_INTERVAL=
ZDM_OSS_RECOVER_TGT_MONITORING_INTERVAL=
- Set ZDM_BACKUP_RETENTION_WINDOW=number of days if you want to retain the source database backup after the migration.
- Set ZDM_SRC_TNS_ADMIN to the TNS_ADMIN value if it is in a custom location.
Parent topic: Preparing the Response File
Response File Settings for Offline Migration (Backup and Recovery)
Configure the following response file settings before migrating a database offline to an Oracle Cloud Infrastructure, Exadata Cloud at Customer, or Exadata Cloud Service target environment.
Get the response file template, which is used to create your Zero Downtime Migration response file for the database migration procedure, from location $ZDM_HOME/rhp/zdm/template/zdm_template.rsp, and update the file as follows.
- Set TGT_DB_UNIQUE_NAME to the target database DB_UNIQUE_NAME value. To find DB_UNIQUE_NAME, run
SQL> show parameter db_unique_name
- Set PLATFORM_TYPE to the appropriate value, depending on your target environment.
  - For Oracle Cloud Infrastructure, set PLATFORM_TYPE=VMDB.
  - For Exadata Cloud at Customer, set PLATFORM_TYPE=EXACC.
  - For Exadata Cloud Service, set PLATFORM_TYPE=EXACS.
- Where the Object Storage service is used as the backup medium, set MIGRATION_METHOD to BACKUP_RESTORE_OSS. The Exadata Cloud at Customer platform can also use the NFS backup medium. If this is the case, set MIGRATION_METHOD to BACKUP_RESTORE_NFS, and ignore the Oracle Cloud Object Storage parameter settings.
- Zero Downtime Migration automatically discovers the location for data, redo, and reco storage volumes from the specified target database. If you need to override the discovered values, specify the target database data files storage (ASM or ACFS) location using the appropriate set of parameters.
  - ASM: TGT_DATADG, TGT_REDODG, and TGT_RECODG
  - ACFS: TGT_DATAACFS, TGT_REDOACFS, and TGT_RECOACFS
- If the target database environment is at a higher patch level than the source database (for example, if the source database is at the Jan 2020 PSU/BP and the target database is at the April 2020 PSU/BP), then set the TGT_SKIP_DATAPATCH=FALSE parameter to run the datapatch utility to apply a database patch on the target database as part of the post-migration tasks. Otherwise, you must run the datapatch utility manually after the migration.
- Set ZDM_LOG_OSS_PAR_URL to the Cloud Object Store pre-authenticated URL if you want to upload migration logs to Cloud Object Storage. For information about getting a pre-authenticated URL, see the Oracle Cloud documentation at https://docs.cloud.oracle.com/en-us/iaas/Content/Object/Tasks/usingpreauthenticatedrequests.htm#usingconsole.
- Set phase_name_MONITORING_INTERVAL=n mins if you want Zero Downtime Migration to monitor and report the status of backup and restore operations at the configured time interval during the migration. The default interval value is 10 minutes. To disable monitoring, set these values to 0 (zero).
ZDM_BACKUP_FULL_SRC_MONITORING_INTERVAL=
ZDM_BACKUP_INCREMENTAL_SRC_MONITORING_INTERVAL=
ZDM_BACKUP_DIFFERENTIAL_SRC_MONITORING_INTERVAL=
ZDM_CLONE_TGT_MONITORING_INTERVAL=
ZDM_OSS_RESTORE_TGT_MONITORING_INTERVAL=
ZDM_OSS_RECOVER_TGT_MONITORING_INTERVAL=
- Set ZDM_BACKUP_RETENTION_WINDOW=number of days if you want to retain the source database backup after the migration.
- Set ZDM_SRC_TNS_ADMIN to the TNS_ADMIN value if it is in a custom location.
- To access the Oracle Cloud Object Storage, set the following parameters in the response file.
  - Set HOST to the cloud storage REST endpoint URL.
    - For Oracle Cloud Infrastructure storage, the typical value format is
HOST=https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/ObjectStorageNamespace
To find the Object Storage Namespace value, log in to the Cloud Console, select Menu > Administration > Tenancy Detail, and find the Object Storage Namespace entry in the Object Storage Settings section.
    - For Oracle Cloud Infrastructure Classic storage, the typical value format is
HOST=https://acme.storage.oraclecloud.com/v1/Storage-tenancy_name
  - Set the Object Storage bucket OPC_CONTAINER parameter. The bucket is also referred to as a container for Oracle Cloud Infrastructure Classic storage. Make sure that the Object Storage bucket is created using the Oracle Cloud Service Console as appropriate, and that adequate storage is provisioned and available on the object store to accommodate the source database backup.
Parent topic: Preparing the Response File
Preparing for Automatic Application Switchover
To minimize or eliminate service interruptions on the application after you complete the database migration and switchover, prepare your application to automatically switch over connections from the source database to the target database.
In the following example connect string, the application connects to the source database, and when it is not available the connection is switched over to the target database.
(DESCRIPTION=
(FAILOVER=on)(LOAD_BALANCE=on)(CONNECT_TIMEOUT=3)(RETRY_COUNT=3)
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=source_database_scan)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=target_database_scan)(PORT=1521)))
(CONNECT_DATA=(SERVICE_NAME=zdm_prod_svc)))
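For example, the same description could be recorded as an alias in the application's tnsnames.ora; the alias name ZDM_PROD here is a placeholder:

```
ZDM_PROD =
  (DESCRIPTION=
    (FAILOVER=on)(LOAD_BALANCE=on)(CONNECT_TIMEOUT=3)(RETRY_COUNT=3)
    (ADDRESS_LIST=
      (ADDRESS=(PROTOCOL=TCP)(HOST=source_database_scan)(PORT=1521))
      (ADDRESS=(PROTOCOL=TCP)(HOST=target_database_scan)(PORT=1521)))
    (CONNECT_DATA=(SERVICE_NAME=zdm_prod_svc)))
```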
On the source database, create the service, named zdm_prod_svc in the examples.
srvctl add service -db clever -service zdm_prod_svc -role PRIMARY
-notification TRUE -session_state dynamic -failovertype transaction
-failovermethod basic -commit_outcome TRUE -failoverretry 30 -failoverdelay 10
-replay_init_time 900 -clbgoal SHORT -rlbgoal SERVICE_TIME -preferred clever1,clever2
-retention 3600 -verbose
See Also:
Oracle MAA white papers about client failover best practices on the Oracle Data Guard page at https://www.oracle.com/goto/maa
High Availability in Oracle Database Development Guide
Parent topic: Preparing for Database Migration
Customizing a Migration Job
You can customize the Zero Downtime Migration workflow by registering action scripts or plug-ins as pre-actions or post-actions to be performed as part of the operational phases involved in your migration job.
The following topics describe how to customize a migration job.
- Registering Action Plug-ins
Custom plug-ins must be registered to the Zero Downtime Migration service host to be plugged in as customizations for a particular operational phase. - Creating an Action Template
After the useraction plug-ins are registered, you create an action template that combines a set of action plug-ins which can be associated with a migration job. - Updating Action Plug-ins
You can update action plug-ins registered with the Zero Downtime Migration service host. - Associating an Action Template with a Migration Job
When you run a migration job you can specify the image type that specifies the plug-ins to be run as part of your migration job.
Parent topic: Preparing for Database Migration
Registering Action Plug-ins
Custom plug-ins must be registered to the Zero Downtime Migration service host to be plugged in as customizations for a particular operational phase.
Determine the operational phase the given plug-in has to be associated with, and run the ZDMCLI command ADD USERACTION, specifying -optype MIGRATE_DATABASE, the respective phase of the operation, whether the plug-in is run -pre or -post relative to that phase, and any on-error requirements. You can register custom plug-ins for operational phases after ZDM_SETUP_TGT in the migration job workflow.
What happens at runtime if the user action encounters an error can be specified with the -onerror option, which you can set to either ABORT, to end the process, or CONTINUE, to continue the migration job even if the custom plug-in exits with an error. See the example command usage below.
Use the Zero Downtime Migration software installed user (for example, zdmuser) to add user actions to a database migration job. Adding user actions zdmvaltgt and zdmvalsrc with the ADD USERACTION command would look like the following.
zdmuser> $ZDM_HOME/bin/zdmcli add useraction -useraction zdmvaltgt -optype MIGRATE_DATABASE
-phase ZDM_VALIDATE_TGT -pre -onerror ABORT -actionscript /home/zdmuser/useract.sh
zdmuser> $ZDM_HOME/bin/zdmcli add useraction -useraction zdmvalsrc -optype MIGRATE_DATABASE
-phase ZDM_VALIDATE_SRC -pre -onerror CONTINUE -actionscript /home/zdmuser/useract1.sh
In the above commands, the scripts useract.sh and useract1.sh, specified in the -actionscript option, are copied to the Zero Downtime Migration service host repository, and they are run if they are associated with any migration job run using an action template.
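The script passed to -actionscript can be any executable. The minimal sketch below only illustrates the general shape; the log location is a placeholder, and the only contract assumed from the discussion above is that a non-zero exit status marks the action as failed so that -onerror ABORT or CONTINUE can act on it:

```shell
#!/bin/sh
# useract.sh - minimal sketch of a user action script.

LOG="${TMPDIR:-/tmp}/useract.log"   # placeholder log location

echo "useraction started: $(date)" >> "$LOG"

# ... site-specific validation or preparation work goes here ...

echo "useraction completed" >> "$LOG"
exit 0   # a non-zero status here would trigger the -onerror handling
```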
Parent topic: Customizing a Migration Job
Creating an Action Template
After the useraction plug-ins are registered, you create an action template that combines a set of action plug-ins which can be associated with a migration job.
An action template is created using the ZDMCLI command add imagetype, where the image type, imagetype, is a bundle of all of the useractions required for a specific type of database migration. Create an image type that associates all of the useraction plug-ins needed for the migration of the database. Once created, the image type can be reused for all migration operations for which the same set of plug-ins is needed.
The base type for the image type created here must be CUSTOM_PLUGIN, as shown in the example below.
For example, you can create an image type ACTION_ZDM that bundles both of the useractions created in the previous example, zdmvalsrc and zdmvaltgt.
zdmuser> $ZDM_HOME/bin/zdmcli add imagetype -imagetype ACTION_ZDM -basetype CUSTOM_PLUGIN -useractions zdmvalsrc,zdmvaltgt
Parent topic: Customizing a Migration Job
Updating Action Plug-ins
You can update action plug-ins registered with the Zero Downtime Migration service host.
The following example shows you how to modify the useraction zdmvalsrc to be a -post action instead of a -pre action.
zdmuser> $ZDM_HOME/bin/zdmcli modify useraction -useraction zdmvalsrc -phase ZDM_VALIDATE_SRC -optype MIGRATE_DATABASE -post
This change is propagated to all of the associated action templates, so you do not need to update the action templates.
Parent topic: Customizing a Migration Job
Associating an Action Template with a Migration Job
When you run a migration job you can specify the image type that specifies the plug-ins to be run as part of your migration job.
As an example, run the migration command specifying the action template ACTION_ZDM created in the previous examples with -imagetype ACTION_ZDM. Including the image type results in running the useract.sh and useract1.sh scripts as part of the migration job workflow.
By default, the action plug-ins are run for the specified operational phase on all nodes of the cluster. If the access credential specified in the migration command option -tgtarg2 is unique for a specified target node, then an additional auth argument should be included to specify the auth credentials required to access the other cluster nodes. For example, specify -tgtarg2 nataddrfile:auth_file_with_node_and_identity_file_mapping.
A typical nataddrfile for a 2 node cluster with node1 and node2 is shown here.
node1:node1:identity_file_path_available_on_zdmservice_node
node2:node2:identity_file_path_available_on_zdmservice_node
Parent topic: Customizing a Migration Job