3 Preparing for Database Migration

Before starting a Zero Downtime Migration database migration, you must configure connectivity between the servers, prepare the source and target databases, set parameters in the response file, and configure any required migration job customization.

See the Zero Downtime Migration Release Notes for the latest information about new features, known issues, and My Oracle Support notes.

Configuring Connectivity Prerequisites

Connectivity must be set up between the Zero Downtime Migration service host and the source and target database servers.

The following topics describe how to configure the Zero Downtime Migration connectivity prerequisites before running a migration job.

Configuring Connectivity From the Zero Downtime Migration Service Host to the Source and Target Database Servers

Complete the following procedure to ensure the required connectivity between the Zero Downtime Migration service host and the source and target database servers.

  1. On the Zero Downtime Migration service host, verify that the authentication key pairs are available without a passphrase for the Zero Downtime Migration software installed user.
    If a new key pair must be generated without a passphrase, then, as the Zero Downtime Migration software installed user, generate new key pairs as described in Generate SSH Keys Without a Passphrase.
  2. Rename the private key file.
    Rename the zdm_installed_user_home/.ssh/id_rsa file to zdm_installed_user_home/.ssh/zdm_service_host.ppk.
  3. Add the contents of the zdm_installed_user_home/.ssh/id_rsa.pub file to the opc_user_home/.ssh/authorized_keys file, with the following dependencies:

    For the source database server:

    • If the source database server is accessed with the root user, then no action is required.
    • If the source database server is accessed through SSH, then add the contents of the zdm_installed_user_home/.ssh/id_rsa.pub file into the opc_user_home/.ssh/authorized_keys file on all of the source database servers.

    For the target database server:

    • Because the target database server is in the cloud and access is through SSH, add the contents of the zdm_installed_user_home/.ssh/id_rsa.pub file into the opc_user_home/.ssh/authorized_keys file on all of the target database servers.

    Note that the opc user is a standard Oracle cloud user that is used to access database servers, but you can use any user and you can use different users for the source and target database servers.
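The append performed in this step can be scripted. The following is a minimal local sketch of what should land on each database server; the key content and paths are placeholders (a scratch directory stands in for opc_user_home), and on real servers the same result is typically achieved with ssh-copy-id or by pasting the key manually.

```shell
# Sketch: append a ZDM public key to a login user's authorized_keys file.
demo=$(mktemp -d)
mkdir -p "$demo/.ssh" && chmod 700 "$demo/.ssh"

# Placeholder public key content (in practice: zdm_installed_user_home/.ssh/id_rsa.pub)
echo "ssh-rsa AAAAB3NzaC1yc2E...placeholder... zdmuser@zdm_service_host" > "$demo/zdm_key.pub"

cat "$demo/zdm_key.pub" >> "$demo/.ssh/authorized_keys"
chmod 600 "$demo/.ssh/authorized_keys"

grep -c "zdmuser@zdm_service_host" "$demo/.ssh/authorized_keys"   # prints 1
```

The chmod calls matter: sshd ignores an authorized_keys file that is writable by anyone other than its owner.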

  4. Make sure that the source and target database server names are resolvable from the Zero Downtime Migration service host, either through name servers or by alternate means approved by your IT infrastructure team.
    One method of resolving source and target database server names is to add the source and target database server names and IP address details to the Zero Downtime Migration service host /etc/hosts file.

    In the following example, the IP address entries use addresses from the documentation range (192.0.2.x), but you must add your actual public IP addresses.

    #OCI public IP two node RAC server details
    192.0.2.1 ocidb1
    192.0.2.2 ocidb2
    #OCIC public IP two node RAC server details
    192.0.2.3 ocicdb1
    192.0.2.4 ocicdb2
  5. Make sure that port 22 on the source and target database servers accepts incoming connections from the Zero Downtime Migration service host.
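One way to verify this from the Zero Downtime Migration service host is a quick TCP probe. The sketch below uses bash's /dev/tcp device; the host name is a placeholder, and tools such as nc work equally well.

```shell
# Returns success if host:port accepts a TCP connection within 5 seconds.
port_open() {
  timeout 5 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Placeholder host name; substitute each source and target database server.
if port_open ocidb1 22; then
  echo "ocidb1:22 reachable"
else
  echo "ocidb1:22 not reachable"
fi
```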
  6. Test the connectivity from the Zero Downtime Migration service host to all source and target database servers.
    zdmuser> ssh -i zdm_service_host_private_key_file_location user@source/target_database_server_name

    For example,

    zdmuser> ssh -i /home/zdmuser/.ssh/zdm_service_host.ppk opc@ocidb1
    zdmuser> ssh -i /home/zdmuser/.ssh/zdm_service_host.ppk opc@ocicdb1

    Note:

    SSH connectivity during Zero Downtime Migration operations requires direct, non-interactive access between the Zero Downtime Migration service host and the source and target database servers without the need to enter a passphrase.

Configuring SUDO Access

You may need to grant certain users authority to perform operations using sudo on the source and target database servers.

For source database servers:

  • If the source database server is accessed with the root user, then there is no need to configure sudo operations.

  • If the source database server is accessed through SSH, then configure sudo operations to run without prompting for a password for the database installed user and the root user.

    For example, if the database installed user is oracle, then run sudo su - oracle.

    For the root user, run sudo su -.

For target database servers:

  • Because the target database server is in the cloud, sudo operations are typically configured already. Otherwise, configure all sudo operations to run without prompting for a password for the database installed user and the root user.

    For example, if the database installed user is oracle, then run sudo su - oracle.

    For the root user, run sudo su -.

For example, if the login user is opc, then enable sudo operations for the opc user.
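Passwordless sudo for such a login user is typically granted with a sudoers policy fragment like the following. This is a hypothetical /etc/sudoers.d/zdm file shown only as a sketch; always edit sudoers policy with visudo, and follow your site's security standards when scoping the allowed commands.

```
# Hypothetical fragment: allow the opc login user to run any command,
# including sudo su - oracle and sudo su -, without a password prompt
opc ALL=(ALL) NOPASSWD: ALL
```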

Configuring Connectivity Between the Source and Target Database Servers

Configure connectivity between the source and target database servers using one of the following options: SCAN (Option 1) or an SSH tunnel (Option 2).

Option 1: Use SCAN

To use this option, the SCAN of the target should be resolvable from the source database server, and the SCAN of the source should be resolvable from the target server.

The source database server specified in the ZDMCLI MIGRATE DATABASE command -sourcenode parameter must be able to connect to the target database instance over the target SCAN through the respective SCAN port, and vice versa.

With SCAN connectivity available from both sides, the source and target databases can synchronize in either direction. If the source database server SCAN cannot be resolved from the target database server, then the SKIP_FALLBACK parameter in the response file must be set to TRUE, and the target database and source database cannot synchronize after switchover.

Test Connectivity

To test connectivity from the source to the target environment, add the TNS entry of the target database to the source database server $ORACLE_HOME/network/admin/tnsnames.ora file.

[oracle@sourcedb ~] tnsping target-tns-string

To test connectivity from the target to the source environment, add the TNS entry of the source database to the target database server $ORACLE_HOME/network/admin/tnsnames.ora file.

[oracle@targetdb ~] tnsping source-tns-string

Note:

Database migration to Exadata Cloud at Customer using the Zero Data Loss Recovery Appliance requires mandatory SQL*Net connectivity from the target database server to the source database server.
Option 2: Set up an SSH Tunnel

If connectivity using SCAN and the SCAN port is not possible between the source and target database servers, set up an SSH tunnel from the source database server to the target database server.

The following procedure sets up an SSH tunnel on the source database servers for the root user. This tunnel is a temporary channel: with this connectivity option you cannot synchronize between the target database and source database after switchover, and you cannot fall back to the original source database.

Note:

The following steps refer to Oracle Cloud Infrastructure, but are also applicable to Exadata Cloud at Customer and Exadata Cloud Service.
  1. Generate an SSH key file without a passphrase for the opc user on the target Oracle Cloud Infrastructure server, using the information in Generate SSH Keys Without a Passphrase. If the target is an Oracle RAC database, then generate an SSH key file without a passphrase from the first Oracle RAC server.
  2. Add the contents of the Oracle Cloud Infrastructure server opc_user_home/.ssh/id_rsa.pub file into the Oracle Cloud Infrastructure server opc_user_home/.ssh/authorized_keys file.
  3. Copy the target Oracle Cloud Infrastructure server private SSH key file onto the source server, into the /root/.ssh/ directory. If the source is an Oracle RAC database, copy the file onto all of the source servers.
    For better manageability, keep the private SSH key file name the same as the target server name, and keep the .ppk extension. For example, ocidb1.ppk (where ocidb1 is the target server name).

    The file permissions should be similar to the following.

    /root/.ssh>ls -l ocidb1.ppk
    -rw------- 1 root root 1679 Oct 16 10:05 ocidb1.ppk
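If the permissions are more open than shown, SSH refuses to use the key. Tightening them can be sketched as follows; a temporary file stands in for the .ppk file here.

```shell
key=$(mktemp)        # stands in for /root/.ssh/ocidb1.ppk
chmod 600 "$key"     # owner read/write only, matching the listing above
stat -c %a "$key"    # prints 600
```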
  4. Put the following entries in the source server /root/.ssh/config file.
    Host *
      ServerAliveInterval 10  
      ServerAliveCountMax 2
    
    Host OCI_server_name   
      HostName OCI_server_IP_address
      IdentityFile Private_key_file_location 
      User OCI_user_login  
      ProxyCommand /usr/bin/nc -X connect -x proxy_name:proxy_port %h %p

    Where

    • OCI_server_name is the Oracle Cloud Infrastructure target database server name without the domain name. For an Oracle RAC database use the first Oracle RAC server name without the domain name.
    • OCI_server_IP_address is the Oracle Cloud Infrastructure target database server IP address. For an Oracle RAC database use the first Oracle RAC server IP address.
    • Private_key_file_location is the location of the private key file on the source database server, which you copied from the target database server in step 3 above.
    • OCI_user_login is the OS user used to access the target database servers.
    • proxy_name is the host name of the proxy server.
    • proxy_port is the port of the proxy server.

    Note that the proxy setup is not required when you are not using a proxy server for connectivity. For example, when the source database server is on Oracle Cloud Infrastructure Classic, you can remove or comment out the line starting with ProxyCommand.

    For example, after specifying the relevant values, the /root/.ssh/config file should be similar to the following.

    Host *
      ServerAliveInterval 10  
      ServerAliveCountMax 2
    
    Host ocidb1
      HostName 192.0.2.1
      IdentityFile /root/.ssh/ocidb1.ppk
      User opc
      ProxyCommand /usr/bin/nc -X connect -x www-proxy.example.com:80 %h %p
    

    The file permissions should be similar to the following.

    /root/.ssh>ls -l config
    -rw------- 1 root root 1679 Oct 16 10:05 config

    In the above example, the Oracle Cloud Infrastructure server name is ocidb1, and the Oracle Cloud Infrastructure server public IP address is 192.0.2.1.

    If the source is an Oracle Cloud Infrastructure Classic server, the proxy_name is not required, so you can remove or comment out the line starting with ProxyCommand.

    If the source is an Oracle RAC database, then copy the same /root/.ssh/config file onto all of the source Oracle RAC database servers. This file contains the server name, public IP address, and private key file location of the first Oracle Cloud Infrastructure Oracle RAC server.

  5. Make sure that you can SSH to the first target Oracle Cloud Infrastructure server from the source server before you enable the SSH tunnel.
    For an Oracle RAC database, test the connection from all of the source servers to the first target Oracle Cloud Infrastructure server.

    Using the private key:

    [root@ocicdb1 ~] ssh -i /root/.ssh/ocidb1.ppk opc@ocidb1
    Last login: Fri Dec  7 14:53:09 2018 from 192.0.2.3
    
    [opc@ocidb1 ~]$

    Note:

    SSH connectivity requires direct, non-interactive access between the source and target database servers, without the need to enter a passphrase.
  6. Run the following command on the source server to enable the SSH tunnel.
    ssh -f OCI_hostname_without_domain_name -L ssh_tunnel_port_number:OCI_server_IP_address:OCI_server_listener_port -N

    Where

    • OCI_hostname_without_domain_name is the Oracle Cloud Infrastructure target database server name without the domain name. For an Oracle RAC database, use the first Oracle RAC server name without the domain name.
    • ssh_tunnel_port_number is any available ephemeral port in the range 1024-65535. Make sure that the SSH tunnel port is not used by any other process on the server before using it.
    • OCI_server_listener_port is the target database listener port number. The listener port must be open between the source database servers and Oracle Cloud Infrastructure target servers.
    • OCI_server_IP_address is the IP address of the target database server. For a single instance database, specify the Oracle Cloud Infrastructure server IP address. For an Oracle RAC database, specify the Oracle Cloud Infrastructure SCAN name with the domain name. If the SCAN name with the domain name is not resolvable or not working, then specify an IP address obtained from the lsnrctl status command output. For example,
      Listening Endpoints Summary...
        (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
        (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.0.2.9)(PORT=1521)))
        (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.0.2.10)(PORT=1521)))

    The following is an example of the command run to enable the SSH tunnel.

    [root@ocicdb1~]ssh -f ocidb1 -L 9000:192.0.2.9:1521 -N

    For an Oracle RAC database, this step must be repeated on all of the source servers.
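Before picking ssh_tunnel_port_number in the command above, you can probe whether a candidate local port is already in use. The sketch below uses bash's /dev/tcp device and is not a ZDM utility; the candidate port matches the example in the text.

```shell
# Succeeds when nothing on 127.0.0.1 is listening on the given port.
port_free() {
  ! timeout 2 bash -c "exec 3<>/dev/tcp/127.0.0.1/$1" 2>/dev/null
}

candidate=9000   # example tunnel port from the text
if port_free "$candidate"; then
  echo "port $candidate looks free"
else
  echo "port $candidate is in use; pick another"
fi
```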

  7. Test the SSH tunnel.
    Log in to the source server, switch to the oracle user, source the database environment, and run the following command.
    tnsping localhost:ssh_tunnel_port

    For example,

    [oracle@ocicdb1 ~] tnsping localhost:9000

    The command output is similar to the following.

    TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 22-JAN-2019 05:41:57
    Copyright (c) 1997, 2014, Oracle.  All rights reserved.
    Used parameter files:
    Used HOSTNAME adapter to resolve the alias
    Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=127.0.0.1)(PORT=9000)))
    OK (50 msec)

    If tnsping does not work, then the SSH tunnel is not enabled.

    For Oracle RAC, this step must be repeated on all of the source servers.

Generate SSH Keys Without a Passphrase

If the authentication key pairs are not available without a passphrase for the Zero Downtime Migration software installed user on the Zero Downtime Migration service host, you can generate a new SSH key pair without a passphrase.

Note:

Currently, only the RSA key format is supported for configuring SSH connectivity, so use the ssh-keygen command, which generates both of the authentication key pairs (public and private).

The following example shows you how to generate an SSH key pair for the Zero Downtime Migration software installed user. You can also use this command to generate the SSH key pair for the opc user.

Run the following command on the Zero Downtime Migration service host.

zdmuser> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/zdmuser/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/zdmuser/.ssh/id_rsa.
Your public key has been saved in /home/zdmuser/.ssh/id_rsa.pub.
The key fingerprint is:
c7:ed:fa:2c:5b:bb:91:4b:73:93:c1:33:3f:23:3b:30 zdmuser@zdm_service_host
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|                 |
|         . . .   |
|        S o . =  |
|         . E . * |
|            X.+o.|
|          .= Bo.o|
|          o+*o.  |
+-----------------+

This command generates the id_rsa and id_rsa.pub files in the zdmuser home, for example, /home/zdmuser/.ssh.

Preparing the Source and Target Databases

See the following topics for information about preparing the source and target databases for migration.

Source Database Prerequisites

Meet the following prerequisites on the source database before the Zero Downtime Migration process starts.

  • The source database must be running in archive log mode.

  • Configure the TDE wallet on Oracle Database 12c Release 2 and later. Enabling TDE on Oracle Database 11g Release 2 (11.2.0.4) and Oracle Database 12c Release 1 is optional.

    For Oracle Database 12c Release 2 and later, if the source database does not have Transparent Data Encryption (TDE) enabled, then it is mandatory that you configure the TDE wallet before migration begins. The WALLET_TYPE can be AUTOLOGIN (preferred) or PASSWORD based.

    Ensure that the wallet STATUS is OPEN and WALLET_TYPE is AUTOLOGIN (for an auto-login wallet type) or PASSWORD (for a password-based wallet type). For a multitenant database, ensure that the wallet is open on all PDBs as well as the CDB, and the master key is set for all PDBs and the CDB.

    SQL> SELECT * FROM v$encryption_wallet;
  • If the source is an Oracle RAC database, and SNAPSHOT CONTROLFILE is not on a shared location, configure SNAPSHOT CONTROLFILE to point to a shared location on all Oracle RAC nodes to avoid the ORA-00245 error during backups to Oracle Object Store.

    For example, if the database is deployed on ASM storage,

    $ rman target /  
    RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+DATA/snapcf_matrix.f';

    If the database is deployed on an ACFS file system, specify the shared ACFS location in the above command.

  • Verify that port 22 on the source and target database servers allows incoming connections from the Zero Downtime Migration service host.

  • Ensure that the SCAN listener ports (1521, for example) on the source database servers allow incoming connections from, and outgoing connections to, the target database servers.

    Make alternate SQL connectivity available if a firewall blocks incoming remote connections over the SCAN listener port.

  • To preserve the source database Recovery Time Objective (RTO) and Recovery Point Objective (RPO) during the migration, the existing RMAN backup strategy should be maintained.

    During the migration a dual backup strategy is in place: the existing backup strategy and the strategy used by Zero Downtime Migration. Avoid having the two RMAN backup jobs run simultaneously (the existing one and the one initiated by Zero Downtime Migration). If archive logs that Zero Downtime Migration needs to synchronize the target cloud database are deleted on the source database, restore them so that Zero Downtime Migration can continue the migration process.

  • If the source database is deployed using Oracle Grid Infrastructure and the database is not registered using SRVCTL, then you must register the database before the migration.

  • The source database must use a server parameter file (SPFILE).

  • The source database must have a password file in the location $ORACLE_HOME/dbs/orapwORACLE_SID; otherwise, create it using the ORAPWD utility.

  • If RMAN is not already configured to automatically back up the control file and SPFILE, then set CONFIGURE CONTROLFILE AUTOBACKUP to ON and revert the setting back to OFF after migration is complete.

    RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;

Target Database Prerequisites

The following prerequisites must be met on the target database before you begin the Zero Downtime Migration process.

  • You must create a placeholder target database using Grid Infrastructure-based database services before database migration begins.

    Note:

    For this release, only Grid Infrastructure-based database services are supported as targets. For example, an LVM-based instance or an instance created on a compute node without Grid Infrastructure is not a supported target.

    The placeholder target database is overwritten during migration, but it retains the overall configuration.

    Pay careful attention to the following requirements:

    • Size for the future - When you create the database from the console, ensure that your chosen shape can accommodate the source database plus any future sizing requirements. A good guideline is to use a shape similar to or larger than the source database.
    • Set name parameters
      • DB_NAME - If the target database is Exadata Cloud Service or Exadata Cloud at Customer, then the database DB_NAME should be the same as the source database DB_NAME. If the target database is Oracle Cloud Infrastructure, then the database DB_NAME can be the same as or different from the source database DB_NAME.
      • DB_UNIQUE_NAME - If the target database is Oracle Cloud Infrastructure, Exadata Cloud Service, or Exadata Cloud at Customer, the target database DB_UNIQUE_NAME parameter value must be unique to ensure that Oracle Data Guard can identify the target as a different database from the source database.
    • Match the source SYS password - Specify a SYS password that matches that of the source database.
    • Disable automatic backups - Provision the target database from the console without enabling automatic backups.

      For Oracle Cloud Infrastructure and Exadata Cloud Service, do not select the Enable automatic backups option under the section Configure database backups.

      For Exadata Cloud at Customer, set Backup destination Type to None under the section Configure Backups.

  • The target database version should be the same as the source database version. The target database patch level should also be the same as (or higher than) the source database.

    If the target database environment is at a higher patch level than the source database (for example, if the source database is at Jan 2020 PSU/BP and the target database is at April 2020 PSU/BP), then you must run the datapatch utility after database migration.

  • The target database time zone version must be the same as the source database time zone version. To check the current time zone version, query the V$TIMEZONE_FILE view as shown here, and upgrade the time zone file if necessary.

    SQL> SELECT * FROM v$timezone_file;
  • Verify that the TDE wallet folder exists, and ensure that the wallet STATUS is OPEN and WALLET_TYPE is AUTOLOGIN (for an auto-login wallet type) or PASSWORD (for a password-based wallet). For a multitenant database, ensure that the wallet is open on all PDBs as well as the CDB, and the master key is set for all PDBs and the CDB.

    SQL> SELECT * FROM v$encryption_wallet;
  • The target database must use a server parameter file (SPFILE).

  • If the target is an Oracle RAC database, then you must set up SSH connectivity without a passphrase between the Oracle RAC servers for the oracle user.

  • Check the size of the disk groups and usage on the target database (ASM disk groups or ACFS file systems) and make sure adequate storage is provisioned and available on the target database servers.

  • Make sure adequate storage is provisioned and available on the object store to accommodate the source database backup.

  • Verify that ports 22 and 1521 on the target servers in the Oracle Cloud Infrastructure, Exadata Cloud Service, or Exadata Cloud at Customer environment are open and not blocked by a firewall.

  • Capture the output of the RMAN SHOW ALL command, so that you can compare RMAN settings after the migration, then reset any changed RMAN configuration settings to ensure that the backup works without any issues.

    RMAN> show all;

See Also:

Managing User Credentials for information about generating the auth token for Object Storage backups

Zero Downtime Migration Port Requirements

Setting Up the Transparent Data Encryption Wallet

For Oracle Database 12c Release 2 and later, if the source and target databases do not have Transparent Data Encryption (TDE) enabled, then it is mandatory that you configure the TDE wallet before migration begins.

TDE should be enabled and the TDE WALLET status on both source and target databases must be set to OPEN. The WALLET_TYPE can be AUTOLOGIN, for an auto-login wallet (preferred), or PASSWORD, for a password-based wallet. On a multitenant database, make sure that the wallet is open on all PDBs as well as the CDB, and that the master key is set for all PDBs and the CDB.

If TDE is not already configured as required on the source and target databases, use the following instructions to set up the TDE wallet.

For a password-based wallet, you only need to do steps 1, 2, and 4; for an auto-login wallet, complete all of the steps.

  1. Set ENCRYPTION_WALLET_LOCATION in the $ORACLE_HOME/network/admin/sqlnet.ora file.

    /home/oracle>cat /u01/app/oracle/product/12.2.0.1/dbhome_4/network/admin/sqlnet.ora 
    
    ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=FILE)
      (METHOD_DATA=(DIRECTORY=/u01/app/oracle/product/12.2.0.1/dbhome_4/network/admin/)))

    For an Oracle RAC instance, also set ENCRYPTION_WALLET_LOCATION in the second Oracle RAC node.

  2. Create and configure the keystore.

    1. Connect to the database and create the keystore.

      $ sqlplus "/as sysdba"
      SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/u01/app/oracle/product/12.2.0.1/dbhome_2/network/admin'
       identified by password;
    2. Open the keystore.

      For a non-CDB environment, run the following command.

      SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY password;
      keystore altered.

      For a CDB environment, run the following command.

      SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY password container = ALL;
      keystore altered.
    3. Create and activate the master encryption key.

      For a non-CDB environment, run the following command.

      SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY password with backup;
      keystore altered.

      For a CDB environment, run the following command.

      SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY password with backup container = ALL;
      keystore altered.
    4. Query V$ENCRYPTION_WALLET to get the wallet status, wallet type, and wallet location.

      SQL> SELECT * FROM v$encryption_wallet;
      
      WRL_TYPE    WRL_PARAMETER
      ----------- --------------------------------------------------------------------------------
      STATUS      WALLET_TYPE          WALLET_OR FULLY_BAC     CON_ID
      ----------- -------------------- --------- --------- ----------
      FILE        /u01/app/oracle/product/12.2.0.1/dbhome_2/network/admin/
      OPEN        PASSWORD             SINGLE    NO                 0

    The configuration of a password-based wallet is complete at this stage, and the wallet is enabled with status OPEN and WALLET_TYPE is shown as PASSWORD in the query output above.

    Continue to step 3 only if you need to configure an auto-login wallet, otherwise skip to step 4.

  3. For an auto-login wallet only, complete the keystore configuration.

    1. Create the auto-login keystore.

      SQL> ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE FROM KEYSTORE
       '/u01/app/oracle/product/12.2.0.1/dbhome_2/network/admin/' IDENTIFIED BY password;
      keystore altered.
    2. Close the password-based wallet.

      SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE CLOSE IDENTIFIED BY password;
      keystore altered.
    3. Query V$ENCRYPTION_WALLET to get the wallet status, wallet type, and wallet location.

      SQL> SELECT * FROM v$encryption_wallet;
      WRL_TYPE    WRL_PARAMETER
      ----------- --------------------------------------------------------------------------------
      STATUS      WALLET_TYPE          WALLET_OR FULLY_BAC     CON_ID
      ----------- -------------------- --------- --------- ----------
      FILE        /u01/app/oracle/product/12.2.0.1/dbhome_2/network/admin/
      OPEN        AUTOLOGIN            SINGLE    NO

      In the query output, verify that the TDE wallet STATUS is OPEN and WALLET_TYPE set to AUTOLOGIN, otherwise the auto-login wallet is not set up correctly.

      This completes the auto-login wallet configuration.

  4. Copy the wallet files to the second Oracle RAC node.

    If you configured the wallet in a shared file system for Oracle RAC, or if you are enabling TDE for a single instance database, then no action is required.

    If you are enabling TDE for an Oracle RAC database without shared access to the wallet, copy the following files to the same location on the second node.

    • /u01/app/oracle/product/12.2.0.1/dbhome_2/network/admin/ew*

    • /u01/app/oracle/product/12.2.0.1/dbhome_2/network/admin/cw*
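The copy itself can be sketched as follows. Local scratch directories stand in for the wallet directory on each node; in practice the cp becomes an scp (or similar) to the same path on the second node.

```shell
node1=$(mktemp -d)   # stands in for .../network/admin/ on node 1
node2=$(mktemp -d)   # stands in for the same path on node 2

# Example wallet files matching the ew*/cw* patterns above
touch "$node1/ewallet.p12" "$node1/cwallet.sso"

# Real case: scp -p "$node1"/ew* "$node1"/cw* oracle@node2:/same/path/
cp -p "$node1"/ew* "$node1"/cw* "$node2"/

ls "$node2"
```

The -p flag preserves ownership-independent attributes such as permissions, which matters because the wallet files must stay readable only by the oracle software owner.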

Preparing the Response File

Set the response file parameters for the migration target and backup medium you are using in the migration process.

The response file settings in the following topics show you how to configure a typical use case. To further customize your configuration you can find additional parameters described in Zero Downtime Migration Response File Parameters Reference.

Response File Settings for Migration to Oracle Cloud Infrastructure

Configure the following response file settings to migrate data to an Oracle Cloud Infrastructure virtual machine or bare metal target.

Get the response file template, which is used to create your Zero Downtime Migration response file for the database migration procedure, from location $ZDM_HOME/rhp/zdm/template/zdm_template.rsp, and update the file as follows.

  • Set TGT_DB_UNIQUE_NAME to the target database DB_UNIQUE_NAME value. To find DB_UNIQUE_NAME run

    SQL> show parameter db_unique_name
  • Set PLATFORM_TYPE to VMDB.

  • Set MIGRATION_METHOD to DG_OSS, where DG stands for Data Guard and OSS stands for Object Storage service.

  • If SSH tunneling is set up, set the TGT_SSH_TUNNEL_PORT parameter.

  • Zero Downtime Migration automatically discovers the location for data, redo, and reco storage volumes from the specified target database. If you need to override the discovered values, specify the target database data files storage (ASM or ACFS) location using the appropriate set of parameters.

    • ASM: TGT_DATADG, TGT_REDODG, and TGT_RECODG
    • ACFS: TGT_DATAACFS, TGT_REDOACFS, and TGT_RECOACFS
  • Set SKIP_FALLBACK=TRUE if you do not want to ship redo logs from the target to the source standby either voluntarily or because there is no connectivity between the target and source.

  • If the target database environment is at a higher patch level than the source database (for example, if the source database is at Jan 2020 PSU/BP and the target database is at April 2020 PSU/BP), then use the TGT_SKIP_DATAPATCH=FALSE parameter to run the datapatch utility to apply a database patch on the target database as part of the post-migration tasks. Otherwise, you need to run the datapatch utility manually after the migration.

  • Set ZDM_LOG_OSS_PAR_URL to the Cloud Object Store pre-authenticated URL if you want to upload migration logs onto Cloud Object Storage. For information about getting a pre-authenticated URL see Oracle Cloud documentation at https://docs.cloud.oracle.com/en-us/iaas/Content/Object/Tasks/usingpreauthenticatedrequests.htm#usingconsole.

  • Set phase_name_MONITORING_INTERVAL=n mins if you want Zero Downtime Migration to monitor and report the status of backup and restore operations at the configured time interval during the migration. The default interval value is 10 minutes. To disable monitoring, set these values to 0 (zero).

    ZDM_BACKUP_FULL_SRC_MONITORING_INTERVAL= 
    ZDM_BACKUP_INCREMENTAL_SRC_MONITORING_INTERVAL= 
    ZDM_BACKUP_DIFFERENTIAL_SRC_MONITORING_INTERVAL= 
    ZDM_CLONE_TGT_MONITORING_INTERVAL= 
    ZDM_OSS_RESTORE_TGT_MONITORING_INTERVAL= 
    ZDM_OSS_RECOVER_TGT_MONITORING_INTERVAL=
  • Set ZDM_BACKUP_RETENTION_WINDOW=number of days if you want to retain the source database backup after the migration.

  • Set ZDM_SRC_TNS_ADMIN to the TNS_ADMIN value if TNS_ADMIN is in a custom location.

  • To access the Oracle Cloud Object Storage, set the following parameters in the response file.

    • Set HOST to the cloud storage REST endpoint URL.

      • For Oracle Cloud Infrastructure storage the typical value format is HOST=https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/ObjectStorageNamespace

        To find the Object Storage Namespace value, log in to the Cloud Console and select Menu > Administration > Tenancy Detail; in the Object Storage Settings section, find the value of the Object Storage Namespace entry.

      • For Oracle Cloud Infrastructure Classic storage the typical value format is HOST=https://acme.storage.oraclecloud.com/v1/Storage-tenancy name

    • Set the Object Storage bucket OPC_CONTAINER parameter.

      The bucket is also referred to as a container for Oracle Cloud Infrastructure Classic storage. Make sure that the Object Storage bucket is created using the Oracle Cloud Service Console as appropriate. Make sure adequate storage is provisioned and available on the object store to accommodate the source database backup.
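
For illustration, the Object Storage access settings described above might appear in the response file as the following fragment; the namespace and bucket name are hypothetical.

```
HOST=https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/mynamespace
OPC_CONTAINER=zdm_migration_backup
```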

Response File Settings for Migration to Exadata Cloud Service

Configure the following response file settings to migrate data to an Exadata Cloud Service target.

Get the response file template, which is used to create your Zero Downtime Migration response file for the database migration procedure, from location $ZDM_HOME/rhp/zdm/template/zdm_template.rsp, and update the file as follows.

  • Set TGT_DB_UNIQUE_NAME to the target database DB_UNIQUE_NAME value. To find DB_UNIQUE_NAME run

    SQL> show parameter db_unique_name
  • Set PLATFORM_TYPE to EXACS.

  • Set MIGRATION_METHOD to DG_OSS, where DG stands for Data Guard and OSS stands for Object Storage service.

  • If SSH tunneling is set up, set the TGT_SSH_TUNNEL_PORT parameter.

  • Zero Downtime Migration automatically discovers the location for data, redo, and reco storage volumes from the specified target database. If you need to override the discovered values, specify the target database data files storage (ASM or ACFS) location using the appropriate set of parameters.

    • ASM: TGT_DATADG, TGT_REDODG, and TGT_RECODG
    • ACFS: TGT_DATAACFS, TGT_REDOACFS, and TGT_RECOACFS
  • Set SKIP_FALLBACK=TRUE if you do not want to ship redo logs from the target to the source standby, either voluntarily or because there is no connectivity between the target and the source.

  • If the target database environment is at a higher patch level than the source database (for example, if the source database is at Jan 2020 PSU/BP and the target database is at April 2020 PSU/BP), then use the TGT_SKIP_DATAPATCH=FALSE parameter to run the datapatch utility to apply a database patch on the target database as part of the post-migration tasks. Otherwise, you need to run the datapatch utility manually after the migration.

  • Set ZDM_LOG_OSS_PAR_URL to the Cloud Object Store pre-authenticated URL if you want to upload migration logs onto Cloud Object Storage. For information about getting a pre-authenticated URL see Oracle Cloud documentation at https://docs.cloud.oracle.com/en-us/iaas/Content/Object/Tasks/usingpreauthenticatedrequests.htm#usingconsole.

  • Set phase_name_MONITORING_INTERVAL=n mins if you want Zero Downtime Migration to monitor and report the status of backup and restore operations at the configured time interval during the migration. The default interval value is 10 minutes. To disable monitoring, set these values to 0 (zero).

    ZDM_BACKUP_FULL_SRC_MONITORING_INTERVAL= 
    ZDM_BACKUP_INCREMENTAL_SRC_MONITORING_INTERVAL= 
    ZDM_BACKUP_DIFFERENTIAL_SRC_MONITORING_INTERVAL= 
    ZDM_CLONE_TGT_MONITORING_INTERVAL= 
    ZDM_OSS_RESTORE_TGT_MONITORING_INTERVAL= 
    ZDM_OSS_RECOVER_TGT_MONITORING_INTERVAL=
  • Set ZDM_BACKUP_RETENTION_WINDOW=number of days if you want to retain the source database backup after the migration.

  • Set ZDM_SRC_TNS_ADMIN to the source database TNS_ADMIN value if it is in a custom location.

  • To access the Oracle Cloud Object Storage, set the following parameters in the response file.

    • Set HOST to the cloud storage REST endpoint URL.

      • For Oracle Cloud Infrastructure storage the typical value format is HOST=https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/ObjectStorageNamespace

        To find the Object Storage Namespace value, log in to the Cloud Console and select Menu > Administration > Tenancy Detail; in the Object Storage Settings section, find the value of the Object Storage Namespace entry.

      • For Oracle Cloud Infrastructure Classic storage the typical value format is HOST=https://acme.storage.oraclecloud.com/v1/Storage-tenancy name

    • Set the Object Storage bucket OPC_CONTAINER parameter.

      The bucket is also referred to as a container for Oracle Cloud Infrastructure Classic storage. Make sure that the Object Storage bucket is created using the Oracle Cloud Service Console as appropriate. Make sure adequate storage is provisioned and available on the object store to accommodate the source database backup.
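
Taken together, the settings above might produce a response file fragment like the following sketch for an Exadata Cloud Service target; all values are hypothetical.

```
TGT_DB_UNIQUE_NAME=proddb_phx1
PLATFORM_TYPE=EXACS
MIGRATION_METHOD=DG_OSS
HOST=https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/mynamespace
OPC_CONTAINER=zdm_migration_backup
```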

Response File Settings for Exadata Cloud at Customer with Zero Data Loss Recovery Appliance Backup

Configure the following response file settings to migrate data to an Exadata Cloud at Customer target using Zero Data Loss Recovery Appliance as the backup medium.

Get the response file template, which is used to create your Zero Downtime Migration response file for the database migration procedure, from location $ZDM_HOME/rhp/zdm/template/zdm_template.rsp, and update the file as follows.

  • Set TGT_DB_UNIQUE_NAME to the target database DB_UNIQUE_NAME value. To find DB_UNIQUE_NAME run

    SQL> show parameter db_unique_name

    For Cloud type Exadata Cloud at Customer Gen 1, set TGT_DB_UNIQUE_NAME to a different DB_UNIQUE_NAME not currently in use.

  • Set PLATFORM_TYPE to EXACC.

  • Set MIGRATION_METHOD to DG_ZDLRA, where DG stands for Data Guard and ZDLRA for Zero Data Loss Recovery Appliance.

  • Set the following Zero Data Loss Recovery Appliance parameters to use a backup residing in Zero Data Loss Recovery Appliance.

    • Set SRC_ZDLRA_WALLET_LOC for the wallet location, for example,

      SRC_ZDLRA_WALLET_LOC=/u02/app/oracle/product/12.1.0/dbhome_3/dbs/zdlra
    • Set TGT_ZDLRA_WALLET_LOC for the wallet location, for example, TGT_ZDLRA_WALLET_LOC=target_database_oracle_home/dbs/zdlra.

    • Set ZDLRA_CRED_ALIAS for the wallet credential alias, for example,

      ZDLRA_CRED_ALIAS=zdlra_scan:listener_port/zdlra9:dedicated
  • Zero Downtime Migration automatically discovers the location for data, redo, and reco storage volumes from the specified target database. If you need to override the discovered values, specify the target database data files storage (ASM or ACFS) location using the appropriate set of parameters.

    • ASM: TGT_DATADG, TGT_REDODG, and TGT_RECODG
    • ACFS: TGT_DATAACFS, TGT_REDOACFS, and TGT_RECOACFS
  • Set SKIP_FALLBACK=TRUE if you do not want to ship redo logs from the target to the source standby, either voluntarily or because there is no connectivity between the target and the source.

  • If the target database environment is at a higher patch level than the source database (for example, if the source database is at Jan 2020 PSU/BP and the target database is at April 2020 PSU/BP), then use the TGT_SKIP_DATAPATCH=FALSE parameter to run the datapatch utility to apply a database patch on the target database as part of the post-migration tasks. Otherwise, you need to run the datapatch utility manually after the migration.

  • Set phase_name_MONITORING_INTERVAL=n mins if you want Zero Downtime Migration to monitor and report the status of the restore operation at the configured time interval during the migration. The default interval value is 10 minutes. To disable monitoring, set the value to 0 (zero).

    ZDM_CLONE_TGT_MONITORING_INTERVAL=
  • Set ZDM_SRC_TNS_ADMIN to the source database TNS_ADMIN value if it is in a custom location.
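
A corresponding response file fragment for a Zero Data Loss Recovery Appliance backup might look like the following sketch; the names, paths, and credential alias are hypothetical.

```
TGT_DB_UNIQUE_NAME=proddb_exacc1
PLATFORM_TYPE=EXACC
MIGRATION_METHOD=DG_ZDLRA
SRC_ZDLRA_WALLET_LOC=/u02/app/oracle/product/12.1.0/dbhome_3/dbs/zdlra
TGT_ZDLRA_WALLET_LOC=/u02/app/oracle/product/19.0.0/dbhome_1/dbs/zdlra
ZDLRA_CRED_ALIAS=zdlra-scan:1521/zdlra9:dedicated
```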

Response File Settings for Exadata Cloud at Customer with Object Storage Backup

Configure the following response file settings to migrate data to an Exadata Cloud at Customer target using Oracle Cloud Infrastructure Object Storage service as the backup medium.

Get the response file template, which is used to create your Zero Downtime Migration response file for the database migration procedure, from location $ZDM_HOME/rhp/zdm/template/zdm_template.rsp, and update the file as follows.

  • Set TGT_DB_UNIQUE_NAME to the target database DB_UNIQUE_NAME value. To find DB_UNIQUE_NAME run

    SQL> show parameter db_unique_name

    For Cloud type Exadata Cloud at Customer Gen 1, set TGT_DB_UNIQUE_NAME to a different DB_UNIQUE_NAME not currently in use.

  • Set PLATFORM_TYPE to EXACC.

  • Set MIGRATION_METHOD to DG_OSS, where DG stands for Data Guard and OSS for the Object Storage service.

  • Zero Downtime Migration automatically discovers the location for data, redo, and reco storage volumes from the specified target database. If you need to override the discovered values, specify the target database data files storage (ASM or ACFS) location using the appropriate set of parameters.

    • ASM: TGT_DATADG, TGT_REDODG, and TGT_RECODG
    • ACFS: TGT_DATAACFS, TGT_REDOACFS, and TGT_RECOACFS
  • Set SKIP_FALLBACK=TRUE if you do not want to ship redo logs from the target to the source standby, either voluntarily or because there is no connectivity between the target and the source.

  • If the target database environment is at a higher patch level than the source database (for example, if the source database is at Jan 2020 PSU/BP and the target database is at April 2020 PSU/BP), then use the TGT_SKIP_DATAPATCH=FALSE parameter to run the datapatch utility to apply a database patch on the target database as part of the post-migration tasks. Otherwise, you need to run the datapatch utility manually after the migration.

  • Set phase_name_MONITORING_INTERVAL=n mins if you want Zero Downtime Migration to monitor and report the status of backup and restore operations at the configured time interval during the migration. The default interval value is 10 minutes. To disable monitoring, set these values to 0 (zero).

    ZDM_BACKUP_FULL_SRC_MONITORING_INTERVAL= 
    ZDM_BACKUP_INCREMENTAL_SRC_MONITORING_INTERVAL= 
    ZDM_BACKUP_DIFFERENTIAL_SRC_MONITORING_INTERVAL= 
    ZDM_CLONE_TGT_MONITORING_INTERVAL= 
    ZDM_OSS_RESTORE_TGT_MONITORING_INTERVAL= 
    ZDM_OSS_RECOVER_TGT_MONITORING_INTERVAL=
  • Set ZDM_BACKUP_RETENTION_WINDOW=number of days if you want to retain the source database backup after the migration.

  • Set ZDM_SRC_TNS_ADMIN to the source database TNS_ADMIN value if it is in a custom location.

  • To access the Oracle Cloud Object Storage, set the following parameters in the response file.

    The source database is backed up to the specified container and restored to Exadata Cloud at Customer using RMAN SQL*Net connectivity.

    • Set HOST to the cloud storage REST endpoint URL.

      • For Oracle Cloud Infrastructure storage the typical value format is HOST=https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/ObjectStorageNamespace

        To find the Object Storage Namespace value, log in to the Cloud Console and select Menu > Administration > Tenancy Detail; in the Object Storage Settings section, find the value of the Object Storage Namespace entry.

      • For Oracle Cloud Infrastructure Classic storage the typical value format is HOST=https://acme.storage.oraclecloud.com/v1/Storage-tenancy name

    • Set the Object Storage bucket OPC_CONTAINER parameter.

      The bucket is also referred to as a container for Oracle Cloud Infrastructure Classic storage. Make sure that the Object Storage bucket is created using the Oracle Cloud Service Console as appropriate. Make sure adequate storage is provisioned and available on the object store to accommodate the source database backup.
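
For this scenario, the response file fragment might look like the following sketch; all values are hypothetical.

```
TGT_DB_UNIQUE_NAME=proddb_exacc1
PLATFORM_TYPE=EXACC
MIGRATION_METHOD=DG_OSS
HOST=https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/mynamespace
OPC_CONTAINER=zdm_migration_backup
ZDM_BACKUP_RETENTION_WINDOW=7
```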

Response File Settings for Exadata Cloud at Customer with NFS Backup

Configure the following response file settings to migrate data to an Exadata Cloud at Customer target using NFS storage as the backup medium.

Get the response file template, which is used to create your Zero Downtime Migration response file for the database migration procedure, from location $ZDM_HOME/rhp/zdm/template/zdm_template.rsp, and update the file as follows.

  • Set TGT_DB_UNIQUE_NAME to the target database DB_UNIQUE_NAME value. To find DB_UNIQUE_NAME run

    SQL> show parameter db_unique_name

    For Cloud type Exadata Cloud at Customer Gen 1, set TGT_DB_UNIQUE_NAME to a different DB_UNIQUE_NAME not currently in use.

  • Set PLATFORM_TYPE to EXACC.

  • Set MIGRATION_METHOD to DG_SHAREDPATH or DG_EXTBACKUP, where DG stands for Data Guard.

    Use DG_SHAREDPATH when a new backup needs to be taken and placed on an external storage mount (for example, an NFS mount point).

    Use DG_EXTBACKUP when using an existing backup, already placed on an external shared mount (for example, NFS storage).

    Note that if MIGRATION_METHOD is set to DG_EXTBACKUP then Zero Downtime Migration does not perform a new backup.

  • Set BACKUP_PATH to specify the actual NFS path that is accessible from both the source and target database servers, for example, an NFS mount point. The NFS mount path should be the same for both the source and target database servers. This path does not need to be mounted on the Zero Downtime Migration service host.

    Note the following considerations:

    • The source database is backed up to the specified path and restored to Exadata Cloud at Customer using RMAN SQL*Net connectivity.

    • The path set in BACKUP_PATH should have ‘rwx’ permissions for the source database user, and at least read permissions for the target database user.

    • In the path specified by BACKUP_PATH, the Zero Downtime Migration backup procedure will create a directory, $BACKUP_PATH/dbname, and place the backup pieces in this directory.

  • If you use DG_EXTBACKUP as the MIGRATION_METHOD, then you should create a standby control file backup in the specified path and provide read permissions to the backup pieces for the target database user. For example,

    RMAN> BACKUP CURRENT CONTROLFILE FOR STANDBY FORMAT '<BACKUP_PATH>/lower_case_dbname/standby_ctl_%U';

    In this example, the %U format element generates a system-unique file name.

  • Zero Downtime Migration automatically discovers the location for data, redo, and reco storage volumes from the specified target database. If you need to override the discovered values, specify the target database data files storage (ASM or ACFS) location using the appropriate set of parameters.

    • ASM: TGT_DATADG, TGT_REDODG, and TGT_RECODG
    • ACFS: TGT_DATAACFS, TGT_REDOACFS, and TGT_RECOACFS
  • Set SKIP_FALLBACK=TRUE if you do not want to ship redo logs from the target to the source standby, either voluntarily or because there is no connectivity between the target and the source.

  • If the target database environment is at a higher patch level than the source database (for example, if the source database is at Jan 2020 PSU/BP and the target database is at April 2020 PSU/BP), then use the TGT_SKIP_DATAPATCH=FALSE parameter to run the datapatch utility to apply a database patch on the target database as part of the post-migration tasks. Otherwise, you need to run the datapatch utility manually after the migration.

  • Set phase_name_MONITORING_INTERVAL=n mins if you want Zero Downtime Migration to monitor and report the status of backup and restore operations at the configured time interval during the migration. The default interval value is 10 minutes. To disable monitoring, set these values to 0 (zero).

    ZDM_BACKUP_FULL_SRC_MONITORING_INTERVAL= 
    ZDM_BACKUP_INCREMENTAL_SRC_MONITORING_INTERVAL= 
    ZDM_BACKUP_DIFFERENTIAL_SRC_MONITORING_INTERVAL= 
    ZDM_CLONE_TGT_MONITORING_INTERVAL= 
    ZDM_OSS_RESTORE_TGT_MONITORING_INTERVAL= 
    ZDM_OSS_RECOVER_TGT_MONITORING_INTERVAL=
  • Set ZDM_BACKUP_RETENTION_WINDOW=number of days if you want to retain the source database backup after the migration.

  • Set ZDM_SRC_TNS_ADMIN to the source database TNS_ADMIN value if it is in a custom location.
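
To illustrate the BACKUP_PATH requirements, the following sketch prepares a per-database directory on the shared mount; the mount point and database name are hypothetical, and the commands would run as the source database user.

```shell
# stand-in for the shared NFS mount visible to both source and target servers
BACKUP_PATH=/tmp/nfs_backup

# ZDM creates $BACKUP_PATH/dbname itself; pre-creating it here shows the layout
mkdir -p "$BACKUP_PATH/proddb"

# rwx for the source database user; the target database user needs at least read
chmod 775 "$BACKUP_PATH" "$BACKUP_PATH/proddb"

ls -ld "$BACKUP_PATH/proddb"
```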

Response File Settings for Offline Migration (Backup and Recovery)

Configure the following response file settings before migrating a database offline to an Oracle Cloud Infrastructure, Exadata Cloud at Customer, or Exadata Cloud Service target environment.

Get the response file template, which is used to create your Zero Downtime Migration response file for the database migration procedure, from location $ZDM_HOME/rhp/zdm/template/zdm_template.rsp, and update the file as follows.

  • Set TGT_DB_UNIQUE_NAME to the target database DB_UNIQUE_NAME value. To find DB_UNIQUE_NAME run

    SQL> show parameter db_unique_name
  • Set PLATFORM_TYPE to the appropriate value, depending on your target environment.

    • For Oracle Cloud Infrastructure, set PLATFORM_TYPE=VMDB.
    • For Exadata Cloud at Customer, set PLATFORM_TYPE=EXACC.
    • For Exadata Cloud Service, set PLATFORM_TYPE=EXACS.
  • If the Object Storage service is used as the backup medium, set MIGRATION_METHOD to BACKUP_RESTORE_OSS.

    The Exadata Cloud at Customer platform can also use the NFS backup medium. If this is the case, set MIGRATION_METHOD to BACKUP_RESTORE_NFS, and ignore the Oracle Cloud Object Storage parameter settings.

  • Zero Downtime Migration automatically discovers the location for data, redo, and reco storage volumes from the specified target database. If you need to override the discovered values, specify the target database data files storage (ASM or ACFS) location using the appropriate set of parameters.

    • ASM: TGT_DATADG, TGT_REDODG, and TGT_RECODG
    • ACFS: TGT_DATAACFS, TGT_REDOACFS, and TGT_RECOACFS
  • If the target database environment is at a higher patch level than the source database (for example, if the source database is at Jan 2020 PSU/BP and the target database is at April 2020 PSU/BP), then use the TGT_SKIP_DATAPATCH=FALSE parameter to run the datapatch utility to apply a database patch on the target database as part of the post-migration tasks. Otherwise, you need to run the datapatch utility manually after the migration.

  • Set ZDM_LOG_OSS_PAR_URL to the Cloud Object Store pre-authenticated URL if you want to upload migration logs onto Cloud Object Storage. For information about getting a pre-authenticated URL see Oracle Cloud documentation at https://docs.cloud.oracle.com/en-us/iaas/Content/Object/Tasks/usingpreauthenticatedrequests.htm#usingconsole.

  • Set phase_name_MONITORING_INTERVAL=n mins if you want Zero Downtime Migration to monitor and report the status of backup and restore operations at the configured time interval during the migration. The default interval value is 10 minutes. To disable monitoring, set these values to 0 (zero).

    ZDM_BACKUP_FULL_SRC_MONITORING_INTERVAL= 
    ZDM_BACKUP_INCREMENTAL_SRC_MONITORING_INTERVAL= 
    ZDM_BACKUP_DIFFERENTIAL_SRC_MONITORING_INTERVAL= 
    ZDM_CLONE_TGT_MONITORING_INTERVAL= 
    ZDM_OSS_RESTORE_TGT_MONITORING_INTERVAL= 
    ZDM_OSS_RECOVER_TGT_MONITORING_INTERVAL=
  • Set ZDM_BACKUP_RETENTION_WINDOW=number of days if you want to retain the source database backup after the migration.

  • Set ZDM_SRC_TNS_ADMIN to the source database TNS_ADMIN value if it is in a custom location.

  • To access the Oracle Cloud Object Storage, set the following parameters in the response file.

    • Set HOST to the cloud storage REST endpoint URL.

      • For Oracle Cloud Infrastructure storage the typical value format is HOST=https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/ObjectStorageNamespace

        To find the Object Storage Namespace value, log in to the Cloud Console and select Menu > Administration > Tenancy Detail; in the Object Storage Settings section, find the value of the Object Storage Namespace entry.

      • For Oracle Cloud Infrastructure Classic storage the typical value format is HOST=https://acme.storage.oraclecloud.com/v1/Storage-tenancy name

    • Set the Object Storage bucket OPC_CONTAINER parameter.

      The bucket is also referred to as a container for Oracle Cloud Infrastructure Classic storage. Make sure that the Object Storage bucket is created using the Oracle Cloud Service Console as appropriate. Make sure adequate storage is provisioned and available on the object store to accommodate the source database backup.
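
An offline migration response file fragment might look like the following sketch; all values are hypothetical, and on an Exadata Cloud at Customer target with NFS backup, BACKUP_RESTORE_NFS would replace BACKUP_RESTORE_OSS.

```
TGT_DB_UNIQUE_NAME=proddb_phx1
PLATFORM_TYPE=VMDB
MIGRATION_METHOD=BACKUP_RESTORE_OSS
HOST=https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/mynamespace
OPC_CONTAINER=zdm_migration_backup
```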

Preparing for Automatic Application Switchover

To minimize or eliminate service interruptions on the application after you complete the database migration and switchover, prepare your application to automatically switch over connections from the source database to the target database.

In the following example connect string, the application connects to the source database, and when it is not available the connection is switched over to the target database.

(DESCRIPTION=
    (FAILOVER=on)(LOAD_BALANCE=on)(CONNECT_TIMEOUT=3)(RETRY_COUNT=3)
    (ADDRESS_LIST=
        (ADDRESS=(PROTOCOL=TCP)(HOST=source_database_scan)(PORT=1521))
        (ADDRESS=(PROTOCOL=TCP)(HOST=target_database_scan)(PORT=1521)))
    (CONNECT_DATA=(SERVICE_NAME=zdm_prod_svc)))

On the source database, create the service, which is named zdm_prod_svc in the examples.

srvctl add service -db clever -service zdm_prod_svc -role PRIMARY
 -notification TRUE -session_state dynamic -failovertype transaction
 -failovermethod basic -commit_outcome TRUE -failoverretry 30 -failoverdelay 10
 -replay_init_time 900 -clbgoal SHORT -rlbgoal SERVICE_TIME -preferred clever1,clever2
 -retention 3600 -verbose
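
After adding the service, you would typically start it and confirm its configuration; a sketch, assuming the same database and service names as above.

```
srvctl start service -db clever -service zdm_prod_svc
srvctl status service -db clever -service zdm_prod_svc
```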

See Also:

Oracle MAA white papers about client failover best practices on the Oracle Active Data Guard Best Practices page at https://www.oracle.com/goto/maa

High Availability in Oracle Database Development Guide

Customizing a Migration Job

You can customize the Zero Downtime Migration workflow by registering action scripts or plug-ins as pre-actions or post-actions to be performed as part of the operational phases involved in your migration job.

The following topics describe how to customize a migration job.

Registering Action Plug-ins

Custom plug-ins must be registered to the Zero Downtime Migration service host to be plugged in as customizations for a particular operational phase.

Determine the operational phase the given plug-in has to be associated with, and run the ZDMCLI command ADD USERACTION, specifying -optype MIGRATE_DATABASE and the respective phase of the operation, whether the plug-in is run -pre or -post relative to that phase, and any on-error requirements. You can register custom plug-ins for operational phases after ZDM_SETUP_TGT in the migration job workflow.

What happens at runtime if the user action encounters an error can be specified with the -onerror option, which you can set to either ABORT, to end the process, or CONTINUE, to continue the migration job even if the custom plug-in exits with an error. See the example command usage below.

Use the Zero Downtime Migration software installed user (for example, zdmuser) to add user actions to a database migration job. Adding user actions zdmvaltgt and zdmvalsrc with the ADD USERACTION command would look like the following.

zdmuser> $ZDM_HOME/bin/zdmcli add useraction -useraction zdmvaltgt -optype MIGRATE_DATABASE 
-phase ZDM_VALIDATE_TGT -pre -onerror ABORT -actionscript /home/zdmuser/useract.sh

zdmuser> $ZDM_HOME/bin/zdmcli add useraction -useraction zdmvalsrc -optype MIGRATE_DATABASE 
-phase ZDM_VALIDATE_SRC -pre -onerror CONTINUE -actionscript /home/zdmuser/useract1.sh

In the above command, the scripts useract.sh and useract1.sh, specified in the -actionscript option, are copied to the Zero Downtime Migration service host repository, and they are run if they are associated with any migration job run using an action template.
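
The action scripts themselves are ordinary executables. The following is a minimal sketch of such a script, written to a file and smoke-tested locally; the file name, location, and echoed check are hypothetical, and Zero Downtime Migration interprets a nonzero exit status according to the -onerror setting.

```shell
# write a minimal user action script and run it locally as a smoke test
cat > /tmp/useract.sh <<'EOF'
#!/bin/sh
# Runs on the database server for the phase it is registered against;
# a nonzero exit status triggers the -onerror behavior (ABORT or CONTINUE).
echo "custom validation running on $(hostname)"
exit 0
EOF
chmod +x /tmp/useract.sh
/tmp/useract.sh
```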

Creating an Action Template

After the useraction plug-ins are registered, you create an action template that combines a set of action plug-ins which can be associated with a migration job.

An action template is created using the ZDMCLI command add imagetype, where the image type, imagetype, is a bundle of all of the useractions required for a specific type of database migration. Create an image type that associates all of the useraction plug-ins needed for the migration of the database. Once created, the image type can be reused for all migration operations for which the same set of plug-ins are needed.

The base type for the image type created here must be CUSTOM_PLUGIN, as shown in the example below.

For example, you can create an image type ACTION_ZDM that bundles both of the useractions created in the previous example, zdmvalsrc and zdmvaltgt.

zdmuser> $ZDM_HOME/bin/zdmcli add imagetype -imagetype ACTION_ZDM -basetype 
CUSTOM_PLUGIN -useractions zdmvalsrc,zdmvaltgt

Updating Action Plug-ins

You can update action plug-ins registered with the Zero Downtime Migration service host.

The following example shows you how to modify the useraction zdmvalsrc to be a -post action, instead of a -pre action.

zdmuser> $ZDM_HOME/bin/zdmcli modify useraction -useraction zdmvalsrc -phase ZDM_VALIDATE_SRC
 -optype MIGRATE_DATABASE -post

This change is propagated to all of the associated action templates, so you do not need to update the action templates.

Associating an Action Template with a Migration Job

When you run a migration job you can specify the image type that specifies the plug-ins to be run as part of your migration job.

For example, run the migration command specifying the action template created in the previous examples with -imagetype ACTION_ZDM. Including the image type results in running the useract.sh and useract1.sh scripts as part of the migration job workflow.

By default, the action plug-ins are run for the specified operational phase on all nodes of the cluster. If the access credential specified in the migration command option -tgtarg2 is unique for a specified target node, then an additional auth argument should be included to specify the auth credentials required to access the other cluster nodes. For example, specify -tgtarg2 nataddrfile:auth_file_with_node_and_identity_file_mapping.

A typical nataddrfile for a two-node cluster with node1 and node2 is shown here.

node1:node1:identity_file_path_available_on_zdmservice_node 
node2:node2:identity_file_path_available_on_zdmservice_node