3 Preparing for Database Migration

Before starting a Zero Downtime Migration database migration, you must configure connectivity between the servers, prepare the source and target databases, set parameters in the response file, and configure any required migration job customization.

See the Zero Downtime Migration Release Notes for the latest information about new features, known issues, and My Oracle Support notes.

Preparing for a Physical Database Migration

The following topics describe how to configure the Zero Downtime Migration prerequisites before running a physical database migration job.

Configuring Connectivity Prerequisites

Connectivity must be set up between the Zero Downtime Migration service host and the source and target database servers.

The following topics describe how to configure the Zero Downtime Migration connectivity prerequisites before running a migration job.

Configuring Connectivity From the Zero Downtime Migration Service Host to the Source and Target Database Servers

Complete the following procedure to ensure the required connectivity between the Zero Downtime Migration service host and the source and target database servers.

  1. On the Zero Downtime Migration service host, verify that the RSA authentication key pairs are available without a passphrase for the Zero Downtime Migration software installed user.
    If a new key pair must be generated without a passphrase, then, as the Zero Downtime Migration software installed user, generate a new key pair as described in Generate SSH Keys Without a Passphrase.
  2. Rename the private key file.
    Rename the zdm_installed_user_home/.ssh/id_rsa file to zdm_installed_user_home/.ssh/zdm_service_host.ppk.
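    For example, assuming the software installed user is zdmuser with home directory /home/zdmuser:

    zdmuser> mv /home/zdmuser/.ssh/id_rsa /home/zdmuser/.ssh/zdm_service_host.ppk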
  3. Add the contents of the zdm_installed_user_home/.ssh/id_rsa.pub file to the opc_user_home/.ssh/authorized_keys file, as follows:

    For the source database server:

    • If the source database server is accessed with the root user, then no action is required.
    • If the source database server is accessed through SSH, then add the contents of the zdm_installed_user_home/.ssh/id_rsa.pub file into the opc_user_home/.ssh/authorized_keys file on all of the source database servers.

    For the target database server:

    • Because the target database server is in the cloud and access is through SSH, add the contents of the zdm_installed_user_home/.ssh/id_rsa.pub file into the opc_user_home/.ssh/authorized_keys file on all of the target database servers.

    Note that the opc user is a standard Oracle cloud user that is used to access database servers, but you can use any user that has sudo privileges. You can also use different users for the source and target databases.
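
    As a sketch, assuming you can already log in to a server named ocidb1 as the opc user, you could append the public key with ssh-copy-id:

    zdmuser> ssh-copy-id -i /home/zdmuser/.ssh/id_rsa.pub opc@ocidb1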

  4. Make sure that the source and target database server names are resolvable from the Zero Downtime Migration service host, either through name servers or through alternate methods approved by your IT infrastructure.

    One method of resolving source and target database server names is to add the source and target database server names and IP address details to the Zero Downtime Migration service host /etc/hosts file.

    In the following example, the IP address entries use the documentation address range (192.0.2.x); you must add your actual public IP addresses.

    #OCI public IP two node RAC server details
    192.0.2.1 ocidb1
    192.0.2.2 ocidb2
    #OCIC public IP two node RAC server details
    192.0.2.3 ocicdb1
    192.0.2.4 ocicdb2

    Optionally, Zero Downtime Migration allows connectivity through bastion hosts for both logical and physical migrations.

  5. Make certain that port 22 on the source and target database servers accepts incoming connections from the Zero Downtime Migration service host.
  6. Test the connectivity from the Zero Downtime Migration service host to all source and target database servers.
    zdmuser> ssh -i zdm_service_host_private_key_file_location user@source_or_target_database_server_name

    For example,

    zdmuser> ssh -i /home/zdmuser/.ssh/zdm_service_host.ppk opc@ocidb1
    zdmuser> ssh -i /home/zdmuser/.ssh/zdm_service_host.ppk opc@ocicdb1

    Note:

    SSH connectivity during Zero Downtime Migration operations requires direct, non-interactive access between the Zero Downtime Migration service host and the source and target database servers without the need to enter a passphrase.
  7. Disable TTY and verify that it is disabled for the SSH privileged user.

    TTY needs to be turned off so that Zero Downtime Migration can run commands on the remote hosts non-interactively.

    Because there are many ways to set sudo privileges, there are many ways to disable TTY for the zdmuser. As an example, you could set the following default in the /etc/sudoers file.

    Defaults:zdmuser !requiretty

    Run the following command to verify that TTY is disabled:

    ssh -i zdm_service_host_private_key_file_location user@source_or_target_database_server_name "sudo_location /bin/sh -c date"

    If TTY is disabled, the command above returns the date from the remote host without any errors.

    If SSH is configured to require TTY, the output shows an error, such as the following:

    [opc@zdm-server ~]$ ssh -i /home/zdmuser/.ssh/zdm_service_host.ppk opc@ocidb1 "/usr/bin/sudo /bin/sh -c date"
    sudo: sorry, you must have a tty to run sudo

Configuring SUDO Access

You may need to grant certain users authority to perform operations using sudo on the source and target database servers.

For source database servers:

  • If the source database server is accessed with the root user, then there is no need to configure sudo operations.

  • If the source database server is accessed through SSH, then configure sudo operations to run without prompting for a password for the database installed user and the root user.

    For example, if the database installed user is oracle, then run sudo su - oracle.

    For the root user, run sudo su -.

For target database servers:

  • Because the target database server is in the cloud, sudo operations are typically configured already. Otherwise, configure all sudo operations to run without prompting for a password for the database installed user and the root user.

    For example, if the database installed user is oracle, then run sudo su - oracle.

    For the root user, run sudo su -.

For example, if the login user is opc, then you can enable sudo operations for the opc user.
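
A minimal sketch of a sudoers entry that enables passwordless sudo for the opc login user (adjust to match your security policy):

opc ALL=(ALL) NOPASSWD: ALL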

Configuring Connectivity Between the Source and Target Database Servers

You have two options for configuring connectivity between the source and target database servers: SQL*Net connectivity using SCAN or SSH.

Configure connectivity using one of the following options.

Option 1: SQL*Net Connectivity Using SCAN

To use this option, the SCAN of the target should be resolvable from the source database server, and the SCAN of the source should be resolvable from the target server.

The source database server specified in the ZDMCLI migrate database command -sourcenode parameter must be able to connect to the target database instance over the target SCAN through the respective SCAN port, and vice versa.

With SCAN connectivity from both sides, the source and target databases can synchronize from either direction. If the source database server SCAN cannot be resolved from the target database server, then the SKIP_FALLBACK parameter in the response file must be set to TRUE, and the target database and source database cannot synchronize after switchover.

Test Connectivity

To test connectivity from the source to the target environment, add the TNS entry of the target database to the source database server $ORACLE_HOME/network/admin/tnsnames.ora file.

[oracle@sourcedb ~] tnsping target-tns-string

To test connectivity from the target to the source environment, add the TNS entry of the source database to the target database server $ORACLE_HOME/network/admin/tnsnames.ora file.

[oracle@targetdb ~] tnsping source-tns-string
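
For example, a minimal entry backing the target-tns-string alias used above (host, port, and service name are placeholders):

target-tns-string =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = target-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = targetdb.example.com)
    )
  )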

Note:

Database migration to Exadata Cloud at Customer using the Zero Data Loss Recovery Appliance requires mandatory SQL*Net connectivity from the target database server to the source database server.

Option 2: Set up an SSH Tunnel

If connectivity using SCAN and the SCAN port is not possible between the source and target database servers, set up an SSH tunnel from the source database server to the target database server.

The following procedure sets up an SSH tunnel on the source database servers for the root user. Note that this tunnel is effectively a temporary channel. Using this connectivity option, you will not be able to synchronize between the target database and source database after switchover, and with this configuration you cannot fall back to the original source database.

Note:

The following steps refer to Oracle Cloud Infrastructure, but are also applicable to Exadata Cloud at Customer and Exadata Cloud Service.
  1. Generate an SSH key file without a passphrase for the opc user on the target Oracle Cloud Infrastructure server, using the information in Generate SSH Keys Without a Passphrase. If the target is an Oracle RAC database, then generate an SSH key file without a passphrase from the first Oracle RAC server.
  2. Add the contents of the Oracle Cloud Infrastructure server opc_user_home/.ssh/id_rsa.pub file into the Oracle Cloud Infrastructure server opc_user_home/.ssh/authorized_keys file.
  3. Copy the target Oracle Cloud Infrastructure server private SSH key file onto the source server in the /root/.ssh/ directory. If the source is an Oracle RAC database, copy the file into all of the source servers.
    For better manageability, keep the private SSH key file name the same as the target server name, and keep the .ppk extension. For example, ocidb1.ppk (where ocidb1 is the target server name).

    The file permissions should be similar to the following.

    /root/.ssh>ls -l ocidb1.ppk
    -rw------- 1 root root 1679 Oct 16 10:05 ocidb1.ppk
  4. Put the following entries in the source server /root/.ssh/config file.
    Host *
      ServerAliveInterval 10  
      ServerAliveCountMax 2
    
    Host OCI_server_name   
      HostName OCI_server_IP_address
      IdentityFile Private_key_file_location 
      User OCI_user_login  
      ProxyCommand /usr/bin/nc -X connect -x proxy_name:proxy_port %h %p

    Where

    • OCI_server_name is the Oracle Cloud Infrastructure target database server name without the domain name. For an Oracle RAC database use the first Oracle RAC server name without the domain name.
    • OCI_server_IP_address is the Oracle Cloud Infrastructure target database server IP address. For an Oracle RAC database use the first Oracle RAC server IP address.
    • Private_key_file_location is the location of the private key file on the source database server, which you copied from the target database server in step 3 above.
    • OCI_user_login is the OS user used to access the target database servers.
    • proxy_name is the host name of the proxy server.
    • proxy_port is the port of the proxy server.

    Note that the ProxyCommand setup is required only when you use a proxy server for connectivity. For example, when the source database server is on Oracle Cloud Infrastructure Classic, you can remove or comment out the line starting with ProxyCommand.

    For example, after specifying the relevant values, the /root/.ssh/config file should be similar to the following.

    Host *
      ServerAliveInterval 10  
      ServerAliveCountMax 2
    
    Host ocidb1
      HostName 192.0.2.1
      IdentityFile /root/.ssh/ocidb1.ppk
      User opc
      ProxyCommand /usr/bin/nc -X connect -x www-proxy.example.com:80 %h %p
    

    The file permissions should be similar to the following.

    /root/.ssh>ls -l config
    -rw------- 1 root root 1679 Oct 16 10:05 config

    In the above example, the Oracle Cloud Infrastructure server name is ocidb1, and the Oracle Cloud Infrastructure server public IP address is 192.0.2.1.

    If the source is an Oracle Cloud Infrastructure Classic server, the proxy_name is not required, so you can remove or comment out the line starting with ProxyCommand.

    If the source is an Oracle RAC database, then copy the same /root/.ssh/config file onto all of the source Oracle RAC database servers. This file contains the server name, public IP address, and private key file location of the first Oracle Cloud Infrastructure Oracle RAC server.

  5. Make sure that you can SSH to the first target Oracle Cloud Infrastructure server from the source server before you enable the SSH tunnel.
    For an Oracle RAC database, test the connection from all of the source servers to the first target Oracle Cloud Infrastructure server.

    Using the private key:

    [root@ocicdb1 ~] ssh -i /root/.ssh/ocidb1.ppk opc@ocidb1
    Last login: Fri Dec  7 14:53:09 2018 from 192.0.2.3
    
    [opc@ocidb1 ~]$

    Note:

    SSH connectivity requires direct, non-interactive access between the source and target database servers, without the need to enter a passphrase.
  6. Run the following command on the source server to enable the SSH tunnel.
    ssh -f OCI_hostname_without_domain_name -L ssh_tunnel_port_number:OCI_server_IP_address:OCI_server_listener_port -N

    Where

    • OCI_hostname_without_domain_name is the Oracle Cloud Infrastructure target database server name without a domain name. For an Oracle RAC database use the first Oracle RAC server name without domain name.
    • ssh_tunnel_port_number is any available port in the ephemeral range (1024-65535). Make sure that the SSH tunnel port is not used by any other process on the server before using it.
    • OCI_server_listener_port is the target database listener port number. The listener port must be open between the source database servers and Oracle Cloud Infrastructure target servers.
    • OCI_server_IP_address is the IP address of the target database server. For a single instance database, specify the Oracle Cloud Infrastructure server IP address. For an Oracle RAC database, specify the Oracle Cloud Infrastructure SCAN name with the domain name. If the SCAN name with the domain name is not resolvable or not working, then specify the IP address obtained from the lsnrctl status command output. For example,
      Listening Endpoints Summary...
        (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
        (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.0.2.9)(PORT=1521)))
        (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.0.2.10)(PORT=1521)))

    The following is an example of the command run to enable the SSH tunnel.

    [root@ocicdb1~]ssh -f ocidb1 -L 9000:192.0.2.9:1521 -N

    For an Oracle RAC database, this step must be repeated on all of the source servers.

  7. Test the SSH tunnel.
    Log in to the source server, switch to the oracle user, source the database environment, and run the following command.
    tnsping localhost:ssh_tunnel_port

    For example,

    [oracle@ocicdb1 ~] tnsping localhost:9000

    The command output is similar to the following.

    TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 22-JAN-2019 05:41:57
    Copyright (c) 1997, 2014, Oracle.  All rights reserved.
    Used parameter files:
    Used HOSTNAME adapter to resolve the alias
    Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=127.0.0.1)(PORT=9000)))
    OK (50 msec)

    If tnsping does not work, then the SSH tunnel is not enabled.

    For Oracle RAC, this step must be repeated on all of the source servers.

Generate SSH Keys Without a Passphrase

If authentication key pairs without a passphrase are not available on the Zero Downtime Migration service host for the Zero Downtime Migration software installed user, you can generate a new SSH key pair without a passphrase.

Note:

Currently, only the RSA key format is supported for configuring SSH connectivity, so use the ssh-keygen command, which generates the authentication key pair (a public key and a private key).

The following example shows you how to generate an SSH key pair for the Zero Downtime Migration software installed user. You can also use this command to generate the SSH key pair for the opc user.

Run the following command on the Zero Downtime Migration service host.

zdmuser> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/zdmuser/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/zdmuser/.ssh/id_rsa.
Your public key has been saved in /home/zdmuser/.ssh/id_rsa.pub.
The key fingerprint is:
c7:ed:fa:2c:5b:bb:91:4b:73:93:c1:33:3f:23:3b:30 zdmuser@zdm_service_host
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|                 |
|         . . .   |
|        S o . =  |
|         . E . * |
|            X.+o.|
|          .= Bo.o|
|          o+*o.  |
+-----------------+

This command generates the id_rsa and id_rsa.pub files in the zdmuser .ssh directory, for example, /home/zdmuser/.ssh.
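
If you want to generate the key pair non-interactively, you can supply an empty passphrase on the command line, for example:

zdmuser> ssh-keygen -t rsa -N "" -f /home/zdmuser/.ssh/id_rsa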

Preparing the Source and Target Databases

There are several tasks you must complete on the source and target databases before configuring a migration job.

Source Database Prerequisites

Meet the following prerequisites on the source database before the Zero Downtime Migration process starts.

  • The source database must be running in ARCHIVELOG mode. See Changing the Database Archiving Mode.
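
    For example, a sketch of checking the current mode and enabling ARCHIVELOG mode if necessary (this requires a database restart):

    SQL> SELECT log_mode FROM v$database;
    SQL> SHUTDOWN IMMEDIATE
    SQL> STARTUP MOUNT
    SQL> ALTER DATABASE ARCHIVELOG;
    SQL> ALTER DATABASE OPEN;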

  • The character set on the source database must be the same as the target database.

  • Configure the TDE wallet on Oracle Database 12c Release 2 and later. Enabling TDE on Oracle Database 11g Release 2 (11.2.0.4) and Oracle Database 12c Release 1 is optional.

    For Oracle Database 12c Release 2 and later, if the source database does not have Transparent Data Encryption (TDE) enabled, then it is mandatory that you configure the TDE wallet before migration begins. The WALLET_TYPE can be AUTOLOGIN (preferred) or PASSWORD based.

    Ensure that the wallet STATUS is OPEN and WALLET_TYPE is AUTOLOGIN (for an auto-login wallet type) or PASSWORD (for a password-based wallet type). For a multitenant database, ensure that the wallet is open on all PDBs as well as the CDB, and that the master key is set for all PDBs and the CDB.

    SQL> SELECT * FROM v$encryption_wallet;
  • If the source is an Oracle RAC database, and SNAPSHOT CONTROLFILE is not on a shared location, configure SNAPSHOT CONTROLFILE to point to a shared location on all Oracle RAC nodes to avoid the ORA-00245 error during backups to Oracle Object Store.

    For example, if the database is deployed on ASM storage,

    $ rman target /  
    RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+DATA/db_name/snapcf_db_name.f';

    If the database is deployed on an ACFS file system, specify the shared ACFS location in the above command.
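
    For example, assuming a hypothetical ACFS mount point of /acfs_mount:

    RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/acfs_mount/db_name/snapcf_db_name.f';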

  • Verify that port 22 on the source database server allows incoming connections from the Zero Downtime Migration service host.

  • Ensure that the scan listener ports (1521, for example) on the source database servers allow incoming connections from the target database servers and outgoing connections to the target database servers.

    Alternate SQL connectivity should be made available if a firewall blocks incoming remote connections using the SCAN listener port.

  • To preserve the source database Recovery Time Objective (RTO) and Recovery Point Objective (RPO) during the migration, the existing RMAN backup strategy should be maintained.

    During the migration a dual backup strategy will be in place: the existing backup strategy and the strategy used by Zero Downtime Migration. Avoid having two RMAN backup jobs running simultaneously (the existing one and the one initiated by Zero Downtime Migration). If archive logs are deleted on the source database, and these archive logs are needed by Zero Downtime Migration to synchronize the target cloud database, then these files should be restored so that Zero Downtime Migration can continue the migration process.

  • If the source database is deployed using Oracle Grid Infrastructure and the database is not registered using SRVCTL, then you must register the database before the migration.
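
    For example, a sketch of registering the database with SRVCTL (the database name and Oracle home path are placeholders):

    $ srvctl add database -db db_unique_name -oraclehome /u01/app/oracle/product/19.0.0/dbhome_1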

  • The source database must use a server parameter file (SPFILE).
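
    For example, to verify that an SPFILE is in use (the query returns a non-empty value when one is):

    SQL> SHOW PARAMETER spfile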

  • If RMAN is not already configured to automatically back up the control file and SPFILE, then set CONFIGURE CONTROLFILE AUTOBACKUP to ON and revert the setting back to OFF after migration is complete.

    RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
  • For offline migrations, make sure that no incoming transactions take place on the source database before the ZDM_BACKUP_DIFFERENTIAL_SRC phase, so that there is no loss of data during the migration. Once Zero Downtime Migration starts generating and transferring backups, any new transactions on the source are not included in the backups, and therefore the target database in the cloud will not have those changes.

  • System time of the Zero Downtime Migration service host and source database server should be in sync with your Oracle Cloud Infrastructure target.

    If the time on any of these systems varies beyond 6 minutes from the time on OCI, it should be adjusted. You can use NTP to synchronize the time, if NTP is configured. If NTP is not configured, then it is recommended that you configure it. If configuring NTP is not an option, then you need to correct the time manually to ensure it is in sync with OCI time.

  • Set the COMPATIBLE database initialization parameter to the same value on the source and target database. See Values for the COMPATIBLE Initialization Parameter in Oracle Database for valid values.
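
    For example, to check the current value:

    SQL> SHOW PARAMETER compatible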

Target Database Prerequisites

The following prerequisites must be met on the target database before you begin the Zero Downtime Migration process.

  • You must create a placeholder target database.

    For Exadata Cloud Service and Exadata Cloud at Customer targets, the placeholder database must be created using the Control Plane, not Grid Infrastructure Database Services, before database migration begins.

    Note:

    For this release, only Grid Infrastructure-based database services are supported as targets. For example, an LVM-based instance or an instance created on a compute node without Grid Infrastructure is not a supported target.

    The placeholder target database is overwritten during migration, but it retains the overall configuration.

    Pay careful attention to the following requirements:

    • Size for the future - When you create the database from the console, ensure that your chosen shape can accommodate the source database, plus any future sizing requirements. A good guideline is to use a shape similar to or larger in size than the source database.
    • Set name parameters
      • DB_NAME - If the target database is Exadata Cloud Service or Exadata Cloud at Customer, then the database DB_NAME should be the same as the source database DB_NAME. If the target database is Oracle Cloud Infrastructure, then the database DB_NAME can be the same as or different from the source database DB_NAME.
      • DB_UNIQUE_NAME - If the target database is Oracle Cloud Infrastructure, Exadata Cloud Service, or Exadata Cloud at Customer, the target database DB_UNIQUE_NAME parameter value must be unique to ensure that Oracle Data Guard can identify the target as a different database from the source database.
    • Match the source SYS password - Specify a SYS password that matches that of the source database.
    • Disable automatic backups - Provision the target database from the console without enabling automatic backups.

      For Oracle Cloud Infrastructure and Exadata Cloud Service, do not select the Enable automatic backups option under the section Configure database backups.

      For Exadata Cloud at Customer, set Backup destination Type to None under the section Configure Backups.

  • The target database version should be the same as the source database version. The target database patch level should also be the same as (or higher than) the source database.

    If the target database environment is at a higher patch level than the source database (for example, if the source database is at Jan 2020 PSU/BP and the target database is at April 2020 PSU/BP), then Zero Downtime Migration runs the datapatch utility as part of the migration.

  • The character set on the source database must be the same as the target database.

  • The target database time zone version must be the same as the source database time zone version. To check the current time zone version, query the V$TIMEZONE_FILE view as shown here, and upgrade the time zone file if necessary.

    SQL> SELECT * FROM v$timezone_file;
  • Verify that the TDE wallet folder exists, and ensure that the wallet STATUS is OPEN and WALLET_TYPE is AUTOLOGIN (for an auto-login wallet type) or PASSWORD (for a password-based wallet type). For a multitenant database, ensure that the wallet is open on all PDBs as well as the CDB, and that the master key is set for all PDBs and the CDB.

    SQL> SELECT * FROM v$encryption_wallet;
  • The target database must use a server parameter file (SPFILE).

  • If the target is an Oracle RAC database, then verify that SSH connectivity without a passphrase is set up between the Oracle RAC servers for the oracle user.
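
    For example, a quick check from the first node (host names are placeholders):

    [oracle@targetnode1 ~]$ ssh oracle@targetnode2 date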

  • Check the size of the disk groups and usage on the target database (ASM disk groups or ACFS file systems) and make sure adequate storage is provisioned and available on the target database servers.
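
    For example, to check ASM disk group capacity and free space:

    SQL> SELECT name, total_mb, free_mb FROM v$asm_diskgroup;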

  • Make sure adequate storage is provisioned and available on the object store to accommodate the source database backup.

  • Verify that ports 22 and 1521 (or the configured database listener port) on the target servers in the Oracle Cloud Infrastructure, Exadata Cloud Service, or Exadata Cloud at Customer environment are open and not blocked by a firewall.

  • Verify that port 22 on the target database server allows incoming connections from the Zero Downtime Migration service host.

  • Capture the output of the RMAN SHOW ALL command, so that you can compare RMAN settings after the migration, then reset any changed RMAN configuration settings to ensure that the backup works without any issues.

    RMAN> show all;
  • System time of the Zero Downtime Migration service host and source database server should be in sync with your Oracle Cloud Infrastructure target.

    If the time on any of these systems varies beyond 6 minutes from the time on OCI, it should be adjusted. You can use NTP to synchronize the time, if NTP is configured. If NTP is not configured, then it is recommended that you configure it. If configuring NTP is not an option, then you need to correct the time manually to ensure it is in sync with OCI time.

  • Set the COMPATIBLE database initialization parameter to the same value on the source and target database. See Values for the COMPATIBLE Initialization Parameter in Oracle Database for valid values.

See Also:

Managing User Credentials for information about generating the auth token for Object Storage backups

Zero Downtime Migration Port Requirements

Setting Up the Transparent Data Encryption Keystore

For Oracle Database 12c Release 2 and later, if the source and target databases do not have Transparent Data Encryption (TDE) enabled, then it is mandatory that you configure the TDE keystore before migration begins.

TDE should be enabled and the TDE WALLET status on both source and target databases must be set to OPEN. The WALLET_TYPE can be AUTOLOGIN, for an auto-login keystore (preferred), or PASSWORD, for a password-based keystore. On a multitenant database, make sure that the keystore is open on all PDBs as well as the CDB, and that the master key is set for all PDBs and the CDB.

If TDE is not already configured as required on the source and target databases, use the following instructions to set up the TDE keystore.

For a password-based keystore, you only need to do steps 1, 2, and 4; for an auto-login keystore, complete all of the steps.

  1. Set ENCRYPTION_WALLET_LOCATION in the $ORACLE_HOME/network/admin/sqlnet.ora file.

    /home/oracle>cat /u01/app/oracle/product/12.2.0.1/dbhome_4/network/admin/sqlnet.ora 
    
    ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=FILE)
      (METHOD_DATA=(DIRECTORY=/u01/app/oracle/product/12.2.0.1/dbhome_4/network/admin/)))

    For an Oracle RAC instance, also set ENCRYPTION_WALLET_LOCATION on the second Oracle RAC node.

  2. Create and configure the keystore.

    1. Connect to the database and create the keystore.

      $ sqlplus "/as sysdba"
      SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/u01/app/oracle/product/12.2.0.1/dbhome_2/network/admin'
       identified by password;
    2. Open the keystore.

      For a non-CDB environment, run the following command.

      SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY password;
      keystore altered.

      For a CDB environment, run the following command.

      SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY password container = ALL;
      keystore altered.
    3. Create and activate the master encryption key.

      For a non-CDB environment, run the following command.

      SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY password with backup;
      keystore altered.

      For a CDB environment, run the following command.

      SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY password with backup container = ALL;
      keystore altered.
    4. Query V$ENCRYPTION_WALLET to get the keystore status, keystore type, and keystore location.

      SQL> SELECT * FROM v$encryption_wallet;

      WRL_TYPE    WRL_PARAMETER
      ----------- ------------------------------------------------------------
      FILE        /u01/app/oracle/product/12.2.0.1/dbhome_2/network/admin/

      STATUS      WALLET_TYPE    WALLET_OR    FULLY_BAC    CON_ID
      ----------- -------------- ------------ ------------ ----------
      OPEN        PASSWORD       SINGLE       NO           0

    The configuration of a password-based keystore is complete at this stage, and the keystore is enabled with status OPEN and WALLET_TYPE is shown as PASSWORD in the query output above.

    Continue to step 3 only if you need to configure an auto-login keystore, otherwise skip to step 4.

  3. For an auto-login keystore only, complete the keystore configuration.

    1. Create the auto-login keystore.

      SQL> ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE FROM KEYSTORE
       '/u01/app/oracle/product/12.2.0.1/dbhome_2/network/admin/' IDENTIFIED BY password;
      keystore altered.
    2. Close the password-based keystore.

      SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE CLOSE IDENTIFIED BY password;
      keystore altered.
    3. Query V$ENCRYPTION_WALLET to get the keystore status, keystore type, and keystore location.

      SQL> SELECT * FROM v$encryption_wallet;
      WRL_TYPE    WRL_PARAMETER
      ----------- ------------------------------------------------------------
      FILE        /u01/app/oracle/product/12.2.0.1/dbhome_2/network/admin/

      STATUS      WALLET_TYPE    WALLET_OR    FULLY_BAC    CON_ID
      ----------- -------------- ------------ ------------ ----------
      OPEN        AUTOLOGIN      SINGLE       NO

      In the query output, verify that the TDE keystore STATUS is OPEN and WALLET_TYPE is set to AUTOLOGIN; otherwise, the auto-login keystore is not set up correctly.

      This completes the auto-login keystore configuration.

  4. Copy the keystore files to the second Oracle RAC node.

    If you configured the keystore in a shared file system for Oracle RAC, or if you are enabling TDE for a single instance database, then no action is required.

    If you are enabling TDE for an Oracle RAC database without shared access to the keystore, copy the following files to the same location on the second node.

    • /u01/app/oracle/product/12.2.0.1/dbhome_2/network/admin/ew*

    • /u01/app/oracle/product/12.2.0.1/dbhome_2/network/admin/cw*

Preparing the Physical Migration Response File

Set the required physical migration response file parameters. Get the response file template, $ZDM_HOME/rhp/zdm/template/zdm_template.rsp, which is used to create your Zero Downtime Migration response file for the database migration procedure, and edit the file as described here.

The following response file settings show you how to configure a typical use case. To further customize your configuration you can find additional parameters described in Zero Downtime Migration Physical Migration Response File Parameters Reference.

TGT_DB_UNIQUE_NAME

Set TGT_DB_UNIQUE_NAME to the target database DB_UNIQUE_NAME value. To find DB_UNIQUE_NAME run

SQL> show parameter db_unique_name

For Cloud type Exadata Cloud at Customer Gen 1, set TGT_DB_UNIQUE_NAME to a different DB_UNIQUE_NAME not currently in use.

PLATFORM_TYPE

Set PLATFORM_TYPE to one of the following:

  • VMDB - Oracle Cloud Infrastructure virtual machine or bare metal targets.

  • EXACS - Exadata Cloud Service

  • EXACC - Exadata Cloud at Customer

  • NON_CLOUD - On-premises Exadata Database Machine

MIGRATION_METHOD

Set MIGRATION_METHOD to one of the following:

  • ONLINE_PHYSICAL - Oracle Data Guard (online)

  • OFFLINE_PHYSICAL - RMAN backup and restore (offline). Note that this is the only migration method supported for Oracle Standard Edition databases.

DATA_TRANSFER_MEDIUM

DATA_TRANSFER_MEDIUM specifies the media used for the source database backup. Valid values depend on whether MIGRATION_METHOD=ONLINE_PHYSICAL or MIGRATION_METHOD=OFFLINE_PHYSICAL.

When MIGRATION_METHOD=ONLINE_PHYSICAL, you can use one of these data transfer method values:

  • OSS - Oracle Data Guard using Object Storage Service (OSS) for standby initialization.

    Supported for PLATFORM_TYPE set to Oracle Cloud Infrastructure (VMDB), Exadata Cloud Service (EXACS), and Exadata Cloud at Customer (EXACC).

    Also set ZDM_LOG_OSS_PAR_URL to the Cloud Object Store pre-authenticated URL if you want to upload migration logs onto Cloud Object Storage. For information about getting a pre-authenticated URL see Oracle Cloud documentation at https://docs.cloud.oracle.com/en-us/iaas/Content/Object/Tasks/usingpreauthenticatedrequests.htm#usingconsole.

  • EXTBACKUP - Oracle Data Guard with an existing backup in an external location.

    Supported for PLATFORM_TYPE set to Exadata Cloud at Customer (EXACC)

    Also, create a standby control file backup in the specified path and provide read permissions to the backup pieces for the target database user. For example,

    RMAN> BACKUP CURRENT CONTROLFILE FOR STANDBY FORMAT 'BACKUP_PATH/lower_case_dbname/standby_ctl_%U';

    Where standby_ctl_%U is a system-generated unique file name.

  • ZDLRA - Oracle Data Guard using ZDLRA for standby initialization.

    Supported for PLATFORM_TYPE set to Exadata Cloud at Customer (EXACC), and set the following parameters.

    • Set SRC_ZDLRA_WALLET_LOC for the wallet location, for example,

      SRC_ZDLRA_WALLET_LOC=/u02/app/oracle/product/12.1.0/dbhome_3/dbs/zdlra
    • Set TGT_ZDLRA_WALLET_LOC for the wallet location, for example, TGT_ZDLRA_WALLET_LOC=target_database_oracle_home/dbs/zdlra.

    • Set ZDLRA_CRED_ALIAS for the wallet credential alias, for example,

      ZDLRA_CRED_ALIAS=zdlra_scan:listener_port/zdlra9:dedicated
  • NFS - Oracle Data Guard using a backup location such as NFS.

    Supported for PLATFORM_TYPE set to Exadata Cloud at Customer (EXACC)

    Also set BACKUP_PATH to specify the actual NFS path that is accessible from both the source and target database servers, for example, an NFS mount point. The NFS mount path should be the same for both source and target database servers. This path does not need to be mounted on the Zero Downtime Migration service host. Note the following considerations:

    • The source database is backed up to the specified path and restored to Exadata Cloud at Customer using RMAN SQL*Net connectivity.

    • The path set in BACKUP_PATH should have 'rwx' permissions for the source database user, and at least read permissions for the target database user.

    • In the path specified by BACKUP_PATH, the Zero Downtime Migration backup procedure will create a directory, $BACKUP_PATH/dbname, and place the backup pieces in this directory.

When MIGRATION_METHOD=OFFLINE_PHYSICAL, you can use one of these data transfer method values:

  • OSS - Migration using backup and restore through the Object Storage Service, supported for Oracle Cloud Infrastructure (VMDB), Exadata Cloud Service (EXACS), and Exadata Cloud at Customer (EXACC). SQL*Net connectivity between source and target is not needed.
  • NFS - Migration using backup and restore through NFS, supported for Exadata Cloud at Customer (EXACC). SQL*Net connectivity between source and target is not needed.

Additional Oracle Cloud Object Storage Settings

When DATA_TRANSFER_MEDIUM=OSS, set the following parameters to access the Oracle Cloud Object Storage.

The source database is backed up to the specified container and restored to Exadata Cloud at Customer using RMAN SQL*Net connectivity.

  • Set HOST to the cloud storage REST endpoint URL.

    • For Oracle Cloud Infrastructure storage the typical value format is HOST=https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/ObjectStorageNamespace

      To find the Object Storage Namespace value, log in to the Cloud Console and select Menu, Administration, Tenancy Detail; in the Object Storage Settings section, find the value of the Object Storage Namespace entry.

    • For Oracle Cloud Infrastructure Classic storage the typical value format is HOST=https://acme.storage.oraclecloud.com/v1/Storage-tenancy_name

  • Set the Object Storage bucket OPC_CONTAINER parameter.

    The bucket is also referred to as a container for Oracle Cloud Infrastructure Classic storage. Make sure that the Object Storage bucket is created using the Oracle Cloud Service Console as appropriate. Make sure adequate storage is provisioned and available on the object store to accommodate the source database backup.

TGT_SSH_TUNNEL_PORT

If SSH tunneling is set up, set the TGT_SSH_TUNNEL_PORT parameter.
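
For example, if you enabled the SSH tunnel on port 9000 as shown earlier:

TGT_SSH_TUNNEL_PORT=9000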

Data and Redo Locations

Zero Downtime Migration automatically discovers the location for data, redo, and reco storage volumes from the specified target database. If you need to override the discovered values, specify the target database data files storage (ASM or ACFS) location using the appropriate set of parameters.

  • ASM: TGT_DATADG, TGT_REDODG, and TGT_RECODG
  • ACFS: TGT_DATAACFS, TGT_REDOACFS, and TGT_RECOACFS

SKIP_FALLBACK

Set SKIP_FALLBACK=TRUE if you do not want to ship redo logs from the target to the source standby, either voluntarily or because there is no connectivity between the target and the source.

TGT_SKIP_DATAPATCH

Zero Downtime Migration runs the datapatch utility by default as part of the migration process if the target database environment is at a higher patch level than the source database (for example, if the source database is at Jan 2020 PSU/BP and the target database is at April 2020 PSU/BP).

If you want to skip this task, set the TGT_SKIP_DATAPATCH=TRUE response file parameter.

PHASE_NAME_MONITORING_INTERVAL

Set the backup and restore monitoring interval parameters, listed below, to n minutes if you want Zero Downtime Migration to monitor and report the status of backup and restore operations at the configured time interval during the migration. The default interval value is 10 minutes. To disable monitoring, set these values to 0 (zero).

ZDM_BACKUP_FULL_SRC_MONITORING_INTERVAL= 
ZDM_BACKUP_INCREMENTAL_SRC_MONITORING_INTERVAL= 
ZDM_BACKUP_DIFFERENTIAL_SRC_MONITORING_INTERVAL= 
ZDM_CLONE_TGT_MONITORING_INTERVAL= 
ZDM_OSS_RESTORE_TGT_MONITORING_INTERVAL= 
ZDM_OSS_RECOVER_TGT_MONITORING_INTERVAL=

ZDM_BACKUP_RETENTION_WINDOW

Set ZDM_BACKUP_RETENTION_WINDOW=number of days if you wish to retain the source database backup after the migration.

ZDM_SRC_TNS_ADMIN

Set ZDM_SRC_TNS_ADMIN to the source database TNS_ADMIN value if TNS_ADMIN is in a custom (non-default) location.

Migrating an On-Premises Database to an On-Premises Exadata Database Machine

An on-premises migration to an on-premises Exadata Database Machine target using Zero Downtime Migration works the same way as a migration to a cloud target. In the response file, you indicate that the migration target is on-premises by setting PLATFORM_TYPE=NON_CLOUD.

Just like in cloud migration scenarios, you must provision the target database with the shape and size desired, including configuring any initialization parameters, before starting the migration. The target database is expected to be the same major version as the source database, Oracle Grid Infrastructure is mandatory at the target database, and target datafiles can be stored on ASM or ACFS.

One aspect where an on-premises to on-premises migration is different from migrating to the cloud is in the handling of Transparent Data Encryption (TDE). On the cloud, TDE is mandatory for Oracle Database 12.2 and later releases; however, for an on-premises to on-premises migration, TDE must be configured at the target only if TDE is used at the source. You must configure TDE at the target before the migration starts; Zero Downtime Migration does not configure it for you.

You can specify that TDE is not configured at the source or target by setting the response file parameter ZDM_TDE_MANDATORY=FALSE. This parameter can only be used when you set PLATFORM_TYPE=NON_CLOUD. With ZDM_TDE_MANDATORY=FALSE set, Zero Downtime Migration does not require TDE at the target when the source is not using TDE, and does not encrypt the target on restore.

For an on-premises Exadata target database migration, MIGRATION_METHOD can be set to ONLINE_PHYSICAL or OFFLINE_PHYSICAL, and DATA_TRANSFER_MEDIUM can be set to any of the values supported by Zero Downtime Migration. Set the remaining parameters as you would for a cloud migration.

Converting a Non-CDB Database to a CDB During Migration

As part of the physical migration process, Zero Downtime Migration can handle conversion of a non-CDB source database to a PDB of the same version in the cloud. The conversion process transforms the source non-CDB into a target PDB that is then plugged into an existing CDB in the target.

Downtime will increase due to the non-CDB to CDB conversion. This process is offline (no redo transport and apply), and no rollback is possible.

Source non-CDB Database Prerequisites

  • Oracle Database 12c or later versions, because this is when multitenant architecture became available

  • Same character set as the target CDB

Target Database CDB and PDB Prerequisites

  • The target CDB must not contain a PDB with same name as the resulting converted PDB, because Zero Downtime Migration will create the PDB.

  • The target database must be at least the same major version as the source database.

    • If the minor version is different on the target, it must be a higher minor version than the source database.
    • If the patch level is different, you must set the response file parameter TGT_SKIP_DATAPATCH=FALSE.

Transparent Data Encryption Requirements

  • Transparent Data Encryption (TDE) is optional on the source database. If TDE is not set up, no further information is required; however, if TDE is set up on the source, the credentials for exporting and importing the TDE keys are required.

  • For source credentials, the migrate database command must include either the -tdekeystorepasswd option or the -tdekeystorewallet option.

  • If either of these options is used, then the target credentials must also be provided, using either the -tgttdekeystorepasswd option or the -tgttdekeystorewallet option.

Application Express Requirements

  • If Application Express (APEX) is not installed on the source there are no further requirements.

  • If APEX exists on the source, and the source database is a non-CDB, you must choose one of the following options:

    • Remove APEX from the source non-CDB.
    • Verify that the APEX version on the target CDB is the same as that on the source.

      If APEX is not at the same version, conversion is not possible; APEX schemas vary between versions, and the target PDB would not be able to open.

The target CDB is not dropped in the process, and the presence or absence of other PDBs does not affect the outcome of the conversion and plug-in.

Parameters in the response file are set as follows:

  • (Required) NONCDBTOPDB_CONVERSION: Set to TRUE to indicate that you want to convert a source database from non-CDB to PDB.
  • (Optional) NONCDBTOPDB_SWITCHOVER: Set to TRUE for a physical migration using Data Guard switchover, to execute switchover operations during a migration job with non-CDB to PDB conversion enabled.

The following are examples of the ZDMCLI migrate database command usage for converting a non-CDB to a PDB during migration using the TDE credentials:

zdmuser> $ZDM_HOME/bin/zdmcli migrate database
          -sourcedb source_db_unique_name_value -sourcenode source_database_server_name -srcroot
          -targetnode target_database_server_name -backupuser Object_store_login_user_name
          -rsp response_file_location -tgtauth zdmauth -tgtarg1 user:target_database_server_login_user_name
          -tgtarg2 identity_file:ZDM_installed_user_private_key_file_location -tgtarg3 sudo_location:/usr/bin/sudo
          -tdekeystorepasswd -tgttdekeystorepasswd  
zdmuser> $ZDM_HOME/bin/zdmcli migrate database
          -sourcedb source_db_unique_name_value -sourcenode source_database_server_name -srcroot
          -targetnode target_database_server_name -backupuser Object_store_login_user_name
          -rsp response_file_location -tgtauth zdmauth -tgtarg1 user:target_database_server_login_user_name
          -tgtarg2 identity_file:ZDM_installed_user_private_key_file_location -tgtarg3 sudo_location:/usr/bin/sudo
          -tdekeystorepasswd -tgttdekeystorewallet /scratch/credentials/cdbtde.sso

Preparing for a Logical Migration

The following topics describe how to configure the Zero Downtime Migration prerequisites before running a logical migration job.

Logical Migration Prerequisites

Complete the following prerequisites to prepare for a logical migration.

  • Create an OCI API key pair. See Required Keys and OCIDs for details.

  • Create an Object Store bucket on Oracle Cloud Infrastructure if you are using Object Storage as a data transfer medium. This is not required for Exadata Cloud at Customer or on-premises Exadata Database Machine targets.

  • If the source database listener is configured with TLS (TCPS) using self-signed database server certificates, then ensure that the self-signed certificate is added to the Zero Downtime Migration home cert store as follows.

    keytool -import -keystore ZDM_HOME/jdk/jre/lib/security/cacerts -trustcacerts -alias "src ca cert" -file source_db_server-certificate
  • If you are not using a database link, ensure that the file system used for the Data Pump export directory has sufficient space to store the Data Pump dump files.

  • If you are using an existing database link from the target database to an on-premises source database, identified by the global_name of the source database, ensure that the database link is not broken. Zero Downtime Migration can reuse the pre-existing database link for migration if that data transfer medium is configured.

  • For online migration, do the following:

    • Set up an Oracle GoldenGate Microservices hub.

    • If the target database is configured to use SSL/TLS, then ensure that the wallet containing certificates for TLS authentication is located in the correct location on the GoldenGate instance, as follows:

      • For an Autonomous Database, the wallet file should be located in directory /u02/deployments/deployment_name/etc/adb

      • For a co-managed database, the wallet file should be located in directory /u02/deployments/deployment_name/etc

      Autonomous databases are always configured to use TLS.

    • If the source database is configured to use SSL/TLS, then ensure that the wallet containing certificates for TLS authentication is located in directory /u02/deployments/deployment_name/etc on the GoldenGate instance.

    • Complete the source database prerequisites. See Prepare the Source Database for Logical Migration.

    • On the target database:

      If the target is Autonomous Database, unlock the pre-created ggadmin user.

      If the target is not Autonomous Database, create a ggadmin user in the target PDB. See Prepare the Source Database for Logical Migration for information about creating this user.

  • Ensure that the OCI network security rules allow the following connections:

    Table 3-1 Prerequisite Connections for Online Logical Migration

    Connection    Source             Destination
    ------------  -----------------  ----------------------
    SQL*Net       GoldenGate hub     Source database
    SQL*Net       GoldenGate hub     Target database
    SQL*Net       ZDM server         Source database
    SSH           ZDM server         Source database server
    SQL*Net       ZDM server         Target database
    HTTPS         ZDM server (1)     GoldenGate hub

    (1) The Zero Downtime Migration server should be allowed to make HTTPS calls over port 443 to an OCI REST endpoint.

    See Zero Downtime Migration Port Requirements for more information.

Prepare the Source Database for Logical Migration

Complete the following prerequisites on the source database to prepare for a logical migration.

Offline and Online Migrations Require:

  • The character set on the source database must be the same as the target database.

  • Configure the streams pool with the initialization parameter STREAMS_POOL_SIZE.

    For offline logical migrations, for optimal Data Pump performance, it is recommended that you set STREAMS_POOL_SIZE to a minimum of 256MB-350MB so that an initial pool is allocated; otherwise, you might see a significant delay during start up.

    For online logical migrations, set STREAMS_POOL_SIZE to at least 2GB. See https://support.oracle.com/epmos/faces/DocumentDisplay?id=2078459.1, which recommends 1GB of STREAMS_POOL_SIZE per integrated Extract plus an additional 25 percent.
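
    For example, a sketch of setting the pool size for an online migration:

    SQL> ALTER SYSTEM SET STREAMS_POOL_SIZE=2G SCOPE=BOTH;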

  • System time of the Zero Downtime Migration service host and source database server should be in sync with your Oracle Cloud Infrastructure target.

    If the time on any of these systems varies beyond 6 minutes from the time on OCI, it should be adjusted. You can use NTP to synchronize the time, if NTP is configured. If NTP is not configured, then it is recommended that you configure it. If configuring NTP is not an option, then you need to correct the time manually to ensure it is in sync with OCI time.

  • If you are using a database link, and your target database is on Autonomous Database Shared Infrastructure, you must configure TCPS on the source. Autonomous Database Shared Infrastructure does not allow a database link to a source that is not configured with TCPS.

Online Migrations Require:

  • If the source is Oracle Database 11.2, apply mandatory 11.2.0.4 RDBMS patches on the source database.

    See My Oracle Support note Oracle GoldenGate -- Oracle RDBMS Server Recommended Patches (Doc ID 1557031.1)

    • Database PSU 11.2.0.4.200414 includes a fix for Oracle GoldenGate performance bug 28849751 - IE PERFORMANCE DEGRADES WHEN NETWORK LATENCY BETWEEN EXTRACT AND CAPTURE IS MORE THAN 8MS

    • OGG RDBMS patch 31704157 MERGE REQUEST ON TOP OF DATABASE PSU 11.2.0.4.200414 FOR BUGS 31182000 20448066 - This patch combines mandatory fixes for Oracle GoldenGate Microservices bug 20448066 DBMS_XSTREAM_GG APIS SHOULD BE ALLOWED FOR SCA PROCESSES and required OGG RDBMS patch 31182000 MERGE REQUEST ON TOP OF DATABASE PSU 11.2.0.4.200414 FOR BUGS 2990912 12668795.

      Although MOS note 1557031.1 mentions OGG patch 31177512, it conflicts with a patch for bug 20448066. As such, OGG patch 31704157 should be used instead of OGG patch 31177512.

  • If the source is Oracle Database 12.1.0.2 or a later release, apply mandatory RDBMS patches on the source database.

    See My Oracle Support note Latest GoldenGate/Database (OGG/RDBMS) Patch recommendations (Doc ID 2193391.1), which lists the additional RDBMS patches needed on top of the latest DBBP/RU for Oracle Database 12c and later.

  • Enable ARCHIVELOG mode for the database. See Changing the Database Archiving Mode.

  • Enable FORCE LOGGING to ensure that all changes are found in the redo by the Oracle GoldenGate Extract process. See Specifying FORCE LOGGING Mode.
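
    For example:

    SQL> ALTER DATABASE FORCE LOGGING;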

  • Enable database minimal supplemental logging. See Minimal Supplemental Logging.

    SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
  • Set the initialization parameter ENABLE_GOLDENGATE_REPLICATION to TRUE.
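
    For example:

    SQL> ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION=TRUE SCOPE=BOTH;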

  • Install the UTL_SPADV or UTL_RPADV package for Integrated Extract performance analysis.

    See Collecting XStream Statistics Using the UTL_RPADV Package. Note that the package changes name from UTL_SPADV to UTL_RPADV in Oracle Database 19c.

  • Create a GoldenGate administration user, ggadmin, granting all of the permissions listed in the example. If the source database is multitenant (CDB), create the user in the source PDB.

    SQL> create user ggadmin identified by password default tablespace users temporary tablespace temp;
    SQL> grant connect, resource to ggadmin;
    SQL> alter user ggadmin quota 100M ON USERS;
    SQL> grant unlimited tablespace to ggadmin;
    SQL> grant select any dictionary to ggadmin;
    SQL> grant create view to ggadmin;
    SQL> grant execute on dbms_lock to ggadmin;
    SQL> exec dbms_goldengate_auth.GRANT_ADMIN_PRIVILEGE('ggadmin');
  • If the source database is multitenant (CDB), also create user c##ggadmin in CDB$ROOT as shown here.

    SQL> create user c##ggadmin identified by password default tablespace users temporary tablespace temp;
    SQL> grant connect, resource to c##ggadmin;
    SQL> grant unlimited tablespace to c##ggadmin;
    SQL> alter user c##ggadmin quota 100M ON USERS;
    SQL> grant select any dictionary to c##ggadmin;
    SQL> grant create view to c##ggadmin;
    SQL> grant execute on dbms_lock to c##ggadmin;
    SQL> exec dbms_goldengate_auth.GRANT_ADMIN_PRIVILEGE('c##ggadmin',container=>'all');
  • During the migration period, to provide the most optimal environment for fast database replication, avoid large batch DML operations. Running large batch operations, like a single transaction that affects many millions of rows, can slow down replication rates. CREATE, ALTER, and DROP DDL operations are not replicated.

Offline Migrations Require:

  • The DATAPUMP_EXP_FULL_DATABASE and DATAPUMP_IMP_FULL_DATABASE roles are required. These roles are required for Data Pump to determine whether privileged application roles should be assigned to the processes comprising the migration job.

    DATAPUMP_EXP_FULL_DATABASE is required for the export operation at the source database for the specified source database user. The DATAPUMP_IMP_FULL_DATABASE role is required for the import operation at the target database for the specified target database user.

    See the Oracle Data Pump documentation for more information.
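
    For example, a sketch of granting these roles, assuming hypothetical source and target migration users:

    SQL> -- On the source database:
    SQL> GRANT DATAPUMP_EXP_FULL_DATABASE TO src_migration_user;
    SQL> -- On the target database:
    SQL> GRANT DATAPUMP_IMP_FULL_DATABASE TO tgt_migration_user;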

Preparing the Logical Migration Response File

Set the required logical migration response file parameters. Get the response file template, $ZDM_HOME/rhp/zdm/template/zdm_logical_template.rsp, which is used to create your Zero Downtime Migration response file for the database migration procedure, and edit the file as described here.

The logical migration response file settings are described in detail in Zero Downtime Migration Logical Migration Response File Parameters Reference.

The following parameters are required for an offline or online logical migration:

  • MIGRATION_METHOD: Set to ONLINE_LOGICAL for online migration with GoldenGate or OFFLINE_LOGICAL for an offline Data Pump transfer.

  • DATA_TRANSFER_MEDIUM: Set to OSS for Object Storage bucket, NFS for a shared Network File System, DBLINK for a direct transfer using a database link, or COPY to use secure copy.

    Unless you are using the default data transfer servers for handling the Data Pump dumps, you may also need to configure the data transfer node settings for the source and target database environments.

    See Configuring the Transfer Medium and Specifying Transfer Nodes for details.

  • For an offline logical migration of an Oracle Database 11g source to an 11g target, set DATAPUMPSETTINGS_SECUREFILELOB=FALSE or you may get errors.

  • Set the following target database parameters.
    • TARGETDATABASE_OCID specifies the Oracle Cloud resource identifier.

      For example: ocid1.instance.oc1.phx.abuw4ljrlsfiqw6vzzxb43vyypt4pkodawglp3wqxjqofakrwvou52gb6s5a

      See also https://docs.cloud.oracle.com/en-us/iaas/Content/General/Concepts/identifiers.htm

    • TARGETDATABASE_ADMINUSERNAME specifies the database administrator user name: for example, system for a co-managed database migration, or admin for an Autonomous Database migration.

  • Set the following source database parameters.
    • SOURCEDATABASE_ADMINUSERNAME specifies the database administrator user name: for example, system.

    • SOURCEDATABASE_CONNECTIONDETAILS_HOST specifies the listener host name or IP address. For Oracle RAC, you can specify the SCAN name. (Not required for Autonomous Database.)

    • SOURCEDATABASE_CONNECTIONDETAILS_PORT specifies the listener port number. (Not required for Autonomous Database.)

    • SOURCEDATABASE_CONNECTIONDETAILS_SERVICENAME specifies the fully qualified service name. (Not required for Autonomous Database.)

      For example: service_name.DB_domain

      See also https://docs.cloud.oracle.com/en-us/iaas/Content/Database/Tasks/connectingDB.htm

  • Set the following OCIAUTHENTICATIONDETAILS parameters.

    For more information about the required settings, see https://docs.cloud.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#RequiredKeysandOCIDs
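
Putting these together, the following is a minimal sketch of the required settings for an offline logical migration to Autonomous Database through an Object Storage bucket. All values shown are placeholders to replace with your own; the OCIAUTHENTICATIONDETAILS parameters are represented by a comment because their exact names are listed in the parameters reference.

MIGRATION_METHOD=OFFLINE_LOGICAL
DATA_TRANSFER_MEDIUM=OSS
TARGETDATABASE_OCID=ocid1.database.oc1..placeholder_value
TARGETDATABASE_ADMINUSERNAME=admin
SOURCEDATABASE_ADMINUSERNAME=system
SOURCEDATABASE_CONNECTIONDETAILS_HOST=srcdb-scan.example.com
SOURCEDATABASE_CONNECTIONDETAILS_PORT=1521
SOURCEDATABASE_CONNECTIONDETAILS_SERVICENAME=srcpdb.example.com
# Set the OCIAUTHENTICATIONDETAILS parameters for your API signing key here.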

Oracle GoldenGate Settings

For online logical migrations, in addition to the above, you must also set the GoldenGate parameters, TARGETDATABASE_GGADMINUSERNAME, SOURCEDATABASE_GGADMINUSERNAME, SOURCECONTAINERDATABASE_GGADMINUSERNAME, and the parameters prefixed with GOLDENGATEHUB and GOLDENGATESETTINGS.

See Zero Downtime Migration Logical Migration Response File Parameters Reference for details about these parameters.

Oracle Data Pump Settings

Zero Downtime Migration automatically sets optimal defaults for Data Pump parameters to achieve better performance and ensure data security. If you need to further tune performance, there are several Data Pump settings that you can configure in the response file.

The default DATAPUMPSETTINGS_JOBMODE=SCHEMA is recommended for migrations to Autonomous Database.

See Oracle Data Pump Settings for Zero Downtime Migration for information about the default Data Pump property settings, how to select schemas or objects for inclusion or exclusion, and Data Pump error handling.

See Zero Downtime Migration Logical Migration Response File Parameters Reference for all of the Data Pump parameters you can set through Zero Downtime Migration.

Configuring the Transfer Medium and Specifying Transfer Nodes

Zero Downtime Migration offers various transfer options to make Oracle Data Pump dumps available to the target database server.

Using the DATA_TRANSFER_MEDIUM response file parameter you can configure the following data transfer methods:

  • OSS: Oracle Cloud Object Storage.

    Supported for all migration types and targets.

  • NFS: Network File System

    Supported for offline migrations to co-managed target databases only.

  • DBLINK: Direct data transfer from the source to the target over a database link.

    Supported for online and offline migrations to Autonomous Database Shared (Data Warehouse or Transaction Processing) and co-managed targets only.

  • COPY: Transfer dumps to the target transfer node using secure copy.

    Supported for offline migrations to co-managed target databases only.

Note:

To take advantage of parallelism and achieve the best data transfer performance, Oracle recommends that you transfer data using OSS or NFS for databases over 50GB in size. The DBLINK transfer medium can be convenient for smaller databases, but its performance is less predictable because it depends on the available network bandwidth for the duration of the transfer.

Once the export of dumps on the source is completed, the dumps are uploaded or transferred in parallel, as defined by the parameter DUMPTRANSFERDETAILS_PARALLELCOUNT (default 3), and any transfer failures are retried by default, as specified by the parameter DUMPTRANSFERDETAILS_RETRYCOUNT (default 3).
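
For example, to increase both the parallelism and the number of retries beyond the defaults, you might set:

DUMPTRANSFERDETAILS_PARALLELCOUNT=6
DUMPTRANSFERDETAILS_RETRYCOUNT=5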

The transfer of dumps can be done from any node at the source data center, provided that the dumps are accessible from the given node. Assess the network connectivity and the transfer workload impact on the source database server before deciding which data transfer approach to take.

Direct Transfer from Source to Target

This option applies only to on-premises Exadata Database Machine and co-managed cloud target databases.

Zero Downtime Migration enables logical migration using direct transfer of the Data Pump dump from the source to the target securely. The data is copied over from the source database directory object path to the target database server directory object path, or to a target transfer node, using either secure copy or RSYNC. This avoids the data being transferred over a WAN or needing additional shared storage between the source and target environments. This capability greatly simplifies the logical migration within the data center.

About Transfer Nodes

You configure a node, referred to as a transfer node, for both the source data center and the target tenancy.

The response file parameters that are prefixed with DUMPTRANSFERDETAILS_SOURCE_TRANSFERNODE designate the node that handles the export dumps at the source data center. This source transfer node defaults to the source database server.

Similarly, the response file parameters that are prefixed with DUMPTRANSFERDETAILS_TARGET_TRANSFERNODE designate the node that handles the import of dumps at the target. This target transfer node defaults to the target database server for co-managed targets.

Transfer Node Requirements

The source transfer node can be any of the following:

  • Source database server (default)
  • NAS mounted server
  • Zero Downtime Migration service node

The target transfer node can be any of the following:

  • Target Database server (default)
  • NAS mounted server
  • Zero Downtime Migration service node

For a server to be designated as a transfer node, it must meet the following requirements.

  • Availability of CPU and memory to process the upload or transfer workload

  • Connectivity to the specified upload or transfer target

    • Port 443 connectivity to Object Storage Service if the chosen data transfer medium is OSS

    • Port 22 connectivity to target storage server if the chosen transfer medium is COPY

  • Availability of the Oracle Cloud Infrastructure CLI (OCI CLI). For faster and more resilient uploads of dumps, OCI CLI is the recommended transfer utility for the OSS transfer medium.

  • OCI CLI must be installed and configured as detailed in https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm.

    Installing and configuring OCI CLI on each source database server may not be feasible. In such cases, one of the nodes in the data center can be designated as a transfer node with OCI CLI configured, and this node can share a network storage path with the database servers for Data Pump dumps to be created. This also avoids the upload workload consuming additional CPU and memory on production database servers.

The designated transfer node can act as a gateway server at the data center for external data transfer, allowing the transfer traffic to flow through it and avoiding the need to allow data transfer directly from the source database server or to the target database server.

Optionally, you can avoid the additional transfer node requirement by using the Zero Downtime Migration server as the transfer node, provided that the Zero Downtime Migration service is placed in an on-premises data center and meets the transfer node requirements listed above.

Using the Oracle Cloud Object Storage Transfer Medium

Object Storage data transfer medium is supported for all migration types and targets.

When using Object Storage as the data transfer medium (DATA_TRANSFER_MEDIUM=OSS), it is recommended that dumps be uploaded using OCI CLI for faster, more secure, and more resilient uploads. You must configure OCI CLI on the upload node and set the parameter DUMPTRANSFERDETAILS_SOURCE_USEOCICLI to TRUE. The OCI CLI parameters are:

DUMPTRANSFERDETAILS_SOURCE_USEOCICLI

DUMPTRANSFERDETAILS_SOURCE_OCIHOME
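
For example, a sketch of the OSS upload settings; the OCIHOME value is a placeholder for the OCI CLI installation location on the source transfer node.

DATA_TRANSFER_MEDIUM=OSS
DUMPTRANSFERDETAILS_SOURCE_USEOCICLI=TRUE
DUMPTRANSFERDETAILS_SOURCE_OCIHOME=/home/opc/bin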

Using the Database Link Transfer Medium

Supported for online and offline migrations to Autonomous Database Shared (Data Warehouse or Transaction Processing) and co-managed targets only.

When you set DATA_TRANSFER_MEDIUM=DBLINK, a database link is created from the OCI co-managed database or Autonomous Database target to the source database using the global_name of the specified source database.

Zero Downtime Migration creates the database link if it does not already exist, and the link is cleaned up once the Data Pump import phase is complete.

Using the NFS Transfer Medium

Supported for offline migrations to co-managed target databases only.

The NFS mode of transfer, enabled by setting DATA_TRANSFER_MEDIUM=NFS, is available for co-managed target databases; because the dumps are written to shared storage, no separate transfer of the dumps is needed. Ensure that the specified path is accessible from both the source and target database servers.

Zero Downtime Migration ensures the security of dumps in the shared storage by preserving the restricted permission on the dumps such that only the source and target database users are allowed to access the dump.

Using the Copy Transfer Medium

Supported for offline migrations to co-managed target databases only.

Dumps can be transferred from the source to the target securely, by setting DATA_TRANSFER_MEDIUM=COPY. The relevant parameters are as follows:

DUMPTRANSFERDETAILS_TRANSFERTARGET_USER

DUMPTRANSFERDETAILS_TRANSFERTARGET_USERKEY

DUMPTRANSFERDETAILS_TRANSFERTARGET_HOST

DUMPTRANSFERDETAILS_TRANSFERTARGET_SUDOPATH

DUMPTRANSFERDETAILS_TRANSFERTARGET_DUMPDIRPATH

You can leverage the RSYNC utility instead of SCP. Set DUMPTRANSFERDETAILS_RSYNCAVAILABLE to TRUE, and verify that RSYNC is available on both the source and target transfer nodes.
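
For example, a sketch of the COPY settings; the user, key, host, and path values are placeholders to replace with your own.

DATA_TRANSFER_MEDIUM=COPY
DUMPTRANSFERDETAILS_TRANSFERTARGET_USER=opc
DUMPTRANSFERDETAILS_TRANSFERTARGET_USERKEY=/home/zdmuser/.ssh/transfer_key
DUMPTRANSFERDETAILS_TRANSFERTARGET_HOST=tgt-transfer-node.example.com
DUMPTRANSFERDETAILS_TRANSFERTARGET_SUDOPATH=/usr/bin/sudo
DUMPTRANSFERDETAILS_TRANSFERTARGET_DUMPDIRPATH=/u01/app/zdm/dumps
DUMPTRANSFERDETAILS_RSYNCAVAILABLE=TRUE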

Configuring Resiliency to Intermittent Network Failures

Zero Downtime Migration is resilient to intermittent network failures that can cause backups or SSH connectivity to fail.

Physical Migration Resiliency

Zero Downtime Migration can auto-detect intermittent network failures. It automatically retries RMAN retryable errors, and some retry customization is available.

SSH connection retries are customized using the following parameters:

SRC_SSH_RETRY_TIMEOUT

TGT_SSH_RETRY_TIMEOUT

You can customize RMAN backup retries with the following parameters:

ZDM_OPC_RETRY_WAIT_TIME

ZDM_OPC_RETRY_COUNT

Logical Migration Resiliency

Intermittent network failures observed during the transfer of Data Pump dumps can be mitigated by setting the DUMPTRANSFERDETAILS_RETRYCOUNT parameter. The default value is 3.

Database Server Connectivity Using a Bastion Host

Zero Downtime Migration lets you configure connectivity to the source and target database servers through a bastion host for both physical and logical migration work flows.

Note that a bastion host cannot be used to connect to an Autonomous Database, except for JDBC connections.

Use the following sections to configure the appropriate parameters for physical and logical migrations that must connect to the source or target database server through a bastion host.

SSH Connection to Database Servers

Connecting database servers through a bastion host requires the following information:

  • Bastion Host IP Address and Source Database Server IP Address: To connect to the database server through a bastion host, the bastion host IP address and the source node IP address are required.

  • Bastion Port Number: The port number defaults to 22 if not specified.

  • Bastion User: The bastion host user is required only if the bastion host user differs from the user specified for the zdmauth plug-in argument. The bastion user defaults to the user specified for the source zdmauth plug-in if the property is not specified.

  • Bastion Identity File: If the SRC/TGT_BASTION_IDENTITY_FILE parameter is not specified, the value defaults to the value specified for the identity_file argument of the zdmauth plug-in.

Physical Migration Response File Parameters

Configure the following response file parameters for a physical migration.

Source Database Server

SRC_BASTION_HOST_IP=

SRC_BASTION_PORT=

SRC_BASTION_USER=

SRC_BASTION_IDENTITY_FILE=

SRC_HOST_IP=

Target Database Server

TGT_BASTION_HOST_IP=

TGT_BASTION_PORT=

TGT_BASTION_USER=

TGT_BASTION_IDENTITY_FILE=

TGT_HOST_IP=
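
For example, a filled-in sketch for a source database server reached through a bastion host; as elsewhere in this guide, the 192.0.2.x address is a placeholder for your actual public IP address, and 10.0.0.5 stands in for the private IP address of the source host.

SRC_BASTION_HOST_IP=192.0.2.10
SRC_BASTION_PORT=22
SRC_BASTION_USER=opc
SRC_BASTION_IDENTITY_FILE=/home/zdmuser/.ssh/bastion_key
SRC_HOST_IP=10.0.0.5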

Logical Migration Response File Parameters

Configure the following response file parameters for a logical migration.

Source Database Server

SOURCECONTAINERDATABASE_CONNECTIONDETAILS_BASTIONDETAILS_IP=

SOURCECONTAINERDATABASE_CONNECTIONDETAILS_BASTIONDETAILS_PORT=22

SOURCECONTAINERDATABASE_CONNECTIONDETAILS_BASTIONDETAILS_IDENTITYFILE=

SOURCECONTAINERDATABASE_CONNECTIONDETAILS_BASTIONDETAILS_USERNAME=

SOURCECONTAINERDATABASE_CONNECTIONDETAILS_BASTIONDETAILS_REMOTEHOSTIP=

Target Database Server (including Autonomous Database)

TARGETDATABASE_CONNECTIONDETAILS_BASTIONDETAILS_IP=

TARGETDATABASE_CONNECTIONDETAILS_BASTIONDETAILS_PORT=22

TARGETDATABASE_CONNECTIONDETAILS_BASTIONDETAILS_IDENTITYFILE=

TARGETDATABASE_CONNECTIONDETAILS_BASTIONDETAILS_USERNAME=

TARGETDATABASE_CONNECTIONDETAILS_BASTIONDETAILS_REMOTEHOSTIP=

Preparing for Automatic Application Switchover

To minimize or eliminate service interruptions on the application after you complete the database migration and switchover, prepare your application to automatically switch over connections from the source database to the target database.

Note:

In physical migrations, Autonomous Database targets are not supported for automatic application switchover.

In the following example connect string, the application connects to the source database, and when it is not available the connection is switched over to the target database.

(DESCRIPTION=
    (FAILOVER=on)(LOAD_BALANCE=on)(CONNECT_TIMEOUT=3)(RETRY_COUNT=3)
    (ADDRESS_LIST=
        (ADDRESS=(PROTOCOL=TCP)(HOST=source_database_scan)(PORT=1521))
        (ADDRESS=(PROTOCOL=TCP)(HOST=target_database_scan)(PORT=1521)))
    (CONNECT_DATA=(SERVICE_NAME=zdm_prod_svc)))

On the source database, create the service, named zdm_prod_svc in the examples.

srvctl add service -db clever -service zdm_prod_svc -role PRIMARY
 -notification TRUE -session_state dynamic -failovertype transaction
 -failovermethod basic -commit_outcome TRUE -failoverretry 30 -failoverdelay 10
 -replay_init_time 900 -clbgoal SHORT -rlbgoal SERVICE_TIME -preferred clever1,clever2
 -retention 3600 -verbose

If the db_domain changes between the source and target, then the connect string specified in the application must include the service name for both domains for failover to be effective, as shown here.

(DESCRIPTION_LIST=
    (FAILOVER=ON)
    (LOAD_BALANCE=ON)
    (DESCRIPTION=
        (ADDRESS=(PROTOCOL=TCP)(HOST=SRC_SCAN)(PORT=1521))
        (CONNECT_DATA=(SERVICE_NAME=SVC.SRC_DOMAIN)))
    (DESCRIPTION=
        (ADDRESS=(PROTOCOL=TCP)(HOST=TGT_SCAN)(PORT=1521))
        (CONNECT_DATA=(SERVICE_NAME=SVC.TGT_DOMAIN))))
See Also:

Oracle MAA white papers about client failover best practices on the Oracle Active Data Guard Best Practices page at https://www.oracle.com/goto/maa

High Availability in Oracle Database Development Guide

Customizing a Migration Job

You can customize the Zero Downtime Migration work flow with scripts that can be run at the beginning or end of a specified migration job phase. In Zero Downtime Migration, these customizations are called custom plug-ins with user actions.

The following topics describe how to customize a migration job.

About Custom Plug-ins with User Actions

In Zero Downtime Migration, a custom script, or bundle of scripts, that you want to plug in and run as part of a migration job is called a custom plug-in with user action.

A custom plug-in with user action, which is also referred to as a user action, is associated with an operational phase in the migration job and can be run before the phase or after it.

A pre-action is a user action performed at the beginning of the associated phase. Likewise, a post-action is performed at the end of the associated phase.

Once a user action is associated with a migration phase, Zero Downtime Migration copies it to the respective node (for non-Autonomous Databases) and runs the user action locally. Zero Downtime Migration supplies the user action script with a set of parameters, including the database name, database home, database SCAN, ZDM log locations, and many more.

For Autonomous Database, Zero Downtime Migration allows SQL to be registered as a user action and executes the SQL in an Autonomous Database instance from the Zero Downtime Migration server through a JDBC connection.
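
For example, a minimal SQL user action for an Autonomous Database target might refresh optimizer statistics for a migrated schema after import; APPUSER is a hypothetical schema name used only for illustration.

-- post_import_stats.sql: sample post-action for an Autonomous Database target.
-- Replace APPUSER with your application schema name.
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'APPUSER');
END;
/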

Parameters Supplied for Custom Plug-ins with User Actions

At run time, Zero Downtime Migration invokes the user action script and supplies it with a set of auto-populated environment variables, listed below, whose values you can use to program the custom plug-in logic.

For example, the value of ZDM_SRCDBHOME specifies the database home associated with the current migration job source database, and similarly SRC_SCAN_NAME indicates the scan listener information for the source database that you can use to connect to the database.

The following is a listing of the user action parameters with expected values. See the footnotes for more details.

RHP_OPTYPE=MIGRATE_DATABASE
RHP_PHASE=PRE
ZDM_SRCDB=src_db_name
ZDM_SRCDBHOME=src_db_home
ZDM_TARGETDB=tgt_db_name
ZDM_TARGETDBHOME=tgt_db_home
RHP_PROGRESSLISTENERHOST=zdm_node_name
RHP_PROGRESSLISTENERPORT=zdm_progress_listener_port
LOG_PATH=src/tgt_zdm_log_path [1]
SRC_SCAN_NAME=src_scan_name
SRC_SCAN_PORT=src_scan_port
TGT_SCAN_NAME=tgt_scan_name
TGT_SCAN_PORT=tgt_scan_port
RHP_USERACTIONDATA=user_action_data [2]
RHP_OP_PHASE=current_job_phase [3]

[1] LOG_PATH specifies the source or target node Zero Downtime Migration log path, where the custom plug-in can store any log files for future reference.

[2] RHP_USERACTIONDATA specifies the user action argument that was specified in the zdmcli migrate database command option -useractiondata user_action_data.

[3] RHP_OP_PHASE specifies the current phase of the migration job, for example, ZDM_VALIDATE_SRC.

User Action Scripts

You can use the following example user action scripts to help you create your own scripts.

For an on-premises source and Cloud target with node access, Zero Downtime Migration requires that the user action be a script. The script file is copied over to the respective node and is executed locally.

For Autonomous Database targets, Zero Downtime Migration requires that the user action script be a SQL file (.sql), which is executed in the Autonomous Database target using a JDBC connection.

Example 3-1 action.sh

Shown here is a sample user action script that discovers the supplied parameters.

    #!/bin/sh

    # Parse the name=value parameters supplied by Zero Downtime Migration
    for var in "$@"
    do
        if [[ ${var} == *"ZDM_SRCDB="* ]]; then
            IFS='=' read -ra DBARR <<< "${var}"
            SRCDBNAME=${DBARR[1]}
        fi
        if [[ ${var} == *"ZDM_TARGETDB="* ]]; then
            IFS='=' read -ra DBARR <<< "${var}"
            TARGETDBNAME=${DBARR[1]}
        fi
        if [[ $var == *"ZDM_SRCDBHOME="* ]]; then
            IFS='=' read -ra PATHARR <<< "$var"
            SRCDBHOME=${PATHARR[1]}
        fi
        if [[ $var == *"ZDM_TARGETDBHOME="* ]]; then
            IFS='=' read -ra PATHARR <<< "$var"
            TARGETDBHOME=${PATHARR[1]}
        fi
        if [[ $var == *"RHP_OP_PHASE="* ]]; then
            IFS='=' read -ra PHASEARR <<< "$var"
            ZDMPHASE=${PHASEARR[1]}
        fi
        if [[ $var == *"eval"* ]]; then
            IFS='=' read -ra PATHARR <<< "$var"
            EVALRUN=${PATHARR[1]}
        fi
        if [[ $var == *"LOG_PATH"* ]]; then
            IFS='=' read -ra PATHARR <<< "$var"
            LOGPATH=${PATHARR[1]}
        fi
    done

    # Log the start of the action and the full argument list to the ZDM log path
    # (note: the variable parsed above is LOGPATH)
    echo "`date` Starting CUSTOM_USERACTION" >> ${LOGPATH}/CUSTOM_USERACTION.log
    echo $@ >> ${LOGPATH}/CUSTOM_USERACTION.log

Example 3-2 Running SQL*Plus in a user action

#!/bin/sh

# Parse the name=value parameters supplied by Zero Downtime Migration
for var in "$@"
do
    if [[ ${var} == *"ZDM_SRCDB="* ]]; then
        IFS='=' read -ra DBARR <<< "${var}"
        SRCDBNAME=${DBARR[1]}
    fi
    if [[ ${var} == *"ZDM_TARGETDB="* ]]; then
        IFS='=' read -ra DBARR <<< "${var}"
        TARGETDBNAME=${DBARR[1]}
    fi
    if [[ $var == *"ZDM_SRCDBHOME="* ]]; then
        IFS='=' read -ra PATHARR <<< "$var"
        SRCDBHOME=${PATHARR[1]}
    fi
    if [[ $var == *"ZDM_TARGETDBHOME="* ]]; then
        IFS='=' read -ra PATHARR <<< "$var"
        TARGETDBHOME=${PATHARR[1]}
    fi
    if [[ $var == *"RHP_OP_PHASE="* ]]; then
        IFS='=' read -ra PHASEARR <<< "$var"
        ZDMPHASE=${PHASEARR[1]}
    fi
    if [[ $var == *"eval"* ]]; then
        IFS='=' read -ra PATHARR <<< "$var"
        EVALRUN=${PATHARR[1]}
    fi
    if [[ $var == *"LOG_PATH"* ]]; then
        IFS='=' read -ra PATHARR <<< "$var"
        LOGPATH=${PATHARR[1]}
    fi
done
 
check_dv_status()
{
    export ORACLE_HOME=${2}
    export PATH=$ORACLE_HOME/bin:$PATH
    export LOGFILE=$3
    # Use the short host name to match the node names reported by srvctl
    currenthostname=`hostname -s`
    echo "Current DB Host=$currenthostname" >> $LOGFILE
    CURRENTNODE=`srvctl status database -db ${1} | grep $currenthostname | cut -d" " -f2`
    SQLRETURN=$?
    echo "Current DB Instance=${CURRENTNODE}" >> $LOGFILE
    if [ "$SQLRETURN" -ne "0" ]; then
        return 1
    fi
    export ORACLE_SID=${CURRENTNODE}
    echo "`date` Checking Database Vault Status" >> $LOGFILE
    # The here-document terminator must start at the beginning of a line
    $ORACLE_HOME/bin/sqlplus -s / as sysdba 2>> $LOGFILE << EOF > /dev/null
    whenever sqlerror exit 1
    SET PAGESIZE 60
    SET LINESIZE 1300
    SET VERIFY OFF TRIMSPOOL ON  HEADING OFF  TERMOUT OFF  FEEDBACK OFF
    spool ${LOGFILE} append;

    SELECT 'Check Status : '||PARAMETER FROM V\$OPTION WHERE PARAMETER = 'Oracle Database Vault' AND VALUE='TRUE';
    SELECT 'Check Grant : '||GRANTED_ROLE from dba_role_privs where GRANTED_ROLE in ('DV_PATCH_ADMIN') and grantee='SYS';
    select distinct ('Check TDE enabled ' || ENCRYPTED) from dba_tablespaces where ENCRYPTED='YES';
    spool off
EOF
    SQLRETURN=$?
    if [ "$SQLRETURN" -ne "0" ]; then
        return 1
    fi

    if grep -q "Check Status : Oracle Database Vault" $LOGFILE; then
        echo "`date`:$ORACLE_SID:Database Vault is enabled" >> $LOGFILE
        if grep -q "Check Grant : DV_PATCH_ADMIN" $LOGFILE; then
            return 3 # SYS privileges already granted
        else
            return 4 # SYS privileges are not granted
        fi
    else
        echo "`date`:$ORACLE_SID: DV is not enabled" >> $LOGFILE
        return 2
    fi
}

if [[ $ZDMPHASE == "ZDM_VALIDATE_SRC" || $ZDMPHASE == "ZDM_VALIDATE_TGT" ]]
then
    # Note: the log path variable parsed above is LOGPATH
    export LOGFILE=${LOGPATH}/${SRCDBNAME}_${ZDMPHASE}_datavault.log
    echo "`date` Start User Action for Phase: $ZDMPHASE " > $LOGFILE
    echo $@ >> $LOGFILE
    check_dv_status $SRCDBNAME $SRCDBHOME $LOGFILE
    CHECKDV_STATUS_RETURN_CODE=$?
    if [ "$CHECKDV_STATUS_RETURN_CODE" -eq "1" ]; then
        echo "`date`:${SRCDBNAME}:Unable to check Database Vault status" >> $LOGFILE
        exit 1
    fi

    if [ "$CHECKDV_STATUS_RETURN_CODE" -eq "2" ]; then
        echo "`date`:${SRCDBNAME}:Database Vault is not enabled" >> $LOGFILE
        exit 0
    fi

    if [ "$CHECKDV_STATUS_RETURN_CODE" -eq "3" ]; then
        echo "`date`:${SRCDBNAME}:Database Vault SYS privileges granted" >> $LOGFILE
        exit 0
    fi

    # Return code 4 (Database Vault enabled, SYS privileges not granted)
    # falls through here; add handling as required for your environment.
fi

Example 3-3 Action file archive extractor action_extract.sh

If you bundle more than one user action into an action file, as described in Registering User Actions, you must also supply a script that unzips the action file, as shown in this example.

#!/bin/sh
MKDIR=/bin/mkdir
DATE=/bin/date
UNZIP=/usr/bin/unzip
# Get the current script location, then unzip the action file
# from the same directory into a unique /tmp directory
script_path=$0
script_dir=${script_path%/*}
timestamp=$($DATE +%s)
unzip_dir=/tmp/rhp_${timestamp}
$MKDIR $unzip_dir
$UNZIP ${script_dir}/pack.zip -d $unzip_dir

echo "/bin/sh ${unzip_dir}/pack/main.sh $@"
/bin/sh ${unzip_dir}/pack/main.sh "$@"

Once the archive is extracted, action_extract.sh runs main.sh, which in turn runs any user action scripts.

#!/bin/sh
script_path=$0
script_dir=${script_path%/*}
# Run each bundled user action script, forwarding all arguments
echo "/bin/sh ${script_dir}/action_phase_pre.sh $@"
/bin/sh ${script_dir}/action_phase_pre.sh "$@"

The bundled script action_phase_pre.sh then records the supplied arguments:

#!/bin/sh
touch /tmp/SAMPLE_PRE_OUT.txt;
echo $* >> /tmp/SAMPLE_PRE_OUT.txt;

Registering User Actions

User actions must be registered with the Zero Downtime Migration service host to be plugged in as customizations for a particular operational phase.

The ZDMCLI add useraction command registers a custom action script that needs to be run on the source or target server. The user action script is copied to the Zero Downtime Migration metadata. You can use the modify useraction command if the script needs to be modified. An action script can be part of any number of custom actions in a given migration job.

Determine the migration job phase the action must be associated with, and run the ZDMCLI command add useraction, specifying -optype MIGRATE_DATABASE, the respective phase of the operation, whether the plug-in runs -pre or -post relative to that phase, and any on-error requirements.

You can register custom plug-ins for operational phases after ZDM_SETUP_TGT in the migration job work flow.

If the user action encounters an error at runtime, a behavior can be specified with the -onerror option, which you can set to either ABORT, to end the process, or CONTINUE, to continue the migration job even if the custom plug-in exits with an error.

Use the Zero Downtime Migration software installed user (for example, zdmuser) to add user actions to a database migration job.

The following example shows you how to add user actions zdmvaltgt and zdmvalsrc.

zdmuser> $ZDM_HOME/bin/zdmcli add useraction -useraction zdmvaltgt -optype MIGRATE_DATABASE 
-phase ZDM_VALIDATE_TGT -pre -onerror ABORT -actionscript /home/zdmuser/useract.sh

zdmuser> $ZDM_HOME/bin/zdmcli add useraction -useraction zdmvalsrc -optype MIGRATE_DATABASE 
-phase ZDM_VALIDATE_SRC -pre -onerror CONTINUE -actionscript /home/zdmuser/useract1.sh

In the above commands, the scripts useract.sh and useract1.sh, specified in the -actionscript option, are copied to the Zero Downtime Migration service host repository metadata, and they are run if they are associated with any migration job run using an action template.

Running Multiple Scripts in One Action

If you need to run multiple scripts as a single pre or post action, all of the action scripts can be bundled as an action file archive and supplied along with an action script, as shown in the following example. The action script should unzip the action file and invoke the scripts within.

zdmuser> $ZDM_HOME/bin/zdmcli add useraction -useraction reconfigure_services   
 -optype MIGRATE_DATABASE -phase ZDM_RECOVER_TGT -post -onerror ABORT
 -actionscript /home/zdmuser/action_extract.sh
 -actionfile /home/zdmuser/pack.zip 

For example scripts and script requirements, see User Action Scripts.

Creating an Action Template

After the useraction plug-ins are registered, you create an action template that combines a set of action plug-ins which can be associated with a migration job.

An action template is created using the ZDMCLI command add imagetype, where the image type, imagetype, is a bundle of all of the useractions required for a specific type of database migration. Create an image type that associates all of the useraction plug-ins needed for the migration of the database. Once created, the image type can be reused for all migration operations for which the same set of plug-ins are needed.

The base type for the image type created here must be CUSTOM_PLUGIN, as shown in the example below.

For example, you can create an image type ACTION_ZDM that bundles both of the useractions created in the previous example, zdmvalsrc and zdmvaltgt.

zdmuser> $ZDM_HOME/bin/zdmcli add imagetype -imagetype ACTION_ZDM -basetype 
CUSTOM_PLUGIN -useractions zdmvalsrc,zdmvaltgt

Updating Action Plug-ins

You can update action plug-ins registered with the Zero Downtime Migration service host.

The following example shows you how to modify the useraction zdmvalsrc to be a -post action, instead of a -pre action.

zdmuser> $ZDM_HOME/bin/zdmcli modify useraction -useraction zdmvalsrc -phase ZDM_VALIDATE_SRC
 -optype MIGRATE_DATABASE -post

This change is propagated to all of the associated action templates, so you do not need to update the action templates.

Querying Action Plug-ins

You can query action plug-ins registered with the Zero Downtime Migration service host.

To display the configuration of a user action:

zdmuser> $ZDM_HOME/bin/zdmcli query useraction -h

See query useraction for usage information.
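
For example, to display a user action registered earlier by name (assuming the -useraction option, as documented in the command reference):

zdmuser> $ZDM_HOME/bin/zdmcli query useraction -useraction zdmvalsrc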

Associating an Action Template with a Migration Job

When you run a migration job you can specify the image type that specifies the plug-ins to be run as part of your migration job.

For example, run the migration command with -imagetype ACTION_ZDM to specify the action template created in the previous examples. Including the image type results in the useract.sh and useract1.sh scripts being run as part of the migration job work flow.
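
A sketch of such a command follows; the values are placeholders, and the remaining required source, target, and authentication options are elided.

zdmuser> $ZDM_HOME/bin/zdmcli migrate database -sourcedb src_db_unique_name
 -rsp /home/zdmuser/zdm_logical.rsp -imagetype ACTION_ZDM
 [remaining required options elided]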

By default, the action plug-ins are run for the specified operational phase on all nodes of the cluster. If the access credential specified in the migration command option -tgtarg2 is unique for a specified target node, then an additional auth argument should be included to specify the auth credentials required to access the other cluster nodes. For example, specify -tgtarg2 nataddrfile:auth_file_with_node_and_identity_file_mapping.

A typical nataddrfile for a 2 node cluster with node1 and node2 is shown here.

node1:node1:identity_file_path_available_on_zdmservice_node 
node2:node2:identity_file_path_available_on_zdmservice_node