3 Preparing for Database Migration

Before starting a Zero Downtime Migration database migration, configure connectivity, prepare the databases, and configure any required migration job customization.

See the Zero Downtime Migration Release Notes for the latest information about known issues, My Oracle Support notes, and runbooks.

Configuring Connectivity Prerequisites

Connectivity must be set up between the Zero Downtime Migration service host and the source and target database servers.

The following topics describe how to configure the Zero Downtime Migration connectivity prerequisites before running a migration job.

Configuring Connectivity From the Zero Downtime Migration Service Host to the Source and Target Database Servers

Complete the following procedure to ensure the required connectivity between the Zero Downtime Migration service host and the source and target database servers.

  1. On the Zero Downtime Migration service host, verify that the authentication key pairs are available without a passphrase for the Zero Downtime Migration software installed user.
    If a new key pair must be generated without a passphrase, then, as the Zero Downtime Migration software installed user, generate a new key pair as described in Generating a Private SSH Key Without a Passphrase.
  2. Rename the private key file.
    Rename the ZDM_installed_user_home/.ssh/id_rsa file to ZDM_installed_user_home/.ssh/ZDM_service_node_name.ppk.
  3. Add the contents of the ZDM_installed_user_home/.ssh/id_rsa.pub file to the opc_user_home/.ssh/authorized_keys file, with the following dependencies:
    • If the source database is on Oracle Cloud Infrastructure Classic, then add the contents of the ZDM_installed_user_home/.ssh/id_rsa.pub file into the opc_user_home/.ssh/authorized_keys file on all of the source database servers.

      Note that the opc user is a standard Oracle Cloud user that is used to access the database servers.

    • If the source database servers are accessed with root credentials, then no action is required.
    • If the target database is on Oracle Cloud Infrastructure, Exadata Cloud at Customer, or Exadata Cloud Service, then add the contents of the ZDM_installed_user_home/.ssh/id_rsa.pub file into the opc_user_home/.ssh/authorized_keys file on all of the target database servers.
  4. Make sure that the source and target database server names specified in the ZDMCLI migrate database command are resolvable from the Zero Downtime Migration service host, either through resolving name servers or through another method approved by your IT infrastructure.
    One method of resolving source and target database server names is to add the source and target database server names and IP address details to the Zero Downtime Migration service host /etc/hosts file.

    For example,

    #OCI public IP two node RAC server details
    192.0.2.1 zdmhost1
    192.0.2.2 zdmhost2
    #OCIC public IP two node RAC server details
    192.0.2.6 ocicdb1
    192.0.2.7 ocicdb2
  5. Make sure that port 22 on the source and target database servers accepts incoming connections from the Zero Downtime Migration service host.
  6. Test the connectivity from the Zero Downtime Migration service host to all source and target database servers.
    zdmuser> ssh -i ZDM_service_node_private_key_file_location user@source/target_database_server_name

    For example,

    zdmuser> ssh -i /home/zdmuser/.ssh/zdm_service_node.ppk opc@zdmhost1
    zdmuser> ssh -i /home/zdmuser/.ssh/zdm_service_node.ppk opc@ocicdb1
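The test in step 6 can be repeated for every server in a single loop; the host names and key path below are only examples.

```
zdmuser> for h in zdmhost1 zdmhost2 ocicdb1 ocicdb2; do
             ssh -i /home/zdmuser/.ssh/zdm_service_node.ppk opc@$h hostname
         done
```

Each connection should return the server's host name without prompting for a password or passphrase.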

Configuring Connectivity Between the Source and Target Database Servers

You can configure connectivity between the source and target database servers using one of two options.

Option 1

The source database server specified in the ZDMCLI command -sourcenode parameter can connect to the target database instance over the target SCAN through the respective SCAN port, and vice versa. The SCAN of the target should be resolvable from the source database server, and the SCAN of the source should be resolvable from the target server. With connectivity from both sides, you can synchronize between the source database and target database from either side. If the source database server SCAN cannot be resolved from the target database server, then the SKIP_FALLBACK parameter in the response file must be set to TRUE, and you cannot synchronize between the target database and source database.
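Assuming Option 1 is feasible, name resolution and reachability can be spot-checked from each side before a migration job is run; the SCAN names below are illustrative.

```
[oracle@sourcedb ~] nslookup target-scan.example.com
[oracle@sourcedb ~] tnsping target-scan.example.com:1521

[oracle@targetdb ~] nslookup source-scan.example.com
[oracle@targetdb ~] tnsping source-scan.example.com:1521
```

If the checks from the target back to the source fail, set SKIP_FALLBACK=TRUE in the response file as described above.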

Option 2

If connectivity through SCAN and the SCAN port is not possible between the source and target database servers, set up an SSH tunnel from the source database server to the target database server using the procedure below. Using this option, you will not be able to synchronize between the target database and source database.

Note that this procedure sets up what is, in effect, a temporary channel. You can choose to set up access without using an SSH tunnel.

Note:

The following steps refer to Oracle Cloud Infrastructure, but are also applicable to Exadata Cloud at Customer and Exadata Cloud Service.
  1. Set up an SSH tunnel on the source database servers for the root user.
    1. Generate a private SSH key file without a passphrase for the opc user on the target Oracle Cloud Infrastructure server, using the information in Generating a Private SSH Key Without a Passphrase. If the target is an Oracle RAC database, then generate a private SSH key file without a passphrase from the first Oracle RAC server.
    2. Add the contents of the Oracle Cloud Infrastructure server opc_user_home/.ssh/id_rsa.pub file into the Oracle Cloud Infrastructure server opc_user_home/.ssh/authorized_keys file.
    3. Copy the target Oracle Cloud Infrastructure server private SSH key file onto the source server in the /root/.ssh/ directory. If the source is an Oracle RAC database, copy the file into all of the source servers.
      For better manageability, keep the private SSH key file name the same as the target server name, and keep the .ppk extension. For example, rptest.ppk (where rptest is the target server name).

      The file permissions should be similar to the following.

      /root/.ssh>ls -l rptest.ppk
      -rw------- 1 root root 1679 Oct 16 10:05 rptest.ppk
    4. Put the following entries in the source server /root/.ssh/config file.
      Host *
        ServerAliveInterval 10  
        ServerAliveCountMax 2
      
      Host OCI_server_name   
        HostName OCI_server_IP_address
        IdentityFile Private_key_file_location 
        User OCI_user_login  
        ProxyCommand /usr/bin/nc -X connect -x proxy_name:proxy_port %h %p

      Where

      • OCI_server_name is the Oracle Cloud Infrastructure target database server name without the domain name. For an Oracle RAC database use the first Oracle RAC server name without the domain name.
      • OCI_server_IP_address is the Oracle Cloud Infrastructure target database server IP address. For an Oracle RAC database use the first Oracle RAC server IP address.
      • Private_key_file_location is the location of the private key file.
      • OCI_user_login is the OS user used to access the target database servers.
      • proxy_name is the host name of the proxy server.
      • proxy_port is the port of the proxy server.

      Note that the proxy setup might not be required when you are not using a proxy server for connectivity. For example, when the source database server is on Oracle Cloud Infrastructure Classic, you can remove or comment the line starting with ProxyCommand.

      For example, after specifying the relevant values, the /root/.ssh/config file should be similar to the following.

      Host *
        ServerAliveInterval 10  
        ServerAliveCountMax 2
      
      Host rptest
        HostName 192.0.2.9
        IdentityFile /root/.ssh/rptest.ppk
        User opc
        ProxyCommand /usr/bin/nc -X connect -x www-proxy.example.com:80 %h %p
      

      The file permissions should be similar to the following.

      /root/.ssh>ls -l config
      -rw------- 1 root root 1679 Oct 16 10:05 config

      In the above example, the Oracle Cloud Infrastructure server name is rptest, and the Oracle Cloud Infrastructure server public IP address is 192.0.2.9.

      If the source is an Oracle Cloud Infrastructure Classic server, the proxy_name is not required, so you can remove or comment the line starting with ProxyCommand.

      If the source is an Oracle RAC database, then copy the same /root/.ssh/config file onto all of the source Oracle RAC database servers. This file contains the name, public IP address, and private key file location of the first Oracle Cloud Infrastructure Oracle RAC server.

    5. Make sure that you can SSH to the first target Oracle Cloud Infrastructure server from the source server before you enable the SSH tunnel.
      For an Oracle RAC database, test the connection from all of the source servers to the first target Oracle Cloud Infrastructure server.

      Using the private key:

      [root@ocic121 ~] ssh -i /root/.ssh/rptest.ppk opc@rptest
      Last login: Fri Dec  7 14:53:09 2018 from 192.0.2.11
      
      [opc@rptest ~]$
    6. Run the following command on the source server to enable the SSH tunnel.
      ssh -f OCI_hostname_without_domain_name -L ssh_tunnel_port_number:OCI_server_IP_address:OCI_server_listener_port -N

      Where

      • OCI_hostname_without_domain_name is the Oracle Cloud Infrastructure target database server name without a domain name. For an Oracle RAC database use the first Oracle RAC server name without domain name.
      • ssh_tunnel_port_number is any available ephemeral port in the range 1024-65535. Make sure that the SSH tunnel port is not already in use by another process on the server before using it.
      • OCI_server_listener_port is the target database listener port number. The listener port must be open between the source database servers and Oracle Cloud Infrastructure target servers.
      • OCI_server_IP_address is configured based on database architecture. For a single instance database, specify the Oracle Cloud Infrastructure server IP address. For an Oracle RAC database, specify the Oracle Cloud Infrastructure scan name with the domain name. If the scan name with domain name is not resolvable or not working, then specify the IP address obtained using the lsnrctl status command output. For example,
        Listening Endpoints Summary...
          (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
          (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.0.2.9)(PORT=1521)))
          (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.0.2.10)(PORT=1521)))

      The following is an example of the command run to enable the SSH tunnel.

      [root@ocic121~]ssh -f rptest -L 9000:192.0.2.9:1521 -N

      For an Oracle RAC database, this step must be repeated on all of the source servers.

    7. Test the SSH tunnel.
      Log in to the source server, switch to the oracle user, source the database environment, and run the following command.
      tnsping localhost:ssh_tunnel_port

      For example,

      [oracle@ocic121 ~] tnsping localhost:9000

      The command output is similar to the following.

      TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 22-JAN-2019 05:41:57
      Copyright (c) 1997, 2014, Oracle.  All rights reserved.
      Used parameter files:
      Used HOSTNAME adapter to resolve the alias
      Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=127.0.0.1)(PORT=9000)))
      OK (50 msec)

      If tnsping does not work, then the SSH tunnel is not enabled.

      For Oracle RAC, this step must be repeated on all of the source servers.

  2. Test connectivity from the source to target environments.
    Add the TNS entry of the target database to the source database server $ORACLE_HOME/network/admin/tnsnames.ora file.
    [oracle@sourcedb ~] tnsping target-tns-string
  3. Test connectivity from the target to the source environment.
    Add the TNS entry of the source database to the target database server $ORACLE_HOME/network/admin/tnsnames.ora file
    [oracle@targetdb ~] tnsping source-tns-string
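Step 6 of the tunnel setup requires that the tunnel port is not already in use. A local check along the following lines can confirm that before running ssh -L; this is a sketch, with port 9000 reused from the example above.

```shell
#!/bin/sh
# Check whether a candidate SSH tunnel port is already in use locally.
# Port 9000 is only an example; pick any free port in 1024-65535.
PORT=9000
if command -v ss >/dev/null 2>&1; then
  LISTENERS=$(ss -ltn 2>/dev/null | grep -c ":$PORT ")
else
  LISTENERS=$(netstat -ltn 2>/dev/null | grep -c ":$PORT ")
fi
if [ "$LISTENERS" -eq 0 ]; then
  echo "port $PORT is free for the SSH tunnel"
else
  echo "port $PORT is already in use; choose another port"
fi
```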

Generating a Private SSH Key Without a Passphrase

If, on the Zero Downtime Migration service host, source database server, or target database server, the authentication key pairs are not available without a passphrase for the Zero Downtime Migration software installed user, you can generate a new SSH key using the following procedure.

SSH connectivity during Zero Downtime Migration operations requires direct, non-interactive access between the Zero Downtime Migration service host and the source and target database servers, and also between the source and target database servers, without the need to enter a passphrase.

Note:

The following steps show examples for generating a private SSH key for the software installed user. You can also use these steps for the opc user.

Run the following command as the Zero Downtime Migration software installed user on the Zero Downtime Migration service host.

zdmuser> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/opc/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/opc/.ssh/id_rsa.
Your public key has been saved in /home/opc/.ssh/id_rsa.pub.
The key fingerprint is:
c7:ed:fa:2c:5b:bb:91:4b:73:93:c1:33:3f:23:3b:30 opc@rhost1
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|                 |
|         . . .   |
|        S o . =  |
|         . E . * |
|            X.+o.|
|          .= Bo.o|
|          o+*o.  |
+-----------------+

This command generates the id_rsa and id_rsa.pub files in the zdmuser home, for example, /home/zdmuser/.ssh.

You can add the public key (for example, /home/zdmuser/.ssh/id_rsa.pub) to the source and target database servers using the Oracle Cloud Infrastructure Console, or you can add it manually to the authorized_keys file on those servers, as shown below.

Add the contents of the Zero Downtime Migration service host /home/zdmuser/.ssh/id_rsa.pub file to the Oracle Cloud Infrastructure server opc user /home/opc/.ssh/authorized_keys file, as shown here.

[opc@rptest .ssh]$ export PS1='$PWD>'
/home/opc/.ssh>ls
authorized_keys  authorized_keys.bkp  id_rsa  id_rsa.pub  known_hosts  zdmkey
/home/opc/.ssh>cat id_rsa.pub >> authorized_keys

Save the private key in a separate, secure file, and use it to connect to the source and target database servers. For example, on the Zero Downtime Migration service host, copy the private key into a zdm_service_node.ppk file under the software installed user's home/.ssh directory, set the file permissions to 600, and use that file to connect to the source and target database servers.
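The whole flow, from key generation to staging the .ppk file, can be sketched as follows. This demonstration works in a scratch directory; in practice the files live in the software installed user's ~/.ssh on the Zero Downtime Migration service host, and the host name in the final comment is illustrative.

```shell
#!/bin/sh
# Sketch: generate a passphrase-less RSA key pair and stage the private
# key under a .ppk name, as the connectivity steps expect.
# A scratch directory stands in for the real ~/.ssh here.
SSH_DIR=$(mktemp -d)/.ssh
mkdir -p "$SSH_DIR"
chmod 700 "$SSH_DIR"

# -N "" requests an empty passphrase; -q suppresses the fingerprint art
ssh-keygen -q -t rsa -N "" -f "$SSH_DIR/id_rsa"

# Keep the private key under the service node name, with 600 permissions
cp "$SSH_DIR/id_rsa" "$SSH_DIR/zdm_service_node.ppk"
chmod 600 "$SSH_DIR/zdm_service_node.ppk"

# The public key is what gets appended to each server's authorized_keys,
# for example (opc@zdmhost1 is illustrative):
#   cat "$SSH_DIR/id_rsa.pub" | ssh opc@zdmhost1 'cat >> ~/.ssh/authorized_keys'
echo "staged $SSH_DIR/zdm_service_node.ppk"
```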

Setting Up the Transparent Data Encryption Wallet

For Oracle Database 12c Release 2 and later, if the source database does not have TDE enabled, then it is mandatory that you configure the TDE wallet before migration begins. Enabling TDE on Oracle Database 11g Release 2 (11.2.0.4) and Oracle Database 12c Release 1 is not required.

If Transparent Data Encryption (TDE) is not already configured as required on the source and target databases, use the following instructions to set up the TDE wallet. TDE should be enabled, the WALLET status on both the source and target databases must be OPEN, and the WALLET_TYPE must be AUTOLOGIN.
  1. Set ENCRYPTION_WALLET_LOCATION in the $ORACLE_HOME/network/admin/sqlnet.ora file.
    $ cat /u01/app/oracle/product/12.2.0.1/dbhome_2/network/admin/sqlnet.ora 
    
    ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=FILE)
      (METHOD_DATA=(DIRECTORY=/u01/app/oracle/product/12.2.0.1/dbhome_2/network/admin/)))
    
  2. Connect to the database and configure the keystore.
    $ sqlplus "/as sysdba"
    SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/u01/app/oracle/product/12.2.0.1/dbhome_2/network/admin'
     identified by **********;
    keystore altered.

    For a non-CDB environment, run the following command.

    SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY **********;
    keystore altered.

    For a CDB environment, run the following command.

    SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY ********** container = ALL;

    For a non-CDB environment, run the following command.

    SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY ********** with backup;
    keystore altered.

    For a CDB environment, run the following command.

    SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY ********** with backup container = ALL;

    Then run the following query to verify the keys.

    SQL> SELECT * FROM v$encryption_keys;
  3. Set up autologin.
    SQL> SELECT * FROM v$encryption_wallet;
    
    WRL_TYPE              WRL_PARAMETER
    --------------------  --------------------------------------------------------
    FILE                  /u01/app/oracle/product/12.2.0.1/dbhome_2/network/admin/

    STATUS                WALLET_TYPE   WALLET_OR  FULLY_BAC  CON_ID
    --------------------  ------------  ---------  ---------  ------
    OPEN                  PASSWORD      SINGLE     NO              0
    
    SQL> ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE FROM KEYSTORE
     '/u01/app/oracle/product/12.2.0.1/dbhome_2/network/admin/' IDENTIFIED BY **********;
    keystore altered.
    

    If you are using an Oracle RAC database, copy the files below to the same location on each cluster node, or to a shared file system.

    /u01/app/oracle/product/12.2.0.1/dbhome_2/network/admin/ew* 
    /u01/app/oracle/product/12.2.0.1/dbhome_2/network/admin/cw*   
    
    SQL> SELECT * FROM v$encryption_wallet;
    WRL_TYPE              WRL_PARAMETER
    --------------------  --------------------------------------------------------
    FILE                  /u01/app/oracle/product/12.2.0.1/dbhome_2/network/admin/

    STATUS                WALLET_TYPE   WALLET_OR  FULLY_BAC  CON_ID
    --------------------  ------------  ---------  ---------  ------
    OPEN                  PASSWORD      SINGLE     NO              0
    

    At this stage, the PASSWORD based wallet is enabled. To enable an AUTOLOGIN based wallet, complete the remaining steps in this procedure.

    Close the password wallet.

    SQL> administer key management set keystore close identified by **********;
    keystore altered.

    Then verify that autologin is configured: the TDE WALLET STATUS must be OPEN and the WALLET_TYPE must be AUTOLOGIN; otherwise, the wallet configuration is not set up correctly.

    $ sqlplus "/as sysdba"
    SQL> SELECT * FROM v$encryption_wallet;
    WRL_TYPE              WRL_PARAMETER
    --------------------  --------------------------------------------------------
    FILE                  /u01/app/oracle/product/12.2.0.1/dbhome_2/network/admin/

    STATUS                WALLET_TYPE   WALLET_OR  FULLY_BAC  CON_ID
    --------------------  ------------  ---------  ---------  ------
    OPEN                  AUTOLOGIN     SINGLE     NO

Upon migration of your Oracle Database to the Oracle Cloud, bear in mind that Oracle databases in the Oracle Cloud are TDE enabled by default. Zero Downtime Migration takes care of encrypting your target database, even if your source Oracle Database is not TDE enabled. However, once the switchover phase of the migration has taken place, the redo logs that the new primary database in the Oracle Cloud sends to the new standby database on your premises are encrypted. Therefore, if you decide to switch back, performing another role swap that makes the on-premises database the primary again and the database in the Oracle Cloud the standby, the on-premises database will not be able to read the newly encrypted changed blocks applied from the redo logs unless TDE is enabled on-premises.

To avoid post-migration conflicts, the recommended best practice is to perform appropriate testing and validation before performing the original switchover as part of the migration process. There are options outside of Zero Downtime Migration for testing with a snapshot standby database; once you are ready to proceed, delete the snapshot standby database and instruct Zero Downtime Migration to perform the switchover and finalize the migration process.

Preparing the Database for Migration

Prepare the source and target databases for the migration.

See the following topics for information about preparing the source and target databases for migration.

Source Database Prerequisites

Meet the prerequisites on the source database before the Zero Downtime Migration process starts.

  1. The source database must be running in archive log mode.
  2. For Oracle Database 12c Release 2 and later, if the source database does not have Transparent Data Encryption (TDE) enabled, then it is mandatory that you configure the TDE wallet before migration begins. The WALLET_TYPE can be AUTOLOGIN (preferred) or PASSWORD based.
  3. Ensure that the wallet STATUS is OPEN and the WALLET_TYPE is AUTOLOGIN (for an AUTOLOGIN wallet type) or PASSWORD (for a PASSWORD based wallet type). For a multitenant database, ensure that the wallet is open on all PDBs as well as the CDB, and that the master key is set for all PDBs and the CDB.
    SQL> SELECT * FROM v$encryption_wallet;
  4. If the source is an Oracle RAC database, and SNAPSHOT CONTROLFILE is not on a shared location, configure SNAPSHOT CONTROLFILE to point to a shared location on all Oracle RAC nodes to avoid the ORA-00245 error during backups to Oracle Object Store.
    For example, if the database is deployed on ASM storage,
    $ rman target /  
    RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+DATA/snapcf_matrix.f';

    If the database is deployed on an ACFS file system, specify the shared ACFS location in the above command.

  5. Verify that port 22 on the source and target database servers allows incoming connections from the Zero Downtime Migration service host.
  6. Ensure that the SCAN listener ports (1521, for example) on the source database servers allow incoming connections from the target database servers, and vice versa.
    Alternate SQL connectivity should be made available if a firewall blocks incoming remote connections using the SCAN listener port.
  7. To preserve the source database Recovery Time Objective (RTO) and Recovery Point Objective (RPO) during the migration, the existing RMAN backup strategy should be maintained.
    During the migration, a dual backup strategy is in place: the existing backup strategy and the strategy used by Zero Downtime Migration. Avoid having two RMAN backup jobs running simultaneously (the existing one and the one initiated by Zero Downtime Migration). If archive logs that Zero Downtime Migration needs to instantiate the target cloud database are deleted on the source database, restore them so that Zero Downtime Migration can continue the migration process.
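For step 4, the current snapshot controlfile setting can be checked before it is changed; the RMAN session below reuses the +DATA example from above.

```
$ rman target /
RMAN> SHOW SNAPSHOT CONTROLFILE NAME;
RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+DATA/snapcf_matrix.f';
RMAN> SHOW SNAPSHOT CONTROLFILE NAME;
```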

Target Database Prerequisites

The following prerequisites must be met on the target database before you begin the Zero Downtime Migration process.

  1. A placeholder target database must be created before database migration begins.
    The placeholder target database is overwritten during migration, but it retains the overall configuration.

    Pay careful attention to the following requirements:

    • Size for the future - When you create the database from the console, ensure that your chosen shape can accommodate the source database, plus any future sizing requirements. A good guideline is to use a shape similar to or larger than the source database.
    • Set name parameters - The target database db_name should be the same as the source database db_name, and the target database db_unique_name parameter value must be unique, to ensure that Oracle Data Guard can identify the target as a different database from the source database.
    • Disable automatic backups - Provision the target database from the console without enabling automatic backups.

      For Oracle Cloud Infrastructure and Exadata Cloud Service, do not select the Enable automatic backups option under the section Configure database backups.

      For Exadata Cloud at Customer, set Backup destination Type to None under the section Configure Backups.

  2. The target database version should be the same as the source database version. The target database patch level should also be the same as (or higher than) the source database.
    If the target database environment is at a higher patch level than the source database (for example, if the source database is at Oct 2018 PSU/BP and the target database is at Jan 2019 PSU/BP), then you must run datapatch after database migration.
  3. Transparent Data Encryption (TDE) should be enabled and ensure that the wallet STATUS is OPEN and WALLET_TYPE is AUTOLOGIN (for an AUTOLOGIN wallet type), or WALLET_TYPE is PASSWORD (for a PASSWORD based wallet type).
    SQL> SELECT * FROM v$encryption_wallet;
  4. If the target is an Oracle RAC database, then you must set up SSH connectivity without a passphrase between the Oracle RAC servers for the oracle user.
  5. Check the size of the disk groups and usage on the target database (ASM disk groups or ACFS file systems) and make sure adequate storage is provisioned and available on the target database servers.
  6. Make sure adequate storage is provisioned and available on the object store to accommodate the source database backup.
  7. Verify that ports 22 and 1521 on the target servers in the Oracle Cloud Infrastructure or Exadata Cloud at Customer environment are open and not blocked by a firewall.
  8. Capture the output of the RMAN SHOW ALL command, so that you can compare RMAN settings after the migration, then reset any changed RMAN configuration settings to ensure that the backup works without any issues.
    RMAN> show all;
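One way to keep the step 8 output for comparison after the migration is to capture it to a log file; the file name here is illustrative.

```
$ rman target / log=rman_show_all_before.txt <<EOF
SHOW ALL;
EOF
```

After the migration, run SHOW ALL again and compare the two outputs to spot any configuration settings that must be reset.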

See Also:

Managing User Credentials for information about generating the auth token for Object Storage backups

Zero Downtime Migration Port Requirements

Preparing for Migration to Oracle Cloud Infrastructure

Complete the following preparation before migrating data to an Oracle Cloud Infrastructure virtual machine or bare metal target.

  1. Prepare the response file template.
    Get the response file template, which is used to create your Zero Downtime Migration response file for the database migration procedure, from location /u01/app/zdmhome/rhp/zdm/template/zdm_template.rsp, and update the file as follows.
    • Set TGT_DB_UNIQUE_NAME to the target database db_unique_name value.
    • Set PLATFORM_TYPE to VMDB.
    • Set MIGRATION_METHOD to DG_OSS, where DG stands for Data Guard and OSS stands for Object Storage service.
    • If an SSH proxy is required to access the source database server from the Zero Downtime Migration service host, set SRC_HTTP_PROXY_URL and SRC_HTTP_PROXY_PORT.
    • If an SSH proxy is required to access the target database server from the Zero Downtime Migration service host, set TGT_HTTP_PROXY_URL and TGT_HTTP_PROXY_PORT.
    • If SSH tunneling is set up, set the TGT_SSH_TUNNEL_PORT parameter.
    • Specify the target database data files storage (ASM or ACFS) properties as appropriate for (TGT_DATADG, TGT_REDODG, and TGT_RECODG) or (TGT_DATAACFS, TGT_REDOACFS, and TGT_RECOACFS).
    • Set SKIP_FALLBACK=TRUE if you do not want to ship redo logs from the target to the source standby either voluntarily or because there is no connectivity between the target and source.
    • Set SHUTDOWN_SRC=TRUE if you want to shut down the source database after the migration.
  2. Set up Object Storage service access.
    To access the Oracle Cloud account, set the following parameters in the input file.
    • Set the HOST parameter to the cloud storage REST endpoint URL. The value is typically in the format https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/acme for Oracle Cloud Infrastructure storage, or https://acme.storage.oraclecloud.com/v1/Storage-acme for Oracle Cloud Infrastructure Classic storage.
    • Set the OPC_CONTAINER parameter to the Object Storage bucket. The bucket is also referred to as a container for Oracle Cloud Infrastructure Classic storage. Make sure that the Object Storage bucket is created using the Oracle Cloud Service Console as appropriate, and that adequate storage is provisioned and available on the object store to accommodate the source database backup.
    • If a proxy is required to access the object store from the source database server, set SRC_OSS_PROXY_HOST and SRC_OSS_PROXY_PORT.
    • If a proxy is required to access the object store from the target database server, set TGT_OSS_PROXY_HOST and TGT_OSS_PROXY_PORT.
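Put together, a minimal response file fragment for an Oracle Cloud Infrastructure virtual machine target backed by Object Storage might look like the following sketch; every value is illustrative and must be replaced with values from your own environment.

```
TGT_DB_UNIQUE_NAME=zdmtgt_phx1
PLATFORM_TYPE=VMDB
MIGRATION_METHOD=DG_OSS
HOST=https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/acme
OPC_CONTAINER=zdm_migration_bucket
TGT_DATADG=+DATA
TGT_REDODG=+RECO
TGT_RECODG=+RECO
SKIP_FALLBACK=FALSE
SHUTDOWN_SRC=FALSE
```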

Preparing for Migration to Exadata Cloud Service

Complete the following preparation before migrating data to an Exadata Cloud Service target.

  1. Prepare the response file template.
    Get the response file template, which is used to create your Zero Downtime Migration response file for the database migration procedure, from location /u01/app/zdmhome/rhp/zdm/template/zdm_template.rsp, and update the file as follows.
    • Set TGT_DB_UNIQUE_NAME to the target database db_unique_name value.
    • Set PLATFORM_TYPE to EXACS.
    • Set MIGRATION_METHOD to DG_OSS, where DG stands for Data Guard and OSS stands for Object Storage service.
    • If an SSH proxy is required to access the source database server from the Zero Downtime Migration service host, set SRC_HTTP_PROXY_URL and SRC_HTTP_PROXY_PORT.
    • If an SSH proxy is required to access the target database server from the Zero Downtime Migration service host, set TGT_HTTP_PROXY_URL and TGT_HTTP_PROXY_PORT.
    • If SSH tunneling is set up, set the TGT_SSH_TUNNEL_PORT parameter.
    • Specify the target database data files storage (ASM or ACFS) properties as appropriate for (TGT_DATADG, TGT_REDODG, and TGT_RECODG) or (TGT_DATAACFS, TGT_REDOACFS, and TGT_RECOACFS).
    • Set SKIP_FALLBACK=TRUE if you do not want to ship redo logs from the target to the source standby, either voluntarily or because there is no connectivity between the target and the source.
    • Set SHUTDOWN_SRC=TRUE if you want to shut down the source database after the migration.
  2. Set up Object Storage service access.
    To access the Oracle Cloud account, set the following parameters in the input file.
    • Set the HOST parameter to the cloud storage REST endpoint URL. The value is typically in the format https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/acme for Oracle Cloud Infrastructure storage, or https://acme.storage.oraclecloud.com/v1/Storage-acme for Oracle Cloud Infrastructure Classic storage.
    • Set the OPC_CONTAINER parameter to the Object Storage bucket. The bucket is also referred to as a container for Oracle Cloud Infrastructure Classic storage. Make sure that the Object Storage bucket is created using the Oracle Cloud Service Console as appropriate, and that adequate storage is provisioned and available on the object store to accommodate the source database backup.
    • If a proxy is required to access the object store from the source database server, set SRC_OSS_PROXY_HOST and SRC_OSS_PROXY_PORT.
    • If a proxy is required to access the object store from the target database server, set TGT_OSS_PROXY_HOST and TGT_OSS_PROXY_PORT.

Preparing for Migration to Exadata Cloud at Customer

Complete the following preparation before migrating data to an Exadata Cloud at Customer target.

  1. Provision the target database.
    Configure a new placeholder database in your Exadata Cloud at Customer environment with the same db_name as the on-premises database.
  2. Prepare the ZDMCLI input response file template.
    Get the response file template, which is used to create your Zero Downtime Migration response file for the database migration procedure, from location $ZDM_HOME/rhp/zdm/template/zdm_template.rsp, and update the file based on your backup medium as detailed in the topics that follow.
Preparing the Template for Exadata Cloud at Customer with Zero Data Loss Recovery Appliance Backup

When using Zero Data Loss Recovery Appliance as the backup medium for Zero Downtime Migration, set the parameters in the response file as described here.

  • Set TGT_DB_UNIQUE_NAME to the target database db_unique_name value.
  • Set PLATFORM_TYPE to ExaCC.
  • Set MIGRATION_METHOD to DG_ZDLRA, where DG stands for Data Guard and ZDLRA for Zero Data Loss Recovery Appliance.
  • Set the following Zero Data Loss Recovery Appliance parameters to use a backup residing in Zero Data Loss Recovery Appliance.
    • Set SRC_ZDLRA_WALLET_LOC for the wallet location, for example,
      SRC_ZDLRA_WALLET_LOC=/u02/app/oracle/product/12.1.0/dbhome_3/dbs/zdlra
    • Set TGT_ZDLRA_WALLET_LOC
    • Set ZDLRA_CRED_ALIAS for the wallet credential alias, for example,
      ZDLRA_CRED_ALIAS=zdlra_scan:listener_port/zdlra9:dedicated
  • Specify the target database data files storage (ASM or ACFS) properties as appropriate. For ASM set TGT_DATADG, TGT_REDODG, and TGT_RECODG. For ACFS set TGT_DATAACFS, TGT_REDOACFS, and TGT_RECOACFS.
  • Set SKIP_FALLBACK=TRUE if you do not want to ship redo logs from the target to the source standby, either voluntarily or because there is no connectivity between the target and the source.
  • Set SHUTDOWN_SRC=TRUE if you want to shut down the source database after the migration.
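  Taken together, the settings above form a response file fragment like the following; the db_unique_name, wallet locations, and disk group names are illustrative placeholders (the credential alias is copied from the example above):

```
TGT_DB_UNIQUE_NAME=dbexacc_phx
PLATFORM_TYPE=ExaCC
MIGRATION_METHOD=DG_ZDLRA
SRC_ZDLRA_WALLET_LOC=/u02/app/oracle/product/12.1.0/dbhome_3/dbs/zdlra
TGT_ZDLRA_WALLET_LOC=/u02/app/oracle/product/19.0.0/dbhome_1/dbs/zdlra
ZDLRA_CRED_ALIAS=zdlra_scan:listener_port/zdlra9:dedicated
TGT_DATADG=+DATAC1
TGT_REDODG=+DATAC1
TGT_RECODG=+RECOC1
```
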
Preparing a Template for Exadata Cloud at Customer with Object Storage Backup

When using Oracle Cloud Infrastructure Object Storage service as the backup medium for your Zero Downtime Migration, set the parameters in the response file as described here.

  • Set TGT_DB_UNIQUE_NAME to the target database db_unique_name value.
  • Set PLATFORM_TYPE to ExaCC.
  • Set MIGRATION_METHOD to DG_OSS, where DG stands for Data Guard and OSS for the Object Storage service.
  • Specify the Oracle Cloud Infrastructure Object Storage service access and container details.

    The source database is backed up to the specified container and restored to Exadata Cloud at Customer using RMAN SQL*Net connectivity.

  • Specify the target database data files storage (ASM or ACFS) properties as appropriate. For ASM set TGT_DATADG, TGT_REDODG, and TGT_RECODG. For ACFS set TGT_DATAACFS, TGT_REDOACFS, and TGT_RECOACFS.
  • Set SKIP_FALLBACK=TRUE if you do not want to ship redo logs from the target to the source standby, either voluntarily or because there is no connectivity between the target and the source.
  • Set SHUTDOWN_SRC=TRUE if you want to shut down the source database after the migration.
Preparing a Template for Exadata Cloud at Customer with NFS Backup

When using NFS storage as the backup medium for your Zero Downtime Migration, set the parameters in the response file as described here.

  • Set TGT_DB_UNIQUE_NAME to the target database db_unique_name value.
  • Set PLATFORM_TYPE to ExaCC.
  • Set MIGRATION_METHOD to DG_SHAREDPATH or DG_EXTBACKUP, where DG stands for Data Guard.

    Use DG_SHAREDPATH when a new backup needs to be taken and placed on an external storage mount (for example, an NFS mount point).

    Use DG_EXTBACKUP when using an existing backup, already placed on an external shared mount (for example, NFS storage).

    Note that if MIGRATION_METHOD is set to DG_EXTBACKUP then Zero Downtime Migration does not perform a new backup.

  • Set BACKUP_PATH to the NFS path that is accessible from both the source and target database servers, for example, an NFS mount point. The mount path must be the same on both the source and target database servers. This path does not need to be mounted on the Zero Downtime Migration service host.

    Note the following considerations:

    • The source database is backed up to the specified path and restored to Exadata Cloud at Customer using RMAN SQL*Net connectivity.
    • The path set in BACKUP_PATH must have read, write, and execute (rwx) permissions for the source database user, and at least read permission for the target database user.
    • In the path specified by BACKUP_PATH, the Zero Downtime Migration backup procedure will create a directory, $BACKUP_PATH/dbname, and place the backup pieces in this directory.
  • If you use DG_EXTBACKUP as the MIGRATION_METHOD, then you should create a standby control file backup in the specified path and provide read permissions to the backup pieces for the target database user. For example,
    RMAN> BACKUP CURRENT CONTROLFILE FOR STANDBY FORMAT '<BACKUP_PATH>/lower_case_dbname/standby_ctl_%U';

    Where the %U format element causes RMAN to generate a unique file name.

  • Specify the target database data files storage (ASM or ACFS) properties as appropriate. For ASM set TGT_DATADG, TGT_REDODG, and TGT_RECODG. For ACFS set TGT_DATAACFS, TGT_REDOACFS, and TGT_RECOACFS.
  • Set SKIP_FALLBACK=TRUE if you do not want to ship redo logs from the target to the source standby, either voluntarily or because there is no connectivity between the target and the source.
  • Set SHUTDOWN_SRC=TRUE if you want to shut down the source database after the migration.
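
As a sketch, a response file for a new backup to an NFS mount with ACFS storage on the target might look like the following; all paths and the db_unique_name are illustrative placeholders:

```
TGT_DB_UNIQUE_NAME=dbexacc_phx
PLATFORM_TYPE=ExaCC
MIGRATION_METHOD=DG_SHAREDPATH
BACKUP_PATH=/zdm_nfs/backup
TGT_DATAACFS=/acfs/data
TGT_REDOACFS=/acfs/redo
TGT_RECOACFS=/acfs/reco
```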

Preparing for Offline Migration (Backup and Recovery)

Complete the following preparations before migrating a database to an Oracle Cloud Infrastructure, Exadata Cloud at Customer, or Exadata Cloud Service target environment.

  1. Prepare the response file template.
    Get the response file template, which is used to create your Zero Downtime Migration response file for the database migration procedure, from location $ZDM_HOME/rhp/zdm/template/zdm_template.rsp, and update the file as follows.
    • Set TGT_DB_UNIQUE_NAME to the target database db_unique_name value.
    • Set PLATFORM_TYPE to the appropriate value, depending on your target environment.
      • For Oracle Cloud Infrastructure, set PLATFORM_TYPE=VMDB.
      • For Exadata Cloud at Customer, set PLATFORM_TYPE=EXACC.
      • For Exadata Cloud Service, set PLATFORM_TYPE=EXACS.
    • Set MIGRATION_METHOD to BACKUP_RESTORE_OSS, where OSS stands for Object Storage service.
    • Specify the target database data files storage (ASM or ACFS) properties as appropriate. For ASM, set TGT_DATADG, TGT_REDODG, and TGT_RECODG. For ACFS set TGT_DATAACFS, TGT_REDOACFS, and TGT_RECOACFS.
    • If an SSH proxy is required to access the source database server from the Zero Downtime Migration service host, set SRC_HTTP_PROXY_URL and SRC_HTTP_PROXY_PORT.
    • If an SSH proxy is required to access the target database server from the Zero Downtime Migration service host, set TGT_HTTP_PROXY_URL and TGT_HTTP_PROXY_PORT.
  2. Set up Object Storage service access.
    To access the Oracle Cloud account, set the following parameters in the input file.
    • Set HOST to the cloud storage REST endpoint URL. The value is typically in the format https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/acme for Oracle Cloud Infrastructure storage, or https://acme.storage.oraclecloud.com/v1/Storage-acme for Oracle Cloud Infrastructure Classic storage.
    • Set OPC_CONTAINER to the name of the Object Storage bucket. The bucket is also referred to as a container for Oracle Cloud Infrastructure Classic storage. Make sure that the Object Storage bucket is created using the Oracle Cloud Service Console, and that adequate storage is provisioned and available on the object store to accommodate the source database backup.
    • If a proxy is required to access the object store from the source database server, set SRC_OSS_PROXY_HOST and SRC_OSS_PROXY_PORT.
    • If a proxy is required to access the object store from the target database server, set TGT_OSS_PROXY_HOST and TGT_OSS_PROXY_PORT.
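
The offline migration settings above can be sketched as the following response file fragment for an Oracle Cloud Infrastructure target; the db_unique_name, endpoint, bucket, and disk groups are illustrative placeholders:

```
TGT_DB_UNIQUE_NAME=dbvm_phx
PLATFORM_TYPE=VMDB
MIGRATION_METHOD=BACKUP_RESTORE_OSS
HOST=https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/acme
OPC_CONTAINER=zdm_backup_bucket
TGT_DATADG=+DATA
TGT_REDODG=+RECO
TGT_RECODG=+RECO
```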

Preparing for Automatic Application Switchover

To minimize or eliminate service interruptions on the application after you complete the database migration and switchover, prepare your application to automatically switch over connections from the source database to the target database.

In the following example connect string, the application connects to the source database, and when the source database is not available, the connection is switched over to the target database.

(DESCRIPTION=
    (FAILOVER=on)(LOAD_BALANCE=on)(CONNECT_TIMEOUT=3)(RETRY_COUNT=3)
    (ADDRESS_LIST=
        (ADDRESS=(PROTOCOL=TCP)(HOST=source_database_scan)(PORT=1521))
        (ADDRESS=(PROTOCOL=TCP)(HOST=target_database_scan)(PORT=1521)))
    (CONNECT_DATA=(SERVICE_NAME=zdm_prod_svc)))

On the source database, create the service, named zdm_prod_svc in the examples.

srvctl add service -db clever -service zdm_prod_svc -role PRIMARY
 -notification TRUE -session_state dynamic -failovertype transaction
 -failovermethod basic -commit_outcome TRUE -failoverretry 30 -failoverdelay 10
 -replay_init_time 900 -clbgoal SHORT -rlbgoal SERVICE_TIME -preferred clever1,clever2
 -retention 3600 -verbose

See Also:

Oracle MAA white papers about client failover best practices on the Oracle Active Data Guard Best Practices page at https://www.oracle.com/goto/maa

High Availability in Oracle Database Development Guide

Customizing a Migration Job

You can customize the Zero Downtime Migration workflow by registering action scripts or plug-ins as pre-actions or post-actions to be performed as part of the operational phases involved in your migration job.

The following topics describe how to customize a migration job.

Registering Action Plug-ins

Custom plug-ins must be registered to the Zero Downtime Migration service host to be plugged in as customizations for a particular operational phase.

Determine the operational phase with which the given plug-in is to be associated, and run the ZDMCLI command add useraction, specifying -optype MIGRATE_DATABASE, the respective phase of the operation, whether the plug-in is run -pre or -post relative to that phase, and any on-error requirements. You can register custom plug-ins for operational phases after ZDM_SETUP_TGT in the migration job workflow.

What happens at runtime if the useraction encounters an error can be specified with the -onerror option, which you can set to either ABORT, to end the process, or CONTINUE, to continue the migration job even if the custom plug-in exits with an error. See the example command usage below.

Use the Zero Downtime Migration software installed user (for example, zdmuser) to add useractions to a database migration job. Adding the useractions zdmvaltgt and zdmvalsrc with the add useraction command looks like the following.

zdmuser> ./zdmcli add useraction -useraction zdmvaltgt -optype MIGRATE_DATABASE 
-phase ZDM_VALIDATE_TGT -pre -onerror ABORT -actionscript /home/useract.sh

zdmuser> ./zdmcli add useraction -useraction zdmvalsrc -optype MIGRATE_DATABASE 
-phase ZDM_VALIDATE_SRC -pre -onerror CONTINUE -actionscript /home/useract1.sh

In the above commands, the scripts /home/useract.sh and /home/useract1.sh are copied to the Zero Downtime Migration service host repository, and they are run when they are associated with a migration job through an action template.
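An action script itself can be a simple shell script; Zero Downtime Migration evaluates its exit status against the -onerror setting. The following minimal sketch (the script name, log path, and contents are hypothetical, not prescribed by Zero Downtime Migration) writes a log line and exits successfully:

```shell
# Create a hypothetical minimal action script; the /tmp paths are for
# illustration only, a real script would live with the ZDM software user.
cat > /tmp/useract.sh <<'EOF'
#!/bin/sh
# Record when and where the plug-in ran, then signal success;
# a nonzero exit status would trigger the -onerror handling.
echo "useraction executed on $(hostname) at $(date)" >> /tmp/useract.log
exit 0
EOF
chmod +x /tmp/useract.sh
/tmp/useract.sh && echo "useraction succeeded"   # prints "useraction succeeded"
```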

Creating an Action Template

After the useraction plug-ins are registered, you create an action template that combines a set of action plug-ins, which can then be associated with a migration job.

An action template is created using the ZDMCLI command add imagetype, where the image type, imagetype, is a bundle of all of the useractions required for a specific type of database migration. Create an image type that associates all of the useraction plug-ins needed for the migration of the database. Once created, the image type can be reused for all migration operations for which the same set of plug-ins are needed.

The base type for the image type created here must be CUSTOM_PLUGIN, as shown in the example below.

For example, you can create an image type ACTION_ZDM that bundles both of the useractions created in the previous example, zdmvalsrc and zdmvaltgt.

zdmuser>./zdmcli add imagetype -imagetype ACTION_ZDM -basetype 
CUSTOM_PLUGIN -useractions zdmvalsrc,zdmvaltgt

Updating Action Plug-ins

You can update action plug-ins registered with the Zero Downtime Migration service host.

The following example shows you how to modify the useraction zdmvalsrc to be a -post action, instead of a -pre action.

zdmuser>./zdmcli modify useraction -useraction zdmvalsrc -phase ZDM_VALIDATE_SRC
 -optype MIGRATE_DATABASE -post

This change is propagated to all of the associated action templates, so you do not need to update the action templates.

Associating an Action Template with a Migration Job

When you run a migration job you can specify the image type that specifies the plug-ins to be run as part of your migration job.

For example, run the migration command specifying the action template ACTION_ZDM created in the previous examples with -imagetype ACTION_ZDM. Including the image type results in running the useract.sh and useract1.sh scripts as part of the migration job workflow.

By default, the action plug-ins are run for the specified operational phase on all nodes of the cluster. If the access credential specified in the migration command option -tgtarg2 is unique for a specified target node, then an additional auth argument should be included to specify the auth credentials required to access the other cluster nodes. For example, specify -tgtarg2 nataddrfile:auth_file_with_node_and_identity_file_mapping.

A typical nataddrfile for a two-node cluster with node1 and node2 is shown here.

node1:node1:identity_file_path_available_on_zdmservice_node 
node2:node2:identity_file_path_available_on_zdmservice_node
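
As a sketch, the nataddrfile above can be generated with a small loop; the node names and identity file path are placeholders for your environment, and each line repeats the node name and ends with the identity file path, matching the example above:

```shell
# Generate a two-node nataddrfile in the format shown above.
# The identity file path is a placeholder for a key file that must be
# available on the Zero Downtime Migration service node.
IDENTITY=/home/zdmuser/.ssh/cluster_node_key.ppk   # hypothetical path
for node in node1 node2; do
    echo "${node}:${node}:${IDENTITY}"
done > /tmp/nataddrfile
cat /tmp/nataddrfile
```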