1.1 Zero Downtime Migration 21.5 Release Notes

These release notes provide downloading instructions for the latest product software and documentation, and describe new features, fixed bugs, known issues, and troubleshooting information for Zero Downtime Migration Release 21c (21.5).

1.2 What's New in This Release

Zero Downtime Migration Release 21.5 improves the existing 21c functionality with the following enhancements.

  • Introducing hybrid migration

ZDM introduces support for cross-platform and cross-version migration using RMAN transportable tablespaces and Data Pump metadata import/export. This workflow is supported only for offline migrations. See Hybrid Migrations with Zero Downtime Migration, Preparing for a Hybrid Database Migration, and Zero Downtime Migration Hybrid Migration Response File Parameters Reference.

  • ZDM automatically remaps TEMP Tablespaces

ZDM automatically remaps temporary tablespaces in the source database to the specified temporary tablespace in the target database. This feature caters to Autonomous Database Serverless, which does not allow creating tablespaces, and to migrations to a regular database where the temporary tablespaces need to be consolidated. See the TABLESPACEDETAILS_REMAPTEMPTARGET parameter for more information.
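
    For example, a response file entry for the remapping might look like the following; the target tablespace name shown is illustrative:

    TABLESPACEDETAILS_REMAPTEMPTARGET=TEMP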

  • New options for custom port numbers

The ZDM installer now provides two new options for defining custom port numbers.

  • Source Database Profile File Support

A new option lets you define a profile response file for specific use cases. The new logical migration response file parameter specifies the full path of the profile file. See the PROFILE parameter.

  • FLASHBACK_ON Parameter Support

    New response file parameter that allows customers to enable or disable flashback on the target database during the post-migration phase. See the FLASHBACK_ON parameter for more information.

  • DATAPUMPSETTINGS_RETAINDUMPS Parameter Support

New response file parameter that indicates to ZDM whether dump files should be cleaned up or left in the selected data transfer medium. For offline logical migration, ZDM now allows you to specify a dump prefix to reuse dump files from a previous job. Specify DATAPUMPSETTINGS_RETAINDUMPS=TRUE to retain the dump files on the target stage, and specify the corresponding dump prefix as DATAPUMPSETTINGS_REUSE_DUMPPREFIX=ZDM_<jobID>_DP_EXPORT_<#>_dmp. See the DATAPUMPSETTINGS_RETAINDUMPS parameter for more information.
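
    For example, to retain dump files in one job and reuse them in a later job, the settings might look like the following; the job ID shown is illustrative:

    DATAPUMPSETTINGS_RETAINDUMPS=TRUE
    DATAPUMPSETTINGS_REUSE_DUMPPREFIX=ZDM_152_DP_EXPORT_1_dmp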

  • DATAPUMPSETTINGS_REUSE_DUMPPREFIX Parameter Support

    This new response file parameter allows customers to specify the dump prefix of existing dump files. ZDM will reuse the specified dump files for migration purposes. See the DATAPUMPSETTINGS_REUSE_DUMPPREFIX parameter for more information.

  • GOLDENGATESETTINGS_REPLICATIONMODE Parameter Support

New response file parameter that allows you to specify whether Oracle GoldenGate should work in non-integrated or integrated mode. This enables users to migrate databases in integrated mode with procedural replication, which avoids reloading the PL/SQL-supported objects post import. See the GOLDENGATESETTINGS_REPLICATIONMODE parameter for more information.

  • Audit trails will be imported when GOLDENGATESETTINGS_RELOADUNREPLICATEDOBJECTS=TRUE

As part of the logical online migration, when you set GOLDENGATESETTINGS_RELOADUNREPLICATEDOBJECTS=TRUE, audit trails are also imported during the ZDM_RELOAD_PARALLEL_EXPORT_IMPORT reload phase.

  • GOLDENGATESETTINGS_RELOADAQOBJECTS Parameter Support

The new logical migration response file parameter allows users to specify whether AQ objects need to be reloaded post import. See the GOLDENGATESETTINGS_RELOADAQOBJECTS parameter for more information.

  • ZDM_SRCDBPDB and ZDM_TGTDBPDB arguments added to the useraction script to pass the respective PDB names

The ZDM_SRCDBPDB and ZDM_TGTDBPDB arguments contain the source and target database PDB names, which are passed to the useraction script as arguments.

  • User-provided service will now be used to connect to the database for executing the useraction scripts, instead of connecting to the HIGH service by default

You can add a comment to the useraction script to select the service. The useraction script can begin with a comment prefixed with --USERACTION_SERVICE=<service_name>, where <service_name> is HIGH, LOW, TP, and so on. ZDM connects to that service and then runs the useraction.
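
    A minimal sketch of such a useraction SQL script follows; the service name and the PL/SQL it runs are illustrative:

    --USERACTION_SERVICE=LOW
    -- Illustrative post-migration action executed over the LOW service
    BEGIN
      DBMS_STATS.GATHER_SCHEMA_STATS('SCOTT');
    END;
    /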

  • ZDM will encrypt system, sysaux, undo, and temp tablespaces for databases 19c and greater for migration using backup-restore and restore from service migration methods

When running in the cloud, encrypting tablespaces is highly recommended. Starting with 21.5, when migrating to the cloud, ZDM encrypts all tablespaces, including the system tablespaces.

  • Creating an Oracle Cloud Native Disaster Recovery Strategy

ZDM now supports setting up a Data Guard configuration at the target, enhancing your Disaster Recovery architecture. ZDM supports Disaster Recovery migrations (physical) with a cloud-native Data Guard configuration (console tooling continues working after migration). Upon migration conclusion, you will have two databases, a primary and a standby, configured and maintained by Data Guard. Multi-region is supported for this configuration. Allowed migration methods include OSS, DIRECT, and ZDLRA. See Creating an Oracle Cloud Native Disaster Recovery Strategy for more information.

  • GOLDENGATESETTINGS_REPLICAT_SPLITTRANSRECS Parameter Support

Large transactions can now be divided into meaningful pieces, which are applied in parallel by individual Oracle GoldenGate Appliers, so a large transaction can be processed faster by Oracle GoldenGate. Dependencies between the split transactions and other transactions are honored. This new response file parameter specifies the size of the individual pieces into which a large transaction is split; the default value is 100000. See the GOLDENGATESETTINGS_REPLICAT_SPLITTRANSRECS parameter for more information.

  • GOLDENGATESETTINGS_FEATUREGROUP Parameter Support

New logical migration response file parameter that allows users to select specific feature groups for procedural replication. See the GOLDENGATESETTINGS_FEATUREGROUP parameter for more information.

  • ZDM_ADVANCE_SEQUENCES Logical Migration Phase

For online logical migration, ZDM advances sequence values in the target database during the switchover brownout to match the sequence values in the source database.

  • Migrating to Autonomous Database using File System as the Data Transfer Medium

Leverage Oracle Cloud Infrastructure File Storage service (FSS) as the data transfer medium for migration in ZDM. ZDM auto-mounts FSS in the Autonomous target database. You can load data from Oracle Cloud Infrastructure File Storage within a Virtual Cloud Network (VCN), or from any other Network File System in on-premises data centers over FastConnect or Site-to-Site VPN.

    For more information, see Migrating to Autonomous Database Server with Files Storage Transfer Medium.

  • REFRESHMVIEWS parameter and ZDM_REFRESH_MVIEW phase

    Materialized views are refreshed post import for logical migration. This refresh is done as part of the ZDM_REFRESH_MVIEW_TGT phase. When you set REFRESHMVIEWS to TRUE, then the ZDM_REFRESH_MVIEW phase is enabled and the materialized views are refreshed post migration. If you set the parameter to FALSE, then the materialized views are not refreshed. See REFRESHMVIEWS.

  • User Action script updates

Now you can specify the service that is used for performing the user actions. For Autonomous databases, only SQL scripts are supported as useractions. ZDM establishes the connection to the database before executing the useraction, and the service you specify is used to make the connection. If you do not specify any service, ZDM connects to the default service specified in the response file parameter. For non-Autonomous databases, you can use a shell script that connects to any service you want and runs a query over the SQL connection. See User Action Scripts.

  • Concurrent Autonomous Database Migration

ZDM now allows you to specify separate wallet paths for different Autonomous databases, enabling a single Oracle GoldenGate deployment to replicate to multiple Oracle Autonomous databases at the same time. For more information, refer to Additional Logical Migration Prerequisites.

  • Tablespace level Encryption for Physical Migration

Starting with 21.5, ZDM encrypts the system, sysaux, undo, and temp tablespaces for Oracle Database 19c and later releases in physical migration. ZDM encrypts all tablespaces for cases where TDE is mandatory, that is, migrations involving backup/restore or direct migration using restore from service. See Setting Up the Transparent Data Encryption Keystore.

  • A new option DBOPTIONS DEFERREFCONST for Oracle GoldenGate Replicat

This option optimizes constraint handling and is now part of the Oracle GoldenGate Replicat configuration. The DBOPTIONS DEFERREFCONST option is set by default for the non-integrated Parallel Replicat. ZDM always sets this Replicat parameter if you do not opt for the integrated Parallel Replicat mode. See GOLDENGATESETTINGS_REPLICATIONMODE and Setting Logical Migration Parameters.

  • ZDM supports concurrent migrations using a single Oracle GoldenGate Microservice

Multiple migration targets can now use the same Oracle GoldenGate deployment. This is done by updating the wallet path in the Oracle GoldenGate deployment directory to Wallet_<dbname>. Earlier, there was a single directory 'adb' in the <deployment_dir>/etc location. Now, each database that uses the Oracle GoldenGate Microservice for replication can have its own wallet directory, 'Wallet_<dbname>', inside the <deployment_dir>/etc directory.

  • RUNFIXUPS for running Automated Fixups

ZDM can now generate and run fixups for a limited number of issues that cause some checks to not pass. For logical migration, ZDM provides an option for running automated fixups: the new logical migration response file parameter is used to run automated fixups on the target database host. Fixups are SQL scripts that help resolve failures in some of the pre-migration checks for the source databases. See the RUNFIXUPS parameter for more information.

  • GGADMIN privilege checks are improved

Usability improvement for online logical migration: ZDM prechecks now notify the user about missing privileges for the Oracle GoldenGate administrator in the target database. The user can grant the indicated missing privileges to the Oracle GoldenGate administrator in the target database to avoid replication errors.

  • Support for GoldenGate schema name different from the default ggadmin

ZDM now supports a GoldenGate schema with a name different from the default ggadmin for the source and target databases. If you specify a different name, ZDM automatically updates the GLOBALS file during the ZDM_VALIDATE_GG_HUB phase. If the source Oracle GoldenGate admin username differs from the target's, ZDM includes the source Oracle GoldenGate admin schema in the migration. See Additional Logical Migration Prerequisites.
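
    For reference, the GLOBALS entry designating the GoldenGate schema is a single line such as the following; the schema name shown is illustrative:

    GGSCHEMA ggadmin2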

  • Support for Physical Migration and Upgrade of CDB Source Databases

ZDM supports migration and upgrade of a CDB for EXACS/EXACC and non-EXACS/EXACC target environments using dbaascli and AutoUpgrade, respectively. The upgrade starts after the migration completes. Provision a database home at the upgrade version on the target node, and provide the provisioned home path in the response file property ZDM_UPGRADE_TARGET_HOME. See Migration and Upgrade of CDB Databases.
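
    For example, the response file entry for the provisioned home might look like the following; the path shown is illustrative:

    ZDM_UPGRADE_TARGET_HOME=/u02/app/oracle/product/19.0.0/dbhome_2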

  • ZDM Support for Migration, Upgrade, and Multitenant Conversion of Non-CDB Databases

ZDM now supports migration, upgrade, and multitenant conversion of non-CDB source databases. ZDM performs a migration to a non-CDB auxiliary database, leverages the AutoUpgrade utility, and then performs the conversion to multitenant via a plug-unplug operation. For more information, see Migration, Upgrade, and Conversion of a Non-CDB to a PDB Database. If an additional home at the same version as the source is needed and you already have one on the target, you can use it without having to provision a new home.

  • ZDM now supports Autonomous Databases as Source Databases

ZDM supports logical migration from an ADB (ADB-S/ADB-D) source. You can either provide the ADB connection details with the existing SOURCEDATABASE_CONNECTIONDETAILS properties or use the new parameter SOURCEDATABASE_OCID. The value of SOURCEDATABASE_ENVIRONMENT_DBTYPE must be ADB-S or ADB-D. Note: Only Autonomous Databases are valid as targets.
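
    For example, a response file for an ADB-S source might include the following; the OCID is a placeholder:

    SOURCEDATABASE_OCID=ocid1.autonomousdatabase.oc1..<unique_id>
    SOURCEDATABASE_ENVIRONMENT_DBTYPE=ADB-S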

  • New parameters for physical migration

    For ZDM physical migration, wallets can now also be specified using new dedicated response file parameters.

  • Installing Zero Downtime Migration on Red Hat Enterprise Linux 9 in an Oracle Cloud Infrastructure Instance

    Refer to the steps available at Installing Zero Downtime Migration on Red Hat Enterprise Linux 9.

  • Zero Downtime Migration supports Oracle Exadata Database Service on Dedicated Infrastructure, Oracle Exadata Database Service on Cloud@Customer, Oracle Base Database Service (Virtual Machine and Bare Metal) for Oracle Database 23ai (Only supported for target databases)

    See Supported Database Versions for Migration for more information.

  • Using sudoasuser authentication plugin for logical migration

You can migrate a database from source to target, performing all actions on the source and target database nodes without superuser privilege, using the sudoasuser authentication plugin, which connects to a host as a sudo user. The migration avoids any root access requirement. See Migrating with SUDO User Privilege for more information.

  • Use of wildcard expressions is allowed for the RELOADOBJECTS parameter

    See RELOADOBJECTS-LIST_ELEMENT_NUMBER and Selecting Objects for Migration for more information.

  • The zdmcli query job now reports Oracle GoldenGate replication metrics for online logical migration

    See query job for more information.

1.3 Bugs Fixed

Zero Downtime Migration Release 21.5 introduces the bug fixes listed in the following table.

Table - Bugs Fixed In Zero Downtime Migration Release 21.5

Bug Number Description
35239994 ZDM: RESTORING CONTROL FILE WITH NON OMF FORMAT
35483295 ZDM: DST UPGRADE FAILS DUE TO TNS_ADMIN PARAMETER
35616936 ORA-28374 ERROR DURING MZDM_CONVERT_NONCDB2PDB.PL EXECUTION
35688382 ZDM - MIGRATION OUT OF AWS - FAILURE AT COPYFILES PHASE
35721877 ORA-01619 ERROR AT ZDM_CLONE_TGT PHASE
35680161 ZDM_APPLY_LAG_MONITORING_INTERVAL TO DAILY CAUSING SWITCHOVER JOB TO SCHEDULE AFTER A DAY
35816145 ZDM DATA PUMP MIGRATION TO ADB ON OC2 REALM IS FAILING
35744878 CPAT - CPAT WARNING WHILE MIGRATING TO ADB-C@C
35931123 ZDM INIT PARAMETER MODIFICATION LEADING TO DATA GUARD CONFIGURATION ISSUES
35849014 ZDM:NON-CDB TO CDB PHYSICAL MIGRATION CHECK -EVAL
35300079 ZDM: NEED CHECK FOR EXECUTION OF NONCDB_TO_PDB.SQL SCRIPT STATUS
35552928 ZDM: DISABLE FLASHBACK
34069672 ZDM: DMS ENFORCE DUMPS ENCRYPT NONE FOR STANDARD EDITION DATABASE
35893947 ZDM IS FAILING DURING ZDM_CLONE_TGT PHASE WITH ERROR ORA-01127
35801705 ZDM - DB PHYSICAL MIGRATION, LOG_ARCHIVE_DEST_1 AND 10 PROPERLY SET ON TARGET EXACS DB
35816117 ZDM: DG CLEANUP ISSUES
35717895 PL21.5ZDM: SOLARIS /AIX.PPC64 : ZDM_VALIDATE_SRC FAILURE FOR 11204 RAC DATABASE:PRGD-1070 : QUERY TO RETRIEVE CONTAINER COUNT FROM CONTAINERS VIEW V$CONTAINERS FAILED
35340742 USER ACTION SPECIFIED FOR PHASE ZDM_PREPARE_SWITCHOVER_APP WHICH DOES NOT SUPPORT USER ACTION
34707257 ZDM: ORA-01144: FILE SIZE (12582400 BLOCKS) EXCEEDS MAXIMUM OF 4194303 BLOCKS
35181477 PDB CONVERSION WITH WARNING MESSAGE SERVICE NAME CONFLICT
35900871 ZDM:PRCZ-2135 : SFTP FAILURE TO RETRIEVE THE FILE ATTRIBUTES - ZDM_COPYFILES - ZDLRA.ZIP - ZDLRA
35920799 ZDM DISCOVER FAILS IF DATABASE SERVER NAME IS IN UPPERCASE
36383322 INIT PARAMETERS ARE NOT CARRIED OVER FROM SOURCE DB
36346251 ZDM MIGRATION IS FAILING WITH ERROR ORA-27211
36508136 ZDM_ADVANCE_SEQUENCES FAILS WITH PRGD-1000 (NUMERIC OVERFLOW)
36597616 ZDM DOES NOT CORRECTLY COPY THE FILE BASED WALLET FOR PDB IN ISOLATED MODE
36497386 CLUSTER_INTERCONNECTS PARAMETER ON TARGET IS NOT PRESERVED DURING PHYSICAL MIGRATION
36622632 SUPPORT FOR OKV ENDPOINT SHARING
36731870 EXCLUDEOBJECTS AFFECT TO PHASE RELOAD OBJECTS . TEMPORARY TABLE IS NOT CREATED WITH ORA-0942

1.4 Downloading the Zero Downtime Migration Installation Software

For a fresh installation of the latest Zero Downtime Migration software version, go to https://www.oracle.com/database/technologies/rac/zdm-downloads.html.

1.5 Downloading the Zero Downtime Migration Documentation

You can browse and download Zero Downtime Migration documentation at https://docs.oracle.com/en/database/oracle/zero-downtime-migration/.

1.6 General Information

At the time of this release, there are some details and considerations about Zero Downtime Migration behavior that you should take note of.

1.6.1 Running RHP and Zero Downtime Migration Service on the Same Host

If the Zero Downtime Migration service is installed on the same host where an RHP server is deployed, the following workarounds apply.

If you have started an RHP server/client on the same node as the Zero Downtime Migration service, using the default port, you must either:

  • Stop RHPS/RHPC

  • Modify the port for RHPS/RHPC

This is to avoid port collision between RHP and Zero Downtime Migration. If you don't want to change RHP configuration, you can also modify the port for Zero Downtime Migration before starting the Zero Downtime Migration service.

To identify the ports being used by Zero Downtime Migration:

ZDMinstallation/home/bin/zdmservice status 

To stop the Zero Downtime Migration service:

ZDMinstallation/home/bin/zdmservice stop 

To modify the ports:

ZDMinstallation/home/bin/zdmservice modify -help
Modifies configuration values.
USAGE: zdmservice modify
Optional parameters:
                     transferPortRange=<Range_of_ports>
                     rmiPort=<rmi_port>
                     httpPort=<http_port>
                     mysqlPort=<mysql_port>

For example:

ZDMinstallation/home/bin/zdmservice modify mysqlPort=8899
Editing MySQL port...
Successfully edited port=.* in file my.cnf
Successfully edited ^\(CONN_DESC=\).* in file rhp.pref
Successfully edited ^\(MYSQL_PORT=\).* in file rhp.pref

1.6.2 Cross-Edition Migration

Zero Downtime Migration cannot be used to migrate an Enterprise Edition database to a Standard Edition database. Conversely, Standard Edition databases can be migrated to Enterprise Edition databases, except with the physical online migration method. For Data Pump-based migrations, the Data Pump dumps are not exported encrypted.

1.6.3 EXT3 File System Support

There is a known issue that prevents Zero Downtime Migration from being installed on EXT3 file systems. The root cause is MySQL bug 102384. This is not a limitation of Zero Downtime Migration, but a constraint from MySQL. When that bug is resolved, Zero Downtime Migration is expected to work on EXT3 file systems.

1.7 Known Issues

At the time of this release, the following are known issues with Zero Downtime Migration that could occur in rare circumstances. For each issue, a workaround is provided.

1.7.1 Known Issue for Migrations Involving Encrypt On Restore

Issues: For migrations involving encrypt on restore, such as restore from service and backup/restore, for Oracle Database 19c or later, ensure that you apply the following two fixes to the target database before starting the migration:

  • 35495759 - (G) - 80 - V$DATABASE_KEY_INFO / CONTROL FILE DB KEY NEEDS TO BE RESYNC'ED FROM SYSTEM DATAFILES (RTI 26895084)
  • 36879267 - (G) - 80 - ORA-28374: TYPED MASTER KEY NOT FOUND IN WALLET AFTER FIX FOR BUG 36697088

1.7.2 ZDM Encounters Failures During the ZDM_PRE_MIGRATION_ADVISOR Phase

Issues: ZDM encounters failures during the ZDM_PRE_MIGRATION_ADVISOR phase for Oracle Autonomous Database migrations for Oracle Database 23ai.

Solution: For an ADB source database, to avoid Timezone or CPAT execution errors, run the ZDM migration with the following parameters:
  • RUNCPATREMOTELY=TRUE
  • COPYCPATREPORTTOZDMHOST=FALSE

1.7.3 Consecutive Migrations for Creating Tablespaces Causing Space Exhaustion

Issues: If you select autocreate for tablespaces and any tablespace fails to create, then retrying the tablespace creation duplicates the creation of the datafiles. The additional datafiles added on the target database lead to space exhaustion.

Solution: Exclude the creation of tablespaces by specifying the TABLESPACEDETAILS_EXCLUDE parameter, or set TABLESPACEDETAILS_AUTOCREATE to FALSE.
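
For example, to disable the automatic creation of tablespaces in the response file:

TABLESPACEDETAILS_AUTOCREATE=FALSE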

1.7.4 Hybrid Migration Failed at ZDM_VALIDATE_XTTS_SRC

Issues: The hybrid migration fails at ZDM_VALIDATE_XTTS_SRC while migrating from Oracle Database 11.2.0.4 source to Oracle Database 19c target database.

Solution: If you plan to migrate from Oracle Database 11.2.0.4 sources, you also need Perl patch version 5.28.2 or later.

1.7.5 Hybrid Migration Encrypted Tablespace Migration Issues

Issues: The hybrid migration has the following issues when the source database contains encrypted tablespaces:
  • The tablespaces are migrated even when they are excluded using the TABLESPACEDETAILS_EXCLUDE parameter.
  • The excluded tablespace is displayed as unencrypted at the target database even when it was encrypted at the source database.

1.7.6 Hybrid Migration Failing at ZDM_DATAPUMP_IMPORT_TGT for Oracle Database 12.2 Source Database

Issues: The hybrid migration fails at the ZDM_DATAPUMP_IMPORT_TGT phase when the source database is Oracle Database 12.2, due to the SPATIAL_CSW_ADMIN_USR object. According to the related MOS note, the user SPATIAL_WFS_ADMIN_USR is no longer needed and can be ignored for Oracle Database 12.2.

Solution: This is expected behavior according to the Oracle Data Pump documentation. You can review the errors, and if these are the only errors, resume the job with -ignore IMPORT_ERRORS.

1.7.7 ZDM_XTTS_RESTORE_FULL_TGT: Restore Foreign Tablespace Users to New from Backup Set Fails with Permission Errors

Issues: When the source and target database users are different and the primary group is not shared either, the restore of the backups fails with the following error stack:
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 06/24/2024 03:32:19
ORA-19505: failed to identify file "/nfsshare/allnodes/importdumpaix/rman_job_10/RRACW_backup_042u4ku2_4_1_1"
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 13: Permission denied
Additional information: 7

This occurs because the RMAN backups are not accessible to the target Oracle user. Setting the ZDM response file variable RMANSETTINGS_PUBLICREAD=TRUE does not help either.

Solution: As a workaround, if it is not possible to use the same user or a common primary group, make the backups accessible to the target Oracle user by running an OS command such as the following on the backup location:

chmod -R a+rX /nfsshare/allnodes/importdumpaix/rman_job_10

1.7.8 Physical Offline Migration Fails in ZDM_DATABASE_UPGRADE_TGT

Issues: Physical offline migration from Oracle Database 19c source to Oracle Database 23ai target fails in the ZDM_DATABASE_UPGRADE_TGT phase with the following error:

ORA-02149: Specified partition does not exist

Solution: As a workaround, get the patch for Bug 36710007 and apply it to the target database home. This issue is tracked with Bug 36710007.

1.7.9 Data Pump Export Logs Skipping The Source Database Host/S3 Bucket

Issues: The Oracle Data Pump export logs skip the source database host/S3 bucket while migrating from an Amazon Web Services (AWS) RDS source to an Oracle Autonomous Database Serverless target database.

Solution: The following scenarios are possible:
  • The specified directory exists in the source: after completion of the migration job (SUCCESS/FAILURE), the export and estimate logs are present in the existing directory.
  • The specified directory does not exist in the source and is created in the workflow: the following cases apply to this scenario:
    • Eval job: after completion of the eval job, the estimate log is present in the created directory.
    • Failed migration job: after completion of the job, the estimate log is present in the created directory.
    • Successful migration job: ZDM creates the directory in RDS, stores the export dumps and log files in this directory, uploads the log files to S3, and drops the created directory.
ZDM creates and drops the Data Pump export directory if the specified directory does not exist in the database, so it deletes all the created logs. As a workaround, provide an existing Data Pump directory to collect the logs.
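
For reference, a response file sketch that points ZDM at an existing directory object might look like the following; the parameter names assume the standard logical migration response file for your ZDM release, and the values are illustrative:

DATAPUMPSETTINGS_EXPORTDIRECTORYOBJECT_NAME=DATA_PUMP_DIR
DATAPUMPSETTINGS_EXPORTDIRECTORYOBJECT_PATH=/u01/app/oracle/dpdir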

1.7.10 Known Issues for Upgrade Scenario

Issues: Environment variables such as ORACLE_HOME, and any other environment variable set to a symlink path, might cause failure of the UPGRADE-related phases.

Solution: Set ORACLE_HOME and the other environment variables to the actual path, not the symlink path.

1.7.11 Issue with Hybrid Migration for Oracle Database 11.2.0.4 Source Database

Issue: The hybrid migration from an Oracle Database 11.2.0.4 source is failing in ZDM 21.5 with the following error:

ERROR at line 1:
ORA-00904: "ORACLE_MAINTAINED": invalid identifier

Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

1.7.12 Hybrid Migration to Oracle Database 23ai Target Failing with EM_EXPRESS_ALL Does Not Exist Error

Issue: The hybrid migration to Oracle Database 23ai target database is failing at the ZDM_DATAPUMP_IMPORT_TGT phase with the EM_EXPRESS_ALL does not exist error.

Workaround: Review the import errors and resume job with -ignore IMPORT_ERRORS.

1.7.13 Physical migration for Non-CDB to PDB along with timezone upgrade and database upgrade fails in the TIMEZONE_UPGRADE_PREPARE_TGT phase

Issue: Physical migration for Non-CDB to PDB along with timezone upgrade and database upgrade fails in the TIMEZONE_UPGRADE_PREPARE_TGT phase.

1.7.14 Issue with logical migration with DATAPUMPSETTINGS_JOBMODE=FULL

Issue: For an Oracle Base Database 21c target database, ZDM logical migration fails when DATAPUMPSETTINGS_JOBMODE=FULL; it gets stuck at the ZDM_DATAPUMP_IMPORT_TGT phase.

1.7.15 Issue with RMAN encryption in offline migration

Issue: The buffer cache layer that enables the RMAN 'encrypt on restore' feature has a known issue when the control file is restored a second time.

Workaround: Encrypt the system tablespaces post migration.

1.7.16 Issue with Logical Migration when COPYCPATREPORTTOZDMHOST is set to TRUE while using ZDMAUTH credentials for migration

Issue: When the parameter COPYCPATREPORTTOZDMHOST=TRUE is set, the CPAT report is not copied to the ZDM host when ZDMAUTH credentials are used for the migration.

Workaround: This issue does not occur with dbuser credentials.

1.7.17 ZDM_NONCDBTOPDB_CONVERSION fails during NONCDBTOPDB migrations

Issues: ZDM_NONCDBTOPDB_CONVERSION fails during NONCDBTOPDB migrations for the following reasons:
  • During NONCDBTOPDB migrations, the conversion phase (ZDM_NONCDBTOPDB_CONVERSION) fails unexpectedly for some variants of Oracle Database 12c (12.1).
  • During NONCDBTOPDB migrations, the conversion phase fails if patch compatibility violation errors were found and the datapatch phase was skipped (TGT_SKIP_DATAPATCH=TRUE).

Workaround: For NONCDBTOPDB migrations, it is recommended to run the datapatch phase (TGT_SKIP_DATAPATCH=FALSE) before the conversion phase.

1.7.18 ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

Issues: The following error occurs during the ZDM_SWITCHOVER_SRC phase:

ORA-12514: TNS:listener does not currently know of service requested in connect descriptor. Unable to connect to database using SERVICE_NAME=<unique_db_name>_DGMGRL.

Workaround: Manually register the <db_unique_name>_DGMGRL service by referring to: Oracle Data Guard Broker and Static Service Registration (Doc ID 1387859.1) https://support.oracle.com/epmos/faces/DocContentDisplay?id=1387859.1.
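
A minimal sketch of the static listener.ora entry for the <db_unique_name>_DGMGRL service follows; the Oracle home path and SID are illustrative placeholders, and the MOS note above remains the authoritative reference:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = <db_unique_name>_DGMGRL)
      (ORACLE_HOME = /u01/app/oracle/product/19.0.0/dbhome_1)
      (SID_NAME = <oracle_sid>)
    )
  )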

1.7.19 ZDM Job Fails With Permission Denied Issue

Issues: The ZDM job fails in the GET_SRC_INFO phase with a permission denied issue:
  • PRCZ-4001 : failed to execute command "/tmp//oradiscover_<>.sh"
  • PRCZ-2103 : Failed to execute command "/tmp//oradiscover_<>.sh" bash: /tmp//oradiscover_120212.sh: Permission denied

Workaround: You might get the aforementioned errors if the migration job fails in the GET_SRC_INFO phase. However, these might not be the actual errors; to verify the actual errors, check the log file.

Ensure that /tmp is mounted with execute permission as a prerequisite for the source and target databases.

1.7.20 Known Issues for Hybrid Migration

Issues: The following features are not supported in this release:
  • RMAN backup encryption.
  • Encrypt on restore.

Solution: Both of the above issues depend on the availability of a fix for the following RMAN bug:

Bug 31229602 - RMAN BACKUPS - K_BTTRDA BLOCKS PROCESSED BY KD4_ENCRYPT_OFFSET1 ARE NOT ENCRYPTED.

1.7.21 Procedural Replication Does Not Work with Error 'ORA-01031: INSUFFICIENT PRIVILEGES'

Issue: Migration fails with the following error:

ORA-01031: INSUFFICIENT PRIVILEGES

Solution: As a workaround, run the following commands as the SYS user:

alter session set container = '<pdb>';
grant set container to ggadmin;

1.7.22 Physical Migration Failing at UPGRADE_TGT for a Non-CDB Source

Issue: While performing plug-in and upgrade of a non-CDB source to a higher version CDB, the physical migration might fail in the ZDM_DATABASE_UPGRADE_TGT phase due to a SYS.ALERT_QUE issue. This issue occurs if you get the following error in catupgrd0.log:

SQL>
SQL> -- Create alert queue table and alert queue
SQL> BEGIN
  2       BEGIN
  3       dbms_aqadm.create_queue_table(
  4            queue_table => 'SYS.ALERT_QT',
  5            queue_payload_type => 'SYS.ALERT_TYPE',
  6            storage_clause => 'TABLESPACE "SYSAUX"',
  7            multiple_consumers => TRUE,
  8            comment => 'Server Generated Alert Queue Table',
  9            secure => TRUE);
 10       dbms_aqadm.create_queue(
 11            queue_name => 'SYS.ALERT_QUE',
 12            queue_table => 'SYS.ALERT_QT',
 13            comment => 'Server Generated Alert Queue');
 14       EXCEPTION
 15         when others then
 16           if sqlcode = -24001 then NULL;
 17           else raise;
 18           end if;
 19       END;
 20       dbms_aqadm.start_queue('SYS.ALERT_QUE', TRUE, TRUE);
 21       dbms_aqadm.start_queue('SYS.AQ$_ALERT_QT_E', FALSE, TRUE);
 22       commit;
 23  EXCEPTION
 24      when others then
 25         raise;
 26  END;
 27  /
BEGIN
*
ERROR at line 1:
ORA-04063: SYS.ALERT_QUE has errors
ORA-06512: at line 25
ORA-06512: at "SYS.DBMS_AQADM", line 742
ORA-06512: at "SYS.DBMS_AQADM_SYS", line 8049
ORA-06512: at "SYS.DBMS_AQADM_SYSCALLS", line 912
ORA-06512: at "SYS.DBMS_AQADM_SYS", line 8025
ORA-06512: at "SYS.DBMS_AQADM", line 737
ORA-06512: at line 20

Solution: As a workaround, perform the following steps:

  1. On the target node, copy all files related to the migrated database from the 12.1 home dbs directory to the 19c home dbs directory
  2. Perform the steps mentioned in Upgrade to 12.2 fails with Error : ORA-04063: SYS.ALERT_QUE has errors (Doc ID 2632809.1).
  3. Perform the steps mentioned in How to recreate the SYS.ALERT_QUE (Doc ID 430146.1).

1.7.23 Restriction for Number of INCLUDEOBJECTS

Issue: There is an INCLUDEOBJECTS limitation in the Data Pump component: a large number of INCLUDEOBJECTS values cannot be supplied.

Solution: As a workaround, create a table in the source database, in the export user schema, that lists all the TABLE objects to be filtered, and specify the following parameters.

For example, create a table <ADMIN schema>.INCLUDE_TEMP_LIST and list all objects specified in INCLUDEOBJECTS to be filtered for schema SCOTT:
INCLUDEOBJECTS-1=owner:SCOTT
DATAPUMPSETTINGS_METADATAFILTERS-1=name:NAME_EXPR,value:'IN (select OBJECT_NAME from <ADMIN schema>.INCLUDE_TEMP_LIST)',objectType:TABLE
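
A minimal sketch of creating and populating the filter table follows; the single-column layout and the query used to list the objects are illustrative assumptions:

CREATE TABLE <ADMIN schema>.INCLUDE_TEMP_LIST (OBJECT_NAME VARCHAR2(128));

INSERT INTO <ADMIN schema>.INCLUDE_TEMP_LIST (OBJECT_NAME)
  SELECT table_name FROM dba_tables WHERE owner = 'SCOTT';
COMMIT;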

Note:

For online logical migration, set the filtering of such objects in the Oracle GoldenGate EXTRACT parameter after creation of extract. Pause the migration job after ZDM_CREATE_GG_EXTRACT_SRC and update the parameter file.

1.7.24 Non CDB to PDB Not Supported for DR Migrations

Issue: The use case of migrating from a non-CDB source database to a PDB target database is not supported for DR migrations.

Solution: DR operates at the container level. You can set up the target CDB to have a DR of its own, and when the non-CDB is plugged into the target CDB (a regular non-CDB to PDB migration), it should get replicated via the target CDB.

1.7.25 ZDM Operations Fail with "Unable to negotiate key exchange for kex algorithms"

Issue: When the source database is on an old Linux distribution that has only deprecated KexAlgorithms, ZDM operations fail with the following error:

Unable to negotiate key exchange for kex algorithms.

Solution: As a workaround, add the new configuration flag to enable the deprecated algorithms available in these old distributions, as follows:
  1. Edit the <zdmBase>/crsdata/<hostname>/rhp/conf/rhp.pref file to add the following line: USE_LEGACY_SSH=TRUE
  2. Restart ZDM.

1.7.26 Resume Job from 21.3.12 After Upgrading to 21.4.2 Fails with Unrecognized Field "environment" Error

Issue: When you resume a job that was started using a ZDM version older than 21.4.1 after upgrading ZDM to 21.4.1, the resume job fails with the following error:

Unrecognized field "environment" (class oracle.cluster.gridhome.apis.actions.database.ZdmPayload$SourceContainerDatabase$Builder), not marked as ignorable (6 known properties: "agentId", "connectionDetails", "copy", "streamId", "adminUsername", "ggAdminUsername"]).

Solution: As a workaround, install ZDM 21.4.2 or later when upgrading from an earlier version of ZDM.

1.7.27 Data Guard Cleanup Issues

Issue: The following issues are observed during a Data Guard cleanup:
  • Clearing log_archive_config using the following statement causes the instance to crash with ORA-16188:
    Alter system set
            log_archive_config='' scope=both sid='*'
  • The fal_server parameter does not get cleared and points to the target database. This results in the source database continuing to fetch redo logs.
Solution: Perform the following steps for the above issues:
  • According to MOS note Doc ID 1580482.1, the correct way to reset log_archive_config is to set it to NODG_CONFIG:
    alter system set
            log_archive_config=NODG_CONFIG  scope=both sid='*';
  • Clear the fal_server by running the following command:
    alter system set fal_server='' scope=both sid='*';

1.7.28 ZDM INIT Parameter Modification Leading to Data Guard Configuration Issues

Issue: While performing an offline physical migration from Oracle Exadata Database Service to Oracle Exadata Database Service on Dedicated Infrastructure, Data Guard configuration from console fails as it expects certain init parameters on the database level (with *.init_parameter) instead of the instance specific parameters.

For this use case, ZDM modifies the following init parameters and adds the additional entries:
  • ZDM removes: *.compatible='19.0.0'
  • ZDM adds: inst1.compatible='19.0.0' and inst2.compatible='19.0.0'
  • This leads to an issue when you try to configure Data Guard in the Cloud post migration, and produces the following error:
CDG-50611 : Parameter COMPATIBLE is not
        set Set parameter as ALTER SYSTEM SET COMPATIBLE=<value>
  • Further, ZDM removes: *.db_files=1024
  • ZDM adds: inst1.db_files=1024 and inst2.db_files=1024
  • When configuring Data Guard post migration, Data Guard takes the value of db_files as 200, as there is no entry such as *.db_files=1024.
  • Further, ZDM adds the following additional entries, which are not required for the database to function:
    exacs-hostname1.thread=1
    exacs-hostname2.thread=2
    exacs-hostname1.undo_tablespace='UNDOTBS1'
    exacs-hostname2.undo_tablespace='UNDOTBS2'
    exacs-hostname1.instance_number=1
    exacs-hostname2.instance_number=2

    Note:

    ZDM adds these entries in addition to the instance specific thread entries, instance specific instance_number entries, and instance specific undo_tablespace entries.
Solution:
  1. Remove the instance-specific values wherever possible, and remove the following parameters:
    exacs-hostname1.undo_tablespace='UNDOTBS1'
    exacs-hostname2.undo_tablespace='UNDOTBS2'
    exacs-hostname1.instance_number=1
    exacs-hostname2.instance_number=2
    exacs-hostname1.thread=1
    exacs-hostname2.thread=2
    
  2. Create the entries for sid='*' as shown:
    SQL> alter system set compatible='19.0.0' scope=spfile sid='*';
     
    System altered.
     
    SQL> alter system set db_files=1024 scope=spfile sid='*';
     
    System altered.
    
  3. Query the entries created above as shown:
    SQL> show parameter compatible

    NAME                    TYPE        VALUE
    compatible              string      19.0.0
    noncdb_compatible       boolean     FALSE

1.7.29 Oracle Data Pump Startup Errors

Issue: If the ZDM log reports ORA-20000: Datapump: Unexpected error in the job output, then the operation failed to start in the database. This can be because of:
  • Invalid permissions on the export directory path,
  • Invalid arguments, or
  • Procedure semantics issues due to the combination of input parameters.

    The underlying error is not captured in the Data Pump error log because the job did not start. In such cases, the Oracle Data Pump start failure has to be looked up in the database trace files, as shown below.

Solution: Identify the Data Pump job unique identifier associated with the ZDM job from the first line of the job output; in this example it is ZDM_152_DP_EXPORT_9176.

Perform the following steps:
  1. Connect to the source database host (if it is a RAC, the log can be on any of the database nodes, so repeat the following steps on each node).
  2. Identify the diag location using the following query (if required): select name, value from v$diag_info where name like '%Trace';
  3. Change to that directory using cd.
  4. Run the following command: grep ZDM_152_DP_EXPORT_9176 *dm*
  5. Open the file containing the ZDM_152_DP_EXPORT_9176 job start details and identify the ORA- errors that resulted in the failure.
For import operation start failure:
  • Perform the same steps on the target nodes if the action that failed is IMPORT.
  • For the ADB target case, check for similar text in the output of the following query: select payload from v$diag_trace_file_contents where trace_filename like '%dm0%';

1.7.30 NON-CDB TO PDB Conversion Use-case for ZDM AUX STARTUP

Issue: When migrating a non-CDB source to a CDB target as a PDB, ZDM creates an auxiliary database so that it can first migrate the non-CDB source as a non-CDB database on the target. After the non-CDB is brought to the target, ZDM does an unplug/plug to plug in the migrated non-CDB. To create the auxiliary database, ZDM uses the source database SPFILE; this means that while the migration is in progress, the target needs to be able to run two databases simultaneously: the target CDB and the auxiliary database running with the source SPFILE/configuration. When the source is configured with a very large memory size or SGA/PGA, the auxiliary database might fail to start.

Solution: The following are the possible solutions (see the sketch after this list):
  1. Increase the target sizing for the duration of the migration. This could mean increasing the sysctl parameter configuration, or the OCPU or memory sizing if using elastic compute, and so on.
  2. Decrease the source memory size (SGA/PGA) before the migration.
  3. Change the ZDM auxiliary database size by recreating the AUX SPFILE.
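
For example, a minimal sketch of decreasing the source memory parameters before the migration; the values are illustrative and must be sized for your workload:

SQL> alter system set sga_target=8G scope=spfile sid='*';
SQL> alter system set pga_aggregate_target=2G scope=spfile sid='*';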

1.7.31 ZDM Skips C## or c## Users

Issue: During a logical migration, the C## users found in the PDB are not moved by Oracle Data Pump. However, the grants and PROFILES associated with them fail to import. Setting an explicit EXCLUDE on these C## users ensures that their dependent objects are not moved as well.

Workaround: ZDM does not migrate common users found in the PDB (starting with 'C##') for SCHEMA mode migration; for a FULL job, it explicitly finds the common users in the PDB and sets a SCHEMA EXCLUDE for all such users.

1.7.32 Tables Created in the Data Tablespace in Oracle Autonomous Database Instead of the Respective User Data Tablespaces

Issue: Objects are getting mapped to DATA at the target and tables are getting created in the data tablespace of Oracle Autonomous Database on Exadata Cloud@Customer instead of the respective user data tablespaces.

Expected behavior: Migration to Oracle Autonomous Database on Exadata Cloud@Customer with objects in the same tablespace as it was at the source database.

Workaround: ZDM no longer sets the TRANSFORM SEGMENT_ATTRIBUTES parameter to NO if user tablespaces are created in the target. Use the following workaround to avoid this:

Disable all the default transforms by setting DATAPUMPSETTINGS_SKIPDEFAULTTRANSFORM=TRUE, then select the ones necessary after review. As this setting avoids all the default transforms, review the following transforms and set the relevant ones as expected. See Default Data Pump Parameter Settings for Zero Downtime Migration if you need more details on the defaults shown below.

DATAPUMPSETTINGS_SECUREFILELOB=TRUE
DATAPUMPSETTINGS_METADATATRANSFORMS-1=name:LOB_STORAGE, value:'SECUREFILE'
DATAPUMPSETTINGS_METADATATRANSFORMS-2=name:OMIT_ENCRYPTION_CLAUSE, value:1
DATAPUMPSETTINGS_METADATATRANSFORMS-3=name:DWCS_CVT_IOTS, value:1
DATAPUMPSETTINGS_METADATATRANSFORMS-4=name:CONSTRAINT_USE_DEFAULT_INDEX, value:1

1.7.33 Exporting and Importing fails for ADB-S and ADB-D During a Logical Migration

Issues: Exporting Oracle Autonomous Database Serverless and importing to Oracle Autonomous Database on Exadata Cloud@Customer fails during a logical migration. Similarly, exporting Oracle Autonomous Database on Exadata Cloud@Customer and importing to Oracle Autonomous Database Serverless fails during a logical migration.

This happens when the default roles are not present in Oracle Autonomous Database Serverless and Oracle Autonomous Database on Exadata Cloud@Customer respectively.

1.7.34 Skip the ZDM RELOAD of empty schema or schema with no qualifying objects

Solution: ZDM filters objects for reload; if there are no objects to be reloaded for a specific schema after applying the following conditions, then skip the reload feature or do not include that particular schema.

ZDM filters the following objects:
  • Objects from DBA_GOLDENGATE_SUPPORT_MODE that have SUPPORT_MODE=NONE, SUPPORT_MODE=PLSQL, or SUPPORT_MODE=INTERNAL.
  • Objects from DBA_GOLDENGATE_NOT_UNIQUE that are marked BAD_COLUMN=Y.

    ZDM skips QUEUE_TABLES from reload.

    When no objects are listed for reload from a specific schema, skip the reload feature or do not include that particular schema.

1.7.35 PREMIGRATION ADVISOR COMPILATION FAILURES DURING DRY RUN - PRCZ-2103 CAN'T LOCATE JSON/PP.PM

Issue: The PRCZ-2103 CAN'T LOCATE JSON/PP.PM error occurs during the ZDM_PRE_MIGRATION_ADVISOR phase.

Solution: When the source database is Oracle Database 11.2.0.4, for performing a logical migration, specify the following parameters in the response file:
  • RUNCPATREMOTELY=TRUE
  • COPYCPATREPORTTOZDMHOST=FALSE

1.7.36 ORA-23605: INVALID VALUE "" FOR GOLDENGATE PARAMETER PARALLELISM.

Issue: The Oracle GoldenGate Extract startup fails when the source database is Oracle Standard Edition 2, due to the following error:

ORA-23605: INVALID VALUE "" FOR GOLDENGATE PARAMETER PARALLELISM.

Solution: If you do not apply the patch on the source database, then specify the GOLDENGATESETTINGS_EXTRACT_PARALLELISM=1 parameter in the ZDM response file. ZDM then sets TRANLOGOPTIONS INTEGRATEDPARAMS (parallelism 1) for Oracle GoldenGate Extract.

1.7.37 PRCZ-4002 : failed to execute command "/bin/cp" using the privileged execution plugin "zdmauth" on nodes "dbserver"

Issue: The ZDMCLI RESUME JOB command fails during migration and the ZDM job pauses at the ZDM_CONFIGURE_DG_SRC phase. The error occurs when you update the /etc/hosts file of the source database server with a different IP address or alias for the source database server.

Solution: Ensure that the IP address of the source database server is correctly updated in the /etc/hosts file of the source database server and the ZDM server.

1.7.38 TLS Service is required for Fractional OCPU Services in Oracle Autonomous Database

Issue: A TLS service alias is required for fractional OCPU services in Oracle Autonomous Database; this alias is specified in the response file parameter. Specifying a non-TLS alias is not supported.

Solution: If the target database is Oracle Autonomous Database on Dedicated Exadata Infrastructure or Oracle Autonomous Database on Exadata Cloud@Customer using fractional OCPU services, then you can specify TP_TLS or LOW_TLS aliases for the TARGETDATABASE_CONNECTIONDETAILS_SERVICENAME parameter.

For more information about specifying the requirement for the service alias for the target database, see Setting Logical Migration Parameters.

1.7.39 Migrating from AIX to EXACC using NFS with Non-readable Dump Fails to CHOWN

Issue: Migrating from AIX to EXACC using NFS with non-readable dump fails to CHOWN in source AIX host.

Solution: Use an alternate option for migrating using NFS which is documented in Migrating to Co-Managed Database Server with NFS Data Transfer Medium.

However, the following scenario is not supported for IBM AIX: if the IDs do not match, Zero Downtime Migration automatically discovers the primary group of the target database user and changes the group of the dump to the primary group of the target database user.

1.7.40 Logical migration with DBUSER plugin must also set RUNCPATREMOTELY

Solution: To perform a logical migration using the database user authentication plug-in as dbuser, you must set the value of the RUNCPATREMOTELY parameter to TRUE.

See RUNCPATREMOTELY for information about this parameter.

1.7.41 Warnings shown when running zdmservice operations

Issue: A warning similar to the following is shown when running zdmservice operations start, stop, status, or deinstall.

Use of uninitialized value in concatenation (.) or string at / [...]
 /zdm21.3.1/home/lib/jwcctl_lib.pm line 571.
CRS_ERROR: Invalid data ALWAYS_ON= in _USR_ORA_ENV

Note that the line number in the output may vary.

Solution: This warning message can be ignored. It does not affect the use of the zdmservice operations or cause any issues for migration.

1.7.42 Logical Migration Using DBLINK Fails with PRGZ-1177

Issue: "PRGZ-1177 : Database link "dblink_name" is invalid and unusable" error causes failure in a logical migration using a database link in a PDB or multitenant database in version 12.1.0.x.

Solution: See 12c PDB or Multitenant Only: ORA-02085: Database Link "LINK_NAME_HERE" Connects To "TARGET_DB" (Doc ID 2344831.1).

1.7.43 PRGZ-1161 : Predefined database service "TP" does not exist

Issue: PRGZ-1161 : Predefined database service "TP" does not exist for Autonomous Database <ocid> is a known issue for the fractional OCPU configuration.

If you choose to configure 'Fractional ADB' (a fraction of an OCPU per database instead of an integer OCPU count), this flavor does not provide the standard service aliases such as HIGH.

Solution: Set the response file parameter TARGETDATABASE_CONNECTIONDETAILS_SERVICENAME to LOW_TLS or TP_TLS.

The available services are 'low' or 'low_tls' for Autonomous Data Warehouse with fractional OCPU, and 'tp' or 'tp_tls' for Autonomous Transaction Processing with fractional OCPU.
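
For example, the response file entry might look like the following; the service name is illustrative:

TARGETDATABASE_CONNECTIONDETAILS_SERVICENAME=mydb_tp_tls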

1.7.44 PRGG-1043 : No heartbeat table entries were found for Oracle GoldenGate Replicat process

Issue: An online logical migration job can report error PRGG-1043: No heartbeat table entries were found for Oracle GoldenGate Replicat process process_name due to one of the following causes:

  1. Initialization parameter job_queue_processes was set to zero in the source or target database.

    Solution: Run the following statements on the database:

    show parameter job_queue_processes;
    alter system set job_queue_processes=100 scope=both;
    exec dbms_scheduler.set_scheduler_attribute('SCHEDULER_DISABLED','FALSE');
  2. Scheduled job GG_UPDATE_HEARTBEATS is not active in the source database.

  3. The server hosting Oracle GoldenGate deployments has a different time zone than the source database.

Solution: First, apply one of the following solutions:

    • Modify the time zone for the server hosting Oracle GoldenGate deployments, OR

    • Use the web UI for the Oracle GoldenGate deployment to add Extract parameter TRANLOGOPTIONS SOURCE_OS_TIMEZONE and restart Extract.

      For example, if the source database time zone is UTC-5, then set parameter TRANLOGOPTIONS SOURCE_OS_TIMEZONE -5. For more information, see TRANLOGOPTIONS in Reference for Oracle GoldenGate.

    Then, ensure that the DST_PRIMARY_TT_VERSION property in the source database is up to date.

1.7.45 Restore Fails When Source Uses WALLET_ROOT

Issue: Zero Downtime Migration does not currently handle the migration of the TDE wallet from the source database to the target when the source database is using the wallet_root initialization parameter. Without the wallets available on the target database, the restore phase fails with an error similar to the following:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 06/15/2021 07:35:11
ORA-19870: error while restoring backup piece
/rman_PRD1/ZDM/IQPCZDM/c-3999816841-20210614-00
ORA-19913: unable to decrypt backup

Solution: Manually copy the wallet to the target and resume the job.
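
For example, a minimal sketch of locating and copying a file-based wallet, assuming you confirm the wallet location in V$ENCRYPTION_WALLET first (all paths are illustrative):

SQL> select wrl_parameter, status from v$encryption_wallet;

scp -r /u01/app/oracle/admin/<db_name>/wallet_root/tde oracle@target-host:<target_wallet_dir>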

1.7.46 PRCZ-4026 Thrown During Migration to Oracle Database 19.10 Target

Issue: When attempting to migrate to an Oracle Database 19.10 home at target, the migration job fails at phase ZDM_FINALIZE_TGT with error PRCZ-4026, because of Oracle Clusterware (OCW) Bug 31070231.

PRCZ-4026 : Resource ora.db_unique_name.db is already running on nodes node.

Solution: Apply the Backport Label Request (BLR) for Bug#32646135 to the target 19.10 dbhome to avoid the reported issue. Once the BLR is applied, you can resume the failed migration job to completion.

Precaution: For physical migrations, you can avoid this issue by ensuring that your target database home is not on Oracle Database 19.10.

1.7.47 Environments With Oracle 11.2.0.4 Must Apply Perl Patch

Issue: Before using Zero Downtime Migration, you must apply a Perl patch if your source database environment meets either of the following conditions:

  • Clusterware environment with Oracle Grid Infrastructure 11.2.0.4
  • Single instance environment with Oracle Database 11.2.0.4

Solution: Download and apply Perl patch version 5.28.2 or later. Ensure that both the source and target Oracle Database 11g home include the patch for BUG 30508206 - UPDATE PERL IN 11.2.0.4 DATABASE ORACLE HOME TO V5.28.2.

1.7.48 ORA-39006 Thrown During Logical Migration to Oracle Autonomous Database on Dedicated Exadata Infrastructure Over Database Link

Issue: When attempting to migrate a database to an Oracle Autonomous Database on Dedicated Exadata Infrastructure target over a database link, the migration job fails with error ORA-39006.

ORA-39006: internal error

Solution: This is a Data Pump issue that is being tracked with Bug 31830685. Do not perform logical migrations over a database link to Oracle Autonomous Database on Dedicated Exadata Infrastructure targets until the bug is fixed and the fix is applied to the Autonomous target database.

1.7.49 Zero Downtime Migration Service Fails To Start After Upgrade

Issue: The following scenario occurs:

  1. Perform migration jobs with Zero Downtime Migration 19.7

  2. Response files used in those jobs are removed

  3. Upgrade to Zero Downtime Migration 21.1

  4. Attempt to run a migration

The following errors are seen.

CRS_ERROR:TCC-0004: The container was not able to start.

CRS_ERROR:One or more listeners failed to start. Full details will be found in the appropriate container log fileContext [/rhp] startup failed due to previous errors sync_start failed with exit code 1.

A similar error is found in the log files located in zdm_installation_location/base/crsdata/hostname/rhp/logs/.

Caused by: oracle.gridhome.container.GHException: Internal error:PRGO-3003 : Zero downtime migration (ZDM) template file /home/jdoe/zdm_mydb.rsp does not exist.

Solution: To recover, manually recreate the response files listed in the log, and place them in the location specified in the log.

1.8 Troubleshooting

If you run into issues, check here in case a solution is published. For each issue, a workaround is provided.

1.8.1 Installation Issues

1.8.1.1 INS-42505 Warning Shown During Installation

Issue: The following warning is shown during installation.
/stage/user/ZDM_KIT_relnumber>./zdminstall.sh setup
oraclehome=/stage/user/grid oraclebase=/stage/user/base
ziploc=/stage/user/ZDM_KIT_relnumber/rhp_home.zip -zdm
---------------------------------------
Unzipping shiphome to gridhome
---------------------------------------
Unzipping shiphome...
Shiphome unzipped successfully..
---------------------------------------
##### Starting GridHome Software Only Installation #####
---------------------------------------
Launching Oracle Grid Infrastructure Setup Wizard...

[WARNING] [INS-42505] The installer has detected that the Oracle Grid
Infrastructure home software at (/stage/user/grid) is not complete.
   CAUSE: Following files are missing:
...

Solution: This warning message can be ignored. It does not affect the installation or cause any issues for migration.

1.8.2 Connectivity Issues

1.8.2.1 General Connectivity Issues

Issue: If connectivity issues occur between the Zero Downtime Migration service host and the source or target environments, or between the source and target environments, check the following areas.

Solution: Verify that the SSH configuration file (/root/.ssh/config) has the appropriate entries:

Host *
  ServerAliveInterval 10
  ServerAliveCountMax 2

Host ocidb1
  HostName 192.0.2.1
  IdentityFile ~/.ssh/ocidb1.ppk
  User opc
  ProxyCommand /usr/bin/nc -X connect -x www-proxy.example.com:80 %h %p

Note that the proxy setup might not be required when you are not using a proxy server for connectivity. For example, when the source database server is on Oracle Cloud Infrastructure Classic, you can remove or comment out the line starting with ProxyCommand.

If the source is an Oracle RAC database, then make sure you copy the ~/.ssh/config file to all of the source Oracle RAC servers. The SSH configuration file refers to the first Oracle RAC server host name, public IP address, and private key attributes.

1.8.2.2 Communications Link Failure

Issue: If the MySQL server crashes you will see errors such as this one for the ZDM operations:

$ ./zdmcli query job -jobid 6
Exception [EclipseLink-4002] (Eclipse Persistence Services -
2.7.7.qualifier): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: com.mysql.cj.jdbc.exceptions.CommunicationsException:
Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The
driver has not received any packets from the server.
Error Code: 0
Query: ReadAllQuery(referenceClass=JobSchedulerImpl sql="SELECT
JOB_IDENTIFIER, M_ACELIST, ARGUMENTS, ATTRIBUTES, CLIENT_NAME,
COMMAND_PROVIDED, COMPARTMENT, CONTAINER_TYPE, CREATEDATE, CREATOR,
CURRENT_STATUS, DB_OCID, DBNAME, DEPLOYMENT_OCID, DISABLE_JOB_EXECUTION,
ELAPSED_TIME, END_TIME, EXECUTE_PHASES, EXECUTION_TIME, IS_EVAL, IS_PAUSED,
JOB_TYPE, METHOD_NAME, METRICS_LOCATION, OPERATION, PARAMETERS,
PARENT_JOB_ID, PAUSE_AFTER_PHASE, RESULT, PHASE, JOB_SCHEDULER_PHASES,
REGION, REST_USER_NAME, RESULT_LOCATION, SCHEDULED_TIME, SITE, SOURCEDB,
SOURCENODE, SOURCESID, SPARE1, SPARE2, SPARE3, SPARE_A, SPARE_B, SPARE_C,
START_TIME, STOP_AFTER_PHASE, TARGETNODE, JOB_THREAD_ID, UPD_DATE, USER_NAME,
ENTITY_VERSION, CUSTOMER FROM JOBSCHEDULER WHERE (PARENT_JOB_ID = ?)")

Solution: If such communications errors are seen, restart the Zero Downtime Migration service so that the MySQL server is also restarted; pending jobs then resume automatically.

Stop the Zero Downtime Migration service:

zdmuser> $ZDM_HOME/bin/zdmservice stop

Start the Zero Downtime Migration service:

zdmuser> $ZDM_HOME/bin/zdmservice start
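After restarting, you can confirm the service is up before resuming any pending jobs:

zdmuser> $ZDM_HOME/bin/zdmservice status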

1.8.2.3 Evaluation Fails in Phase ZDM_GET_TGT_INFO

Issue: During the evaluation (-eval) phase of the migration process, the evaluation fails in the ZDM_GET_TGT_INFO phase with the following error when migrating an Oracle RAC database.

Executing phase ZDM_GET_TGT_INFO
Retrieving information from target node "trac11" ...
PRGZ-3130 : failed to establish connection to target listener from nodes [srac11, srac12]
PRCC-1021 : One or more of the submitted commands did not execute successfully.
PRCC-1025 : Command submitted on node srac11 timed out after 15 seconds.
PRCC-1025 : Command submitted on node srac12 timed out after 15 seconds.

Solution:

  1. Get the SCAN name of the source database and add it to the /etc/hosts file on both target database servers, with the public IP address of the source database server and the source database SCAN name. For example:
    192.0.2.3 source-scan
  2. Get the SCAN name of the target database and add it to the /etc/hosts file on both source database servers, with the public IP address of the target database server and target database SCAN name. For example:
    192.0.2.1  target-scan

Note:

This issue, where the SCAN name is missing from the /etc/hosts file, can occur because the SCAN IP address is sometimes assigned as a private IP address, which might not be resolvable from the remote environment.
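Before resubmitting the evaluation, you can verify from each node that the remote SCAN name now resolves and that the listener port is reachable. A minimal check, assuming the default listener port 1521 and the example SCAN name above:

ping -c 1 target-scan
nc -zv target-scan 1521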

1.8.2.4 Object Storage Is Not Accessible

Issue: When Object Storage is accessed from the source or target database server, the access may fail with the following error.
About to connect() to swiftobjectstorage.xx-region-1.oraclecloud.com port 443 (#0)
Trying 192.0.2.1... No route to host
Trying 192.0.2.2... No route to host
Trying 192.0.2.3... No route to host
couldn't connect to host
Closing connection #0
curl: (7) couldn't connect to host

Solution: If a proxy is required to connect to Object Storage from the source database server, then on the Zero Downtime Migration service host, set the following Object Storage Service proxy host and port parameters in the response file template ($ZDM_HOME/rhp/zdm/template/zdm_template.rsp). For example:

SRC_OSS_PROXY_HOST=www-proxy-source.example.com
SRC_OSS_PROXY_PORT=80

Similarly, if a proxy is required to connect to Object Storage from the target database server, set the following parameters in the same response file template. For example:

TGT_OSS_PROXY_HOST=www-proxy-target.example.com
TGT_OSS_PROXY_PORT=80
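Before rerunning the migration, you can verify connectivity through the proxy from the database server. A quick check using curl, with the example proxy above and the region-specific Object Storage endpoint from the error message (both illustrative):

curl -v --proxy www-proxy-source.example.com:80 https://swiftobjectstorage.xx-region-1.oraclecloud.com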

1.8.2.5 SSH Error "EdDSA provider not supported"

Issue: The following error messages appear in $ZDM_BASE/crsdata/<zdm service hostname>/rhp/zdmserver.log.0.

[sshd-SshClient[3051eb49]-nio2-thread-1] [ 2020-04-04 00:26:24.142 GMT ]
 [JSChChannel$LogOutputStream.flush:1520]  2020-04-04: WARNING: org.apache.sshd.client.session.C:
 globalRequest(ClientConnectionService[ClientSessionImpl[opc@samidb-db/140.238.254.80:22]])[hostkeys-00@openssh.com,
 want-reply=false] failed (SshException) to process: EdDSA provider not supported

[sshd-SshClient[3051eb49]-nio2-thread-1] [ 2020-04-04 00:26:24.142 GMT ]
 [JSChChannel$LogOutputStream.flush:1520]  2020-04-04: FINE   : org.apache.sshd.client.session.C:
 globalRequest(ClientConnectionService[ClientSessionImpl[opc@samidb-db/140.238.254.80:22]])[hostkeys-00@openssh.com,
 want-reply=false] failure details
org.apache.sshd.common.SshException: EdDSA provider not supported
    at org.apache.sshd.common.util.buffer.Buffer.getRawPublicKey(Buffer.java:446)
    at org.apache.sshd.common.util.buffer.Buffer.getPublicKey(Buffer.java:420)
    at org.apache.sshd.common.global.AbstractOpenSshHostKeysHandler.process(AbstractOpenSshHostKeysHandler.java:71)
    at org.apache.sshd.common.global.AbstractOpenSshHostKeysHandler.process(AbstractOpenSshHostKeysHandler.java:38)
    at org.apache.sshd.common.session.helpers.AbstractConnectionService.globalRequest(AbstractConnectionService.java:723)
    at org.apache.sshd.common.session.helpers.AbstractConnectionService.process(AbstractConnectionService.java:363)
    at org.apache.sshd.common.session.helpers.AbstractSession.doHandleMessage(AbstractSession.java:400)
    at org.apache.sshd.common.session.helpers.AbstractSession.handleMessage(AbstractSession.java:333)
    at org.apache.sshd.common.session.helpers.AbstractSession.decode(AbstractSession.java:1097)
    at org.apache.sshd.common.session.helpers.AbstractSession.messageReceived(AbstractSession.java:294)
    at org.apache.sshd.common.session.helpers.AbstractSessionIoHandler.messageReceived(AbstractSessionIoHandler.java:63)
    at org.apache.sshd.common.io.nio2.Nio2Session.handleReadCycleCompletion(Nio2Session.java:357)
    at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:335)
    at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:332)
    at org.apache.sshd.common.io.nio2.Nio2CompletionHandler.lambda$completed$0(Nio2CompletionHandler.java:38)
    at java.security.AccessController.doPrivileged(Native Method)
    at org.apache.sshd.common.io.nio2.Nio2CompletionHandler.completed(Nio2CompletionHandler.java:37)
    at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:126)
    at sun.nio.ch.Invoker$2.run(Invoker.java:218)
    at sun.nio.ch.AsynchronousChannelGroupImpl$1.run(AsynchronousChannelGroupImpl.java:112)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.security.NoSuchAlgorithmException: EdDSA provider not supported
    at org.apache.sshd.common.util.security.SecurityUtils.generateEDDSAPublicKey(SecurityUtils.java:596)
    at org.apache.sshd.common.util.buffer.keys.ED25519BufferPublicKeyParser.getRawPublicKey(ED25519BufferPublicKeyParser.java:45)
    at org.apache.sshd.common.util.buffer.keys.BufferPublicKeyParser$2.getRawPublicKey(BufferPublicKeyParser.java:98)
    at org.apache.sshd.common.util.buffer.Buffer.getRawPublicKey(Buffer.java:444)
    ... 22 more
[sshd-SshClient[3051eb49]-nio2-thread-1] [ 2020-04-04 00:26:24.142 GMT ]
 [JSChChannel$LogOutputStream.flush:1520]  2020-04-04: FINE   : org.apache.sshd.client.session.C:
 sendGlobalResponse(ClientConnectionService[ClientSessionImpl[opc@samidb-db/140.238.254.80:22]])[hostkeys-00@openssh.com]
 result=ReplyFailure, want-reply=false

[sshd-SshClient[3051eb49]-nio2-thread-2] [ 2020-04-04 00:26:24.182 GMT ]
 [JSChChannel$LogOutputStream.flush:1520]  2020-04-04: FINE   : org.apache.sshd.common.io.nio2.N:
 handleReadCycleCompletion(Nio2Session[local=/192.168.0.2:41198, remote=samidb-db/140.238.254.80:22])
 read 52 bytes

Solution: Zero Downtime Migration supports only the RSA key format; use RSA keys for SSH connectivity.
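For example, a minimal sketch for generating a new RSA key pair without a passphrase (the file name is illustrative; -m PEM writes the private key in the RSA PEM container format, which matters because, as section 1.8.7.6 notes, keys in the newer OPENSSH container format are not accepted):

ssh-keygen -t rsa -b 2048 -m PEM -f ~/.ssh/zdm_service_host -N ""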

1.8.3 Transparent Data Encryption Related Issues

1.8.3.1 Transparent Data Encryption General Information

Depending on your source database release, Transparent Data Encryption (TDE) wallet configuration may be required.

  • Oracle Database 12c Release 2 and later

    For Oracle Database 12c Release 2 and later releases, TDE wallet configuration is mandatory and must be enabled on the source database before migration begins.

    If TDE is not enabled, the database migration will fail.

    Upon restore, the database tablespaces are encrypted using the wallet.

  • Oracle Database 12c Release 1 and earlier

    On Oracle Database 12c Release 1 and Oracle Database 11g Release 2 (11.2.0.4), TDE configuration is not required.

For information about the behavior of TDE in an Oracle Cloud environment, see My Oracle Support document Oracle Database Tablespace Encryption Behavior in Oracle Cloud (Doc ID 2359020.1).
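A quick way to confirm whether TDE is configured and the wallet is open on the source database (works on Oracle Database 11.2 and later) is to query v$encryption_wallet:

SQL> SELECT wrl_parameter, status FROM v$encryption_wallet;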

1.8.3.2 Job Fails in Phase ZDM_SETUP_TDE_TGT

Issue: The phase ZDM_SETUP_TDE_TGT fails with one of the following errors.

Executing phase ZDM_SETUP_TDE_TGT
Setting up Oracle Transparent Data Encryption (TDE) keystore on the target node oci1121 ...
oci1121: <ERR_FILE><Facility>PRGZ</Facility><ID>ZDM_KEYSTORE_NOT_SETUP_ERR</ID><ARGS><ARG>oci112_phx1z3</ARG></ARGS></ERR_FILE>
PRGO-3007 : failed to migrate database "db11204" with zero downtime
PRCZ-4002 : failed to execute command "/u01/app/18.0.0.0/grid/perl/bin/perl" using the privileged execution plugin "zdmauth" on nodes "oci1121"
PRCZ-2103 : Failed to execute command "/u01/app/18.0.0.0/grid/perl/bin/perl" on node "oci1121" as user "root". Detailed error:
<ERR_FILE><Facility>PRGZ</Facility><ID>ZDM_KEYSTORE_NOT_SETUP_ERR</ID><ARGS><ARG>oci112_phx1z3</ARG></ARGS></ERR_FILE>
Error at target server in /tmp/zdm749527725/zdm/log/mZDM_oss_standby_setup_tde_tgt_71939.log
2019-06-13 10:00:20: Keystore location /opt/oracle/dcs/commonstore/wallets/tde/$ORACLE_UNQNAME does not exists for database 'oci112_region'
2019-06-13 10:00:20: Reporting error:
<ERR_FILE><Facility>PRGZ</Facility><ID>ZDM_KEYSTORE_NOT_SETUP_ERR</ID><ARGS><ARG>oci112_region</ARG></ARGS></ERR_FILE>

Solution:

  • Oracle Database 12c Release 1 and later

    On the target database, make sure that $ORACLE_HOME/network/admin/sqlnet.ora points to the correct location of the TDE wallet. For example:

    ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/opt/oracle/dcs/commonstore/wallets/tde/$ORACLE_UNQNAME)))
  • Oracle Database 11g Release 2 (11.2.0.4) only

    On the target database, make sure that $ORACLE_HOME/network/admin/sqlnet.ora points to the correct location of the TDE wallet, and replace the $ORACLE_UNQNAME variable with the value obtained from the SHOW PARAMETER DB_UNIQUE_NAME SQL command.

    For example, run

    SQL> show parameter db_unique_name
    db_unique_name         string      oci112_region

    and replace

    ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/opt/oracle/dcs/commonstore/wallets/tde/$ORACLE_UNQNAME)))

    with

    ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/opt/oracle/dcs/commonstore/wallets/tde/oci112_region)))

1.8.4 Full Backup Phase (ZDM_BACKUP_FULL_SRC) Issues

1.8.4.1 Backup Fails with ORA-19836

Issue: Source database full backup fails with one of the following errors.

ORA-19836: cannot use passphrase encryption for this backup
RMAN-03009: failure of backup command on C8 channel at 04/29/2019 20:42:16
ORA-19836: cannot use passphrase encryption for this backup
RMAN-03009: continuing other job steps, job failed will not be re-run

Solution 1: This issue can occur if you specify the -sourcedb value in the wrong case. For example, if the value obtained from the SQL command SHOW PARAMETER DB_UNIQUE_NAME is zdmsdb, then you must specify it in lower case as zdmsdb, and not in upper case as ZDMSDB, as shown in the following example.

zdmuser> $ZDM_HOME/bin/zdmcli migrate database -sourcedb zdmsdb -sourcenode ocicdb1 -srcroot
-targetnode ocidb1 -targethome /u01/app/oracle/product/12.1.0.2/dbhome_1
-backupuser backup_user@example.com -rsp /u01/app/zdmhome/rhp/zdm/template/zdm_template_zdmsdb.rsp
-tgtauth zdmauth -tgtarg1 user:opc
-tgtarg2 identity_file:/home/zdmuser/.ssh/zdm_service_host.ppk
-tgtarg3 sudo_location:/usr/bin/sudo

Solution 2: For Oracle Database 12c Release 1 and later, ensure that $ORACLE_HOME/network/admin/sqlnet.ora points to the correct location of the TDE wallet, as shown here.

ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/opt/oracle/dcs/commonstore/wallets/tde/$ORACLE_UNQNAME)))

For Oracle Database 11g Release 2 (11.2.0.4) only, ensure that $ORACLE_HOME/network/admin/sqlnet.ora points to the correct location of the TDE wallet as shown below, and replace the variable $ORACLE_UNQNAME with the value obtained with the SQL statement SHOW PARAMETER DB_UNIQUE_NAME.

ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/opt/oracle/dcs/commonstore/wallets/tde/$ORACLE_UNQNAME)))

For example:

SQL> show parameter db_unique_name
db_unique_name    string      oci112_region
ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/opt/oracle/dcs/commonstore/wallets/tde/oci112_region)))

Solution 3: Run the following query and make sure that the wallet status is OPEN.

SQL> select * from v$encryption_wallet;
WRL_TYPE
-------------
WRL_PARAMETER
-------------
STATUS
-------------
file
/opt/oracle/dcs/commonstore/wallets/tde/abc_test
OPEN

1.8.4.2 Backup Fails with ORA-19914 and ORA-28365

Issue: Source database full backup fails with the following errors.

channel ORA_SBT_TAPE_3: backup set complete, elapsed time: 00:00:15
channel ORA_SBT_TAPE_3: starting compressed full datafile backup set
channel ORA_SBT_TAPE_3: specifying datafile(s) in backup set
input datafile file number=00005 name=+DATA/ODA122/7312FA75F2B202E5E053050011AC5977/DATAFILE/system.382.1003858429
channel ORA_SBT_TAPE_3: starting piece 1 at 25-MAR-19
RMAN-03009: failure of backup command on ORA_SBT_TAPE_3 channel at 03/25/2019 19:09:30
ORA-19914: unable to encrypt backup
ORA-28365: wallet is not open
continuing other job steps, job failed will not be re-run
channel ORA_SBT_TAPE_3: starting compressed full datafile backup set
channel ORA_SBT_TAPE_3: specifying datafile(s) in backup set

Solution: Ensure that the wallet is open in the database, and in the case of a CDB, ensure that the wallet is open in the CDB, all PDBs, and PDB$SEED. See Setting Up the Transparent Data Encryption Wallet in the Zero Downtime Migration documentation for information about setting up TDE.
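For a CDB with a password-protected keystore (Oracle Database 12c and later), a minimal sketch of opening the wallet in all containers and verifying the result ("walletpassword" is a placeholder, as in the examples later in this section; for 11.2 use the ALTER SYSTEM SET WALLET syntax shown in section 1.8.6.2):

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "walletpassword" CONTAINER=ALL;
SQL> SELECT con_id, status FROM v$encryption_wallet;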

1.8.4.3 Either the Bucket Named Object Storage Bucket Name Does Not Exist in the Namespace Namespace or You Are Not Authorized to Access It

See Oracle Support Knowledge Base article "Either the Bucket Named '<Object Storage Bucket Name>' Does not Exist in the Namespace '<Namespace>' or You are not Authorized to Access it (Doc ID 2605518.1)" for the description of and workarounds for this issue.

https://support.oracle.com/rs?type=doc&id=2605518.1

1.8.5 Restore Phase (ZDM_CLONE_TGT) Issues

1.8.5.1 Restore Database Fails With Assert [KCBTSE_ENCDEC_TBSBLK_1]

Issue: Due to RDBMS bugs 31048741, 32697431, and 32117834, you may see the assert [kcbtse_encdec_tbsblk_1] in the alert log during the restore phase of a physical migration.

Solution: Apply the patches for RDBMS bugs 31048741 and 32697431 to any Oracle Database 19c migration target earlier than the 19.13 release update.

1.8.5.2 Restore Database Fails With AUTOBACKUP does not contain an SPFILE

Issue: During the execution of phase ZDM_CLONE_TGT, restore database fails with the following error.

channel C1: looking for AUTOBACKUP on day: 20200427
channel C1: AUTOBACKUP found: c-1482198272-20200427-12
channel C1: restoring spfile from AUTOBACKUP c-1482198272-20200427-12
channel C1: the AUTOBACKUP does not contain an SPFILE

The source database is running with a pfile (init.ora), but during the restore phase on the target, Zero Downtime Migration attempts to restore the server parameter file (SPFILE) from the autobackup, which fails because the autobackup does not contain one.

Solution: Start the source database using an SPFILE and resubmit the migration job.
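A minimal sketch of converting the source database to use an SPFILE, assuming the pfile is in the default location (for an Oracle RAC database, restart the instances with srvctl instead):

SQL> CREATE SPFILE FROM PFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP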

1.8.5.3 Restore Database Fails With ORA-01565

Issue: During the execution of phase ZDM_CLONE_TGT, restore database fails with the following error.

With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP
and Real Application Testing options

CREATE PFILE='/tmp/zdm833428275/zdm/PFILE/zdm_tgt_mclone_nrt139.pfile' FROM SPFILE
*
ERROR at line 1:
ORA-01565: error in identifying file '?/dbs/spfile@.ora'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3

Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP

Solution: Start the target database using an SPFILE and resume the migration job.

1.8.6 Post Migration Automatic Backup Issues

1.8.6.1 Troubleshooting Post Migration Automatic Backup Failures

Issue: Post migration, on the target database, Automatic Backup might fail.

You can verify the failure using the console in Bare Metal, VM and Exadata > DB Systems > DB System Details > Database Details > Backups.

Solution: Get the RMAN configuration settings from one of the following places.

  • Zero Downtime Migration documentation in Target Database Prerequisites, if captured
  • The log files at /opt/oracle/dcs/log/hostname/rman/bkup/db_unique_name/
  • /tmp/zdmXXX/zdm/zdm_TDBNAME_rman.dat

For example, using the second option, you can get the RMAN configuration settings from /opt/oracle/dcs/log/ocidb1/rman/bkup/ocidb1_abc127/rman_configure*.log, then reset any changed RMAN configuration settings for the target database to ensure that automatic backup works without any issues.

If this workaround does not help, then debug further: get the RMAN job ID by running the DBCLI command list-jobs, and get detailed error information by running the DBCLI command describe-job -i job_id. Run both commands from the database server as the root user.
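A minimal sketch of those DBCLI commands (the job ID is a placeholder copied from the list-jobs output; dbcli is assumed to be on the PATH, otherwise use its full path under /opt/oracle/dcs/bin):

/root>dbcli list-jobs
/root>dbcli describe-job -i <job_id>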

For example, during testing, several of the settings in the following RMAN SHOW ALL output were modified to make Automatic Backup work; settings still at their default values are marked # default.

rman target /
Recovery Manager: Release 12.2.0.1.0 - Production on Mon Jul 8 11:00:18 2019
Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.
connected to target database: ORCL (DBID=1540292788)
RMAN> show all;
using target database control file instead of recovery catalog
RMAN configuration parameters for database with db_unique_name OCIDB1_ABC127 are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 30 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF;
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE SBT_TAPE TO '%F'; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET;
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE SBT_TAPE TO 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE SBT_TAPE TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE CHANNEL DEVICE TYPE DISK MAXPIECESIZE 2 G;
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' MAXPIECESIZE 2 G FORMAT '%d_%I_%U_%T_%t' PARMS
 'SBT_LIBRARY=/opt/oracle/dcs/commonstore/pkgrepos/oss/odbcs/libopc.so ENV=(OPC_PFILE=/opt/oracle/dcs/commonstore/objectstore/opc_pfile/1245080042/opc_OCIDB1_ABC127.ora)';
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE ON;
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'MEDIUM' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE;
CONFIGURE RMAN OUTPUT TO KEEP FOR 7 DAYS; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO 'SBT_TAPE';
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+RECO/OCIDB1_ABC127/controlfile/snapcf_ocidb1_abc127.f';
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK clear;
RMAN>

1.8.6.2 Post Migration Automatic Backup Fails With DCS-10045

Issue: Post migration, Automatic Backup fails with the following error for non-TDE enabled migrated Oracle Database releases 11.2.0.4 and 12.1.0.2.

DCS-10045: Validation error encountered: Backup password is mandatory to take OSS backup for non-tde enabled database...

You can verify this error by getting the RMAN job ID with the DBCLI command list-jobs, and then getting the job's error details with the DBCLI command describe-job -i job_id, run from the database server as the root user.

Solution:

  1. Find the TDE wallet location.

    An Oracle Cloud Infrastructure provisioned database instance has the following entry in sqlnet.ora.

    ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/opt/oracle/dcs/commonstore/wallets/tde/$ORACLE_UNQNAME)))
  2. Remove the cwallet.sso file from the wallet location.

    For example, /opt/oracle/dcs/commonstore/wallets/tde/$ORACLE_UNQNAME.

  3. For Oracle Database 11g Release 2, do the following steps.
    1. Connect to database using SQL*Plus as sysdba and verify the current wallet location.
      SQL> select * from v$encryption_wallet;
      WRL_TYPE    WRL_PARAMETER                                            STATUS
      file        /opt/oracle/dcs/commonstore/wallets/tde/ocise112_region  OPEN
    2. Close the wallet in the database.
      SQL> alter system set wallet close;
    3. Open the wallet using the wallet password.
      SQL> alter system set wallet open identified by "walletpassword";
    4. Set the master encryption key.
      SQL> alter system set encryption key identified by "walletpassword";
    5. Recreate the autologin SSO file.
      /home/oracle>orapki wallet create -wallet /opt/oracle/dcs/commonstore/wallets/tde/$ORACLE_UNQNAME -auto_login
      Oracle PKI Tool : Version 11.2.0.4.0 - Production
      Copyright (c) 2004, 2013, Oracle and/or its affiliates. All rights reserved.
      Enter wallet password:            #
    6. Retry Automatic Backup.
  4. For Oracle Database 12c, do the following steps.
    1. Connect to database using SQL*Plus as sysdba and verify the current wallet location and status.
      SQL> SELECT wrl_parameter, status, wallet_type FROM v$encryption_wallet;
      WRL_PARAMETER                                            STATUS              WALLET_TYPE
      /opt/oracle/dcs/commonstore/wallets/tde/ocise112_region  OPEN_NO_MASTER_KEY  OPEN

      If the STATUS column contains a value of OPEN_NO_MASTER_KEY, you must create and activate the master encryption key.

    2. Close the wallet in the database.
      SQL> alter system set wallet close;
    3. Open the wallet using the wallet password.
      SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE open IDENTIFIED BY "walletpassword" CONTAINER=all;
    4. Set the master encryption key.
      SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY "walletpassword" with backup;

      Log in to each PDB and run

      SQL> ALTER SESSION SET CONTAINER = PDB_NAME;
      SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY "walletpassword" with backup;
    5. Create the auto login keystore.
      SQL> ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE FROM KEYSTORE 'path to wallet directory' IDENTIFIED BY "walletpassword";
    6. Retry Automatic Backup.

1.8.6.3 Post Migration Automatic Backup Fails With DCS-10096

Issue: Post migration, Automatic Backup fails with the following error.

DCS-10096: RMAN configuration 'Retention policy' must be configured as 'configure retention policy to recovery window of 30 days'

You can verify this error by getting the RMAN job ID with the DBCLI command list-jobs, and then getting the job's error details with the DBCLI command describe-job -i job_id, run from the database server as the root user.

Solution: Log in to the RMAN prompt and configure the retention policy.

[oracle@racoci1 ~]$ rman target /
Recovery Manager: Release 12.2.0.1.0 - Production on Wed Jul 17 11:04:35 2019
Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.
connected to target database: SIODA (DBID=2489657199)
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 30 DAYS;

old RMAN configuration parameters:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;

new RMAN configuration parameters:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 30 DAYS;

new RMAN configuration parameters are successfully stored

Retry Automatic Backup.

1.8.7 Miscellaneous Issues

1.8.7.1 Migration from Existing Data Guard Standby Fails

Issue: When using an existing standby, the Zero Downtime Migration job fails if the Data Guard broker configuration uses TNS aliases.

In a Data Guard broker configuration, every database needs to be reachable from every other database in the configuration. When Zero Downtime Migration creates a new standby at the target and adds it to the existing Data Guard broker configuration, it adds the target with a connect identifier specified in the form of a full connect string. Zero Downtime Migration does not update the tnsnames.ora file on the target with entries for the other databases in the configuration. Because those tnsnames.ora entries are missing, the other databases may not be reachable from the target if the configuration was created with TNS aliases.

Solution: Ensure that all TNS aliases in the broker configuration corresponding to the primary and any existing standby databases are defined in the target tnsnames.ora file.

Alternatively, ensure that the broker configuration uses connect strings instead of TNS aliases. The connect identifier can be displayed using the following command:

show database db_name dgconnectidentifier;

If the connect identifier is a TNS alias, update it using the following command, specifying the connect string in EZConnect format.

For cluster databases:

edit database db_name set property
 dgconnectidentifier='scan_name:scan_port/service_name';

For non-cluster databases:

edit database db_name set property
dgconnectidentifier='listener_host:listener_port/service_name';

Once the connect identifiers are specified as connect strings that are reachable from every database instance in the broker configuration, the TNS aliases are no longer required. This matters because the broker must be able to manage the primary/standby relationship if any standby switches roles and becomes the primary.
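For example, a minimal DGMGRL session to check and update the connect identifier and then verify the configuration (the connect string, database name, SCAN name, and service name are placeholders):

dgmgrl sys@primary_connect_string
DGMGRL> show database db_name dgconnectidentifier;
DGMGRL> edit database db_name set property dgconnectidentifier='scan_name:1521/service_name';
DGMGRL> show configuration;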

1.8.7.2 PDB in Failed State After Migration to ExaCS or ExaCC

Issue: ExaCS and ExaCC recently added functionality to display the PDBs of a CDB in the console. If the target database is provisioned with the same PDB name as the source before the migration, then after the migration those PDBs report a failed status.

This happens because the PDB created when the target is provisioned has a different internal PDB ID. During the migration, Zero Downtime Migration drops the target database and recreates it, so while the PDB names are the same, the internal PDB IDs differ, and the control plane reports the PDBs as failed.

Solutions: To avoid this problem, when provisioning the target:

  1. If the source is a non-CDB, provision a non-CDB target through dbaascli.

  2. If the source is a CDB with PDBs, provision the target without any PDBs.

If a PDB is reported in the failed state post migration, follow My Oracle Support document Pluggable Database (PDB) Resource Shows Failed Status In Cloud Console While it is Available in VM (Doc ID 2855062.1).

1.8.7.3 Oracle GoldenGate Hub Certificate Known Issues

Issue: Oracle Zero Downtime Migration leverages Oracle GoldenGate for its logical online migration workflow; an Oracle GoldenGate hub is set up on OCI compute for this purpose.

The Oracle GoldenGate hub NGINX reverse proxy uses a self-signed certificate, which causes the following error when the ZDM server makes a REST API call:

SunCertPathBuilderException: unable to find valid certification path to requested target

Solution: See My Oracle Support document Zero Downtime Migration - GoldenGate Hub Certificate Known Issues (Doc ID 2768483.1).

1.8.7.4 Source Discovery Does Not Find 'cut' in Default Location

Issue: Discovery at the source database server fails to find cut in the standard location.

The source database deployment's standard cut location is /bin/cut. If cut is not in that location, Zero Downtime Migration cannot discover the source database information correctly, and the migration fails in its initial phases.

Solution: To resolve the issue, ensure that cut is installed in the standard /bin/cut path or create a symbolic link to the installed location, for example:

ln -sf <installed_location_of_the_cut> /bin/cut

1.8.7.5 Evaluation Fails in Phase ZDM_GET_SRC_INFO

Issue: During the evaluation (-eval) phase of the migration process, the evaluation fails in the ZDM_GET_SRC_INFO phase with the following error for a source single instance database deployed without Grid Infrastructure.

Executing phase ZDM_GET_SRC_INFO
retrieving information about database "zdmsidb" ...
PRCF-2056 : The copy operation failed on node: "zdmsidb".
Details: {1}
PRCZ-4002 : failed to execute command "/bin/cp" using the privileged
execution plugin "zdmauth" on nodes "zdmsidb"
scp: /etc/oratab: No such file or directory

Solution: Add an entry for the database to the /etc/oratab file in the form db_name:$ORACLE_HOME:N, as shown in this example.

zdmsidb:/u01/app/oracle/product/12.2.0.1/dbhome_1:N

1.8.7.6 Migration Evaluation Failure with Java Exception Invalid Key Format

Issue: The following conditions are seen:

  • The Zero Downtime Migration migrate database -eval command fails with the following error.

    Result file path contents:
    "/u01/app/zdmbase/chkbase/scheduled/job-19-2019-12-02-03:46:19.log"
    zdm-server.ocitoolingsn.ocitooling.oraclevcn.com: Processing response
    file ...
    null
  • The file $ZDM_BASE/<zdm service host>/rhp/rhpserver.log.0 contains the following entry.

    rhpserver.log.7:[pool-58-thread-1] [ 2019-12-02 02:08:15.178 GMT ]
    [JSChChannel.getKeyPair:1603]  Exception :
    java.security.spec.InvalidKeySpecException:
    java.security.InvalidKeyException: invalid key format
  • The private key file (id_rsa) of the user who installed Zero Downtime Migration (for example, zdmuser) contains entries like the following.

    -----BEGIN OPENSSH PRIVATE KEY-----
    MIIEogIBAAKCAQEAuPcjftR6vC98fAbU4FhYVKPqc0CSgibtMSouo1DtQ06ROPN0
    XpIEL4r8nGp+c5GSDONyhf0hiltBzg0fyqyurSw3XfGJq2Q6EQ61aL95Rt9CZh6b
    JSUwc69T4rHjvRnK824k4UpfUIqafOXb2mRgGVUkldo4yy+pLoGq1GwbsIYbS4tk
    uaYPKZ3A3H9ZA7MtZ5M0sNqnk/4Qy0d8VONWozxOLFC2A8zbbe7GdQw9khVqDb/x
    -----END OPENSSH PRIVATE KEY-----

Solution: The authentication key pair (private and public key) was not generated using the ssh-keygen utility as required, so you must generate a new authentication key pair using the steps in Generating a Private SSH Key Without a Passphrase.

After generating authentication key pairs, the private key file content looks like the following.

-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEAuPcjftR6vC98fAbU4FhYVKPqc0CSgibtMSouo1DtQ06ROPN0
XpIEL4r8nGp+c5GSDONyhf0hiltBzg0fyqyurSw3XfGJq2Q6EQ61aL95Rt9CZh6b
JSUwc69T4rHjvRnK824k4UpfUIqafOXb2mRgGVUkldo4yy+pLoGq1GwbsIYbS4tk
uaYPKZ3A3H9ZA7MtZ5M0sNqnk/4Qy0d8VONWozxOLFC2A8zbbe7GdQw9khVqDb/x
-----END RSA PRIVATE KEY-----

Set up connectivity with the newly generated authentication key pairs and resume the migration job.
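Alternatively, if the existing key pair is otherwise valid, ssh-keygen can rewrite the private key in the accepted RSA PEM format in place; a sketch, assuming the key has no passphrase and the path from the examples above:

ssh-keygen -p -m PEM -N "" -P "" -f /home/zdmuser/.ssh/id_rsa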

1.8.7.7 Migration Evaluation Fails with Error PRCG-1022

Issue: A migration evaluation command such as the following fails with errors PRCG-1238 and PRCG-1022.

$ZDM_HOME/bin/zdmcli migrate database -sourcedb zdmsdb -sourcenode ocicdb1 
-srcauth zdmauth -srcarg1 user:opc 
-srcarg2 identity_file:/home/zdmuser/.ssh/zdm_service_host.ppk 
-srcarg3 sudo_location:/usr/bin/sudo -targetnode ocidb1 -backupuser backup_user@example.com 
-rsp /u01/app/zdmhome/rhp/zdm/template/zdm_template_zdmsdb.rsp -tgtauth zdmauth 
-tgtarg1 user:opc -tgtarg2 identity_file:/home/zdmuser/.ssh/zdm_service_host.ppk 
-tgtarg3 sudo_location:/usr/bin/sudo -eval

PRCG-1238 : failed to execute the Rapid Home Provisioning action for command  'migrate database'
PRCG-1022 : failed to connect to the Rapid Home Provisioning daemon for cluster anandutest
Failed to retrieve RMIServer stub: javax.naming.ServiceUnavailableException
[Root exception is java.rmi.ConnectException: Connection refused to host:
anandutest; nested exception is: java.net.ConnectException: Connection refused (Connection refused)]

Solution: Start the Zero Downtime Migration service using the $ZDM_HOME/bin/zdmservice start command, then rerun the ZDMCLI command.

1.8.7.8 ORA-01031 on Full Export from an Oracle 12.1 Source

Issue: When performing a full database export with Export Data Pump from an Oracle Database 12c (12.1) source database, the following errors occur:

05-AUG-21 10:36:12.483: ORA-31693: Table data object "SYS"."TABLE" failed to load/unload and is being skipped due to error: ORA-01031: insufficient privileges

Solution: See My Oracle Support document EXPDP - ORA-31693 ORA-01031 (Insufficient Privileges) On Some Tables When Exporting from 12cR1 (Doc ID 1676411.1).

1.8.7.9 Data Transfer Medium COPY Issues

Issue: Migrating data using logical migration with DATA_TRANSFER_MEDIUM=COPY set in the Zero Downtime Migration response file fails.

Solution: When you specify DATA_TRANSFER_MEDIUM=COPY, you must also specify the following DUMPTRANSFERDETAILS_TRANSFERTARGET_* parameters.

DUMPTRANSFERDETAILS_TRANSFERTARGET_DUMPDIRPATH=<target path to transfer the dumps to>
DUMPTRANSFERDETAILS_TRANSFERTARGET_HOST=<target DB server or target-side transfer node>
DUMPTRANSFERDETAILS_TRANSFERTARGET_USER=<user having write access to the specified path>
DUMPTRANSFERDETAILS_TRANSFERTARGET_USERKEY=<user authentication key path on the ZDM node>
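For example, a response file fragment with illustrative values (the host, user, key path, and dump directory are placeholders modeled on other examples in this document):

DATA_TRANSFER_MEDIUM=COPY
DUMPTRANSFERDETAILS_TRANSFERTARGET_DUMPDIRPATH=/u01/app/oracle/dumps
DUMPTRANSFERDETAILS_TRANSFERTARGET_HOST=ocidb1
DUMPTRANSFERDETAILS_TRANSFERTARGET_USER=opc
DUMPTRANSFERDETAILS_TRANSFERTARGET_USERKEY=/home/zdmuser/.ssh/zdm_service_host.ppk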

1.8.7.10 Unable to Resume a Migration Job

Issue: Zero Downtime Migration writes the source and target log files to the /tmp/zdm-<unique id> directory on the respective source and target database servers.

If you pause a migration job and then resume it several days later (sometimes 15-20 days), the /tmp/zdm-<unique id> directory might have been deleted or purged as part of a clean-up or server reboot that also cleans up /tmp.

Solution: After pausing a migration job, back up the /tmp/zdm-<unique id> directory. Before resuming the migration job, check the /tmp directory for zdm-<unique id>, and if it is missing, restore the directory and its contents from your backup.
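A minimal sketch of the backup and restore, with a placeholder directory name:

tar czf /home/zdmuser/zdm_tmp_backup.tar.gz /tmp/zdm-<unique id>
# before resuming, if /tmp/zdm-<unique id> is missing:
tar xzf /home/zdmuser/zdm_tmp_backup.tar.gz -C /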

1.8.7.11 Migration Job Fails at ZDM_GET_SRC_INFO

Issue: A migration job fails with the following error.

[opc@zdm-server rhp]$ cat /home/opc/zdm_base/chkbase/scheduled/job-34-2021-01-23-14:10:32.log
zdm-server: 2021-01-23T14:10:32.155Z : Processing response file ...
zdm-server: 2021-01-23T14:10:32.262Z : Starting zero downtime migrate operation ...
PRCZ-4002 : failed to execute command "/bin/cp" using the privileged execution plugin "zdmauth" on nodes "PROD.compute-usconnectoneb95657.oraclecloud.internal"

Solution: You must set up SSH connectivity without a passphrase for the oracle user.

1.8.7.12 Migration Job Fails at ZDM_SWITCHOVER_SRC

Issue: A migration job fails at ZDM_SWITCHOVER_SRC phase.

Solutions:

  1. Ensure that there is connectivity from the PRIMARY database nodes to the STANDBY database nodes so that the redo logs are shipped as expected.

  2. A job fails at ZDM_SWITCHOVER_SRC if the recovery process (MRP0) is not running at the target. If MRP0 is not running at the Oracle Cloud Database standby instance, correct the cause of the failure and start the process manually at the standby instance before resuming the migration job, as shown in the sketch after this list.
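A minimal sketch of checking for and restarting managed recovery, run on the Oracle Cloud Database standby instance:

SQL> SELECT process, status FROM v$managed_standby WHERE process LIKE 'MRP%';
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;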

1.9 Additional Information for Migrating to Oracle Exadata Database Service

Read the following for general information, considerations, and links to more information about using Zero Downtime Migration to migrate your database to Oracle Exadata Database Service on Dedicated Infrastructure.

1.9.1 Considerations for Migrating to Oracle Exadata Database Service on Dedicated Infrastructure

For this release of Zero Downtime Migration, be aware of the following considerations.

  • If the source database is release 18c, then the target home should be at release 18.6 or later to avoid issues such as Bug 29445548 Opening Database In Cloud Environment Fails With ORA-600.
  • If a backup was performed when one of the configured instances is down, you will encounter Bug 29863717 - DUPLICATING SOURCE DATABASE FAILED BECAUSE INSTANCE 1 WAS DOWN.
  • The TDE keystore password must be set in the credential wallet. To set the password as part of the Zero Downtime Migration workflow, specify the -tdekeystorewallet tde_wallet_path or -tdekeystorepasswd argument irrespective of whether the wallet uses AUTOLOGIN or PASSWORD. In either case the password is stored in the credential wallet. If the -tdekeystorepasswd argument is not supplied, then Zero Downtime Migration skips setting the tde_ks_passwd key in the credential wallet, and no error is thrown.
  • The target environment must have the latest DBaaS tooling RPM, which includes db_unique_name change support, installed.
  • Provision a target database from the console without enabling auto-backups. In the Configure database backups section do not select the Enable automatic backups option.

1.9.2 Oracle Exadata Database Service on Dedicated Infrastructure Database Registration

Post migration, register the Oracle Exadata Database Service on Dedicated Infrastructure database, and make sure it meets all of the requirements.

Run the following commands on the Oracle Exadata Database Service on Dedicated Infrastructure database server as the root user.

/root>dbaascli registerdb prereqs --dbname db_name --db_unique_name db_unique_name

/root>dbaascli registerdb begin  --dbname db_name --db_unique_name db_unique_name

For example:

/root>dbaascli registerdb prereqs --dbname ZDM122 --db_unique_name ZDM122_phx16n
DBAAS CLI version 18.2.3.2.0
Executing command registerdb prereqs --db_unique_name ZDM122_phx16n
INFO: Logfile Location: /var/opt/oracle/log/ZDM122/registerdb/registerdb_2019-08-14_05:35:31.157978280334.log
INFO: Prereqs completed successfully
/root>

/root>dbaascli registerdb begin --dbname ZDM122 --db_unique_name ZDM122_phx16n
DBAAS CLI version 18.2.3.2.0
Executing command registerdb begin --db_unique_name ZDM122_phx16n
Logfile Location: /var/opt/oracle/log/ZDM122/registerdb/registerdb_2019-08-14_05:45:27.264851309165.log
Running prereqs
DBAAS CLI version 18.2.3.2.0
Executing command registerdb prereqs --db_unique_name ZDM122_phx16n
INFO: Logfile Location: /var/opt/oracle/log/ZDM122/registerdb/registerdb_2019-08-14_05:45:29.000432309894.log
INFO: Prereqs completed successfully
Prereqs completed
Running OCDE .. will take time ..
OCDE Completed successfully.
INFO: Database ZDM122 registered as Cloud database
/root>

1.9.3 Oracle Exadata Database Service on Dedicated Infrastructure Automatic Backup Issues

Check the backup configuration before you enable automatic backup from the console, using the get config command as shown in the first step below. You should see bkup_oss=no before you enable automatic backup.

You might see the error message in the console, "A backup configuration exists for this database. You must remove the existing configuration to use Oracle Cloud Infrastructure's managed backup feature."

To fix this error, remove the existing configuration.

First, make sure the automatic backup is disabled from the UI, then follow these steps to remove the existing backup configuration.

  1. Generate a backup configuration file.
    /var/opt/oracle/bkup_api/bkup_api get config --file=/tmp/db_name.bk --dbname=db_name

    For example:

    /var/opt/oracle/bkup_api/bkup_api get config --file=/tmp/zdmdb.bk --dbname=zdmdb
  2. Open the /tmp/db_name.bk file you created in the previous step and change the bkup_oss=yes entry to bkup_oss=no. A non-interactive alternative is shown after these steps.

    For example, open /tmp/zdmdb.bk and change bkup_oss=yes to bkup_oss=no.

  3. Apply the updated configuration to disable OSS backup.
    /var/opt/oracle/bkup_api/bkup_api set config --file=/tmp/db_name.bk --dbname=db_name

    For example:

    /var/opt/oracle/bkup_api/bkup_api set config --file=/tmp/zdmdb.bk --dbname=zdmdb
  4. Check reconfigure status.
    /var/opt/oracle/bkup_api/bkup_api configure_status --dbname=db_name

    For example:

    /var/opt/oracle/bkup_api/bkup_api configure_status --dbname=zdmdb
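As an alternative to editing the file by hand in step 2, a one-line substitution can make the change (the file path is from the example above):

sed -i 's/^bkup_oss=yes/bkup_oss=no/' /tmp/zdmdb.bk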

Now enable automatic backup from the console.

Verify the backups from the console. Click Create Backup to create a manual backup; the backup should complete without any issues, and Automatic Backup should also be successful.

1.10 Additional Information for Migrating to Oracle Exadata Database Service on Cloud@Customer

Read the following for general information, considerations, and links to more information about using Zero Downtime Migration to migrate your database to Oracle Exadata Database Service on Cloud@Customer.

1.10.1 Considerations for Migrating to Oracle Exadata Database Service on Cloud@Customer

For this release of Zero Downtime Migration, be aware of the following considerations.

  • You must apply the regDB patch for Bug 29715950 - "modify regdb to handle db_unique_name not same as db_name" on all Oracle Exadata Database Service on Cloud@Customer nodes. This is required for the ZDM_MANIFEST_TO_CLOUD phase. Note that the regDB tool is part of DBaaS tooling.
  • If the source database is release 18c, then the target home should be at release 18.6 or later to avoid issues such as Bug 29445548 Opening Database In Cloud Environment Fails With ORA-600.
  • PDB conversion-related phases are listed in the -listphases output and can be ignored; they are no-op phases.
  • If the backup medium is Zero Data Loss Recovery Appliance, then all configured instances should be up at the source when a FULL or INCREMENTAL backup is performed.
  • If a backup was performed when one of the configured instances is down, you will encounter Bug 29863717 - DUPLICATING SOURCE DATABASE FAILED BECAUSE INSTANCE 1 WAS DOWN.
  • The TDE keystore password must be set in the credential wallet. To set the password as part of the Zero Downtime Migration workflow, specify the -tdekeystorewallet tde_wallet_path or -tdekeystorepasswd argument irrespective of whether the wallet uses AUTOLOGIN or PASSWORD. In either case the password is stored in the credential wallet. If the -tdekeystorepasswd argument is not supplied, then Zero Downtime Migration skips setting the tde_ks_passwd key in the credential wallet, and no error is thrown.
  • The target environment must have the latest DBaaS tooling RPM, which includes db_unique_name change support, installed.

1.11 Documentation Accessibility

Access to Oracle Support