1.1 Zero Downtime Migration 26.1 Release Notes
These release notes provide downloading instructions for the latest product software and documentation, and describe new features, fixed bugs, known issues, and troubleshooting information for Zero Downtime Migration Release 26 (26.1).
- What's New in This Release
- Bugs Fixed
- Downloading the Zero Downtime Migration Installation Software
- Downloading the Zero Downtime Migration Documentation
- General Information
- Known Issues
- Troubleshooting
- Additional Information for Migrating to Oracle Exadata Database Service
- Additional Information for Migrating to Oracle Exadata Database Service on Cloud@Customer
- Documentation Accessibility
1.2 What's New in This Release
Zero Downtime Migration Release 26.1 improves the functionality with the following enhancements.
Deployment and Platform support
- Introducing migration using the Instant Deploy feature: The Instant Deploy mode in ZDM simplifies the migration of a database from on-premises to OCI or Exadata systems by removing the need for dedicated compute for the ZDM service. In Instant Deploy mode, you can download the ZDM kit directly to the database host, unzip it, and run the zdmcli migrate command to migrate the database; there is no need to set up a ZDM server host. Previously, ZDM relied on the Micronaut container for running migration commands, but in Instant Deploy mode you can run the commands without starting the Micronaut container. See Performing Migration Using the Instant Deploy Feature.
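The instant deploy steps above can be sketched as follows; the kit file name, unzip location, database name, and response file path are placeholders chosen for illustration, not documented defaults:

```shell
# Hypothetical outline of the Instant Deploy flow; all paths and names are placeholders.
# 1. Download the ZDM kit to the database host, then unzip it in place.
unzip -q zdm_kit.zip -d /u01/app/zdm

# 2. Run zdmcli migrate directly from the unzipped kit.
#    No ZDM server host setup and no Micronaut container startup is required.
/u01/app/zdm/bin/zdmcli migrate database \
    -sourcedb srcdb \
    -rsp /u01/app/zdm/migration.rsp
```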
- Target database support updates: ZDM supports Oracle AI Database 26ai as a target database across Exadata services and Oracle Base Database Service (VM/BM).
- Windows platform as source: ZDM supports Windows as a source platform for migration.
- Source platform support: ZDM supports cross-platform and cross-version migrations using RMAN transportable tablespaces and Data Pump metadata.
- Target platform support: ZDM supports migrating to Oracle Database@Multi Cloud, Autonomous AI Database targets with the Lakehouse workload type, and ExaDB-XS storage targets.
Physical migration
New methods and workflow improvements:
- Workflow improvements:
- Multi mount point scenario: ZDM allows different NFS mount point paths for both the source and the target databases - for cross data center migrations where the source NFS is not accessible by the target database due to network limitations.
- ASM Client Cluster Support: Support for migrating to and from ASM client cluster where the ASM instance is not local.
- ZDM automatically registers the migrated ODA database with the ODA tooling (by using odacli).
- Migrate your source database with an OKV keystore configured as an external keystore: ZDM migrates external keystore wallets if the source or target database is configured to use an Oracle Key Vault (OKV) external keystore. If the OKV endpoint password and the TDE wallet password are different, provide the -okvkeystorepasswd option in the zdmcli migrate command to specify the OKV endpoint password for the source database, or provide the -tgtokvkeystorepasswd option to specify the OKV endpoint password for the target database. The following parameters are added: -okvkeystorepasswd and -tgtokvkeystorepasswd.
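As an illustration (the database name and response file are placeholders, and the other required migrate options are omitted), the OKV endpoint password options are passed on the zdmcli migrate command line:

```shell
# Hypothetical sketch: request the source-side OKV endpoint password
# when it differs from the TDE wallet password.
zdmcli migrate database -sourcedb srcdb -rsp okv_migration.rsp -okvkeystorepasswd

# Target-side equivalent:
zdmcli migrate database -sourcedb srcdb -rsp okv_migration.rsp -tgtokvkeystorepasswd
```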
Target platform and methods
- Oracle Exadata Database Service on Exascale Infrastructure (ExaDB-XS) storage targets: Migrate to Oracle ExaDB-XS storage using any supported physical method (online, offline, OSS, NFS, restore from service).
Note:
Currently, this is limited to Oracle AI Database 26ai as the target.
- Introducing migration using PDB clone: Streamlined migrations using Oracle Multitenant PDB cloning.
Starting with ZDM 26.1, you can create an Oracle pluggable database (PDB) clone either from a source PDB or from an Oracle non-container database (non-CDB). The PDB clone method facilitates fine-grained database migration by allowing individual, isolated PDBs to be cloned remotely, rather than the entire container database (CDB). You can migrate an Oracle database using Pluggable Database (PDB) cloning in a streamlined approach that leverages Oracle's multitenant architecture. See Introducing Migration Using PDB Clone.
- Security and encryption: Tablespace encryption by default: SYSTEM, SYSAUX, UNDO, and TEMP tablespaces (and, where required, all tablespaces) are encrypted for Oracle Database 19c and later during backup/restore and restore-from-service flows.
- Support for migrating from an existing standby database with an existing Data Guard broker configuration that uses TNS aliases. During the migration, ZDM exports the TNS aliases from the existing configuration and imports only those that do not already exist into the target database. See Using an Existing Standby to Instantiate the Target Database.
- ZDM now supports a multi mount point scenario for cross data center migrations where the source NFS is not accessible by the target database due to network limitations. ZDM allows different NFS mount point paths for the source and the target databases. To make the backup pieces available for the target database, Zero Downtime Migration copies the files from the source NFS path ($BACKUP_PATH/dbname) into the target NFS path ($TGT_BACKUP_PATH/dbname). For the copy operation, you can specify the optional BACKUP_TRANSFERTARGET_* parameters.
- You can migrate an Oracle database using Pluggable Database (PDB) cloning in a streamlined approach that leverages Oracle's multitenant architecture. This method involves cloning either a PDB from a source CDB or a non-CDB into a target CDB, where the target PDB can be refreshed periodically to keep it up to date, allowing for a flexible and efficient migration process. The following parameters are added:
- ZDM_PDB_CLONE_METHOD: Indicates the PDB clone method to be used (COLD, HOT or REFRESHABLE).
- ZDM_SRC_PDB_NAME: Indicates the name of the source PDB to be cloned from the source CDB.
- ZDM_TGT_PDB_NAME: Indicates the name of the target PDB to be cloned from the source CDB.
- ZDM_TGT_DBLINK_NAME: Indicates the name of the database link from the target database pointing to the source PDB (or non-CDB).
- ZDM_TGT_DBLINK_USERNAME: Indicates the name of the database username of the database link from the target database pointing to the source PDB (or non-CDB).
- ZDM_TGT_DBLINK_PASSWORD_WALLET: Indicates the wallet containing the password of the database username for the database link from the target database pointing to the source database (or non-CDB).
- ZDM_PDB_REFRESH_INTERVAL: Specifies how frequently the target PDB clone updates to reflect data changes from the source PDB (or non-CDB). This parameter is defined in minutes.
- ZDM_SWITCHOVER: For the refreshable PDB clone method, when ZDM_SWITCHOVER is set to TRUE, the database roles of both the source PDB and the target PDB are reversed.
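Taken together, a refreshable PDB clone might be described in the response file along these lines; every value below is a placeholder chosen for illustration:

```
ZDM_PDB_CLONE_METHOD=REFRESHABLE
ZDM_SRC_PDB_NAME=SRCPDB
ZDM_TGT_PDB_NAME=TGTPDB
ZDM_TGT_DBLINK_NAME=CLONE_LINK
ZDM_TGT_DBLINK_USERNAME=CLONE_USER
ZDM_TGT_DBLINK_PASSWORD_WALLET=/u01/app/zdm/wallets/clone_wallet
ZDM_PDB_REFRESH_INTERVAL=30
ZDM_SWITCHOVER=TRUE
```

In this sketch, the target PDB is refreshed every 30 minutes until the switchover reverses the database roles.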
- Support of ExaDB-XS storage for physical migration:
ZDM now includes support for Oracle Exadata Database Service on Exascale Infrastructure (ExaDB-XS) storage for physical migration, in addition to the existing logical migration support. ZDM transparently handles migrating on-premises or existing cloud databases to a target database with Exascale storage. All existing physical migration methods, such as online, offline, backup and restore with OSS, NFS, and restore from service, can be used to migrate to a target with Exascale storage. See the Supported Physical Migration Paths topic for more information.
Note:
Currently, Oracle AI Database 26ai is the supported target for Exascale; therefore physical migration, which requires the source and target versions to be the same, is limited to Oracle AI Database 26ai.
Logical migration
New methods and workflow improvements:
- Introducing a new data transfer method for ZDM: The DATA_TRANSFER_MEDIUM=MANUAL_COPY option is valid only for the no-SSH use case, when you have not specified the -targetnode or -sourcenode information in the zdmcli migrate command. Specifying this option skips the copying of dump files by ZDM; you must manually copy the dumps to the import directory on the target node. Use the -pauseafter option with the value ZDM_DATAPUMP_EXPORT_SRC in the zdmcli migrate command so that the migration pauses after the export; you can then copy the dump files to the target node's import directory and resume the migration. ZDM uses the dumps in the import directory and continues with the migration. If you do not use -pauseafter, ZDM fails in the IMPORT phase with an error that the required dump files are not present in the target import directory; you can then copy the dumps and resume the migration.
- Online workflow improvements: Export from snapshot standby: You can now perform the initial load export from a snapshot standby database and avoid impacting the production workload on the primary database.
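The MANUAL_COPY flow can be outlined as follows; the database name, response file, and job ID are placeholders, and other required migrate options are omitted:

```shell
# Hypothetical sketch of the MANUAL_COPY workflow (no -sourcenode/-targetnode given).
# 1. Start the migration and pause after the Data Pump export phase.
zdmcli migrate database -sourcedb srcdb -rsp manual_copy.rsp \
    -pauseafter ZDM_DATAPUMP_EXPORT_SRC

# 2. Manually copy the dump files to the target node's import directory
#    by any available means (for example, scp or a shared mount).

# 3. Resume the paused job; ZDM picks up the dumps from the import directory.
zdmcli resume job -jobid <job_id>
```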
- Oracle GoldenGate enhancements:
- ZDM minimizes the number of tables that need to be reloaded by intelligently identifying reload objects based on past DML activity and size for tables that do not have a PRIMARY KEY or UNIQUE KEY.
- Support for concurrent migration with a single Oracle GoldenGate deployment: ZDM allows separate wallet directories to enable multiple ADB targets concurrently.
- Support for PDB level extract: No CDB connection required for source PDBs on Oracle Database 19c (19.23) with GoldenGate per-PDB Extract.
- ZDM now streamlines object reload during online logical migrations by integrating the Oracle GoldenGate Reload Advisor to identify highly active, non-unique row tables and automatically pull them out of replication. It also supports configuring Extracts with MIGRATION_MODE, which pulls in all supported objects and skips unsupported ones without manual tuning. For finer control, new parameters GOLDENGATESETTINGS_REPLICAT_EXCLUDEOBJECTS, GOLDENGATESETTINGS_EXCLUDERELOAD, and GOLDENGATESETTINGS_EXTRACT_EXCLUDEOBJECTS let you exclude objects from Extract or Replicat and decide whether they should be reloaded at switchover.
- Simplified Extract setup: ZDM enables the migration-specific Extract option MIGRATION_MODE, which applies optimal defaults, when the source supports it.
- Selecting the replication mode: You can select integrated or non-integrated Replicat for replication using GOLDENGATESETTINGS_REPLICATIONMODE.
- Faster large transactions replication: You can split and parallelize large transactions on apply using GOLDENGATESETTINGS_REPLICAT_SPLITTRANSRECS.
- Feature control Procedural replication: Select feature groups using GOLDENGATESETTINGS_FEATUREGROUP.
- Better constraint handling: Replicat is configured with deferred referential constraints by default, in the non-integrated parallel mode.
- Smarter reload of out-of-sync objects:
- Fewer tables are chosen for reloads using GoldenGate-based intelligent object filtering, which identifies tables that lack a PRIMARY KEY or UNIQUE KEY and are known to impact replication performance.
- Option to exclude tables from GoldenGate replication at the start of or during a job. See GOLDENGATESETTINGS_EXTRACT_EXCLUDEOBJECTS and GOLDENGATESETTINGS_REPLICAT_EXCLUDEOBJECTS.
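In a response file, the exclusion and transaction-splitting settings described above might look like the following; the object names and record count are illustrative only:

```
GOLDENGATESETTINGS_EXTRACT_EXCLUDEOBJECTS=HR.STAGING_TMP
GOLDENGATESETTINGS_REPLICAT_EXCLUDEOBJECTS=HR.AUDIT_LOG
GOLDENGATESETTINGS_EXCLUDERELOAD=HR.AUDIT_LOG
GOLDENGATESETTINGS_REPLICAT_SPLITTRANSRECS=100000
```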
- Online logical migration can be performed using a source standby database for Data Pump export/import. To support this capability, ZDM provides a set of properties for supplying the required connection details. All of these properties are optional in general; however, to use the source standby database for Data Pump export/import, you must provide the standby database connection details, as required by your setup, using the following parameters:
SOURCESTANDBYDATABASE_CONNECTIONDETAILS_HOST = Specifies the listener host name or IP address for the source standby database.
SOURCESTANDBYDATABASE_CONNECTIONDETAILS_PORT = Specifies the listener port number for the source standby database.
SOURCESTANDBYDATABASE_CONNECTIONDETAILS_SERVICENAME = Specifies the source database fully qualified service name.
SOURCESTANDBYDATABASE_CONNECTIONDETAILS_TLSDETAILS_DISTINGUISHEDNAME = Specifies the distinguished name (DN) of the database server (SSL_SERVER_CERT_DN) for a TLS connection to the database.
SOURCESTANDBYDATABASE_CONNECTIONDETAILS_TLSDETAILS_CREDENTIALSLOCATION = Specifies the directory containing client credentials (wallet, keystore, trustfile, and so on.) for a TLS connection.
SOURCESTANDBYDATABASE_CONNECTIONDETAILS_BASTIONDETAILS_IP = Specifies the IP address of the bastion host for bastion-based access to the database.
SOURCESTANDBYDATABASE_CONNECTIONDETAILS_BASTIONDETAILS_PORT = Specifies the port number of the bastion host for bastion-based access to the database.
SOURCESTANDBYDATABASE_CONNECTIONDETAILS_BASTIONDETAILS_IDENTITYFILE = Specifies the identity file to access the bastion for bastion-based access to the database.
SOURCESTANDBYDATABASE_CONNECTIONDETAILS_BASTIONDETAILS_USERNAME = Specifies the user name to access the bastion for bastion-based access to the database.
SOURCESTANDBYDATABASE_CONNECTIONDETAILS_BASTIONDETAILS_REMOTEHOSTIP = Specifies the remote host IP address to access from the bastion for bastion-based access to the database.
SOURCESTANDBYDATABASE_CONNECTIONDETAILS_PROXYDETAILS_PROTOCOL = Specifies the proxy protocol to connect to the source database through a proxy.
SOURCESTANDBYDATABASE_CONNECTIONDETAILS_PROXYDETAILS_HOSTNAME = Specifies the proxy host name to connect to the source database through an HTTPS proxy.
SOURCESTANDBYDATABASE_CONNECTIONDETAILS_PROXYDETAILS_PORT = Specifies the HTTP proxy port number to connect to the source database through an HTTPS proxy.
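For a standby listener that is directly reachable (no TLS, bastion, or proxy), only the first three properties are typically needed; the host, port, and service name below are placeholders:

```
SOURCESTANDBYDATABASE_CONNECTIONDETAILS_HOST=stby-host.example.com
SOURCESTANDBYDATABASE_CONNECTIONDETAILS_PORT=1521
SOURCESTANDBYDATABASE_CONNECTIONDETAILS_SERVICENAME=stbypdb.example.com
```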
- Allowing migration of STS data from the source database to the target database: New parameters are introduced to allow migration of STS data, and three additional phases, ZDM_PACK_STS_SRC, ZDM_EXPORT_IMPORT_STS_DATA, and ZDM_UNPACK_STS_TGT, are included in the workflow to handle the migration of STS data.
- You can now fix invalid objects at the database level instead of fixing them schema by schema. See FIXINVALIDS_BYSCHEMA, FIXINVALIDS_PARALLELCOUNT, and FIXINVALIDS_RETRYCOUNT for more details.
- Data Pump and object handling Improvements:
- Reuse or retain dump files: Retain dumps and/or reuse a prior dump for faster retries. See DATAPUMPSETTINGS_RETAINDUMPS, DATAPUMPSETTINGS_REUSE_DUMPPREFIX.
- Large filter lists handled efficiently: Bigger EXCLUDE lists are auto-managed through a temp table.
- Default tablespace block size: If unspecified, ZDM uses the source tablespace block size.
- Disable constraint validation: Option to disable the constraint validation part of the Data Pump import and expedite migration.
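As a sketch, the dump retention and reuse settings above could be combined in a response file like this; the prefix value is a placeholder, and the exact allowed values should be checked against the parameter reference:

```
DATAPUMPSETTINGS_RETAINDUMPS=TRUE
DATAPUMPSETTINGS_REUSE_DUMPPREFIX=ZDM_EXP_
```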
- Workflow improvements:
- Migrate APEX, AWR, and STS data: ZDM supports their export as part of the workflow. See MIGRATE_AWR, MIGRATE_APEX, MIGRATE_STS.
- ZDM supports ignoring Data Pump errors by error stack. See DATAPUMPSETTINGS_IGNOREERROR.
- ZDM supports ignoring unexpected phase errors and skipping to the next phase. See IGNOREPHASEERRORS-n.
- Embedded auto-retries for transient errors.
- Migration to Autonomous Database (ADB) enhancements:
- ADB as a source: Logical migration from ADB-S/ADB-D; ADB can be specified as a source in the connection details or through SOURCEDATABASE_OCID.
- ADB to ADB via FSS/NFS: File-system-based transfers with auto-attach (and optional auto-detach) of FSS. Improved path validation without SSH access to the source.
- Concurrent ADB targets: One GoldenGate microservice can replicate to multiple ADBs by using separate wallet paths.
- ADB Lakehouse: Support for migrating to ADB Lakehouse.
- Improved progress monitoring:
- Replication visibility: The zdmcli shows GoldenGate replication metrics for online migrations.
- Data Pump progress checks: Improved Data Pump monitoring indicates details of the objects processed.
- A new response file parameter, DATAPUMPSETTINGS_TEXTINDEX_METHOD, is introduced for logical migration to control the population of text indexes during the import process and improve import performance. This parameter accepts string values; when it is set to an allowed value, ZDM imports text indexes using the NO_POPULATE option during the Data Pump import.
1.3 Bugs Fixed
Zero Downtime Migration Release 26.1 introduces the bug fixes listed in the following table.
Table - Bugs Fixed in Zero Downtime Migration Release 26.1. The table has two columns: Bug Number and Description.
| Bug Number | Description |
|---|---|
| 35182115 | NEW PARAMETER WALLET_SOURCEADMIN MUST NEED FULL PATH |
| 36364901 | ZDM ESTIMATE PHASE FAILS IF ORACLE DIRECTORY OBJECT DOES NOT EXIST IN CUSTOMER'S DATABASE |
| 36308859 | ZDM THROWS ORA-20000 ERROR ON DATA PUMP EXPORT, INCORRECTLY STATES DATA PUMP LOG UPLOADED TO OBJECT STORAGE |
| 36358680 | ORA-39173 EVEN AFTER SETTING DATAPUMPPARAMETERS.ENCRYPTION = NONE |
| 35043118 | CHANGE PROMPT FOR OCI OSS USER PASSWORD PROMPT TO PROMPT FOR "AUTH TOKEN" INSTEAD OF "PASSWORD" |
| 36716036 | ZDM MIGRATION WITH USER ACTION SCRIPTS FAILING TO EXECUTE IN RHP_MAIN_LINUX.X64 SERIES |
| 36383756 | CLUSTER SERVICES SHOULD BE DELETED FOR PLACEHOLDER TARGET DATABASE |
| 37492619 | PRGT 1017 - ZDM - PATCH DISCREPANCY ERROR - UPDATE TO ERROR MESSAGE |
| 37443759 | ZDM LOGICAL ONLINE MIGRATION JOB FAILING AT ZDM_DATAPUMP_ESTIMATE_SRC ON EXCLUDING TABLES WHEN SOURCEDATABASE_ALLOWTEMPTABLE=TRUE |
| 37202877 | ENSURE OGG CAPTURE PROCESS IS GETTING PURGED IN CASE OF FAILED ZDM EXTRACT DELETION |
| 37486881 | ZDM LOGICAL MIGRATION | SAME SET OF TABLES ARE INCLUDED IN DATAPUMP BATCH1 AND BATCH2 JOB DURING ZDM_RELOAD_PARALLEL_EXPORT_IMPORT |
| 36816712 | FIX ZDM <> DATA PUMP JOB COMPLETION STATUS POLLING LOGIC |
| 36831210 | ENH: ZDM - LIST THE RELOAD OBJECTS ALONG WITH SIZE WHICH WILL BE RELOADED AS PART OF RELOADING PHASE IN THE -EVAL |
| 36426368 | ZDM FAILS WITH PRGO-4083 DURING ZDM_POST_MIGRATE_TGT PHASE FOR PDBS IN MOUNTED STATE |
| 37915601 | ZDM - ORADISCOVER DATABASE HOME VERSION RETRIEVAL FAILS FOR 11G |
| 33832772 | ZDM: USING EXISTING STANDBY, ZDM FAILS WHEN DG BROKER CONFIGURATION USING TNS ALIASES |
| 37763381 | GOV/OC3 : STEP ZDM_VALIDATE_GG_HUB DOESN'T COMPLETE |
| 38029366 | ZDM_DUPLICATE_TGT FAILING WITH RMAN-05500 |
| 38011485 | IRESTORE FAILS MIDWAY FOR JUST 1 CHANNEL WITH ORA-17627: ORA-01017, ZDM CONTINUES WITH NEXT STEPS AFTER THIS AND EVENTUALLY FAILS WITH ORA-15001: DISKGROUP "DATAC3" DOES NOT EXIST OR IS NOT MOUNTED |
| 38013428 | ZDM_GET_SRC_INFO WITH EXCEPTION PRGZ-3838 : SCAN NAME CANNOT BE AUTOMATICALLY DISCOVERED FOR DATABASE OCIRLAB IN NODE OCI-CFAPPDB-LAB-01.DELTAVN.VN.. |
| 36654142 | ZDM PHYSICAL MIGRATION CHANGES CARDINALITY FOR PDB SERVICE |
| 38002632 | ZDM - WITH ZDM 21.5.2 KIT, LOGICAL MIGRATIONS FROM RDS TO ADBS ARE FAILING AT ZDM_VALIDATE_SRC PHASE |
| 37960266 | ZDM LOGICAL MIGRATION: HANDLING OF XML TYPE TABLES (CPAT: HAS_XMLTYPE_TABLES) |
| 37930543 | ZDM PHYSICAL DOES NOT CHECK IF TGTTDEKEYSTOREPASSWD IS CORRECT |
| 38087108 | CLEANUP CAPTURE FOOTPRINT ON THE SOURCE DB UPON FAILED ZDM ONLINE MIGRATIONS |
| 34059615 | ZDM: PRKO-2102 : FAILED TO CREATE SERVER POOL |
| 35801869 | SUPPORT FOR SOURCE OKV DB'S TO MIGRATE TO THE CLOUD (EXACC & EXADB-D) |
| 38213800 | DATAPUMP WONT EXPORT ANYTHING WHEN INCLUDE LIST IS > 30 TABLES. |
| 38097601 | ZDM MIGRATION FAILS ON CONTROLFILE RESTORE FROM SERVICE WITH INVALID USERNAME/PASSWORD |
| 38355045 | ZDM MIGRATION FAILS WITH PRGZ-1433 : UNABLE TO ACCESS FILE ON SHARED PATH MAPPED TO DIRECTORY OBJECT 'ZDM_DATA_PUMP_DIR' IN TARGET DATABASE XXXXDB.COM |
| 38422625 | ZDM - EVEN AFTER ENABLING MINIMAL SUPPLEMENTAL LOGGING, ONLINE LOGICAL EVAL/MIGRATION JOB FROM ADBD TO ADBS IS FAILING. |
| 38493179 | ZDM LOGICAL MIGRATION | LOGICAL ONLINE MIGRATION FAILING AT ZDM_ADVANCE_SEQUENCES WITH ORA-01795: MAXIMUM NUMBER OF EXPRESSIONS IN A LIST IS 1000 |
| 38530497 | ZDM ERRORS RESTORING THE TARGET IN A PHYSICAL ONLINE MIGRATION WHERE THE SOURCE DB USES ORACLE GDS |
| 38554913 | ZDM - PHYSICAL MIGRATION FAILURE WITH OKV: VERIFYING EXACS DATABASE AS A SERVICE (DBAAS) WALLET TRANSPARENT DATA ENCRYPTION (TDE) KEYSTORE |
| 38455530 | ZDM ONLINE LOGICAL MIGRATION FAILS WITH PRGZ-1308 WHEN USING DATAPUMPSETTINGS_JOBMODE=TABLE WITH INCLUDEOBJECTS |
| 38001980 | ABILITY FOR ZDM TO SSH TO SOURCE AND TARGET USING OTHER THAN PORT 22 |
| 38589656 | ZDM ONLINE PHYSICAL MIGRATION SET WRONG PASSWORDFILE IN CRS REGISTRY AFTER RMAN |
| 38704182 | ZDM DBLINK IMPORT DIRECTORY OBJECT NOT HONORED |
| 38626118 | OPC :CONSOLE SHOWS "LAKEHOUSE," BUT THE API RETURNS "LH" (LAKEHOUSE) |
1.4 Downloading the Zero Downtime Migration Installation Software
For a fresh installation of the latest Zero Downtime Migration software version, go to https://www.oracle.com/database/technologies/rac/zdm-downloads.html.
1.5 Downloading the Zero Downtime Migration Documentation
1.6 General Information
At the time of this release, there are some details and considerations about Zero Downtime Migration behavior that you should take note of.
1.6.1 Running RHP and Zero Downtime Migration Service on the Same Host
If the Zero Downtime Migration service is installed on the same host where an RHP server is deployed, note that some workarounds are required.
If you have started an RHP server or client (RHPS/RHPC) on the same node as the Zero Downtime Migration service, using the default port, you must either:
- Stop RHPS/RHPC, or
- Modify the port for RHPS/RHPC
This avoids a port collision between RHP and Zero Downtime Migration. If you do not want to change the RHP configuration, you can instead modify the port for Zero Downtime Migration before starting the Zero Downtime Migration service.
To identify the ports being used by Zero Downtime Migration:
ZDMinstallation/home/bin/zdmservice status
To stop the Zero Downtime Migration service:
ZDMinstallation/home/bin/zdmservice stop
To modify the ports:
ZDMinstallation/home/bin/zdmservice modify -help
Modifies configuration values.
USAGE: zdmservice modify
Optional parameters:
transferPortRange=<Range_of_ports>
rmiPort=<rmi_port>
httpPort=<http_port>
mysqlPort=<mysql_port>
For example:
ZDMinstallation/home/bin/zdmservice modify mysqlPort=8899
Editing MySQL port...
Successfully edited port=.* in file my.cnf
Successfully edited ^\(CONN_DESC=\).* in file rhp.pref
Successfully edited ^\(MYSQL_PORT=\).* in file rhp.pref
1.6.2 Cross-Edition Migration
Zero Downtime Migration cannot be used to migrate an Enterprise Edition database to a Standard Edition database. Conversely, Standard Edition databases can be migrated to Enterprise Edition databases, except with the physical online migration method. For Data Pump-based migrations, the Data Pump dumps are not exported encrypted.
1.6.3 EXT3 File System Support
There is a known issue which prevents Zero Downtime Migration from being installed in EXT3 file systems. The root cause is MySQL bug 102384. This is not a limitation of Zero Downtime Migration, but a constraint from MySQL. When that bug is resolved, Zero Downtime Migration is expected to work on EXT3 file systems.
1.7 Known Issues
At the time of this release, the following are known issues with Zero Downtime Migration that could occur in rare circumstances. For each issue, a workaround is provided.
1.7.1 ZDM DR Migration Does not Support Non-CDB to PDB Parameters
The following parameters are not supported: NONCDBTOPDB_CONVERSION, NONCDBTOPDB_SWITCHOVER, ZDM_NONCDBTOPDB_PDB_NAME
1.7.2 ZDM Refresh is Failing
Issue: ZDM fails while refreshing the materialized views on the target during
the ZDM_MVIEW_REFRESH_PHASE.
DECLARE failures BINARY_INTEGER;
BEGIN
DBMS_MVIEW.REFRESH_ALL_MVIEWS(failures, 'C', NULL, TRUE, TRUE, FALSE);
END;
/
1.7.3 Job With STS Migration Does Not Support Reuse Dumpfile Option in Response File
Issue: For offline logical migration, a job with STS migration does not support the reuse dumpfile option in the response file. A ZDM migration that migrates STS data from a source to a target database fails if the DATAPUMPSETTINGS_REUSE_DUMPPREFIX parameter is used to reuse dump files from previous jobs.
1.7.4 ZDM Job Completed Successfully, APEX Migration Failed
Issue: The ZDM job completed successfully; however, the APEX migration failed during the process.
SOURCEDATABASE_ADMINUSERNAME, then an error in the import log is shown as:
phoenix94034.dev3sub2phx.databasede3phx.oraclevcn.com: SQL output : Schema - MIGUSER
phoenix94034.dev3sub2phx.databasede3phx.oraclevcn.com: SQL output : Tablename - APEX_STAGE_TABLE
phoenix94034.dev3sub2phx.databasede3phx.oraclevcn.com: SQL output : Error: -20001-ORA-20001: staging table MIGUSER.APEX_STAGE_TABLE does not exist.
1.7.5 Missing Prerequisite: EXECUTE Privilege on SYS.UTL_RECOMP for ExaDB-XS Target
Issue: ZDM fails to compile objects at the database level for an ExaDB-XS target with:
PLS-00201: identifier 'SYS.UTL_RECOMP' must be declared
Workaround: The migration user must have the EXECUTE privilege on SYS.UTL_RECOMP. If you do not grant this privilege, ZDM cannot invoke SYS.UTL_RECOMP, resulting in the PLS-00201 error.
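A grant along the following lines satisfies the prerequisite; the grantee name is a placeholder for your migration user:

```sql
-- Hypothetical example: allow the migration user to invoke SYS.UTL_RECOMP.
GRANT EXECUTE ON SYS.UTL_RECOMP TO migration_user;
```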
1.7.6 Encryption Keys Validation for Physical Migration when OKV Persistent Cache is Enabled
Issue: Currently, for physical migration, ZDM validates the encryption keys by checking the gv$encryption_keys view during the ZDM_POST_MIGRATE_TGT phase. If the target is configured with the OKV persistent cache (PKCS11_PERSISTENT_CACHE_FIRST=1), the gv$encryption_keys view does not reflect all the keys loaded from OKV during the ZDM migration, which results in the following error:
PRGZ-3536 : tablespace encryption keys found missing from the Transparent Data Encryption (TDE) keystore
Workaround: Set PKCS11_PERSISTENT_CACHE_FIRST=0 in okvclient.ora before starting the migration, and re-enable the persistent cache after the migration.
1.7.7 Hybrid Migration Jobs Getting Failed
Issue: Hybrid migration jobs fail instead of being suspended. Job execution is suspended during the ZDM_XTTS_BACKUP_FULL_SRC phase, and the job then fails.
1.7.8 Online Physical Migration Fails For TDE On ASM Client Cluster
Issue: A ZDM online physical migration job using TDE on ASM, where the cluster mode is client cluster, fails during the ZDM_CLONE_TGT phase with the following errors:
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 10/03/2024 19:26:56
ORA-19870: error while restoring backup piece /nfs-mount/srcdb/c-1265409764-20241003-02
ORA-19913: unable to decrypt backup
ORA-28365: wallet is not open
Solution: Apply the patch for RDBMS Bug 37303078.
1.7.9 Known Issue for Migrations Involving Encrypt On Restore
Issue: For migrations involving encrypt on restore, such as restore from service and backup or restore, for Oracle Database 19c or later, ensure that you apply the following two fixes to the target database before starting the migration:
- 35495759 - (G) - 80 - V$DATABASE_KEY_INFO / CONTROL FILE DB KEY NEEDS TO BE RESYNC'ED FROM SYSTEM DATAFILES (RTI 26895084)
- 36879267 - (G) - 80 - ORA-28374: TYPED MASTER KEY NOT FOUND IN WALLET AFTER FIX FOR BUG 36697088
1.7.10 ZDM Encounters Failures During the ZDM_PRE_MIGRATION_ADVISOR Phase
Issue: ZDM encounters failures during the ZDM_PRE_MIGRATION_ADVISOR phase for Oracle Autonomous Database migrations with Oracle AI Database 26ai.
RUNCPATREMOTELY=TRUE
COPYCPATREPORTTOZDMHOST=FALSE
1.7.11 Consecutive Migrations for Creating Tablespaces Causing Space Exhaustion
Issue: If you select autocreate for tablespaces and any tablespace fails to create, then when you retry the tablespace creation, the datafile creation is duplicated. Additional datafiles are added on the target database, leading to space exhaustion.
Solution: Exclude the creation of tablespaces by specifying the TABLESPACEDETAILS_EXCLUDE parameter, or set TABLESPACEDETAILS_AUTOCREATE to FALSE.
1.7.12 Hybrid Migration Failed at ZDM_VALIDATE_XTTS_SRC
Issue: The hybrid migration fails at ZDM_VALIDATE_XTTS_SRC while migrating from an Oracle Database 11.2.0.4 source to an Oracle Database 19c target database.
Solution: If you plan to migrate from Oracle Database 11.2.0.4 sources, you also need the latest Perl patch, 5.28.2 or later.
1.7.13 Hybrid Migration Encrypted Tablespace Migration Issues
- The tablespaces are getting migrated even when they are excluded using the TABLESPACEDETAILS_EXCLUDE parameter.
- The above excluded tablespace is displayed as unencrypted at the target database even when it was encrypted at the source database.
1.7.14 Hybrid Migration Failing at ZDM_DATAPUMP_IMPORT_TGT for Oracle Database 12.2 Source Database
Issue: The hybrid migration is failing at the ZDM_DATAPUMP_IMPORT_TGT phase when the source database is Oracle Database 12.2, due to the SPATIAL_CSW_ADMIN_USR object. According to the related My Oracle Support (MOS) note, the user SPATIAL_WFS_ADMIN_USR is no longer needed and must be ignored for Oracle Database 12.2.
Solution: This is expected behavior according to the Oracle Data Pump documentation. Review the errors, and if these are the only errors, you can resume the job with -ignore IMPORT_ERRORS.
1.7.15 ZDM_XTTS_RESTORE_FULL_TGT: Restore Foreign Tablespace Users to New from Backup Set is Failing with Permission Errors
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 06/24/2024 03:32:19
ORA-19505: failed to identify file "/nfsshare/allnodes/importdumpaix/rman_job_10/RRACW_backup_042u4ku2_4_1_1"
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 13: Permission denied
Additional information: 7
This is because the RMAN backups are not accessible by the target Oracle user. Setting the ZDM response file variable RMANSETTINGS_PUBLICREAD=TRUE does not help either.
Solution: As a workaround, if it is not possible to use the same user or a common primary group, make the backups accessible to the target Oracle user by running an OS command such as the following on the backup location:
chmod -R a+rX /nfsshare/allnodes/importdumpaix/rman_job_10
1.7.16 Physical Offline Migration Fails in ZDM_DATABASE_UPGRADE_TGT
Issue: Physical offline migration from an Oracle Database 19c source to an Oracle AI Database 26ai target fails in the ZDM_DATABASE_UPGRADE_TGT phase with the following error:
ORA-02149: Specified partition does not exist
Solution: As a workaround, obtain the patch for Bug 36710007 and apply it in the target database home. This issue is tracked with Bug 36710007.
1.7.17 Data Pump Export Logs Skipping The Source Database Host/S3 Bucket
Issue: The Oracle Data Pump export logs skip the source database host/S3 bucket while migrating from an Amazon Web Services (AWS) RDS source to an Oracle Autonomous Database Serverless target database.
- Directory specified exists in the source: If the directory exists, then after completion of the migration job (SUCCESS/FAILURE) the export and estimate logs are present in the existing directory.
- Directory specified does not exist in the source and is created in the workflow: The following cases apply:
- Eval job: After completion of the eval job, the estimate log is present in the created directory.
- Failed migration job: After completion of the eval job, the estimate log is present in the created directory.
- Successful migration job: ZDM creates the directory in RDS, stores the export dumps and log files in this directory, uploads the log files to S3, and drops the created directory.
1.7.18 Known Issues for Upgrade Scenario
Issue: Environment variables such as ORACLE_HOME that are set to a symlink path might cause UPGRADE-related phases to fail.
Solution: Set ORACLE_HOME and any other such environment variable to the actual path, not the symlink path.
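The fix above can be sketched in shell; the /tmp/demo paths below are assumptions that stand in for a real Oracle home:

```shell
# Resolve a symlinked ORACLE_HOME to its actual path before running
# UPGRADE-related phases. The /tmp/demo paths are placeholders only.
mkdir -p /tmp/demo/real_19c_home                  # stands in for the real home
ln -sfn /tmp/demo/real_19c_home /tmp/demo/link_home
ORACLE_HOME=/tmp/demo/link_home                   # symlinked home (the bad case)
ORACLE_HOME="$(readlink -f "$ORACLE_HOME")"       # follow every symlink
export ORACLE_HOME
echo "$ORACLE_HOME"                               # now the actual path
```

On a real host, skip the demo scaffolding and simply re-export ORACLE_HOME as the output of `readlink -f "$ORACLE_HOME"` before starting the migration.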
1.7.19 Issue with Hybrid Migration for Oracle Database 11.2.0.4 Source Database
Issue: The hybrid migration from an Oracle Database 11.2.0.4 source fails in ZDM 21.5 with the following error:
ERROR at line 1:
ORA-00904: "ORACLE_MAINTAINED": invalid identifier

Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
1.7.20 Hybrid Migration to Oracle AI Database 26ai Target Failing with EM_EXPRESS_ALL Does Not Exist Error
Issue: The hybrid migration to Oracle AI Database 26ai target
database is failing at the ZDM_DATAPUMP_IMPORT_TGT phase with the
EM_EXPRESS_ALL does not exist error.
Workaround: Review the import errors and resume the job with -ignore IMPORT_ERRORS.
1.7.21 Physical migration for Non-CDB to PDB along with timezone upgrade and database upgrade fails in the TIMEZONE_UPGRADE_PREPARE_TGT phase
Issue: Physical migration for Non-CDB to PDB along with timezone upgrade and database upgrade fails in the TIMEZONE_UPGRADE_PREPARE_TGT phase.
1.7.22 Issue with logical migration with DATAPUMPSETTINGS_JOBMODE=FULL
Issue: For an Oracle Base Database 21c target database, ZDM logical migration fails when DATAPUMPSETTINGS_JOBMODE=FULL; the job gets stuck at the ZDM_DATAPUMP_IMPORT_TGT phase.
1.7.23 Issue with RMAN encryption in offline migration
Issue: The buffer cache layer that enables RMAN 'encrypt on restore' has a known issue when the control file is restored a second time.
Workaround: Encrypt the system tablespaces, post migration.
1.7.24 Issue with Logical Migration when COPYCPATREPORTTOZDMHOST is set to TRUE while using ZDMAUTH credentials for migration
Issue: When the parameter COPYCPATREPORTTOZDMHOST=TRUE is set, the CPAT report is not copied to the ZDM host when using ZDMAUTH credentials for migration.
Workaround: Use dbuser credentials for the migration; this issue does not occur with dbuser credentials.
1.7.25 ZDM_NONCDBTOPDB_CONVERSION fails during NONCDBTOPDB migrations
- During NONCDBTOPDB migrations, the conversion phase (ZDM_NONCDBTOPDB_CONVERSION) fails unexpectedly for some variants of Oracle Database 12c (12.1).
- During NONCDBTOPDB migrations, the conversion phase fails if patch compatibility violation errors were found and the datapatch phase was skipped (TGT_SKIP_DATAPATCH=TRUE).
Workaround: For NONCDBTOPDB migrations, it is recommended to run the datapatch phase (TGT_SKIP_DATAPATCH=FALSE) before the conversion phase.
1.7.26 ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
Issue: The following error occurs during the ZDM_SWITCHOVER_SRC phase:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor. Unable to connect to database using SERVICE_NAME=<unique_db_name>_DGMGRL.
Workaround: Manually register the <db_unique_name>_DGMGRL service by referring to: Oracle Data Guard Broker and Static Service Registration (Doc ID 1387859.1) https://support.oracle.com/epmos/faces/DocContentDisplay?id=1387859.1.
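One common way to register the service statically is a SID_DESC entry in listener.ora on the source database server; the names and paths below are hypothetical, so substitute your own db_unique_name, Oracle home, and SID, and follow the MOS note above for the authoritative steps:

```
# listener.ora -- hypothetical values for illustration only
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = mydb_DGMGRL)        # <db_unique_name>_DGMGRL
      (ORACLE_HOME = /u01/app/oracle/product/19.0.0/dbhome_1)
      (SID_NAME = mydb1)
    )
  )
```

Reload the listener (for example, with lsnrctl reload) after editing so the static service is picked up.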
1.7.27 ZDM Job Fails With Permission Denied Issue
PRCZ-4001 : failed to execute command "/tmp//oradiscover_<>.sh"
PRCZ-2103 : Failed to execute command "/tmp//oradiscover_<>.sh"
bash: /tmp//oradiscover_120212.sh: Permission denied
Workaround: You might get the aforementioned errors if the migration job fails in GET_SRC_INFO phase. However, these might not be the actual errors. To verify the actual errors, check the log file.
Ensure that /tmp is mounted with execute permission as a prerequisite for the source and target databases.
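As a quick prerequisite check, the mount options of /tmp can be inspected for noexec; this is a sketch, with a sample mount-options string standing in for real /proc/mounts output:

```shell
# Warn when /tmp is mounted noexec (ZDM stages discovery scripts under /tmp).
check_tmp_exec() {
  case "$1" in
    *noexec*) echo "noexec: remount /tmp with exec before migrating" ;;
    *)        echo "/tmp allows execution" ;;
  esac
}
# On a real host, feed it the live options:
#   check_tmp_exec "$(awk '$2=="/tmp"{print $4}' /proc/mounts)"
check_tmp_exec "rw,nosuid,nodev,noexec,relatime"   # sample line for the demo
```

Run the check on both the source and target database servers before starting a job.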
1.7.28 Known Issues for Hybrid Migration
- RMAN backup encryption.
- Encrypt on restore.
Solution: Both of the above issues depend on the availability of the fix for the following RMAN bug:
Bug 31229602 - RMAN BACKUPS - K_BTTRDA BLOCKS PROCESSED BY KD4_ENCRYPT_OFFSET1 ARE NOT ENCRYPTED.
1.7.29 Procedural Replication Does Not Work with Error 'ORA-01031: INSUFFICIENT PRIVILEGES'
Issue: Migration fails with the following error:
ORA-01031: INSUFFICIENT PRIVILEGES
Solution: As a workaround, run the following commands as sys user:
- alter session set container = '<pdb>'
- grant set container to ggadmin
1.7.30 Physical Migration Failing at UPGRADE_TGT for a Non-CDB Source
Issue: While performing plug in and upgrade for a non-CDB source to a higher version of CDB, the physical migration might fail in the ZDM_DATABASE_UPGRADE_TGT phase because of a SYS.ALERT_QUE issue. This issue occurs if you get the following error in catupgrd0.log:
SQL>
SQL> -- Create alert queue table and alert queue
SQL> BEGIN
2 BEGIN
3 dbms_aqadm.create_queue_table(
4 queue_table => 'SYS.ALERT_QT',
5 queue_payload_type => 'SYS.ALERT_TYPE',
6 storage_clause => 'TABLESPACE "SYSAUX"',
7 multiple_consumers => TRUE,
8 comment => 'Server Generated Alert Queue Table',
9 secure => TRUE);
10 dbms_aqadm.create_queue(
11 queue_name => 'SYS.ALERT_QUE',
12 queue_table => 'SYS.ALERT_QT',
13 comment => 'Server Generated Alert Queue');
14 EXCEPTION
15 when others then
16 if sqlcode = -24001 then NULL;
17 else raise;
18 end if;
19 END;
20 dbms_aqadm.start_queue('SYS.ALERT_QUE', TRUE, TRUE);
21 dbms_aqadm.start_queue('SYS.AQ$_ALERT_QT_E', FALSE, TRUE);
22 commit;
23 EXCEPTION
24 when others then
25 raise;
26 END;
27 /
BEGIN
*
ERROR at line 1:
ORA-04063: SYS.ALERT_QUE has errors
ORA-06512: at line 25
ORA-06512: at "SYS.DBMS_AQADM", line 742
ORA-06512: at "SYS.DBMS_AQADM_SYS", line 8049
ORA-06512: at "SYS.DBMS_AQADM_SYSCALLS", line 912
ORA-06512: at "SYS.DBMS_AQADM_SYS", line 8025
ORA-06512: at "SYS.DBMS_AQADM", line 737
ORA-06512: at line 20
Solution: As a workaround, perform the following steps:
- On the target node, copy all of the migrated database related files from the 12.1 HOME/dbs directory to the 19c HOME/dbs directory.
- Perform the steps mentioned in Upgrade to 12.2 fails with Error : ORA-04063: SYS.ALERT_QUE has errors (Doc ID 2632809.1).
- Perform the steps mentioned in How to recreate the SYS.ALERT_QUE (Doc ID 430146.1).
1.7.31 Restriction for Number of INCLUDEOBJECTS
Issue: The Data Pump component has an INCLUDEOBJECTS limitation: a large number of INCLUDEOBJECTS entries cannot be supplied.
Solution: As a workaround, create a table, such as <ADMIN schema>.INCLUDE_TEMP_LIST, in the export user schema of the source database that lists all of the TABLE objects to be filtered (that is, all objects that would otherwise have been specified in INCLUDEOBJECTS), and specify the following parameters. For example, for schema SCOTT:
INCLUDEOBJECTS-1=owner:SCOTT
DATAPUMPSETTINGS_METADATAFILTERS-1=name:NAME_EXPR,value:'IN (select OBJECT_NAME from <ADMIN schema>.INCLUDE_TEMP_LIST)', objectType:TABLE
Note:
For online logical migration, set the filtering of such objects in the Oracle GoldenGate EXTRACT parameter after creation of the Extract. Pause the migration job after ZDM_CREATE_GG_EXTRACT_SRC and update the parameter file.
1.7.32 Non CDB to PDB Not Supported for DR Migrations
Issue: The use case of migrating from a non-CDB source database to a PDB target database is not supported for DR migrations.
Solution: DR is at the container level. Set up the target CDB with a DR configuration of its own; when the non-CDB is plugged into the target CDB (a regular non-CDB to PDB migration), it is replicated through the target CDB.
1.7.33 ZDM Operations Fail with "Unable to negotiate key exchange for kex algorithms"
Issue: When the source DB is in an old Linux distribution that only has deprecated KexAlgorithms, ZDM operations fail with the following error:
Unable to negotiate key exchange for kex algorithms.
Solution:
- Edit the <zdmBase>/crsdata/<hostname>/rhp/conf/rhp.pref file to add the following line: USE_LEGACY_SSH=TRUE
- Restart ZDM.
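The steps above can be sketched as follows; ZDM_BASE defaults to a demo path here, so point it at your actual ZDM base before using it:

```shell
# Add USE_LEGACY_SSH=TRUE to rhp.pref, then restart the ZDM service.
ZDM_BASE="${ZDM_BASE:-/tmp/zdm_demo_base}"        # assumption: demo default path
PREF_DIR="$ZDM_BASE/crsdata/$(hostname)/rhp/conf"
mkdir -p "$PREF_DIR"                              # already exists on a real host
echo "USE_LEGACY_SSH=TRUE" >> "$PREF_DIR/rhp.pref"
grep USE_LEGACY_SSH "$PREF_DIR/rhp.pref"          # confirm the entry landed
# Then restart: $ZDM_HOME/bin/zdmservice stop && $ZDM_HOME/bin/zdmservice start
```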
1.7.34 RESUME JOB FROM 21.3.12 AFTER UPGRADING TO 21.4.2 FAILS WITH UNRECOGNIZED FIELD "ENVIRONMENT" ERROR
Issue: When you resume a job that was started using a ZDM version older than 21.4.1, after upgrading ZDM to version 21.4.1, the resume job fails with the following error:
Unrecognized field "environment" (class oracle.cluster.gridhome.apis.actions.database.ZdmPayload$SourceContainerDatabase$Builder), not marked as ignorable (6 known properties: "agentId", "connectionDetails", "copy", "streamId", "adminUsername", "ggAdminUsername"]).
Solution: As a workaround, install ZDM 21.4.2 or later when upgrading from an earlier version of ZDM.
1.7.35 Data Guard Cleanup Issues
Issue: The following issues are observed during a Data Guard cleanup:
- Clearing log_archive_config using the following statement causes the instance to crash with ORA-16188: Alter system set log_archive_config='' scope=both sid='*'
- The fal_server parameter does not get cleared and points to the target database. This results in the source database continuing to fetch redo logs.
Solution:
- According to the MOS note Doc ID 1580482.1, the correct way to reset log_archive_config is to set it to NODG_CONFIG: alter system set log_archive_config='NODG_CONFIG' scope=both sid='*';
- Clear fal_server by running the following command: alter system set fal_server='' scope=both sid='*';
1.7.36 ZDM INIT Parameter Modification Leading to Data Guard Configuration Issues
Issue: While performing an offline physical migration from Oracle Exadata Database Service to Oracle Exadata Database Service on Dedicated Infrastructure, Data Guard configuration from the console fails because it expects certain init parameters at the database level (with *.init_parameter) instead of instance-specific parameters. ZDM removes the database-level init parameters and adds the additional instance-specific entries:
- ZDM removes: *.compatible='19.0.0'
- ZDM adds: inst1.compatible='19.0.0' and inst2.compatible='19.0.0'
- This leads to an issue when you try to configure Data Guard in the Cloud post migration, and produces the following error:
CDG-50611 : Parameter COMPATIBLE is not set. Set parameter as ALTER SYSTEM SET COMPATIBLE=<value>
- Further, ZDM removes: *.db_files=1024
- ZDM adds: inst1.db_files=1024 and inst2.db_files=1024
- When configuring Data Guard post migration, Data Guard takes the value of db_files as 200 because there is no entry such as *.db_files=1024.
- Further, ZDM adds the following additional entries, which are not required for the database to function:
exacs-hostname1.thread=1
exacs-hostname2.thread=2
exacs-hostname1.undo_tablespace='UNDOTBS1'
exacs-hostname1.undo_tablespace='UNDOTBS2'
exacs-hostname1.instance_number=1
exacs-hostname1.instance_number=2
Note:
ZDM adds these entries in addition to the instance-specific thread entries, instance-specific instance_number entries, and instance-specific undo_tablespace entries.
Solution:
- Remove the instance-specific values wherever possible and remove the following parameters:
exacs-hostname1.undo_tablespace='UNDOTBS1'
exacs-hostname1.undo_tablespace='UNDOTBS2'
exacs-hostname1.instance_number=1
exacs-hostname1.instance_number=2
exacs-hostname1.thread=1
exacs-hostname2.thread=2
- Create the entries for sid='*' as shown:
SQL> alter system set compatible='19.0.0' scope=spfile sid='*';
System altered.
SQL> alter system set db_files=1024 scope=spfile sid='*';
System altered.
- Query the entries created above as shown:
SQL> show parameter compatible

NAME                 TYPE        VALUE
-------------------- ----------- --------
compatible           string      19.0.0
noncdb_compatible    boolean     FALSE
1.7.37 Oracle Data Pump Startup Errors
Issue: An Oracle Data Pump job can fail to start because of:
- invalid permissions on the export directory path,
- invalid arguments, or
- procedure semantics issues due to the combination of input parameters.
The underlying error is not captured in the Data Pump error log because the job did not start. In such cases, the Oracle Data Pump start failure has to be looked up in the database trace files as shown.
Solution: Look at the first line of the Data Pump log to identify the Data Pump job unique identifier associated with the ZDM job, which in this example is ZDM_152_DP_EXPORT_9176.
- Connect to the source database host (if it is an Oracle RAC database, the log can be on any database node, so repeat the following steps on each node).
- Identify the diag location using the following query (if required): select name, value from v$diag_info where name like '%Trace';
- Change to that directory using cd.
- Run the following command: grep ZDM_152_DP_EXPORT_9176 *dm*
- Open the file containing the ZDM_152_DP_EXPORT_9176 job start details and identify the ORA- errors that resulted in the failure.
- Perform the same steps on the target nodes if the action that failed is IMPORT.
- For an ADB target, check for similar text in the output of the following query: select payload from v$diag_trace_file_contents where trace_filename like '%dm0%';
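The trace-file search above can be sketched as one script; the job identifier, trace path, and the sample ORA- error are illustrative stand-ins, with demo scaffolding replacing a real trace directory:

```shell
# Find the trace file that records why the Data Pump job failed to start.
DP_JOB_ID="ZDM_152_DP_EXPORT_9176"                # from the ZDM job log
TRACE_DIR="${TRACE_DIR:-/tmp/demo_trace}"         # from v$diag_info on a real host
mkdir -p "$TRACE_DIR"                             # demo scaffolding only
printf '%s startup failed\nORA-31626: job does not exist\n' "$DP_JOB_ID" \
  > "$TRACE_DIR/mydb_dm00_123.trc"                # fake trace file for the demo
grep -l "$DP_JOB_ID" "$TRACE_DIR"/*dm*            # which file mentions the job
grep -h "ORA-" "$TRACE_DIR"/*dm*                  # the underlying ORA- errors
```

On a real host, set TRACE_DIR from the v$diag_info query, drop the scaffolding lines, and run only the two grep commands.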
1.7.38 NON-CDB TO PDB Conversion Use-case for ZDM AUX STARTUP
Issue: When migrating a non-CDB source to a CDB target as a PDB, ZDM creates an auxiliary database to first migrate the source as a non-CDB database on the target. After the non-CDB is brought to the target, ZDM performs an unplug/plug to plug in the migrated non-CDB. To create the auxiliary database, ZDM uses the source database SPFILE. This means that while the migration is in progress, the target must be able to run two databases simultaneously: the target CDB and the auxiliary database running with the source SPFILE/configuration. When the source is configured with a very large memory size or SGA/PGA, the auxiliary database might fail to start.
Solution: Do one of the following:
- Increase the target size for the duration of the migration. This could be increasing the sysctl parameters configuration, ocpu or memory sizing if using elastic computing, and so on.
- Decrease the size of the source memory size SGA/PGA before the migration.
- Change the ZDM auxiliary database size by recreating the AUX SPFILE.
1.7.39 ZDM Skips C## or c## Users
Issue: During a logical migration, C## (common) users found in the PDB are not moved by Oracle Data Pump; however, the grants and PROFILEs associated with them fail to import. Setting an explicit EXCLUDE on these C## users ensures that their dependent objects are not moved either.
Workaround: ZDM does not migrate common users found in the PDB (starting with C##) for SCHEMA mode migration; for a FULL job, it explicitly finds the common users found in the PDB and sets a SCHEMA EXCLUDE for all such users.
1.7.40 Tables Created in the Data Tablespace in Oracle Autonomous Database Instead of the Respective User Data Tablespaces
Issue: Objects are getting mapped to DATA at the target and tables are getting created in the data tablespace of Oracle Autonomous Database on Exadata Cloud@Customer instead of the respective user data tablespaces.
Expected behavior: Migration to Oracle Autonomous Database on Exadata Cloud@Customer with objects in the same tablespace as it was at the source database.
Workaround: ZDM no longer sets the TRANSFORM SEGMENT_ATTRIBUTES parameter to NO if user tablespaces are created in the target. Use the following workaround to avoid this:
Set DATAPUMPSETTINGS_SKIPDEFAULTTRANSFORM=TRUE to disable all of the default transforms, then review the following transforms and set only the ones you need. See Default Data Pump Parameter Settings for Zero Downtime Migration for more details on these defaults.
DATAPUMPSETTINGS_SECUREFILELOB=TRUE
DATAPUMPSETTINGS_METADATATRANSFORMS-1=name:LOB_STORAGE, value:'SECUREFILE'
DATAPUMPSETTINGS_METADATATRANSFORMS-2=name:OMIT_ENCRYPTION_CLAUSE, value:1
DATAPUMPSETTINGS_METADATATRANSFORMS-3=name:DWCS_CVT_IOTS, value:1
DATAPUMPSETTINGS_METADATATRANSFORMS-4=name:CONSTRAINT_USE_DEFAULT_INDEX,
value:1
1.7.41 Exporting and Importing fails for ADB-S and ADB-D During a Logical Migration
Issues: Exporting Oracle Autonomous Database Serverless and importing to Oracle Autonomous Database on Exadata Cloud@Customer fails during a logical migration. Similarly, exporting Oracle Autonomous Database on Exadata Cloud@Customer and importing to Oracle Autonomous Database Serverless fails during a logical migration.
This happens when the default roles are not present in Oracle Autonomous Database Serverless and Oracle Autonomous Database on Exadata Cloud@Customer respectively.
1.7.42 Skip the ZDM RELOAD of empty schema or schema with no qualifying objects
Solution: ZDM filters objects for reload. If there are no objects to be reloaded for a specific schema after applying the following conditions, then skip the reload feature or do not include that schema:
- Objects from DBA_GOLDENGATE_SUPPORT_MODE that have SUPPORT_MODE=NONE, SUPPORT_MODE=PLSQL, or SUPPORT_MODE=INTERNAL.
- Objects from DBA_GOLDENGATE_NOT_UNIQUE that are marked BAD_COLUMN=Y.
ZDM skips QUEUE_TABLES from reload. When no objects are listed for reload from a specific schema, skip the reload feature or do not include that schema.
1.7.43 PREMIGRATION ADVISOR COMPILATION FAILURES DURING DRY RUN - PRCZ-2103 CAN'T LOCATE JSON/PP.PM
Issue: The PRCZ-2103 CAN'T LOCATE JSON/PP.PM error occurs during the ZDM_PRE_MIGRATION_ADVISOR phase.
Solution: Set the following parameters in the response file:
RUNCPATREMOTELY=TRUE
COPYCPATREPORTTOZDMHOST=FALSE
1.7.44 ORA-23605: INVALID VALUE "" FOR GOLDENGATE PARAMETER PARALLELISM.
Issue: The Oracle GoldenGate Extract startup fails when the source database is Oracle Standard Edition 2, due to the following error:
ORA-23605: INVALID VALUE "" FOR GOLDENGATE PARAMETER PARALLELISM.
Solution: If you do not apply the patch on the source database, then specify GOLDENGATESETTINGS_EXTRACT_PARALLELISM=1 parameter in the ZDM response file. ZDM will set TRANLOGOPTIONS INTEGRATEDPARAMS (parallelism 1) for Oracle GoldenGate Extract.
1.7.45 PRCZ-4002 : failed to execute command "/bin/cp" using the privileged execution plugin "zdmauth" on nodes "dbserver"
Issue: The ZDMCLI RESUME JOB command fails during migration and the ZDM job pauses at the ZDM_CONFIGURE_DG_SRC phase. The error occurs when you update the /etc/hosts file of the source database server with a different IP address or alias for the source database server.
Solution: Ensure that the IP address of the source database server is correctly updated in the /etc/hosts file of the source database server and the ZDM server.
1.7.46 TLS Service is required for Fractional OCPU Services in Oracle Autonomous Database
Issue: For fractional OCPU services in Oracle Autonomous Database, a TLS service alias must be specified in the response file parameter; specifying a non-TLS alias is not supported.
Solution: If the target database is Oracle Autonomous Database on Dedicated Exadata Infrastructure or Oracle Autonomous Database on Exadata Cloud@Customer using fractional OCPU services, then you can specify TP_TLS or LOW_TLS aliases for the TARGETDATABASE_CONNECTIONDETAILS_SERVICENAME parameter.
For more information about specifying the requirement for the service alias for the target database, see Setting Logical Migration Parameters.
1.7.47 Migrating from AIX to EXACC using NFS with Non-readable Dump Fails to CHOWN
Issue: Migrating from AIX to EXACC using NFS with non-readable dump fails to CHOWN in source AIX host.
Solution: Use an alternate option for migrating using NFS which is documented in Migrating to Co-Managed Database Server with NFS Data Transfer Medium.
However, the following scenario is not supported for IBM AIX: if the IDs do not match, Zero Downtime Migration automatically discovers the primary group of the target database user and changes the group of the dump to the primary group of the target database user.
1.7.48 Logical migration with DBUSER plugin must also set RUNCPATREMOTELY
Solution: To perform a logical migration using database user authentication
plug-in as dbuser, you must set value of the
RUNCPATREMOTELY parameter to
TRUE.
See RUNCPATREMOTELY for information about this parameter.
1.7.49 Warnings shown when running zdmservice operations
Issue: A warning similar to the following is shown when running zdmservice
operations start, stop,
status, or deinstall.
Use of uninitialized value in concatenation (.) or string at / [...]
/zdm21.3.1/home/lib/jwcctl_lib.pm line 571.
CRS_ERROR: Invalid data ALWAYS_ON= in _USR_ORA_ENV
Note that the line number in the output may vary.
Solution: This warning message can be ignored. It does not affect the
use of the zdmservice operations or cause any issues for
migration.
1.7.50 Logical Migration Using DBLINK Fails with PRGZ-1177
Issue: "PRGZ-1177 : Database link "dblink_name" is invalid and unusable" error causes failure in a logical migration using a database link in a PDB or multitenant database in version 12.1.0.x.
Solution: See 12c PDB or Multitenant Only: ORA-02085: Database Link "LINK_NAME_HERE" Connects To "TARGET_DB" (Doc ID 2344831.1)
1.7.51 PRGZ-1161 : Predefined database service "TP" does not exist
Issue: PRGZ-1161 : Predefined database service "TP" does not exist for Autonomous Database ocid is a known issue for the fractional OCPU configuration.
If you configure 'Fractional ADB' (a fraction of an OCPU per database instead of an integer OCPU count), this flavor does not provide the standard service aliases such as HIGH.
Solution: Set the response file parameter TARGETDATABASE_CONNECTIONDETAILS_SERVICENAME to LOW_TLS or TP_TLS.
The available services are 'low' or 'low_tls' for Autonomous Data Warehouse with fractional OCPU, and 'tp' or 'tp_tls' for Autonomous Transaction Processing with fractional OCPU.
1.7.52 PRGG-1043 : No heartbeat table entries were found for Oracle GoldenGate Replicat process
Issue: An online logical migration job can report error PRGG-1043: No heartbeat table entries were found for Oracle GoldenGate Replicat process process_name due to one of the following causes:
- Initialization parameter job_queue_processes was set to zero in the source or target database.
Solution: Run the following statements on the database:
show parameter job_queue_processes;
alter system set job_queue_processes=100 scope=both;
exec dbms_scheduler.set_scheduler_attribute('SCHEDULER_DISABLED','FALSE');
- Scheduled job GG_UPDATE_HEARTBEATS is not active in the source database.
- The server hosting the Oracle GoldenGate deployments has a different time zone than the source database.
Solution: First, do one of the following:
- Modify the time zone for the server hosting the Oracle GoldenGate deployments, OR
- Use the web UI for the Oracle GoldenGate deployment to add the Extract parameter TRANLOGOPTIONS SOURCE_OS_TIMEZONE and restart Extract. For example, if the source database time zone is UTC-5, then set parameter TRANLOGOPTIONS SOURCE_OS_TIMEZONE -5. For more information, see TRANLOGOPTIONS in Reference for Oracle GoldenGate.
Then, ensure that the DST_PRIMARY_TT_VERSION property in the source database is up to date.
1.7.53 Restore Fails When Source Uses WALLET_ROOT
Issue: Zero Downtime Migration does not currently handle the migration of the TDE
wallet from the source database to the target when the source database is using the
wallet_root initialization parameter. Without the wallets available
on the target database, the restore phase fails with an error similar to the
following:
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 06/15/2021 07:35:11
ORA-19870: error while restoring backup piece
/rman_PRD1/ZDM/IQPCZDM/c-3999816841-20210614-00
ORA-19913: unable to decrypt backup
Solution: Manually copy the wallet to the target and resume the job.
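The manual wallet copy can be sketched as follows; the wallet paths, target host, and user are assumptions (check the wallet_root parameter on the source first), and the command is echoed for review rather than executed:

```shell
# Build the scp command that copies the TDE wallet to the target keystore
# location. All paths and hosts below are hypothetical placeholders.
SRC_WALLET=/opt/oracle/wallet_root/tde            # from: show parameter wallet_root
TGT=oracle@targetdb01:/opt/oracle/admin/mydb/wallet/
CMD="scp $SRC_WALLET/ewallet.p12 $SRC_WALLET/cwallet.sso $TGT"
echo "$CMD"    # review it, run it, then resume the ZDM job
```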
1.7.54 PRCZ-4026 Thrown During Migration to Oracle Database 19.10 Target
Issue: When attempting to migrate to an Oracle Database 19.10 home at target, the
migration job fails at phase ZDM_FINALIZE_TGT with error PRCZ-4026,
because of Oracle Clusterware (OCW) Bug 31070231.
PRCZ-4026 : Resource ora.db_unique_name.db is already running
on nodes node.
Solution: Apply the Backport Label Request (BLR) for Bug#32646135 to the target 19.10 dbhome to avoid the reported issue. Once the BLR is applied, you can resume the failed migration job to completion.
Precaution: For physical migrations, you can avoid this issue by ensuring that your target database home is not on Oracle Database 19.10.
1.7.55 Environments With Oracle 11.2.0.4 Must Apply Perl Patch
Issue: Before using Zero Downtime Migration, you must apply a PERL patch if your source database environment meets either of the following conditions.
- Clusterware environment with Oracle Grid Infrastructure 11.2.0.4
- Single instance environment with Oracle Database 11.2.0.4
Solution: Download and apply Perl patch version 5.28.2 or later. Ensure that both the source and target Oracle Database 11g home include the patch for BUG 30508206 - UPDATE PERL IN 11.2.0.4 DATABASE ORACLE HOME TO V5.28.2.
1.7.56 ORA-39006 Thrown During Logical Migration to Oracle Autonomous Database on Dedicated Exadata Infrastructure Over Database Link
Issue: When attempting to migrate a database to an Oracle Autonomous Database on Dedicated Exadata Infrastructure target over a database link, the migration job fails with error ORA-39006.
ORA-39006: internal error
Solution: This is a Data Pump issue that is being tracked with Bug 31830685. Do not perform logical migrations over a database link to Oracle Autonomous Database on Dedicated Exadata Infrastructure targets until the bug is fixed and the fix is applied to the Autonomous target database.
1.7.57 Zero Downtime Migration Service Fails To Start After Upgrade
Issue: The following scenario occurs:
- Perform migration jobs with Zero Downtime Migration 19.7
- Response files used in those jobs are removed
- Upgrade to Zero Downtime Migration 21.1
- Attempt to run a migration
The following errors are seen.
CRS_ERROR:TCC-0004: The container was not able to start.
CRS_ERROR:One or more listeners failed to start. Full details will be found in
the appropriate container log file
Context [/rhp] startup failed due to previous errors
sync_start failed with exit code 1.
A similar error is found in the log files located in
zdm_installation_location/base/crsdata/hostname/rhp/logs/.
Caused by: oracle.gridhome.container.GHException: Internal
error:PRGO-3003 : Zero downtime migration (ZDM) template file
/home/jdoe/zdm_mydb.rsp does not exist.
Solution: To recover, manually recreate the response files listed in the log, and place them in the location specified in the log.
1.8 Troubleshooting
If you run into issues, check here in case a solution is published. For each issue, a workaround is provided.
1.8.1.1 INS-42505 Warning Shown During Installation
/stage/user/ZDM_KIT_relnumber>./zdminstall.sh setup
oraclehome=/stage/user/grid oraclebase=/stage/user/base
ziploc=/stage/user/ZDM_KIT_relnumber/rhp_home.zip -zdm
---------------------------------------
Unzipping shiphome to gridhome
---------------------------------------
Unzipping shiphome...
Shiphome unzipped successfully..
---------------------------------------
##### Starting GridHome Software Only Installation #####
---------------------------------------
Launching Oracle Grid Infrastructure Setup Wizard...
[WARNING] [INS-42505] The installer has detected that the Oracle Grid
Infrastructure home software at (/stage/user/grid) is not complete.
CAUSE: Following files are missing:
...
Solution: This warning message can be ignored. It does not affect the installation or cause any issues for migration.
1.8.2.1 General Connectivity Issues
Issue: If connectivity issues occur between the Zero Downtime Migration service host and the source or target environments, or between source and target environments, check the following areas.
Solution: Verify that the SSH configuration file
(/root/.ssh/config) has the appropriate entries:
Host *
ServerAliveInterval 10
ServerAliveCountMax 2
Host ocidb1
HostName 192.0.2.1
IdentityFile ~/.ssh/ocidb1.ppk
User opc
ProxyCommand /usr/bin/nc -X connect -x www-proxy.example.com:80 %h %p
Note that the proxy setup might not be required when you are not using a
proxy server for connectivity. For example, when the source database server is on Oracle
Cloud Infrastructure Classic, you can remove or comment the line starting with
ProxyCommand.
If the source is an Oracle RAC database, then make sure you copy the
~/.ssh/config file to all of the source Oracle RAC servers. The SSH
configuration file refers to the first Oracle RAC server host name, public IP address,
and private key attributes.
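Copying the configuration to every RAC node can be sketched as a loop; the node names are assumptions for illustration, and the scp command is echoed so you can review it before executing:

```shell
# Push ~/.ssh/config to each source RAC node (drop the echo to execute).
nodes="racnode1 racnode2"                          # assumption: your RAC hosts
for node in $nodes; do
  echo scp ~/.ssh/config "opc@$node:~/.ssh/config"
done
```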
1.8.2.2 Communications Link Failure
Issue: If the MySQL server crashes, you will see errors such as the following for ZDM operations:
$ ./zdmcli query job -jobid 6
Exception [EclipseLink-4002] (Eclipse Persistence Services -
2.7.7.qualifier): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: com.mysql.cj.jdbc.exceptions.CommunicationsException:
Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The
driver has not received any packets from the server.
Error Code: 0
Query: ReadAllQuery(referenceClass=JobSchedulerImpl sql="SELECT
JOB_IDENTIFIER, M_ACELIST, ARGUMENTS, ATTRIBUTES, CLIENT_NAME,
COMMAND_PROVIDED, COMPARTMENT, CONTAINER_TYPE, CREATEDATE, CREATOR,
CURRENT_STATUS, DB_OCID, DBNAME, DEPLOYMENT_OCID, DISABLE_JOB_EXECUTION,
ELAPSED_TIME, END_TIME, EXECUTE_PHASES, EXECUTION_TIME, IS_EVAL, IS_PAUSED,
JOB_TYPE, METHOD_NAME, METRICS_LOCATION, OPERATION, PARAMETERS,
PARENT_JOB_ID, PAUSE_AFTER_PHASE, RESULT, PHASE, JOB_SCHEDULER_PHASES,
REGION, REST_USER_NAME, RESULT_LOCATION, SCHEDULED_TIME, SITE, SOURCEDB,
SOURCENODE, SOURCESID, SPARE1, SPARE2, SPARE3, SPARE_A, SPARE_B, SPARE_C,
START_TIME, STOP_AFTER_PHASE, TARGETNODE, JOB_THREAD_ID, UPD_DATE, USER_NAME,
ENTITY_VERSION, CUSTOMER FROM JOBSCHEDULER WHERE (PARENT_JOB_ID = ?)")
Solution: If such communications errors are seen, restart the Zero Downtime Migration service so that the MySQL server is restarted, after which the pending jobs will resume automatically.
Stop the Zero Downtime Migration service:
zdmuser> $ZDM_HOME/bin/zdmservice stop
Start the Zero Downtime Migration service:
zdmuser> $ZDM_HOME/bin/zdmservice start
1.8.2.3 Evaluation Fails in Phase ZDM_GET_TGT_INFO
Issue: During the evaluation (-eval) phase of the migration
process, the evaluation fails in the ZDM_GET_TGT_INFO phase with the
following error for the Oracle RAC instance migration.
Executing phase ZDM_GET_TGT_INFO
Retrieving information from target node "trac11" ...
PRGZ-3130 : failed to establish connection to target listener from nodes [srac11, srac12]
PRCC-1021 : One or more of the submitted commands did not execute successfully.
PRCC-1025 : Command submitted on node srac11 timed out after 15 seconds.
PRCC-1025 : Command submitted on node srac12 timed out after 15 seconds.
Solution:
- Get the SCAN name of the source database and add it to the /etc/hosts file on both target database servers, with the public IP address of the source database server and the source database SCAN name. For example: 192.0.2.3 source-scan
- Get the SCAN name of the target database and add it to the /etc/hosts file on both source database servers, with the public IP address of the target database server and the target database SCAN name. For example: 192.0.2.1 target-scan
Note: This issue, where the SCAN IP address is not added to the /etc/hosts file, might occur because in some cases the SCAN IP address is assigned as a private IP address, so it might not be resolvable.
1.8.2.4 Object Storage Is Not Accessible
Issue: Errors such as the following occur when Object Storage cannot be reached from the source or target database server:
About to connect() to swiftobjectstorage.xx-region-1.oraclecloud.com port 443 (#0)
Trying 192.0.2.1... No route to host
Trying 192.0.2.2... No route to host
Trying 192.0.2.3... No route to host
couldn't connect to host
Closing connection #0
curl: (7) couldn't connect to host
Solution: On the Zero Downtime Migration service host, in the response file
template ($ZDM_HOME/rhp/zdm/template/zdm_template.rsp), set the following Object
Storage Service proxy host and port parameters if a proxy is required to
connect to Object Storage from the source database server. For example:
SRC_OSS_PROXY_HOST=www-proxy-source.example.com
SRC_OSS_PROXY_PORT=80
In the response file template
($ZDM_HOME/rhp/zdm/template/zdm_template.rsp), set the Object
Storage Service proxy host and port parameters listed below, if a proxy is required to
connect to Object Storage from the target database server. For example:
TGT_OSS_PROXY_HOST=www-proxy-target.example.com
TGT_OSS_PROXY_PORT=80
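As a sketch of the setup above, the snippet below writes the four proxy parameters into a scratch response file and verifies that they are all present. The proxy host names are the example values from above, and the scratch file stands in for $ZDM_HOME/rhp/zdm/template/zdm_template.rsp.

```shell
# Sketch: set and then verify the OSS proxy parameters in a response file.
# RSP is a scratch file standing in for $ZDM_HOME/rhp/zdm/template/zdm_template.rsp.
RSP=$(mktemp)
cat >> "$RSP" <<'EOF'
SRC_OSS_PROXY_HOST=www-proxy-source.example.com
SRC_OSS_PROXY_PORT=80
TGT_OSS_PROXY_HOST=www-proxy-target.example.com
TGT_OSS_PROXY_PORT=80
EOF

# Confirm that all four proxy parameters are set before running the migration.
grep -E '^(SRC|TGT)_OSS_PROXY_(HOST|PORT)=' "$RSP"
```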
1.8.2.5 SSH Error "EdDSA provider not supported"
Issue:
The following error messages appear in $ZDM_BASE/crsdata/<zdm service host>/rhp/zdmserver.log.0.
[sshd-SshClient[3051eb49]-nio2-thread-1] [ 2020-04-04 00:26:24.142 GMT ]
[JSChChannel$LogOutputStream.flush:1520] 2020-04-04: WARNING: org.apache.sshd.client.session.C:
globalRequest(ClientConnectionService[ClientSessionImpl[opc@samidb-db/140.238.254.80:22]])[hostkeys-00@openssh.com,
want-reply=false] failed (SshException) to process: EdDSA provider not supported
[sshd-SshClient[3051eb49]-nio2-thread-1] [ 2020-04-04 00:26:24.142 GMT ]
[JSChChannel$LogOutputStream.flush:1520] 2020-04-04: FINE : org.apache.sshd.client.session.C:
globalRequest(ClientConnectionService[ClientSessionImpl[opc@samidb-db/140.238.254.80:22]])[hostkeys-00@openssh.com,
want-reply=false] failure details
org.apache.sshd.common.SshException: EdDSA provider not supported
at org.apache.sshd.common.util.buffer.Buffer.getRawPublicKey(Buffer.java:446)
at org.apache.sshd.common.util.buffer.Buffer.getPublicKey(Buffer.java:420)
at org.apache.sshd.common.global.AbstractOpenSshHostKeysHandler.process(AbstractOpenSshHostKeysHandler.java:71)
at org.apache.sshd.common.global.AbstractOpenSshHostKeysHandler.process(AbstractOpenSshHostKeysHandler.java:38)
at org.apache.sshd.common.session.helpers.AbstractConnectionService.globalRequest(AbstractConnectionService.java:723)
at org.apache.sshd.common.session.helpers.AbstractConnectionService.process(AbstractConnectionService.java:363)
at org.apache.sshd.common.session.helpers.AbstractSession.doHandleMessage(AbstractSession.java:400)
at org.apache.sshd.common.session.helpers.AbstractSession.handleMessage(AbstractSession.java:333)
at org.apache.sshd.common.session.helpers.AbstractSession.decode(AbstractSession.java:1097)
at org.apache.sshd.common.session.helpers.AbstractSession.messageReceived(AbstractSession.java:294)
at org.apache.sshd.common.session.helpers.AbstractSessionIoHandler.messageReceived(AbstractSessionIoHandler.java:63)
at org.apache.sshd.common.io.nio2.Nio2Session.handleReadCycleCompletion(Nio2Session.java:357)
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:335)
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:332)
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler.lambda$completed$0(Nio2CompletionHandler.java:38)
at java.security.AccessController.doPrivileged(Native Method)
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler.completed(Nio2CompletionHandler.java:37)
at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:126)
at sun.nio.ch.Invoker$2.run(Invoker.java:218)
at sun.nio.ch.AsynchronousChannelGroupImpl$1.run(AsynchronousChannelGroupImpl.java:112)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.security.NoSuchAlgorithmException: EdDSA provider not supported
at org.apache.sshd.common.util.security.SecurityUtils.generateEDDSAPublicKey(SecurityUtils.java:596)
at org.apache.sshd.common.util.buffer.keys.ED25519BufferPublicKeyParser.getRawPublicKey(ED25519BufferPublicKeyParser.java:45)
at org.apache.sshd.common.util.buffer.keys.BufferPublicKeyParser$2.getRawPublicKey(BufferPublicKeyParser.java:98)
at org.apache.sshd.common.util.buffer.Buffer.getRawPublicKey(Buffer.java:444)
... 22 more
[sshd-SshClient[3051eb49]-nio2-thread-1] [ 2020-04-04 00:26:24.142 GMT ]
[JSChChannel$LogOutputStream.flush:1520] 2020-04-04: FINE : org.apache.sshd.client.session.C:
sendGlobalResponse(ClientConnectionService[ClientSessionImpl[opc@samidb-db/140.238.254.80:22]])[hostkeys-00@openssh.com]
result=ReplyFailure, want-reply=false
[sshd-SshClient[3051eb49]-nio2-thread-2] [ 2020-04-04 00:26:24.182 GMT ]
[JSChChannel$LogOutputStream.flush:1520] 2020-04-04: FINE : org.apache.sshd.common.io.nio2.N:
handleReadCycleCompletion(Nio2Session[local=/192.168.0.2:41198, remote=samidb-db/140.238.254.80:22])
read 52 bytes
Solution: Zero Downtime Migration uses the RSA key format.
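Because Zero Downtime Migration uses RSA keys, a key pair in the classic PEM RSA format can be generated as sketched below. The key path is a temporary example value; in practice the key would live under the zdmuser's ~/.ssh directory.

```shell
# Sketch: generate an RSA key pair in PEM format (EdDSA/ed25519 keys trigger
# the "EdDSA provider not supported" error shown above).
KEY=$(mktemp -u)    # temporary key path for demonstration
ssh-keygen -t rsa -b 2048 -m PEM -N "" -f "$KEY" -q

# The private key header should read "BEGIN RSA PRIVATE KEY",
# not "BEGIN OPENSSH PRIVATE KEY".
head -1 "$KEY"
```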
1.8.3.1 Transparent Data Encryption General Information
Depending on your source database release, Transparent Data Encryption (TDE) wallet configuration may be required.
- Oracle Database 12c Release 2 and later
For Oracle Database 12c Release 2 and later releases, TDE wallet configuration is mandatory and must be enabled on the source database before migration begins.
If TDE is not enabled, the database migration will fail.
Upon restore, the database tablespaces are encrypted using the wallet.
- Oracle Database 12c Release 1 and earlier
On Oracle Database 12c Release 1 and Oracle Database 11g Release 2 (11.2.0.4), TDE configuration is not required.
For information about the behavior of TDE in an Oracle Cloud environment, see My Oracle Support document Oracle Database Tablespace Encryption Behavior in Oracle Cloud (Doc ID 2359020.1).
1.8.3.2 Job Fails in Phase ZDM_SETUP_TDE_TGT
Issue: The phase ZDM_SETUP_TDE_TGT fails with one of the
following errors.
Executing phase ZDM_SETUP_TDE_TGT
Setting up Oracle Transparent Data Encryption (TDE) keystore on the target node oci1121 ...
oci1121: <ERR_FILE><Facility>PRGZ</Facility><ID>ZDM_KEYSTORE_NOT_SETUP_ERR</ID><ARGS><ARG>oci112_phx1z3</ARG></ARGS></ERR_FILE>
PRGO-3007 : failed to migrate database "db11204" with zero downtime
PRCZ-4002 : failed to execute command "/u01/app/18.0.0.0/grid/perl/bin/perl" using the privileged execution plugin "zdmauth" on nodes "oci1121"
PRCZ-2103 : Failed to execute command "/u01/app/18.0.0.0/grid/perl/bin/perl" on node "oci1121" as user "root". Detailed error:
<ERR_FILE><Facility>PRGZ</Facility><ID>ZDM_KEYSTORE_NOT_SETUP_ERR</ID><ARGS><ARG>oci112_phx1z3</ARG></ARGS></ERR_FILE>
Error at target server in /tmp/zdm749527725/zdm/log/mZDM_oss_standby_setup_tde_tgt_71939.log
2019-06-13 10:00:20: Keystore location /opt/oracle/dcs/commonstore/wallets/tde/$ORACLE_UNQNAME does not exists for database 'oci112_region'
2019-06-13 10:00:20: Reporting error:
<ERR_FILE><Facility>PRGZ</Facility><ID>ZDM_KEYSTORE_NOT_SETUP_ERR</ID><ARGS><ARG>oci112_region</ARG></ARGS></ERR_FILE>
Solution:
- Oracle Database 12c Release 1 and later
On the target database, make sure that $ORACLE_HOME/network/admin/sqlnet.ora points to the correct location of the TDE wallet. For example:
ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/opt/oracle/dcs/commonstore/wallets/tde/$ORACLE_UNQNAME)))
- Oracle Database 11g Release 2 (11.2.0.4) only
On the target database, make sure that $ORACLE_HOME/network/admin/sqlnet.ora points to the correct location of the TDE wallet, and replace the $ORACLE_UNQNAME variable with the value obtained from the SHOW PARAMETER DB_UNIQUE_NAME SQL command.
For example, run
SQL> show parameter db_unique_name
db_unique_name    string    oci112_region
and replace
ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/opt/oracle/dcs/commonstore/wallets/tde/$ORACLE_UNQNAME)))
with
ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/opt/oracle/dcs/commonstore/wallets/tde/oci112_region)))
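For the 11.2.0.4 case, replacing the $ORACLE_UNQNAME variable can be scripted as sketched below. The snippet runs against a scratch copy of sqlnet.ora, and oci112_region is the example db_unique_name value from above.

```shell
# Sketch: substitute the literal $ORACLE_UNQNAME in sqlnet.ora with the db_unique_name.
SQLNET=$(mktemp)    # scratch copy standing in for $ORACLE_HOME/network/admin/sqlnet.ora
echo 'ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/opt/oracle/dcs/commonstore/wallets/tde/$ORACLE_UNQNAME)))' > "$SQLNET"

DB_UNIQUE_NAME="oci112_region"    # value from: SQL> show parameter db_unique_name
sed -i "s/\$ORACLE_UNQNAME/$DB_UNIQUE_NAME/" "$SQLNET"
cat "$SQLNET"
```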
1.8.4.1 Backup Fails with ORA-19836
Issue: Source database full backup fails with one of the following errors.
</ERRLINE><ERRLINE>ORA-19836: cannot use passphrase encryption for this backup
</ERRLINE><ERRLINE>RMAN-03009: failure of backup command on C8 channel at 04/29/2019
20:42:16</ERRLINE><ERRLINE>ORA-19836: cannot use passphrase encryption for this backup
</ERRLINE><ERRLINE>RMAN-03009: continuing other job steps, job failed will not be
re-run
Solution 1: This issue can occur if you specify the -sourcedb value in the wrong case. For example, if the value obtained from the SQL command SHOW PARAMETER DB_UNIQUE_NAME is zdmsdb, then you need to specify it as zdmsdb in lower case, and not as ZDMSDB in upper case, as shown in the following example.
zdmuser> $ZDM_HOME/bin/zdmcli migrate database -sourcedb zdmsdb -sourcenode ocicdb1 -srcroot
-targetnode ocidb1 -targethome /u01/app/oracle/product/12.1.0.2/dbhome_1
-backupuser backup_user@example.com -rsp /u01/app/zdmhome/rhp/zdm/template/zdm_template_zdmsdb.rsp
-tgtauth zdmauth -tgtarg1 user:opc
-tgtarg2 identity_file:/home/zdmuser/.ssh/zdm_service_host.ppk
-tgtarg3 sudo_location:/usr/bin/sudo
Solution 2: For Oracle Database 12c Release 1 and later, ensure that $ORACLE_HOME/network/admin/sqlnet.ora points to the correct location of the TDE wallet, as shown here.
ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/opt/oracle/dcs/commonstore/wallets/tde/$ORACLE_UNQNAME)))
For Oracle Database 11g Release 2 (11.2.0.4) only, ensure that $ORACLE_HOME/network/admin/sqlnet.ora points to the correct location of the TDE wallet as shown below, and replace the variable $ORACLE_UNQNAME with the value obtained with the SQL statement SHOW PARAMETER DB_UNIQUE_NAME.
ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/opt/oracle/dcs/commonstore/wallets/tde/$ORACLE_UNQNAME)))
For example:
SQL> show parameter db_unique_name
db_unique_name string oci112_region
ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/opt/oracle/dcs/commonstore/wallets/tde/oci112_region)))
Solution 3: Run the following query and make sure that the wallet status is
OPEN.
SQL> select * from v$encryption_wallet
WRL_TYPE
-------------
WRL_PARAMETER
-------------
STATUS
-------------
file
/opt/oracle/dcs/commonstore/wallets/tde/abc_test
OPEN
1.8.4.2 Backup Fails with ORA-19914 and ORA-28365
Issue: Source database full backup fails with the following errors.
channel ORA_SBT_TAPE_3: backup set complete, elapsed time: 00:00:15
channel ORA_SBT_TAPE_3: starting compressed full datafile backup set
channel ORA_SBT_TAPE_3: specifying datafile(s) in backup set
input datafile file number=00005 name=+DATA/ODA122/7312FA75F2B202E5E053050011AC5977/DATAFILE/system.382.1003858429
channel ORA_SBT_TAPE_3: starting piece 1 at 25-MAR-19
RMAN-03009: failure of backup command on ORA_SBT_TAPE_3 channel at 03/25/2019 19:09:30
ORA-19914: unable to encrypt backup
ORA-28365: wallet is not open
continuing other job steps, job failed will not be re-run
channel ORA_SBT_TAPE_3: starting compressed full datafile backup set
channel ORA_SBT_TAPE_3: specifying datafile(s) in backup set
Solution: Ensure that the wallet is opened in the database, and in case of CDB, ensure that the wallet is opened in the CDB, all PDBs, and PDB$SEED. See Setting Up the Transparent Data Encryption Wallet in the Zero Downtime Migration documentation for information about setting up TDE.
1.8.4.3 Either the Bucket Named Object Storage Bucket Name Does Not Exist in the Namespace Namespace or You Are Not Authorized to Access It
See Oracle Support Knowledge Base article "Either the Bucket Named '<Object Storage Bucket Name>' Does not Exist in the Namespace '<Namespace>' or You are not Authorized to Access it (Doc ID 2605518.1)" for the description and workarounds for this issue.
1.8.5.1 Restore Database Fails With Assert [KCBTSE_ENCDEC_TBSBLK_1]
Issue: Due to RDBMS bugs 31048741, 32697431, and 32117834, you may see assert [kcbtse_encdec_tbsblk_1] in the alert log during the restore phase of a physical migration.
Solution: Apply patches for RDBMS bugs 31048741 and 32697431 to any Oracle Database 19c migration target prior to the 19.13 update.
1.8.5.2 Restore Database Fails With AUTOBACKUP does not contain an SPFILE
Issue: During the execution of phase ZDT_CLONE_TGT,
restore database fails with the following error.
channel C1: looking for AUTOBACKUP on day: 20200427
channel C1: AUTOBACKUP found: c-1482198272-20200427-12
channel C1: restoring spfile from AUTOBACKUP c-1482198272-20200427-12
channel C1: the AUTOBACKUP does not contain an SPFILE
The source database is running using an init.ora file, but during the restore target phase, the database is trying to restore the server parameter file (SPFILE) from autobackup, therefore it fails.
Solution: Start the source database using an SPFILE and resubmit the migration job.
1.8.5.3 Restore Database Fails With ORA-01565
Issue: During the execution of phase ZDT_CLONE_TGT,
restore database fails with the following error.
</ERRLINE><ERRLINE>With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP
</ERRLINE><ERRLINE>and Real Application Testing options
</ERRLINE><ERRLINE>
</ERRLINE><ERRLINE>CREATE PFILE='/tmp/zdm833428275/zdm/PFILE/zdm_tgt_mclone_nrt139.pfile' FROM SPFILE
</ERRLINE><ERRLINE>*
</ERRLINE><ERRLINE>ERROR at line 1:
</ERRLINE><ERRLINE>ORA-01565: error in identifying file '?/dbs/spfile@.ora'
</ERRLINE><ERRLINE>ORA-27037: unable to obtain file status
</ERRLINE><ERRLINE>Linux-x86_64 Error: 2: No such file or directory
</ERRLINE><ERRLINE>Additional information: 3
</ERRLINE><ERRLINE>
</ERRLINE><ERRLINE>
</ERRLINE><ERRLINE>Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
</ERRLINE><ERRLINE>With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP
Solution: Start the target database using an SPFILE and resume the migration job.
1.8.6.1 Troubleshooting Post Migration Automatic Backup Failures
Issue: Post migration, on the target database, Automatic Backup might fail.
You can verify the failure using the console in Bare Metal, VM and Exadata > DB Systems > DB System Details > Database Details > Backups.
Solution: Get the RMAN configuration settings from one of the following places.
- Zero Downtime Migration documentation in Target Database Prerequisites, if captured
- The log files at /opt/oracle/dcs/log/hostname/rman/bkup/db_unique_name/
- The file /tmp/zdmXXX/zdm/zdm_TDBNAME_rman.dat
For example, using the second option, you can get the RMAN configuration
settings from
/opt/oracle/dcs/log/ocidb1/rman/bkup/ocidb1_abc127/rman_configure*.log,
then reset any changed RMAN configuration settings for the target database to ensure
that automatic backup works without any issues.
If this workaround does not help, then debug further by getting the RMAN job ID by
running the DBCLI command, list-jobs, and describe the job details for
more error details by running the DBCLI command describe-job -i JOB
ID from the database server as the root user.
For example, during the test, the following settings were modified to make Automatic Backup work.
rman target /
Recovery Manager: Release 12.2.0.1.0 - Production on Mon Jul 8 11:00:18 2019
Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
connected to target database: ORCL (DBID=1540292788)
RMAN> show all;
using target database control file instead of recovery catalog
RMAN configuration parameters for database with db_unique_name OCIDB1_ABC127 are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 30 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF;
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE SBT_TAPE TO '%F'; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET;
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE SBT_TAPE TO 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE SBT_TAPE TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE CHANNEL DEVICE TYPE DISK MAXPIECESIZE 2 G;
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' MAXPIECESIZE 2 G FORMAT '%d_%I_%U_%T_%t' PARMS
'SBT_LIBRARY=/opt/oracle/dcs/commonstore/pkgrepos/oss/odbcs/libopc.so ENV=(OPC_PFILE=/opt/oracle/dcs/commonstore/objectstore/opc_pfile/1245080042/opc_OCIDB1_ABC127.ora)';
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE ON;
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'MEDIUM' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE;
CONFIGURE RMAN OUTPUT TO KEEP FOR 7 DAYS; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO 'SBT_TAPE';
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+RECO/OCIDB1_ABC127/controlfile/snapcf_ocidb1_abc127.f';
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK clear;
RMAN>
1.8.6.2 Post Migration Automatic Backup Fails With DCS-10045
Issue: Post migration, Automatic Backup fails with the following error for non-TDE enabled migrated Oracle Database releases 11.2.0.4 and 12.1.0.2.
DCS-10045: Validation error encountered: Backup password is mandatory to take OSS backup for non-tde enabled database...
You can verify this error by getting the RMAN job ID by running the DBCLI command list-jobs, and describe the job details to get the error details by running the DBCLI command describe-job -i JOB ID from the database server as the root user.
Solution:
- Find the TDE wallet location.
The Oracle Cloud Infrastructure provisioned database instance will have the following entry in sqlnet.ora.
ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/opt/oracle/dcs/commonstore/wallets/tde/$ORACLE_UNQNAME)))
- Remove the cwallet.sso file from the wallet location.
For example, /opt/oracle/dcs/commonstore/wallets/tde/$ORACLE_UNQNAME.
- For Oracle Database 11g Release 2, do the following steps.
- Connect to the database using SQL*Plus as sysdba and verify the current wallet location.
SQL> select * from v$encryption_wallet;
WRL_TYPE        WRL_PARAMETER                                               STATUS
file            /opt/oracle/dcs/commonstore/wallets/tde/ocise112_region     OPEN
- Close the wallet in the database.
SQL> alter system set wallet close;
- Open the wallet using the wallet password.
SQL> alter system SET WALLET open IDENTIFIED BY "walletpassword";
- Set the master encryption key.
SQL> alter system set encryption key identified by "walletpassword";
- Recreate the autologin SSO file.
/home/oracle>orapki wallet create -wallet /opt/oracle/dcs/commonstore/wallets/tde/$ORACLE_UNQNAME -auto_login
Oracle PKI Tool : Version 11.2.0.4.0 - Production
Copyright (c) 2004, 2013, Oracle and/or its affiliates. All rights reserved.
Enter wallet password: #
- Retry Automatic Backup.
- For Oracle Database 12c, do the following steps.
- Connect to the database using SQL*Plus as sysdba and verify the current wallet location and status.
SQL> SELECT wrl_parameter, status, wallet_type FROM v$encryption_wallet;
WRL_PARAMETER                                               STATUS               WALLET_TYPE
/opt/oracle/dcs/commonstore/wallets/tde/ocise112_region     OPEN_NO_MASTER_KEY   OPEN
If the STATUS column contains a value of OPEN_NO_MASTER_KEY, you must create and activate the master encryption key.
- Close the wallet in the database.
SQL> alter system set wallet close;
- Open the wallet using the password.
SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE open IDENTIFIED BY "walletpassword" CONTAINER=all;
- Set the master encryption key.
SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY "walletpassword" with backup;
Log in to each PDB and run:
SQL> ALTER SESSION SET CONTAINER = PDB_NAME;
SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY "walletpassword" with backup;
- Create the auto login keystore.
SQL> ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE FROM KEYSTORE 'path to wallet directory' IDENTIFIED BY "walletpassword";
- Retry Automatic Backup.
1.8.6.3 Post Migration Automatic Backup Fails With DCS-10096
Issue: Post migration, Automatic Backup fails with the following error.
DCS-10096:RMAN configuration 'Retention policy' must be configured as 'configure retention policy to recovery window of 30 days'
You can verify this error by getting the RMAN job ID by running the DBCLI command list-jobs, and describe the job details for more error details by running the DBCLI command describe-job -i JOB ID from the database server as the root user.
Solution: Log in to the RMAN prompt and configure the retention policy.
[oracle@racoci1 ~]$ rman target /
Recovery Manager: Release 12.2.0.1.0 - Production on Wed Jul 17 11:04:35 2019
Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
connected to target database: SIODA (DBID=2489657199)
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 30 DAYS;
old RMAN configuration parameters:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
new RMAN configuration parameters:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 30 DAYS;
new RMAN configuration parameters are successfully stored
Retry Automatic Backup.
1.8.7.1 Migration from Existing Data Guard Standby Fails
Issue: Using an existing standby, Zero Downtime Migration job fails when Data Guard broker configuration uses TNS aliases.
In a Data Guard broker configuration, every database needs to be reachable from every other database in the configuration. When Zero Downtime Migration creates a new standby at the target and adds it to the existing Data Guard broker configuration, Zero Downtime Migration adds the target with a connect identifier specified in the form of a connect string. Zero Downtime Migration does not update the tnsnames.ora on the target with the other databases in the configuration. Because the tnsnames.ora entries are missing, other databases may not be reachable if the configuration was created with TNS aliases.
Solution: Ensure that all TNS aliases in the broker configuration corresponding to the primary and any existing standby databases are defined in the target tnsnames.ora file.
Alternatively, ensure that the broker configuration is made up of connect strings instead of TNS aliases. The connect identifier can be displayed using the command below:
show database db_name dgconnectidentifier;
If the connect identifier is a TNS alias, it can be updated using the commands below, specifying the connect string in the form of an EZConnect string.
For cluster databases:
edit database db_name set property dgconnectidentifier='scan_name:scan_port/service_name';
For non-cluster databases:
edit database db_name set property dgconnectidentifier='listener_host:listener_port/service_name';
The TNS aliases are not required once the connect identifiers are specified as connect strings that are reachable from every database instance in the broker configuration. This is because the broker needs to be able to manage the primary/standby relationship in case any standby switches roles and becomes the primary.
1.8.7.2 PDB in Failed State After Migration to ExaCS or ExaCC
Issue: ExaCS and ExaCC recently added functionality to display the PDBs of the CDB. When the target database is provisioned with the same PDB name as the source before the migration, then after the migration, the PDB names report status as failed.
This is because when the target is provisioned the PDBID of the PDB is different. During the migration, Zero Downtime Migration drops the target and recreates it. So if the PDB names were the same but now have different internal PDBIDs, the control plane reports the PDB as failed.
Solutions: To avoid this problem, when provisioning the target:
- If the source is non-CDB, provision a non-CDB target through dbaascli
- If the source is a CDB with PDBs, provision the target without any PDBs
If the PDB is reported in the failed state post migration, follow My Oracle Support document Pluggable Database (PDB) Resource Shows Failed Status In Cloud Console while it is Available in VM (Doc ID 2855062.1).
1.8.7.3 Oracle GoldenGate Hub Certificate Known Issues
Issue: Oracle Zero Downtime Migration leverages Oracle GoldenGate for its logical online migration work flow; an Oracle GoldenGate hub is set up on OCI compute for this purpose.
The Oracle GoldenGate hub NginX Reverse Proxy uses a self-signed certificate, which causes the following error when the ZDM server makes a REST API call:
SunCertPathBuilderException: unable to find valid certification path to requested target
Solution: See My Oracle Support document Zero Downtime Migration - GoldenGate Hub Certificate Known Issues (Doc ID 2768483.1)
1.8.7.4 Source Discovery Does Not Find 'cut' in Default Location
Issue: Discovery at the source database server fails to find
cut in the standard location.
The source database deployment's standard cut location is /bin/cut. If cut is not in that location, Zero Downtime Migration cannot discover the source database information correctly, and the migration fails in its initial phases.
Solution: To resolve the issue, ensure that cut is
installed in the standard /bin/cut path or create a symbolic link to
the installed location, for example:
ln -sf <installed_location_of_the_cut> /bin/cut
1.8.7.5 Evaluation Fails in Phase ZDM_GET_SRC_INFO
Issue: During the evaluation (-eval) phase of the migration
process, the evaluation fails in the ZDM_GET_SRC_INFO phase with the
following error for the source single instance deployed without Grid infrastructure.
Executing phase ZDM_GET_SRC_INFO
retrieving information about database "zdmsidb" ...
PRCF-2056 : The copy operation failed on node: "zdmsidb".
Details: {1}
PRCZ-4002 : failed to execute command "/bin/cp" using the privileged
execution plugin "zdmauth" on nodes "zdmsidb"
scp: /etc/oratab: No such file or directory
Solution: Make an entry for the database in the /etc/oratab file, in the form db_name:$ORACLE_HOME:N, as shown in this example.
zdmsidb:/u01/app/oracle/product/12.2.0.1/dbhome_1:N
1.8.7.6 Migration Evaluation Failure with Java Exception Invalid Key Format
Issue: The following conditions are seen:
- The Zero Downtime Migration migration -eval command fails with the following error.
Result file path contents: "/u01/app/zdmbase/chkbase/scheduled/job-19-2019-12-02-03:46:19.log"
zdm-server.ocitoolingsn.ocitooling.oraclevcn.com: Processing response file ...
null
- The file $ZDM_BASE/<zdm service host>/rhp/rhpserver.log.0 contains the following entry.
rhpserver.log.7:[pool-58-thread-1] [ 2019-12-02 02:08:15.178 GMT ] [JSChChannel.getKeyPair:1603]
Exception : java.security.spec.InvalidKeySpecException: java.security.InvalidKeyException: invalid key format
- The Zero Downtime Migration installed user (for example, zdmuser) private key (id_rsa) file has the following entries.
-----BEGIN OPENSSH PRIVATE KEY-----
MIIEogIBAAKCAQEAuPcjftR6vC98fAbU4FhYVKPqc0CSgibtMSouo1DtQ06ROPN0
XpIEL4r8nGp+c5GSDONyhf0hiltBzg0fyqyurSw3XfGJq2Q6EQ61aL95Rt9CZh6b
JSUwc69T4rHjvRnK824k4UpfUIqafOXb2mRgGVUkldo4yy+pLoGq1GwbsIYbS4tk
uaYPKZ3A3H9ZA7MtZ5M0sNqnk/4Qy0d8VONWozxOLFC2A8zbbe7GdQw9khVqDb/x
-----END OPENSSH PRIVATE KEY-----
Solution: The authentication key pair (private and public key) was not generated using the ssh-keygen utility, so you must generate the authentication key pair using the steps in Generating a Private SSH Key Without a Passphrase.
After generating authentication key pairs, the private key file content looks like the following.
-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEAuPcjftR6vC98fAbU4FhYVKPqc0CSgibtMSouo1DtQ06ROPN0
XpIEL4r8nGp+c5GSDONyhf0hiltBzg0fyqyurSw3XfGJq2Q6EQ61aL95Rt9CZh6b
JSUwc69T4rHjvRnK824k4UpfUIqafOXb2mRgGVUkldo4yy+pLoGq1GwbsIYbS4tk
uaYPKZ3A3H9ZA7MtZ5M0sNqnk/4Qy0d8VONWozxOLFC2A8zbbe7GdQw9khVqDb/x
-----END RSA PRIVATE KEY-----
Set up connectivity with the newly generated authentication key pair and resume the migration job.
1.8.7.7 Migration Evaluation Fails with Error PRCG-1022
Issue: The migration evaluation command fails with the following errors.
$ZDM_HOME/bin/zdmcli migrate database -sourcedb zdmsdb -sourcenode ocicdb1
-srcauth zdmauth -srcarg1 user:opc
-srcarg2 identity_file:/home/zdmuser/.ssh/zdm_service_host.ppk
-srcarg3 sudo_location:/usr/bin/sudo -targetnode ocidb1 -backupuser backup_user@example.com
-rsp /u01/app/zdmhome/rhp/zdm/template/zdm_template_zdmsdb.rsp -tgtauth zdmauth
-tgtarg1 user:opc -tgtarg2 identity_file:/home/zdmuser/.ssh/zdm_service_host.ppk
-tgtarg3 sudo_location:/usr/bin/sudo -eval
PRCG-1238 : failed to execute the Rapid Home Provisioning action for command 'migrate database'
PRCG-1022 : failed to connect to the Rapid Home Provisioning daemon for cluster anandutest
Failed to retrieve RMIServer stub: javax.naming.ServiceUnavailableException
[Root exception is java.rmi.ConnectException: Connection refused to host:
anandutest; nested exception is: java.net.ConnectException: Connection refused (Connection refused)]
Solution: Start the Zero Downtime Migration service using the $ZDM_HOME/bin/zdmservice start command, then run any ZDMCLI commands.
1.8.7.8 ORA-01031 on Full Export from an Oracle 12.1 Source
Issue: When performing a full database export with Export Data Pump from an Oracle Database 12c (12.1) source database, the following errors occur:
05-AUG-21 10:36:12.483: ORA-31693: Table data object
"SYS"."TABLE" failed to load/unload and is being skipped due
to error: ORA-01031: insufficient privileges
Solution: See My Oracle Support document EXPDP - ORA-31693 ORA-01031 (Insufficient Privileges) On Some Tables When Exporting from 12cR1 (Doc ID 1676411.1)
1.8.7.9 Data Transfer Medium COPY Issues
Issue: Migrating data using logical migration with
DATA_TRANSFER_MEDIUM=COPY set in the Zero Downtime Migration
response file fails.
Solution: When you specify DATA_TRANSFER_MEDIUM=COPY you must also specify the following DUMPTRANSFERDETAILS parameters.
DUMPTRANSFERDETAILS_TRANSFERTARGET_DUMPDIRPATH=<Target path to transfer the dumps to>
DUMPTRANSFERDETAILS_TRANSFERTARGET_HOST=<Target DB server or target side transfer node>
DUMPTRANSFERDETAILS_TRANSFERTARGET_USER=<User having write access to the specified path>
DUMPTRANSFERDETAILS_TRANSFERTARGET_USERKEY=<User authentication key path on the ZDM node>
1.8.7.10 Unable to Resume a Migration Job
Issue: Zero Downtime Migration writes the
source and target log files to the /tmp/zdm-unique id directory in the
respective source and target database servers.
If you pause a migration job and then resume the job after several days (sometimes 15-20 days), the /tmp/zdm-unique id directory might be deleted or purged as part of a clean up or server reboot that also cleans up /tmp.
Solution: After pausing a migration job, back up the
/tmp/zdm-unique id directory. Before resuming the migration job,
check the /tmp directory for /zdm-unique id, and if it
is missing, restore the directory and its contents with your backup.
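The backup-and-restore flow above can be sketched with tar. The directory name below is a stand-in for the real /tmp/zdm-unique id directory, and the state file is a hypothetical placeholder for the actual job logs.

```shell
# Sketch: back up the ZDM work directory after pausing, restore it before resuming.
WORKDIR=$(mktemp -d /tmp/zdm-XXXXXX)          # stands in for /tmp/zdm-<unique id>
echo "phase state" > "$WORKDIR/state.log"     # placeholder for the real job logs

# After pausing: archive the directory somewhere outside /tmp.
BACKUP="$(mktemp -u).tar.gz"
tar -czf "$BACKUP" -C /tmp "$(basename "$WORKDIR")"

# Before resuming: if /tmp was cleaned, restore the directory from the archive.
rm -rf "$WORKDIR"
tar -xzf "$BACKUP" -C /tmp
cat "$WORKDIR/state.log"
```

Keeping the archive outside /tmp matters here, since the same cleanup that removes the work directory would also remove a backup stored in /tmp.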
1.8.7.11 Migration Job Fails at ZDM_GET_SRC_INFO
Issue: A migration job fails with the following error.
[opc@zdm-server rhp]$ cat /home/opc/zdm_base/chkbase/scheduled/job-34-2021-01-23-14:10:32.log
zdm-server: 2021-01-23T14:10:32.155Z : Processing response file ...
zdm-server: 2021-01-23T14:10:32.262Z : Starting zero downtime migrate operation ...
PRCZ-4002 : failed to execute command "/bin/cp" using the privileged execution plugin "zdmauth" on nodes "PROD.compute-usconnectoneb95657.oraclecloud.internal"
Solution: You must set up SSH connectivity without a passphrase for the oracle user.
1.8.7.12 Migration Job Fails at ZDM_SWITCHOVER_SRC
Issue: A migration job fails at ZDM_SWITCHOVER_SRC
phase.
Solutions:
-
Ensure that there is connectivity from PRIMARY database nodes to STANDBY database nodes so that the redo logs are shipped as expected.
-
A job will fail at ZDM_SWITCHOVER_SRC if the recovery process (MRP0) is not running at the target. If MRP0 is not running at the Oracle Cloud Database standby instance, correct the reason for its failure, and then start the process manually at the standby instance before resuming the migration job.
1.9 Additional Information for Migrating to Oracle Exadata Database Service
Read the following for general information, considerations, and links to more information about using Zero Downtime Migration to migrate your database to Oracle Exadata Database Service on Dedicated Infrastructure.
1.9.1 Considerations for Migrating to Oracle Exadata Database Service on Dedicated Infrastructure
For this release of Zero Downtime Migration, be aware of the following considerations.
- If the source database is release 18c, then the target home should be at release 18.6 or later to avoid issues such as Bug 29445548 Opening Database In Cloud Environment Fails With ORA-600.
- If a backup was performed when one of the configured instances was down, you will encounter Bug 29863717 - DUPLICATING SOURCE DATABASE FAILED BECAUSE INSTANCE 1 WAS DOWN.
- The TDE keystore password must be set in the credential wallet. To set the password as part of the Zero Downtime Migration workflow, specify the -tdekeystorewallet tde_wallet_path or -tdekeystorepasswd argument, irrespective of whether the wallet uses AUTOLOGIN or PASSWORD. In either case the password is stored in the credential wallet. If the -tdekeystorepasswd argument is not supplied, then Zero Downtime Migration skips setting the tde_ks_passwd key in the credential wallet, and no error is thrown.
- The target environment must have the latest DBaaS Tooling RPM with db_unique_name change support installed.
- Provision a target database from the console without enabling auto-backups. In the Configure database backups section, do not select the Enable automatic backups option.
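The TDE keystore consideration above can be illustrated on the zdmcli command line. This is a hedged sketch: only the -tdekeystorewallet/-tdekeystorepasswd options come from the considerations; the database name, response file path, and the rest of the invocation are illustrative placeholders, not required syntax.

```shell
# Hedged sketch: passing the TDE keystore password option to zdmcli.
# Everything except -tdekeystorepasswd is an illustrative placeholder.
ZDM_MIGRATE_ARGS="migrate database \
  -sourcedb ZDM122 \
  -rsp /home/zdmuser/migration.rsp \
  -tdekeystorepasswd"

# Usage on the Zero Downtime Migration service host:
#   zdmcli $ZDM_MIGRATE_ARGS
# Per the note above, specify the argument irrespective of whether the wallet
# uses AUTOLOGIN or PASSWORD; the password is stored in the credential wallet.
```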
1.9.2 Oracle Exadata Database Service on Dedicated Infrastructure Database Registration
Post migration, register the Oracle Exadata Database Service on Dedicated Infrastructure database, and make sure it meets all of the requirements.
Run the following commands on the Oracle Exadata Database Service on Dedicated Infrastructure database server as the root user.
/root>dbaascli registerdb prereqs --dbname db_name --db_unique_name db_unique_name
/root>dbaascli registerdb begin --dbname db_name --db_unique_name db_unique_name
For example:
/root>dbaascli registerdb prereqs --dbname ZDM122 --db_unique_name ZDM122_phx16n
DBAAS CLI version 18.2.3.2.0
Executing command registerdb prereqs --db_unique_name ZDM122_phx16n
INFO: Logfile Location: /var/opt/oracle/log/ZDM122/registerdb/registerdb_2019-08-14_05:35:31.157978280334.log
INFO: Prereqs completed successfully
/root>
/root>dbaascli registerdb begin --dbname ZDM122 --db_unique_name ZDM122_phx16n
DBAAS CLI version 18.2.3.2.0
Executing command registerdb begin --db_unique_name ZDM122_phx16n
Logfile Location: /var/opt/oracle/log/ZDM122/registerdb/registerdb_2019-08-14_05:45:27.264851309165.log
Running prereqs
DBAAS CLI version 18.2.3.2.0
Executing command registerdb prereqs --db_unique_name ZDM122_phx16n
INFO: Logfile Location: /var/opt/oracle/log/ZDM122/registerdb/registerdb_2019-08-14_05:45:29.000432309894.log
INFO: Prereqs completed successfully
Prereqs completed
Running OCDE .. will take time ..
OCDE Completed successfully.
INFO: Database ZDM122 registered as Cloud database
/root>
1.9.3 Oracle Exadata Database Service on Dedicated Infrastructure Automatic Backup Issues
Check the backup configuration before you enable automatic backup from the console. You
can use the get config command as shown in the first step below. You
should see bkup_oss=no before you enable automatic backup.
You might see the error message in the console, "A backup configuration exists for this database. You must remove the existing configuration to use Oracle Cloud Infrastructure's managed backup feature."
To fix this error, remove the existing configuration.
First, make sure the automatic backup is disabled from the UI, then follow these steps to remove the existing backup configuration.
- Generate a backup configuration file.
/var/opt/oracle/bkup_api/bkup_api get config --file=/tmp/db_name.bk --dbname=db_name
For example:
/var/opt/oracle/bkup_api/bkup_api get config --file=/tmp/zdmdb.bk --dbname=zdmdb
- Open the /tmp/db_name.bk file you created in the previous step, and change bkup_oss=yes to bkup_oss=no.
For example, open /tmp/zdmdb.bk and change bkup_oss=yes to bkup_oss=no.
- Disable OSS backup by applying the updated configuration with bkup_oss=no.
/var/opt/oracle/bkup_api/bkup_api set config --file=/tmp/db_name.bk --dbname=db_name
For example:
/var/opt/oracle/bkup_api/bkup_api set config --file=/tmp/zdmdb.bk --dbname=zdmdb
- Check the reconfiguration status.
/var/opt/oracle/bkup_api/bkup_api configure_status --dbname=db_name
For example:
/var/opt/oracle/bkup_api/bkup_api configure_status --dbname=zdmdb
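The steps above can be sketched as one shell sequence. The bkup_api path and options come from the steps; the sed edit stands in for the manual file change, and the zdmdb database name is illustrative.

```shell
# Hedged sketch: remove an existing OSS backup configuration so the console's
# managed backup feature can be enabled. The database name is illustrative.
BKUP_API=/var/opt/oracle/bkup_api/bkup_api   # path as documented in the steps
DBNAME=zdmdb
CFG=/tmp/${DBNAME}.bk

# Flip bkup_oss=yes to bkup_oss=no in a generated configuration file
# (replaces the manual edit in step 2).
disable_oss_in_config() {
  sed 's/^bkup_oss=yes$/bkup_oss=no/' "$1" > "$1.tmp" && mv "$1.tmp" "$1"
}

# Usage on the database server:
#   $BKUP_API get config --file=$CFG --dbname=$DBNAME
#   disable_oss_in_config "$CFG"
#   $BKUP_API set config --file=$CFG --dbname=$DBNAME
#   $BKUP_API configure_status --dbname=$DBNAME
```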
Now enable automatic backup from console.
Verify the backups from the console. Click Create Backup to create a manual backup; the backup should complete without issues, and automatic backups should also succeed.
1.10 Additional Information for Migrating to Oracle Exadata Database Service on Cloud@Customer
Read the following for general information, considerations, and links to more information about using Zero Downtime Migration to migrate your database to Oracle Exadata Database Service on Cloud@Customer.
1.10.1 Considerations for Migrating to Oracle Exadata Database Service on Cloud@Customer
For this release of Zero Downtime Migration, be aware of the following considerations.
- You must apply the regDB patch for Bug 29715950 - "modify regdb to handle db_unique_name not same as db_name" on all Oracle Exadata Database Service on Cloud@Customer nodes. This is required for the ZDM_MANIFEST_TO_CLOUD phase. Note that the regDB tool is part of DBaaS Tooling.
- If the source database is release 18c, then the target home should be at release 18.6 or later to avoid issues such as Bug 29445548 Opening Database In Cloud Environment Fails With ORA-600.
- PDB conversion related phases are listed in -listphases and can be ignored. Those are no-op phases.
- If the backup medium is Zero Data Loss Recovery Appliance, then all configured instances should be up at the source when a FULL or INCREMENTAL backup is performed.
- If a backup was performed when one of the configured instances was down, you will encounter Bug 29863717 - DUPLICATING SOURCE DATABASE FAILED BECAUSE INSTANCE 1 WAS DOWN.
- The TDE keystore password must be set in the credential wallet. To set the password as part of the Zero Downtime Migration workflow, specify the -tdekeystorewallet tde_wallet_path or -tdekeystorepasswd argument, irrespective of whether the wallet uses AUTOLOGIN or PASSWORD. In either case the password is stored in the credential wallet. If the -tdekeystorepasswd argument is not supplied, then Zero Downtime Migration skips setting the tde_ks_passwd key in the credential wallet, and no error is thrown.
- The target environment must have the latest DBaaS Tooling RPM with db_unique_name change support installed.
1.11 Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.
Access to Oracle Support
Oracle customer access to and use of Oracle support services will be pursuant to the terms and conditions specified in their Oracle order for the applicable services.
Oracle Zero Downtime Migration Release Notes, Release 26 (26.1)
G50043-01