Use Case for Preparing the Source Database for Migration

The following is a sample use case for preparing your source database for migration. To configure a PDB as a migration source, the steps are similar to setting up a classic (non-CDB) database as a source, but there are additional requirements for using CDB$ROOT as ggaliassrc.

The steps differ slightly if you're using a PDB as your source database, so make sure you follow the recommendations if your database is in a multitenant environment.

  1. Configure the streams pool with the initialization parameter STREAMS_POOL_SIZE.

    • For offline logical migrations, for optimal Data Pump performance, set STREAMS_POOL_SIZE to a minimum of 256 MB to 350 MB so that an initial pool is allocated; otherwise you might see a significant delay during startup.

    • For online logical migrations, set STREAMS_POOL_SIZE to at least 2GB.

      For an explanation of the recommendation of 1 GB of STREAMS_POOL_SIZE per integrated Extract plus an additional 25 percent, see Integrated Extract / Replicat and STREAMS_POOL_SIZE (Doc ID 2078459.1).
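    As a minimal sketch, the parameter can be set dynamically; the 2G value shown assumes an online logical migration (use a 256M-350M minimum for offline migrations):

      sqlplus > ALTER SYSTEM SET STREAMS_POOL_SIZE=2G SCOPE=BOTH;
      sqlplus > show parameter streams_pool_size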

  2. Check the GLOBAL_NAMES parameter. If it's set to true, change it to false.

    sqlplus > show parameter global
    NAME                                 TYPE    VALUE
    ------------------------------------ ------- ------------------------------
    global_names                         boolean TRUE
    
    sqlplus > alter system set global_names=false;
  3. Enable ARCHIVELOG if it is not already enabled.

    1. Check whether archivelog is enabled:

      sqlplus > archive log list
      Sample output returned:

      Database log mode              Archive Mode
      Automatic archival             Enabled
      Archive destination            USE_DB_RECOVERY_FILE_DEST
      Oldest online log sequence     33
      Next log sequence to archive   35
      Current log sequence           35
    2. Enable archivelog mode:

      sqlplus > shutdown immediate
      sqlplus > startup mount
      sqlplus > alter database archivelog;
      sqlplus > alter database open;
    3. Disable archivelog mode (for cleanup later):

      sqlplus > shutdown immediate
      sqlplus > startup mount
      sqlplus > alter database noarchivelog;
      sqlplus > alter database open;
  4. Enable logging:

    1. Check if logging is enabled:

      sqlplus > SELECT supplemental_log_data_min, force_logging FROM v$database;
    2. Enable logging:

      sqlplus > ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
      sqlplus > ALTER DATABASE FORCE LOGGING;
    3. Disable logging (for cleanup later):

      sqlplus > ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;
      sqlplus > ALTER DATABASE NO FORCE LOGGING;
  5. Create a database administrator user that has full Oracle Data Pump privileges to perform the initial load. A user with the DATAPUMP_EXP_FULL_DATABASE role is required for the export operation on the source database. You select this user as the database administrator when you create database connections to the source database.

    See Oracle Data Pump in the Oracle Database Utilities guide for more information.
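    A minimal sketch, using a hypothetical user name migration_admin; choose a password and default tablespace appropriate for your environment:

      sqlplus > CREATE USER migration_admin IDENTIFIED BY "<password>";
      sqlplus > GRANT CREATE SESSION TO migration_admin;
      sqlplus > GRANT DATAPUMP_EXP_FULL_DATABASE TO migration_admin;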

  6. In the PDB being exported, if any dependencies exist on local objects in a C## (common) user's schema, those objects fail to import into the target Autonomous Database. Exclude the affected schema from the migration job.
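    To locate the common-user schemas that might be affected, a query along these lines can help; it assumes the default C## common-user prefix:

      sqlplus > SELECT username, common FROM dba_users WHERE username LIKE 'C##%';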

  7. If you are using Object Storage as a data transfer medium, ensure that an export Directory Object exists and is usable by Data Pump to store generated dump files.

    • The directory object is a file path on the source database server file system. The name needs to comply with Oracle Database directory object rules. See CREATE DIRECTORY in Oracle Database SQL Language Reference for details.

    • The export Directory Object must be owned by the same operating system user that owns the database Oracle home.

    • This step is not required if you are using a database link transfer medium.
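    A minimal sketch of creating the directory object and granting access to the Data Pump export user from step 5 (the directory name data_pump_dir_src, the file system path, and the migration_admin user name are assumptions for illustration):

      sqlplus > CREATE OR REPLACE DIRECTORY data_pump_dir_src AS '/u01/app/oracle/dumpdir';
      sqlplus > GRANT READ, WRITE ON DIRECTORY data_pump_dir_src TO migration_admin;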

  8. If you plan to transfer data using a database link, you must set up SSL encryption on the source database: using Data Pump over a database link to an Autonomous Database target requires that the source database have SSL encryption enabled. Creating a database link from an Autonomous Database Shared Infrastructure target to a source database with a private IP requires assistance from Oracle Support.

    See Configuring Transport Layer Security Authentication in Oracle Database Security Guide for more information.
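    As a rough sketch of the server-side pieces involved (the wallet path, host name, and port are assumptions; follow the Security Guide for the full procedure), the sqlnet.ora and listener.ora entries for TCPS typically resemble:

      # sqlnet.ora on the source database server
      WALLET_LOCATION =
        (SOURCE = (METHOD = FILE)
          (METHOD_DATA = (DIRECTORY = /u01/app/oracle/wallet)))
      SSL_CLIENT_AUTHENTICATION = FALSE

      # listener.ora: add a TCPS endpoint
      LISTENER =
        (DESCRIPTION_LIST =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCPS)(HOST = srcdb-host)(PORT = 2484))))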

  9. For online logical migrations, if you plan to run migrations with replication, enable GoldenGate Replication:

    1. In a multitenant environment, if you are migrating a PDB, enable GoldenGate Replication on the CDB.

      sqlplus > ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION=TRUE SCOPE=BOTH;
    2. Apply the mandatory RDBMS patches on the source database, based on your source database version:

      • Oracle Database 11.2:

        My Oracle Support note Oracle GoldenGate -- Oracle RDBMS Server Recommended Patches (Doc ID 1557031.1) recommends the following updates:

        Database PSU 11.2.0.4.210720 includes a fix for Oracle GoldenGate performance bug 28849751 - IE PERFORMANCE DEGRADES WHEN NETWORK LATENCY BETWEEN EXTRACT AND CAPTURE IS MORE THAN 8MS

        OGG RDBMS patch 32248879 MERGE REQUEST ON TOP OF DATABASE PSU 11.2.0.4.201020 FOR BUGS 32048478 20448066 - This patch contains mandatory fix for Oracle GoldenGate Microservices bug 20448066 DBMS_XSTREAM_GG APIS SHOULD BE ALLOWED FOR SCA PROCESSES

      • Oracle Database 12.1.0.2 or later:

        My Oracle Support note Latest GoldenGate/Database (OGG/RDBMS) Patch recommendations (Doc ID 2193391.1) lists the additional RDBMS patches needed on top of the latest DBBP/RU for Oracle Database 12c and later if using Oracle GoldenGate.