3 Preparing Your Databases for Migration

Before you can begin the migration of your data with Oracle Cloud Infrastructure Database Migration Service, you must configure your source and target databases as described here.

Preparing Oracle Databases for Migration

Prepare your databases using either of the following methods:

  • Prepare your databases by running the scripts generated by the database preparation utility (recommended).
  • Manually configure your Oracle databases by following the documentation and running the SQL commands.

Preparing your databases using the database preparation utility:

To prepare your databases for migration:
  1. Refer to this MOS note.
  2. Download the database preparation utility, which is a shell script.
  3. Follow the instructions in the note to proceed.
  4. Run the script locally.
The database preparation utility:
  1. Accepts inputs that are specific to your migration and generates a SQL script that you can run against your source and target databases.
  2. Analyzes your databases for any missing required configurations or privileges.
  3. Checks the current status of the databases and provides information about the operations that will be performed on them.
  4. Generates a final script that performs the required operations on your databases to prepare them for the migration.

    Note:

    • You must review the scripts generated by the database preparation utility and make any necessary corrections before you run them against your databases.
    • You must run the utility script twice: once for the source database and once for the target database.

    Running the generated configuration SQL script prepares the database for migration.
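For example, after reviewing the generated scripts, you can run each one in SQL*Plus while connected to the corresponding database as a privileged user. This is a minimal sketch; the script names are placeholders, so use the file names that the utility actually generates for your databases:

-- On the source database (placeholder script name)
@prepare_source.sql

-- On the target database (placeholder script name)
@prepare_target.sql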

Manually configuring your databases for migration:

To prepare your source and target databases manually using SQL commands, see the following topics:

Preparing the Source Database for Migration

Before you can migrate data with Oracle Cloud Infrastructure Database Migration, manually configure your source database as described here.

  • To configure a single-tenant database (non-CDB) as a source for migration, run the following SQL commands:
-- Archive Log Mode
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
 
-- Global Names
ALTER SYSTEM SET GLOBAL_NAMES=FALSE;
 
-- Stream Pool Size
ALTER SYSTEM SET STREAMS_POOL_SIZE=256M;
 
-- Force Logging
ALTER DATABASE FORCE LOGGING;
 
-- Supplemental Logging
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
 
-- User system for Datapump
ALTER USER SYSTEM IDENTIFIED BY system_pwd ACCOUNT UNLOCK;
  • To configure a multi-tenant (CDB) as a source for migration, run the following SQL commands:
-- Connect to CDB and run:
 
-- Archive Log Mode
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
 
-- Global Names
ALTER SYSTEM SET GLOBAL_NAMES=FALSE;
 
-- Stream Pool Size
ALTER SYSTEM SET STREAMS_POOL_SIZE=256M SCOPE=BOTH;
 
-- Force Logging
ALTER DATABASE FORCE LOGGING;
 
-- Supplemental Logging
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
 
-- User system for Datapump
ALTER USER SYSTEM IDENTIFIED BY system_pwd ACCOUNT UNLOCK;
ALTER SESSION SET CONTAINER=PDB;
ALTER USER SYSTEM ACCOUNT UNLOCK;
  • To configure Amazon RDS (non-CDB) as a source for migration, run the following SQL commands:
    -- Remember to set the following parameters through the Parameter groups functionality:
    -- STREAMS_POOL_SIZE=2147483648
    -- GLOBAL_NAMES=FALSE
    -- To see how parameter groups work, refer to https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/parameter-groups-overview.html
     
    -- Archive Log Mode
    EXEC RDSADMIN.RDSADMIN_UTIL.SET_CONFIGURATION('ARCHIVELOG RETENTION HOURS',72);
     
    -- Force Logging
    EXEC RDSADMIN.RDSADMIN_UTIL.FORCE_LOGGING(P_ENABLE => TRUE);
     
    -- Supplemental Logging
    EXEC RDSADMIN.RDSADMIN_UTIL.ALTER_SUPPLEMENTAL_LOGGING('ADD');
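After running the commands for your source topology, you can optionally confirm the key settings by querying standard dictionary views. This is a minimal verification sketch; it assumes a session with SELECT access to V$DATABASE and V$PARAMETER:

-- Verify archive logging, force logging, and supplemental logging
SELECT log_mode, force_logging, supplemental_log_data_min FROM v$database;

-- Verify GLOBAL_NAMES and STREAMS_POOL_SIZE
SELECT name, value FROM v$parameter
 WHERE name IN ('global_names', 'streams_pool_size');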
Additional Configurations for Preparing the Source Database for Online Migration

Before you can migrate data with Oracle Cloud Infrastructure Database Migration, perform additional configurations for your source database for online migration as described here.

  • To configure a single-tenant database (non-CDB) as a source for online migration, run the following SQL commands:
-- Archive Log Mode
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
 
-- Global Names
ALTER SYSTEM SET GLOBAL_NAMES=FALSE;
 
-- Stream Pool Size
ALTER SYSTEM SET STREAMS_POOL_SIZE=2G;
 
-- Force Logging
ALTER DATABASE FORCE LOGGING;
 
-- Enable GoldenGate
ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION=TRUE;
 
-- Supplemental Logging
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
 
-- User system for Datapump
ALTER USER SYSTEM IDENTIFIED BY system_pwd ACCOUNT UNLOCK;
 
-- Create GoldenGate nonCDB user
CREATE TABLESPACE GG_ADMIN DATAFILE '+DATA/ggadmin_data.dbf' SIZE 100m AUTOEXTEND ON NEXT 100m;
CREATE USER GGADMIN IDENTIFIED BY ggadmin_pwd DEFAULT TABLESPACE GG_ADMIN TEMPORARY TABLESPACE TEMP QUOTA 100M ON GG_ADMIN;
GRANT CONNECT TO GGADMIN;
GRANT RESOURCE TO GGADMIN;
GRANT CREATE SESSION TO GGADMIN;
GRANT SELECT_CATALOG_ROLE TO GGADMIN;
GRANT DV_GOLDENGATE_ADMIN TO GGADMIN;
GRANT DV_GOLDENGATE_REDO_ACCESS TO GGADMIN; 
GRANT ALTER SYSTEM TO GGADMIN; 
GRANT ALTER USER TO GGADMIN; 
GRANT DATAPUMP_EXP_FULL_DATABASE TO GGADMIN; 
GRANT DATAPUMP_IMP_FULL_DATABASE TO GGADMIN; 
GRANT SELECT ANY DICTIONARY TO GGADMIN; 
GRANT SELECT ANY TRANSACTION TO GGADMIN; 
GRANT INSERT ANY TABLE TO GGADMIN; 
GRANT UPDATE ANY TABLE TO GGADMIN; 
GRANT DELETE ANY TABLE TO GGADMIN; 
GRANT LOCK ANY TABLE TO GGADMIN; 
GRANT CREATE ANY TABLE TO GGADMIN; 
GRANT CREATE ANY INDEX TO GGADMIN; 
GRANT CREATE ANY CLUSTER TO GGADMIN; 
GRANT CREATE ANY INDEXTYPE TO GGADMIN; 
GRANT CREATE ANY OPERATOR TO GGADMIN; 
GRANT CREATE ANY PROCEDURE TO GGADMIN; 
GRANT CREATE ANY SEQUENCE TO GGADMIN; 
GRANT CREATE ANY TRIGGER TO GGADMIN; 
GRANT CREATE ANY TYPE TO GGADMIN; 
GRANT CREATE ANY VIEW TO GGADMIN; 
GRANT ALTER ANY TABLE TO GGADMIN; 
GRANT ALTER ANY INDEX TO GGADMIN; 
GRANT ALTER ANY CLUSTER TO GGADMIN; 
GRANT ALTER ANY INDEXTYPE TO GGADMIN; 
GRANT ALTER ANY OPERATOR TO GGADMIN; 
GRANT ALTER ANY PROCEDURE TO GGADMIN; 
GRANT ALTER ANY SEQUENCE TO GGADMIN; 
GRANT ALTER ANY TRIGGER TO GGADMIN; 
GRANT ALTER ANY TYPE TO GGADMIN; 
GRANT CREATE DATABASE LINK TO GGADMIN;
GRANT EXECUTE ON dbms_lock TO GGADMIN; 
EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('GGADMIN');
  • To configure a multi-tenant (CDB) as a source for online migration, run the following SQL commands:
-- Connect to CDB and run:
 
-- Archive Log Mode
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
 
-- Global Names
ALTER SYSTEM SET GLOBAL_NAMES=FALSE;
 
-- Stream Pool Size
ALTER SYSTEM SET STREAMS_POOL_SIZE=2G SCOPE=BOTH;
 
-- Force Logging
ALTER DATABASE FORCE LOGGING;
 
-- Enable GoldenGate
ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION=TRUE SCOPE=BOTH;
 
-- Supplemental Logging
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
 
-- User system for Datapump
ALTER USER SYSTEM IDENTIFIED BY system_pwd ACCOUNT UNLOCK CONTAINER=ALL;
 
-- Create GoldenGate users
-- CDB user
ALTER SESSION SET CONTAINER = CDB$ROOT;
CREATE USER C##GGADMIN IDENTIFIED BY cggadmin_pwd CONTAINER=ALL DEFAULT TABLESPACE USERS TEMPORARY TABLESPACE TEMP QUOTA UNLIMITED ON USERS;
GRANT CONNECT TO C##GGADMIN CONTAINER=ALL;
GRANT RESOURCE TO C##GGADMIN CONTAINER=ALL;
GRANT CREATE TABLE TO C##GGADMIN CONTAINER=ALL;
GRANT CREATE VIEW TO C##GGADMIN CONTAINER=ALL;
GRANT CREATE SESSION TO C##GGADMIN CONTAINER=ALL;
GRANT SELECT_CATALOG_ROLE TO C##GGADMIN CONTAINER=ALL;
GRANT DV_GOLDENGATE_ADMIN TO C##GGADMIN CONTAINER=ALL;
GRANT DV_GOLDENGATE_REDO_ACCESS TO C##GGADMIN CONTAINER=ALL;
GRANT ALTER SYSTEM TO C##GGADMIN CONTAINER=ALL;
GRANT ALTER USER TO C##GGADMIN CONTAINER=ALL;
GRANT SELECT ANY DICTIONARY TO C##GGADMIN CONTAINER=ALL;
GRANT SELECT ANY TRANSACTION TO C##GGADMIN CONTAINER=ALL;
GRANT EXECUTE ON dbms_lock TO C##GGADMIN CONTAINER=ALL;
EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('C##GGADMIN',CONTAINER=>'ALL');
 
-- PDB User
ALTER SESSION SET CONTAINER = v_pdb_name;
CREATE TABLESPACE GG_ADMIN DATAFILE '+DATA/ggadmin_data.dbf' SIZE 100m AUTOEXTEND ON NEXT 100m;
CREATE USER GGADMIN IDENTIFIED BY ggadmin_pwd CONTAINER=CURRENT DEFAULT TABLESPACE GG_ADMIN TEMPORARY TABLESPACE TEMP QUOTA UNLIMITED ON GG_ADMIN;
GRANT CONNECT TO GGADMIN CONTAINER=CURRENT;
GRANT RESOURCE TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE SESSION TO GGADMIN CONTAINER=CURRENT;
GRANT SELECT_CATALOG_ROLE TO GGADMIN CONTAINER=CURRENT;
GRANT DV_GOLDENGATE_ADMIN TO GGADMIN CONTAINER=CURRENT;
GRANT DV_GOLDENGATE_REDO_ACCESS TO GGADMIN CONTAINER=CURRENT;
GRANT ALTER SYSTEM TO GGADMIN CONTAINER=CURRENT;
GRANT ALTER USER TO GGADMIN CONTAINER=CURRENT;
GRANT DATAPUMP_EXP_FULL_DATABASE TO GGADMIN CONTAINER=CURRENT;
GRANT DATAPUMP_IMP_FULL_DATABASE TO GGADMIN CONTAINER=CURRENT;
GRANT SELECT ANY DICTIONARY TO GGADMIN CONTAINER=CURRENT;
GRANT SELECT ANY TRANSACTION TO GGADMIN CONTAINER=CURRENT;
GRANT INSERT ANY TABLE TO GGADMIN CONTAINER=CURRENT;
GRANT UPDATE ANY TABLE TO GGADMIN CONTAINER=CURRENT;
GRANT DELETE ANY TABLE TO GGADMIN CONTAINER=CURRENT;
GRANT LOCK ANY TABLE TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE ANY TABLE TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE ANY INDEX TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE ANY CLUSTER TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE ANY INDEXTYPE TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE ANY OPERATOR TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE ANY PROCEDURE TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE ANY SEQUENCE TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE ANY TRIGGER TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE ANY TYPE TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE ANY VIEW TO GGADMIN CONTAINER=CURRENT;
GRANT ALTER ANY TABLE TO GGADMIN CONTAINER=CURRENT;
GRANT ALTER ANY INDEX TO GGADMIN CONTAINER=CURRENT;
GRANT ALTER ANY CLUSTER TO GGADMIN CONTAINER=CURRENT;
GRANT ALTER ANY INDEXTYPE TO GGADMIN CONTAINER=CURRENT;
GRANT ALTER ANY OPERATOR TO GGADMIN CONTAINER=CURRENT;
GRANT ALTER ANY PROCEDURE TO GGADMIN CONTAINER=CURRENT;
GRANT ALTER ANY SEQUENCE TO GGADMIN CONTAINER=CURRENT;
GRANT ALTER ANY TRIGGER TO GGADMIN CONTAINER=CURRENT;
GRANT ALTER ANY TYPE TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE DATABASE LINK TO GGADMIN CONTAINER=CURRENT;
GRANT EXECUTE ON dbms_lock TO GGADMIN CONTAINER=CURRENT;
EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('GGADMIN',CONTAINER=>'CURRENT');
  • To configure Amazon RDS (non-CDB) as a source for online migration, run the following SQL commands:
    -- Remember to set the following parameters through the Parameter groups functionality:
    -- STREAMS_POOL_SIZE=2147483648
    -- ENABLE_GOLDENGATE_REPLICATION=TRUE
    -- GLOBAL_NAMES=FALSE
    -- To see how parameter groups work, refer to https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/parameter-groups-overview.html
     
    -- Archive Log Mode
    EXEC RDSADMIN.RDSADMIN_UTIL.SET_CONFIGURATION('ARCHIVELOG RETENTION HOURS',72);
     
    -- Force Logging
    EXEC RDSADMIN.RDSADMIN_UTIL.FORCE_LOGGING(P_ENABLE => TRUE);
     
    -- Supplemental Logging
    EXEC RDSADMIN.RDSADMIN_UTIL.ALTER_SUPPLEMENTAL_LOGGING('ADD');
     
    -- Create GoldenGate user
    CREATE USER GGADMIN IDENTIFIED BY ggadmin_pwd DEFAULT TABLESPACE USERS TEMPORARY TABLESPACE TEMP QUOTA 100M ON USERS;
    GRANT UNLIMITED TABLESPACE TO GGADMIN;
    GRANT CONNECT, RESOURCE TO GGADMIN;
    GRANT SELECT ANY DICTIONARY TO GGADMIN;
    GRANT CREATE VIEW TO GGADMIN;
    GRANT EXECUTE ON DBMS_LOCK TO GGADMIN;
    GRANT SELECT ON SYS.CCOL$ TO GGADMIN;
    GRANT SELECT ON SYS.CDEF$ TO GGADMIN;
    GRANT SELECT ON SYS.COL$ TO GGADMIN;
    GRANT SELECT ON SYS.CON$ TO GGADMIN;
    GRANT SELECT ON SYS.DEFERRED_STG$ TO GGADMIN;
    GRANT SELECT ON SYS.ICOL$ TO GGADMIN;
    GRANT SELECT ON SYS.IND$ TO GGADMIN;
    GRANT SELECT ON SYS.LOB$ TO GGADMIN;
    GRANT SELECT ON SYS.LOBFRAG$ TO GGADMIN;
    GRANT SELECT ON SYS.OBJ$ TO GGADMIN;
    GRANT SELECT ON SYS.SEG$ TO GGADMIN;
    GRANT SELECT ON SYS.TAB$ TO GGADMIN;
    GRANT SELECT ON SYS.TABCOMPART$ TO GGADMIN;
    GRANT SELECT ON SYS.TABPART$ TO GGADMIN;
    GRANT SELECT ON SYS.TABSUBPART$ TO GGADMIN;
    EXEC RDSADMIN.RDSADMIN_DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE (GRANTEE=>'GGADMIN',PRIVILEGE_TYPE=>'CAPTURE',GRANT_SELECT_PRIVILEGES=>TRUE,DO_GRANTS=>TRUE);
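After running the commands for your source topology, you can optionally confirm the replication-related settings and the GoldenGate user by querying standard dictionary views. This is a minimal verification sketch; it assumes a session with SELECT access to V$PARAMETER and DBA_USERS:

-- Verify the parameters required for online migration
SELECT name, value FROM v$parameter
 WHERE name IN ('enable_goldengate_replication', 'streams_pool_size', 'global_names');

-- Verify that the GoldenGate user exists and is unlocked
SELECT username, account_status, default_tablespace
  FROM dba_users
 WHERE username IN ('GGADMIN', 'C##GGADMIN');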
Use Case for Preparing the Source Database for Migration

Following is a sample use case to prepare your source database for migration. To configure a PDB as a source for your migration, the steps are similar to setting up a non-CDB database as a source, but there are additional requirements for using the CDBROOT as ggaliassrc.

The steps differ slightly if you're using a PDB as your source database, so make sure you follow the recommendations if your database is in a multitenant environment.

  1. Configure the streams pool with the initialization parameter STREAMS_POOL_SIZE.

    • For offline logical migrations, for optimal Data Pump performance, set STREAMS_POOL_SIZE to a minimum of 256 MB to 350 MB so that an initial pool is allocated; otherwise, you might see a significant delay during startup.

    • For online logical migrations, set STREAMS_POOL_SIZE to at least 2GB.

      For the explanation of the recommendation of 1 GB of STREAMS_POOL_SIZE per integrated Extract, plus an additional 25 percent, see Integrated Extract / Replicat and STREAMS_POOL_SIZE (Doc ID 2078459.1).
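    For example, a minimal sketch in SQL*Plus that checks the current value and sets the online-migration value recommended above (SCOPE=BOTH assumes the instance uses an spfile):

    sqlplus > show parameter streams_pool_size
    sqlplus > alter system set streams_pool_size=2G scope=both;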

  2. Check the GLOBAL_NAMES parameter. If it's set to true, change it to false.

    sqlplus > show parameter global
    NAME                                 TYPE    VALUE
    ------------------------------------ ------- ------------------------------
    global_names                         boolean TRUE
    
    sqlplus > alter system set global_names=false;
  3. Enable ARCHIVELOG if it is not already enabled.

    1. Check whether archivelog is enabled:

      sqlplus > archive log list
      Sample output returned:
      Database log mode              Archive log Mode
      Automatic archival             Enabled
      Archive destination            USE_DB_RECOVERY_FILE_DEST
      Oldest online log sequence     33
      Next log sequence to archive   35
      Current log sequence           35
    2. Enable archivelog mode:

      sqlplus > shutdown immediate
      sqlplus > startup mount
      sqlplus > alter database archivelog;
      sqlplus > alter database open;
    3. Disable archivelog mode (for cleanup later):

      sqlplus > shutdown immediate
      sqlplus > startup mount
      sqlplus > alter database noarchivelog;
      sqlplus > alter database open;
  4. Enable logging:

    1. Check if logging is enabled:

      sqlplus > SELECT supplemental_log_data_min, force_logging FROM v$database;
    2. Enable logging:

      sqlplus > ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
      sqlplus > ALTER DATABASE FORCE LOGGING;
    3. Disable logging (for cleanup later):

      sqlplus > ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;
      sqlplus > ALTER DATABASE NO FORCE LOGGING;
  5. Create a database administrator user that has full Oracle Data Pump privileges for performing the initial load. A user with the DATAPUMP_EXP_FULL_DATABASE role is required for the export operation on the source database. You select this user as the database administrator when you create database connections for the source database.

    See Oracle Data Pump in the Oracle Database Utilities guide for more information.
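    For example, a minimal sketch that creates a hypothetical administrator user for the export. The user name, password, and tablespace are placeholders; add any further grants your environment requires:

    sqlplus > CREATE USER dms_admin IDENTIFIED BY dms_admin_pwd DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
    sqlplus > GRANT CREATE SESSION TO dms_admin;
    sqlplus > GRANT DATAPUMP_EXP_FULL_DATABASE TO dms_admin;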

  6. In the PDB being exported, if any dependencies are created on local objects in the C## user's schema, they will fail to be imported into the target Autonomous Database. Exclude the problematic schema from the migration job.

  7. If you are using Object Storage as a data transfer medium, ensure that an export directory object exists and is usable by Data Pump to store the generated dump files.

    • The directory object maps to a file path on the source database server file system. The name must comply with Oracle Database directory object naming rules. See CREATE DIRECTORY in Oracle Database SQL Language Reference for details.

    • The export directory object must be owned by the same OS user who owns the database Oracle home.

    • This step is not required if you are using a database link as the transfer medium.
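    For example, a minimal sketch that creates a hypothetical export directory object. The directory name, file system path, and grantee (the administrator user from step 5) are placeholders, and the path must already exist on the source database server:

    sqlplus > CREATE OR REPLACE DIRECTORY dms_export_dir AS '/u01/app/oracle/dms_export';
    sqlplus > GRANT READ, WRITE ON DIRECTORY dms_export_dir TO dms_admin;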

  8. If you plan to transfer data using a database link, then you must set up SSL encryption on the source database. Using Data Pump with a database link to an Autonomous Database target requires that the source database have SSL encryption enabled. Creating a database link from an Autonomous Database Shared Infrastructure target to a source database with a private IP requires assistance from Oracle Support.

    See Configuring Transport Layer Security Authentication in Oracle Database Security Guide for more information.

  9. For online logical migrations, if you plan to run migrations with replication, enable GoldenGate Replication:

    1. In a multitenant environment, if you are migrating a PDB, enable GoldenGate Replication on the CDB.

      sqlplus > ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION=TRUE SCOPE=BOTH;
    2. Apply the mandatory RDBMS patches on the source database, based on your source database version:

      • Oracle Database 11.2:

        My Oracle Support note Oracle GoldenGate -- Oracle RDBMS Server Recommended Patches (Doc ID 1557031.1) recommends the following updates:

        Database PSU 11.2.0.4.210720 includes a fix for Oracle GoldenGate performance bug 28849751 - IE PERFORMANCE DEGRADES WHEN NETWORK LATENCY BETWEEN EXTRACT AND CAPTURE IS MORE THAN 8MS

        OGG RDBMS patch 32248879 MERGE REQUEST ON TOP OF DATABASE PSU 11.2.0.4.201020 FOR BUGS 32048478 20448066 - This patch contains mandatory fix for Oracle GoldenGate Microservices bug 20448066 DBMS_XSTREAM_GG APIS SHOULD BE ALLOWED FOR SCA PROCESSES

      • Oracle Database 12.1.0.2 or later:

        My Oracle Support note Latest GoldenGate/Database (OGG/RDBMS) Patch recommendations (Doc ID 2193391.1) lists the additional RDBMS patches needed on top of the latest DBBP/RU for Oracle Database 12c and later if using Oracle GoldenGate.

Preparing the Target Database for Migration

Before you can migrate data with Oracle Cloud Infrastructure Database Migration, manually configure your target database as described here.

  • To configure an Autonomous Database as a target for migration, run the following SQL commands:
-- Global Names
ALTER SYSTEM SET GLOBAL_NAMES=FALSE;
  • To configure a non-Autonomous, single-tenant (non-CDB) as a target for migration, run the following SQL commands:
-- Global Names
ALTER SYSTEM SET GLOBAL_NAMES=FALSE;
 
-- User system for Datapump
ALTER USER SYSTEM IDENTIFIED BY system_pwd ACCOUNT UNLOCK;
  • To configure a non-Autonomous, multi-tenant (CDB) as a target for migration, run the following SQL commands:
    -- Connect to CDB and run:
     
    -- Global Names
    ALTER SYSTEM SET GLOBAL_NAMES=FALSE;
     
    -- User system for Datapump
    ALTER USER SYSTEM IDENTIFIED BY system_pwd ACCOUNT UNLOCK CONTAINER=ALL;
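Optionally, on a non-Autonomous target you can confirm the SYSTEM account status and the GLOBAL_NAMES setting by querying standard dictionary views. This is a minimal verification sketch; it assumes SELECT access to DBA_USERS and V$PARAMETER:

-- Verify that SYSTEM is unlocked and GLOBAL_NAMES is FALSE
SELECT username, account_status FROM dba_users WHERE username = 'SYSTEM';
SELECT name, value FROM v$parameter WHERE name = 'global_names';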
Additional Configurations for Preparing the Target Database for Online Migration

Before you can migrate data with Oracle Cloud Infrastructure Database Migration, perform additional configurations for your target database for online migration as described here.

You must unlock the ggadmin user from the Oracle Cloud Infrastructure Console by performing the following steps:
  1. Follow step 1 through step 3 mentioned in Manage Users and User Roles on Autonomous Database - Connecting with Database Actions.
  2. Turn off the Account is locked toggle.
  3. Provide a password with its corresponding confirmation.
  • Alternatively, to configure an Autonomous Database as a target for online migration, run the following SQL commands:
-- Global Names
ALTER SYSTEM SET GLOBAL_NAMES=FALSE;
 
-- Create the GoldenGate user if it doesn't exist
CREATE TABLESPACE GG_ADMIN DATAFILE '+DATA/ggadmin_data.dbf' SIZE 100m AUTOEXTEND ON NEXT 100m;
CREATE USER GGADMIN IDENTIFIED BY ggadmin_pwd CONTAINER=CURRENT DEFAULT TABLESPACE GG_ADMIN TEMPORARY TABLESPACE TEMP QUOTA UNLIMITED ON GG_ADMIN;
 
-- Or unlock it if it already exists
ALTER USER GGADMIN IDENTIFIED BY ggadmin_pwd ACCOUNT UNLOCK;
  • To configure a non-Autonomous, single-tenant (non-CDB) as a target for online migration, run the following SQL commands:
-- Global Names
ALTER SYSTEM SET GLOBAL_NAMES=FALSE;
 
-- User system for Datapump
ALTER USER SYSTEM IDENTIFIED BY system_pwd ACCOUNT UNLOCK;
 
-- Create GoldenGate nonCDB user
CREATE TABLESPACE GG_ADMIN DATAFILE '+DATA/ggadmin_data.dbf' SIZE 100m AUTOEXTEND ON NEXT 100m;
CREATE USER GGADMIN IDENTIFIED BY ggadmin_pwd DEFAULT TABLESPACE GG_ADMIN TEMPORARY TABLESPACE TEMP QUOTA 100M ON GG_ADMIN;
GRANT CONNECT TO GGADMIN;
GRANT RESOURCE TO GGADMIN;
GRANT CREATE SESSION TO GGADMIN;
GRANT SELECT_CATALOG_ROLE TO GGADMIN;
GRANT DV_GOLDENGATE_ADMIN TO GGADMIN;
GRANT DV_GOLDENGATE_REDO_ACCESS TO GGADMIN; 
GRANT ALTER SYSTEM TO GGADMIN; 
GRANT ALTER USER TO GGADMIN; 
GRANT DATAPUMP_EXP_FULL_DATABASE TO GGADMIN; 
GRANT DATAPUMP_IMP_FULL_DATABASE TO GGADMIN; 
GRANT SELECT ANY DICTIONARY TO GGADMIN; 
GRANT SELECT ANY TRANSACTION TO GGADMIN; 
GRANT INSERT ANY TABLE TO GGADMIN; 
GRANT UPDATE ANY TABLE TO GGADMIN; 
GRANT DELETE ANY TABLE TO GGADMIN; 
GRANT LOCK ANY TABLE TO GGADMIN; 
GRANT CREATE ANY TABLE TO GGADMIN; 
GRANT CREATE ANY INDEX TO GGADMIN; 
GRANT CREATE ANY CLUSTER TO GGADMIN; 
GRANT CREATE ANY INDEXTYPE TO GGADMIN; 
GRANT CREATE ANY OPERATOR TO GGADMIN; 
GRANT CREATE ANY PROCEDURE TO GGADMIN; 
GRANT CREATE ANY SEQUENCE TO GGADMIN; 
GRANT CREATE ANY TRIGGER TO GGADMIN; 
GRANT CREATE ANY TYPE TO GGADMIN; 
GRANT CREATE ANY VIEW TO GGADMIN; 
GRANT ALTER ANY TABLE TO GGADMIN; 
GRANT ALTER ANY INDEX TO GGADMIN; 
GRANT ALTER ANY CLUSTER TO GGADMIN; 
GRANT ALTER ANY INDEXTYPE TO GGADMIN; 
GRANT ALTER ANY OPERATOR TO GGADMIN; 
GRANT ALTER ANY PROCEDURE TO GGADMIN; 
GRANT ALTER ANY SEQUENCE TO GGADMIN; 
GRANT ALTER ANY TRIGGER TO GGADMIN; 
GRANT ALTER ANY TYPE TO GGADMIN; 
GRANT CREATE DATABASE LINK TO GGADMIN;
GRANT EXECUTE ON dbms_lock TO GGADMIN; 
EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('GGADMIN');
  • To configure a non-Autonomous, multi-tenant (CDB) as a target for online migration, run the following SQL commands:
    -- Connect to CDB and run:
     
    -- Global Names
    ALTER SYSTEM SET GLOBAL_NAMES=FALSE;
     
    -- User system for Datapump
    ALTER USER SYSTEM IDENTIFIED BY system_pwd ACCOUNT UNLOCK CONTAINER=ALL;
     
    -- Create GoldenGate PDB User
    ALTER SESSION SET CONTAINER = v_pdb_name;
    CREATE TABLESPACE GG_ADMIN DATAFILE '+DATA/ggadmin_data.dbf' SIZE 100m AUTOEXTEND ON NEXT 100m;
    CREATE USER GGADMIN IDENTIFIED BY ggadmin_pwd CONTAINER=CURRENT DEFAULT TABLESPACE GG_ADMIN TEMPORARY TABLESPACE TEMP QUOTA UNLIMITED ON GG_ADMIN;
    GRANT CONNECT TO GGADMIN CONTAINER=CURRENT;
    GRANT RESOURCE TO GGADMIN CONTAINER=CURRENT;
    GRANT CREATE SESSION TO GGADMIN CONTAINER=CURRENT;
    GRANT SELECT_CATALOG_ROLE TO GGADMIN CONTAINER=CURRENT;
    GRANT DV_GOLDENGATE_ADMIN TO GGADMIN CONTAINER=CURRENT;
    GRANT DV_GOLDENGATE_REDO_ACCESS TO GGADMIN CONTAINER=CURRENT;
    GRANT ALTER SYSTEM TO GGADMIN CONTAINER=CURRENT;
    GRANT ALTER USER TO GGADMIN CONTAINER=CURRENT;
    GRANT DATAPUMP_EXP_FULL_DATABASE TO GGADMIN CONTAINER=CURRENT;
    GRANT DATAPUMP_IMP_FULL_DATABASE TO GGADMIN CONTAINER=CURRENT;
    GRANT SELECT ANY DICTIONARY TO GGADMIN CONTAINER=CURRENT;
    GRANT SELECT ANY TRANSACTION TO GGADMIN CONTAINER=CURRENT;
    GRANT INSERT ANY TABLE TO GGADMIN CONTAINER=CURRENT;
    GRANT UPDATE ANY TABLE TO GGADMIN CONTAINER=CURRENT;
    GRANT DELETE ANY TABLE TO GGADMIN CONTAINER=CURRENT;
    GRANT LOCK ANY TABLE TO GGADMIN CONTAINER=CURRENT;
    GRANT CREATE ANY TABLE TO GGADMIN CONTAINER=CURRENT;
    GRANT CREATE ANY INDEX TO GGADMIN CONTAINER=CURRENT;
    GRANT CREATE ANY CLUSTER TO GGADMIN CONTAINER=CURRENT;
    GRANT CREATE ANY INDEXTYPE TO GGADMIN CONTAINER=CURRENT;
    GRANT CREATE ANY OPERATOR TO GGADMIN CONTAINER=CURRENT;
    GRANT CREATE ANY PROCEDURE TO GGADMIN CONTAINER=CURRENT;
    GRANT CREATE ANY SEQUENCE TO GGADMIN CONTAINER=CURRENT;
    GRANT CREATE ANY TRIGGER TO GGADMIN CONTAINER=CURRENT;
    GRANT CREATE ANY TYPE TO GGADMIN CONTAINER=CURRENT;
    GRANT CREATE ANY VIEW TO GGADMIN CONTAINER=CURRENT;
    GRANT ALTER ANY TABLE TO GGADMIN CONTAINER=CURRENT;
    GRANT ALTER ANY INDEX TO GGADMIN CONTAINER=CURRENT;
    GRANT ALTER ANY CLUSTER TO GGADMIN CONTAINER=CURRENT;
    GRANT ALTER ANY INDEXTYPE TO GGADMIN CONTAINER=CURRENT;
    GRANT ALTER ANY OPERATOR TO GGADMIN CONTAINER=CURRENT;
    GRANT ALTER ANY PROCEDURE TO GGADMIN CONTAINER=CURRENT;
    GRANT ALTER ANY SEQUENCE TO GGADMIN CONTAINER=CURRENT;
    GRANT ALTER ANY TRIGGER TO GGADMIN CONTAINER=CURRENT;
    GRANT ALTER ANY TYPE TO GGADMIN CONTAINER=CURRENT;
    GRANT CREATE DATABASE LINK TO GGADMIN CONTAINER=CURRENT;
    GRANT EXECUTE ON dbms_lock TO GGADMIN CONTAINER=CURRENT;
    EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('GGADMIN',CONTAINER=>'CURRENT');
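After running the commands for your target topology, you can optionally confirm that the GoldenGate user exists and is unlocked by querying a standard dictionary view. This is a minimal verification sketch; it assumes SELECT access to DBA_USERS:

-- Verify the GoldenGate user on the target
SELECT username, account_status, default_tablespace FROM dba_users WHERE username = 'GGADMIN';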
Use Case for Preparing the Target Database for Migration

Following is a sample use case for preparing a target database for migration.

  1. Create an Autonomous Database. If the target Autonomous Database already exists, you can skip this step.

  2. Check the GLOBAL_NAMES parameter. If it's set to true, change it to false.

    sqlplus > show parameter global
    NAME                                 TYPE    VALUE
    ------------------------------------ ------- ------------------------------
    global_names                         boolean TRUE
    
    sqlplus > alter system set global_names=false;
  3. Create a database administrator user that has full Oracle Data Pump privileges for performing the initial load. A user with the DATAPUMP_IMP_FULL_DATABASE role is required for the import operation on the target database. You select this user as the database administrator when you create database connections for the target database.

    See Oracle Data Pump in the Oracle Database Utilities guide for more information.
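    For example, a minimal sketch on a non-Autonomous target that creates a hypothetical administrator user for the import. The user name, password, and tablespace are placeholders:

    sqlplus > CREATE USER dms_admin IDENTIFIED BY dms_admin_pwd DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
    sqlplus > GRANT CREATE SESSION TO dms_admin;
    sqlplus > GRANT DATAPUMP_IMP_FULL_DATABASE TO dms_admin;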

Preparing MySQL Databases for Migration

If you want to perform online MySQL migrations, prepare your source and target databases for replication as follows:

  • Source database requirements:
    Depending on your source server configuration, the following modes are supported:

    Note:

    The preceding steps apply only when you perform an online migration.
  • Target database requirements:

Note:

For online migrations, if you define a separate replication user, ensure that it has the privileges defined here. If you do not define a separate replication user, ensure that the main user for the connection has these privileges.