5 Managing Migrations

When you create a migration with Oracle Cloud Infrastructure Database Migration, you specify how the migration should run, select the source and target databases, and then configure the data transport settings. Optionally, you can configure advanced GoldenGate and Data Pump settings in the migration using the Database Migration console.

Creating Migrations

A migration contains the parameter settings for running a migration job with Oracle Cloud Infrastructure Database Migration.

The following procedure explains how to create migrations, which contain the settings for running migration jobs with Database Migration. You can create multiple migration resources with different parameter settings to test different scenarios.

Creating a Migration

  1. Log in to the Console as a user with permissions to access Database Migration.

  2. Open the navigation menu. Under Database Migration, click Migrations. A list of the migration resources in the currently selected compartment is displayed.

  3. Click Create migration.

    This opens the Create migration wizard.

  4. In the Add details step, configure the following settings, then click Next.

    • Name: Enter a unique name for the migration.

      On completion of the Create Migration wizard, the name you enter here is displayed in the list of migrations on the Migrations page.

    • Compartment: Select the compartment in which the Database Migration service is hosted.

    • Choose the source database connection type.

      Direct connection to source database: Leave this option selected if the source database is directly accessible from the cloud. You can do online and offline migrations with this option.

      No direct connection to source database: Lets you select a Database Migration agent to use as a bridge to the source database.

      You must install and register an agent before configuring a migration with no direct connection to the source database. The link to download the agent is located on the Migrations page above the list.

      See Managing Agents for details.

      Note:

      This option only allows offline migrations. With no direct connection to the source database you won't be able to configure GoldenGate replication.
    • If you selected No direct connection to source database, select a migration agent from the drop-down list.

    • Vault in Compartment: Select the compartment containing the security vault.

      Database Migration uses the OCI Vault to store user secrets such as passwords, wallets, and keys, and encrypts them with the user-supplied encryption key.

      If the vault is in another compartment, you can use Change Compartment to select it.

    • Encryption key in Compartment: Select the encryption key that you configured in the vault.

      Only AES algorithm type keys are supported.

  5. In the Select databases step, enter the following information, then click Next.

    Enter the following information in the Source database box.

    • Database connection in Compartment: Select the source database connection entry.

      If the source database is a PDB, make sure you selected the PDB database connection in the drop-down, not the CDB connection.

      Do not choose an Autonomous Database connection, as Autonomous is not supported as a source database.

    • Database is pluggable database (PDB): If the source database is a PDB, check this box so you can also enter the CDB details.

      Container database connection in Compartment: If the source database is a PDB, select the connection for its CDB here. The CDB connection is not required if you are doing an offline migration.

    Enter the following information in the Target database box.

    • Database connection in Compartment: Select the target database connection.

  6. In the Migration options step, select one of the following transfer mediums, based on your migration requirements:

    • Select an Initial load option:

      Data Pump via database link: Enable this option to use a direct SQL*Net connection between the source and target databases. Note that using Data Pump with a database link to Autonomous Database targets requires that the source database be set up with SSL encryption.

      Note:

      If your source database is Oracle Database Standard Edition 2, select the Data Pump via database link option as the transfer medium. Encryption of the exported Data Pump dumps is not available with the object storage or file storage transfer mediums.

      Data Pump via object storage: This option lets Data Pump temporarily store the exported database in an Object Storage bucket. If this option is enabled, also configure the following settings.

      • Amazon S3 bucket: Enter the details for the Amazon S3 bucket. This option is only shown if the source database connection is of type Amazon RDS.

        The bucket Name must be between 3 and 63 characters, and can consist only of lower case letters, numbers, dots (.), and hyphens (-). It must begin and end with a letter or number.

        The Region must match the region of the RDS Oracle database, for example, us-east-1.

        For more information about the Amazon RDS source database use case, see Migrating Databases from Amazon Web Services RDS to Oracle Autonomous Database.
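The bucket-name rules above can be checked locally before creating the bucket. The following is a minimal sketch; the helper name and sample names are illustrative and not part of the service:

```shell
# Hypothetical validator for the S3 bucket naming rules described above:
# 3-63 characters, only lowercase letters, digits, dots, and hyphens,
# beginning and ending with a letter or digit.
valid_bucket_name() {
  case "$1" in
    [a-z0-9]*[a-z0-9])
      [ "${#1}" -ge 3 ] && [ "${#1}" -le 63 ] &&
        printf '%s' "$1" | grep -Eq '^[a-z0-9.-]+$'
      ;;
    *) return 1 ;;
  esac
}

valid_bucket_name "my-migration.dumps" && echo "ok"       # prints: ok
valid_bucket_name "Bad_Bucket" || echo "rejected"         # prints: rejected
```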

      • Export directory object: Enter the Name of, and Path to, the directory object that Data Pump export uses on the source database server file system. Database Migration handles the directory object creation for you.

        The name must comply with Oracle Database directory object rules. See CREATE DIRECTORY in Oracle Database SQL Language Reference.

      Data Pump via file storage: Enable this option to use a shared File Storage Service (FSS) mount as the transfer medium between source and target databases.
      • Network access to your file system is provided through a mount target (OCID provided in the OCI Console).
      • Your file system must have at least one export in the mount target used to mount and use the file system.
      • When creating the file system using the OCI Console, the workflow also creates a mount target and an associated export.
      • Exports control how NFS clients access file systems when they connect to a mount target.

      Note:

      Following are the prerequisites to use this option:
      • You must Create a File System and specify the mount target and the export path.
      • You must have the shared file system mounted on both the source and target database hosts.
      • Supported source database: This transfer medium is supported for all supported on-premises database cases, excluding Amazon RDS.
      • Supported target database: This transfer medium is supported for both non-ADB and ADB target databases (all Oracle Cloud Infrastructure co-managed and Exadata).
      • To perform the Data Pump export, you must have read and write access to the shared mount on the source host.
      • On the target database host, you must have at least read access to the shared mount to perform the Data Pump import.
      • See Mounting File Systems and Creating a Mount Target for more information.

      Enter the following information in the Source database box:

      • Export directory object name: Name of the export directory object
      • Export directory object path: The shared file system mount path on the source database with read and write access.
      • Source Database file system SSL Wallet Path

        Note:

        This field is displayed only when SSH details were not provided in the source database connection while performing a migration to an Autonomous Database.

      If you enable this option and the target database is an Autonomous Database, enter the following information in the Target database box:

      • Shared storage mount target in compartment: The file system mount target to be used in the target database.

        Note:

        You can use mount targets only when the target database is an Autonomous Database. For any other database, use a shared file system-based mount point.
      • Import directory object name: The Data Pump import directory object name used to mount the shared file system.

      If you enable this option and the target database is not an Autonomous Database, enter the following information in the Target database box:

      • Import directory object name: The Data Pump import directory object name used to mount the shared file system.
      • Import directory object path: The shared file system mount path on the target database host.
    • Object storage bucket in Compartment: Select the object storage bucket. This bucket is used to store Cloud Premigration Advisor Tool reports, Database Migration and Data Pump logs, and Data Pump dump files.

      If the bucket is in a different compartment, click Change Compartment to look in another compartment.

  7. If the source or the target database is non-ADB, then the following fields are shown when the initial Data Pump load is performed via object storage or file storage:

    Enter the following information in the Source database box:

    • Export directory object name:
    • Export directory object path:
    • Source Database file system SSL Wallet Path

      Note:

      This field is displayed only when the SSH details are not provided during source database connection.

    Enter the following information in the Target database box:

    • Import directory object name
    • Import directory object path
    • Target Database file system SSL Wallet Path

      Note:

      This field is displayed only when the SSH details are not provided during target database connection.
    If your source or target is non-ADB and you do not provide SSH details during the database connection, you must perform the following steps on the source and target database hosts to achieve HTTPS connectivity:
    • Create SSL Wallet with Certificates
    • Set up Network ACL

    You can either download a pre-created wallet or manually create a wallet.

    To download a wallet:
    1. Download the wallet file.
    2. Unzip the certificate files to a directory on the file system of your database host.
    3. Enter this location in SSL Wallet Path when creating the migration.
    To create a wallet:
    1. Download the necessary certificates.
    2. Extract the files:
      $ tar xvf dbc_certs.tar
    3. Create the wallet directory and change into it by running the following commands:
      $ mkdir -p <wallet path>
      $ cd <wallet path>
      For example:
      $ mkdir -p /u01/app/oracle/myserverwallet
    4. Create an auto-login wallet by running the following command:
      $ orapki wallet create -wallet . -pwd <wallet password> -auto_login
    5. Import the SSL certificates into the new auto-login wallet:
      $ for i in <location of cert files>/*cer
        do
          orapki wallet add -wallet . -trusted_cert -cert "$i" -pwd <wallet password>
        done

    For more information, see Create SSL Wallet with Certificates.

    The user performing the export or import requires the necessary network ACL to be granted to access the network from the source and target database host.

    Run the following commands as SYS if the export or import user is SYSTEM. If your database is multitenant, perform the following actions in CDB$ROOT. Restrict the host as required.

    Security consideration: Do not allow complete network access from the database. Restrict host access to the required OCI Object Storage region, for example, https://objectstorage.us-ashburn-1.oraclecloud.com. The ACL can also be time-restricted with the start_date and end_date arguments of DBMS_NETWORK_ACL_ADMIN.CREATE_ACL. For example:
    @$ORACLE_HOME/rdbms/admin/sqlsessstart.sql
    define clouduser=<user performing export at src or import at target, e.g., SYSTEM>
    define sslwalletdir=<OCI wallet path, e.g., /opt/oracle/dcs/commonstore/import_dmp/nossh_wallet>
    begin
      dbms_network_acl_admin.append_host_ace(
        host       => '*',
        lower_port => 443,
        upper_port => 443,
        ace        => xs$ace_type(
          privilege_list => xs$name_list('http', 'http_proxy'),
          principal_name => upper('&clouduser'),
          principal_type => xs_acl.ptype_db));
      dbms_network_acl_admin.append_wallet_ace(
        wallet_path => 'file:&sslwalletdir',
        ace         => xs$ace_type(
          privilege_list => xs$name_list('use_client_certificates', 'use_passwords'),
          principal_name => upper('&clouduser'),
          principal_type => xs_acl.ptype_db));
    end;
    /
    @$ORACLE_HOME/rdbms/admin/sqlsessend.sql
    

    Once the connect privilege is granted, connect as the relevant user, such as SYSTEM, and verify that the privilege is granted using the following query:

    COLUMN host FORMAT A30
    SELECT host, lower_port, upper_port, privilege, status FROM user_network_acl_privileges;
    

    For more information, see How To Set Network ACLs.

  8. If you want to create an online migration, check the Use online replication option to replicate all data and metadata transactions committed on the source after the initial load has begun. For additional optional configuration, see the Replication tab under Show advanced options. Optionally, you can set additional properties that can affect the performance of your online migration.

    Note:

    Oracle recommends using the default Use online replication option to perform an online replication.

    Note:

    Skip this step for offline (Data Pump only) migrations.
  9. Optionally, select Show Advanced Options to configure advanced Data Pump, validation, and Oracle GoldenGate settings.

    For details about these settings see Selecting Objects for Migration, Configuring Optional Initial Load Advanced Options, Configuring Validation Options, and Configuring Optional Replication Advanced Options.

  10. Click Create.

    The migration is created, and a new Migration Details page opens showing the information, metrics, and operations for the migration.

    The status of the creation operation is shown under the DM icon. When the status is Active, you can run migration jobs with the migration.

Selecting Objects for Migration

As part of migration creation, you can specify objects to include or exclude. Alternatively, you can include or exclude objects after a migration has been created, using the Selected Objects menu option.

When creating a migration, specify rules for selecting objects in the Advanced Settings on the Selected Objects tab.

Select the Use advanced editor toggle to add the objects you want to include or exclude in bulk as follows:


schema_name1,object_name1,TABLE,EXCLUDE
schema_name2,object_name2,TABLE,EXCLUDE
schema_name3,object_name3,TABLE,EXCLUDE

Add all the objects to include or exclude by listing the Object Owner, Object Name, Object Type, and Action (Include or Exclude) in the comma-separated format shown above.

To exclude a table from replication, enter information in the following comma separated format:

schema_name1,object_name1,TABLE,EXCLUDE,EXCLUDEFROMREPLICATION

Note:

In the advanced editor:
  • Use a comma separator character (,) to separate each item for every inclusion/exclusion definition.
  • Use the escape character (\) if your schema or object name has a comma (,) character as part of its name.
  • You can use multibyte character (Unicode) names for schema or object names.
  • The maximum input size is 500 KB.
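A bulk list in the advanced-editor format can be prepared and sanity-checked outside the console before pasting it in. The file name and the schema and object names below are hypothetical:

```shell
# Assemble an exclude list in the owner,name,type,action format used by
# the advanced editor (a fifth field, EXCLUDEFROMREPLICATION, is optional).
cat > objects.csv <<'EOF'
HR,EMP_ARCHIVE,TABLE,EXCLUDE
HR,DEPT_ARCHIVE,TABLE,EXCLUDE
SALES,OLD_ORDERS,TABLE,EXCLUDE,EXCLUDEFROMREPLICATION
EOF

# Sanity check: every rule must have at least the four required fields.
awk -F',' 'NF < 4 { bad = 1 } END { exit bad }' objects.csv && echo "format ok"
```

Note that this rough pre-flight check does not account for escaped commas (\,) in names.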

Alternatively, you can choose either Include or Exclude from the Action list to specify if a rule should include or exclude the specified database objects in the migration. You can either include or exclude objects in a migration, but you cannot do both.

If no rule is defined, all schemas and objects of the source database will be migrated, with exceptions explained in Objects and Schemas Excluded by Default below.

If you specify Include rules, the migration will only move the specified objects and their dependent objects; all other objects are automatically excluded.

When specifying Exclude rules, the migration will exclude the specified objects and their dependent objects; all other objects are included in the migration.

To create a rule, enter values for each of the following fields:

  • Object Owner specifies the owner of the selected database objects. When using Include rules, all rules must be for the same owner, and wild characters are not allowed.

  • Object Name specifies the name of selected database objects

  • Object Type specifies the type of selected database objects. You can select ALL to select objects of all types.

  • Replication only: Select this toggle when you want to exclude the tables from replication only. This option is enabled when the Action is Exclude and the Object Type is TABLE. It ensures that tables with constructs unsupported by Oracle GoldenGate, such as ROWID columns, are not replicated during online migration.

You can filter Object Owner and Object Name fields using any valid pattern in Java class Pattern. For example, you can enter .* in the Object Name field to select objects of any name.
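You can try a pattern against candidate names before adding a rule. grep -E only approximates java.util.regex.Pattern semantics, but it is close enough for a simple prefix pattern like this hypothetical one:

```shell
# Test the pattern PROD.* against a few illustrative table names;
# only names beginning with PROD should match. The ^ anchor is added
# because the service matches the full object name.
printf '%s\n' PROD_ORDERS PROD_ITEMS STAGING | grep -E '^PROD.*'
# prints:
# PROD_ORDERS
# PROD_ITEMS
```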

The objects included in a migration are also influenced by the Job Mode of the initial load, as explained in Configuring Optional Initial Load Advanced Options.

Please note the following restrictions:

  • When excluding an object in a specified schema, if an object of the same name exists in a different schema that is also part of the migration, the objects are not excluded (that is, the rule is ignored). To achieve the exclusion, migrate the schemas in separate migrations.

  • When creating Include rules in Full job mode, only schema-level rules (Object Name is .* and Object Type is ALL) are allowed.

  • If an Include rule has .* in Object Name, no other rule for the same Object Type is allowed. If the rule has ALL as Object Type, no other rule for any type is allowed.

  • The Object type ALL is only allowed for schema-level rules (Object Name is .*).

  • If you define a rule with an Object owner pattern other than .* and the Object Name is .* then the Object type TABLE is not allowed.

  • Object-level rules (Object Name is any pattern other than .*) can only be used for the following object types: DIRECTORY, FUNCTION, JOB, MATERIALIZED_VIEW, PACKAGE, PROCEDURE, TRIGGER, SEQUENCE, TABLE. All other object types must be included or excluded using the .* pattern in Object Name; in addition, for Exclude rules, the Object Owner must be .*.

Examples

Example 1: Include all objects of schema MySchema

Action = Include

Object Owner   Object Name   Object Type
MySchema       .*            ALL

Example 2: Include all tables starting with PROD and procedure MYPROC of schema MySchema, including all dependent objects.

Action = Include

Object Owner   Object Name   Object Type
MySchema       PROD.*        TABLE
MySchema       MYPROC        PROCEDURE

Example 3: Exclude schemas starting with Experimental, the table MySchema.OldTable (also excluding all dependent objects) and all objects of type DB_LINK.

Note that MySchema.OldTable will not be excluded if a table called OldTable is present in a different schema that is also migrated.

Action = Exclude

Object Owner     Object Name   Object Type
Experimental.*   .*            ALL
MySchema         OldTable      TABLE
.*               .*            DB_LINK

Objects and Schemas Excluded by Default

The following object types are always excluded:

  • GoldenGate administrators: identified in DBA_GOLDENGATE_PRIVILEGES, including ggadmin and c##ggadmin users

  • If target is Autonomous Data Warehouse Shared Infrastructure: CLUSTER, DB_LINK, INDEXTYPE, STATISTICS

  • If target is Autonomous Data Warehouse Dedicated Infrastructure, Autonomous Transaction Processing Shared or Dedicated Infrastructure: CLUSTER, DB_LINK, STATISTICS

  • All other targets: STATISTICS

The following schemas are excluded by default:

  • Schema is marked as ORACLE_MAINTAINED in SYS.DBA_USERS on the source or target database

  • Schema is marked as excluded from export in SYS.KU_NOEXP_VIEW on the source database

  • Schemas GGADMIN and C##GGADMIN

Configuring Optional Initial Load Advanced Options

Oracle Cloud Infrastructure Database Migration automatically sets optimal defaults for Oracle Data Pump parameters to achieve better performance and ensure security of data.

To further tune performance, change the export modes, or rename database objects, there are several Data Pump settings that you can configure in the Migration resource Advanced Settings, Initial Load tab.

  • Source data transfer mechanism: Type of dump transfer to use during Data Pump Export. The options are CURL or OCI_CLI. The default is CURL.
  • Target data transfer mechanism: Type of dump transfer to use during Data Pump Import. The options are CURL or OCI_CLI. The default is OCI_CLI.
  • Job mode:

    • Full performs a full database export.
    • Schema (default) lets you specify a set of schemas to export.

    Specify schema objects for inclusion or exclusion in the Advanced Settings, Selected Objects tab. See Selecting Objects for Migration for details.

    See Oracle Data Pump Export Modes in Oracle Database Utilities guide for more information about the job modes.

  • Table exists action sets the Data Pump TABLE_EXISTS_ACTION parameter, which specifies the action to be performed when data is loaded into a preexisting table.

    • Skip (default) makes no changes to the preexisting table.

    • Truncate removes rows from a preexisting table before inserting rows from the import. Note that if Truncate is specified on tables referenced by foreign key constraints, the truncate operation is changed to Replace.

    • Replace replaces preexisting tables with new definitions. Before creating the new table, the old table is dropped.

    • Append adds new rows to the existing rows in the table.

  • Cluster is enabled by default. When enabled, Data Pump workers are distributed among the instances (nodes) in a cluster (Oracle RAC) architecture.

    If this setting is not checked, all Data Pump workers are started on either the current instance or on an instance usable by the job.

  • Export parallelism degree sets the Data Pump export SET_PARALLEL degree parameter. This setting determines the maximum number of worker processes that can be used for the migration job. You use this parameter to adjust the amount of resources used for a job.

    By default, Database Migration sets source database export parallelism to the sum, across all nodes, of 2 x the number of physical CPUs per node, capped at a maximum of 32.

    See SET_PARALLEL Procedure in Oracle Database PL/SQL Packages and Types Reference for more details.
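The documented default can be sketched as a small calculation; the node CPU counts below are hypothetical:

```shell
# Default export parallelism: sum of (2 x physical CPUs) per node,
# capped at 32. Assume a two-node RAC with 8 physical CPUs each.
total=0
for cpus in 8 8; do
  total=$((total + 2 * cpus))
done
if [ "$total" -gt 32 ]; then total=32; fi
echo "export parallelism: $total"    # prints: export parallelism: 32
```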

  • Import parallelism degree, similar to Export Parallelism Degree, sets the Data Pump import SET_PARALLEL degree parameter.

    By default, Database Migration sets import parallelism for Autonomous Database to the number of OCPUs.

  • Auto-create tablespaces: For ADB-Dedicated (ADB-D) and co-managed/non-ADB database targets, automatic tablespace creation is enabled by default. Database Migration validates whether automatic tablespace creation is supported on the specified target database. Oracle Autonomous Database Serverless targets are not supported.

    Database Migration automatically discovers the source database tablespaces associated with user schemas that are being migrated, and automatically creates them in the target database before the Data Pump import phase. Database Migration generates the DDL required to pre-create the tablespaces, creates the tablespaces on the target, and runs the generated DDL.

    With automatic tablespace creation enabled, Database Migration skips automatic creation for any tablespaces that are specified in the Metadata remaps section, or that already exist in the target database.

    Use big file: Autonomous Database systems support only BIGFILE tablespaces, so Database Migration enforces BIGFILE tablespace by default on Autonomous Database targets, and reports an error if SMALLFILE tablespaces are found. You can explicitly remap any SMALLFILE tablespaces instead.

    Extend size: enables AUTOEXTEND on tablespaces to avoid extend errors, with a default extend size of 500 MB.

  • Remap target: When migrating to an Oracle Autonomous Database Serverless target, all tablespaces are automatically mapped to DATA. You can override this by explicitly mapping tablespaces to a different target in Metadata remaps.

  • Block size of target database: Optionally, when creating or updating a migration for ADB-Dedicated (ADB-D) and co-managed/non-ADB database targets, you can select the database block size for the tablespaces, because automatic tablespace creation is enabled by default.

    Currently, there are two possible values to select the target database block size: 8K or 16K.

  • Metadata remaps lets you rename database objects during a migration job. Select the object to rename under Type, then enter the Old Value and New Value.

    Supported objects are Datafile, Schema, Table, and Tablespace.

    When migrating to an Oracle Autonomous Database Serverless target, all tablespaces are automatically mapped to DATA. You can override this by explicitly mapping tablespaces to a different target.

    Quota grants for individual users to tablespaces are not remapped, so you must manually create these grants for tablespace DATA.

    To rename multiple objects, click + Another Metadata Remap.

Configuring Validation Options

Oracle Cloud Infrastructure Database Migration is integrated with the Oracle Cloud Pre-Migration Advisor (CPAT) tool. CPAT analyzes the source database during validation and advises you about database features and constructs that are problematic.

CPAT provides the following benefits:

  • Warns you about any features used by your database that aren't supported in the target environment

  • Makes suggestions for remedial changes and/or parameters to use for the Data Pump export and import operations

To configure CPAT settings:

When you are creating a migration you can configure CPAT settings in the Migration resource Advanced Settings, Validation tab.

Run CPAT during validation: Enables CPAT to run during a migration validation job, in the Validate Pre-migration Advisor phase.

Continue CPAT validation on error: By default, a validation job stops running when CPAT finds an issue. When this option is checked, CPAT validation continues to completion even if CPAT finds an error.

This setting is useful if you want to proceed with a migration whose error conditions have already been reviewed and whose problematic objects have been excluded. Because CPAT does not review the exclusion list, it still reports blocking issues on objects even when they are excluded.

These settings can be changed after the migration is created. See Editing a Migration.

See Cloud Premigration Advisor Tool Support in the Zero Downtime Migration documentation for more information about CPAT.

Configuring Optional Replication Advanced Options

In Oracle Cloud Infrastructure Database Migration, for online migrations using Oracle GoldenGate, you can configure some Oracle GoldenGate performance settings in the Migration resource Advanced Settings, Replication tab.

  • Acceptable lag (in seconds) specifies the amount of lag, in seconds, that triggers Oracle GoldenGate end-to-end latency monitoring. Lag is the time between a change being committed on the source database and that change being extracted or applied. Monitoring continues until the lag time is lower than the specified value. The minimum value is 2 seconds, the maximum is 30 seconds, and the default is 30 seconds.
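The 2-30 second bounds amount to a simple clamp; the helper name below is illustrative:

```shell
# Clamp a requested lag value into the documented 2-30 second range.
clamp_lag() {
  lag=$1
  if [ "$lag" -lt 2 ]; then lag=2; fi
  if [ "$lag" -gt 30 ]; then lag=30; fi
  echo "$lag"
}

clamp_lag 45    # prints: 30
clamp_lag 1     # prints: 2
clamp_lag 10    # prints: 10
```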

  • Extraction settings

    • Performance profile Sets the Oracle GoldenGate PERFORMANCEPROFILE parameter. Valid for GoldenGate Extract in Integrated Capture mode.

      • HIGH (default) for high volume use cases
      • MEDIUM
      • LOW RES to minimize resource usage for memory or resource constrained deployment

      This setting helps achieve better performance by tuning the group of Oracle GoldenGate parameters that affect performance. Once the performance profile is set up, this option automatically configures the relevant parameters to achieve the desired throughput and latency.

    • Transaction maximum duration specifies the length of time, in seconds, that a transaction can be open before Extract generates a warning message that the transaction is long-running. You can remove the value from this field if you don't want these error messages generated.

  • Replication settings

    • Performance profile simplifies Replicat performance tuning.
      • Use HIGH when there is no concurrent workload on the target. When HIGH is set, Replicat Mappers is set to 5 and Appliers to 2 * PDB CPU_COUNT.
      • Use LOW when there is a concurrent workload on the target. When LOW is set, Replicat Mappers is set to 4 and Appliers to PDB CPU_COUNT / 2 on the target system.
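The sizing rules above work out as follows for a hypothetical PDB with a CPU_COUNT of 16:

```shell
# Replicat sizing per the HIGH and LOW profiles described above.
cpu_count=16    # hypothetical PDB CPU_COUNT
echo "HIGH: Mappers=5 Appliers=$((2 * cpu_count))"   # prints: HIGH: Mappers=5 Appliers=32
echo "LOW:  Mappers=4 Appliers=$((cpu_count / 2))"   # prints: LOW:  Mappers=4 Appliers=8
```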
  • GoldenGate instance (Optional): Use Marketplace GoldenGate instance: Select this option if you want to perform replication using your own Marketplace GoldenGate instance provisioned in your tenancy.

    Note:

    Oracle recommends using the Use online replication default option. Select the Use Marketplace GoldenGate instance option only when you want to use your own Marketplace GoldenGate compute instance.
    Enter the following details:
    • GoldenGate instance OCID: The OCID of the compute instance that is hosting the Marketplace GoldenGate deployment.
    • GoldenGate hub URL: Enter a URL containing only the public host name or IP address of your Marketplace GoldenGate instance.
    • GoldenGate administrator username: Enter the user name for connecting to your Marketplace GoldenGate instance.
    • GoldenGate administrator password: Enter the password for connecting to your Marketplace GoldenGate instance.

    Note:

    You must use Marketplace as the default name for your GoldenGate deployment.

Viewing Migration Details

On the Migrations page of Oracle Cloud Infrastructure Database Migration service console, you can view a list of your migrations in a table, which includes the following information:

  • Name

  • State of the migration resource, which can be any one of the following:

    • Creating: The new migration resource is being created in OCI.

    • Updating: Changes to the migration resource are being registered in OCI.

    • Active: The migration resource has finished being created or updated and is ready for validation. A migration resource in this state can be validated but cannot run a migration job.

    • In Progress: A validation job or migration job is currently running on this migration resource.

    • Accepted: The migration resource has been validated and can run another validation job or a migration job.

    • Succeeded: A migration job using this resource has completed successfully. Once a migration resource has reached this state, jobs can no longer be run with it.

    • Canceled: A migration job using this resource was canceled. You can run a new job on a migration resource in this state.

    • Waiting: A migration job using this resource is waiting for user input. This state appears when a migration job is paused.

    • Needs Attention: A validation job or migration job using this resource has failed and is blocked. Note that you must cancel a job before you can rerun it.

    • Inactive: A fallback state for unexpected errors.

    • Deleting: This state appears when you delete a migration resource. The resource remains in this state until deletion is completed, at which point the resource is no longer listed in the console.

    • Failed: There are problems with the migration resource. This can happen during creation or update, or from issues other than job failures. You can review the migration resource work request to investigate the issue.

  • Last Migration: The time stamp of the last job run with the migration

  • Created: The time stamp of when the migration was created
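The lifecycle states above govern what you can do with a migration resource at any moment. The following sketch encodes those rules from the list above; the helper function and the underscore-separated state identifiers are illustrative assumptions, not part of the service API:

```python
# Hypothetical helper: maps each lifecycle state described above to
# whether a new job (validation or migration) can be started on the
# resource. Derived directly from the state descriptions in this section.
CAN_START_JOB = {
    "CREATING": False,         # resource still being created
    "UPDATING": False,         # changes still being registered
    "ACTIVE": True,            # ready for validation (validation jobs only)
    "IN_PROGRESS": False,      # a job is already running
    "ACCEPTED": True,          # validated; can run validation or migration jobs
    "SUCCEEDED": False,        # terminal: no further jobs can be run
    "CANCELED": True,          # a new job can be run in this state
    "WAITING": False,          # paused, waiting for user input
    "NEEDS_ATTENTION": False,  # the failed job must be canceled first
    "INACTIVE": False,         # fallback state for unexpected errors
    "DELETING": False,         # deletion in progress
    "FAILED": False,           # inspect the work request instead
}

def can_start_job(state: str) -> bool:
    """Return True if a new job can be started in the given state."""
    key = state.strip().upper().replace(" ", "_")
    return CAN_START_JOB.get(key, False)
```

For example, a resource shown as Needs Attention requires you to cancel the blocked job before `can_start_job` would report True again for its new state.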

Select a migration from the Migrations page to view its details.

On the Migration Details page you can view the migration information, including:

  • OCID: The resource's unique Oracle Cloud ID
  • Compartment: The compartment where the migration resource resides
  • Created: The date and time when the migration was created
  • Encryption Vault: The link takes you to the Vault Details page
  • Encryption Key: The link takes you to the Key Details page
  • Source Database: The link takes you to the Database Details page. You can select Test connection to test the connectivity of the database connection.
  • Target Database: The link takes you to the Database Details page. You can select Test connection to test the connectivity of the database connection.
  • Migration Type: Online or Offline
  • Replication: Enabled or Disabled
  • Validation: CPAT Enabled or CPAT Disabled

Below the migration details, you can view information about resources associated with the migration, such as Jobs (see Managing Migration Jobs), Excluded Objects (see below), Work Requests (see below), and Metrics (see Database Migration Metrics).

Excluded Objects

The Excluded objects list displays objects that are excluded from migration.

Oracle Maintained: objects owned by Oracle-maintained users (ORACLE_MAINTAINED = Y) are excluded from migration.

Unsupported: objects not supported for migration by Oracle GoldenGate, such as those owned by the ggadmin and c##ggadmin users, are excluded from migration.

User Excluded: objects explicitly excluded by rules configured in your migration's Selected Objects list.

See Selecting Objects for Migration for details about objects excluded by default and explicitly selecting objects for migration.
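The three categories above can be summarized as a simple classification. The following function is a hypothetical sketch based only on this section's descriptions (the ORACLE_MAINTAINED flag and the ggadmin/c##ggadmin owners come from the text; the function itself is not part of the service):

```python
from typing import Optional

# GoldenGate administrator schemas named above as unsupported for migration.
UNSUPPORTED_OWNERS = {"GGADMIN", "C##GGADMIN"}

def exclusion_category(owner: str, oracle_maintained: str,
                       user_excluded: bool) -> Optional[str]:
    """Classify why an object appears in the Excluded objects list,
    or return None if it is not excluded by these checks."""
    if oracle_maintained == "Y":        # ORACLE_MAINTAINED = Y on the owner
        return "Oracle Maintained"
    if owner.upper() in UNSUPPORTED_OWNERS:
        return "Unsupported"
    if user_excluded:                   # matched an explicit Exclude rule
        return "User Excluded"
    return None
```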

Work Requests

On the Migration Details page, under the migration information box, you can find the Work Requests list, which shows any work requests sent to OCI to facilitate the creation, update, validation, cloning, or deletion of this resource. Click a work request to open the Work Request Details page for more information.

Editing a Migration

You can modify some of the settings in a migration resource configuration in Oracle Cloud Infrastructure Database Migration.

In the list of migrations on the Migrations page, select the Name of the migration you want to edit.

Select Edit next to any of the following modifiable settings:

Source Database

In the Edit Source Database dialog, you can choose a different source database.

The valid source database selections are the same as when you create a new migration resource: non-Autonomous non-CDB databases or PDB/CDB combinations are supported as the source.

Target Database

In the Edit Target Database dialog, you can choose a different target database.

The valid target database selections are the same as when you create a new migration resource: Autonomous Databases are supported as the target.

Migration Type

The Migration Type (Offline or Online) cannot be changed, but you can change the settings that are valid for the migration type originally configured.

In the Edit Initial Load Settings dialog, you can change the following settings.

  • Initial Load: You can change the data transfer method to use an Object Storage bucket or a database link.

  • Object Storage Bucket: When your initial load data transfer method is Object Storage, you can change the bucket in which to store the Data Pump dumps.

  • Export Directory Object: When your initial load data transfer method is Object Storage, you can change the export directory object by specifying a new name and path.

  • Advanced Options: You can change the initial load advanced options. See Configuring Optional Initial Load Advanced Options for information about these settings.

Replication

In the Edit replication settings dialog, you can enable or disable online replication, and you can change the following GoldenGate settings.

  • Use online replication: Select this option if you want to enable the replication of all data and metadata transactions from the source to the target database committed after the initial load.
  • Acceptable lag (in seconds)
  • Extraction settings
  • Replication settings
  • GoldenGate instance:
    • Use Marketplace GoldenGate instance (optional): If you select this option, enter the following details:
      • GoldenGate instance OCID: The OCID of the compute instance that hosts the Marketplace GoldenGate deployment.
      • GoldenGate hub URL
      • GoldenGate administrator username
      • GoldenGate administrator password

See Configuring Optional Replication Advanced Options for information about these settings.

Encryption Key

In the Edit Encryption Key dialog, you can choose a different Vault, Encryption Key, and the compartment in which to create a Secret.

Validation

In the Edit Validation Settings dialog, you can enable or disable the use of the Cloud Premigration Advisor Tool (CPAT) during migration validation, and you can change whether CPAT validation continues on error.

Selected Objects

You can add or remove Include or Exclude database object rules in the Resources section, under Selected Objects, which is located below the Migration Information box.

To remove a rule, select the rule's checkbox and click Remove, or select the Remove action from the actions list.

To add or edit rules, click Add Objects. See Selecting Objects for Migration for information about configuring selected objects.
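Conceptually, Include and Exclude rules work together: an object migrates if it matches an Include rule (or no Include rules exist) and matches no Exclude rule. The sketch below is a hypothetical illustration of that interaction; the rule shape (schema pattern, object-name pattern) and the shell-style wildcard semantics are assumptions for demonstration, not the service's exact matching rules:

```python
import fnmatch

def is_selected(schema: str, name: str, includes, excludes) -> bool:
    """Hypothetical model of Include/Exclude rule evaluation.
    Each rule is a (schema_pattern, object_pattern) pair; '*' matches any
    sequence of characters, as in shell globbing."""
    def matches(rules):
        return any(fnmatch.fnmatch(schema, s) and fnmatch.fnmatch(name, n)
                   for s, n in rules)
    # With Include rules present, the object must match at least one.
    if includes and not matches(includes):
        return False
    # Exclude rules always remove matching objects from the selection.
    return not matches(excludes)
```

For instance, an Include rule ("HR", "*") with an Exclude rule ("HR", "TEMP*") would select HR.EMPLOYEES but not HR.TEMP1.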

Cloning a Migration

To clone a migration:
  1. In the list of migrations on the Migrations page, select Clone from the Actions (three dots) menu of the migration you want to clone.
    You can also click Clone on the Migration Details page.
  2. In the Clone Migration dialog, enter a unique Name and click Next. Update any source or target database details for the clone, then click Clone on the final page of the dialog.

Moving a Migration

You can move a migration resource from one compartment to another.

To move a migration:

  1. In the list of migrations on the Migrations page, select Move Resource from the Actions (three dots) menu for the migration you want to move.
    You can also select Move Resource on the Migration Details page.
  2. In the Move Resource to a Different Compartment dialog, select the compartment to move the migration to from the dropdown.
  3. Click Move Resource.

After you move the migration to the new compartment, the policies that govern the new compartment apply immediately and may affect access to the migration through the Console. For more information, see Managing Compartments.

Deleting a Migration

Before you delete a migration, carefully review any resources that reference it; otherwise, you could encounter errors.

To delete a migration:
  1. In the list of migrations on the Migrations page, select Delete from the Actions (three dots) menu of the migration you want to delete.
    You can also click Delete from the More Actions menu on the Migration Details page.
  2. In the Delete dialog, click Delete.

Managing Tags for Migrations

Tags help you locate resources within your tenancy. In Oracle Cloud Infrastructure Database Migration, you can add and view a migration's tags from the Migrations page and from the Migration Details page.

On the Migrations page, from the migration's Actions (three dots) menu, select Add Tags or View Tags.

On the Migration Details page, you can select Add Tags in the More Actions menu above the Migration Information box, or click the Tags tab to view and edit tags.

Learn more about tagging in Managing Tags and Tag Namespaces.

Using the Migration API

You can use the following operations to manage migration resources:

For information about using the API and signing requests, see REST APIs and Security Credentials. For more information about SDKs, see Software Development Kits and Command Line Interface.