Mirror data with ZeroETL mirror pipelines

Learn to mirror data from Autonomous Transaction Processing to Autonomous Data Warehouse using ZeroETL mirror pipelines.

Before you begin

Ensure that you:
  • Review and configure row uniqueness across database tables to avoid primary key issues with ZeroETL mirroring. A primary key uniquely identifies a record or row in a database table. See Ensuring row uniqueness in source and target tables.
  • As a user with admin privileges for the target database, grant the following privileges to the GGADMIN user for ZeroETL mirroring:
    GRANT DWROLE TO GGADMIN;
    GRANT DATAPUMP_CLOUD_EXP TO GGADMIN;
    GRANT DATAPUMP_CLOUD_IMP TO GGADMIN;
    GRANT EXECUTE ON DBMS_CLOUD_ADMIN TO GGADMIN;
    GRANT CREATE DATABASE LINK TO GGADMIN;
  • Check the DBA_GOLDENGATE_SUPPORT_MODE view on the source database, which displays the level of Oracle GoldenGate capture process support for each table in the database.
  • Add the minimum required policies for GoldenGate pipelines.
  • Confirm that the source and target connections you're using for the pipeline have dedicated endpoints.
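The row-uniqueness and support-mode prerequisites above can be checked with a couple of queries. The following is a sketch only: it assumes you run it as a suitably privileged user on the source database, and MY_SCHEMA is a placeholder for your actual schema name.

```sql
-- Tables without a primary or unique key (candidates for row-uniqueness work).
SELECT t.table_name
FROM   all_tables t
WHERE  t.owner = 'MY_SCHEMA'
AND    NOT EXISTS (
         SELECT 1
         FROM   all_constraints c
         WHERE  c.owner = t.owner
         AND    c.table_name = t.table_name
         AND    c.constraint_type IN ('P', 'U'));

-- GoldenGate capture support level for each table (for example FULL, ID KEY, NONE).
SELECT owner, object_name, support_mode
FROM   dba_goldengate_support_mode
WHERE  owner = 'MY_SCHEMA';
```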

Task 1: Create connections

First, create connections to your source and target databases. OCI GoldenGate Pipelines currently supports Oracle Autonomous Database connections.

Task 2: Create the pipeline

To create a pipeline:
  1. On the OCI GoldenGate Overview page, in the GoldenGate menu, click Pipelines.

    Alternatively, you can click Data Fabric, and then Create pipeline.

  2. On the Pipelines page, click Create pipeline.
  3. In the Create pipeline panel, complete the fields as follows, and then click Create pipeline:
    1. For Name, enter a name for the pipeline.
    2. (Optional) For Description, enter a description to help distinguish this pipeline from others.
    3. For Compartment, select the compartment in which to create the pipeline.
    4. Select a license type.
    5. Select your source and target connections from the dropdowns.

      Note:

      Pipelines currently only support connections with dedicated endpoints.
    6. (Optional) Click Show advanced options to configure Process options:
      • Copy existing data before ongoing replication: Select this option to perform an initial load of the data. For existing tables, you can choose to:
        • Truncate: Deletes existing rows and then loads rows from the source.
        • Replace: Drops the existing table and then creates and loads it from the source.
        • Append: Leaves existing rows unchanged and loads rows from the source.
        • Skip: Leaves the table as is and moves on to the next object.
      • Replicate schema changes (DDL): When selected, choose the action to take when the process encounters a DDL error:
        • Terminate: Roll back the transaction and stop processing.
        • Discard: Log the error to the discard file and continue processing.
        • Ignore: Ignore the error and continue processing.
      • Action upon DML error: Select the action to take when the process encounters a DML error:
        • Terminate
        • Discard
        • Ignore
      • Restart after failure: Select this option to enable autorestart in the event the process stops for any reason.
      • Start pipeline using default mapping: Starts the pipeline immediately after creation with the default 1:1 mapping rules. If not selected, you can configure the rule mappings after creation and then manually start the pipeline.
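The Process options above may be easier to reason about through the semantics they resemble. The sketch below is illustrative only: it assumes the initial-load choices behave like Oracle Data Pump's TABLE_EXISTS_ACTION values and the error actions behave like classic GoldenGate Replicat error-handling parameters; the pipeline configures its internal mechanisms for you, and you never write these parameters yourself.

```
-- Initial-load choices for existing tables, as Data Pump TABLE_EXISTS_ACTION values:
--   Truncate -> TABLE_EXISTS_ACTION=TRUNCATE
--   Replace  -> TABLE_EXISTS_ACTION=REPLACE
--   Append   -> TABLE_EXISTS_ACTION=APPEND
--   Skip     -> TABLE_EXISTS_ACTION=SKIP

-- Error actions, as classic Replicat parameters:
DDLERROR DEFAULT ABEND        -- Terminate: roll back the transaction and stop
DDLERROR DEFAULT DISCARD      -- Discard: log to the discard file and continue
DDLERROR DEFAULT IGNORE       -- Ignore: skip the error and continue
REPERROR (DEFAULT, ABEND)     -- the same three choices apply to DML errors
```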

Task 3: Add mapping rules

Mapping rules let you change the default one-to-one mapping of source tables and schemas to the target.

To preview, add, or remove mapping rules:
  1. On the Pipeline information page, select Mapping rules.
  2. In the Mapping rules list, review or edit the current mapping rules. You can:
    • Preview: Review how your mapping rules affect the schemas and tables included in the replication.
    • Add: Identify tables and schemas to include or exclude. You can use the fully qualified object name or wildcards in place of any part of the object name. For more information, see Using Wildcards in Database Object Names.
    • Remove: Remove mapping rules.
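As a sketch of the wildcard patterns the Add option accepts (the schema and table names here are hypothetical):

```
HR.*           -- every table in the HR schema
HR.TMP_*       -- tables in HR whose names start with TMP_
SALES.ORDERS   -- a single fully qualified table
*.AUDIT_LOG    -- a table named AUDIT_LOG in any schema
```

Combine patterns with include and exclude rules: for example, include HR.* and exclude HR.TMP_* to replicate the HR schema while skipping its staging tables.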

Task 4: Enable logging

To enable logging in the pipeline:
  1. On the Pipelines page, select the pipeline you want to enable logging for.
  2. On the Pipelines details page, select Logs.
  3. In the list of logs, from the Actions menu for the log you want to enable, select Enable log.
  4. In the Enable log panel:
    1. For Log name, enter a name.
    2. From the Compartment dropdown, select a compartment.
    3. For Log group, you can:
      • Select a group from the dropdown
      • Create a new group
      • Leave it blank, and a default group is automatically assigned
    4. For Log retention, select the number of months from the dropdown.
    5. Click Enable log.
  5. Wait for the log's status to become Active.

Task 5: Start the pipeline

To start the pipeline:
  1. From the Actions menu on the pipeline's details page, select Start.
  2. On the pipeline's details page, select Initialization.

    The Initialization steps list displays the current status of each pipeline step. For each step, you can select View details from its Actions menu to review the corresponding messages.

  3. After the pipeline's status is Active (Running), select Runtime.

    Runtime processes display the state and latency of the Capture and Apply processes.