Create pipeline resources

Learn to create resources necessary to use Pipelines.

Create connections

OCI GoldenGate Pipelines currently supports only Oracle Autonomous Databases with dedicated endpoints. For best results, use Autonomous Transaction Processing as the source database and Autonomous Data Warehouse as the target database.

Before you create a pipeline, ensure that you first create connections to your source and target databases. See Connect to Oracle Autonomous Databases.

Create pipelines

Before you begin, ensure that you:
  • Review and configure row uniqueness across database tables to avoid primary key issues with ZeroETL mirroring. A primary key uniquely identifies a record or row in a database table. See Ensuring row uniqueness in source and target tables.
  • As a user with admin privileges for the target database, grant the following privileges to the GGADMIN user for ZeroETL mirroring:
    GRANT DWROLE TO GGADMIN;
    GRANT DATAPUMP_CLOUD_EXP TO GGADMIN;
    GRANT DATAPUMP_CLOUD_IMP TO GGADMIN;
    GRANT EXECUTE ON DBMS_CLOUD_ADMIN TO GGADMIN;
    GRANT CREATE DATABASE LINK TO GGADMIN;
  • Check the DBA_GOLDENGATE_SUPPORT_MODE view on the source database, which displays the level of Oracle GoldenGate capture process support for the tables in the database.
  • Add the minimum required policies for GoldenGate pipelines.
  • Verify that the source and target connections you're using for the pipeline have dedicated endpoints.
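The row-uniqueness and capture-support checks above can be run from SQL. The following is a sketch using standard Oracle data dictionary views; the HR schema filter is a placeholder you should replace with your own schemas:

```sql
-- Find source tables without a primary key (candidates for
-- row-uniqueness issues during ZeroETL mirroring).
SELECT t.owner, t.table_name
FROM   all_tables t
WHERE  t.owner = 'HR'  -- placeholder schema; replace with your own
AND    NOT EXISTS (
         SELECT 1
         FROM   all_constraints c
         WHERE  c.owner = t.owner
         AND    c.table_name = t.table_name
         AND    c.constraint_type = 'P'
       );

-- Check the GoldenGate capture support level for the same schema.
SELECT owner, object_name, support_mode
FROM   dba_goldengate_support_mode
WHERE  owner = 'HR';
```

Tables reported as NONE in SUPPORT_MODE cannot be captured and should be excluded from the pipeline's mapping rules.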
To create a pipeline:
  1. On the OCI GoldenGate Overview page, in the GoldenGate menu, click Pipelines.

    Alternatively, you can click Data Fabric, and then Create pipeline.

  2. On the Pipelines page, click Create pipeline.
  3. In the Create pipeline panel, complete the fields as follows, and then click Create pipeline:
    1. For Name, enter a name for the pipeline.
    2. (Optional) For Description, enter a description to help distinguish this pipeline from others.
    3. For Compartment, select the compartment in which to create the pipeline.
    4. Select a license type.
    5. Select your source and target connections from the dropdowns.

      Note:

      Pipelines currently only support connections with dedicated endpoints.
    6. (Optional) Click Show advanced options to configure Process options:
      • Copy existing data before ongoing replication: Select this option to perform an initial load of the data. For existing tables, you can choose to:
        • Truncate: Deletes existing rows and then loads rows from the source.
        • Replace: Drops the existing table and then creates and loads it from the source.
        • Append: Leaves existing rows unchanged and loads rows from the source.
        • Skip: Leaves the table as is and moves on to the next object.
      • Replicate schema changes (DDL): When selected, choose the action to take when the process encounters a DDL error:
        • Terminate: Roll back the transaction and stop processing.
        • Discard: Log the error to the discard file and continue processing.
        • Ignore: Ignore the error and continue processing.
      • Action upon DML error: Select the action to take when the process encounters a DML error:
        • Terminate
        • Discard
        • Ignore
      • Restart after failure: Select this option to enable autorestart in the event the process stops for any reason.
      • Start pipeline using default mapping: Starts the pipeline immediately after creation with the default 1:1 mapping rules. If not selected, you can configure the rule mappings after creation and then manually start the pipeline.

Add mapping rules

To preview, add, or remove mapping rules:
  1. On the Pipeline information page, select Mapping rules.
  2. In the Mapping rules list, you can review or edit the current mapping rules and:
    • Preview: Review how your mapping rules affect the schemas and tables included in the replication.
    • Add: Identify tables and schemas to include or exclude. You can use the fully qualified object name or wildcards in place of any part of the object name. For more information, see Using Wildcards in Database Object Names.
    • Remove: Remove mapping rules.

Enable pipeline logs

  1. On the Pipelines page, select the pipeline for which you want to enable logging.
  2. On the Pipelines details page, select Logs.
  3. In the list of pipeline logs, select Enable log from the Critical Events Actions menu.
  4. In the Enable log panel:
    1. For Log name, enter a name.
    2. From the Compartment dropdown, select a compartment.
    3. For Log group, you can:
      • Select a group from the dropdown
      • Create a new group
      • Leave it blank, and a default group is automatically assigned
    4. For Log retention, select the number of months from the dropdown.
    5. Click Enable log.
Wait for the status to become Active.

Start the pipeline

To start the pipeline:
  1. From the Actions menu on the pipeline's details page, select Start.
  2. On the pipeline's details page, select Initialization.

    The Initialization steps display the current status of the pipeline. For each step, you can select View details from its Actions menu and review the corresponding messages.

  3. After the pipeline's status is Active (Running), select Runtime.

    Runtime processes display the state and latency of the Capture and Apply processes.

Known issues

ZeroETL Mirror Pipeline Apply process failure

If your pipeline Apply process fails with the following error in OCI Logging:
Error mapping from ADMIN.DBTOOLS$EXECUTION_HISTORY to <ADB ID>.ADMIN.DBTOOLS$EXECUTION_HISTORY.

Workaround: Create an Exclude rule with ADMIN.*, and then restart the pipeline.

Oracle Data Pump fails if the target database time zone file version is lower than the source database's

The pipeline option Copy existing data before ongoing replication uses Oracle Data Pump. Oracle Data Pump fails if the target database's time zone file version is lower than that of the source database.

Workaround: Upgrade the target database's time zone file version. See Manage time zone file updates on Autonomous Database.
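To check whether this issue applies before starting the pipeline, you can compare the time zone file versions on both databases using a standard Oracle dynamic performance view:

```sql
-- Run on both the source and target databases.
-- The target's VERSION must be greater than or equal to the source's.
SELECT version, filename FROM v$timezone_file;
```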