Replicate Data Between Cloud Databases in the Same Region

Learn how to set up Oracle Cloud Infrastructure GoldenGate to replicate data between two Autonomous Databases.

Overview

Oracle Cloud Infrastructure GoldenGate lets you replicate supported databases within the same region. The following steps guide you through using Oracle Data Pump to instantiate the target database and copy data from the source to the target.

This quickstart is also available as a LiveLab: View the workshop

Description of the illustration same-region.png

Before You Begin

To proceed, you must have the following:

Task 1: Set Up the Environment

  1. Create a data replication deployment
  2. Create the source Oracle Autonomous Transaction Processing (ATP) connection
  3. Create the target Autonomous Data Warehouse (ADW) connection
  4. Assign the connections to the deployment
  5. Enable supplemental logging using the Autonomous Database SQL tool:
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA
  6. Run the following query in the SQL tool to verify that support_mode=FULL for all tables in the source database:
    
    select * from DBA_GOLDENGATE_SUPPORT_MODE where owner = 'SRC_OCIGGLL';

Task 2: Create the Integrated Extract

An Integrated Extract captures ongoing changes to the source database.

  1. On the deployment details page, click Launch Console.
  2. If necessary, enter oggadmin for the user name along with the password you used when you created the deployment, and then click Sign In.
  3. Add transaction data and a checkpoint table:
    1. Open the navigation menu, and then click DB Connections.
    2. Click Connect to database SourceATP.
    3. In the navigation menu, click Trandata, and then click Add Trandata (plus icon).
    4. For Schema Name, enter SRC_OCIGGLL, and then click Submit.
    5. To verify, enter SRC_OCIGGLL in the Search field, and then click Search.
    6. Open the navigation menu, and then click DB Connections.
    7. Click Connect to database TargetADW.
    8. In the navigation menu, click Checkpoint, and then click Add Checkpoint (plus icon).
    9. For Checkpoint Table, enter "SRCMIRROR_OCIGGLL"."CHECKTABLE", and then click Submit.
  4. Add an Extract.

    Note:

    For details about the parameters that you can use to specify source tables, see Additional Extract parameter options.

    On the Extract Parameters page, append the following lines under EXTTRAIL <trail-name>:
    -- Capture DDL operations for listed schema tables
    ddl include mapped
    
    -- Add step-by-step history of ddl operations captured
    -- to the report file. Very useful when troubleshooting.
    ddloptions report 
    
    -- Write capture stats per table to the report file daily.
    report at 00:01 
    
    -- Rollover the report file weekly. Useful when IE runs
    -- without being stopped/started for long periods of time to
    -- keep the report files from becoming too large.
    reportrollover at 00:01 on Sunday 
    
    -- Report total operations captured, and operations per second
    -- every 10 minutes.
    reportcount every 10 minutes, rate 
    
    -- Table list for capture
    table SRC_OCIGGLL.*;
  5. Check for long-running transactions. Run the following script on your source database:
    select start_scn, start_time from gv$transaction where start_scn < (select max(start_scn) from dba_capture);

    If the query returns any rows, locate the transaction's SCN, and then either commit or roll back the transaction.
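As a sketch of that remediation step, the following query (an illustrative example, not part of the quickstart) joins the standard v$transaction and v$session fixed views to tie each open transaction to its session, so you can decide whether to commit or roll it back:

```sql
-- Sketch only: map long-running transactions to their sessions.
-- v$transaction.ses_addr joins to v$session.saddr.
select t.start_scn,
       t.start_time,
       s.sid,
       s.serial#,
       s.username
from   v$transaction t
       join v$session s on s.saddr = t.ses_addr
where  t.start_scn < (select max(start_scn) from dba_capture);
```

With the sid and serial# in hand, the session owner can commit the transaction, or an administrator can terminate the session to force a rollback.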

Task 3: Export Data Using Oracle Data Pump (ExpDP)

Use Oracle Data Pump (ExpDP) to export data from the source database to Oracle Object Store.

  1. Create an Oracle Object Store bucket.

    Take note of the namespace and bucket name for use with the export and import scripts.

  2. Create an Auth Token, and then copy and paste the token string into a text editor for later use.
  3. Create a credential in your source database, replacing <user-name> and <token> with your Oracle Cloud account user name and the token string you created in the previous step:
    BEGIN
      DBMS_CLOUD.CREATE_CREDENTIAL(
        credential_name => 'ADB_OBJECTSTORE', 
        username => '<user-name>',
        password => '<token>'
      );
    END;
  4. Run the following script in your source database to create the Export Data job. Make sure to replace <region>, <namespace>, and <bucket-name> in the Object Store URI accordingly. SRC_OCIGGLL.dmp is a file that is created when this script runs.
    DECLARE
    ind NUMBER;              -- Loop index
    h1 NUMBER;               -- Data Pump job handle
    percent_done NUMBER;     -- Percentage of job complete
    job_state VARCHAR2(30);  -- To keep track of job state
    le ku$_LogEntry;         -- For WIP and error messages
    js ku$_JobStatus;        -- The job status from get_status
    jd ku$_JobDesc;          -- The job description from get_status
    sts ku$_Status;          -- The status object returned by get_status
    
    BEGIN
    -- Create a (user-named) Data Pump job to do a schema export.
    h1 := DBMS_DATAPUMP.OPEN('EXPORT','SCHEMA',NULL,'SRC_OCIGGLL_EXPORT','LATEST');
    
    -- Specify a single dump file for the job (using the handle just returned)
    -- and a directory object, which must already be defined and accessible
    -- to the user running this procedure.
    DBMS_DATAPUMP.ADD_FILE(h1,'https://objectstorage.<region>.oraclecloud.com/n/<namespace>/b/<bucket-name>/o/SRC_OCIGGLL.dmp','ADB_OBJECTSTORE','100MB',DBMS_DATAPUMP.KU$_FILE_TYPE_URIDUMP_FILE,1);
    
    -- A metadata filter is used to specify the schema that will be exported.
    DBMS_DATAPUMP.METADATA_FILTER(h1,'SCHEMA_EXPR','IN (''SRC_OCIGGLL'')');
    
    -- Start the job. An exception will be generated if something is not set up properly.
    DBMS_DATAPUMP.START_JOB(h1);
    
    -- The export job should now be running. In the following loop, the job
    -- is monitored until it completes. In the meantime, progress information is displayed.
    percent_done := 0;
    job_state := 'UNDEFINED';
    while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
      dbms_datapump.get_status(h1,dbms_datapump.ku$_status_job_error + dbms_datapump.ku$_status_job_status + dbms_datapump.ku$_status_wip,-1,job_state,sts);
      js := sts.job_status;

      -- If the percentage done changed, display the new value.
      if js.percent_done != percent_done
      then
        dbms_output.put_line('*** Job percent done = ' || to_char(js.percent_done));
        percent_done := js.percent_done;
      end if;

      -- If any work-in-progress (WIP) or error messages were received for the job, display them.
      if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0)
      then
        le := sts.wip;
      else
        if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
        then
          le := sts.error;
        else
          le := null;
        end if;
      end if;
      if le is not null
      then
        ind := le.FIRST;
        while ind is not null loop
          dbms_output.put_line(le(ind).LogText);
          ind := le.NEXT(ind);
        end loop;
      end if;
    end loop;
    
      -- Indicate that the job finished and detach from it.
      dbms_output.put_line('Job has completed');
      dbms_output.put_line('Final job state = ' || job_state);
      dbms_datapump.detach(h1);
    END;
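Before moving on, you can optionally confirm that the dump file landed in the bucket. This is a minimal check using the DBMS_CLOUD.LIST_OBJECTS function; the credential name and URI below mirror the ones used in the export script:

```sql
-- List the bucket contents; SRC_OCIGGLL.dmp should appear once the export job completes.
select object_name, bytes
from   DBMS_CLOUD.LIST_OBJECTS(
         'ADB_OBJECTSTORE',
         'https://objectstorage.<region>.oraclecloud.com/n/<namespace>/b/<bucket-name>/o/');
```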

Task 4: Instantiate the Target Database Using Oracle Data Pump (ImpDP)

Use Oracle Data Pump (ImpDP) to import data into the target database from SRC_OCIGGLL.dmp, the file exported from the source database.

  1. Create a credential in your target database to access Oracle Object Store (use the same information from the preceding section).
    BEGIN
      DBMS_CLOUD.CREATE_CREDENTIAL( 
        credential_name => 'ADB_OBJECTSTORE',
        username => '<user-name>',
        password => '<token>'
      );
    END;
  2. Run the following script in your target database to import data from SRC_OCIGGLL.dmp. Make sure to replace <region>, <namespace>, and <bucket-name> in the Object Store URI accordingly:
    DECLARE
    ind NUMBER;  -- Loop index
    h1 NUMBER;  -- Data Pump job handle
    percent_done NUMBER;  -- Percentage of job complete
    job_state VARCHAR2(30);  -- To keep track of job state
    le ku$_LogEntry;  -- For WIP and error messages
    js ku$_JobStatus;  -- The job status from get_status
    jd ku$_JobDesc;  -- The job description from get_status
    sts ku$_Status;  -- The status object returned by get_status
    BEGIN
    
    -- Create a (user-named) Data Pump job to do a "full" import (everything
    -- in the dump file without filtering).
    h1 := DBMS_DATAPUMP.OPEN('IMPORT','FULL',NULL,'SRCMIRROR_OCIGGLL_IMPORT');
    
    -- Specify the single dump file for the job (using the handle just returned)
    -- and directory object, which must already be defined and accessible
    -- to the user running this procedure. This is the dump file created by
    -- the export operation in the first example.
    
    DBMS_DATAPUMP.ADD_FILE(h1,'https://objectstorage.<region>.oraclecloud.com/n/<namespace>/b/<bucket-name>/o/SRC_OCIGGLL.dmp','ADB_OBJECTSTORE',null,DBMS_DATAPUMP.KU$_FILE_TYPE_URIDUMP_FILE);
    
    
    -- A metadata remap will map all schema objects from SRC_OCIGGLL to SRCMIRROR_OCIGGLL.
    DBMS_DATAPUMP.METADATA_REMAP(h1,'REMAP_SCHEMA','SRC_OCIGGLL','SRCMIRROR_OCIGGLL');
    
    -- If a table already exists in the destination schema, skip it (leave
    -- the preexisting table alone). This is the default, but it does not hurt
    -- to specify it explicitly.
    DBMS_DATAPUMP.SET_PARAMETER(h1,'TABLE_EXISTS_ACTION','SKIP');
    
    -- Start the job. An exception is returned if something is not set up properly.
    DBMS_DATAPUMP.START_JOB(h1);
    
    -- The import job should now be running. In the following loop, the job is
    -- monitored until it completes. In the meantime, progress information is
    -- displayed. Note: this is identical to the export example.
    percent_done := 0;
    job_state := 'UNDEFINED';
    while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
      dbms_datapump.get_status(h1,
        dbms_datapump.ku$_status_job_error +
        dbms_datapump.ku$_status_job_status +
        dbms_datapump.ku$_status_wip,-1,job_state,sts);
        js := sts.job_status;
    
      -- If the percentage done changed, display the new value.
      if js.percent_done != percent_done
      then
        dbms_output.put_line('*** Job percent done = ' ||
        to_char(js.percent_done));
        percent_done := js.percent_done;
      end if;
    
      -- If any work-in-progress (WIP) or Error messages were received for the job, display them.
      if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0)
      then
        le := sts.wip;
      else
        if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
        then
          le := sts.error;
        else
          le := null;
        end if;
      end if;
      if le is not null
      then
        ind := le.FIRST;
        while ind is not null loop
          dbms_output.put_line(le(ind).LogText);
          ind := le.NEXT(ind);
        end loop;
      end if;
    end loop;
    
    -- Indicate that the job finished and gracefully detach from it.
    dbms_output.put_line('Job has completed');
    dbms_output.put_line('Final job state = ' || job_state);
    dbms_datapump.detach(h1);
    END;
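To sanity-check the import, a quick query (illustrative, not part of the quickstart) against the data dictionary shows the tables that arrived in the remapped schema:

```sql
-- Tables created in the target schema by the import job.
select table_name
from   all_tables
where  owner = 'SRCMIRROR_OCIGGLL'
order  by table_name;
```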

Task 5: Add and Run a Nonintegrated Replicat

  1. Add and run a Replicat.
    On the Parameter File screen, replace MAP *.*, TARGET *.*; with the following script:
    -- Capture DDL operations for listed schema tables
    --
    ddl include mapped
    --
    -- Add step-by-step history of ddl operations captured
    -- to the report file. Very useful when troubleshooting.
    --
    ddloptions report
    --
    -- Write capture stats per table to the report file daily.
    --
    report at 00:01
    --
    -- Rollover the report file weekly. Useful when PR runs
    -- without being stopped/started for long periods of time to
    -- keep the report files from becoming too large.
    --
    reportrollover at 00:01 on Sunday
    --
    -- Report total operations captured, and operations per second
    -- every 10 minutes.
    --
    reportcount every 10 minutes, rate
    --
    -- Table map list for apply
    --
    DBOPTIONS ENABLE_INSTANTIATION_FILTERING;
    MAP SRC_OCIGGLL.*, TARGET SRCMIRROR_OCIGGLL.*;

    Note:

    DBOPTIONS ENABLE_INSTANTIATION_FILTERING enables CSN filtering on tables imported using Oracle Data Pump. For more details, see the DBOPTIONS reference.
  2. Perform some inserts on the source database:
    1. Return to the Oracle Cloud console and use the navigation menu to navigate back to Oracle Database, then Autonomous Transaction Processing, and then SourceATP.
    2. On the SourceATP details page, click Tools, and then click Database Actions.
    3. Sign in to Database Actions using the SourceATP database credentials from the workshop details, and then click SQL.
    4. Enter the following inserts, and then click Run Script:
      Insert into SRC_OCIGGLL.SRC_CITY (CITY_ID,CITY,REGION_ID,POPULATION) values (1000,'Houston',20,743113);
      Insert into SRC_OCIGGLL.SRC_CITY (CITY_ID,CITY,REGION_ID,POPULATION) values (1001,'Dallas',20,822416);
      Insert into SRC_OCIGGLL.SRC_CITY (CITY_ID,CITY,REGION_ID,POPULATION) values (1002,'San Francisco',21,157574);
      Insert into SRC_OCIGGLL.SRC_CITY (CITY_ID,CITY,REGION_ID,POPULATION) values (1003,'Los Angeles',21,743878);
      Insert into SRC_OCIGGLL.SRC_CITY (CITY_ID,CITY,REGION_ID,POPULATION) values (1004,'San Diego',21,840689);
      Insert into SRC_OCIGGLL.SRC_CITY (CITY_ID,CITY,REGION_ID,POPULATION) values (1005,'Chicago',23,616472);
      Insert into SRC_OCIGGLL.SRC_CITY (CITY_ID,CITY,REGION_ID,POPULATION) values (1006,'Memphis',23,580075);
      Insert into SRC_OCIGGLL.SRC_CITY (CITY_ID,CITY,REGION_ID,POPULATION) values (1007,'New York City',22,124434);
      Insert into SRC_OCIGGLL.SRC_CITY (CITY_ID,CITY,REGION_ID,POPULATION) values (1008,'Boston',22,275581);
      Insert into SRC_OCIGGLL.SRC_CITY (CITY_ID,CITY,REGION_ID,POPULATION) values (1009,'Washington D.C.',22,688002);
    5. In the OCI GoldenGate Deployment Console, click the Extract name (UAEXT), and then click Statistics. Verify that SRC_OCIGGLL.SRC_CITY is listed with 10 inserts.
    6. Return to the Overview screen, click the Replicat name (REP), and then click Statistics. Verify that SRCMIRROR_OCIGGLL.SRC_CITY is listed with 10 inserts.
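You can also verify the replicated rows directly on the target with SQL. This example assumes the ten SRC_CITY inserts above have already been applied by the Replicat:

```sql
-- Expect the 10 cities inserted on the source (CITY_ID 1000 through 1009).
select city_id, city
from   SRCMIRROR_OCIGGLL.SRC_CITY
where  city_id between 1000 and 1009
order  by city_id;
```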

Task 6: Monitor and Maintain Processes

  1. Monitor the replication processes
  2. Manage Trail files
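The two maintenance items above can also be checked from the command line. The commands below are standard GoldenGate Admin Client commands, shown as a sketch using the process names from this quickstart (UAEXT and REP):

```
INFO ALL                     -- overall status of all processes in the deployment
STATS EXTRACT UAEXT, TOTAL   -- cumulative per-table operation counts for the Extract
STATS REPLICAT REP, TOTAL    -- cumulative per-table operation counts for the Replicat
SEND REPLICAT REP, STATUS    -- current processing status of the Replicat
```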