E Preparing DBFS for an Active-Active Configuration

This appendix contains steps to configure Oracle GoldenGate to function within an active-active bidirectional or multi-directional environment where Oracle Database File System (DBFS) is in use on both (or all) systems.

Topics:

  • Supported Operations and Prerequisites

  • Applying the Required Patch

  • Examples Used in these Procedures

  • Partitioning the DBFS Sequence Numbers

  • Configuring the DBFS File System

  • Mapping Local and Remote Peers Correctly

Supported Operations and Prerequisites

This topic lists what is supported by Oracle GoldenGate for DBFS.

Oracle GoldenGate for DBFS supports the following:

  • Supported DDL (such as TRUNCATE and ALTER) on DBFS objects, except for CREATE statements on the DBFS objects. CREATE on DBFS must be excluded from the configuration, as must any schemas that will hold the created DBFS objects. The reason to exclude CREATE operations is that the metadata for DBFS must be properly populated in the SYS dictionary tables (which themselves are excluded from Oracle GoldenGate capture by default).

  • Capture and replication of DML on the tables that underlie the DBFS file system.

The procedures that follow assume that Oracle GoldenGate is configured properly to support an active-active configuration. This means that it must be:

  • Installed according to the instructions in this guide.

  • Configured according to the instructions in the Oracle GoldenGate Windows and UNIX Administrator's Guide.

Applying the Required Patch

Apply the Oracle DBFS patch for bug-9651229 on both databases.

To determine if the patch is installed, run the following query:

connect / as sysdba
select  procedure_name
from    dba_procedures
where   object_name = 'DBMS_DBFS_SFS_ADMIN'
and procedure_name = 'PARTITION_SEQUENCE';

The query should return a single row. Anything else indicates that the proper patched version of DBFS is not available on your database.
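If the patch is installed, the output looks similar to the following (a sketch; exact formatting depends on your SQL*Plus settings):

PROCEDURE_NAME
------------------------------
PARTITION_SEQUENCE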

Examples Used in these Procedures

The following procedures assume two systems and configure the environment so that DBFS users on both systems see the same DBFS files, directories, and contents, which are kept in synchronization by Oracle GoldenGate.

It is possible to extend these concepts to support three or more peer systems.

Partitioning the DBFS Sequence Numbers

DBFS uses an internal sequence-number generator to construct unique names and unique IDs.

These steps partition the sequences into distinct ranges to ensure that there are no conflicts across the databases. After this is done, further DBFS operations (both creation of new file systems and subsequent file system operations) can be performed without conflicts of names, primary keys, or IDs during DML propagation.

  1. Connect to each database as sysdba.

    Issue the following query on each database.

    select last_number
    from dba_sequences
    where sequence_owner = 'SYS'
    and sequence_name = 'DBFS_SFS_$FSSEQ';
    
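    The query returns a single number, for example (this value is hypothetical; yours will differ):

    LAST_NUMBER
    -----------
           8212
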
  2. From this query, choose the maximum value of LAST_NUMBER across both systems, or pick a high value that is significantly larger than the current value of the sequence on either system.
  3. Substitute this value (maxval is used here as a placeholder) in both of the following procedures. These procedures logically index each system as myid=0 and myid=1. (A SQL*Plus sketch showing one way to supply maxval as a bind variable follows these steps.)

    Node 1

    declare
    begin
    dbms_dbfs_sfs_admin.partition_sequence(nodes => 2, myid => 0, newstart => :maxval);
    commit;
    end;
    /
    

    Node 2

    declare
    begin
    dbms_dbfs_sfs_admin.partition_sequence(nodes => 2, myid => 1, newstart => :maxval);
    commit;
    end;
    /

    Note:

    Notice the difference in the value specified for the myid parameter. These are the different index values.

    For a multi-way configuration among three or more databases, you could make the following alterations:

    • Adjust the maximum value that is set for maxval upward appropriately, and use that value on all nodes.

    • Vary the value of myid in the procedure: 0 for the first node, 1 for the second, 2 for the third, and so on.

  4. (Recommended) After (and only after) the DBFS sequence generator is partitioned, create a new DBFS file system on each system, and use only these file systems for DML propagation with Oracle GoldenGate. See Configuring the DBFS File System.

Note:

DBFS file systems that were created before the patch for bug-9651229 was applied or before the DBFS sequence number was adjusted can be configured for propagation, but that requires additional steps not described in this document. If you must retain old file systems, open a service request with Oracle Support.
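The following is a minimal SQL*Plus sketch of the partitioning steps as run on Node 1. The LAST_NUMBER values (8212 on Node 1, 9147 on Node 2) and the chosen maxval of 100000 are hypothetical; substitute values appropriate to your systems.

-- Suppose the LAST_NUMBER query returned 8212 on Node 1 and 9147 on
-- Node 2; 100000 is comfortably above both current values.
variable maxval number
exec :maxval := 100000

declare
begin
  -- myid => 0 identifies Node 1; run the same block with myid => 1 on Node 2
  dbms_dbfs_sfs_admin.partition_sequence(nodes => 2, myid => 0, newstart => :maxval);
  commit;
end;
/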

Configuring the DBFS File System

To replicate DBFS file system operations, use a configuration that is similar to the standard bi-directional configuration for DML.

Some guidelines to follow while configuring Oracle GoldenGate for DBFS are:

  • Use matched pairs of identically structured tables.

  • Give each database write access to one table in each pair, and make the other table in the pair read-only. For example:

    • Node 1 writes to local table t1 and these changes are replicated to t1 on Node 2.

    • Node 2 writes to local table t2 and these changes are replicated to t2 on Node 1.

    • On Node 1, t2 is read-only. On Node 2, t1 is read-only.

DBFS file systems make this kind of table pairing simple because:

  • The tables that underlie the DBFS file systems have the same structure.

  • These tables are modified by simple, conventional DML during higher-level file system operations.

  • The DBFS ContentAPI provides a way of unifying the namespace of the individual DBFS stores by means of mount points that can be qualified as read-write or read-only.

The following steps create two DBFS file systems (in this case named FS1 and FS2) and set them to be read-write or read-only, as appropriate.

  1. Run the following procedure to create the two file systems, as shown in Example E-1. (Substitute your store names for FS1 and FS2.)
  2. Run the following procedure to give each file system the appropriate access rights, as shown in Example E-2 (Node 1) and Example E-3 (Node 2). (Substitute your store names for FS1 and FS2.)

    In this example, note that on Node 1, store FS1 is read-write and store FS2 is read-only, while on Node 2 the converse is true: store FS1 is read-only and store FS2 is read-write.

    Note also that the read-write store is mounted as local and the read-only store is mounted as remote. This provides users on each system with an identical namespace and identical semantics for read and write operations. Local path names can be modified, but remote path names cannot.

Example E-1

declare
begin
dbms_dbfs_sfs.createFilesystem('FS1');
dbms_dbfs_sfs.createFilesystem('FS2');
 
dbms_dbfs_content.registerStore('FS1',
'posix', 'DBMS_DBFS_SFS');
dbms_dbfs_content.registerStore('FS2',
'posix', 'DBMS_DBFS_SFS');
commit;
end;
/

Example E-2 Node 1

declare
begin
dbms_dbfs_content.mountStore('FS1', 'local');
dbms_dbfs_content.mountStore('FS2', 'remote',
read_only => true);
commit;
end;
/

Example E-3 Node 2

declare
begin
dbms_dbfs_content.mountStore('FS1', 'remote',
read_only => true);
dbms_dbfs_content.mountStore('FS2', 'local');
commit;
end;
/
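To confirm that the mounts on each node are as intended, a query along the following lines can be used. This is a sketch that assumes the DBMS_DBFS_CONTENT.LISTMOUNTS function; the available columns vary by release.

select store_name, store_mount
from table(dbms_dbfs_content.listMounts);

On Node 1, the output should show stores FS1 and FS2 with the mount names assigned in Example E-2; on Node 2, those assigned in Example E-3.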

Mapping Local and Remote Peers Correctly

The names of the tables that underlie the DBFS file systems are generated internally and dynamically.

Continuing with the preceding example, there are:

  • Two nodes (Node 1 and Node 2 in the example).

  • Four stores: two on each node (FS1 and FS2 in the example).

  • Eight underlying tables: two for each store (a table and a ptable). These tables must be identified, specified in Extract TABLE statements, and mapped in Replicat MAP statements.

  1. To identify the table names that back each file system, issue the query shown in Example E-4. (Substitute your store names for FS1 and FS2.)

    The output looks like Examples E-5 and E-6.

  2. Identify the tables that are locally read-write to Extract by creating the TABLE statements shown in Examples E-7 and E-8 in the Extract parameter files. (Substitute your pluggable database names, schema names, and table names as applicable.)
  3. Link changes on each remote file system to the corresponding local file system by creating the MAP statements shown in Examples E-9 and E-10 in the Replicat parameter files. (Substitute your pluggable database, schema, and table names.)

    This mapping captures and replicates local read-write source tables to remote read-only peer tables:

    • File system changes made to FS1 on Node 1 propagate to FS1 on Node 2.

    • File system changes made to FS2 on Node 2 propagate to FS2 on Node 1.

    Changes to the file systems can be made through the DBFS ContentAPI (package DBMS_DBFS_CONTENT) of the database or through dbfs_client mounts and conventional file system tools.

    All changes are propagated in both directions.

    • A user at the virtual root of the DBFS namespace on each system sees identical content.

    • For mutable operations, users use the /local sub-directory on each system.

    • For read operations, users can use either the /local or /remote sub-directory, depending on whether they want to see local or remote content.

Example E-4

select fs.store_name, tb.table_name, tb.ptable_name
from table(dbms_dbfs_sfs.listTables) tb,
table(dbms_dbfs_sfs.listFilesystems) fs
where   fs.schema_name = tb.schema_name
and fs.table_name = tb.table_name
and fs.store_name in ('FS1', 'FS2')
;

Example E-5 Example output: Node 1 (Your Table Names Will Be Different.)

STORE_NAME     TABLE_NAME     PTABLE_NAME
-------------  -------------  -------------  
FS1            SFS$_FST_100   SFS$_FSTP_100
FS2            SFS$_FST_118   SFS$_FSTP_118

Example E-6 Example output: Node 2 (Your Table Names Will Be Different.)

STORE_NAME     TABLE_NAME     PTABLE_NAME
-------------  -------------  -------------  
FS1            SFS$_FST_101   SFS$_FSTP_101
FS2            SFS$_FST_119   SFS$_FSTP_119

Example E-7 Node 1

TABLE [container.]schema.SFS$_FST_100
TABLE [container.]schema.SFS$_FSTP_100;

Example E-8 Node 2

TABLE [container.]schema.SFS$_FST_119
TABLE [container.]schema.SFS$_FSTP_119;
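
For context, the following is a minimal sketch of how the Node 1 TABLE statements from Example E-7 might sit in a complete Extract parameter file. The group name (extdbfs), credential alias (ggnode1), trail name (./dirdat/df), and the pdb1.dbfsadm qualifiers are all hypothetical.

EXTRACT extdbfs
USERIDALIAS ggnode1
EXTTRAIL ./dirdat/df
TABLE pdb1.dbfsadm.SFS$_FST_100;
TABLE pdb1.dbfsadm.SFS$_FSTP_100;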

Example E-9 Node 1

MAP [container.]schema.SFS$_FST_119, TARGET [container.]schema.SFS$_FST_118;
MAP [container.]schema.SFS$_FSTP_119, TARGET [container.]schema.SFS$_FSTP_118;

Example E-10 Node 2

MAP [container.]schema.SFS$_FST_100, TARGET [container.]schema.SFS$_FST_101;
MAP [container.]schema.SFS$_FSTP_100, TARGET [container.]schema.SFS$_FSTP_101;
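
Similarly, the following is a minimal sketch of a Node 2 Replicat parameter file built around the Example E-10 MAP statements. The group name (repdbfs), credential alias (ggnode2), and the pdb1.dbfsadm qualifiers are hypothetical.

REPLICAT repdbfs
USERIDALIAS ggnode2
MAP pdb1.dbfsadm.SFS$_FST_100, TARGET pdb1.dbfsadm.SFS$_FST_101;
MAP pdb1.dbfsadm.SFS$_FSTP_100, TARGET pdb1.dbfsadm.SFS$_FSTP_101;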