Supported Operations and Prerequisites
This topic lists what is supported by Oracle GoldenGate for DBFS.
Oracle GoldenGate for DBFS supports the following:
Supported DDL (such as ALTER) on DBFS objects, except for CREATE statements on the DBFS objects. CREATE on DBFS must be excluded from the configuration, as must any schemas that will hold the created DBFS objects. The reason to exclude CREATEs is that the metadata for DBFS must be properly populated in the SYS dictionary tables (which are themselves excluded from Oracle GoldenGate capture by default).
Capture and replication of DML on the tables that underlie the DBFS file system.
The procedures that follow assume that Oracle GoldenGate is configured properly to support an active-active configuration. This means that it must be:
Installed according to the instructions in this guide.
Configured according to the instructions in the Oracle GoldenGate Windows and UNIX Administrator's Guide.
Applying the Required Patch
Apply the Oracle DBFS patch for bug 9651229 on both databases.
To determine if the patch is installed, run the following query:
connect / as sysdba

select procedure_name
from dba_procedures
where object_name = 'DBMS_DBFS_SFS_ADMIN'
and procedure_name = 'PARTITION_SEQUENCE';
The query should return a single row. Anything else indicates that the proper patched version of DBFS is not available on your database.
Examples Used in these Procedures
The following procedures assume two systems and configure the environment so that DBFS users on both systems see the same DBFS files, directories, and contents that are kept in synchronization with Oracle GoldenGate.
It is possible to extend these concepts to support three or more peer systems.
Partitioning the DBFS Sequence Numbers
DBFS uses an internal sequence-number generator to construct unique names and unique IDs.
These steps partition the sequences into distinct ranges to ensure that there are no conflicts across the databases. After this is done, further DBFS operations (both creation of new file systems and subsequent file system operations) can be performed without conflicts of names, primary keys, or IDs during DML propagation.
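The effect of the partitioning can be sketched outside the database. In this hypothetical Python model, the parameters nodes, myid, and newstart mirror those of dbms_dbfs_sfs_admin.partition_sequence, but the interleaving scheme is an illustrative assumption, not Oracle's internal algorithm. Each node draws from a disjoint stream of values, so no two nodes can generate the same ID:

```python
def partitioned_sequence(nodes, myid, newstart):
    """Yield an endless stream of IDs for one node.

    Each node starts at a distinct offset above newstart and advances
    in steps of `nodes`, so streams from different nodes never collide.
    (Illustrative model only -- not Oracle's internal implementation.)
    """
    value = newstart + myid
    while True:
        yield value
        value += nodes

# Two nodes partition the same sequence space:
node0 = partitioned_sequence(nodes=2, myid=0, newstart=1000)
node1 = partitioned_sequence(nodes=2, myid=1, newstart=1000)

ids0 = [next(node0) for _ in range(5)]   # 1000, 1002, 1004, 1006, 1008
ids1 = [next(node1) for _ in range(5)]   # 1001, 1003, 1005, 1007, 1009
assert not set(ids0) & set(ids1)         # no conflicts across nodes
```

The same idea extends to three or more peers: increase nodes and give each peer a distinct myid.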
- Connect to each database as sysdba and issue the following query on each database:

select last_number
from dba_sequences
where sequence_owner = 'SYS'
and sequence_name = 'DBFS_SFS_$FSSEQ';
- From this query, choose the maximum value of LAST_NUMBER across both systems, or pick a high value that is significantly larger than the current value of the sequence on either system.
- Substitute this value ("maxval" is used here as a placeholder) in both of the following procedures. These procedures logically index each system: the first as myid 0 and the second as myid 1.

On the first system:

declare
begin
    dbms_dbfs_sfs_admin.partition_sequence(nodes => 2, myid => 0, newstart => :maxval);
    commit;
end;
/

On the second system:

declare
begin
    dbms_dbfs_sfs_admin.partition_sequence(nodes => 2, myid => 1, newstart => :maxval);
    commit;
end;
/
Notice the difference in the value specified for the myid parameter; these are the different index values.
For a multi-way configuration among three or more databases, you could make the following alterations:
Adjust the maximum value that is set for maxval upward appropriately, and use that value on all nodes.
Vary the value of myid in the procedure from 0 for the first node, to 1 for the second node, to 2 for the third node, and so on.
- (Recommended) After (and only after) the DBFS sequence generator is partitioned, create a new DBFS file system on each system, and use only these file systems for DML propagation with Oracle GoldenGate. See Configuring the DBFS file system.
DBFS file systems that were created before the patch for bug 9651229 was applied, or before the DBFS sequence number was adjusted, can be configured for propagation, but that requires additional steps not described in this document. If you must retain old file systems, open a service request with Oracle Support.
Configuring the DBFS file system
To replicate DBFS file system operations, use a configuration that is similar to the standard bi-directional configuration for DML.
Some guidelines to follow while configuring Oracle GoldenGate for DBFS are:
Use matched pairs of identically structured tables.
Allow each database to have write privileges to opposite tables in a set, and set the other one in the set to read-only. For example:

Node1 writes to local table t1, and these changes are replicated to t1 on Node2.
Node2 writes to local table t2, and these changes are replicated to t2 on Node1.
On Node1, t2 is read-only. On Node2, t1 is read-only.
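This write-pairing rule can be sketched as a toy model. The Python below uses hypothetical names and is not GoldenGate code; it only illustrates the pattern: each node applies writes to its own read-write table, and one-way replication keeps the peer's read-only copy current, so both nodes converge on identical content.

```python
# Toy model of paired tables: each node owns one read-write table;
# the peer holds a read-only replica kept current by replication.
node1 = {"t1": [], "t2": []}   # on Node1: t1 is read-write, t2 is read-only
node2 = {"t1": [], "t2": []}   # on Node2: t2 is read-write, t1 is read-only

def write(node, writable, table, row):
    # Reject writes to any table the node does not own.
    if table != writable:
        raise PermissionError(f"{table} is read-only on this node")
    node[table].append(row)

def replicate(src, dst, table):
    # One-way propagation of the owner's table to the peer's replica.
    dst[table] = list(src[table])

write(node1, "t1", "t1", "row-a")   # Node1 writes locally to t1
write(node2, "t2", "t2", "row-b")   # Node2 writes locally to t2
replicate(node1, node2, "t1")       # t1 changes flow Node1 -> Node2
replicate(node2, node1, "t2")       # t2 changes flow Node2 -> Node1

assert node1 == node2               # both nodes see identical content
```

Because each table has exactly one writer, the two replication directions can never conflict.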
DBFS file systems make this kind of table pairing simple because:
The tables that underlie the DBFS file systems have the same structure.
These tables are modified by simple, conventional DML during higher-level file system operations.
The DBFS ContentAPI provides a way of unifying the namespace of the individual DBFS stores by means of mount points that can be qualified as read-write or read-only.
The following steps create two DBFS file systems (in this case named FS1 and FS2) and set them to be read-write or read-only, as appropriate.
- Run the following procedure to create the two file systems. (Substitute your store names for FS1 and FS2.)
- Run the following procedure to give each file system the appropriate access rights. (Substitute your store names for FS1 and FS2.)
In this example, note that on Node 1, store FS1 is read-write and store FS2 is read-only, while on Node 2 the converse is true: store FS1 is read-only and store FS2 is read-write.

Note also that the read-write store is mounted as local and the read-only store is mounted as remote. This provides users on each system with an identical namespace and identical semantics for read and write operations. Local path names can be modified, but remote path names cannot.
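The mount layout can be sketched as a small path-resolution model. The Python below is an illustration under stated assumptions: the store and mount names follow the example, but the resolution logic is a simplification, not the DBFS ContentAPI. On each node the same virtual paths exist; /local resolves to that node's read-write store, and /remote to the read-only peer store:

```python
# Mount tables for the two nodes, per the example: the read-write store
# is mounted as 'local', the read-only store as 'remote'.
mounts = {
    "Node1": {"local": ("FS1", "rw"), "remote": ("FS2", "ro")},
    "Node2": {"local": ("FS2", "rw"), "remote": ("FS1", "ro")},
}

def resolve(node, path):
    """Map a virtual path like '/local/a.txt' to (store, access, relpath)."""
    _, mount, rel = path.split("/", 2)
    store, access = mounts[node][mount]
    return store, access, rel

def can_write(node, path):
    # Writes are only legal where the backing store is mounted read-write.
    return resolve(node, path)[1] == "rw"

# Identical namespace on both nodes, backed by different stores:
s1 = resolve("Node1", "/local/docs/a.txt")   # ('FS1', 'rw', 'docs/a.txt')
s2 = resolve("Node2", "/local/docs/a.txt")   # ('FS2', 'rw', 'docs/a.txt')

assert can_write("Node1", "/local/x") and not can_write("Node1", "/remote/x")
```

The point of the symmetry is that applications can be written against the same path names on both systems, while each physical store still has exactly one writing node.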
declare
begin
    dbms_dbfs_sfs.createFilesystem('FS1');
    dbms_dbfs_sfs.createFilesystem('FS2');
    dbms_dbfs_content.registerStore('FS1', 'posix', 'DBMS_DBFS_SFS');
    dbms_dbfs_content.registerStore('FS2', 'posix', 'DBMS_DBFS_SFS');
    commit;
end;
/
Example F-2 Node 1
declare
begin
    dbms_dbfs_content.mountStore('FS1', 'local');
    dbms_dbfs_content.mountStore('FS2', 'remote', read_only => true);
    commit;
end;
/
Example F-3 Node 2
declare
begin
    dbms_dbfs_content.mountStore('FS1', 'remote', read_only => true);
    dbms_dbfs_content.mountStore('FS2', 'local');
    commit;
end;
/
Mapping Local and Remote Peers Correctly
The names of the tables that underlie the DBFS file systems are generated internally and dynamically.
Continuing with the preceding example, there are:
Two nodes (Node 1 and Node 2 in the example).
Four stores: two on each node (FS1 and FS2 in the example).
Eight underlying tables: two for each store (a table and a ptable). These tables must be identified, specified in Extract TABLE statements, and mapped in Replicat MAP statements.
- To identify the table names that back each file system, issue the following query. (Substitute your store names for FS1 and FS2.)
The output looks like the following examples.
- Identify the tables that are locally read-write to Extract by creating the following TABLE statements in the Extract parameter files. (Substitute your pluggable database names, schema names, and table names as applicable.)
- Link changes on each remote file system to the corresponding local file system by creating the following MAP statements in the Replicat parameter files. (Substitute your pluggable database, schema, and table names.)
This mapping captures and replicates local read-write source tables to remote read-only peer tables:

file system changes made to FS1 on Node 1 propagate to FS1 on Node 2.
file system changes made to FS2 on Node 2 propagate to FS2 on Node 1.
Changes to the file systems can be made through the DBFS ContentAPI (package DBMS_DBFS_CONTENT) of the database or through dbfs_client mounts and conventional file system tools.
All changes are propagated in both directions.
A user at the virtual root of the DBFS namespace on each system sees identical content.
For mutable operations, users use the /local sub-directory on each system. For read operations, users can use either the /local or the /remote sub-directory, depending on whether they want to see local or remote content.
select fs.store_name, tb.table_name, tb.ptable_name
from table(dbms_dbfs_sfs.listTables) tb,
     table(dbms_dbfs_sfs.listFilesystems) fs
where fs.schema_name = tb.schema_name
and fs.table_name = tb.table_name
and fs.store_name in ('FS1', 'FS2');
Example F-5 Example output: Node 1 (Your Table Names Will Be Different.)
STORE_NAME    TABLE_NAME    PTABLE_NAME
------------- ------------- -------------
FS1           SFS$_FST_100  SFS$_FSTP_100
FS2           SFS$_FST_118  SFS$_FSTP_118
Example F-6 Example output: Node 2 (Your Table Names Will Be Different.)
STORE_NAME    TABLE_NAME    PTABLE_NAME
------------- ------------- -------------
FS1           SFS$_FST_101  SFS$_FSTP_101
FS2           SFS$_FST_119  SFS$_FSTP_119
Example F-7 Node 1

TABLE [container.]schema.SFS$_FST_100;
TABLE [container.]schema.SFS$_FSTP_100;
Example F-8 Node 2

TABLE [container.]schema.SFS$_FST_119;
TABLE [container.]schema.SFS$_FSTP_119;
Example F-9 Node 1

MAP [container.]schema.SFS$_FST_119, TARGET [container.]schema.SFS$_FST_118;
MAP [container.]schema.SFS$_FSTP_119, TARGET [container.]schema.SFS$_FSTP_118;
Example F-10 Node 2

MAP [container.]schema.SFS$_FST_100, TARGET [container.]schema.SFS$_FST_101;
MAP [container.]schema.SFS$_FSTP_100, TARGET [container.]schema.SFS$_FSTP_101;
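The cross-node pairing behind these MAP statements can be derived mechanically from the two table listings. The following Python helper is a hypothetical sketch (the table names come from the example output above, and "schema" is a placeholder, as in the parameter files); it pairs each store's tables on the writing node with the same store's tables on the peer:

```python
# Table and ptable names per store, as returned by the
# listTables/listFilesystems query on each node (example values).
node1_tables = {"FS1": ("SFS$_FST_100", "SFS$_FSTP_100"),
                "FS2": ("SFS$_FST_118", "SFS$_FSTP_118")}
node2_tables = {"FS1": ("SFS$_FST_101", "SFS$_FSTP_101"),
                "FS2": ("SFS$_FST_119", "SFS$_FSTP_119")}

def map_statements(src_tables, dst_tables, store, schema="schema"):
    """Build Replicat MAP lines pairing one store's tables across nodes."""
    return [f"MAP {schema}.{src}, TARGET {schema}.{dst};"
            for src, dst in zip(src_tables[store], dst_tables[store])]

# Node 1's Replicat applies Node 2's FS2 changes to the local FS2 tables:
node1_maps = map_statements(node2_tables, node1_tables, "FS2")
# Node 2's Replicat applies Node 1's FS1 changes to the local FS1 tables:
node2_maps = map_statements(node1_tables, node2_tables, "FS1")
```

Because the table names are generated dynamically per database, this pairing must be redone from the query output for each new file system; the numbers shown here will differ in your environment.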