Bi-Directional Replication

In a bi-directional configuration, there are Extract and Replicat processes on both the source and target systems to support the replication of transactional changes on each system to the other system. To support this configuration, each Extract must be able to filter the transactions applied by the local Replicat, so that they are not recaptured and sent back to their source in a continuous loop. Additionally, AUTO_INCREMENT columns must be set so that there is no conflict between the values on each system.

Prerequisites for Bidirectional Replication

Enable Bi-Directional Loop Detection

Loop detection is a requirement for bi-directional implementations of Oracle GoldenGate, so that an Extract for one source database does not recapture transactions sent by a Replicat from another source database.

With the CDC Extract capture method, by default, Extract recaptures any transaction that a Replicat commits into the database where the Extract is configured, as long as supplemental logging is enabled for the tables that the Replicat delivers to.

To prevent Extract from recapturing transactions that are applied by a Replicat, use the TRANLOGOPTIONS FILTERTABLE parameter for the CDC Extract. The filtering table is the Oracle GoldenGate checkpoint table that you must create for the Replicat.

Note:

Only Classic and Coordinated Replicats support bi-directional and multi-directional replication; Parallel Replicat does not.

To create a Filter Table and enable Supplemental Logging:

  1. On each source database, ensure that a checkpoint table for use by Replicats has been created. For example:

    ADD CHECKPOINTTABLE ggadmin.ggcheckpoint

  2. Enable supplemental logging for the checkpoint table. For example:

    ADD TRANDATA ggadmin.ggcheckpoint ALLCOLS

  3. Ensure that the Replicat is created with the checkpoint table information.

    ADD REPLICAT reptgt1, EXTTRAIL ./dirdat/e2, CHECKPOINTTABLE ggadmin.ggcheckpoint

  4. Configure each Extract with the IGNOREREPLICATES (on by default) and FILTERTABLE parameters, using the Replicat’s checkpoint table for the filtering table.

    TRANLOGOPTIONS IGNOREREPLICATES
    TRANLOGOPTIONS FILTERTABLE ggadmin.ggcheckpoint

    Note:

    Oracle GoldenGate for PostgreSQL supports only one FILTERTABLE statement per Extract, so for multi-directional implementations, ensure each Replicat uses the same checkpoint table in the database that they deliver to.
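
Putting these steps together, a minimal Extract parameter file for one side of a bi-directional pair might look like the following sketch. The Extract group name ext1, the credential alias ggadmin_alias, the trail name, and the application schema appschema are illustrative assumptions, not values mandated by this procedure.

EXTRACT ext1
USERIDALIAS ggadmin_alias
EXTTRAIL ./dirdat/e1
-- Ignore DML applied by any local Replicat (the default behavior)
TRANLOGOPTIONS IGNOREREPLICATES
-- Ignore transactions that write to the Replicat checkpoint table
TRANLOGOPTIONS FILTERTABLE ggadmin.ggcheckpoint
TABLE appschema.*;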

Considerations for an Active-Active Configuration

The following considerations apply in an active-active configuration. In addition, review the Oracle GoldenGate installation and configuration document for your type of database to see if there are any other limitations or requirements to support a bi-directional configuration.

Application Design

When using active-active replication, the time zones must be the same on both systems so that timestamp-based conflict detection and resolution can operate correctly.

Active-active replication is not recommended for use with commercially available packaged business applications, unless the application is designed to support it. Among the obstacles that these applications present are:

  • Packaged applications might contain objects and data types that are not supported by Oracle GoldenGate.

  • They might perform automatic DML operations that you cannot control, but which will be replicated by Oracle GoldenGate and cause conflicts when applied by Replicat.

  • You probably cannot control the data structures to make modifications that are required for active-active replication.

Keys

For accurate detection of conflicts, all records must have a unique, not-null identifier. If possible, create a primary key. If that is not possible, use a unique key or create a substitute key with a KEYCOLS option of the MAP and TABLE parameters. In the absence of a unique identifier, Oracle GoldenGate uses all of the columns that are valid in a WHERE clause, but this will degrade performance if the table contains numerous columns.
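
For example, a substitute key might be defined with KEYCOLS in both the Extract TABLE statement and the Replicat MAP statement; the schema, table, and column names here are hypothetical:

TABLE sales.accounts, KEYCOLS (account_id, region_code);
MAP sales.accounts, TARGET sales.accounts, KEYCOLS (account_id, region_code);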

To maintain data integrity and prevent errors, the key that you use for any given table must:

  • contain the same columns in all of the databases where that table resides.

  • contain the same values in each set of corresponding rows across the databases.

Database-Generated Values

Do not replicate database-generated sequential values, such as Oracle sequences, in a bi-directional configuration. The range of values must be different on each system, with no chance of overlap. For example, in a two-database environment, you can have one server generate even values, and the other odd. For an n-server environment, start each key at a different value and increment the values by the number of servers in the environment. This method may not be available to all types of applications or databases. If the application permits, you can add a location identifier to the value to enforce uniqueness.
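
For example, in an Oracle environment the odd/even scheme for two databases might be implemented as follows; the sequence name is hypothetical:

-- System A generates odd values
CREATE SEQUENCE order_seq START WITH 1 INCREMENT BY 2;

-- System B generates even values
CREATE SEQUENCE order_seq START WITH 2 INCREMENT BY 2;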

Database Configuration

One of the databases must be designated as the trusted source. This is the primary database and its host system from which the other database is derived in the initial synchronization phase and in any subsequent resynchronizations that become necessary. Maintain frequent backups of the trusted source data.

Preventing Data Looping

In a bidirectional configuration, SQL changes that are replicated from one system to another must be prevented from being replicated back to the first system. Otherwise, each change moves back and forth in an endless loop, as in this example:

  1. A user application updates a row on system A.

  2. Extract extracts the row on system A and sends it to system B.

  3. Replicat updates the row on system B.

  4. Extract extracts the row on system B and sends it back to system A.

  5. The row is applied on system A (for the second time).

  6. This loop continues endlessly.

To prevent data loopback, you may need to provide instructions that:

  • prevent the capture of SQL operations that are generated by Replicat, but enable the capture of SQL operations that are generated by business applications if they contain objects that are specified in the Extract parameter file.

  • identify local Replicat transactions so that the Extract process can ignore them.

Identifying Replicat Transactions

To configure Extract to identify Replicat transactions, follow the instructions for the database from which Extract will capture data.

DB2 z/OS

Identify the Replicat user name by using the following parameter statement in the Extract parameter file.

TRANLOGOPTIONS EXCLUDEUSER user

This parameter statement marks all DDL and DML transactions that are generated by this user as Replicat transactions. The user name is included in the transaction record that is read by Extract.
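
For example, if the local Replicat connects as a (hypothetical) user named GGSUSER:

TRANLOGOPTIONS EXCLUDEUSER GGSUSER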

MySQL

Identify the name of the Replicat checkpoint table by using the following parameter statement in the Extract parameter file.

TRANLOGOPTIONS FILTERTABLE table_name

Replicat writes a checkpoint to the checkpoint table at the end of each of its transactions as part of its checkpoint procedure. (This is the table that is created with the ADD CHECKPOINTTABLE command.) Because every Replicat transaction includes a write to this table, it can be used to identify Replicat transactions in a bidirectional configuration. FILTERTABLE identifies the name of the checkpoint table, so that Extract ignores transactions that contain any operations on it.

PostgreSQL and SQL Server

Identify the name of the Replicat checkpoint table by using the following parameter statement in the Extract parameter file. Also ensure that the Replicat checkpoint table has been enabled for supplemental logging with the ADD TRANDATA command.

TRANLOGOPTIONS FILTERTABLE table_name

Replicat writes a checkpoint to the checkpoint table at the end of each of its transactions as part of its checkpoint procedure. (This is the table that is created with the ADD CHECKPOINTTABLE command). Because every Replicat transaction includes a write to this table, it can be used to identify Replicat transactions in a bi-directional configuration. FILTERTABLE identifies the name of the checkpoint table, so that Extract ignores transactions that contain any operations on it.
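
For example, assuming the checkpoint table ggadmin.ggcheckpoint used earlier in this chapter, enable supplemental logging for it:

ADD TRANDATA ggadmin.ggcheckpoint ALLCOLS

Then reference it in the Extract parameter file:

TRANLOGOPTIONS FILTERTABLE ggadmin.ggcheckpoint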

Oracle

There are multiple ways to identify Replicat transactions in an Oracle environment. When Replicat is in classic or integrated mode, you use the following parameters:

  • Replicats set a tag of 00 by default. Use DBOPTIONS with the SETTAG option in the Replicat parameter file to change the tag that Replicat sets. Replicat tags the transactions being applied with the specified value, which identifies those transactions in the redo stream. Valid values are a single TAG value consisting of hexadecimal digits.

  • Use the TRANLOGOPTIONS parameter with the EXCLUDETAG option in the Extract parameter file. The logmining server associated with that Extract excludes redo that is tagged with the SETTAG value.

    The following shows how SETTAG can be set in the Replicat parameter file:

    DBOPTIONS SETTAG 0935

    The following shows how EXCLUDETAG can be set in the Extract parameter file:

    TRANLOGOPTIONS EXCLUDETAG 0935

    If you are excluding multiple tags, each must be specified with a separate TRANLOGOPTIONS EXCLUDETAG statement.
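
    For example, to exclude transactions tagged with either of two tag values (the second value here is a hypothetical addition):

    TRANLOGOPTIONS EXCLUDETAG 0935
    TRANLOGOPTIONS EXCLUDETAG 0936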

You can also use the transaction name or USERID of the Replicat user to identify Replicat transactions. You can choose which of these to ignore when you configure Extract.

Preventing the Capture of Replicat Operations

Depending on which database you are using, you may or may not need to provide explicit instructions to prevent the capture of Replicat operations.

Oracle: Preventing the Capture of Replicat Transactions

To prevent the capture of SQL that is applied by Replicat to an Oracle database, use the TRANLOGOPTIONS parameter with the EXCLUDETAG option. This parameter directs the Extract process to ignore transactions that are tagged with the specified redo tag.

See Identifying Replicat Transactions to set the tag value. This is the recommended approach for Oracle.

Non-Oracle Database: Preventing Capture of Replicat Transactions

To prevent the capture of SQL that is applied by Replicat to other database types, use the following parameters:

  • GETAPPLOPS | IGNOREAPPLOPS: Controls whether DML operations produced by business applications (that is, all applications except Replicat) are included in the content that Extract writes to a specific trail or file.

  • GETREPLICATES | IGNOREREPLICATES: Controls whether DML operations produced by Replicat are included in the content that Extract writes to a specific trail or file.
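
For example, the typical loop-prevention combination for a bi-directional Extract captures application DML but not Replicat DML. Both of the following settings are the defaults and are stated explicitly here only for clarity:

GETAPPLOPS
IGNOREREPLICATES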

Manage Conflicts

Uniform conflict-resolution procedures must be in place on all systems in an active-active configuration. Conflicts should be identified immediately and handled with as much automation as possible; however, different business applications will present their own unique set of requirements in this area.

Because Oracle GoldenGate is an asynchronous solution, conflicts can occur when modifications are made to identical sets of data on separate systems at (or almost at) the same time. Conflicts occur when the timing of simultaneous changes results in one of these out-of-sync conditions:

  • A uniqueness conflict occurs when Replicat applies an insert or update operation that violates a uniqueness integrity constraint, such as a PRIMARY KEY or UNIQUE constraint. An example of this conflict type is when two transactions originate from two different databases, and each one inserts a row into a table with the same primary key value.

  • An update conflict occurs when Replicat applies an update that conflicts with another update to the same row. Update conflicts happen when two transactions that originate from different databases update the same row at nearly the same time. Replicat detects an update conflict when there is a difference between the old values (the before values) that are stored in the trail record and the current values of the same row in the target database.

  • A delete conflict occurs when two transactions originate at different databases, and one deletes a row while the other updates or deletes the same row. In this case, the row does not exist to be either updated or deleted. Replicat cannot find the row because the primary key does not exist.

For example, UserA on DatabaseA updates a row, and UserB on DatabaseB updates the same row. If UserB's transaction occurs before UserA's transaction is synchronized to DatabaseB, there will be a conflict on the replicated transaction.

A more complicated example involves three databases and illustrates a more complex ordering conflict. Assume three databases A, B, and C. Suppose a user inserts a row at database A, which is then replicated to database B. Another user then modifies the row at database B, and the row modification is replicated to database C. If the row modification from B arrives at database C before the row insert from database A, C will detect a conflict.

Where possible, try to minimize or eliminate any chance of conflict. Some ways to do so are:

  • Configure the applications to restrict which columns can be modified in each database. For example, you could limit access based on geographical area, such as by allowing different sales regions to modify only the records of their own customers. As another example, you could allow a customer service application on one database to modify only the NAME and ADDRESS columns of a customer table, while allowing a financial application on another database to modify only the BALANCE column. In each of those cases, there cannot be a conflict caused by concurrent updates to the same record.

  • Keep synchronization latency low. If UserA on DatabaseA and UserB on DatabaseB both update the same rows at about the same time, and UserA's transaction gets replicated to the target row before UserB's transaction is completed, conflict is avoided. See Managing Conflicts for suggestions on improving the performance of the Oracle GoldenGate processes.

To avoid conflicts, replication latency must be kept as low as possible. When conflicts are unavoidable, they must be identified immediately and resolved with as much automation as possible, either through the Oracle GoldenGate Conflict Detection and Resolution (CDR) feature, or through methods developed on your own. Custom methods can be integrated into Oracle GoldenGate processing through the SQLEXEC and user exit functionality. See Manual Conflict Detection and Resolution for more information about using Oracle GoldenGate to handle conflicts.

For Oracle Database, an automatic Conflict Detection and Resolution (CDR) feature is available. To learn more, see Automatic Conflict Detection and Resolution.
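
As an illustration of CDR, the following hypothetical Replicat MAP statement uses the COMPARECOLS and RESOLVECONFLICT parameters to resolve an update conflict in favor of the row with the greater value in a last_mod_time timestamp column; the schema, table, and column names are assumptions:

MAP appschema.accounts, TARGET appschema.accounts,
COMPARECOLS (ON UPDATE ALL, ON DELETE ALL),
RESOLVECONFLICT (UPDATEROWEXISTS, (DEFAULT, USEMAX (last_mod_time)));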

MySQL: Bi-Directional Replication

In a bidirectional configuration, there are Extract and Replicat processes on both the source and target systems to support the replication of transactional changes on each system to the other system. To support this configuration, each Extract must be able to filter the transactions applied by the local Replicat, so that they are not recaptured and sent back to their source in a continuous loop. Additionally, AUTO_INCREMENT columns must be set so that there is no conflict between the values on each system.

  1. Configure Oracle GoldenGate for high availability or active-active replication according to the instructions in Propagating DDL in Active-Active (Bidirectional) Configurations.
  2. To filter out Replicat operations in a bi-directional configuration so that the applied operations are not captured and looped back to the source again, take the following steps on each MySQL database:
    • Configure each Replicat process to use a checkpoint table. Replicat writes a checkpoint to this table at the end of each transaction. You can use one global checkpoint table or one per Replicat process. See Oracle GoldenGate Checkpoint Tables.

    • Specify the name of the checkpoint table with the FILTERTABLE option of the TRANLOGOPTIONS parameter in the Extract parameter file. The Extract process will ignore transactions that end with an operation to the specified table, which should only be those of Replicat.

      Note:

      Although a checkpoint table is optional for other supported databases (where it enhances recovery), it is required for MySQL when using bidirectional replication.

      If using a parallel Replicat in a bidirectional replication, then multiple filter tables are supported using the TRANLOGOPTIONS FILTERTABLE option. Multiple filter tables allow the TRANLOGOPTIONS FILTERTABLE to be specified multiple times with different table names or wildcards.

      You can include single or multiple TRANLOGOPTIONS FILTERTABLE entries in the Extract parameter file. In the following example, multiple TRANLOGOPTIONS FILTERTABLE entries are included in the Extract parameter file with explicit object names and wildcards.
      TRANLOGOPTIONS FILTERTABLE ggs.chkpt2
      TRANLOGOPTIONS FILTERTABLE ggs.chkpt_RABC_*
  3. Edit the MySQL server configuration file to set the auto_increment_increment and auto_increment_offset parameters, to avoid AUTO_INCREMENT value conflicts caused by the bi-directional operations. The following illustrates these parameters, assuming two servers: ServerA and ServerB.

    ServerA:

    auto-increment-increment = 2
    auto-increment-offset = 1
    

    ServerB:

    auto-increment-increment = 2
    auto-increment-offset = 2

PostgreSQL: Bi-Directional Replication

In a bidirectional configuration, there are Extract and Replicat processes on both the source and target systems to support the replication of transactional changes on each system to the other system. To support this configuration, each Extract must be able to filter the transactions applied by the local Replicat, so that they are not recaptured and sent back to their source in a continuous loop.

  1. Configure Oracle GoldenGate for high availability or active-active replication according to the instructions in Propagating DDL in Active-Active (Bidirectional) Configurations.
  2. To filter out Replicat operations in a bi-directional configuration so that the applied operations are not captured and looped back to the source again, take the following steps on each PostgreSQL database:
    • Configure each Replicat process to use a checkpoint table. Replicat writes a checkpoint to this table at the end of each transaction. You can use one global checkpoint table or one per Replicat process.

    • Specify the name of the checkpoint table with the FILTERTABLE option of the TRANLOGOPTIONS parameter in the Extract parameter file. The Extract process will ignore transactions that end with an operation to the specified table, which should only be those of Replicat.

      If using a parallel Replicat in a bidirectional replication, then multiple filter tables are supported using the TRANLOGOPTIONS FILTERTABLE option. Multiple filter tables allow the TRANLOGOPTIONS FILTERTABLE to be specified multiple times with different table names or wildcards.

      You can include single or multiple TRANLOGOPTIONS FILTERTABLE entries in the Extract parameter file. In the following example, multiple TRANLOGOPTIONS FILTERTABLE entries are included in the Extract parameter file with explicit object names and wildcards.
      TRANLOGOPTIONS FILTERTABLE ggs.chkpt2
      TRANLOGOPTIONS FILTERTABLE ggs.chkpt_RABC_*

Preparing DBFS for an Active-Active Configuration

Learn the steps to configure Oracle GoldenGate to function within an active-active bidirectional or multi-directional environment where Oracle Database File System (DBFS) is in use on both (or all) systems.

Supported Operations and Prerequisites

This topic lists what is supported by Oracle GoldenGate for DBFS.

Oracle GoldenGate for DBFS supports the following:

  • Supported DDL (such as TRUNCATE or ALTER) on DBFS objects, except for CREATE statements on the DBFS objects. CREATE on DBFS must be excluded from the configuration, as must any schemas that will hold the created DBFS objects. The reason to exclude CREATE operations is that the metadata for DBFS must be properly populated in the SYS dictionary tables (which are themselves excluded from Oracle GoldenGate capture by default).

  • Capture and replication of DML on the tables that underlie the DBFS file system.

The procedures that follow assume that Oracle GoldenGate is configured properly to support active-active configuration. This means that it must be:

  • Installed according to the instructions in this guide.

  • Configured according to the instructions in the Oracle GoldenGate Windows and UNIX Administrator's Guide.

Applying the Required Patch

Apply the Oracle DBFS patch for bug-9651229 on both databases.

To determine if the patch is installed, run the following query:

connect / as sysdba
select  procedure_name
from    dba_procedures
where   object_name = 'DBMS_DBFS_SFS_ADMIN'
and procedure_name = 'PARTITION_SEQUENCE';

The query should return a single row. Anything else indicates that the proper patched version of DBFS is not available on your database.

Examples Used in these Procedures

The following procedures assume two systems and configure the environment so that DBFS users on both systems see the same DBFS files, directories, and contents that are kept in synchronization with Oracle GoldenGate.

It is possible to extend these concepts to support three or more peer systems.

Partitioning the DBFS Sequence Numbers

DBFS uses an internal sequence-number generator to construct unique names and unique IDs.

These steps partition the sequences into distinct ranges to ensure that there are no conflicts across the databases. After this is done, further DBFS operations (both creation of new file systems and subsequent file system operations) can be performed without conflicts of names, primary keys, or IDs during DML propagation.

  1. Connect to each database as sysdba.

    Issue the following query on each database.

    SELECT LAST_NUMBER
    FROM DBA_SEQUENCES
    WHERE SEQUENCE_OWNER = 'SYS'
    AND SEQUENCE_NAME = 'DBFS_SFS_$FSSEQ';
  2. From this query, choose the maximum value of LAST_NUMBER across both systems, or pick a high value that is significantly larger than the current value of the sequence on either system.
  3. Substitute this value ("maxval" is used here as a placeholder) in both of the following procedures. These procedures logically index each system as myid=0 and myid=1.

    Node 1

    DECLARE
    BEGIN
    DBMS_DBFS_SFS_ADMIN.PARTITION_SEQUENCE(NODES => 2, MYID => 0, NEWSTART => :MAXVAL);
    COMMIT;
    END;
    /

    Node 2

    DECLARE
    BEGIN
    DBMS_DBFS_SFS_ADMIN.PARTITION_SEQUENCE(NODES => 2, MYID => 1, NEWSTART => :MAXVAL);
    COMMIT;
    END;
    /

    Note:

    Notice the difference in the value specified for the myid parameter. These are the different index values.

    For a multi-way configuration among three or more databases, you could make the following alterations:

    • Adjust the maximum value that is set for maxval upward appropriately, and use that value on all nodes.

    • Vary the value of myid in the procedure from 0 for the first node, 1 for the second node, 2 for the third one, and so on.

  4. (Recommended) After (and only after) the DBFS sequence generator is partitioned, create a new DBFS file system on each system, and use only these file systems for DML propagation with Oracle GoldenGate. See Configuring the DBFS file system.

Note:

DBFS file systems that were created before the patch for bug-9651229 was applied or before the DBFS sequence number was adjusted can be configured for propagation, but that requires additional steps not described in this document. If you must retain old file systems, open a service request with Oracle Support.

Configuring the DBFS file system

To replicate DBFS file system operations, use a configuration that is similar to the standard bi-directional configuration for DML.

Some guidelines to follow while configuring Oracle GoldenGate for DBFS are:

  • Use matched pairs of identically structured tables.

  • Give each database write access to one table in each pair, and set the other table in the pair to read-only. For example:

    • Node1 writes to local table t1 and these changes are replicated to t1 on Node2.

    • Node2 writes to local table t2 and these changes are replicated to t2 on Node1.

    • On Node1, t2 is read-only. On Node2, t1 is read-only.

DBFS file systems make this kind of table pairing simple because:

  • The tables that underlie the DBFS file systems have the same structure.

  • These tables are modified by simple, conventional DML during higher-level file system operations.

  • The DBFS ContentAPI provides a way of unifying the namespace of the individual DBFS stores by means of mount points that can be qualified as read-write or read-only.

The following steps create two DBFS file systems (in this case named FS1 and FS2) and set them to be read-write or read-only, as appropriate.

  1. Run the following procedure to create the two file systems. (Substitute your store names for FS1 and FS2.)
  2. Run the following procedure to give each file system the appropriate access rights. (Substitute your store names for FS1 and FS2.)

    In this example, note that on Node 1, store FS1 is read-write and store FS2 is read-only, while on Node 2 the converse is true: store FS1 is read-only and store FS2 is read-write.

    Note also that the read-write store is mounted as local and the read-only store is mounted as remote. This provides users on each system with an identical namespace and identical semantics for read and write operations. Local path names can be modified, but remote path names cannot.

Example 11-8

DECLARE
BEGIN
DBMS_DBFS_SFS.CREATEFILESYSTEM('FS1');
DBMS_DBFS_SFS.CREATEFILESYSTEM('FS2');
 
DBMS_DBFS_CONTENT.REGISTERSTORE('FS1',
'POSIX', 'DBMS_DBFS_SFS');
DBMS_DBFS_CONTENT.REGISTERSTORE('FS2',
'POSIX', 'DBMS_DBFS_SFS');
COMMIT;
END;
/

Example 11-9 Node 1

DECLARE
BEGIN
DBMS_DBFS_CONTENT.MOUNTSTORE('FS1', 'LOCAL');
DBMS_DBFS_CONTENT.MOUNTSTORE('FS2', 'REMOTE',
READ_ONLY => TRUE);
COMMIT;
END;
/

Example 11-10 Node 2

DECLARE
BEGIN
DBMS_DBFS_CONTENT.MOUNTSTORE('FS1', 'REMOTE',
READ_ONLY => TRUE);
DBMS_DBFS_CONTENT.MOUNTSTORE('FS2', 'LOCAL');
COMMIT;
END;
/

Mapping Local and Remote Peers Correctly

The names of the tables that underlie the DBFS file systems are generated internally and dynamically.

Continuing with the preceding example, there are:

  • Two nodes (Node 1 and Node 2 in the example).

  • Four stores: two on each node (FS1 and FS2 in the example).

  • Eight underlying tables: two for each store (a table and a ptable). These tables must be identified, specified in Extract TABLE statements, and mapped in Replicat MAP statements.

  1. To identify the table names that back each file system, issue the following query. (Substitute your store names for FS1 and FS2.)

    The output looks like the following examples.

  2. Identify the tables that are locally read-write to Extract by creating the following TABLE statements in the Extract parameter files. (Substitute your pluggable database names, schema names, and table names as applicable.)
  3. Link changes on each remote file system to the corresponding local file system by creating the following MAP statements in the Replicat parameter files. (Substitute your pluggable database, schema and table names.)

    This mapping captures and replicates local read-write source tables to remote read-only peer tables:

    • file system changes made to FS1 on Node 1 propagate to FS1 on Node 2.

    • file system changes made to FS2 on Node 2 propagate to FS2 on Node 1.

    Changes to the file systems can be made through the DBFS ContentAPI (package DBMS_DBFS_CONTENT) of the database or through dbfs_client mounts and conventional file systems tools.

    All changes are propagated in both directions.

    • A user at the virtual root of the DBFS namespace on each system sees identical content.

    • For mutable operations, users use the /local sub-directory on each system.

    • For read operations, users can use either of the /local or /remote sub-directories, depending on whether they want to see local or remote content.

Example 11-11

select fs.store_name, tb.table_name, tb.ptable_name
from table(dbms_dbfs_sfs.listTables) tb,
table(dbms_dbfs_sfs.listFilesystems) fs
where   fs.schema_name = tb.schema_name
and fs.table_name = tb.table_name
and fs.store_name in ('FS1', 'FS2')
;

Example 11-12 Example output: Node 1 (Your Table Names Will Be Different.)

STORE_NAME     TABLE_NAME     PTABLE_NAME
-------------  -------------  -------------  
FS1            SFS$_FST_100   SFS$_FSTP_100
FS2            SFS$_FST_118   SFS$_FSTP_118

Example 11-13 Example output: Node 2 (Your Table Names Will Be Different.)

STORE_NAME     TABLE_NAME     PTABLE_NAME
-------------  -------------  -------------  
FS1            SFS$_FST_101   SFS$_FSTP_101
FS2            SFS$_FST_119   SFS$_FSTP_119

Example 11-14 Node1

TABLE [container.]schema.SFS$_FST_100
TABLE [container.]schema.SFS$_FSTP_100;

Example 11-15 Node2

TABLE [container.]schema.SFS$_FST_119
TABLE [container.]schema.SFS$_FSTP_119;

Example 11-16 Node1

MAP [container.]schema.SFS$_FST_119, TARGET [container.]schema.SFS$_FST_118;
MAP [container.]schema.SFS$_FSTP_119, TARGET [container.]schema.SFS$_FSTP_118;

Example 11-17 Node2

MAP [container.]schema.SFS$_FST_100, TARGET [container.]schema.SFS$_FST_101;
MAP [container.]schema.SFS$_FSTP_100, TARGET [container.]schema.SFS$_FSTP_101;